I'm wondering whether a supermassive black hole that merged with other supermassive black holes could become so large that, by merging with ever more massive black holes, it would eventually feast on the entire universe, causing that massive black hole to burn, shrink, and become so dense that it explodes, leading to the creation of the next big bang.

I have had thoughts similar to this. http://hubpages.com/hub/Our-Universe--A-Black-Hole

Check out Stephen Hawking's "Theory of Everything". You might have to research it on Google or somewhere. I saw a fascinating TV documentary on this a couple of weeks ago.

Sounds like science fiction to me. I don't believe there is any such thing as black holes. Seeing is believing.

Knolyourself - by definition, you can never see a black hole, as light can't escape from it. But there is plenty of evidence for their existence in gravitational field effects. In fact, even basic classical physics explains their formation. They're not all that strange.

The biggest category of black holes is the supermassive black hole, and the largest known supermassive black hole is located in OJ 287, weighing in at 18 billion solar masses (http://en.wikipedia.org/wiki/Black_hole). This is what we humans have been able to find so far, so it is quite possible that one day we might learn of a black hole the size of 50 billion Suns.

I believe they do exist for a number of General Motors executives. I think PearlDiver means that the execs suck in money like a black hole.

I don't know if there is a physical limit to the size of a black hole. I thought the size was related to how massive the original star was, but I may be wrong. As far as them sucking up the universe goes, there is far too much dark energy pushing the galaxies away from each other for anything physical to suck them up.

by emrldphx 6 years ago First, I beg forgiveness for any facts about black holes that I miss or state incorrectly.
It has been some time since I looked at papers, studies, experiments, or theories specific to black holes, so some things may be different from what I used to know. Anyway, the interesting thing about black holes...

by nsabari4 6 years ago What really is a black hole? Is it really true that if we enter a black hole, we will go to another galaxy or some other dimension?

by bradmasterOCcal 2 years ago Do supermassive black holes in the galaxies question the theory of the Big Bang? In this century scientists have found that all of the galaxies have supermassive black holes, but they don't know why these exist or what they mean.

by vydyulashashi 7 years ago Do you think black holes really exist?

by jerami 7 years ago Having a wondering-about-it moment. I don't have an answer. That black hole at the center of our galaxy may not exert its pull upon the Earth as much as it does on our sun. Our sun revolves around it. So it does pull upon the sun. Would...

by jomine 7 years ago The new science deals in concepts like relativity and quantum, which are irrational. It speaks of annihilation by black holes. It says everything was "created" from a singularity out of nothing. It says virtual particles collide. Will it ask us to "believe" in all these and worship...
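For a sense of scale, the 18-billion-solar-mass figure quoted earlier in the thread corresponds to an enormous event horizon. A back-of-envelope sketch (Python, using standard constants; the mass value comes from the post above):

```python
# Schwarzschild radius r_s = 2GM/c^2: the event-horizon radius of a
# non-rotating black hole of mass M.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg
AU = 1.496e11      # astronomical unit, m

def schwarzschild_radius_m(mass_kg):
    return 2 * G * mass_kg / C ** 2

# The supermassive black hole in OJ 287, ~18 billion solar masses:
r_s = schwarzschild_radius_m(18e9 * M_SUN)
r_s_au = r_s / AU  # a few hundred AU, dwarfing our solar system
```

That works out to roughly 350 AU, about nine times the radius of Pluto's orbit, which gives some feel for just how large "supermassive" is.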
Energy Stored in a Magnetic Field

A magnetic field may belong to a permanent magnet or to an electromagnet, and both store energy. A permanent magnet always creates its magnetic flux, which does not vary with external factors. An electromagnet, by contrast, creates a variable magnetic field that depends on how much current it carries; the dimensions of the electromagnet determine the strength of the field and hence the energy stored in it.

First we consider the magnetic field of an electromagnet, i.e. a coil of several turns. This coil, or inductor, carries a current I when it is connected across a battery or voltage source through a switch. Suppose the battery voltage is V volts and the inductance is L henry; a current I flows at steady state. When the switch is turned on, the current grows from zero to its steady value. Due to self-induction, an induced voltage E appears, always opposing the rate of change of the current:

E = −L·(dI/dt)

The energy, or work done, by the current passing through this inductor is U. As the current grows from zero against the induced emf E, the energy builds gradually from zero to U:

dU = W·dt, where W is the instantaneous power and W = −E·I = L·I·(dI/dt)

So the energy stored in the inductor is obtained by integrating from 0 to the final current:

U = ∫ L·I dI = (1/2)·L·I²

Again, from the dimensions of the coil,

L = μ·N²·A / l

where N is the number of turns of the coil, A is the effective cross-sectional area of the coil and l is the effective length of the coil. Also,

I = H·l / N

where H is the magnetizing force, N is the number of turns of the coil and l is the effective length of the coil. Putting the expressions for L and I into the equation for U gives

U = (1/2)·μ·H²·(A·l), i.e. an energy density of (1/2)·μ·H² = B²/(2μ) per unit volume.

So the stored energy in an electromagnet can be calculated from its dimensions and flux density. Now let us discuss the energy stored in the magnetic field of a permanent magnet.
The total flux flowing through the magnet's cross-sectional area A is φ, so we can write φ = B·A, where B is the flux density. This flux φ has two components: (a) φr, the remanent flux of the magnet, and (b) φd, the demagnetizing flux. By the law of conservation of magnetic flux, φ = φr − φd. Also, Bd = μ·H, where H is the magnetic field intensity. The MMF, or magnetomotive force, can be calculated from H and the dimensions of the magnet:

F = H·l

where l is the effective distance between the two poles. To calculate the energy, we first need the reluctance of the magnetic flux path. The magnet's internal reluctance path, i.e. the demagnetizing path, is denoted Rm:

Rm = l / (μ·A)

Wm is then the energy stored in the magnet's internal reluctance, and the energy density follows by dividing by the magnet volume. Look at the model below: the dotted-lined box is the magnet, and one reluctance path Rl for the mechanical load is connected across the magnet. Applying the node and loop equations gives the division of flux between the internal and load paths. If we do any mechanical work inside a magnetic field, the energy required is W. Again, if we place an electromagnetic coil in the vicinity of a permanent magnet, the coil will experience a force, and some work is done to move it. This energy density is the co-energy between the permanent magnet and the coil. The magnetizing field intensity of the permanent magnet is H and that of the coil is Hc, and the co-energy density is

w = B·Hc

where B is the flux density at the coil's position near the permanent magnet.
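The standard electromagnet result for this setup, U = (1/2)·L·I² with L = μ·N²·A/l for an ideal coil, can be checked numerically. The coil dimensions and current below are made-up illustrative values, not figures from the text:

```python
import math

MU0 = 4 * math.pi * 1e-7  # permeability of free space, H/m

def solenoid_inductance(n_turns, area_m2, length_m, mu_r=1.0):
    """L = mu * N^2 * A / l for an ideal (no-fringing) coil."""
    return mu_r * MU0 * n_turns ** 2 * area_m2 / length_m

def stored_energy(inductance_h, current_a):
    """U = 1/2 * L * I^2."""
    return 0.5 * inductance_h * current_a ** 2

# Illustrative coil: 500 turns, 2 cm^2 cross-section, 10 cm long, carrying 2 A
N, A, l, I = 500, 2e-4, 0.10, 2.0
L = solenoid_inductance(N, A, l)
U = stored_energy(L, I)

# Cross-check with the energy-density form u = B^2 / (2*mu):
B = MU0 * N * I / l                         # flux density inside the coil
U_from_density = (B ** 2 / (2 * MU0)) * (A * l)
```

The two routes agree, which is exactly the point of the derivation: the circuit quantity (1/2)·L·I² and the field quantity B²/(2μ) integrated over the coil volume describe the same stored energy.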
At about 40 light-years from Earth, it is possible that some of the planets in a newly discovered system could have water on their surface. All seven of these planets could have liquid water – key to life as we know it – under the right atmospheric conditions, but three of them are firmly located in the habitable zone, the area around the parent star where a rocky planet is most likely to have liquid water. The discovery sets a new record for the greatest number of habitable-zone planets found around a single star outside our solar system. The new results were published Wednesday in the journal Nature and announced at a news briefing at NASA Headquarters in Washington. The system of planets is relatively close to us, in the constellation Aquarius. Because they are located outside our solar system, these planets are scientifically known as exoplanets. This exoplanet system is called TRAPPIST-1, named for the Transiting Planets and Planetesimals Small Telescope (TRAPPIST) in Chile. In May 2016, researchers using TRAPPIST announced they had discovered three planets in the system. Assisted by several ground-based telescopes, including the European Southern Observatory's Very Large Telescope, Spitzer confirmed the existence of two of these planets and discovered five additional ones, increasing the number of known planets in the system to seven. All seven planets were detected by watching how their star dims as each passes — or transits — in front of it. Scientists measured how much of the star's light each transit blocked from Earth's view. Knowing how big a planet would have to be to do that, the astronomers calculated that all seven must have roughly the same radius as Earth.
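The transit measurement described above can be sketched in a few lines: the fractional dip in starlight equals the ratio of the projected disk areas. The stellar radius used here (~0.12 solar radii, a typical figure for an ultra-cool dwarf like TRAPPIST-1) is an approximate assumed value, not a number from the article:

```python
# Transit photometry: an orbiting planet blocks a fraction of starlight
# equal to the ratio of the projected disk areas, depth = (Rp / Rs)^2.
R_SUN_KM = 696_340
R_EARTH_KM = 6_371

r_star = 0.12 * R_SUN_KM     # assumed radius for an ultra-cool dwarf
r_planet = 1.0 * R_EARTH_KM  # an Earth-sized planet

depth = (r_planet / r_star) ** 2    # fractional dimming, ~0.6%
# Inverting: a measured depth gives the planet's radius
r_inferred = r_star * depth ** 0.5
```

This is also why small stars are favorable targets: the same Earth-sized planet produces a dip nearly 70 times deeper around a star this size than around a Sun-like star.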
In contrast to our sun, the TRAPPIST-1 star – classified as an ultra-cool dwarf – is so cool that liquid water could survive on planets orbiting very close to it, closer than is possible on planets in our solar system. "This is the first time that so many Earth-sized planets are found around the same star", says Michael Gillon, lead author of the paper and the principal investigator of the TRAPPIST exoplanet survey at the University of Liege, Belgium. "It is also the best target yet for studying the atmospheres of potentially habitable, Earth-size worlds", he adds. Named 1b to 1h in order of their distance from TRAPPIST-1, the planets take between 1.5 and 20 days to completely orbit the star — less time than it takes our Moon to orbit Earth. All seven of the TRAPPIST-1 planetary orbits are closer to their host star than Mercury is to our sun. The planets are also very close to each other. Because the planets are so close to this cooler star, they receive amounts of energy similar to those received by the inner planets in our solar system. This means it is possible that some of these alien planets could have water on their surface. "The energy output from dwarf stars like TRAPPIST-1 is much weaker than that of our sun. Planets would need to be in far closer orbits than we see in the solar system if there is to be surface water. Fortunately, it seems that this kind of compact configuration is just what we see around TRAPPIST-1", states Amaury Triaud, one of the co-authors of the Nature paper. “SURPRISINGLY SIMILAR IN SIZE TO THE EARTH” The six inner rocky planets are located in the temperate zone, where the surface temperature could be between 0 and 100 degrees Celsius. “The three inner-most planets may be too hot to have water covering the entire surface, but the fourth, fifth and sixth planets all fall well within the Goldilocks zone, which makes them very good candidates for hosting life”, explains Dr Gillon.
The seventh planet, however, is likely to be too distant and cold to harbour liquid water unless some internal mechanism or atmosphere keeps the planet warm. Its mass has not yet been estimated, but scientists believe it could be an icy, "snowball-like" world, although further observations are needed. "This is an amazing planetary system — not only because we have found so many planets, but because they are all surprisingly similar in size to the Earth", Dr Gillon points out. JUST THE BEGINNING "The TRAPPIST-1 system provides one of the best opportunities in the next decade to study the atmospheres around Earth-size planets", said Nikole Lewis, co-leader of the Hubble study and an astronomer at the Space Telescope Science Institute in Baltimore, Maryland. NASA's planet-hunting Kepler space telescope is also studying the TRAPPIST-1 system, measuring the star's minuscule changes in brightness due to transiting planets. Spitzer, Hubble, and Kepler will help astronomers plan follow-up studies using NASA's upcoming James Webb Space Telescope, launching in 2018. With much greater sensitivity, Webb will be able to detect the chemical fingerprints of water, methane, oxygen, ozone, and other components of a planet's atmosphere. Webb will also analyse planets' temperatures and surface pressures – key factors in assessing their habitability.
Symbioses with Microorganisms

Microorganisms are an inescapable presence in most biotic interactions, and they influence the nutritional ecology of natural enemies in at least two major ways. First, their interactions with the food items themselves often change the quality and attractiveness of these substances for natural enemies. Presented in this chapter are three such interactions: when microorganisms (especially fungi) affect seeds, nectar, and honeydew for natural enemies. The microbial community of insect guts plays an important and often underestimated role in the nutritional ecology of entomophagous species, and internal nutritional symbionts are the focus of the second half of this chapter. Clearly, as a discipline we are only just beginning to understand how microbes render the nutritional ecology of entomophagous species more complex, and it is hoped that this short review will stimulate more research in this expanding area of biology.

Keywords: Natural Enemy, Endophytic Fungus, Tall Fescue, Fungal Endophyte, Malpighian Tubule
Migration bottlenecks provide researchers with fascinating opportunities to study animal movement ecology. Advances in technology enable the dissemination of migratory ground-speed data in relation to independent variables such as weather conditions and time of day. I spent a week in the region of Calabria, Italy, monitoring raptor migrations over the Strait of Messina bottleneck. In a single field day, over 1,382 migratory raptors were counted; approximately 80% of these were European Honey Buzzards (Pernis apivorus). This post identifies Europe's most important migratory raptor bottlenecks and highlights the threats facing migratory avifauna.

I teach only half of the course material on theoretical ecology and allow the other half to be discovered and learnt by students on their own via various group activities.

If you're a grad student who has wondered whether your ecology Ph.D. will be useful outside of academia, check out these lessons learned after a summer in Washington, D.C.! Ecologists interested in the types of science policy opportunities for our field will want to check out these lessons as well.

We seek out ecologists with diverse backgrounds and perspectives to highlight their work and share their stories and experiences. Check out this week's Ecologist Spotlight: Dr. Kate Laskowski.

We seek out ecologists with diverse backgrounds and perspectives to highlight their work and share their stories and experiences. Check out this week's Ecologist Spotlight: Natasha Phillips.

Challenging the extractive paradigm in field work: suggestions from a case study in community engagement. The paradigm in fieldwork of travelling to remote locations, extracting data, and leaving to publish findings without engaging with local communities – particularly Indigenous ones – must be challenged. As students on a long-term research project, we distributed a survey to better understand what local people wanted from us.
Community engagement needs to be more than purely research-focused initiatives, and engagement with Indigenous peoples can require specific and separate efforts. No spoilers here. The villain in The Avengers: Infinity War understands ecology pretty well and we should consider his motivation as an ecologist. We need to talk a lot more about how to slow population growth.
Gusty winds will raise the fire danger across the southern High Plains of the United States into Monday evening, a region largely starved of meaningful rainfall since fall of 2017. The most recent U.S. drought monitor reveals that severe drought covers a large area from southern Kansas, Oklahoma and the Texas Panhandle to most of the Four Corners region. A pocket of extreme drought engulfs the northern Texas Panhandle, western Oklahoma and neighboring parts of southern Kansas. “To put the dryness in perspective, Feb. 16 marked the 126th-consecutive day without rain in Amarillo, Texas, shattering the previous mark of 75 days from 1957,” AccuWeather Meteorologist Faith Eherts said. A new record for consecutive days without measurable precipitation was set in Lubbock, Texas, on Feb. 15, “marking the 99th-consecutive day without measurable precipitation,” Eherts said. The dry streak was then broken late Friday evening as showers returned. While the rainfall was welcome, amounts in these cities were held to less than 0.25 of an inch. That will barely put a dent in the drought. More substantial rain fell in between these two cities on Friday. This includes Plainview, Texas, which hit 130 days without measurable precipitation on Feb. 14. “Similar statistics are emanating out of Oklahoma, where Woodward and Laverne reached 128 days without measurable precipitation on Feb. 15,” Eherts said. In Oklahoma City, less than 1 inch of precipitation fell between Nov. 1 and Feb. 16, more than 5 inches shy of the normal amount of precipitation for this span. Similar statistics hold for Wichita, Kansas, where residents have only seen 18 percent of normal precipitation since Nov. 1 of last year. Farther to the west in Dodge City, Kansas, the period from Oct. 7 to Feb. 13 is the driest on record dating back to 1874 with only 0.15 of an inch of rain. The dry periods from 1880 and 1876 previously held this record.
Lakes, rivers and reservoirs in the southern Plains and Southwest are running well below normal and some are approaching dangerously low levels. Long-term drought concerns are growing each day that passes without meaningful precipitation. More rain is in store for the South Central states into Wednesday but the most meaningful rain will miss most of the areas enduring the worst drought conditions. Instead, the southern Plains will face a heightened fire danger to kick off the week. "The combination of the prolonged dry spell, gusty winds and low relative humidity heightened the fire danger," AccuWeather Meteorologist Evan Duffey said. There are no signs that the weather pattern will produce rain in these areas through at least the first few days of March.
Theory predicts that the phenotypic variance observed in a trait subject to stabilizing selection should be negatively correlated with the trait's impact on fitness. However, this relationship has rarely been tested directly. The offspring sex ratios produced by pollinating fig wasp foundresses that enter a fruit and oviposit alone (single-foundress sex ratios) are subject to stabilizing selection, because too many males reduce the total number of dispersing females and too few males result in unmated females or complete loss of the brood. Furthermore, we argue that the impact on fitness of, and therefore the intensity of stabilizing selection on, single-foundress sex ratios is correlated with how frequently a species produces single-foundress broods in nature. Specifically, the intensity of stabilizing selection will be greater in species that encounter single-foundress broods more frequently, both because the trait is expressed more often and because fitness shows a greater sensitivity to variation (a narrower fitness profile) when the trait is expressed. Across 16 species of Panamanian pollinating fig wasps, the phenotypic variance in single-foundress sex ratios was negatively correlated with the frequency with which each species encounters single-foundress broods in nature. In addition, a formal comparative analysis based upon a molecular phylogeny of the wasps gave the same results as when species were used as independent data points.
Week 1 Written Assignment SCI 207 Dependence of Man on the Environment August 22, 2011

One production habit humans have related to material resources is the production of new housing. Our text went into detail on the effects of "sprawl" and how it is affecting the environment around us. Sprawl is basically the expansion of cities into the non-urban land surrounding them. With sprawl we are seeing more land, water, and air pollution due to the conversion of porous land into concrete-covered foundations. Water is affected because these concrete foundations over previously porous land cause water that would normally absorb into the ground, and filter slowly through it to streams and rivers, to instead run off the pavement quickly into those streams and rivers. As it runs off the pavement, it picks up pollutants that would normally not be present if it were channeled through the ground, and dumps them directly into our waterways. Another water problem this causes is flooding: because the run-off water reaches the streams and rivers at a much higher rate, it is causing them to flood much more frequently. As far as land goes, we are losing much prime farmland to sprawl. Our book even goes so far as to state that sprawl could be negatively affecting human health, as we tend to have lower blood pressure and other positive health effects while surrounded by farmland and vegetation, but being in city environments reverses these positive effects. A consumption habit of humans related to city sprawl is our increased consumption of gas, and the effects it is having on our environment. Because people are driving greater distances as the cities expand outward, we are consuming more gas in our vehicles, and thus causing pollution levels from vehicle emissions to rise. Combine that with...

References: Turk, J., Bensel, T. (2011).
Contemporary Environmental Issues. Retrieved from:
Luminescence in Low-dimensional Nanostructures: Quantum Confinement Effect, Surface Effect

Whenever the carrier localization, at least in one spatial direction, becomes comparable to or smaller than the de Broglie wavelength of the carriers, quantum mechanical effects occur. In this limit the optical and electronic properties of the material change as a function of size, and the system is called a nanostructure. As the size is reduced, the electronic states are shifted toward higher energy and the oscillator strength is concentrated into a few transitions. Nanostructures are classified by the number of dimensions in which the carriers are confined or, alternatively, free to move. In the case of confinement in only one spatial direction, the nanostructure is called a quantum well (QW). The carrier motion is frozen in one dimension, but electrons and holes can still move freely in the other two directions; the QW is therefore a quasi-two-dimensional (2D) system. A structure that confines carriers in two directions, allowing motion along the remaining dimension, is called a quantum wire (QWR) and is a quasi-1D system. In the case of confinement in all three spatial coordinates, the nanostructure is called a quantum dot (QD). QDs are 0D systems, since the carrier motion is completely frozen.

The physics of the quantum size effect relies on the Heisenberg uncertainty principle between the spatial position and kinetic momentum of a quantum particle. It is not possible to measure both the momentum and the position of a particle to arbitrary precision. The product of the standard deviations in position and momentum satisfies the uncertainty relation:

Δx·Δp ≥ ℏ/2 (1.26)

This equation means that the smaller the carrier localization in the nanostructure, the larger the spread in the momentum p or, better said for semiconductor systems, in the crystal momentum ℏk.
The energy may still be well defined, but the momentum is not. In bulk systems, for states around the edge of the conduction and valence bands, the dependence of the energy on the wavevector k is quadratic:

E(k) = ℏ²k² / (2m*)

where m* is the carrier effective mass. Following this equation, the spread in the momentum ℏk gives a minimum kinetic energy to the localized particle. This is in contrast with classical physics, where the lowest energy state in any potential corresponds to no kinetic energy. The uncertainty principle of quantum mechanics imposes a positive zero-point energy, which is approximately inversely proportional to the square of the nanostructure size. Therefore, the energy of the ground state of electrons and holes in semiconductor nanostructures depends not only on the materials but also on the dimension of the confinement region. Nanostructured materials with sizes in the range 1-100 nm have been the focus of recent scientific research because of their important optical properties, quantum size effects, electrical properties, chemical properties, etc. Low-dimensional materials exhibit a wide range of optical properties that depend sensitively on both size and shape, and they are of both fundamental and technological interest. The ability to control the shape and size of nanocrystals affords an opportunity to further test theories of quantum confinement and yields materials with desirable optical characteristics from the point of view of applications. An exciting emerging application of low-dimensional nanocrystals is in light-emitting diodes (LEDs) and displays. Recently, there has been much interest in low-dimensional systems such as quantum wells (two-dimensional systems), quantum wires (one-dimensional systems) and quantum dots (zero-dimensional systems). Optical properties of low-dimensional systems are substantially different from those of three-dimensional (3D) systems.
The most remarkable modification comes from the different distributions of energy levels and densities of states that originate from the spatial confinement of electrons and holes. The simplest model for two-dimensional (2D) systems is that of a particle in a box with an infinitely deep well potential, as shown in Figure 1.6. The wave functions and energy levels in the well are known from basic quantum mechanics and are described by:

Ψn(z) = (2/Lz)^(1/2) cos(nπz/Lz) (1.28)

En = (ℏ²/2m)·(nπ/Lz)², n = 1, 2, 3, … (1.29)

Figure 1.6: A particle in a box made of infinitely tall potential barriers

In semiconductor quantum wells (two-dimensional (2D) systems such as layered materials and quantum wells), both electrons and holes are confined in the same wells. The energy levels for electrons and holes are described by [1.8]:

Ene = (ℏ²/2me*)·(neπ/Lz)², Enh = (ℏ²/2mh*)·(nhπ/Lz)²

where me* and mh* are the effective masses of the electron and hole, respectively. If electric dipole transitions are allowed from the valence band to the conduction band, the optical transition occurs from the state described by nh, kx, and ky to the state described by ne, kx and ky. Therefore, the optical transition takes place at the energy:

ℏω = Eg + Ene + Enh + ℏ²(kx² + ky²)/(2μ)

where μ is the reduced mass given by μ⁻¹ = me*⁻¹ + mh*⁻¹. The joint density of states ρ3D for an allowed, direct transition in 3D semiconductors is:

ρ3D(E) ∝ (E − Eg)^(1/2)

The joint densities of states for 2D, 1D and 0D systems are:

ρ2D(E) ∝ Σ θ(E − Eg − El), ρ1D(E) ∝ Σ (E − Eg − El − Em)^(−1/2), ρ0D(E) ∝ Σ δ(E − Eg − El − Em − En)

where θ is a step function and δ is a delta function. The quantum confinement energies of electrons and holes are represented by El, Em and En, where l, m and n refer to the three directions of spatial confinement. Obviously the physics of the nanostructures strongly depends on their dimensionality (Figure 1.7). In a semiconductor structure a given energy usually corresponds to a large number of different electronic states resulting from the carrier motion. In a bulk material, where the motion can occur in three different directions, the density of states increases proportionally to the square root of the energy.
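As a numerical illustration of the infinite-well levels discussed above, here is a short sketch. The 10 nm well width and the GaAs-like effective mass (m* ≈ 0.067 mₑ) are assumed example values, not figures from the text:

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
M_E = 9.1093837015e-31  # free-electron mass, kg
EV = 1.602176634e-19    # J per eV

def infinite_well_level(n, width_m, m_eff_ratio):
    """E_n = (hbar^2 / 2m*) * (n*pi / Lz)^2 for an infinite square well."""
    m = m_eff_ratio * M_E
    return (HBAR * n * math.pi / width_m) ** 2 / (2 * m)

# Electron ground state in a 10 nm well with m* = 0.067 m_e: ~56 meV
E1_meV = infinite_well_level(1, 10e-9, 0.067) / EV * 1e3

# The 1/Lz^2 scaling: halving the well width quadruples the energy
E1_5nm = infinite_well_level(1, 5e-9, 0.067)
```

The second calculation makes the inverse-square size dependence of the zero-point energy, mentioned earlier in the text, concrete.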
In quantum wells the in-plane motion gives a staircase DOS, where each step is associated with a new state in the confining potential. In quantum wires a continuum of states is still present, but strong resonances appear in the DOS, associated with the states in the confining potential. Finally, in quantum dots only discrete energy states are allowed, and the DOS is therefore a comb of delta functions. The possibility of concentrating the DOS in a reduced energy range is extremely important for a large variety of fundamental topics and device applications. It is at the base of the quantum Hall effect in quantum wells (QWs), of the quantization of the conductance in quantum wires (QWRs), and of single-electron tunnelling in QDs. In the case of lasers, the presence of a continuum DOS leads to losses associated with the population of states that do not contribute to the laser action. Conversely, the concentration of the DOS produces a reduction of the threshold current and enhances the thermal stability of the device operation. Clearly this property is optimized in QD structures. Due to the three-dimensional carrier confinement and the resulting discrete energy spectrum, semiconductor QDs can be regarded as artificial atoms.

Figure 1.7: Density of states of three-dimensional (3D) bulk semiconductors, a two-dimensional (2D) quantum well, a one-dimensional (1D) quantum wire, and zero-dimensional (0D) quantum dots.

The most striking property of nanoscale semiconductor materials is the massive change in optical properties as a function of size due to quantum confinement. This is most readily manifest as a blue-shift in the absorption spectra with decreasing particle size. The blue-shift arises because the spatial confinement of electrons, holes, and excitons increases the kinetic energy of these particles.
Simultaneously, the same spatial confinement increases the Coulomb interaction between electrons and holes. The exciton Bohr radius is a useful parameter for quantifying quantum confinement effects in nanometer-size semiconductor particles. The exciton Bohr radius is given by [1.8]:

aB = ε·ℏ² / (μ·e²)

and the inequality aB > ae > ah holds (for mh* > me*). Here ae and ah are defined as:

ae = ε·ℏ² / (me*·e²) and ah = ε·ℏ² / (mh*·e²) (1.38)

where μ is the reduced mass, μ⁻¹ = me*⁻¹ + mh*⁻¹, and me* and mh* are the effective masses of the electron and hole, respectively. Also, ε is the dielectric constant and ℏ is the reduced Planck constant. As the particle size is reduced to approach the exciton Bohr radius, there are drastic changes in the electronic structure and physical properties. These changes include shifts of the energy levels to higher energy and the development of discrete features in the spectra (Figure 1.8).

Figure 1.8: Schematic models for the energy structures of bulk solids, nanoparticles and isolated molecules.

The quantum confinement effect can be classified into three categories, the weak, intermediate and strong confinement regimes, depending on the size of the particle radius R relative to the electron, hole, and exciton Bohr radii ae, ah and aB, respectively. In strong confinement (R ≪ ae, ah), the individual motion of electrons and holes is quantized and the Coulomb interaction energy is much smaller than the quantized kinetic energy. The ground state energy is [1.8]:

E = Eg + ℏ²π²/(2μR²) − 1.786·e²/(εR) − 0.248·ERy*

where the second term is the kinetic energy of electrons and holes, the third term is the Coulomb energy, and the last term is the correlation energy (ERy* being the effective Rydberg energy of the exciton). In intermediate confinement (ah ≪ R ≪ ae), the electron motion is quantized, while the hole is bound to the electron by their Coulombic attraction. In weak confinement (R ≫ aB), the center-of-mass motion of the exciton is quantized. The ground state energy is written as:

E = Eg − ERy* + ℏ²π²/(2M·R²)

where M = me* + mh* is the translational mass of the exciton.

Figure 1.9: Size dependence of the band gap for CdS nanoparticles.
In strong confinement, an increase of the energy gap appears (a blue shift of the absorption edge), which is roughly proportional to the inverse of the square of the particle radius or diameter. For example, it can be observed from Figure 1.9 that strong confinement is exhibited by CdS particles with diameter less than ~6 nm (R ~ 3 nm), and this is consistent with the strong confinement effect for particles with a radius smaller than the exciton Bohr radius (a_B ≈ 3 nm for CdS). The luminescence dynamics in low-dimensional nanostructures also deals with the interaction of light with the material. This interaction depends strongly on the surface properties of the materials. As the size of the particle approaches a few nm, both the surface-area-to-volume ratio and the surface-to-bulk atom ratio dramatically increase. The basic relationship between the surface area to volume ratio, or surface atoms to bulk atoms, and the diameter of nanoparticles can be seen in Figure 1.10. Figure 1.10: Surface area to volume ratio and percentage of surface atoms (%) as a function of particle size. It is observed that the percentage of atoms at corners and edges versus particle size displays a dramatic increase when the size is decreased below a few nm, whereas the percentage of face atoms decreases. For particles of ~1 nm, more than 70% of atoms are at corners or edges. This aspect is important because the interaction of light with a material is highly dependent on the atomic-scale surface morphology. Since in nanoparticles a large percentage of the atoms are on or near the surface, surface states near the band gap can mix with interior levels to a substantial degree, and these effects may also influence the spacing of the energy levels. Thus in many cases it is the surface of the particles rather than the particle size that determines the optical properties. Optical excitation of semiconductor nanoparticles often leads to both band edge and deep trap luminescence.
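The surface-to-volume argument above can be illustrated with a crude geometric estimate: counting the atoms that lie within one atomic diameter of the surface of a solid sphere. The assumed atomic diameter of 0.25 nm is illustrative only, and this shell model ignores faceting, so it sketches the trend of Figure 1.10 rather than its exact curve.

```python
def surface_fraction(diameter_nm, atom_diameter_nm=0.25):
    """Fraction of atoms in the outermost shell (thickness = one atomic
    diameter) of a solid sphere, assuming uniform atomic density."""
    R = diameter_nm / 2.0
    d = atom_diameter_nm
    if R <= d:
        return 1.0          # particle is essentially all surface
    core = (R - d) ** 3     # volume of the interior, up to a common factor
    return 1.0 - core / R ** 3
```

For a 1 nm particle this estimate already puts the large majority of atoms at the surface, consistent with the >70% figure quoted above, while for a 50 nm particle the surface fraction drops to a few percent.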
The size dependence of the excitonic or band edge emission has been studied extensively. The absence of excitonic or band edge emission has been attributed to the large non-radiative decay rate of the free electrons trapped in the deep traps of surface states. As the particle size becomes smaller, the surface-to-volume ratio and hence the number of surface states increase rapidly, reducing the excitonic emission. The semiconductor nanoparticles exhibit broad and Stokes-shifted luminescence arising from the deep traps of surface states [1.25 – 1.27].
William E. Galloway, 1985. "Meandering Streams – Modern and Ancient", Recognition of Fluvial Depositional Systems and their Resource Potential, Romeo M. Flores, Frank G. Ethridge, Andrew D. Miall, William E. Galloway, Thomas D. Fouch. A fluvial system consists of a skeleton of fluvial channel-fill facies and closely associated splay and levee facies within a matrix of floodbasin muds and organics (Fig. G-9). Fluvial systems display a wide range of variation in such basic parameters as the average proportion of sand to mud and the dimensions and geometry of sand bodies. Consequently, they also vary in their capacity to transmit fluids. Further, within a fluvial system, systematic variations in these same parameters are observed both parallel and transverse to the sediment dispersal axis. Large fluvial complexes tend to produce integrated drainage networks containing one or more trunk streams of the same type (i.e., meandering, braided, etc.) for large parts of the network. Depositional characteristics of these trunk streams provide a logical basis for differentiating significantly different portions of a fluvial system. In a series of papers, Schumm (1960, 1972) described relationships in modern streams among the sediment load transported by the channel, channel geometry, and the sediment type deposited by the channel. These relationships can be quantified for modern river segments (Table G-3) and provide qualitative trends that can be applied in the interpretation and classification of ancient fluvial depositional systems. The basis of Schumm's classification lies in the empirical observation of a fundamental correlation between the ratio of bed load to suspended load transported by a stream and the cross-sectional geometry of the channel (expressed as width/depth ratio). This relationship is independent of other variables, such as slope, discharge, or periodicity of flow. Thus, alluvial channels
Sides of triangle. A triangle has perimeter 42 cm. Side a is 2 times shorter than side b, and side c is 2 cm longer than side a. Determine the lengths of the sides of the triangle. To solve this problem you need the following knowledge from mathematics: Next similar examples: - RT leg and perimeter: Calculate the lengths of the sides of a right triangle ABC with hypotenuse c, when the length of leg a = 84 and the perimeter of the triangle o = 269. - The area of a square garden is 6/4 of the area of a triangular garden with sides 56 m, 35 m and 35 m. How many meters of fencing are needed to fence the square garden? - ISO triangle: Calculate the area of an isosceles triangle KLM if the lengths of its sides are in the ratio k:l:m = 4:4:3 and its perimeter is 377 mm. - A parallelogram has side lengths in the ratio 3:4 and perimeter 2.8 meters. Determine the lengths of the sides. - Two squares: Two squares whose sides are in the ratio 5:2 have a sum of perimeters of 73 cm. Calculate the sum of the areas of these two squares. - An equilateral triangle with side 40 cm has the same perimeter as an isosceles triangle with an arm of 45 cm. Calculate the base x of the isosceles triangle. - Right triangle Alef: The area of a right triangle is 294 cm2 and the hypotenuse is 35 cm long. Determine the lengths of the legs. - Isosceles triangle: The perimeter of an isosceles triangle is 112 cm. The ratio of the length of the arm to the length of the base is 5:6. Find the triangle's area. - Trapezoid MO: The rectangular trapezoid ABCD has a right angle at point B, |AC| = 12, |CD| = 8, and its diagonals are perpendicular to each other. Calculate the perimeter and area of the trapezoid. - Rectangle diagonals: A rectangle has area 24 cm2 and perimeter 20 cm. The length of one side is 2 cm larger than the length of the second side. Calculate the length of the diagonal. Length and width are expressed in natural numbers.
- A garden has a square shape with perimeter 124 m. Divide it into two rectangular gardens; one should have a perimeter 10 meters greater than the second. What dimensions will the gardens have? - A garden has a rectangular shape, a perimeter of 130 m and an area of 800.25 m2. Calculate the dimensions of the garden. - If we increase one side of a square by one half of its length, the square's perimeter increases by 10 cm. What is the side of the square? - The length of a rectangular property is 8 meters less than three times its width. If we increase the width by 5% of the length and reduce the length by 14% of the width, the property's perimeter increases by 13 meters. How much will the property cost - Rectangle SS: The perimeter of a rectangle is 296 km and its diagonal is 104.74 km. Determine the dimensions of the rectangle. - Quarter circular: A wire hooked around the perimeter of a quarter-circular arc has length 3π+12. Determine the radius of the circle arc. - Convert 270° to radians. Write the result as a multiple of the number π.
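The headline problem (perimeter 42 cm, side b twice side a, side c two centimeters longer than a) reduces to a single linear equation and can be checked in a few lines:

```python
# Perimeter a + b + c = 42, with b = 2a ("a is 2 times shorter than b")
# and c = a + 2.  Substituting: a + 2a + (a + 2) = 42  ->  4a = 40.
a = (42 - 2) / 4
b = 2 * a
c = a + 2
print(a, b, c)  # -> 10.0 20.0 12.0
```

So the sides are 10 cm, 20 cm and 12 cm, which indeed sum to the given perimeter of 42 cm.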
A team of physical geographers and hydrogeologists at Friedrich-Alexander Universität Erlangen-Nürnberg (FAU) have submitted a successful application for follow-up funding to the DFG for an innovative research project on climate change that is based on the island of Corsica. They have been awarded 1 million euros to cover personnel and material costs. In order to be able to analyse the sensitivity of past and future climate changes to carbon dioxide and other factors that influence the climate, ‘CorsicArchive’ combines investigations into the aspects of tree ecology, hydrology and climatology on Corsica – in terms of approach and scientific concept, this is something completely new. Climate change is one of the greatest challenges facing mankind. The Mediterranean is one of the regions showing the clearest signs of being affected by climate change; its ecosystems are being threatened by more frequent heat spells and drought periods. It is thus no coincidence that, in order to assess and gain a better understanding of past and present climate scenarios, the researchers from the Institute of Geography and the GeoZentrum Nordbayern of FAU selected the island of Corsica to carry out their work in collaboration with their colleagues from Marburg and Corsica. Of particular interest to them are Pinus nigra, the Austrian or black pine, and Pinus pinaster, the maritime or cluster pine, two types of pine trees that grow at different altitudes on the island. ‘Through the CorsicArchive project, we hope to gain information that will fill a major gap in our knowledge of the natural fluctuations of the climate in the western Mediterranean during the past millennium,’ explains Prof. Dr. Achim Bräuning, Chair of Physical Geography at FAU’s Institute of Geography. ‘At the same time, we are also learning more about the effects of climate extremes in the Mediterranean at different altitudes.
This interdisciplinary collaboration between climate researchers, hydrologists and biogeographers makes it possible to establish the correlations between atmospheric processes and physiological plant reactions. This will provide new insights that will help us analyse substance turnover in the water cycle.’ The methodology – a tried and tested technology but a new approach. The annual growth rings of old trees, such as those of the species of pine trees that are native to Corsica, represent a kind of ‘long-term memory’ that provides interesting information on the effect of climatic extremes on forest ecosystems. Hence, wider rings are evidence of wet years, while narrower rings document dry periods with low growth. The current investigations of precipitation, ground and soil water, and the measurements of climatic factors at different altitudes being undertaken by the researchers from Erlangen are designed to provide data not only on past climate changes but also on the changes currently occurring and their possible consequences for the water supply in forests. The new approach taken in CorsicArchive combines the already established methodology of measuring the distribution of isotopes of carbon, oxygen and hydrogen with the determination of various current climatic variables in trees and waters. At various sites on Corsica, wood core samples from trees and samples of groundwater are taken and rainwater is collected, while climate stations have been installed to record air temperature and humidity, wind speed, solar radiation and any irregularities in the lower atmosphere. In the laboratory in Erlangen, the harvested materials are then subjected to geochemical analyses and evaluated. On the basis of the determined isotope signatures – element fingerprints, as it were – the researchers can then draw conclusions as to what changes have happened and are happening to the climate.
Major relevance to climate research. This interdisciplinary co-operation between the various research fields of biogeography, hydrogeology and climatology, and the combination of the past and present, are contributing to the development of innovative approaches in climate research. When it comes to ongoing climate change, it is important to understand both the past and present processes in order to make an accurate assessment of what is likely to happen in future. An important undertaking. Thanks to the information it is gathering, CorsicArchive can also provide important recommendations for the forestry industry and tourism of the island. It is possible, for example, to work out how changes to the climate may impinge on the productivity of forest ecosystems. If the long-term trend is towards global warming and more frequent drought, for example, it would be advisable to move the pine growing areas away from the coast and towards higher regions of the island. Then there are the ski resorts of Val d’Ese and Ghisoni-Capanelle – not something one would necessarily expect to find on Corsica – and the associated tourism. Global warming would have consequences that are economically significant here and would impact on future tourism on the island. Prof. Dr. Achim Bräuning Phone: +49 9131 85 29372 Dr. Susanne Langer | idw - Informationsdienst Wissenschaft
Walsh diagrams, often called angular coordinate diagrams or correlation diagrams, are representations of calculated orbital binding energies of a molecule versus a distortion coordinate (bond angle), used for making quick predictions about the geometries of small molecules. By plotting the change in molecular orbital levels of a molecule as a function of geometrical change, Walsh diagrams explain why molecules are more stable in certain spatial configurations (e.g. why water adopts a bent conformation). A major application of Walsh diagrams is to explain the regularity in structure observed for related molecules having identical numbers of valence electrons (e.g. why H2O and H2S look similar), and to account for how molecules alter their geometries as their number of electrons or spin state changes. Additionally, Walsh diagrams can be used to predict distortions of molecular geometry from knowledge of how the LUMO (lowest unoccupied molecular orbital) affects the HOMO (highest occupied molecular orbital) when the molecule experiences geometrical perturbation. Walsh's rule for predicting the shapes of molecules states that a molecule will adopt the structure that provides the most stability for its HOMO. If a particular structural change does not perturb the HOMO, the closest occupied molecular orbital governs the preference for geometrical orientation. Walsh diagrams were first introduced by A.D. Walsh, a British chemistry professor at the University of Dundee, in a series of ten papers in one issue of the Journal of the Chemical Society. Here, he aimed to rationalize the shapes adopted by polyatomic molecules in the ground state as well as in excited states, by applying theoretical contributions made by Mulliken. Specifically, Walsh calculated and explained the effect of changes in the shape of a molecule on the energy of molecular orbitals. Walsh diagrams are an illustration of such dependency, and his conclusions are what are referred to as the "rules of Walsh."
In his publications, Walsh showed through multiple examples that the geometry adopted by a molecule in its ground state primarily depends on the number of its valence electrons. He himself acknowledged that this general concept was not novel, but explained that the new data available to him allowed the previous generalizations to be expanded upon and honed. He also noted that Mulliken had previously attempted to construct a correlation diagram for the possible orbitals of a polyatomic molecule in two different nuclear configurations, and had even tried to use this diagram to explain shapes and spectra of molecules in their ground and excited states. However, Mulliken was unable to explain the reasons for the rises and falls of certain curves with increases in angle, thus Walsh claimed "his diagram was either empirical or based upon unpublished computations." Walsh originally constructed his diagrams by plotting what he described as "orbital binding energies" versus bond angles. What Walsh was actually describing by this term is unclear; some believe he was in fact referring to ionization potentials, however this remains a topic of debate. At any rate, the general concept he put forth was that the total energy of a molecule is equal to the sum of all of the "orbital binding energies" in that molecule. Hence, from knowledge of the stabilization or destabilization of each of the orbitals by an alteration of the molecular bond angle, the equilibrium bond angle for a particular state of the molecule can be predicted. Orbitals which interact to stabilize one configuration (e.g. linear) may or may not overlap in another configuration (e.g. bent), thus one geometry will be calculably more stable than the other. Typically, core orbitals (1s for B, C, N, O, F, and Ne) are excluded from Walsh diagrams because they are so low in energy that they do not experience a significant change by variations in bond angle. Only valence orbitals are considered.
However, one should keep in mind that some of the valence orbitals are often unoccupied.

Generating Walsh diagrams

In preparing a Walsh diagram, the geometry of a molecule must first be optimized, for example using the Hartree–Fock (HF) method for approximating the ground-state wave function and ground-state energy of a quantum many-body system. Next, single-point energies are computed for a series of geometries displaced from the above-determined equilibrium geometry. Single-point energies (SPEs) are calculations of the potential energy of a molecule for one specific arrangement of its atoms. In these calculations, bond lengths remain constant (at equilibrium values) and only the bond angle is altered from its equilibrium value. The single-point energy for each geometry can then be plotted versus bond angle to produce the representative Walsh diagram.

Structure of a Walsh diagram

For the simplest AH2 molecular system, Walsh produced the first angular correlation diagram by plotting the ab initio orbital energy curves for the canonical molecular orbitals while changing the bond angle from 90° to 180°. As the bond angle is distorted, the energy of each of the orbitals can be followed along the lines, allowing a quick approximation of molecular energy as a function of conformation. It is still unclear whether or not the Walsh ordinate considers nuclear repulsion, and this remains a topic of debate. A typical prediction result for water is a bond angle of 90°, which is not even close to the experimentally derived value of 104°. At best the method is able to differentiate between a bent and a linear molecule. This same concept can be applied to other species including non-hydride AB2 and BAC molecules, HAB and HAAH molecules, tetraatomic hydride molecules (AH3), tetraatomic non-hydride molecules (AB3), H2AB molecules, acetaldehyde, pentaatomic molecules (CH3I), hexaatomic molecules (ethylene), and benzene.
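The scan procedure described above (equilibrium bond lengths held fixed, only the bond angle displaced, one single-point energy per geometry) can be sketched as follows. The `single_point_energy` argument is a stand-in for whatever electronic-structure call is actually used (e.g. an HF single point); it is not a real API, and the geometry builder assumes a simple AH2 molecule such as water with an assumed O-H bond length of 0.96 Å.

```python
import math

def water_geometry(angle_deg, r_oh=0.96):
    """H2O Cartesian coordinates (Angstrom) at a given H-O-H angle,
    with O at the origin; bond lengths stay at their equilibrium value."""
    half = math.radians(angle_deg) / 2.0
    x, y = r_oh * math.sin(half), r_oh * math.cos(half)
    return [("O", (0.0, 0.0, 0.0)),
            ("H", ( x, y, 0.0)),
            ("H", (-x, y, 0.0))]

def walsh_scan(angles, single_point_energy):
    """One single-point energy per displaced geometry; plotting the
    resulting (angle, energy) pairs gives the Walsh diagram's ordinate."""
    return [(a, single_point_energy(water_geometry(a))) for a in angles]
```

Passing a list of angles (say 90° to 180° in small steps) and a real quantum-chemistry energy function produces the data behind the angular correlation diagram.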
Walsh diagrams in conjunction with molecular orbital theory can also be used as a tool to predict reactivity. By generating a Walsh Diagram and then determining the HOMO/LUMO of that molecule, it can be determined how the molecule is likely to react. In the following example, the Lewis acidity of AH3 molecules such as BH3 and CH3+ is predicted. Six electron AH3 molecules should have a planar conformation. It can be seen that the HOMO, 1e’, of planar AH3 is destabilized upon bending of the A-H bonds to form a pyramid shape, due to disruption of bonding. The LUMO, which is concentrated on one atomic center, is a good electron acceptor and explains the Lewis acid character of BH3 and CH3+. Walsh correlation diagrams can also be used to predict relative molecular orbital energy levels. The distortion of the hydrogen atoms from the planar CH3+ to the tetrahedral CH3-Nu causes a stabilization of the C-Nu bonding orbital, σ. Other correlation diagrams - Correlation diagram: IUPAC. Compendium of Chemical Terminology, 2nd ed. (the "Gold Book"). Compiled by A. D. McNaught and A. Wilkinson. Blackwell Scientific Publications, Oxford (1997). XML on-line corrected version created by M. Nic, J. Jirat, B. Kosata; updates compiled by A. Jenkins. ISBN 0-9678550-9-8. doi:10.1351/goldbook.C01348. - Walsh Diagrams: Molecular Orbital and Structure Computational Chemistry Exercise for Physical Chemistry Carrie S. Miller and Mark Ellison Journal of Chemical Education 2015 92 (6), 1040-1043 doi:10.1021/ed500813d - Chen, E.; Chang, T. (1998). "Walsh Diagram and the Linear Combination of Bond Orbital Method". Journal of Molecular Structure: THEOCHEM. 431: 127–136. doi:10.1016/S0166-1280(97)00432-6. - Mulliken, R.S. (1955). "Structures of the Halogen Molecules and the Strength of Single Bonds". J. Am. Chem. Soc. 77 (4): 884–887. doi:10.1021/ja01609a020. - Walsh, A.D. (1953). "The Electronic Orbitals, Shapes, and Spectra of Polyatomic Molecules. Part I. AH2 Molecules". J. Chem. 
Soc.: 2260–2266. doi:10.1039/JR9530002260. - Walsh, A.D. (1953). "The Electronic Orbitals, Shapes, and Spectra of Polyatomic Molecules. Part II. AB2 and BAC Molecules". J. Chem. Soc.: 2266–2288. doi:10.1039/JR9530002266. - Walsh, A.D. (1953). "The Electronic Orbitals, Shapes, and Spectra of Polyatomic Molecules. Part III. HAB and HAAH Molecules". J. Chem. Soc.: 2288–2296. doi:10.1039/JR9530002288. - Walsh, A.D. (1953). "The Electronic Orbitals, Shapes, and Spectra of Polyatomic Molecules. Part IV. Tetratomic hydride molecules, AH3". J. Chem. Soc.: 2296–2301. doi:10.1039/JR9530002296. - Walsh, A.D. (1953). "The Electronic Orbitals, Shapes, and Spectra of Polyatomic Molecules. Part V. Tetratomic, non-hydride molecules, AB3". J. Chem. Soc.: 2301–2306. doi:10.1039/JR9530002301. - Walsh, A.D. (1953). "The Electronic Orbitals, Shapes, and Spectra of Polyatomic Molecules. Part VI. H2AB Molecules". J. Chem. Soc.: 2306–2317. doi:10.1039/JR9530002306. - Walsh, A.D. (1953). "The Electronic Orbitals, Shapes, and Spectra of Polyatomic Molecules. Part VII. A note on the near-ultra-violet spectrum of acetaldehyde". J. Chem. Soc.: 2318–2320. doi:10.1039/JR9530002318. - Walsh, A.D. (1953). "The Electronic Orbitals, Shapes, and Spectra of Polyatomic Molecules. Part VIII. Pentatomic molecules: CH3I Molecules". J. Chem. Soc.: 2321–2324. doi:10.1039/JR9530002321. - Walsh, A.D. (1953). "The Electronic Orbitals, Shapes, and Spectra of Polyatomic Molecules. Part IX. Hexatomic molecules: ethylene". J. Chem. Soc.: 2325–2329. doi:10.1039/JR9530002325. - Walsh, A.D. (1953). "The Electronic Orbitals, Shapes, and Spectra of Polyatomic Molecules. Part X. A note on the spectrum of benzene". J. Chem. Soc.: 2330–2331. doi:10.1039/JR9530002330. - Mulliken, R.S. (1955). "Bond Angles in Water-Type and Ammonia-Type Molecules and Their Derivatives". J. Am. Chem. Soc. 77 (4): 887–891. doi:10.1021/ja01609a021. - Walsh, A.D. (1976). "Some Notes on the Electronic Spectra of Small Polyatomic Molecules". Int. 
Rev. Sci.: Phys. Chem., Ser. Two. 3: 301–316. - O'Leary, B.; Mallion, R.B. (1987). "Walsh Diagrams and the Hellman-Feynman Theorem: A Tribute to the Late Professor Charles A. Coulson, F.R.S. (1910-1974)". Journal of Mathematical Chemistry. 1 (4): 335–344. doi:10.1007/BF01205066. - Atkins, P.W. (1970). Molecular Quantum Mechanics. Oxford: Clarendon Press. ISBN 0-19-855129-0. - Peters, D. (1966). "Nature of the One-Electron Energies of the Independent Electron Molecular Orbital Theory and the Walsh Diagrams". Transactions of the Faraday Society. 6: 1353–1361. - Chen, E.; Chang, T. (1997). "Orbital Interaction and the Mulliken-Walsh Diagram for AH2 Systems". Journal of the Chinese Chemical Society (Taipei). 44: 559–565. - Takahata, Y.; Parr, R.G. (1974). "Three Methods to Look at Walsh-type Diagrams Including Nuclear Repulsions". Bulletin of the Chemical Society of Japan. 47 (6): 1380–1386. doi:10.1246/bcsj.47.1380. - Atkins, P.W.; et al. (1970). Inorganic Chemistry: Shriver and Atkins. Oxford, U.K.: Oxford University Press. ISBN 978-0-19-926463-6.
Biologists at the University of Central Florida recently completed a study showing that the Carolina willow, a slender tree once used by Native Americans for medicinal purposes, may be thriving because of water-management projects initiated in the 1950s. Canals were built to control runoff and provide water for agriculture. The unintended consequence -- stable water levels -- allowed the Carolina willow to spread and thrive. The trees now cover thousands of acres. Willows form impenetrable thickets that prevent boating and eliminate duck habitat. Willow thickets also use tremendous amounts of water, leaving less available for wildlife and people. The findings were published today in Restoration Ecology, the peer-reviewed journal of the Society for Ecological Restoration. The St. Johns Water Management District funded the study. While the trees previously were kept in check by natural annual flooding, they can now be found thriving in wetlands, swamps and marshes. Some trees grow as tall as 35 feet. The leaves of the tree contain salicin, which is the compound behind the pain-relieving effect of salicylic acid found in aspirin. UCF professors Pedro F. Quintana-Ascencio and John Fauth worked with Kimberli Ponzio and Dianne Hall, scientists from the St. Johns River Water Management District, to run experiments that found ways to control the willow, which is taking over marshes in the upper St. Johns River basin. UCF students helped plant hundreds of willow seedlings and saplings onto small islands built for the project by the St. Johns River Water Management District's staff. Willows planted low on the islands drowned during summer floods, but plants above the waterline grew and flowered one year later. The biologists confirmed the importance of water fluctuation using experimental ponds on UCF's main campus.
Willow seedlings and saplings planted on the pond banks grew poorly when the biologists raised the water level and flooded the plants for several months. At the same time, control plants just above the waterline grew over 3 feet tall. Combined, the two experiments show that the key to controlling willow is allowing water levels to fluctuate in early spring. Seedlings and small saplings cannot survive dry conditions and are easily drowned in wet marshes. Once plants become larger, willows can survive droughts and tolerate floods and are very difficult to eradicate, Fauth said. Based on the conclusions of the study, the UCF biologists are helping scientists at the water district develop new ways to reduce willow cover and slow down the expansion, Fauth said. "It's important that these trees be controlled to maintain water quality and availability, conserve wildlife and continue enjoying recreational activities in the river," Fauth said. The study may also aid other countries fighting the Carolina willow, including Australia and South Korea, where it was introduced for erosion control. Quintana-Ascencio joined UCF in 2003 after working at El Colegio de la Frontera Sur, in San Cristóbal de Las Casas, Chiapas, Mexico. He has a Ph.D. in ecology and evolution from State University of New York at Stony Brook. He has been a guest scholar at institutions around the world including the University of Melbourne in Victoria, Australia, and the Universidad Rey Juan Carlos in Madrid, Spain. He also has earned several fellowships and has published more than 60 articles and book chapters. Fauth also joined UCF in 2003. Previously he had worked at the College of Charleston and at Denison University. He has a Ph.D. in zoology from Duke University. He has written more than 35 articles and book chapters. He also serves on several boards and was a founding member of the Coral Disease and Health Consortium. Other contributors to the study include: former UCF biology student Luz M.
Castro Morales and Ken Snyder of the St. Johns River Water Management District. 50 Years of Achievement: The University of Central Florida, the nation's second-largest university with nearly 60,000 students, is celebrating its 50th anniversary in 2013. UCF has grown in size, quality, diversity and reputation, and today the university offers more than 200 degree programs at its main campus in Orlando and more than a dozen other locations. Known as America's leading partnership university, UCF is an economic engine attracting and supporting industries vital to the region's success now and into the future. For more information, visit http://today.ucf.edu. Zenaida Gonzalez Kotala | EurekAlert!
Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar... Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction. A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in in the synthesis of new chemical... Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy. "Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy.... Ultra-short, high-intensity X-ray flashes open the door to the foundations of chemical reactions. Free-electron lasers generate these kinds of pulses, but there is a catch: the pulses vary in duration and energy. An international research team has now presented a solution: Using a ring of 16 detectors and a circularly polarized laser beam, they can determine both factors with attosecond accuracy. Free-electron lasers (FELs) generate extremely short and intense X-ray flashes. Researchers can use these flashes to resolve structures with diameters on the... 13.07.2018 | Event News 12.07.2018 | Event News 03.07.2018 | Event News 19.07.2018 | Earth Sciences 19.07.2018 | Power and Electrical Engineering 19.07.2018 | Materials Sciences
At EGU I had the pleasure of talking about BioSNICAR and biological albedo reduction with two of the big names in albedo research. A very interesting point they raised was that the term 'bioalbedo' does not precisely describe the concept that it is attached to. This is true. The term bioalbedo was not coined by spectroscopy or remote sensing experts, but by microbiologists and glaciologists, and is now well-baked into the literature. I will outline here the reasons why we should be cautious of this terminology. Albedo is the survival probability of a photon entering a medium. Light incident upon a material partly reflects from the upper surface; the remainder enters the medium and can scatter anywhere there is a change in the refractive index (e.g. a boundary between air and ice, or ice and water). Where there are opportunities for scattering, light bounces around in the medium, sometimes preferentially in a certain direction depending upon the optical properties of the medium (ice is forward-scattering) but always changing direction to some extent each time it scatters, until it is either absorbed or escapes back out of the medium travelling in a skywards direction. The albedo of the material is the likelihood that the down-welling light entering the medium exits again later as up-welling light. The more strongly absorbing the material, the more likely the light is to be absorbed before exiting. Ice is very weakly absorbing in blue wavelengths (~400 nm), becoming generally more strongly absorbing at longer wavelengths into the near infra-red (hence ice often appearing blue). Solar energy is mostly concentrated within the wavelength range 300–5000 nm, and the term albedo concerns the survival probability of all photons within this range, either at a particular wavelength (spectral albedo) or integrated over the entire solar spectrum (broadband albedo).
This means that a photon entering a material with a broadband albedo of 0.8 has an 80% chance of exiting again. Therefore, when a material is bombarded with billions of photons, 80% of them are returned skywards and 20% are absorbed, and the surface appears bright. A lower albedo therefore means less likelihood of photon survival. For a single material, its absorbing and scattering efficiencies are described using the scattering and absorption coefficients. The ratio of the scattering coefficient to the total extinction coefficient (scattering plus absorption) is known as the single scattering albedo (SSA), which is a crucial term for radiative transfer. A higher SSA is associated with a greater likelihood of a particle scattering a photon rather than absorbing it; a particle with SSA = 1 is non-absorbing. With these definitions we can see why the term bio-albedo is not semantically perfect. The term bio-albedo implies that the relevant measurement is the light reflected from biological cells, which is really the inverse of the measurement of interest. Algal cells are strongly absorbing and their effect on snow and ice albedo is to increase the likelihood of a photon being absorbed rather than scattered back out of the medium. For this reason, the better term to use would be bio-co-albedo, where co-albedo describes the fraction of incident energy absorbed by the particles (i.e. 1 - SSA). Bio-co-albedo is more technically correct terminology, but it is also quite a subtle distinction, and arguably if we have calculated the single scattering albedo, we have by default calculated the co-albedo (co-albedo = 1 - single scattering albedo), and the outcome is the same. The meaning of the term 'bio-co-albedo' is not obvious to those outside of the spectroscopy and remote sensing communities, which I think is a major issue since the topic is so broadly interdisciplinary. The more aesthetic and simpler 'bio-albedo' is justified in most cases, especially because it is already well-used in the literature and more widely accessible.
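To make these definitions concrete, here is a minimal sketch of the relationship between the scattering and absorption coefficients, the single scattering albedo and the co-albedo (the coefficient values are invented for illustration, not measured optical properties):

```python
def single_scattering_albedo(k_sca, k_abs):
    """Scattering coefficient divided by total extinction (scattering + absorption).

    SSA = 1 describes a purely scattering, non-absorbing particle;
    lower values mean a greater chance that an intercepted photon is absorbed.
    """
    return k_sca / (k_sca + k_abs)


def co_albedo(k_sca, k_abs):
    """Fraction of intercepted energy absorbed by the particle: 1 - SSA."""
    return 1.0 - single_scattering_albedo(k_sca, k_abs)


# Invented coefficients: a strongly absorbing algal cell vs. a weakly absorbing ice grain
print(co_albedo(k_sca=0.4, k_abs=0.6))      # algal cell: most intercepted light absorbed
print(co_albedo(k_sca=0.999, k_abs=0.001))  # ice grain: almost none absorbed
```

The 'bio-co-albedo' discussed above is simply this co-albedo evaluated for biological particles.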
From a utilitarian perspective, bio-albedo wins out. As an aside, it reminds me that I have often wondered whether 'evolution' is really an acceptable word for cryosphere scientists to use to describe the temporal development of – for example – a snowpack or ice surface. Evolution implies changes resulting from inherited characteristics passed through successive generations, plus random mutations that are selected for or against based on goodness of fit for the specific environment. A melting snowpack cannot 'evolve' as there are no ancestors, no selection, no inheritance, no generations. People also age over time, influenced by external factors, but we do not describe individuals as evolving – the same applies to a snowpack or glacier. Overall, I suspect splitting hairs over terms like bio-co-albedo does more to dissuade non-specialists from joining the conversation than it does to improve understanding of the processes involved. I recently published an article in the French pop-sci magazine La Recherche about the wondrous microbial ecosystems on glaciers and ice sheets (here for French speakers). For those English speakers who do not subscribe to La Recherche, here is a translation. Also, I strongly recommend the excellent translator who worked on this article with me – contact me if you need translation services and I can link you up. The microbes accelerating glacier melting Our planet is getting warmer and losing its ice. Mountain glaciers are disappearing and the great Greenland and Antarctic ice sheets are shrinking. These masses of ice are giant coolers for the planet and they reflect energy from the Sun back out into space, meaning the smaller they become, the more the planet warms. Surprisingly, the process of melting the vast glaciers and ice sheets is accelerated by microscopic life. Glacier and ice sheet melting depends upon more than just temperature. Most of the energy driving melt comes from sunlight that hits the ice surface.
Dirtier, darker ice absorbs more solar energy than clean, bright ice meaning more energy is available to drive melting. On the Greenland Ice Sheet in particular, the ice becomes very dark in the summer, with large areas reflecting just 20-30% of the sunlight hitting them. This is not a new phenomenon – in fact it was noticed by explorers during the great polar expeditions of the late 1800s. Intrigued, they examined samples of ice under their microscopes. The dark colour of the ice was not simply due to dust as they expected – astonishingly, the ice was stained by life (Nordenskjold, 1875). The ice surface is a patchwork of greys, reds and purples coloured by the collective effect of countless microscopic organisms, with potential knock-on effects for Earth’s climate (Uetake et al., 2010; Takeuchi et al., 2006; Yallop et al., 2012; Cook et al., 2017). Microbes on Ice When explorer Adolf E Nordenskjold arrived on the Greenland Ice Sheet in 1870 he immediately noticed the dark grey-purple colour of the ice. His colleague, a biologist called Berggren, examined the ice under the microscope and discovered a rich variety of microbial life. The importance of their discovery was clear to them – this life darkens the ice and increases its melt rate. Nordenskjold even suggested that the microbial life was the “greatest enemy of the mass of ice” and an accelerator of deglaciation at the global scale (Nordenskjold, 1875)! Until recently, Nordenskjold’s observations of life on ice have remained obscure footnotes in the history of Polar exploration; however, as climate science has become increasingly urgent in the twenty-first century, Nordenskjold’s work has gained new significance. Contemporary scientists have confirmed the presence of a microbial ecosystem growing on the surface of the Greenland Ice Sheet and elsewhere and are now attempting to quantify their ice-darkening effect. 
Although it is an extreme environment where temperatures are low and nutrients scarce, there is abundant sunlight and liquid water to support photosynthesis, meaning microalgae can grow on the ice surface (Uetake et al., 2010; Yallop et al., 2012). The days are long in the Arctic in summer, with the sun staying above the horizon for twenty-four hours per day for part of the season, exposing the algae to intense and prolonged solar energy. This powers photosynthesis but over time the exposure stresses the ice algae, causing them to produce biological sunscreen molecules to protect their delicate photosynthetic machinery. These ‘carotenoids’ colour their cells very dark purple and enhance the biological darkening of the ice surface. At the same time, the ice surface is peppered with holes that are often cylindrical but can have complex and irregular shapes (Cook et al., 2015). These holes range from centimeters to meters in diameter and depth and contain mixtures of biological and nonbiological material bundled up into small balls that sit on the hole floors. Nordenskjold first noticed these holes on the Greenland Ice Sheet and named them ‘cryoconite holes’, from the Greek for ‘holes with frozen dust’. These holes are the most biodiverse microbial habitat on Earth’s ice. They form when dust and debris becomes tangled up by long, thread-like cyanobacteria. The cyanobacteria are photosynthetic and as they grow they exude polymers that act as biological glues, binding the bundles of material together into stable granules. This biological bundling and binding of material creates a microhabitat for other microbes, especially those that can feed on molecules produced by the photosynthesizing cyanobacteria. As the granules grow they become heavier, meaning they settle on the ice surface. The biological material makes them especially dark, so the ice underneath melts quickly, causing holes to form in the ice surface with the granules sitting on the hole floor. 
The holes provide protection from the weather and intense sunlight and also prevent the microbes from being washed away. The cyanobacteria therefore sculpt the ice surface and engineer a comfortable, stable habitat where diverse microbial life can thrive in this extreme environment. Cryoconite holes are more than icy buckets that hold microbial life. They are more like microbial mini-cities on ice, with each connected to many others by meltwater flowing between ice crystals just under the ice surface. Cryoconite microbes engage in engineering and construction, production, consumption, competition, predation, growth, reproduction, death, decay, immigration and emigration. There is both import and export of nutrients, waste and other biological material. At the same time, the hole itself changes its shape and size in response to changing environmental conditions with the emergent effect of maintaining the light intensity at the hole floor, promoting photosynthesis (Cook et al., 2010). Algal blooms and cryoconite are crucial components of the wider Arctic ecosystem, acting as stores of carbon (which they draw down from the atmosphere and fix into organic molecules), nutrients and biomass which can all be delivered to soils, rivers and oceans as glaciers melt (Stibal et al., 2012). Truly, these are widely interconnected complex adaptive systems created biologically on Earth’s ice. The Cutting Edge of Life on Ice While life on ice has been known for many years, most of the literature on the subject has been produced during the twenty-first century. Modern molecular biological techniques have enabled scientists to catalogue the species present in cryoconite and algal blooms, and modern instruments can measure their darkening effect. However, there are several major gaps in our understanding of life on ice. To quantify their effect on ice darkening worldwide, we need a reliable method to map icy microbes at the scale of entire ice sheets. 
From a biological perspective, we know which organisms live in algal blooms and cryoconite so we must now concentrate on determining how they function and what ecosystem services they might provide that could impact human society. To estimate the total coverage of life on ice, we must detect it without actually being present to take samples. It is relatively easy to take samples and analyse them in a laboratory to tell if life is present, but doing the same from the air is a different problem. In addition to biological darkening, soots and mineral dusts colour the ice. Also, as the ice melts the crystals change shape and melt water can fill the spaces between them, which in itself changes the way the ice absorbs and reflects solar energy. Disentangling the biological signal from these other darkening processes has proven to be challenging. However, because the darkening of ice by living cells is due to biological molecules that absorb light at specific wavelengths, we may be able to use the spectrum of reflected light to identify them. Chlorophyll, for example, absorbs red and blue light much more effectively than it absorbs green light (which is why we see leaves as green). For other biological molecules, the peak absorption will be at slightly different wavelengths, and non-biological materials will have their own absorption patterns too. However, while identifying ‘signature spectra’ is simple when only one material is present, it is much more difficult when several species with different light absorbing properties are mixed with non-biological materials. All of the light absorbers can be scattered unevenly and mixed vertically within the volume of ice which can itself be a complex aggregate of variously sized ice crystals and liquid water. The reflected light is a tangle of signals that can be hard to unpick. 
At our laboratory at the University of Sheffield, we are working on a purpose-built drone which will fly back and forth over a patch of the Greenland Ice Sheet taking images at specific wavelengths of light. By analysing these images we hope to be able to produce a map of life on ice. Using the drone means we can follow the flight on foot and take ground samples to examine in the laboratory, enabling us to link the drone images to actual concentrations of different light absorbers on the ground. The wavelengths imaged by the drone match up with those measured by several Earth observation satellites, meaning that achieving life-detection using a drone should then enable the same from space. As well as knowing where the life is, we also need a deeper understanding of how it functions. Recognition of ice surfaces as microbial habitats came at the same time as an explosion in accessible and affordable techniques in field molecular microbial ecology, meaning several groups have used high-throughput sequencing of marker-genes to identify the particular microbes present within cryoconite communities (e.g. Cameron et al., 2012; Edwards et al., 2014; Stibal et al., 2014, 2015). Environmental genomic techniques have also been used to investigate the total genetic composition of cryoconite communities (Edwards et al., 2013). To date, these have been snapshot studies, but in the very near future great insights into the functioning of cryoconite microbes will come from rapid metagenomic, metabolomic and metatranscriptomic studies. It has been suggested that ice surface microbes might be good targets for bioprospecting. Since they are able to thrive in conditions of low temperature, high light and low nutrients, they may well utilize survival strategies that we can exploit, either by extracting novel genes and biomolecules, or by observing and gaining ecological knowledge. 
Cryoconite has been suggested to be a potential source of antifreeze proteins, novel antibiotics and cold-active enzymes. The shape, illumination conditions and flushing with flowing meltwater make cryoconite holes natural analogs to industrial bioreactors which are commonly used to synthesise valuable biomolecules (Cook et al., 2015). Deep insights will come from combining the expertise of microbial ecologists with glaciologists and physicists who, together, will link processes operating at the molecular level with changes in ice surface colour and patterns of melt, which suggests insights into the ecology of ice surfaces might one day be obtainable from the sky or from space. While this is some way off, great insights could be gained from a shift towards a holistic understanding of the ice surface as a ‘living landscape’. We are working hard to achieve remote detection of life on ice for the purposes of mapping biological ice darkening from satellites and improving our ability to predict future ice melt. However, there is another potential outcome from this work… what if instead of looking down from space at our own planet, we turn the sensors around and start looking out? The Greenland Ice Sheet is, in many ways, a good place for developing life detection technologies that can be applied to the search for life on other icy planets and moons. Take, for example, Europa. A recently funded NASA project will examine this icy moon of Jupiter for signs of life because of its potentially habitable icy shell and subsurface ocean. On Europa, the icy surface is sunlit and seeded with possibly mineral-rich snow that forms when liquid water in its subsurface oceans escapes via huge geysers (Hand et al., 2017). There is therefore a potentially dusty ice surface illuminated by sunlight that could support photosynthesis, just like the Greenland Ice Sheet (although the solar energy flux and temperature is lower on Europa and photosynthesis is highly unlikely). 
Any life detection technology that works on the Greenland Ice Sheet will have to overcome the challenges of ice optics, interference by mineral dusts and uncertain biological pigment composition, which would also be the main challenges for remote detection of life on the surface of other icy planets and moons. The frontiers of glacier biology on Earth may therefore intersect with the cutting-edge search for extraterrestrial life. While many people think of Arctic and Antarctic ice as lifeless places, there is in fact abundant microbial activity on Earth's glaciers and ice sheets. But more surprising are the huge impacts of these tiny organisms. By changing the colour of the ice surface, microbes are potentially enhancing the rate at which glaciers and ice sheets are shrinking, but we cannot yet build them into our climate models. The research priority now is mapping these ecosystems from space because this will enable us to estimate their impact on ice melt worldwide and improve our melt forecasts. The same technologies that will enable us to detect life on Earth may eventually be useful tools for searching for icy life elsewhere in the universe. There is also much to be learned about the way these microbes function that can educate us about the limits of life in extreme environments. The true sharp edge of glacier biology research involves understanding how these microbes are able to sense, survive and drive environmental change. The study of life on Earth's ice is deeply interdisciplinary and ultimately it requires us to recognize – as Nordenskjold did – the intricate bridges joining the very big and the very small. Cameron K, Hodson A J, Osborn M (2012) Carbon and nitrogen biogeochemical cycling potentials of supraglacial cryoconite communities. Polar Biology, 35: 1375-1393 Cook J, Hodson A, Telling J, Anesio A, Irvine-Fynn T, Bellas C (2010) The mass-area relationship within cryoconite holes and its implications for primary production.
Annals of Glaciology, 51 (56): 106-110 Cook, J.M., Edwards, A., Irvine-Fynn, T.D.I., Takeuchi, N. 2015. Cryoconite: Dark biological secret of the Cryosphere. Progress in Physical Geography, 40 (1): 66-111, doi: 10.1177/0309133315616574. Cook et al., 2017. Edwards A, Pachebat J A, Swain M, Hegarty M, Hodson A, Irvine-Fynn T D L, Rassner S M, Sattler B (2013) A metagenomic snapshot of taxonomic and functional diversity in an alpine glacier cryoconite ecosystem. Environmental Research Letters, 8 (035003): 11pp Edwards A, Mur L, Girdwood S, Anesio A, Stibal M, Rassner S, Hell K, Pachebat J, Post B, Bussell J, Cameron S, Griffith G, Hodson A (2014) Coupled cryoconite ecosystem structure-function relationships are revealed by comparing bacterial communities in Alpine and Arctic glaciers. FEMS Microbial Ecology, 89 (2): 222-237 Hand, K.P., Murray, A.E., Garvin, J.B., Brinckerhoff, W.B., Christner, B.C., Edgett, K.S., Ehlmann, B.L., German, C.R., Hayes, A.G., Hoehler, T.M., Horst, S.M., Lunine, J.I., Nealson, H.H., Paranicas, C., Schmidt, B.E., Smith, D.E., Rhoden, A.R., Russell, M.J., Templeton, A.S., Willis, P.A., Yingst, R.A., Phillips, C.B., Cable, M.L., Craft, K.L., Hofmann, A.E., Nordheim, T.A., Pappalardo, R.P., and the Project Engineering Team (2017). NASA, Report of the Europa Lander Science Definition Team. Posted Feb 2017. https://solarsystem.nasa.gov/docs/Europa_Lander_SDT_Report_2016.pdf Stibal M, Sabacka M, Zarsky J (2012a) Biological processes on glacier and ice sheet surfaces. Nature Geoscience, 5: 771-774 Stibal M, Schostag M, Cameron K A, Hansen L H, Chandler D M, Wadham J L, Jacobsen C S (2014) Different bulk and active microbial communities in cryoconite from the margin and interior of the Greenland ice sheet. Environmental Microbiology Reports, DOI: 10.1111/1758-2229.12246 Stibal, M., Schostag, M., Cameron, K. A., Hansen, L. H., Chandler, D. M., Wadham, J. L. and Jacobsen, C. S.
(2015), Different bulk and active bacterial communities in cryoconite from the margin and interior of the Greenland ice sheet. Environmental Microbiology Reports, 7: 293–300. doi:10.1111/1758-2229.12246 Takeuchi, N., Dial, R., Kohshima, S., Segawa, T., Uetake, J., 2006. Spatial distribution and abundance of red snow algae on the Harding Icefield, Alaska derived from a satellite image. Geophysical Research Letters, 33, L21502, doi:10.1029/2006GL027819 Uetake, J., Naganuma, T., Hebsgaard, M. B., and Kanda, H. 2010. Communities of algae and cyanobacteria on glaciers in west Greenland. Polar Sci. 4, 71–80. doi: 10.1016/j.polar.2010.03.002 Yallop, M.L., Anesio, A.J., Perkins, R.G., Cook, J., Telling, J., Fagan, D., MacFarlane, J., Stibal, M., Barker, G., Bellas, C., Hodson, A., Tranter, M., Wadham, J., Roberts, N.W. 2012. Photophysiology and albedo-changing potential of the ice-algal community on the surface of the Greenland ice sheet, ISME Journal, 6: 2302–2313 In recent decades there has been a significant increase in snow melt on the Antarctic Peninsula and therefore more 'wet snow' containing liquid water. This wet snow is a microbial habitat. In our new paper, we show that distance from the sea controls microbial abundance and diversity. Near the coast, rock debris and marine fauna fertilize the snow with nutrients, allowing striking algal blooms of red and green to develop, which alter the absorption of visible light in the snowpack. This happens to a lesser extent further inland, where there is less fertilization. A particularly interesting finding is that the absorption of visible light by carotenoid pigments has greatest influence at the surface of the snow pack whereas chlorophyll is most influential beneath the surface. Higher concentrations of dissolved inorganic carbon and carbon dioxide were measured in interstitial air near the coast compared to inland, and a close association was found between chlorophyll and dissolved organic carbon.
These observations suggest in situ production of carbon that can support more diverse microbial life, including species originating in nearby terrestrial and marine habitats. These observations will help to predict microbial processes including carbon exchange between snow, atmosphere, ocean and soils occurring in the fastest-warming part of the Antarctic, where snowmelt has already doubled since the mid-twentieth century and is expected to double again by 2050. I’m very pleased to report our new paper is now in open discussion in The Cryosphere. The paper presents a new model for predicting the spectral bioalbedo of snow and ice, which confirms that ice algae on ice surfaces can change its colour and by doing so enhance its melt rate (“bioalbedo”). We also used the model to critique the techniques used to measure bioalbedo in the field. The model is based on the SNow ICe and Atmosphere Radiative model (SNICAR), but adapted to interface with a mixing model for pigments in algal cells. We refer to the coupled models as BioSNICAR. The model uses Mie theory to work out the optical properties of individual algal cells with refractive indices calculated using a pigment mixing model. The user can decide how much of each pigment the cell contains, the cell size, the biomass concentration in each of n vertical layers, the snow/ice optical properties, angle and spectral distribution of incoming sunlight and the mass concentration, optical properties and distribution of inorganic impurities including mineral dusts and black carbon (soot). From this information, the model predicts the albedo of the surface for each wavelength in the solar spectrum. This can then be used to inform an energy balance model to see how much melt results from changes to any of the input values, including growth or pigmentation of algae. The model shows that smaller cells with photoprotective pigments have the greatest albedo-reducing effect. 
The model experiments suggest that in most cases algal cells have a greater albedo-reducing effect than mineral dusts (depending upon optical properties) but less than soot. As well as making predictions about albedo change, the modelling is useful for designing field experiments, as it can quantify the error resulting from certain practices, such as using devices with limited wavelength ranges, or neglecting to characterise the vertical distribution of cells. I'll cover this in some further posts. The most important thing is metadata collection, since standardising this enables the measurement conditions to be as transparent as possible and encourages complementarity between different projects. Importantly, following a protocol for albedo measurements and collecting sufficient metadata will make it easier to couple ground measurements to satellite data. We outline two key procedures: hemispheric albedo measurement, and hemispherical-conical reflectance factor measurement. To accompany the discussion in our paper, we've produced some metadata collection sheets that might be useful to other researchers making albedo measurements in the field (download here: metadata sheets) and made our code and data available in an open repository.
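For readers unfamiliar with the spectral-to-broadband step mentioned earlier, here is a toy sketch: broadband albedo is the spectral albedo weighted by the incoming solar irradiance and integrated over wavelength. The curves below are invented illustrative numbers, not BioSNICAR output:

```python
def broadband_albedo(wavelengths_nm, spectral_albedo, irradiance):
    """Irradiance-weighted mean of spectral albedo (trapezoidal integration)."""
    def trapz(y, x):
        return sum((y[i] + y[i + 1]) * (x[i + 1] - x[i]) / 2.0
                   for i in range(len(x) - 1))
    reflected = trapz([a * f for a, f in zip(spectral_albedo, irradiance)],
                      wavelengths_nm)
    incoming = trapz(irradiance, wavelengths_nm)
    return reflected / incoming


# Toy spectra: ice is bright at 400 nm and darkens toward the near infra-red,
# while solar irradiance also falls off with wavelength.
wl = [400, 800, 1200, 1600, 2000]          # nm
albedo = [0.95, 0.85, 0.55, 0.25, 0.10]    # spectral albedo (invented)
flux = [1.5, 1.1, 0.5, 0.25, 0.1]          # irradiance, W m-2 nm-1 (invented)

print(round(broadband_albedo(wl, albedo, flux), 3))  # ~0.751
```

Darkening the visible end of the spectral albedo curve (for example by adding absorbing algal cells) pulls this weighted average down, which is exactly the bioalbedo effect the model quantifies.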
Gene hunters at Johns Hopkins have discovered a common genetic mutation that increases the risk of inheriting a particular birth defect not by the usual route of disrupting the gene’s protein-making instructions, but by altering a regulatory region of the gene. Although the condition, called Hirschsprung disease, is rare, its complex genetics mimics that of more common diseases, such as diabetes and heart disease. "It’s a funny mutation in a funny place," says study leader Aravinda Chakravarti, Ph.D., director of the McKusick-Nathans Institute of Genetic Medicine. "But I think the majority of mutations found in major diseases are going to be funny mutations in funny places." Far from being a problem, the finding is good news, he suggests. "Mutations in the protein-coding sequence can’t really be fixed, but those outside the protein-coding regions -- perhaps we can fiddle with them, perhaps they are ’tunable.’ The protein should be fine if we can just get the cells to make the right amount," he says.
Astronomers from KU Leuven, Belgium, have shown that the interaction between the surface and the atmosphere of an exoplanet has major consequences for the temperature on the planet. This temperature, in turn, is a crucial element in the quest for habitable planets outside our Solar System. In the quest for habitable planets outside our Solar System - also known as exoplanets - astronomers are currently focusing on rocky planets that don't look like Earth. These planets orbit so-called M dwarfs - stars that are smaller than our Sun. The figures show the wind, temperature, and surface-atmosphere friction on a planet 1.45 times the size of the Earth in a 1-day orbit around an M dwarf. The two topmost figures show the wind and the temperature in the upper layers of the atmosphere. The two figures in the middle show the wind and the temperature on the surface of the planet. On the left-hand figures, the surface-atmosphere friction equals that on Earth. On the right-hand figures, there is ten times as much friction between surface and atmosphere than is the case on Earth. Both scenarios have a different impact on the climate of a planet: the climate represented in the right-hand figures is more habitable. Credit: KU Leuven - Ludmila Carone and Leen Decin In our universe, there are many more M dwarfs than there are sun-like stars, making it more likely that astronomers will discover the first habitable exoplanet around an M dwarf. Most planets orbiting these M dwarfs always face their star with the same side. As a result, they have permanent day and night sides. The day side is too hot to make life possible, while the night side is too cold. Last year, KU Leuven researchers Ludmila Carone, Professor Rony Keppens, and Professor Leen Decin already showed that planets with permanent day sides may still be habitable depending on their 'air conditioning' system. 
Two out of three possible 'air conditioning' systems on these exoplanets use the cold air of the night side to cool down the day side. And with the right atmosphere and temperature, planets with permanent day and night sides are potentially habitable. Whether the 'air conditioning' system is actually effective depends on the interaction between the surface of the planet and its atmosphere, Ludmila Carone's new study shows. Carone: "We built hundreds of computer models to examine this interaction. In an ideal situation, the cool air is transported from the night to the day side. On the latter side, the air is gradually heated by the star. This hot air rises to the upper layers of the atmosphere, where it is transported to the night side of the planet again." But this is not always the case: on the equator of many of these rocky planets, a strong air current in the upper layers of the atmosphere interferes with the circulation of hot air to the night side. The 'air conditioning' system stops working, and the planet becomes uninhabitable because the temperatures are too extreme. Ludmila Carone: "Our models show that friction between the surface of the planet and the lower layers of the atmosphere can suppress these strong air currents. When there is a lot of surface friction, the 'air conditioning' system still works." The KU Leuven researchers created models in which the surface-atmosphere interaction on the exoplanet is the same as on Earth, and models in which there is ten times as much interaction as on Earth. In the latter case, the exoplanets had a more habitable climate. If planets with a well-functioning 'air conditioning' system also have the right atmosphere composition, there's a good chance that these exoplanets are habitable. Ludmila Carone | EurekAlert! 
Unit of measurement

A unit of measurement is a definite magnitude of a quantity, defined and adopted by convention or by law, that is used as a standard for measurement of the same kind of quantity. Any other quantity of that kind can be expressed as a multiple of the unit of measurement. For example, a length is a physical quantity. The metre is a unit of length that represents a definite predetermined length. When we say 10 metres (or 10 m), we actually mean 10 times the definite predetermined length called "metre". Measurement is a process of determining how large or small a physical quantity is as compared to a basic reference quantity of the same kind. The definition, agreement, and practical use of units of measurement have played a crucial role in human endeavour from early ages up to the present. A multitude of systems of units used to be very common. Now there is a global standard, the International System of Units (SI), the modern form of the metric system. In trade, weights and measures is often a subject of governmental regulation, to ensure fairness and transparency. The International Bureau of Weights and Measures (BIPM) is tasked with ensuring worldwide uniformity of measurements and their traceability to the International System of Units (SI). Metrology is the science of developing nationally and internationally accepted units of measurement. In physics and metrology, units are standards for measurement of physical quantities that need clear definitions to be useful. Reproducibility of experimental results is central to the scientific method. A standard system of units facilitates this. Scientific systems of units are a refinement of the concept of weights and measures historically developed for commercial purposes. Science, medicine, and engineering often use larger and smaller units of measurement than those used in everyday life.
The judicious selection of the units of measurement can aid researchers in problem solving (see, for example, dimensional analysis). A unit of measurement is a standardised quantity of a physical property, used as a factor to express occurring quantities of that property. Units of measurement were among the earliest tools invented by humans. Primitive societies needed rudimentary measures for many tasks: constructing dwellings of an appropriate size and shape, fashioning clothing, or bartering food or raw materials. The earliest known uniform systems of measurement seem to have all been created sometime in the 4th and 3rd millennia BC among the ancient peoples of Mesopotamia, Egypt and the Indus Valley, and perhaps also Elam in Persia. Weights and measures are mentioned in the Bible (Leviticus 19:35–36). It is a commandment to be honest and have fair measures. In the Magna Carta of 1215 (The Great Charter) with the seal of King John, put before him by the Barons of England, King John agreed in Clause 35 "There shall be one measure of wine throughout our whole realm, and one measure of ale and one measure of corn—namely, the London quart;—and one width of dyed and russet and hauberk cloths—namely, two ells below the selvage..." As of the 21st century, multiple unit systems are used all over the world, such as the United States Customary System, the British Customary System, and the International System. However, the United States is the only industrialized country that has not yet completely converted to the Metric System. The systematic effort to develop a universally acceptable system of units dates back to 1790 when the French National Assembly charged the French Academy of Sciences to come up with such a unit system. This system was the precursor to the metric system which was quickly developed in France but did not take on universal acceptance until 1875 when The Metric Convention Treaty was signed by 17 nations.
After this treaty was signed, a General Conference of Weights and Measures (CGPM) was established. The CGPM produced the current SI system which was adopted in 1954 at the 10th conference of weights and measures. Currently, the United States is a dual-system society which uses both the SI system and the US Customary system.

Systems of units

Historically many of the systems of measurement which had been in use were to some extent based on the dimensions of the human body. As a result, units of measure could vary not only from location to location, but from person to person. Metric systems of units have evolved since the adoption of the original metric system in France in 1791. The current international standard metric system is the International System of Units (abbreviated to SI). An important feature of modern systems is standardization. Each unit has a universally recognized size. Both the imperial units and US customary units derive from earlier English units. Imperial units were mostly used in the British Commonwealth and the former British Empire. US customary units are still the main system of measurement used in the United States despite Congress having legally authorised metric measure on 28 July 1866. Some steps towards US metrication have been made, particularly the redefinition of basic US and imperial units to derive exactly from SI units. Since the international yard and pound agreement of 1959, the US and imperial inch is now defined as exactly 0.0254 m, and the US and imperial avoirdupois pound is now defined as exactly 453.59237 g. While the above systems of units are based on arbitrary unit values, formalised as standards, some unit values occur naturally in science. Systems of units based on these are called natural units. Similar to natural units, atomic units (au) are a convenient system of units of measurement used in atomic physics. Also a great number of unusual and non-standard units may be encountered.
These may include the solar mass (2×10³⁰ kg), the megaton (the energy released by detonating one million tons of trinitrotoluene, TNT) and the electronvolt.

Legal control of weights and measures

To reduce the incidence of retail fraud, many national statutes have standard definitions of weights and measures that may be used (hence "statute measure"), and these are verified by legal officers.

Base and derived units

Different systems of units are based on different choices of a set of base units. The most widely used system of units is the International System of Units, or SI. There are seven SI base units. All other SI units can be derived from these base units. For most quantities a unit is necessary to communicate values of that physical quantity. For example, conveying to someone a particular length without using some sort of unit is impossible, because a length cannot be described without a reference used to make sense of the value given. But not all quantities require a unit of their own. Using physical laws, units of quantities can be expressed as combinations of units of other quantities. Thus only a small set of units is required. These units are taken as the base units. Other units are derived units. Derived units are a matter of convenience, as they can be expressed in terms of basic units. Which units are considered base units is a matter of choice. The base units of SI are actually not the smallest set possible. Smaller sets have been defined. For example, there are unit sets in which the electric and magnetic field have the same unit. This is based on physical laws that show that electric and magnetic field are actually different manifestations of the same phenomenon.

Calculations with units of measurement

Units as dimensions

Any value of a physical quantity is expressed as a comparison to a unit of that quantity.
For example, the value of a physical quantity Z is expressed as the product of a numerical factor n and a unit [Z]:

Z = n × [Z]

For example, let Z be "2 candlesticks"; then n = 2 and [Z] = candlestick. The multiplication sign is usually left out, just as it is left out between variables in scientific notation of formulas. The conventions used to express quantities are referred to as quantity calculus. In formulas the unit [Z] can be treated as if it were a specific magnitude of a kind of physical dimension: see dimensional analysis for more on this treatment. Units can only be added or subtracted if they are the same type; however units can always be multiplied or divided, as George Gamow used to explain. Let Z₁ be "2 candlesticks" and Z₂ be "3 cabdrivers"; then

"2 candlesticks" times "3 cabdrivers" = 6 candlestick·cabdrivers.

A distinction should be made between units and standards. A unit is fixed by its definition, and is independent of physical conditions such as temperature. By contrast, a standard is a physical realization of a unit, and realizes that unit only under certain physical conditions. For example, the metre is a unit, while a metal bar is a standard. One metre is the same length regardless of temperature, but a metal bar will be exactly one metre long only at a certain temperature. There are certain rules that have to be used when dealing with units:
- Treat units algebraically. Only add like terms. When a unit is divided by itself, the division yields a unitless one. When two different units are multiplied or divided, the result is a new unit, referred to by the combination of the units. For instance, in SI, the unit of speed is metres per second (m/s). See dimensional analysis. A unit can be multiplied by itself, creating a unit with an exponent (e.g. m²/s²). Put simply, units obey the laws of indices. (See Exponentiation.)
- Some units have special names, however these should be treated like their equivalents. For example, one newton (N) is equivalent to 1 kg⋅m/s².
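The rule "treat units algebraically" can be sketched in code. The following is a hypothetical illustration, not part of the original article: a unit is represented as a mapping from base-unit names to integer exponents, so multiplying units adds exponents, dividing subtracts them, and dividing a unit by itself yields the unitless one.

```python
from collections import Counter

def multiply(u, v):
    """Multiply two units: exponents add (e.g. m * m -> m^2)."""
    result = Counter(u)
    result.update(v)
    return {k: e for k, e in result.items() if e != 0}

def divide(u, v):
    """Divide two units: exponents subtract (e.g. m / s -> m s^-1)."""
    result = Counter(u)
    result.subtract(v)
    return {k: e for k, e in result.items() if e != 0}

metre = {"m": 1}
second = {"s": 1}

speed = divide(metre, second)         # m/s, stored as {'m': 1, 's': -1}
area = multiply(metre, metre)         # m^2, stored as {'m': 2}
dimensionless = divide(metre, metre)  # {} -- the unitless one
```

Adding two quantities would additionally require checking that their unit dictionaries are equal ("only add like terms"), which is exactly the dimensional-analysis consistency check.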
Thus a quantity may have several unit designations, for example: the unit for surface tension can be referred to as either N/m (newtons per metre) or kg/s² (kilograms per second squared). Whether these designations are equivalent is disputed amongst metrologists.

Expressing a physical value in terms of another unit

Conversion of units involves comparison of different standard physical values, either of a single physical quantity or of a physical quantity and a combination of other physical quantities. Starting with Z = n × [Z], just replace the original unit [Z] with its meaning in terms of the desired unit [Y], e.g. if [Z] = c × [Y], then:

Z = n × c × [Y]

Now n and c are both numerical values, so just calculate their product. Or, which is just mathematically the same thing, multiply Z by unity; the product is still Z:

Z = n × [Z] × (c × [Y] / [Z])

For example, you have an expression for a physical value Z involving the unit feet per second (ft/s) and you want it in terms of the unit miles per hour (mi/h):
- Find facts relating the original unit to the desired unit: 1 mile = 5280 feet and 1 hour = 3600 seconds.
- Next use the above equations to construct a fraction that has a value of unity and that contains units such that, when it is multiplied with the original physical value, it will cancel the original units: (1 mi / 5280 ft) and (3600 s / 1 h).
- Last, multiply the original expression of the physical value by the fraction, called a conversion factor, to obtain the same physical value expressed in terms of a different unit: n ft/s × (1 mi / 5280 ft) × (3600 s / 1 h) = (3600/5280) n mi/h.

Note: since valid conversion factors are dimensionless and have a numerical value of one, multiplying any physical quantity by such a conversion factor (which is 1) does not change that physical quantity.
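The feet-per-second to miles-per-hour conversion described above can be carried out programmatically. This short sketch is an illustration, not part of the original article: it multiplies by the two conversion factors, each of which has a value of one, so only the numerical expression of the quantity changes.

```python
# Facts relating the original unit to the desired unit:
#   1 mile = 5280 feet, 1 hour = 3600 seconds
FEET_PER_MILE = 5280
SECONDS_PER_HOUR = 3600

def fps_to_mph(feet_per_second):
    """Multiply by (1 mi / 5280 ft) * (3600 s / 1 h); both factors equal
    one, so the physical quantity itself is unchanged."""
    return feet_per_second * SECONDS_PER_HOUR / FEET_PER_MILE

print(fps_to_mph(88))  # 60.0 -- 88 ft/s is exactly 60 mi/h
```

The same pattern (multiply by a ratio of equivalent quantities) covers any unit conversion, including the litres-per-100-km example mentioned below.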
Or as an example using the metric system, you have a value of fuel economy in the unit litres per 100 kilometres and you want it in terms of the unit microlitres per metre. One example of the importance of agreed units is the failure of the NASA Mars Climate Orbiter, which was accidentally destroyed on a mission to Mars in September 1999 instead of entering orbit, due to miscommunications about the value of forces: different computer programs used different units of measurement (newton versus pound force). Considerable amounts of effort, time, and money were wasted. On 15 April 1999, Korean Air cargo flight 6316 from Shanghai to Seoul was lost due to the crew confusing tower instructions (in metres) and altimeter readings (in feet). Three crew and five people on the ground were killed. Thirty-seven were injured. In 1983, a Boeing 767 (which came to be known as the Gimli Glider) ran out of fuel in mid-flight because of two mistakes in figuring the fuel supply of Air Canada's first aircraft to use metric measurements. This accident was the result of both confusion due to the simultaneous use of metric and Imperial measures and confusion of mass and volume measures.
1RQR: Crystal structure and mechanism of a bacterial fluorinating enzyme, product complex. Nature (2004) 427, 561–565.

Fluorine is the thirteenth most abundant element in the earth's crust, but fluoride concentrations in surface water are low and fluorinated metabolites are extremely rare. The fluoride ion is a potent nucleophile in its desolvated state, but is tightly hydrated in water and effectively inert. Low availability and a lack of chemical reactivity have largely excluded fluoride from biochemistry: in particular, fluorine's high redox potential precludes the haloperoxidase-type mechanism used in the metabolic incorporation of chloride and bromide ions. But fluorinated chemicals are growing in industrial importance, with applications in pharmaceuticals, agrochemicals and materials products. Reactive fluorination reagents requiring specialist process technologies are needed in industry and, although biological catalysts for these processes are highly sought after, only one enzyme that can convert fluoride to organic fluorine has been described. Streptomyces cattleya can form carbon-fluorine bonds and must therefore have evolved an enzyme able to overcome the chemical challenges of using aqueous fluoride. Here we report the sequence and three-dimensional structure of the first native fluorination enzyme, 5'-fluoro-5'-deoxyadenosine synthase, from this organism. Both substrate and products have been observed bound to the enzyme, enabling us to propose a nucleophilic substitution mechanism for this biological fluorination reaction.
Outside a host cell, these weird microscopic particles, or virions, only consist of a tiny piece of genetic information (about 10,000 times less than that contained in the human genome) and a protein or lipid (fatty molecule) shell. Whether these particles are living things is the subject of much debate, as they don’t meet many of the usual criteria for life. While there isn’t any formal agreement on what defines life, most definitions include the ability to adapt to the environment, to reproduce, to respond to stimuli, and to use energy. While the virus particle may fall short of the definition of life depending on the criteria used, for some virologists like myself, thinking of the virion as the “virus” is like calling a sperm or unfertilised egg a “person”. Sure, a sperm is an essential step towards creating a person, but few people would argue that a sperm or unfertilised egg should be described as the finished product. Much like a sperm, virions are produced in the millions. Many will never reach their destination and are lost and degrade in the environment. It is only when the virus binds to and enters a target cell that its cycle of replication can begin. A virion doesn’t even always contain a majority of the molecules a virus can create. For example, the norovirus virion contains just three different types of protein and one type of RNA (a nucleic acid like DNA which uses a different sugar to form its backbone). Infected cells, however, make at least eight different viral proteins and four different viral RNAs. Nor does the virus particle itself usually result in the symptoms of disease. Typically, when you catch a virus, your symptoms come from either infected cells dying, or your immune response to those infected cells. For these reasons, some virologists consider the infected cell, rather than the virion, to be the virus.

I am virus

While this idea sounds outlandish, from conception to grave, your cells are intricately associated with viruses.
Even if you don’t have a cold or the flu, you are still part-virus as human DNA plays host to a range of different viruses. These are retroviruses, the best-known example of which is HIV. While HIV only entered the human population relatively recently, viruses very much like it have been infecting us and the creatures we evolved from since long before humans even existed. While HIV infects immune cells, when a retrovirus instead infects the cells that produce eggs or sperm, the viral DNA can be inherited by any offspring. Over millions of years, these viruses have lost their ability to produce infectious particles, but have in some cases found other vital roles, and are now indispensable for human life. One well-studied example is a protein called Syncytin-1, which is vital for the development of the placenta. This was originally a retroviral protein which entered the monkey population that gave rise to humans around 24 million years ago. If we deleted this protein from our DNA, humanity would rapidly go extinct as we could no longer produce a functional placenta. All these viruses which inserted into our DNA long ago are termed endogenous retroviruses (ERVs). In humans, ERVs have long since lost the ability to produce infectious virions, but this is not the case in all animals. Pig ERVs, for example, can produce infectious particles and are a concern when considering the use of pig organs for transplant, as these are known to be able to infect human cells in the lab. If a virus is the infected cell, rather than the virion, you could even think of the viruses that can infect us as more than 99.9% human. This is because they need many of the human proteins or other molecules present in your cells and encoded in your DNA to make more virus. A human cell is vastly more complex than even the largest virus, and viruses can make use of this to compensate for their own simplicity. Viruses and their host cells share many common needs.
They need to be able to produce RNA, protein, lipids and have access to the raw materials to generate these. As a host cell already contains all the needed components to achieve this, a virus can simply provide its own instructions, in the form of the viral genome, and let the cell do most of the work. It takes many more cellular proteins to make a virus, than it does viral proteins. A virus only needs to provide instructions for the few components the host cell cannot produce. An example of this would be viruses which have a virion with a lipid membrane, such as influenza. This membrane is usually recycled from host cell membranes. The addition of a couple of viral proteins converts this into the membrane coat of the virion. This use of host components by viruses also makes it clear why it has been so difficult to develop effective antiviral drugs. Much as with cancer treatment, there is very little to distinguish infected cells from normal human cells, which makes coming up with a drug that will only target infected cells extremely challenging. To be effective, you have to target that tiny part of the infected cell that is purely virus, without harming the remainder. So are viruses alive? It’s still not settled, and really depends on what you think a virus is. What does seem clear, however, is that the viruses which infect us can be seen as part human, and we are part virus.
vector produces a vector of the given length and mode. as.vector, a generic, attempts to coerce its argument into a vector of mode mode (the default is to coerce to whichever vector mode is most convenient): if the result is atomic all attributes are removed. is.vector returns TRUE if x is a vector of the specified mode having no attributes other than names; it returns FALSE otherwise.

Usage

vector(mode = "logical", length = 0)
as.vector(x, mode = "any")
is.vector(x, mode = "any")

Arguments

- mode: character string naming an atomic mode or "list" or "expression" or (except for vector) "any".
- length: a non-negative integer specifying the desired length. For a long vector, i.e., length > .Machine$integer.max, it has to be of type "double". Supplying an argument of length other than one is an error.
- x: an R object.

Details

If mode = "any", is.vector may return TRUE for the atomic modes, list and expression. For any mode, it will return FALSE if x has any attributes except names. (This is incompatible with S.) On the other hand, as.vector removes all attributes including names for results of atomic mode (but not those of mode "list" or "expression"). Note that factors are not vectors; as.vector converts a factor to a character vector for mode = "any".

Value

For vector, a vector of the given length and mode. Logical vector elements are initialized to FALSE, numeric vector elements to 0, character vector elements to "", raw vector elements to nul bytes and list/expression elements to NULL.

For as.vector, a vector (atomic or of type list or expression). All attributes are removed from the result if it is of an atomic mode, but not in general for a list result. The default method handles 24 input types and 12 values of type: the details of most coercions are undocumented and subject to change.

For is.vector, is.vector(x, mode = "numeric") can be true for vectors of types "integer" or "double", whereas is.vector(x, mode = "double") can only be true for those of type "double".

Note

as.vector(x) is not necessarily a null operation if is.vector(x) is true: any names will be removed from an atomic vector. Modes such as "pairlist" are accepted but have long been undocumented: they are used to implement as.pairlist, and those functions should preferably be used directly. None of the description here applies to those modes: see the help for the preferred forms.

Writers of methods for as.vector need to take care to follow the conventions of the default method. In particular:
- The return value should be of the appropriate mode. For mode = "any" this means an atomic vector or list.
- Attributes should be treated appropriately: in particular when the result is an atomic vector there should be no attributes, not even names.
- is.vector(as.vector(x, m), m) should be true for any mode m, including the default "any".

References

Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) The New S Language. Wadsworth & Brooks/Cole.

Examples

df <- data.frame(x = 1:3, y = 5:7)
## Not run:
## Error:
# as.vector(data.frame(x = 1:3, y = 5:7), mode = "numeric")
## End(Not run)

x <- c(a = 1, b = 2)
is.vector(x)
as.vector(x)
all.equal(x, as.vector(x)) ## FALSE

###-- All the following are TRUE:
is.list(df)
! is.vector(df)
! is.vector(df, mode = "list")
is.vector(list(), mode = "list")
What is GOCE?

ESA's dart-like Gravity field and Ocean Circulation Explorer (GOCE) Earth Explorer orbits as close to Earth as possible - just 260 km up - to maximise its sensitivity to variations in Earth's gravity field. Launched in 2009, GOCE's state-of-the-art gradiometer is mapping Earth's geoid to an unprecedented level of accuracy, opening a window into Earth's interior structure as well as the currents circulating within the depths of its oceans.

Latest Mission Operations News

ESA has started a process to improve the Earth Observation data distribution services, aiming to facilitate access to data and information for end users, initially by shortening the access path and reviewing authentication and authorisation processes.

09 March 2018: The level 1 data of the redundant GOCE GPS receiver have been made available on the GOCE Virtual Online Archive. The data set contains the RINEX data and the SST nominal L1b products and covers the de-orbiting period.

Latest Mission Results News

29 June 2018: A team of researchers, supported under ESA's Basic Activities, has recently investigated a resourceful new method of monitoring space weather. They used data from the Swarm and GOCE Earth Explorer missions, and from LISA Pathfinder, to investigate whether platform magnetometer data could also be used for space weather diagnostics.

Parts of Earth's crust are rising very slowly owing to post-glacial rebound, but using GPS, researchers have found that West Antarctica is rising faster than almost anywhere else in the world. And ESA's GOCE gravity mission has, in turn, helped them to understand that the mantle below is unusually fluid.

27 July 2015: Data from ESA's GOCE gravity satellite are being used to improve models of Earth's geology, indicating the potential locations of subsurface energy sources.

04 May 2015: Registration is open for a free online course that provides an introduction to monitoring climate change using satellite Earth observation.
Researchers have solved the long-standing conundrum of how the boundary between grains of graphene affects heat conductivity in thin films of the miracle substance - bringing developers a step closer to being able to engineer films at a scale useful for cooling microelectronic devices and hundreds of other nano-tech applications. The study, by researchers at the University of Illinois at Chicago, the University of Massachusetts-Amherst and Boise State University, is published in Nano Letters. Since its discovery, graphene has attracted intense interest for its phenomenal ability to conduct heat and electricity. Virtually every nanotech device could benefit from graphene's extraordinary ability to dissipate heat and optimize electronic function, says Poya Yasaei, UIC graduate student in mechanical and industrial engineering and first author on the paper. In a two-year, multidisciplinary investigation, the researchers developed a technique to measure heat transfer across a single grain boundary - and were surprised to find that it was an order of magnitude - a full 10 times - lower than the theoretically predicted value. They then devised computer models that can explain the surprising observations from the atomic level to the device level. Graphene films for nanotech applications are made up of many tiny graphene crystals, says Amin Salehi-Khojin, UIC assistant professor of mechanical and industrial engineering and principal investigator on the study. Producing films large enough for practical use introduces flaws at the boundaries between the crystals that make up the film. Salehi-Khojin's team developed a finely tuned experimental system that lays down a graphene film onto a silicon nitride membrane only four-millionths of an inch thick and can measure the transfer of heat from one single graphene crystal to another.
The system is sensitive to even the tiniest perturbations, such as a nanometer-scale grain boundary, says co-author Reza Hantehzadeh, a former UIC graduate student now working at Intel. When two crystals are neatly lined up, heat transfer occurs just as predicted by theory. But if the two crystals have mis-aligned edges, the heat transfer is 10 times less. To account for the order-of-magnitude difference, a team led by Fatemeh Khalili-Araghi, UIC assistant professor of physics and co-principal investigator on the paper, devised a computer simulation of heat transfer between grain boundaries at the atomic level. Khalili-Araghi's group found that when the computer "built" grain boundaries with different mismatch angles, the grain boundary was not just a line, it was a region of disordered atoms. The presence of a disordered region significantly affected the heat transfer rate in their computer model and can explain the experimental values. "With larger mismatched angles, this disordered region could be even wider or more disordered," she said. To realistically simulate mismatched grain boundaries and natural heat transfer, it was necessary to model the synthesis of a large area of graphene film, with grains growing and coalescing -- a very complex simulation, Khalili-Araghi said, which required the "enormous computing power" of UIC's High Performance Computing Cluster. "With our simulation we can see exactly what is going on at an atomic level," said co-author Arman Fathizadeh, UIC postdoctoral research associate in physics. "Now we can explain several factors -- the shape and size of the grain boundaries, and the effect of the substrate."
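The order-of-magnitude drop in boundary conductance matters because the boundary sits in series with the grains. The following one-dimensional series-resistance sketch is not the authors' model, and every number in it is an illustrative placeholder rather than a measured value from the study; it simply shows how a poorly conducting interface drags down the effective conductivity of a grain-boundary-grain segment:

```python
# Minimal 1-D series-resistance sketch: two graphene grains joined by a
# boundary with interface conductance G. All numbers are illustrative
# placeholders, not values from the Nano Letters study.
k = 1000.0   # in-plane thermal conductivity of a single grain, W/(m*K)
L = 1e-6     # length of each grain, m
A = 1.0      # unit cross-sectional area, m^2

def effective_conductivity(G):
    """Effective conductivity of a grain|boundary|grain segment in series."""
    r_grains = 2 * (L / (k * A))   # conductive resistance of the two grains
    r_boundary = 1.0 / (G * A)     # interface (Kapitza-like) resistance
    total_length = 2 * L
    return total_length / (A * (r_grains + r_boundary))

# A boundary 10x less conductive than expected lowers the segment's
# effective conductivity well below the single-grain value:
for G in (1e9, 1e8):  # hypothetical "predicted" vs "observed" conductances, W/(m^2*K)
    print(f"G = {G:.0e} -> k_eff = {effective_conductivity(G):.0f} W/(m*K)")
```

Because the resistances add in series, once the boundary term dominates, further improving the grains themselves buys almost nothing - which is why engineering the boundaries is the lever the article emphasizes.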
Berkeley Lab Researchers Win Three Popular Science "Best of What's New" Awards

By Jeffery Kahn, November 12, 1996

BERKELEY, CA -- Researchers at Ernest Orlando Lawrence Berkeley National Laboratory (Berkeley Lab) have been named as the winners of three of Popular Science magazine's "Best of What's New" awards for 1996. Each year, the editors of the magazine review thousands of new products, technology developments, and scientific achievements. Then, they select 100 for distinction as the "Best of What's New." This year, the winners were announced at a November 12 awards event in New York City. They include:

- Berkeley Lab earth scientists George Moridis and Karsten Pruess. They developed a technique for creating an underground barrier that stops the spread of contaminants from hazardous waste sites.
- Berkeley Lab scientist Ashok Gadgil for his development of "UV Waterworks." This inexpensive device uses ultraviolet light to cheaply disinfect water from the viruses and bacteria that, every year, kill millions of people in poor, developing nations.
- Berkeley Lab scientist Mark Modera for a new aerosol-based technology for sealing air leaks in heating, cooling, and ventilation (HVAC) ducts. In typical homes, sealing these leaks can reduce heating and cooling energy costs from 15 to 30 percent.

The editors of Popular Science recognized Moridis and Pruess for their development of a radically new technique for containing underground hazardous waste. To immobilize waste, researchers drill a series of wells outside the perimeter of a contaminated area, and then inject a fluid into the ground. Once in the ground, the fluid gels, forming an impermeable barrier that contains underground waste and prevents its spread. The technique has been field tested but still is in a developmental stage.
Currently, the state of the art of cleaning up sites with contaminated soils is the same as it was 30 years ago -- contractors dig the soil out and truck it to a hazardous waste site. That's because contaminants are very difficult to strip from the soil. Once toxins get into the ground, just a few gallons of hazardous fluids can contaminate huge areas. Unfortunately, the costs and limitations of the soil removal approach have severely handicapped the nation's cleanup efforts. Thousands of contaminated sites have been identified but few have been cleaned up. Over time, water in the ground can cause the contamination to spread. Depending on what is nearby, water supplies, rivers, residential areas, and human health can be further jeopardized by these delays. "Up until now," says Moridis, "the country has been fighting a losing battle. We believe the new approach we are developing is a vitally needed supplement to today's standard cleanup method." Gadgil's UV Waterworks, which was also a recipient of a Discover Magazine Award for Technology Innovation this year, has the potential to save millions of lives. In developing nations, safe, home-delivered tap water is rare. Consequently, each year, waterborne diseases such as cholera, typhoid, and dysentery that are transmitted mainly through the drinking of unsanitary water, kill an estimated four million children under the age of five and make adults sick enough to lose billions of hours of work productivity and income. The two most common methods of disinfecting water in developing nations -- chlorination and boiling -- both have drawbacks and limitations. Chlorine disinfection requires a continual supply of chlorine bleach and trained personnel to make sure chlorine is added to water supplies at effective levels. Boiling is usually done over wood stoves in unvented rooms which poses health risks of its own and contributes to air pollution and deforestation. 
Gadgil, who is from India and has had several cousins die from these diseases, worked after-hours creating a purification system that uses an off-the-shelf ultraviolet light to kill bacterial and viral contaminants. Running on a car battery if necessary, one unit can provide water for a village of 1,000 people. Each unit should cost between $250 and $600. "What we've done," said Gadgil, "is build a device that makes water purification so inexpensive that it's almost impossible not to use it." Modera was honored by Popular Science for his development of an elegant solution to the ubiquitous problem of how to seal leaky heating, cooling, and ventilation ducts. According to a 1991 study, sealing these leaks could save some one quadrillion BTU's per year in this country. That amounts to an annual energy savings of approximately $7 billion. Currently, contractors use a variety of techniques to seal leaks in existing HVAC systems. Though sealing leaks can save a lot of energy and money, relatively little of this work is done because of the difficulty involved in locating the leaks. Also, in many instances, it is almost impossible to get to the leaky ductwork. Modera's process resolves these obstacles. First, all grilles are temporarily sealed. Then, aerosolized adhesive particles are blown into the duct system and flow to the leakage sites, eventually sealing them. Compared to conventional duct-sealing methods, aerosol-based sealing plugs more of the leaks, is less time consuming and costly to homeowners, and provides better working conditions for the contractor. The research to develop aerosol sealing was funded by the California Institute for Energy Efficiency, the U.S. Environmental Protection Agency, the Electric Power Research Institute, and the U.S. Department of Energy. Berkeley Lab conducts unclassified scientific research for the U.S. Department of Energy. It is located in Berkeley, California and is managed by the University of California. 
Popular Science features the winners of its "Best of What's New" awards on its website at http://www.popsci.com
Floyd, Peter A (1986): Geochemistry and provenance of basaltic clasts within volcaniclastic debris flows at DSDP Site 89-585. PANGAEA, https://doi.org/10.1594/PANGAEA.793294, Supplement to: Floyd, PA (1986): Geochemistry and provenance of basaltic clasts within volcaniclastic debris flows, East Mariana Basin, Deep Sea Drilling Project Site 585. In: Moberly, R; Schlanger, SO; et al. (eds.), Initial Reports of the Deep Sea Drilling Project, Washington (U.S. Govt. Printing Office), 89, 449-458, https://doi.org/10.2973/dsdp.proc.89.115.1986 Pebble-sized basaltic and glassy clasts were extracted from seamount-derived volcaniclastic debris flows and analyzed for various trace elements, including the rare earths, to determine their genetic relationships and provenance. All the clasts were originally derived from relatively shallow submarine lava flows prior to sedimentary reworking, and have undergone minor low-grade alteration. They are classified into three petrographic groups (A, B, and C) characterized by different phenocryst assemblages and variable abundances and ratios of incompatible elements. Group A (clast from Hole 585) is a hyaloclastite fragment which is olivine-normative and distinct from the other clasts, with incompatible-element ratios characteristic of transitional or alkali basalts. Groups B and C (clasts from Hole 585A) are quartz-normative, variably plagioclase-clinopyroxene-olivine phyric tholeiites, all with essentially similar ratios of highly incompatible elements and patterns of enrichment in light rare earth elements (chondrite-normalized). Variation within Groups B and C was governed by low-pressure fractionation of the observed phenocryst phases, whereas the most primitive compositions of each group may be related by variable partial melting of a common source.
The clasts have intraplate chemical characteristics, although relative to oceanic hot-spot-related volcanics (e.g., Hawaiian tholeiites) they are marginally depleted in most incompatible elements. The source region was enriched in all incompatible elements, compared with a depleted mid-ocean-ridge basalt source. Latitude: 13.483300 * Longitude: 156.815200 Date/Time Start: 1982-10-18T00:00:00 * Date/Time End: 1982-10-18T00:00:00
Microorganisms such as bacteria and single celled algae in rivers and streams decompose organic matter as it flows downstream. They convert the carbon it contains into carbon dioxide, which is then released to the atmosphere. Recent estimates by Battin's team and others conclude there is a net flux, or outgassing, of carbon dioxide from the world's rivers and streams to the atmosphere of at least two-thirds to three-quarters of a gigatonne (Gt) of carbon per year. This flux has not been taken into account in the models of the global carbon cycle used to predict climate change. "Surface water drainage networks perfuse and integrate the landscape, across the whole planet," says Battin, "but they are missing from all global carbon cycling, even from the IPCC (Intergovernmental Panel on Climate Change) reports. Rivers are just considered as inert pipelines, receiving organic carbon from Earth and transporting it to the ocean." This thinking, according to Battin, has changed radically in last few years. He argues that the latest estimates of how much carbon is transferred to the atmosphere from rivers and streams are very conservative. "The actual outgassing of carbon dioxide is probably closer to 2 Gt of carbon per year," says Battin. "Our surface area estimates only consider larger streams and rivers, because it is very hard to estimate accurately the surface area of small streams. So small streams are excluded, although in terms of microbial activity, they are the most reactive in the network." Two gigatonnes of carbon per year is close to half the estimated net primary production of the world's vegetation each year. Realising that this quantity of carbon may be delivered straight back to the atmosphere, rather than being taken to the ocean where some of it is removed by marine organisms and ends up in sediment, could have profound consequences for our understanding of the system. 
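The size of the revision Battin argues for can be quantified directly from the figures quoted above (simple arithmetic on the article's numbers, not additional data):

```python
# Earlier consensus estimate of river/stream CO2 outgassing, in Gt C per year
# ("at least two-thirds to three-quarters of a gigatonne").
low, high = 2/3, 3/4

# Battin's revised estimate ("probably closer to 2 Gt of carbon per year").
revised = 2.0

print(f"Revised estimate is {revised/high:.1f}-{revised/low:.1f}x the earlier range")
```

In other words, the revision would roughly triple a flux that current global carbon-cycle models omit entirely.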
In a disturbing development, Battin's team has recently found that engineered nanoparticles can significantly compromise the freshwater microbes involved in carbon cycling. "This finding is a real challenge to science," says Battin. "Engineered nanoparticles such as titanium dioxide are expected to increase in the environment, but it remains completely unknown how they might affect the functioning of ecosystems."
Cortical spreading depression is a slowly propagating wave of near-complete depolarization of brain cells followed by temporary suppression of neuronal activity. Accumulating evidence indicates that cortical spreading depression underlies the migraine aura and that similar waves promote tissue damage in stroke, trauma, and hemorrhage. Cortical spreading depression is characterized by neuronal swelling, profound elevation of extracellular potassium and glutamate, multiphasic blood flow changes, and drop in tissue oxygen tension. The slow speed of the cortical spreading depression wave implies that it is mediated by diffusion of a chemical substance, yet the identity of this substance and the pathway it follows are unknown. Intercellular spread between gap junction-coupled neurons or glial cells and interstitial diffusion of K(+) or glutamate have been proposed. Here we use extracellular direct current potential recordings, K(+)-sensitive microelectrodes, and 2-photon imaging with ultrasensitive Ca(2+) and glutamate fluorescent probes to elucidate the spatiotemporal dynamics of ionic shifts associated with the propagation of cortical spreading depression in the visual cortex of adult living mice. Our data argue against intercellular spread of Ca(2+) carrying the cortical spreading depression wavefront and are in favor of interstitial K(+) diffusion, rather than glutamate diffusion, as the leading event in cortical spreading depression.
Clever bees can identify different flowers by patterns of scent Bumblebees can tell flowers apart by patterns of scent, according to new research involving Queen Mary University of London and led by the University of Bristol. 13 June 2018 A captive bumblebee walks across the surface of an artificial flower, working out the pattern of scent that has been made by placing peppermint oil in some of the holes. Credit: Dave Lawson Flowers have lots of different patterns on their surfaces that help to guide bees and other pollinators towards the flower’s nectar, speeding up pollination. These patterns include visual signals like lines pointing to the centre of the flower or colour differences. Flowers are also known to have different patterns of scent across their surface, and so a visiting bee might find that the centre of the flower smells differently to the edge of the petals. This new research, published in the journal Proceedings of the Royal Society B, shows that bumblebees can tell flowers apart by how scent is arranged on their surface. Patterns of scent Professor Lars Chittka, from Queen Mary’s School of Biological and Chemical Sciences, said: “We already knew that bees were clever, but we were really surprised by the fact that bees could learn invisible patterns on flowers – patterns that were just made of scent. “The scent glands on our flowers were either arranged in a circle or a cross, and bees had to figure out these patterns by using their feelers. But the most exciting finding was that, if these patterns are suddenly made visible by the experimenter, bees can instantly recognise the image that formerly was just an ephemeral pattern of volatiles in the air.” During the research, Professor Chittka advised on experimental controls as well as methods of data evaluation and interpretation. 
Lead author Dr Dave Lawson, from the University of Bristol’s School of Biological Sciences, said: “If you look at a flower with a microscope, you can often see that the cells that produce the flower’s scent are arranged in patterns. “By creating artificial flowers that have identical scents arranged in different patterns, we are able to show that this patterning might be a signal to a bee. For a flower, it’s not just smelling nice that’s important, but also where you put the scent in the first place.” The study also shows that once bees had learnt how a pattern of scent was arranged on a flower, they then preferred to visit unscented flowers that had a similar arrangement of visual spots on their surface. Dr Lawson added: “This is the equivalent of a human putting their hand in a bag to feel the shape of a novel object which they can’t see, and then picking out a picture of that object. Being able to mentally switch between different senses is something we take for granted, but it’s exciting that a small animal like a bee is also able to do something this abstract.” Senior author, Dr Sean Rands, also from Bristol, said: “Flowers often advertise to their pollinators in lots of different ways at once, using a mixture of colour, shape, texture, and enticing smells. “If bees can learn patterns using one sense, smell, and then transfer this to a different sense, vision, it makes sense that flowers advertise in lots of ways at the same time, as learning one signal will mean that the bee is primed to respond positively to different signals that they have never encountered. “Advertising agencies would be very excited if the same thing happened in humans.” Around 75 per cent of all food grown globally relies on flowers being pollinated by animals such as bees. The study explores the ways in which plants communicate with their pollinators, using different innovative techniques to explore how bees perceive the flowers that they visit. 
- Research paper: 'Bumblebees distinguish floral scent patterns, and can transfer these to corresponding visual patterns'. David A. Lawson, Lars Chittka, Heather M. Whitney and Sean A. Rands (2018). Proceedings of the Royal Society B.
Their findings show for the first time that the chemical and meteorological boundaries between the two air masses are not necessarily the same. Observations of the novel boundary will provide important clues to help scientists model the movement of pollutants in the atmosphere more accurately, and to assess the impact of pollution on climate, the researchers say. A scientific paper about the chemical equator is to be published in the Journal of Geophysical Research - Atmospheres, a publication of the American Geophysical Union (AGU). Scientists had previously thought that a meteorological feature -- the Intertropical Convergence Zone (ITCZ) -- formed the boundary between the polluted air of the Northern Hemisphere and the clearer air of the Southern Hemisphere. The ITCZ is a cloudy region circling the globe where the trade winds from each hemisphere meet. It is characterized by rapid vertical uplift and heavy rainfall, and acts as a meteorological barrier to pollutant transport between the hemispheres. But, in cloudless Western Pacific skies well to the north of the ITCZ, Jacqueline Hamilton of the University of York and her colleagues found evidence for an atmospheric chemical equator around 50 kilometers (31 miles) wide. Across that newfound borderline, air quality differed dramatically. For instance, carbon monoxide, a tracer of combustion, increased from 40 parts per billion to the south, to 160 parts per billion in the north. The difference in pollutant levels was increased by extensive forest fires to the north of the boundary and very clean air south of the chemical equator being pulled north from the Southern Indian Ocean by a land-based cyclone in northern Australia. The scientists discovered evidence of the chemical equator using sensors on a specially equipped airplane during a series of flights north of Darwin, a city on the northern coast of Australia. At the time, the ITCZ was situated well to the south over central Australia.
Researchers from the universities of York, Manchester, Cambridge, Leicester, and Leeds -- all in the United Kingdom -- collaborated in the study. "The shallow waters of the Western Pacific, known as the Tropical Warm Pool, have some of the highest sea surface temperatures in the world, which result in the region's weather being dominated by storm systems," says Hamilton, lead author of the scientific paper. "The position of the chemical equator was to the south of this stormy region."

"This means that these powerful storms may act as pumps, lifting highly polluted air from the surface to high in the atmosphere where pollutants will remain longer and may have a global influence," Hamilton notes. "To improve global simulations of pollutant transport, it is vital to know when the chemical and meteorological boundaries are in different locations."

This research was funded by the United Kingdom's Natural Environment Research Council (NERC) as part of the ACTIVE project (Aerosol and Chemical Transport in Tropical Convection). Other partners include the Australian Bureau of Meteorology and Flinders University in Adelaide, Australia. Flights were carried out onboard the NERC Airborne Research and Survey Facility Dornier 228 aircraft.

Co-authors: Grant Allen, Geraint Vaughan, Keith N. Bower, Michael J. Flynn, and Jonathan Crosier: School of Earth, Atmospheric and Environmental Science, University of Manchester, Manchester, United Kingdom; Glenn D. Carver and Neil R. P. Harris: Chemistry Department, University of Cambridge, Cambridge, United Kingdom; Robert J. Parker and John J. Remedios: Earth Observation Science, Space Research Centre, Department of Physics & Astronomy, University of Leicester, Leicester, United Kingdom; Nigel A. D. Richards: Institute for Atmospheric Science, School of Earth and Environment, University of Leeds, Woodhouse Lane, Leeds, United Kingdom.

Peter Weiss | American Geophysical Union
Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction. A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in the synthesis of new chemical... Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy. "Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy.... Ultra-short, high-intensity X-ray flashes open the door to the foundations of chemical reactions. Free-electron lasers generate these kinds of pulses, but there is a catch: the pulses vary in duration and energy. An international research team has now presented a solution: Using a ring of 16 detectors and a circularly polarized laser beam, they can determine both factors with attosecond accuracy. Free-electron lasers (FELs) generate extremely short and intense X-ray flashes. Researchers can use these flashes to resolve structures with diameters on the... 13.07.2018 | Event News 12.07.2018 | Event News 03.07.2018 | Event News 16.07.2018 | Physics and Astronomy 16.07.2018 | Transportation and Logistics 16.07.2018 | Agricultural and Forestry Science
A View from Emerging Technology from the arXiv

Carbon Jelly: C60's Latest Trick

Carbon soccer balls can form into gels all by themselves, say chemists, overturning the long-held belief that gels must consist of at least two chemical components. Gels are something of a puzzle for chemists. These jelly-like materials are not quite solid and yet not liquid either. Gels live in a kind of chemical twilight zone where they share many properties of both phases of matter. So confusing is this that chemists find it hard even to define what it is to be a gel, or what properties its components must have. One thing they agree on, however, is that gels consist of at least two components: a liquid component and a solid component that forms into a loose network which binds the substance together. This is how the jelly-like properties arise. Now even that piece of common lore might have to change. Today, Patrick Royall at the University of Bristol in the UK and Stephen Williams at the Australian National University say that C60, the soccer ball form of carbon, can form into a gel all by itself. So how come? For some time, chemists have known that C60 forms several different phases of matter. It can be a solid crystal, for example. But it is also known to form into clusters of a wide range of sizes. And it can form a liquid over a limited range of temperatures (although whether this liquid is stable or not, nobody is quite sure). The question that interests chemists is whether a liquid-like state can exist at the same time as the clusters, which could then bind together, forming the characteristic network structure that would hold the jelly-like substance together. Royall and Williams answer this question by creating a computer model of this substance and then seeing whether it is stable. And their conclusion is that it can. "We have presented numerical evidence that C60, under the right conditions, can form a gel," they say. Such a substance would be a bizarre chemical curiosity.
It means that in addition to forming diamond, graphite, graphene and an infinite number of carbon chickenwire structures such as tubes and footballs, carbon can also be a jelly. But there’s more work ahead. Knowing that a substance can be stable is obviously useful but that doesn’t mean that it’s possible to make it. Royall and Williams say it should exist over the time scales that they can simulate–up to 100 nanoseconds. But these kinds of simulations are notoriously difficult to fine tune. It’s possible that C60 might prefer to crystallise. Of course, there’s only one way to find out. And now there’s likely to be no shortage of volunteers willing to try. Ref: arxiv.org/abs/1102.2959: C60: the first one-component gel?
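The defining feature of a gel in the article is a particle network that spans the system. As a toy illustration of that spanning-network idea (this is not the method of Royall and Williams, who ran molecular simulations of a C60 model; the geometry and bond criterion below are invented for illustration), one can check whether bonded particles form a cluster that crosses the box using a union-find pass:

```python
# Sketch: decide whether particles form a system-spanning ("gel-like") network.
# Union-find over bonded pairs; a cluster touching both the left and right
# edges of the box percolates. Illustrative only.

def find(parent, i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]  # path halving
        i = parent[i]
    return i

def percolates(positions, box, bond_length):
    """positions: list of (x, y) points; box: side length of the square box."""
    n = len(positions)
    parent = list(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            dx = positions[i][0] - positions[j][0]
            dy = positions[i][1] - positions[j][1]
            if dx * dx + dy * dy <= bond_length ** 2:
                parent[find(parent, i)] = find(parent, j)  # union bonded pair
    left = {find(parent, i) for i, p in enumerate(positions) if p[0] < bond_length}
    right = {find(parent, i) for i, p in enumerate(positions) if p[0] > box - bond_length}
    return bool(left & right)

# A chain of particles spanning the box percolates; a lone pair does not.
chain = [(x * 1.0, 5.0) for x in range(11)]  # x = 0..10 in a box of side 10
print(percolates(chain, 10.0, 1.5))                      # True
print(percolates([(0.0, 0.0), (0.5, 0.0)], 10.0, 1.5))   # False
```

The O(n²) neighbor search is fine for a sketch; a real simulation analysis would use cell lists and periodic boundaries.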
The C + N2O → CN(A,X) + NO Reaction: a Possible Candidate for a Near Infrared Electronic Transition Chemical Laser? Preliminary experiments with a fast-flow low-pressure apparatus had shown that a total population inversion existed between the CN(A,v′=0) and (X,v″=1) levels. However, the CN density was too low for lasing. In order to increase the atomic carbon production, a hollow cathode was built to form carbon atoms by electrical dissociation of CO highly diluted in He. In fact, a 20-fold increase in the CN density was observed, corresponding to a 3% dissociation degree. But the total population inversion previously observed no longer existed. This was ascribed to secondary reactions which modify the vibrational distribution, and also to the influence of high-lying electronic states of atomic carbon.

Keywords: Vibrational Level; Hollow Cathode; Vibrational Temperature; Negative Glow; Vibrational Distribution
7,600 scientists from 36 nations have gained insights into the polar oceans aboard Polarstern, facilitating our current understanding of the earth as a system. Polarstern provides ideal working conditions for international and interdisciplinary research teams and offers safe transport in polar seas. Currently, Polarstern is on her way to the Antarctic as part of the International Polar Year 2007/08. The birthday celebration for Polarstern will take place on November 28 at the Museum of Natural History in Berlin, and will include a special address by Chancellor Dr Angela Merkel. The international research community owes a vast amount of knowledge to the operation of Polarstern, e.g. concerning past climate and the largely unexplored deep sea. The largest German research vessel was funded by the then Federal Ministry of Education and Research, and is operated by the Alfred Wegener Institute for Polar and Marine Research. "For 25 years, expeditions aboard Polarstern have been producing scientific results which have significantly advanced our understanding of important parts of the earth as a system", says Prof Dr Karin Lochte, Director of the Alfred Wegener Institute. Polarstern expeditions are designed as international and interdisciplinary ventures in order to enhance insights into the polar regions through optimal scientific exchange and data gain. Polarstern can accommodate up to 55 scientists, who, aside from being provided with a bunk, have access to modern laboratories, aquaria and measuring equipment, but are also able to bring their own instruments or work around the clock. During extended research voyages to the polar regions, Polarstern must be entirely self-sufficient, with the crew being able to carry out even complicated repairs independently.
Polarstern not only represents a floating laboratory, it also supplies Neumayer Station in the Antarctic, which is operated year round, with food, materials and fuel.

Polar research is climate research

In order to facilitate participation by the general public and future young scientists in polar research, individual Polarstern voyages are occasionally accompanied by media representatives, teachers and artists. University students and PhD candidates regularly join Polarstern in order to get to know the practical aspects of polar research and to collect important data for diploma and doctoral research. Both anchored and drifting recording platforms, deployed during expeditions in the Arctic and Southern Oceans, have been providing multi-year data records of salinity, temperature and currents, including during winter and from below the sea ice. Oceanographers at the Alfred Wegener Institute have been using these data to examine the effects of current climate change on polar oceans and broader global climate developments. The operation of such observation systems requires continual maintenance and a reliable polar research programme. The winter expeditions attracted particular attention. Usually, Polarstern travels to the Arctic during the European (i.e. Arctic) summer and spends her winters (austral summers) in the Antarctic. However, several research expeditions were also carried out during the polar winter seasons. These investigations pose very high technical demands, but are essential for observations of full annual cycles in polar environments. 2005/2006 was the last time Polarstern spent a full year in the Antarctic to explore the development of sea ice and associated species communities. One important component of polar animal life is the Antarctic krill. These crustaceans not only represent the most significant food source for many marine mammals, but also have much commercial potential.
Geoscientists owe their current knowledge about the origin of the polar oceans to seismic investigations aboard Polarstern. The opening and closing of ocean basins severely influenced the development of life, as well as climate, during the various geological periods on Earth. Sediment cores obtained from aboard Polarstern provide insights into the climate history of the planet.

Polarstern during the International Polar Year 2007/08

On October 26, 2007, Polarstern left Bremerhaven for Cape Town. During this voyage, marine biologists have been studying the species composition and distribution of small animals drifting in the water, the so-called zooplankton. The microscopic organisms form the dietary basis of many fishes, thus representing an essential component of marine food webs in the ocean. Within the framework of the project 'Census of Marine Zooplankton', the marine biologists are expecting to discover numerous new species of plankton. After a brief stop in Cape Town, Polarstern will start heading towards Antarctica on November 28, 2007.

Polarstern - 25 years of research in Arctic and Antarctic

Review copies of the book "Polarstern - 25 Jahre Forschung in Arktis und Antarktis" are available from the publisher, Verlag Delius Klasing, attention Christian Ludewig (Tel: ++49-521-559902).

Technical Specifications for Polarstern

Margarete Pauls | idw
Authors marked in boldface are EvolEcol members. Fülöp A, Daróczi SJ, Dehelean AS, Dehelean LA, Domahidi Z, Dósa A, Gyékény G, Hegyeli Z, Kis RB, Komáromi IS, Kovács I, Miholcsa T, Nagy AA, Nagy A, Ölvedi SZ, Papp T, Pârâu LG, Sándor AK, Sos T, Zeitz R 2018. Autumn passage of soaring birds over Dobrogea (Romania): a migration corridor in Southeast Europe. Ardea 106: 61-77. The Dobrogea region in southeastern Romania, which is part of the Eurasian-East African Flyway, is listed as one of the important migration corridors for soaring birds on the western coast of the Black Sea. However, our knowledge regarding migration intensity, phenology and geographical patterns of soaring birds over the area is poor. To determine the migration intensity and phenology of soaring birds, we recorded the autumn migration in the Măcin Mountains (northern Dobrogea) from mid-August to the end of October between 2002–2007. To describe the geographical patterns of migration at a regional scale, we recorded migration intensity in the second half of September in 2010 and 2011, simultaneously from 15 and 13 counting points, respectively, covering the entire region of Dobrogea. In the Măcin Mountains we recorded a mean number (±SD) of 11,297 ± 2333.5 (CV = 20.7%) migrating raptors per year, and of 21,367 ± 10,949.3 (51.2%) and 455.6 ± 43.6 (9.6%) migrating White Storks Ciconia ciconia and Black Storks Ciconia nigra, respectively. Migration phenology parameters varied across raptor and non-raptor species. Migration occurred over a broad front, covering all of Dobrogea. However, migration intensity was more pronounced in the western, central and eastern parts of the region, and was less intensive in the northern central areas. Overall, we recorded 30 migrating raptor species and three non-raptor species.
The most abundant raptors were Common Buzzard Buteo buteo, European Honey Buzzard Pernis apivorus, Lesser Spotted Eagle Clanga pomarina, Eurasian Sparrowhawk Accipiter nisus and Western Marsh Harrier Circus aeruginosus. The three non-raptor species were White Stork, Black Stork and Great White Pelican Pelecanus onocrotalus. Our study provides the first general overview of the autumn passage of soaring birds over Dobrogea, highlighting the importance of this area as part of a global migration network.
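The coefficients of variation quoted in the abstract are just the standard deviation divided by the mean; a quick check reproduces the reported percentages from the stated means and SDs:

```python
# Verify the CV values in the abstract: CV = 100 * SD / mean.
counts = {
    "raptors":      (11297.0,  2333.5, 20.7),
    "White Storks": (21367.0, 10949.3, 51.2),
    "Black Storks": (  455.6,    43.6,  9.6),
}
for species, (mean, sd, quoted) in counts.items():
    cv = 100.0 * sd / mean
    print(f"{species}: computed CV = {cv:.1f}%, quoted = {quoted}%")
```

All three computed values round to the quoted figures.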
Your questions are welcome! Don't see your question about whole brain emulation, mind uploading, neural prosthesis, neural interfaces, etc., answered in the FAQ below? Please send us your question at email@example.com.

WHAT IS WHOLE BRAIN EMULATION? (WBE)

Whole brain emulation (WBE) is the principal proposed method by which to accomplish SIM and many other purposes (e.g. general brain research, research into clinical neural prostheses, research into artificial intelligence, etc.). In academic research, the shorter term "brain emulation" is sometimes used, and terms such as "whole-brain activity mapping" are used to describe data acquisition tools developed and used in closely related fields of neuroscience. Whole brain emulation uses high resolution data about specific brain structure (e.g. the connectome) and specific brain activity (e.g. electrophysiology).

HOW CAN WHOLE BRAIN EMULATION BE ACCOMPLISHED?

There are presently two proposed approaches:
- Gradually replace each small piece of a brain with a neuroprosthetic device that does the same thing until all parts of that brain have been replaced and the entire brain residing inside the skull is artificial.
- Preserve a biological brain (it is dead at that point), cut it into extremely tiny slivers and image each one of those, then reconstruct a working artificial brain from the information in those slices (plus some additional information previously recorded).
The same two approaches are presented at a high level in step-wise fashion in the following diagram panels:

WHAT ARE SUBSTRATE-INDEPENDENT MINDS (SIM)?

Neuroscience research has demonstrated that a mind's functions are implemented through neurobiological mechanisms of the nervous system. If the same functions are recreated in a different operating substrate (e.g. a special "neuromorphic device", software executed in a digital computer, etc.) such that they produce the same results as the original mind, then it is a substrate-independent mind.
The possibility for execution of this process is supported by the strong neuroscientific consensus that behavior and experience, phenomena correlated with what we consider mental processes of the mind, emerge from biophysical functions that are adequately described in terms of classical physics. These information processing functions of mind are computable, so it follows that the mind is computable.

WHAT SIM TECHNOLOGY & RESEARCH IS COMPLETED?

Neuroinformatics investigations that seek to map the detailed "connectome" of the human brain are essential for the whole brain emulation (WBE) approach to SIM. Well-known examples of research in this field are the Blue Brain Project, large-scale brain models created by Eugene Izhikevich and the DARPA SyNAPSE Project. Another important component is scanning technology capable of rapid brain tissue preparation and in-vivo recording, and increased spatial and temporal resolution. Examples are the Automatic Tape-Collecting Lathe Ultramicrotome (ATLUM) developed by Kenneth Hayworth and the use of optogenetics and nanotechnology. See the articles Fundamentals of Whole Brain Emulation: State, Transition and Update Representations and Whole Brain Emulation: A Roadmap for an introduction and an overview of projects.

IS SIM RELATED TO IDEAS ABOUT "MIND UPLOADING"?

The popular term "mind uploading" (sometimes called "mind copying" or "mind transfer") is the hypothetical process of copying mental content (including long-term memory and "self") from a particular brain substrate to a computational device, such as a digital, analog, quantum-based or artificial neural network. However, "uploading" has very different meanings in the conversations of different communities. In some cases the target medium referred to is intended merely to store the data, while in others it is also intended to carry out the functions, i.e. to emulate the "uploaded mind".
In a survey of NASA's Hubble Space Telescope images of 2,753 young, blue star clusters in the neighboring Andromeda galaxy (M31), astronomers have found that M31 and our own galaxy have a similar percentage of newborn stars based on mass. By nailing down what percentage of stars have a particular mass within a cluster, or the Initial Mass Function (IMF), scientists can better interpret the light from distant galaxies and understand the formation history of stars in our universe.

This is a Hubble mosaic of 414 photographs of M31, the Andromeda galaxy. On the bottom left, an enlargement of the boxed field (top) reveals myriad stars and numerous open star clusters as bright blue knots, spanning 4,400 light-years. On the bottom right are six bright blue clusters extracted from the field. Each cluster square is 150 light-years across. Credits: NASA/ESA, J. Dalcanton, B.F. Williams, L.C. Johnson (Univ. of Washington), PHAT team, and R. Gendler

The intensive survey, assembled from 414 Hubble mosaic photographs of M31, was a unique collaboration between astronomers and "citizen scientists," volunteers who provided invaluable help in analyzing the mountain of data from Hubble. "Given the sheer volume of Hubble images, our study of the IMF would not have been possible without the help of citizen scientists," said Daniel Weisz of the University of Washington in Seattle. Weisz is lead author on a paper that appeared in the June 20 issue of the Astrophysical Journal. Measuring the IMF was the primary driver behind Hubble's ambitious panoramic survey of our neighboring galaxy, called the Panchromatic Hubble Andromeda Treasury (PHAT) program. Nearly 8,000 images of 117 million stars in the galaxy's disk were obtained from viewing Andromeda in near-ultraviolet, visible, and near-infrared wavelengths. Stars are born when a giant cloud of molecular hydrogen, dust and trace elements collapses.
The cloud fragments into small knots of material that each precipitate hundreds of stars. The stars are not all created equally: their masses can range from 1/12th to a couple hundred times the mass of our sun. Prior to Hubble's landmark survey of the star-filled disk of M31, astronomers only had IMF measurements made in the local stellar neighborhood within our own galaxy. But Hubble's bird's-eye view of M31 allowed astronomers to compare the IMF among a larger-than-ever sampling of star clusters that are all at approximately the same distance from Earth, 2.5 million light-years. The survey is diverse because the clusters are scattered across the galaxy; they vary in mass by factors of 10, and they range in age from 4 to 24 million years old. To the researchers' surprise, the IMF was very similar among all the clusters surveyed. Nature apparently cooks up stars like batches of cookies, with a consistent distribution from massive blue supergiant stars to small red dwarf stars. "It's hard to imagine that the IMF is so uniform across our neighboring galaxy given the complex physics of star formation," Weisz said. Curiously, the brightest and most massive stars in these clusters are 25 percent less abundant than predicted by previous research. Astronomers use the light from these brightest stars to weigh distant star clusters and galaxies and to measure how rapidly the clusters are forming stars. This result suggests that mass estimates using previous work were too low because they assumed that there were too few faint low-mass stars forming along with the bright massive stars. This evidence also implies that the early universe did not have as many heavy elements for making planets, because there would be fewer supernovae from massive stars to manufacture heavy elements for planet building. It is critical to know the star-formation rate in the early universe--about 10 billion years ago--because that was the time when most of the universe's stars formed. 
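To make the notion of an IMF concrete: it is often modeled as a single power law, dN/dm ∝ m^(-α), where Salpeter's classic slope is α ≈ 2.35. The article does not give the slope or mass limits fitted by the PHAT study, so the numbers below are generic illustrations of how the fraction of massive stars falls as the slope steepens, not the study's values:

```python
# Fraction of stars above a mass cut (in solar masses) for a pure power-law
# IMF dN/dm ~ m**(-alpha) between m_lo and m_hi. Slopes and limits here are
# illustrative; 2.35 is Salpeter's historical value.

def fraction_above(m_cut, alpha, m_lo=0.1, m_hi=100.0):
    # Closed-form integral of m^-alpha from a to b (valid for alpha != 1).
    integral = lambda a, b: (b ** (1 - alpha) - a ** (1 - alpha)) / (1 - alpha)
    return integral(m_cut, m_hi) / integral(m_lo, m_hi)

# A steeper (larger alpha) IMF means proportionally fewer massive stars.
print(f"alpha = 2.35: {fraction_above(8.0, 2.35):.5f}")
print(f"alpha = 2.50: {fraction_above(8.0, 2.50):.5f}")
```

Because the high-mass tail carries so little of the number count, small changes in the counted bright stars (like the 25 percent deficit reported here) translate into revised mass and star-formation-rate estimates.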
The PHAT star cluster catalog, which forms the foundation of this study, was assembled with the help of 30,000 volunteers who sifted through the thousands of images taken by Hubble to search for star clusters. The Andromeda Project is one of the many citizen science efforts hosted by the Zooniverse organization. Over the course of 25 days, the citizen scientist volunteers submitted 1.82 million individual image classifications based on how concentrated the stars were, their shapes, and how well the stars stood out from the background, which roughly represents 24 months of constant human attention. Scientists used these classifications to identify a sample of 2,753 star clusters, increasing the number of known clusters by a factor of six in the PHAT survey region. "The efforts of these citizen scientists opens the door to a variety of new and interesting scientific investigations, including this new measurement of the IMF," Weisz said. The Hubble Space Telescope is a project of international cooperation between NASA and the European Space Agency. NASA's Goddard Space Flight Center in Greenbelt, Maryland, manages the telescope. The Space Telescope Science Institute (STScI) in Baltimore, Maryland, conducts Hubble science operations. STScI is operated for NASA by the Association of Universities for Research in Astronomy, in Washington. For images and more information about PHAT and the Hubble Space Telescope, visit:

Ray Villard | EurekAlert!
What is BL? BL is an abbreviation for Boundary Layer: a layer of air adjacent to a bounding surface. Specifically, the term most often refers to the planetary boundary layer, which is the layer within which the effects of friction are significant. For the earth, this layer is considered to be roughly the lowest one or two kilometers of the atmosphere. It is within this layer that temperatures are most strongly affected by daytime insolation and nighttime radiational cooling, and winds are affected by friction with the earth’s surface. The effects of friction die out gradually with height, so the “top” of this layer cannot be defined exactly. Reference: National Weather Service Glossary
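One standard way to quantify how friction's influence weakens with height near the surface (not part of the glossary entry itself) is the logarithmic wind profile, u(z) = (u*/κ) ln(z/z0). The friction velocity and roughness length below are made-up example values:

```python
import math

# Logarithmic wind profile for the surface layer:
#   u(z) = (u_star / kappa) * ln(z / z0)
# u_star (friction velocity) and z0 (roughness length) are illustrative values.
KAPPA = 0.4  # von Karman constant

def log_wind_profile(z, u_star=0.3, z0=0.1):
    """Mean wind speed (m/s) at height z (m) above roughness length z0 (m)."""
    return (u_star / KAPPA) * math.log(z / z0)

# Wind speed grows with height, but ever more slowly -- friction's grip fades.
for z in (2, 10, 100, 1000):
    print(f"z = {z:5d} m: u = {log_wind_profile(z):.2f} m/s")
```

The logarithmic form only applies in the lowest part of the boundary layer; above it, the profile blends into the free atmosphere, which is why the layer's "top" is fuzzy.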
Presentation on theme: "Ice & Glaciers By: Mario Solórzano, Arnold Inga, Juan Arresis, Carder Brown."— Presentation transcript:

Ice & Glaciers By: Mario Solórzano, Arnold Inga, Juan Arresis, Carder Brown

Geographic Location: Approximate worldwide area covered by glaciers (square kilometers):
Antarctica: 11,965,000 (without ice shelves and ice rises)
Greenland: 1,784,000
Canada: 200,000
Central Asia: 109,000
Russia: 82,000
United States: 75,000 (including Alaska)
China and Tibet: 33,000
South America: 25,000
Iceland: 11,260
Scandinavia: 2,909
Alps: 2,900
New Zealand: 1,159
Mexico: 11
Indonesia: 7.5
Africa: 10
Total glacier coverage is nearly 15,000,000 square kilometers, or a little less than the total area of the South American continent. (The numbers listed do not include smaller glaciated polar islands or other small glaciated areas, which is why they do not add up to 15,000,000.)

Zones: The accumulation zone is the area above the firn line, where snowfall accumulates and exceeds the losses from ablation (melting, evaporation, and sublimation).

Light: As well as warmer air directly melting the surface of the ice sheet, glaciers are an important part of the picture. Glaciers move ice from the ice sheet to the sea, and react quickly to changes in atmospheric conditions.

Temperatures: Figure 9.40, Climatogram for McMurdo, Antarctica:
Latitude/Longitude = 77°S; 166°E
Average Annual Temperature (°C) = -17.0
Annual Temperature Range (°C) = 23
Total Annual Precipitation (mm) = 7.8
Summer Precipitation (mm) = 3.7
Winter Precipitation (mm) = 4.1

Geological Factors: Geological evidence for ice ages comes in various forms, including rock scouring and scratching, glacial moraines, drumlins, valley cutting, and the deposition of till and glacial erratics. Successive glaciations tend to distort and erase the geological evidence, making it difficult to interpret. The advent of sediment and ice cores revealed the true situation: glacials are long, interglacials short.
It took some time for the current theory to be worked out.

Chemical Factors: Sediment yields are high from glaciers; this suggests that water flux, rather than physical erosion, exerts the primary control on chemical erosion by glaciers. Potassium and calcium concentrations are high relative to other cations in glacial water, probably due to dissolution of soluble trace phases, such as carbonates, exposed by comminution, and cation leaching from biotite.

Species and Niche: Polar bears are one of the many animals that depend on solid ice to survive. They need stable ice that will not break under them. But as global warming worsens, more and more ice melts, leaving less ice for polar bears to live on.

Environmental Pressures: In recent years it has been recognised that ice/sediment coupling occurred beneath the Quaternary ice sheets that advanced over the soft sediments of lowland areas. This paper looks in detail at the effects of this coupling on the sediments, which results in glaciotectonic deformation, and also discusses the interaction of deformation and deposition within the subglacial environment.

Human Impacts: The effect of global warming has caused many glaciers to melt, adding meltwater to the ocean and raising sea levels.
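Summing the regional areas listed on the slide makes its caveat concrete: the entries come to roughly 14.3 million square kilometers, short of the ~15 million total precisely because small glaciated islands and areas are excluded.

```python
# Sum the regional glacier areas from the slide (square kilometers).
areas = {
    "Antarctica": 11_965_000, "Greenland": 1_784_000, "Canada": 200_000,
    "Central Asia": 109_000, "Russia": 82_000, "United States": 75_000,
    "China and Tibet": 33_000, "South America": 25_000, "Iceland": 11_260,
    "Scandinavia": 2_909, "Alps": 2_900, "New Zealand": 1_159,
    "Mexico": 11, "Indonesia": 7.5, "Africa": 10,
}
total = sum(areas.values())
print(f"listed total: {total:,.0f} km^2")                 # about 14.3 million
print(f"unaccounted:  {15_000_000 - total:,.0f} km^2")    # small glaciated areas
```

Antarctica alone accounts for more than 80 percent of the listed total.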
In physics, Lorentz symmetry, named for Hendrik Lorentz, is "the feature of nature that says experimental results are independent of the orientation or the boost velocity of the laboratory through space". Lorentz covariance, a related concept, is a key property of spacetime following from the special theory of relativity. Lorentz covariance has two distinct, but closely related meanings: An equation is said to be Lorentz covariant if it can be written in terms of Lorentz covariant quantities (confusingly, some use the term "invariant" here). The key property of such equations is that if they hold in one inertial frame, then they hold in any inertial frame; this follows from the result that if all the components of a tensor vanish in one frame, they vanish in every frame. This condition is a requirement according to the principle of relativity, i.e., all non-gravitational laws must make the same predictions for identical experiments taking place at the same spacetime event in two different inertial frames of reference.
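Lorentz invariance can be checked numerically for the simplest covariant quantity, the spacetime interval s² = t² − x² (in units with c = 1): a boost mixes t and x but leaves s² unchanged. A minimal sketch, where the event coordinates and boost velocity are arbitrary illustrative values:

```python
import math

def boost(t, x, v):
    """Lorentz boost along x with velocity v, in units where c = 1."""
    gamma = 1.0 / math.sqrt(1.0 - v * v)
    return gamma * (t - v * x), gamma * (x - v * t)

t, x = 2.0, 1.5            # an arbitrary event
tp, xp = boost(t, x, 0.6)  # the same event seen from a frame moving at 0.6c

interval = t**2 - x**2
interval_boosted = tp**2 - xp**2
# The components change, but the interval is the same in both frames.
print(interval, interval_boosted)
```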
A team of MIT engineers has identified two key physical processes that lend spider silk its unrivaled strength and durability, bringing closer to reality the long-sought goal of spinning artificial spider silk. Manufactured spider silk could be used for artificial tendons and ligaments, sutures, parachutes and bulletproof vests. But engineers have not managed to do what spiders do effortlessly. In a study published in the November issue of the Journal of Experimental Biology, Gareth H. McKinley, professor of mechanical engineering, and colleagues examined how spiders spin their native silk fibers, with hopes of ultimately reproducing the process artificially. McKinley heads the Non-Newtonian Fluid Dynamics research group in MIT's Department of Mechanical Engineering. Non-Newtonian fluids behave in strange and unexpected ways because their viscosity, or consistency, changes with both the rate and the total amount of strain applied to them. Spider silk is a protein solution that undergoes pronounced changes as part of the spinning process. Egg whites, another non-Newtonian fluid, change from a watery gel to a rubbery solid when heated. Spider silk, it turns out, undergoes similar irreversible physical changes. Stickiness and Flow McKinley and Nikola Kojic, a graduate student in the Harvard-MIT Division of Health Sciences and Technology, studied the silk of Nephila clavipes, the golden silk orb-weaving spider. One species of golden orb spider creates a web so strong it can catch small birds. In the South Pacific, people make fishing nets out of this web silk. The researchers chose the golden silk spider because of the formidable strength of its web. But Kojic was taken aback when the first palm-sized spider crawled out of the box he received in the mail from an accommodating employee of Miami's MetroZoo. (She simply gathered some up from the grounds; the zoo does not exhibit golden orb spiders.) "This is pretty scary," he said. "I'd never seen a spider this big. 
I never grew up around anything with furry knuckles." But he quickly settled into dissecting the peanut-sized and -shaped protuberance on the spiders' backs containing their silk-producing glands and spinnerets. Spiders don't actually spin ("spinning" refers to the age-old art of drawing out and twisting fibers to form thread); instead, they squirt out a thick gel of silk solution. (One teaspoonful can make 10,000 webs.) They then use their hind legs as well as their body weight and gravity to elongate the gel into a fine thread. Kojic, who first practiced on silkworms, learned how to extract a microscopic amount of the gel-like solution from the spider's silk-producing major ampullate gland. The researchers used devices called micro-rheometers, custom-made to handle the tiny drops of silk solution, to test the material's behavior when subjected to forces. The team tested the thick solution's viscosity, or how it flowed, by "shearing" it, or placing it between two rapidly moving glass plates. They tested its stickiness by pulling it apart, like taffy, between two metal plates. The magic that makes silk so strong, the researchers discovered, happens while it flows out of the spider's gland, lengthens into a filament and dries. The key to spider silk is polymers. Plastics, Kevlar (used in bulletproof vests) and parts of the International Space Station are some of the many items made from polymers. The proteins in our bodies are polymers made from amino acids. From the Greek for "many" and "units," polymers are long linked chains of small molecules. They can be flexible or stiff, water-soluble or insoluble, resistant to heat and chemicals and very strong. Silk protein solution consists of 30-40 percent polymers; the rest is water. The spider's silk-producing glands are capable of synthesizing large fibrous proteins and processing those proteins into an insoluble fiber.
"The amazing thing nature has found is how to spin a material out of an aqueous solution and produce a fiber that doesn't re-dissolve," McKinley said. Like a cooked egg white, dry spider silk doesn't revert to its former liquid state. What started out as a water-based solution becomes impervious to water. The silk protein's long molecules are like tangled spaghetti. They form a viscous solution but are slippery enough to slide past each other easily and squeeze through the spider's ampullate gland. As the silk gel flows from the gland through an S-shaped, tapered canal to the outside of the spider's body, the long protein molecules become aligned and the viscosity (or resistance to flow) drops by a factor of 500 or more. As the resulting liquid exits the abdomen through the spinneret, it has the characteristics of a liquid crystal. It's the exquisite alignment of the protein fibers, Kojic said, that gives silk threads their amazing strength. While the silk stretches and dries, it forms minuscule crystalline structures that act as reinforcing agents. Engineered nanoparticles, tiny materials suspended in artificial silk, may be able to serve the same purpose. In conjunction with the polymer synthesis and analysis work of Paula T. Hammond, an MIT professor of chemical engineering, McKinley's laboratory will use the new insights about spider silk to team up with MIT's Institute for Soldier Nanotechnologies to emulate the properties of silk through polymer processing. "We're interested in artificial materials that emulate silk," McKinley said. Tailoring the properties of the liquid artificial spinning material to match the properties of the real thing "may prove essential in enabling us to successfully process novel synthetic materials with mechanical properties comparable to, or better than, those of natural spider silk," the authors wrote.
This work was supported by the NASA Biologically Inspired Technology Program, the DuPont-MIT Alliance and the MIT Institute for Soldier Nanotechnologies. Elizabeth A. Thomson | MIT News Office
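The viscosity drop described in the article is characteristic of shear-thinning fluids, often modeled with a power-law relation η = K·γ̇^(n−1). The following is an illustrative sketch, not a model fitted to silk data; the consistency index K, the flow index n, and the shear-rate range are hypothetical values chosen only to reproduce a roughly 500-fold viscosity drop:

```python
def power_law_viscosity(shear_rate, K=100.0, n=0.1):
    """Apparent viscosity of a shear-thinning power-law fluid (Pa*s).

    K (consistency index) and n (flow behavior index) are hypothetical
    illustrative values, not measured properties of silk solution.
    """
    return K * shear_rate ** (n - 1)

low, high = 0.01, 10.0  # slow vs. fast flow through the duct (1/s), assumed
ratio = power_law_viscosity(low) / power_law_viscosity(high)
print(f"viscosity drops by a factor of about {ratio:.0f}")
```

With n well below 1, a thousand-fold increase in shear rate is enough to thin the fluid by the factor of 500 quoted in the article.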
Christopher Pöhlker from the Max Planck Institute for Chemistry is being awarded the Otto Hahn Medal for his outstanding research of the characteristics and sources of biogenic aerosol particles. Pöhlker has proven that biogenic aerosols have a much greater influence on clouds and rain than previously assumed. Biogenic aerosols are tiny airborne particles that originate from plants, fungi, and bacteria. Their effects on the climate and the environment are still largely unknown. For his pioneering research, the 30-year-old chemist will be awarded the Otto Hahn Medal at the General Meeting of the Max Planck Society on June 4, 2014. The Max Planck Society awards the Otto Hahn Medal to young scientists every year to promote their research careers. This year the award, worth 7,500 euros, goes to Christopher Pöhlker for his outstanding academic work in connection with his doctoral thesis. In the opinion of the jury, the thesis gives a new perspective on the role of aerosols in terms of their interactions with the atmosphere, the biosphere, and the global climate. Pöhlker’s results show, for example, that plants and fungi have greater influence on the formation of clouds and the production of precipitation in the rainforest than previously thought. They release potassium-rich particles that trace gases accumulate on. These particles then serve as condensation nuclei for atmospheric moisture, forming clouds and producing rain. The chemist did his research in the Amazonian rainforest and in a semi-arid woodland area in the US. He determined the concentration of bioparticles arising from fungal spores, pollen, and bacteria in the atmosphere and characterized their properties by using various methods such as fluorescence microscopy, fluorescence spectroscopy, X-ray microscopy, and X-ray absorption spectroscopy. 
Christopher Pöhlker, who is currently on a research campaign in the Brazilian rainforest and is therefore not able to attend the award ceremony in person, studied chemistry at the Philipps-Universität in Marburg. During his studies, he spent some time at the department of organic chemistry at Stockholm University, Sweden. Since October 2009 he has been active in the Biogeochemistry Department of the Max Planck Institute for Chemistry in Mainz with Meinrat O. Andreae and Ulrich Pöschl. In 2013 Pöhlker successfully completed his thesis, receiving the grade “summa cum laude”. His research has been published in the prestigious science magazine SCIENCE. Dr. Susanne Benner | Max-Planck-Institut
Photochemical air pollution has long been recognized as one of the major causes of such adverse environmental impacts as visibility degradation, plant deterioration, eye irritation and lung function impairment. In addition, oxidation of both nitrogenous and sulfurous emissions can lead to acidic deposition products, notably nitric acid (HNO3) and sulfuric acid (H2SO4), that can have major long term impacts on ecosystems. Indeed, the consequences of acid deposition from photochemical reactions may be felt for extended periods after the removal of the source and, in some cases, may be irreversible. This situation, coupled with the need for a methodology for relating changes in precursor emissions to ambient air quality, was one of the major motivations for the research reported in this paper. - Acid Deposition of Photochemical Oxidation Products — A Study Using a Lagrangian Trajectory Model. Armistead G. Russell, Gregory J. McRae, Glen R. Cass - Springer US
The galactose repressor protein from E. coli has a pI of about 5.9. While purification protocols were being designed, it was found to bind to a Mono-S column at pH values of 7 and below. (Mono-S columns have S-type sulfonic acid groups attached to the resin and are strong cation exchangers.) What is unusual about this observation? Explain these results. © BrainMass Inc. brainmass.com As you know, all proteins possess a net charge in solution, primarily dependent upon three things: the pH of the solution, their structure, and their isoelectric point (or pI). The pI of a protein is the pH at which the net charge of the protein is zero. The word "net" is important, as we will see below. The "net" charge does not mean that there are no charged groups on the protein, but rather, that the total of all the charges, both negative and positive, equals zero. Therefore, a protein will have a net positive charge, and will bind to a cation exchanger, in solutions where the pH is below its pI. This is easy to remember, since at pH's below their pI, there are lots more H+ ions around, and therefore, you can link that to the extra positive ... This solution is provided in 546 words. It uses pH, protein structure, and isoelectric points to explain the behavior of the galactose repressor protein.
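The qualitative rule — net positive below the pI, net negative above — can be illustrated with a crude Henderson-Hasselbalch estimate of a protein's net charge from side-chain pKa values. The residue counts and pKa values below are hypothetical, chosen only so the pI comes out near 5.9; they are not the actual composition of the gal repressor:

```python
def net_charge(pH, acidic, basic):
    """Estimate net protein charge at a given pH.

    acidic / basic: lists of (count, pKa) for ionizable groups.
    Uses the Henderson-Hasselbalch fractional-ionization formula.
    """
    charge = 0.0
    for count, pKa in basic:    # protonated form carries +1
        charge += count / (1 + 10 ** (pH - pKa))
    for count, pKa in acidic:   # deprotonated form carries -1
        charge -= count / (1 + 10 ** (pKa - pH))
    return charge

# Hypothetical composition: 65 Asp/Glu-like acids, 64 Lys/Arg-like bases,
# tuned so the net charge crosses zero near pH 5.9.
acidic = [(65, 4.1)]
basic = [(64, 10.5)]

for pH in (5.0, 5.9, 7.0):
    print(f"pH {pH}: net charge {net_charge(pH, acidic, basic):+.2f}")
```

The estimate is positive below the pI, near zero at 5.9, and negative at pH 7 — which is exactly why binding to a strong cation exchanger at pH 7 is the unusual observation in the question.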
Right triangle eq2
Hypotenuse of a right triangle is 9 cm longer than one leg and 8 cm longer than the second leg. Determine the circumference and area of the triangle.

To solve this example you need the following knowledge from mathematics:

Next similar examples:
- RT perimeter: The leg of the right triangle is 7 cm shorter than the second leg and 8 cm shorter than the hypotenuse. Calculate the triangle circumference.
- Equilateral triangle: Calculate the area of an equilateral triangle with circumference 72 cm.
- Isosceles triangle: Calculate area and perimeter of an isosceles triangle ABC with base AB if a = 6 cm, c = 7 cm.
- Bamboo: Bamboo 32 feet high was broken by the wind at a certain height, so that the bamboo top reached the ground at a distance of 16 feet from the trunk. At what height from the ground was the bamboo broken?
- Equilateral triangle v2: An equilateral triangle has a perimeter of 36 dm. What is its area?
- Center traverse: Is it true that the middle traverse bisects the triangle?
- Fifth of the number: The fifth of the number is 24 less than that number. What is the number?
- Equation ? has one root x1 = 8. Determine the coefficient b and the second root x2.
- Determine the quadratic equation absolute coefficient q so that the equation has a real double root, and calculate the root x: ?
- Determine the discriminant of the equation: ?
- Theorem prove: We want to prove the sentence: If the natural number n is divisible by six, then n is divisible by three. From what assumption do we start?
- Expression with powers: If x - 1/x = 5, find the value of x^4 + 1/x^4
- X + y = 5, find xy (find the product of x and y if x + y = 5)
- Evaluation of expressions: If a^2 - 3a + 1 = 0, find (i) a^2 + 1/a^2 (ii) a^3 + 1/a^3
- For what x does the expression ? equal zero?
- Quadratic equation: Find the roots of the quadratic equation: 3x^2 - 4x + (-4) = 0.
- Holidays - on pool: Children's tickets to the swimming pool cost x €; a ticket for an adult is € 2 more expensive. There were m children in the swimming pool and three times fewer adults. How many euros does the treasurer collect for pool entry?
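The first problem above can be solved by substituting a = c − 9 and b = c − 8 into the Pythagorean theorem, which reduces to the quadratic c² − 34c + 145 = 0. A short sketch:

```python
import math

# Hypotenuse c is 9 cm longer than one leg and 8 cm longer than the other:
# (c - 9)^2 + (c - 8)^2 = c^2  =>  c^2 - 34c + 145 = 0
qa, qb, qc = 1, -34, 145
disc = math.sqrt(qb * qb - 4 * qa * qc)
roots = [(-qb + disc) / 2, (-qb - disc) / 2]  # 29 and 5

c = max(roots)           # c = 5 would give a negative leg, so c = 29
legs = (c - 9, c - 8)    # 20 and 21
perimeter = c + sum(legs)
area = legs[0] * legs[1] / 2
print(perimeter, area)   # circumference 70.0 cm, area 210.0 cm^2
```

Check: 20² + 21² = 400 + 441 = 841 = 29², so the circumference is 70 cm and the area is 210 cm².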
Observations of Photon-Induced Gas Desorption from Technological Materials Using Synchrotron Light

The advent of intense light beams from synchrotron sources provides a ready opportunity to redress the dearth of reliable data on the efficiency of the process of photon-induced gas desorption from solid surfaces. Paradoxically, the operation of vacuum within synchrotron machines is itself greatly influenced by the process, as was recognised in the pioneering work of Fischer, Hack and co-workers during the 1960’s /1/. With the widespread development of electron storage rings in recent years, there has been a great deal of interest in assessing the extent of self-cleaning within machines, and in examining the contributions of surface pre-treatments such as high temperature bakeout or plasma conditioning /2/. These considerations for the provision of vacuum walls with low desorption characteristics are equally relevant to the operation of controlled fusion machines, since the gaseous impurities can lead to serious plasma contamination. Indeed, the process can possibly lead to a runaway situation as successive waves of desorbed impurity act to stimulate further photon desorption. The present programme of experimental work is directed particularly towards elucidating the operation of the JET plasma experiment at Culham Laboratory where the radiation spectrum spans an energy range around 0.5 to 10 keV at a power level of a few watts per cm2 /3/. These conditions have been simulated using the Soft X-ray beam line at the Synchrotron Radiation Source (SRS), Daresbury Laboratory.

Keywords: Storage Ring, Synchrotron Radiation Source, Desorption Efficiency, Electron Storage Ring, Daresbury Laboratory

References:
2. B.A. Trickett: Vacuum 37, in the press
5. R.S. Vaughan-Watkins and E.M. Williams: Proc. 8th Vac. Cong. and 4th Int. Conf. Solid Surf., Vol. II, p. 387
6. P.A. Redhead, J.P. Hobson and E.V. Kornelsen: The Physical Basis of Ultra-High Vacuum (Chapman and Hall, London 1968)
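The practical quantity in such measurements is the desorption yield η (molecules desorbed per incident photon); the resulting gas load and steady-state pressure rise follow directly from the ideal-gas law. A back-of-the-envelope sketch with hypothetical values — the yield, photon flux, and pumping speed below are illustrative, not measured SRS or JET figures:

```python
# Ideal-gas conversion: one molecule at temperature T contributes k_B * T
# of pV-throughput (Pa * m^3).
K_B = 1.380649e-23  # Boltzmann constant, J/K
T = 293.0           # gas temperature, K

eta = 1e-3          # molecules desorbed per photon (hypothetical yield)
photon_flux = 1e17  # photons per second striking the wall (hypothetical)
pump_speed = 0.5    # effective pumping speed, m^3/s (hypothetical)

molecules_per_s = eta * photon_flux
gas_load = molecules_per_s * K_B * T   # Pa * m^3 / s
pressure_rise = gas_load / pump_speed  # Pa, steady state
print(f"dynamic pressure rise ~ {pressure_rise:.1e} Pa")
```

Even a modest yield produces a pressure rise well above ultra-high-vacuum base pressures, which is why self-cleaning (the decline of η with accumulated photon dose) matters so much for storage rings.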
Researchers have found that a primitive type of ion channel similar to those found in mammalian nerve cells helps bacteria resist the blast of acid they encounter in the stomach of their hosts. The discovery suggests a plausible mechanism whereby bacteria can fend off stomach acidity long enough to establish themselves in the intestine. More broadly, said the scientists, the finding represents the first insight into why bacteria have forms of the same ion channels -- proteins that control the flow of ions through cell membranes -- found in higher organisms. In an article published in the October 17, 2002, issue of the journal Nature, researchers led by Howard Hughes Medical Institute investigator Christopher Miller present evidence that the chloride ion channel is an integral part of the extreme acid resistance (XAR) response of the bacterium E. coli. Miller co-authored the paper with colleagues Ramkumar Iyer, Tina M. Iverson and Alessio Accardi, all of Brandeis University. Jim Keeley | EurekAlert!
At a bridge on the border between Kenya and Tanzania, they noticed that whenever the Mara River rose by a few feet, dead fish would wash up on its banks, sometimes in the thousands. Storks, vultures, crocodiles, and hyenas made short work of the carcasses, so “if you weren’t there to see it, you’d never know it was happening,” says Dutton. Local rangers knew about the die-offs, but they blamed the events on farmers who sprayed pesticides in upstream fields. It wasn’t the farmers. Through an increasingly bold set of experiments, involving remote-controlled boats, computer simulations, a makeshift dam, and vast tankers of excrement-filled water, Dutton and Subalusky identified the real culprits: hippos. The duo, who are married, published their results in a paper with the remarkably polite title of “Organic matter loading by hippopotami causes subsidy overload resulting in downstream hypoxia and fish kills.” To translate: Hippos sometimes poop so much that all the fish choke to death. At night, hippos wander into grasslands to graze. During the day, they return to rivers to keep cool and protect themselves from sunburn. As they wallow, they constantly urinate and defecate. Every day, the 4,000 or so hippos in the Mara deposit about 8,500 kilograms of waste into a stretch of river that’s just 100 kilometers long. “Down at the bridge, you can put a net in the water for a few seconds, and the entire middle will just be coated with hippo feces,” says Dutton. “There’s hippo feces everywhere. Over the rocks. Over the bottom.” In the dry season, when the Mara becomes narrower and shallower, certain stretches of it become especially thick with hippos—and their dung. Hippos are aggressive and dangerous, so only the foolhardiest of researchers would wade into these so-called hippo pools. Instead, Dutton and Subalusky deployed a remote-controlled boat armed with sensors. 
It revealed that the mud and water at the bottom of these hotspots is a stagnant mess of ammonia, methane, hydrogen sulfide, and other chemical grotesqueries. It’s also starved of oxygen: Almost all of the gas is consumed by bacteria as they slowly digest the accumulated hippo poop. During heavy rains, extra water floods into the hippo pools, churning up the putrefying muck and sending it off downstream. For good reason, these events are called “flushing flows.” To study them, Dutton and Subalusky used an oxygen-logger—an arm-long cylindrical device that, to the untrained eye, looks rather like a pipe bomb. “We always get stopped at airports,” Dutton says. Once dangled off the side of a bridge, the logger revealed that flushing flows dramatically reduce the oxygen levels of the downstream river, often to levels that are lethal for many aquatic animals. That, says Dutton and Subalusky, suffocates the fish. The duo went out of their way to confirm this idea. They added hippo poop to bottles of water and demonstrated that oxygen levels fall. They added poopy water to “experimental streams”—long trays designed to simulate a flowing river. But they still craved a more realistic experiment. “We were talking about ways of how we could create a flood through a pool, and some other researchers said: Why don’t you build a small dam?” Inspired, the duo used sandbags to block off the water supply to a nearby pool that’s in hippo territory but not frequented by the animals. A Maasai fixer connected them to a guy who had a large truck, another guy who owned two huge 4,000-liter tanks, and a third guy who owned a large wastewater pump. With all of that, the team transferred 16,000 liters of soiled hippo water into their artificial pool. And when they released the sandbags, they found that oxygen levels did indeed plummet in the water downstream. Bizarrely, this is the second study to be published this week on how hippo poop affects river environments. 
Keenan Stears, from the University of California at Santa Barbara, did similar work in Tanzania’s Great Ruaha River. Unlike the Mara, the Ruaha’s waters have been heavily drained by upstream farms. During the dry season, it stops flowing altogether, and its hippos are confined to isolated pools. Stears found that pools with lots of hippos have much less oxygen than those where the beasts are rare. As such, they had half the diversity of fish and invertebrate species, and just 4 percent the numbers of fish. Only in the wet seasons, when water once again flowed between the pools, did the fish and invertebrates bounce back. Stears estimates that around 94 percent of hippo populations in Africa live in rivers like the Ruaha that have either already started to dry up or will likely do so as the climate changes. “The results from our study are indicative of emerging issues relevant to the whole of Africa,” he says. This “reinforces the need to maintain flows in these river systems during the dry season,” adds Frank Masese, from the University of Eldoret in Kenya. But Dutton and Subalusky’s study shows that hippos and their oxygen-sucking waste can occasionally be problematic, even in relatively pristine rivers like the Mara. And their work challenges us to reconsider what pristine even means. Last year, the duo showed that migrating wildebeest nourish the Serengeti by drowning en masse in the Mara, adding about 1,100 tons of dead meat to the river every year. “The Mara really is a unique system,” Subalusky says. Hippos and wildebeest act as conveyor belts, channeling land-based nutrients into the water in the form of excrement and carcasses. And that water runs through a landscape that’s dominated by herds of elephants, and thousands of zebras and gazelles. In this way, the Mara reflects what rivers elsewhere in the world might have once looked like, before humans slaughtered their way through mammoths, bison, and other megafauna. 
It’s not a babbling brook of clear water. It’s a world of dead bodies, putrefying poop, and the occasional wave of suffocation.
<urn:uuid:5d60b9a2-bdba-4e61-8c7c-e1c9fab818b7>
3.359375
1,423
Truncated
Science & Tech.
50.102128
95,498,456
Could scientists reverse global warming? The U.N. discusses plans to reflect the sun to cool the Earth A group of scientists, philosophers and legal scholars is looking into radical geoengineering plans to artificially cool the Earth in a bid to reverse global warming. A U.N. climate conference in South Africa on Friday said that - in theory - reflecting a small amount of sunlight back into space before it strikes the Earth's surface would have an immediate and dramatic effect. Within a few years, global temperatures could return to levels of 250 years ago, before the industrial revolution began dumping carbon dioxide into the air, trapping heat and causing temperatures to rise. Could geoengineering save Earth from global warming? Some scientists think so But no one knows what the side effects would be. They could unintentionally change weather patterns and rainfall. The idea of solar radiation management ‘has the potential to be either very useful or very harmful,’ said the study led by Britain's Royal Society, the Washington-based Environmental Defense Fund and TWAS, the academy of sciences for the developing world based in Trieste, Italy. The final report is the climax of a year-long dialogue spanning experts in 22 countries. It was prompted by the failure of a 20-year U.N. negotiating process to take decisive action to curb greenhouse gas emissions, mainly from burning fossil fuels, responsible for climate change. 'The slow progress of international climate negotiations has led to increased concerns that sufficient cuts in greenhouse gas emissions may not be achieved in time to avoid unacceptable levels of climate change,' the report said. But geoengineering is not an alternative to climate action, said John Shepherd, a British oceanographer from the University of Southampton who was the lead author of the report. 'Nobody thought this provides a justification for not reducing carbon emissions,' Shepherd told AP.
Heating up: A panel on climate change has predicted temperatures rising as much as 6.4°C by 2100 'We have to stick with Plan A for the time being, and that could be a very long time indeed,' he said. 'This would buy time for people to make the transition to a low-carbon economy.' The Intergovernmental Panel on Climate Change sees temperatures rising as much as 6.4 degrees Celsius (11.5 degrees Fahrenheit) by 2100, swelling the seas with melted glacial water and disrupting climate conditions around the globe. Deliberately tinkering with nature to counter global warming can only be a stopgap measure, and is fraught with danger, the report said. Action such as spraying sulfur into the air or brightening clouds with sea water to reflect more sunlight would have to be sustained indefinitely because 'there would be a large and rapid climate change if it were terminated suddenly,' the report said. Theories of manipulating the climate to impede global warming have been on the fringe of scientific discussion for some time, but are now moving towards the mainstream. In the United States, a group of 18 U.S. experts from the sciences, social sciences and national security unveiled a report in October urging the federal government to begin research on the feasibility and potential effectiveness of geoengineering.
<urn:uuid:af0020f5-21d1-4d95-8344-5d544466b391>
3.59375
828
Truncated
Science & Tech.
40.144533
95,498,485
VCS Scripts: Scripting Outline Attributes: Go_name The primary purpose of the Outline graphics method is to display outlined continents or sea ice using a surface type array that indicates land, ocean, and sea ice points. In general, however, this graphics method can be used to outline a set of integer values for any array. For more information on the attributes that are common to all the graphics methods, see Scripting Graphics Attributes: G*_name. The Go_name entry, together with the Outline-specific graphics attributes, is used to define an Outline table entry, or to change some or all of the attributes in an existing Outline table entry. Attributes specific to the Outline graphics method are as follows: Outlines are drawn to enclose the specified values in the data array. As few as one or as many as ten values can be specified. An example of an Outline graphics method script follows:
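The example itself did not survive in this copy of the page. As a stand-in, here is a minimal sketch of what an Outline table entry in a VCS script might look like; the entry name Go_example, the attribute spellings, and the value syntax are assumptions modeled on the general G*_name script conventions, not taken verbatim from the VCS reference.

```
Go_example(
   projection=linear,
   datawc(-180, -90, 180, 90),
   outline(1, 2, 3)
   )
```

Here outline(1, 2, 3) would ask the method to draw outlines enclosing the integer values 1, 2, and 3 in the data array; for a surface-type array these values might mark land, ocean, and sea-ice points.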
<urn:uuid:5668bb04-9b46-4c30-9807-f3d95d8a3a17>
2.96875
185
Documentation
Software Dev.
23.678
95,498,488
Library filed under Impact on Wildlife from USA ...some wind power facilities, such as the Altamont Pass Wind Resource Area (APWRA) in eastern Alameda and Contra Costa Counties, California, are causing severe environmental impacts to raptor populations due to bird kills from collisions with turbines and electrocution on power lines. This letter from the West Virginia Field Office of the US Fish and Wildlife Service to NedPower Mount Storm responds to the developer's biological assessment for endangered bats at the proposed Mount Storm Windpower project site to be located in Grant County, West Virginia. There is less than 4% of native tallgrass prairie left in North America, and two-thirds of it is right here. Once you have experienced the spaciousness and exceptional beauty of open native grasslands, you know there is nothing in the world quite like it. These native grasslands are truly a national as well as a Kansas treasure. Wind turbines in the Altamont Pass Wind Resource Area (APWRA) provide on average 1.1 billion kilowatt-hours (kWh) of emissions-free electricity annually, enough to power almost 200,000 average households, but these turbines also kill birds that are legally protected, and have been doing so for decades. This five-year research effort focused on better understanding the causes of bird mortality at the world's largest wind farm. Researchers studied 2,548 wind turbines and combined their data with results from 1,526 wind turbines they had studied previously.
They sought to: (1) quantify bird use, including characterizing and quantifying perching and flying behaviors of individual birds around wind turbines; (2) evaluate flight behaviors and the environmental and topographic conditions associated with them; (3) identify possible relationships between bird mortality and bird behaviors, wind tower design and operations, landscape attributes, and prey availability; and (4) develop predictive, empirical models that identify turbine or environmental conditions that are associated with high vulnerability. Researchers concluded that bird fatalities at the APWRA result from various attributes of wind turbine configuration and placement, and that species-specific behavior plays a large role in how each contributory factor affects mortality. The report details numerous specific observations. Researchers identified and evaluated possible measures to mitigate bird mortality in the APWRA. They offer recommendations to discontinue or modify some current management actions, to implement new ones immediately, and to experiment with others. Data presented in the report support these recommendations. The results suggest that repowering with carefully placed, modern wind turbines mounted on taller towers may be the preferable means to substantially reduce bird mortality. Researched and written by Eleanor Tillinghast of Green Berkshires Inc., this is a comprehensive study of the probable impact of industrial wind plants on the rural character, quality-of-life and economy of the Berkshires in western Massachusetts. Specific issues addressed include visual aesthetics, tourism, property values, public roads and public safety. My research started with the visual and spatial aspects of WECSs, and continues to be focused on WECS effects on “landscape character,” i.e. impacts on the spatial environment, with implications for cultural values and social systems of our region.
I am equally concerned about the predictable negative effects of WECSs on the natural systems of the Flint Hills. I am concerned about serious cumulative effects and the degradation of: the visual character of our environment; the social fabric of communities that are facing the prospect of WECS-C; the health of biological, ecological components of our regional ecosystem; and the long term viability of our local, increasingly “nature-based” economy. This document [DEIS] has not provided any demonstrable public need for the insignificant amount of power this facility is capable of producing. No valid, compelling local (or even statewide) economic reasons were offered to potentially offset the overwhelming negative impacts that will result if built. This DEIS is abundant in quantity, but extremely lacking in quality of scientific analysis and entirely deficient in analysis in certain areas. Various mitigations offered are unacceptable or unworkable. The following are areas of analysis that were either deficient or not performed at all:............ Wind turbines to produce electricity on a large scale – “wind farms” – are currently being proposed for parts of Tug Hill. Large-scale wind farms are a relatively new occurrence in the Northeast, and since they are new there are many questions that do not have clear answers. This graphic shows the relationship between the height of turbines and the collision threat to nocturnal migrants at the Chautauqua Windplant, NY, in the Fall of 2003. A companion graphic included in the NWW photo gallery depicts this threat to nocturnal migrants in the Spring of 2003.
New Jersey Audubon Society (NJAS) and its 20,000 members generally support environmentally-responsible renewable energy sources, such as wind power, photovoltaic cells, geothermal and hydro-fuel cells. Because traditional energy sources contribute to global climate change, habitat change and degradation, smog pollution, mercury contamination in our waterways, and radioactive waste, NJAS recognizes the importance of developing emission-free sources of energy. However, we are concerned about the potential impacts of these developing technologies on wildlife, and natural habitats. More than 25 national and regional conservation groups, including Defenders of Wildlife, National Audubon Society, the Humane Society of the United States, and the Endangered Species Coalition, today called on Interior Secretary Gale Norton and other federal officials to assess the impacts of planned extensive wind power development on Appalachian mountain ridges on migratory birds, before these projects are constructed. In a letter to Secretary Norton and others, the groups cited documented bird kills by existing wind turbines in the region, and urged the U.S. Fish and Wildlife Service (FWS) to develop appropriate criteria for siting and construction of these facilities under the Migratory Bird Treaty Act, which makes it illegal to kill migratory birds. The attached pdf file contains a letter written by Steve Anschutz, Nebraska Field Supervisor of the USF&WS, to Rockford Plettner, Environmental Specialist Water/Natural Resources of Nebraska Public Power District (NPPD). The letter responds to a NPPD request for input regarding the possible construction of a wind farm south of Ainsworth, Brown County, Nebraska. The letter's comments are provided as technical assistance and predevelopment consultation.... This graphic shows the relationship between the height of turbines and the collision threat to nocturnal migrants at the Chautauqua Windplant, NY, in the Spring of 2003. 
A companion graphic included in the NWW photo gallery depicts this threat to nocturnal migrants in the Fall of 2003. The commission unanimously voted down the proposal after a five-hour hearing attended by more than 150 people at Santa Clarita City Hall. Zond Systems, the largest producer of wind energy in California, had hoped to erect more than 300 windmills--some as tall as 150 feet--on vacant, mountainous land near Gorman. The proposed Deerfield Wind project in Readsboro and Searsburg is continuing to move forward, with the Public Service Board approving a plan for a bear study and the public comment period set by the Green Mountain National Forest having ended.
<urn:uuid:d483c97a-cade-4b53-81b4-b27a42c558b4>
3
1,613
Content Listing
Science & Tech.
29.367826
95,498,490
Osmoregulation is a process which maintains the homeostasis of an organism’s water content through the regulation of the osmotic pressure of body fluids. As organisms began to move out of marine habitats to other environments such as land, seashores and estuaries, osmoregulation grew in importance. Osmoregulation is crucial because in order to survive, organisms must retain the appropriate concentration of both water and solutes. It is through osmoregulation that organisms are able to balance water loss and solute uptake and vice versa. In general, most organisms must deal with osmotic challenges. They can respond to these challenges in one of two ways: - Conform: For species which conform, such as marine fish, this means that the changes which their bodies undergo mirror the conditions of their environments. Organisms which conform can either be stenohaline or euryhaline. A stenohaline species, such as a goldfish, cannot survive in conditions where the water has high salinity levels or high salinity fluctuations. Conversely, euryhaline species such as salmon are able to tolerate changes in salinity. - Regulate: For species which are regulators, they are able to maintain the proper osmotic balance of their bodies regardless of the solute and water concentrations of the environment. Regulators can either be a) hyperosmotic or b) hyposmotic. Hyperosmotic regulation means that the concentration of solutes within the body is greater than the solute concentration of the environment. For example, fish living in freshwater environments are hyperosmotic regulators and must combat the challenges of solute loss and high water uptake. Thus, these organisms have evolved structures such as chloride cells, which allow them to take up chloride ions quickly as those ions are continuously being lost in freshwater environments. Alternatively, in hyposmotic regulation, organisms face the challenges of too much water loss and salt gain, through the skin for example.
Organisms such as reptiles and mammals on land are hyposmotic regulators. The salt gland found in birds and reptiles is one mechanism these organisms use to combat these challenges.
<urn:uuid:eae10698-5e9f-4be5-ad47-9c8cb2294bd4>
3.625
472
Knowledge Article
Science & Tech.
24.559737
95,498,492
Solid-state nuclear magnetic resonance Solid-state NMR (SSNMR) spectroscopy is a kind of nuclear magnetic resonance (NMR) spectroscopy, characterized by the presence of anisotropic (directionally dependent) interactions. A spin interacts with a magnetic or an electric field. Spatial proximity and/or a chemical bond between two atoms can give rise to interactions between nuclei. In general, these interactions are orientation dependent. In media with no or little mobility (e.g. crystals, powders, large membrane vesicles, molecular aggregates), anisotropic interactions have a substantial influence on the behaviour of a system of nuclear spins. In contrast, in a classical liquid-state NMR experiment, Brownian motion leads to an averaging of anisotropic interactions. In such cases, these interactions can be neglected on the time-scale of the NMR experiment. Examples of anisotropic nuclear interactions Two directionally dependent interactions commonly found in solid-state NMR are the chemical shift anisotropy (CSA) and the internuclear dipolar coupling. Many more such interactions exist, such as the anisotropic J-coupling in NMR, or in related fields, such as the g-tensor in electron spin resonance. In mathematical terms, all these interactions can be described using the same formalism. Anisotropic interactions modify the nuclear spin energy levels (and hence the resonance frequency) of all sites in a molecule, and often contribute to a line-broadening effect in NMR spectra. However, there is a range of situations when their presence can either not be avoided, or is even particularly desired, as they encode structural parameters, such as orientation information, on the molecule of interest.
High-resolution conditions in solids (in a wider sense) can be established using magic angle spinning (MAS), macroscopic sample orientation, combinations of both of these techniques, enhancement of mobility by highly viscous sample conditions, and a variety of radio frequency (RF) irradiation patterns. While the latter allows decoupling of interactions in spin space, the others facilitate averaging of interactions in real space. In addition, line-broadening effects from microscopic inhomogeneities can be reduced by appropriate methods of sample preparation. Under decoupling conditions, isotropic interactions can report on the local structure, e.g. by the isotropic chemical shift. In addition, decoupled interactions can be selectively re-introduced ("recoupling"), and used, for example, for controlled de-phasing or transfer of polarization to derive a number of structural parameters. Solid-state NMR line widths The residual line width (full width at half max) of 13C nuclei under MAS conditions at 5–15 kHz spinning rate is typically in the order of 0.5–2 ppm, and may be comparable to solution-state NMR conditions. Even at MAS rates of 20 kHz and above, however, non linear groups (not a straight line) of the same nuclei linked via the homonuclear dipolar interactions can only be suppressed partially, leading to line widths of 0.5 ppm and above, which is considerably more than in optimal solution state NMR conditions. Other interactions such as the quadrupolar interaction can lead to line widths of thousands of ppm due to the strength of the interaction. The first-order quadrupolar broadening is largely suppressed by sufficiently fast MAS, but the second-order quadrupolar broadening has a different angular dependence and cannot be removed by spinning at one angle alone. 
Ways to achieve isotropic lineshapes for quadrupolar nuclei include spinning at two angles simultaneously (DOR), sequentially (DAS), or through refocusing the second-order quadrupolar interaction with a two-dimensional experiment such as MQMAS or STMAS. Anisotropic interactions in solution-state NMR From the perspective of solution-state NMR, it can be desirable to reduce motional averaging of dipolar interactions by alignment media. The order of magnitude of these residual dipolar couplings (RDCs) is typically only a few Hz, but they do not destroy high-resolution conditions, and they provide a pool of information, in particular on the orientation of molecular domains with respect to each other. The dipolar coupling between two nuclei is inversely proportional to the cube of their distance. This has the effect that the polarization transfer mediated by the dipolar interaction is cut off in the presence of a third nucleus (all of the same kind, e.g. 13C) close to one of these nuclei. This effect is commonly referred to as dipolar truncation. It has been one of the major obstacles in efficient extraction of internuclear distances, which are crucial in the structural analysis of biomolecular structure. By means of labeling schemes or pulse sequences, however, it has become possible to circumvent this problem in a number of ways. Another way of circumventing dipolar truncation in the case of rare nuclei like 13C is to study the systems at their natural isotopic abundance utilising DNP-assisted solid-state NMR under magic-angle spinning, where the probability of finding a third spin is almost 100 times lower. Nuclear spin interactions in the solid phase The chemical shielding is a local property of each nucleus, and depends on the external magnetic field. Specifically, the external magnetic field induces currents of the electrons in molecular orbitals.
These induced currents create local magnetic fields that often vary across the entire molecular framework such that nuclei in distinct molecular environments usually experience unique local fields from this effect. The J-coupling or indirect nuclear spin-spin coupling (sometimes also called "scalar" coupling despite the fact that J is a tensor quantity) describes the interaction of nuclear spins through chemical bonds. Main article: Dipolar coupling (NMR) Nuclear spins exhibit a dipole moment, which interacts with the dipole moment of other nuclei (dipolar coupling). The magnitude of the interaction is dependent on the spin species, the internuclear distance, and the orientation of the vector connecting the two nuclear spins with respect to the external magnetic field B (see figure). The maximum dipolar coupling is given by the dipolar coupling constant d = (μ0/4π)·ħγ1γ2/r³, where r is the distance between the nuclei, and γ1 and γ2 are the gyromagnetic ratios of the nuclei. In a strong magnetic field, the dipolar coupling depends on the orientation of the internuclear vector with the external magnetic field through D = d·(3cos²θ − 1)/2. Consequently, two nuclei with a dipolar coupling vector at an angle of θm = 54.7° to a strong external magnetic field, which is the angle where D becomes zero, have zero dipolar coupling. θm is called the magic angle. One technique for removing dipolar couplings, at least to some extent, is magic angle spinning. Nuclei with a spin greater than one-half have a non-spherical charge distribution; such a nucleus is known as a quadrupolar nucleus. A non-spherical charge distribution can interact with an electric field gradient caused by some form of non-symmetry (e.g. in a trigonal bonding atom there are electrons around it in a plane, but not above or below it) to produce a change in the energy level in addition to the Zeeman effect. The quadrupolar interaction is the largest interaction in NMR apart from the Zeeman interaction, and the two can even become comparable in size.
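To make the orientation dependence of the dipolar coupling concrete, here is a small numerical sketch. It assumes the secular form D = d·(3cos²θ − 1)/2 with d = (μ0/4π)·ħγ1γ2/r³, and uses an illustrative 1H–13C pair at a typical one-bond C–H distance; the function name and the chosen numbers are for illustration only.

```python
import math

MU0_OVER_4PI = 1e-7          # mu_0 / (4*pi) in T*m/A
HBAR = 1.054571817e-34       # reduced Planck constant in J*s

def dipolar_coupling(gamma1, gamma2, r, theta):
    """Secular dipolar coupling D = d * (3*cos^2(theta) - 1) / 2 in rad/s,
    with coupling constant d = (mu_0/4pi) * hbar * gamma1 * gamma2 / r**3."""
    d = MU0_OVER_4PI * HBAR * gamma1 * gamma2 / r ** 3
    return d * (3 * math.cos(theta) ** 2 - 1) / 2

GAMMA_H = 2.675e8            # 1H gyromagnetic ratio in rad/s/T
GAMMA_C = 6.728e7            # 13C gyromagnetic ratio in rad/s/T
R_CH = 1.09e-10              # typical one-bond C-H distance in m

theta_magic = math.acos(1 / math.sqrt(3))   # the magic angle, ~54.74 degrees
d_max_hz = dipolar_coupling(GAMMA_H, GAMMA_C, R_CH, 0.0) / (2 * math.pi)

# At theta = 0 the coupling is maximal (on the order of 23 kHz for this pair);
# at the magic angle the (3cos^2(theta) - 1) factor, and hence D, vanishes.
print(round(d_max_hz / 1e3, 1), "kHz")
print(abs(dipolar_coupling(GAMMA_H, GAMMA_C, R_CH, theta_magic)) < 1e-6)
```

Spinning the whole sample about an axis at θm exploits exactly this zero, which is why magic angle spinning averages the dipolar interaction away.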
Because the quadrupolar interaction is so large, it cannot be treated to first order only, as most of the other interactions can. This means there are first-order and second-order contributions, which can be treated separately. The first-order interaction has an angular dependency with respect to the magnetic field of (3cos²θ − 1)/2 (the P2 Legendre polynomial), which means that spinning the sample at the magic angle (~54.74°) averages out the first-order interaction over one rotor period (all other interactions apart from Zeeman, chemical shift, paramagnetic and J coupling also have this angular dependency). However, the second-order interaction depends on the P4 Legendre polynomial, which has zero points at 30.6° and 70.1°. These can be taken advantage of by either using DOR (DOuble Rotation), where the sample is spun at two angles at the same time, or DAS (Dynamic-Angle Spinning), where the sample is switched quickly between the two angles. Specialized hardware (probes) has been developed for such experiments. A revolutionary advance was Lucio Frydman's multiple-quantum magic angle spinning (MQMAS) NMR in 1995, which has become a routine method for obtaining high-resolution solid-state NMR spectra of quadrupolar nuclei. A similar method to MQMAS is satellite-transition magic angle spinning (STMAS) NMR, proposed by Zhehong Gan in 2000. Paramagnetic substances are subject to the Knight shift. History of discoveries of NMR phenomena, and the development of solid-state NMR spectroscopy: Purcell, Torrey and Pound observed "nuclear induction" on 1H in paraffin in 1945; at about the same time, Bloch et al. observed it on 1H in water. Modern solid-state NMR spectroscopy Methods and techniques A fundamental RF pulse sequence and building-block in most solid-state NMR experiments starts with cross-polarization (CP) [Waugh et al.]. It can be used to enhance the signal of nuclei with a low gyromagnetic ratio (e.g. 13C, 15N) by magnetization transfer from nuclei with a high gyromagnetic ratio (e.g. 1H), or as a spectral editing method (e.g.
directed 15N→13C CP in protein spectroscopy). To establish magnetization transfer, the RF pulses applied on the two frequency channels must fulfill the Hartmann–Hahn condition [Hartmann, 1962]: the nutation frequencies of the two spin species in their respective rf fields must be identical (γ1B1,1 = γ2B1,2). Experimental optimization of this condition is one of the routine tasks in performing a (solid-state) NMR experiment. CP-MAS is a basic building block of most pulse sequences in solid-state NMR spectroscopy. Given its importance, a pulse sequence employing direct excitation of 1H spin polarization, followed by CP transfer to and signal detection of 13C, 15N or similar nuclei, is itself often referred to as a CP experiment, or, in conjunction with MAS, as CP-MAS [Schaefer and Stejskal, 1976]. It is the typical starting point of an investigation using solid-state NMR spectroscopy. Spin interactions must be removed (decoupled) to increase the resolution of NMR spectra and isolate spin systems. Homonuclear RF decoupling decouples spin interactions of nuclei that are the same as those being detected. Heteronuclear RF decoupling decouples spin interactions of other nuclei. Although the broadened lines are often not desired, dipolar couplings between atoms in the crystal lattice can also provide very useful information. Dipolar couplings are distance-dependent, and so they may be used to calculate interatomic distances in isotopically labeled molecules. Because most dipolar interactions are removed by sample spinning, recoupling experiments are needed to re-introduce desired dipolar couplings so they can be measured. Protons in solid-state NMR In contrast to traditional approaches, particularly in protein NMR, in which the broad lines associated with protons effectively relegate this nucleus to the mixing of magnetization, recent developments in hardware (very fast MAS) and the reduction of dipolar interactions by deuteration have made protons as versatile as they are in solution NMR.
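The Hartmann–Hahn matching condition for cross-polarization described above can be sketched numerically: the rf field on the rare-spin channel must be scaled by the ratio of gyromagnetic ratios so that both spin species nutate at the same frequency. The 50 kHz figure below is an arbitrary illustrative choice, not a value from the text, and the function name is hypothetical.

```python
import math

GAMMA_H = 2.675e8   # 1H gyromagnetic ratio in rad/s/T
GAMMA_C = 6.728e7   # 13C gyromagnetic ratio in rad/s/T

def matched_b1(gamma_src, b1_src, gamma_dst):
    """B1 amplitude on the destination channel satisfying the
    Hartmann-Hahn condition gamma_src*B1_src == gamma_dst*B1_dst."""
    return gamma_src * b1_src / gamma_dst

nu1_h = 50e3                                # chosen 1H nutation frequency, Hz
b1_h = 2 * math.pi * nu1_h / GAMMA_H        # corresponding 1H rf field, T
b1_c = matched_b1(GAMMA_H, b1_h, GAMMA_C)   # matched 13C rf field, T

nu1_c = GAMMA_C * b1_c / (2 * math.pi)      # resulting 13C nutation frequency, Hz
# The 13C field must be ~4x stronger than the 1H field (gamma ratio ~3.98),
# and the two nutation frequencies then coincide.
print(round(b1_c / b1_h, 2))
```

In practice this matching is tuned experimentally rather than computed, which is why the text calls its optimization a routine task; the arithmetic only shows why the rare-spin channel needs the stronger rf field.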
This includes spectral dispersion in multi-dimensional experiments as well as structurally valuable restraints and parameters important for studying the materials' dynamics. Membrane proteins and amyloid fibrils, the latter related to Alzheimer's disease and Parkinson's disease, are two examples of applications where solid-state NMR spectroscopy complements solution-state NMR spectroscopy and beam diffraction methods (e.g. X-ray crystallography, electron microscopy). Solid-state NMR structure elucidation of proteins has traditionally been based on secondary chemical shifts and spatial contacts between heteronuclei. Currently, paramagnetic contact shifts and specific proton-proton distances are also used for higher resolution and longer-range distance restraints. Solid-state NMR spectroscopy serves as an analysis tool in organic and inorganic chemistry, where it is used as a valuable tool to study local dynamics, kinetics, and thermodynamics of a variety of systems. Objects of SSNMR studies in materials science are inorganic/organic aggregates in crystalline and amorphous states, composite materials, heterogeneous systems including liquid or gas components, suspensions, and molecular aggregates with dimensions on the nanoscale. In many cases, NMR is the uniquely applicable method for the measurement of porosity, particularly for porous systems containing partially filled pores or for dual-phase systems. SSNMR is also one of the most effective techniques for molecular-level investigation of interfaces. Studies of solids by NMR relaxation experiments rest on the following general statements. The experimental decay of macroscopic transverse or longitudinal magnetization follows the exponential law when the spin-diffusion mechanism dominates completely, and a single relaxation time then characterizes all of the nuclei in rigid solids, even those that are not chemically or structurally equivalent.
The spin-diffusion mechanism is typical of systems with nuclei experiencing strong dipolar interactions (protons, fluorine or phosphorus nuclei at relatively small concentrations of paramagnetic centers). For other nuclei with weak dipolar coupling and/or at high concentrations of paramagnetic centers, relaxation can be non-exponential, following a stretched exponential function, exp(−(τ/T1)^β) or exp(−(τ/T2)^β). For paramagnetic solids, a β value of 0.5 corresponds to relaxation via direct electron–nucleus dipolar interactions without spin diffusion, while intermediate values between 0.5 and 1.0 can be attributed to a diffusion-limited mechanism. NMR can also be applied to art conservation. Different salts and moisture levels can be detected through the use of solid-state NMR. However, the sample sizes that would have to be retrieved from works of art in order to run them through these large conducting magnets typically exceed acceptable levels. Unilateral NMR techniques use portable magnets that are applied to the object of interest, bypassing the need for sampling. As such, unilateral NMR techniques prove to be useful in the art conservation world. - "National Ultrahigh-Field NMR Facility for Solids". Retrieved 2014-09-22. - Märker, Katharina; Pingret, Morgane; Mouesca, Jean-Marie; Gasparutto, Didier; Hediger, Sabine; De Paëpe, Gaël (2015-11-04). "A New Tool for NMR Crystallography: Complete 13C/15N Assignment of Organic Molecules at Natural Isotopic Abundance Using DNP-Enhanced Solid-State NMR". Journal of the American Chemical Society. 137 (43): 13796–13799. doi:10.1021/jacs.5b09964. ISSN 0002-7863. - Frydman Lucio; Harwood John S (1995). "Isotropic Spectra of Half-Integer Quadrupolar Spins from Bidimensional Magic-Angle Spinning NMR". J. Am. Chem. Soc. 117: 5367–5368. doi:10.1021/ja00124a023. - Massiot D.; Touzo B.; Trumeau D.; Coutures J. P.; Virlet J.; Florian P.; Grandinetti P. J. (1996).
In the previous blog here, we reverse engineered a simple binary containing a plaintext password in Linux with the help of the GNU Debugger (GDB). In this blog, however, we will use the same source code but compile and debug it in Windows. Reverse engineering tools in Windows are very different from those in Linux, but at the assembly level things are largely the same. The main differences you will find are the kernel-level calls and the DLLs, which belong to Windows rather than to the Linux libraries. This is, however, a re-posting of my own blog from here. In this post, I will be using x64dbg, since I wasn't able to find an x64 version of Immunity Debugger or Olly Debugger to reverse engineer the binary. Below are alternatives, along with download links, from which you can choose. If you find other x64 debuggers for Windows, do add them in the comments and I will mention them here: Immunity Debugger is an awesome tool if you are debugging x86 binaries. However, since we are focusing on x64, we will have to use x64dbg, which supports both x86 and x64 disassembly. Once you have downloaded the required debugger, you can compile the source code, which is uploaded to my Git repo here. You can compile the binary in Windows with the command below: $ g++ crack_me.cpp -o crack_mex64.exe -static -m64 Make sure you use a 64-bit version of the g++ compiler, else it will compile but won't run. You can also download the binary from my repo mentioned above. I prefer the Mingw-w64 compiler, but some also use clang x64. It all boils down to which one you are familiar with. Once you have compiled the binary, let's load it up in x64dbg. Remember that our binary accepts an argument, which is our password. So, unlike GDB, where we can supply the argument inside GDB itself, in Windows we have to supply it when loading the binary via the command line.
To load the binary into x64dbg, below is the command line you can use: .\x64dbg.exe crack_mex64.exe pass123 Once the binary is loaded, you will see six windows by default. Let me quickly explain what these windows are: The top left window displays the disassembled code. This is the same as disassemble main in GDB; it walks you through the entire assembly code of the binary. The top right window contains the values of the registers. Since we are debugging an x64 binary, the values of the x86 registers, for example EAX or ECX, live in the lower halves of RAX or RCX. Of the middle two windows, the left one shows you the .text section of the assembly code, and the right one shows the fastcalls in x64 assembly. Fastcall is the x64 calling convention, in which the first four arguments are passed in just four registers. I would recommend skipping this if you are a beginner; for the curious cats, more information can be found here. The bottom left window displays the memory dump of the binary, and the bottom right shows the stack. Whenever variables are passed on to another function, you will see them here. Once the above screen is loaded, we will first search for strings in our binary. We know a few strings from when we executed the binary, i.e. 'Incorrect password', 'Correct password' and 'help'. For now, our primary aim is to find the actual password, and our secondary aim is to modify the RAX register to zero to display 'Correct Password', since our check_pass() function returns 0 or 1 depending upon whether the password is right or wrong. To search for strings, right click anywhere in the disassembled code -> Search for -> All Modules -> String References This will bring you to the below screen, where it shows you the string Incorrect Password.
Since we know there will be a comparison between our input password and the original password before printing whether the password is correct or not, we need to find that comparison in the disassembled code so we can watch the registers and the stack for the cleartext password. Now right click on the 'Incorrect Password' area and select Follow in Disassembler. This will display the below screen in the disassembly area: What I have done here in the above image is add a breakpoint at 00000000004015F6. The main reason for that is that I can see a jmp statement and a call statement right above it. This means a function was called before reaching this point, and the last function executed before the printing of 'Correct/Incorrect password' is the check_pass() function. So this is the point where our interesting function starts. Let's just hit the run button till execution reaches this breakpoint. Once you've reached this breakpoint, hit stepi (F7) till you reach the mov RCX, RAX instruction at address 0000000000401601. Once there, you can see our password pass123 loaded into the RCX register from the RAX register. This is nothing but our argument being passed into the function check_pass(). Now keep stepping through the instructions till you reach address 0000000000401584, which is where the plaintext password gets loaded into the RAX register. You can see in the top right window that our password 'pass123' and the original password 'PASSWORD1' are loaded into the registers RCX and RAX for comparison. That completes our primary objective of recovering the plaintext password. Now, since our passwords are different, the binary will print 'Incorrect password'. We next need to modify the return value of 1 to 0, which is returned by the check_pass() function. If you look at the above image, three lines below the code where the password is loaded into the register you will see test EAX, EAX at address 0000000000401590, with two jump statements after it.
So, if the test indicates the values are equal, execution will jump (je = jump if equal) to crack_mex64.40159B, which is where it will mov 0 into the EAX register. But since the password we entered is wrong, it will not jump there and will continue to the next code segment, where it moves 1 into EAX, i.e. at address 0000000000401594. So we just set up a breakpoint on this address by right clicking and selecting Breakpoint -> Toggle, since we need to modify the register value at that point, and continue running the binary till it hits that breakpoint: Once this breakpoint is hit, you will see the value 1 loaded into the RAX register on the right-hand side. EAX is a 32-bit register, which forms the lower 32 bits of the RAX register. In short: RAX = upper 32 bits + EAX; EAX = upper 16 bits + AX; AX = AH (8 bits) + AL (8 bits); and so on. Therefore, when 1 is loaded into EAX, it by default ends up in the RAX register. Finally, we can just select the RAX register on the right-hand side, right click, and decrement it to zero. You should then see that RAX has changed to zero. Now continue running the binary till it reaches the point where it checks whether the return value is zero or one, which is at address 000000000040160C. You can see in the below image that it uses cmp to check whether the value matches 1. It uses the jne (jump if not equal) condition, which means it will jump to crack_mex64.401636 if the value is not equal to one. And crack_mex64.401636 is nothing but our printing of 'Correct Password' at address 0000000000401636. You can also see in the registers that our password is still pass123, and in spite of that it has printed that it's the correct password. That's it for this Windows cracking session. In the next blog, we will look at somewhat more complex examples rather than just finding plaintext passwords in binaries.
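The sub-register layout described above can be sanity-checked with plain bit masking; the Python below only does the arithmetic, and the 64-bit value is made up for illustration:

```python
RAX = 0x1122334455667788  # a made-up 64-bit register value

EAX = RAX & 0xFFFFFFFF    # EAX is the low 32 bits of RAX
AX  = EAX & 0xFFFF        # AX is the low 16 bits of EAX
AH  = (AX >> 8) & 0xFF    # AH is the high byte of AX
AL  = AX & 0xFF           # AL is the low byte of AX

assert EAX == 0x55667788
assert AX  == 0x7788
assert AH  == 0x77 and AL == 0x88

# Note: on x64, writing to EAX zero-extends, clearing the upper 32 bits of RAX.
# That is why "mov eax, 1" leaves the full RAX register equal to 1.
```

This is also why editing RAX in x64dbg's register pane is enough: EAX, AX, AH and AL are just views onto the same bits.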
Latest posts by Chetan Nayak (see all) - Kerberoasting, exploiting unpatched systems – a day in the life of a Red Teamer - May 21, 2018 - Reverse Engineering For Beginners – XOR encryption – Windows x64 - May 10, 2018 - Reverse Engineering x64 for Beginners – Windows - April 23, 2018
Researcher leads international expedition to unlock the mysteries of monsoons

UMass Dartmouth engineering professor Amit Tandon has just completed an Office of Naval Research-sponsored expedition to the Indian Ocean that brought together 50 scientists from the U.S. and India to study the conditions that create monsoons, which affect weather around the globe and the agrarian economy for more than one billion people in Indian Ocean nations. “The world’s weather cycle is impacted by storms in the Indian Ocean shaped by the summer monsoon, yet we know little about these systems,” Dr. Tandon said. “A better understanding of monsoons could help us extend reliable storm forecasting from a few days to two weeks.” Summer, or southwest, monsoons are moisture-soaked seasonal winds that bring critical rainfall to the Indian subcontinent during the June-September wet season. An abundant season provides sustaining rainfall that replenishes water reservoirs and yields bountiful crop harvests in India and nearby countries. By contrast, a weak season can lead to drought, soaring food prices and a battered economy. The Tandon-led team studied the uppermost layer of the Bay of Bengal—which is part of the Indian Ocean and a region of monsoon formation. Although the bay is composed of salt water, large amounts of fresh water are added to it regularly from rivers and rainstorms. “The layer of relatively less salty water near the surface responds rapidly and dramatically to solar heating and nighttime cooling,” said Tandon.
“It regulates moisture supply to the atmosphere and can trap the sun’s heat beneath the surface of the water, releasing it into the air weeks, or even months, later—increasing the power of a monsoon.” The researchers used wirewalkers (wave-powered vehicles that can sample water at various depths), buoys, gliders and towed platforms—equipped with data-collecting sensors—to compile information about the Bay of Bengal’s water and air temperature, atmospheric moisture, and mixtures and movements of fresh and salt water—all factors contributing to monsoon activity. “Our field study has mapped out the upper ocean structure of the north Bay of Bengal in unprecedented detail, with cutting-edge oceanography instrumentation, some of which is being used for the first time,” said Tandon. With this research complete, U.S. and Indian scientists will now analyze the cruise data, in the hopes of designing a computer forecasting model to accurately predict future monsoon formation. The UMass Dartmouth Center for Scientific Computing and Visualization Research and the Massachusetts Green High Performance Computing Center in Holyoke will be used to analyze much of the data. The team, led by Dr. Tandon, worked aboard the research vessel R/V Roger Revelle, which is operated by the Scripps Institution of Oceanography under a charter agreement with ONR. Another team of Indian and U.S. scientists on the Indian research vessel Sagar Nidhi worked jointly with those on the Roger Revelle. “This research will lead to an improved understanding and prediction of the ocean and atmosphere,” said Dr. Frank Herr, who heads ONR’s Ocean Battlespace Sensing Department. “It also enables ONR to test new and innovative oceanography technology, as well as develop stronger partnerships with Indian scientists.” Incomplete knowledge of the dynamics of monsoons makes it difficult to accurately predict the onset, strength and intraseasonal variations in the monsoon season.
ONR’s collaborative research will gather data to help improve and refine the forecasting of these winds. “The predictability of ocean weather is very important to the Navy’s global operations and the safety of ships at seas,” said ONR Program Manager Dr. Terri Paluszkiewicz. “Fundamental studies of monsoon air-sea interaction are crucial for better weather and climate prediction for the entire global community.” Monsoon research aligns with several tenets of the Cooperative Strategy for 21st Century Seapower, a maritime strategy shared by the U.S. Navy, Marine Corps and Coast Guard. The strategy calls for increased focus on battlespace awareness, which includes surveillance, intelligence gathering and greater knowledge about the environments in which the military operates.
COLUMBUS, Ohio (AP) - Ohio State University researchers say they have discovered a 22nd common amino acid, another step in the scientific community's effort to unlock secrets of the genetic code. It was the first time since 1986 that a new gene-encoded amino acid had been discovered. Before that, scientists had thought for nearly a century that only 20 gene-encoded amino acids existed. DNA uses amino acids as the building blocks of protein in all living things, instructing organisms how to perform tasks. The new amino acid, called pyrrolysine, means DNA has decided there is another task to perform. "That means that organisms can have the special function they need. The code in the DNA can really change and take on new tasks that we haven't seen before," OSU researcher Joseph Krzycki said Friday. "This is the most basic level the genetic code works on. ... It just serves right now to show us how plastic evolution is." The OSU researchers, made up of two teams of biochemists and microbiologists, reported their discovery Wednesday in the journal Science. "I think it's a once in a lifetime experience," said OSU researcher Michael Chan. "There were only 21 and now they're 22. The chances of finding another one is going to be a real long shot." John Atkins, a research professor in the department of human genetics at the University of Utah, said the Ohio State research empowers efforts by scientists who are looking for ways to make proteins with different amino acids. "The significance is not so much there's a new amino acid, but rather what it shows about the versatility of the genetic code," Atkins said. In 1995, Ohio State started research into what appeared to be abnormalities in the production of a certain protein by a known amino acid called lysine. By 2000, the researchers had discovered that the protein was unique and had a special function: help microbes called methanogens convert material into methane, the organisms' energy source.
"The amino acid is a key piece in being able to do that," Chan said. The amino acid is found in bacteria and some single-celled organisms but likely won't be found in humans, Chan said. But Krzycki said the amino acid could still turn up in other living things. "Nature doesn't waste chemistry," he said. "Once we've found that it's doing something fundamental that can do different things, it won't waste that. We'll probably see it in other places." (Copyright 2002 by The Associated Press. All Rights Reserved.)
Mudslides wipe away plants and topsoil, depleting terrain of nutrients for plant regrowth and burying swaths of vegetation. Buried vegetable matter decomposes and releases carbon dioxide and other gases to the atmosphere. The expected carbon dioxide release from the mudslides following the Wenchuan earthquake is similar to that caused by Hurricane Katrina's plant damage, report Diandong Ren, of the University of Texas at Austin, and his colleagues, who used a computer model to predict the ecosystem impacts of the mudslides. What's more, the vegetation destruction will lead to a loss of nitrogen from the quake-devastated region's ecosystem twice as large as the loss of that nutrient from California ecosystems because of the October 2007 wildfires there, Ren says. And, as the biomass buried by the China quake rots, 14 percent of the nitrogen will be spewed into the atmosphere as nitrous oxide, a pollutant typically released from agricultural operations, automobiles, and other sources. The team will publish its findings on 4 March 2009 in Geophysical Research Letters, a journal of the American Geophysical Union (AGU). Although landscapes devastated by the Chinese earthquake may re-green soon, the recovery will be cosmetic, says Ren. "From above, the area will look green in a few years, because grass grows back quickly, but the soil nutrients recover very slowly, and other kinds of plants won't grow," he says. The magnitude-7.9 Wenchuan quake was followed by many aftershocks in the Sichuan Basin, an area that, because of its geological features - deep valleys enclosed by high mountains with steep slopes - is already prone to landslides. May is also the rainy season in Sichuan, and the combination of aftershocks and major precipitation events in the days following the earthquake caused severe mudslides. The avalanches killed thousands, destroyed roads and blocked rivers and access to relief, and shredded water and power stations, among other facilities. 
To predict ecosystem impacts of the mudslides, Ren and his collaborators applied a comprehensive computer model of landslides that incorporates several physical parameters, such as soil mechanics, root mechanical reinforcement (the root's grip of the dirt, which mitigates erosion), and precipitation. Ren's model also shows that the primary mudslides following the earthquake removed large areas of nutrient-rich topsoil, leaving behind deep scars in the land that will take decades to recover, preventing the re-growth of vegetation. The researchers write in their paper that, although being able to predict the location and timing of a mudslide is essential to mitigate its impacts, current mudslide models are not accurate enough. "Previous approaches, which are mainly based on statistical approaches and empirical measures, have no predictive ability of where mudslides are going to happen," Ren says. His model, he claims, could be applied to forecast under what circumstances a landslide would occur at a specific location. He points out this would be particularly useful for places such as Southern California, where global warming predictions call for an increase in the frequency of these events.
Males across the animal world have evolved elaborate traits to attract females, from huge peacock tails to complex bird songs and frog calls. But what keeps them from getting more colorful feathers, longer tails, or more melodious songs? Predators, for one. Increased elaboration can draw predators in, imposing an enormous cost on males with these sexy traits. In a new paper appearing this week in Science, a group of biologists has shown that females themselves can also limit the evolution of increased elaboration. Studying neotropical túngara frogs, they found that females lose their ability to detect differences in male mating calls as the calls become more elaborate. "We have shown that the female túngara frog brains have evolved to process some kinds of information and not others," says Mike Ryan, professor of integrative biology at The University of Texas at Austin, "and that this limits the evolution of those signals." Imagine looking at a group of five oranges next to a group of six. At a glance, you would quickly notice that one group has one more orange than the other. Now, imagine looking at a pile of 100 oranges next to a pile of 101. It would be nearly impossible for you to notice the difference in size (one orange) between those two piles at a glance. This is known as Weber's Law, which states that stimuli are compared based on proportional differences rather than absolute differences (one orange in the case above). In túngara frogs, males gather en masse to attract female frogs with a call that is made up of a longer "whine" followed by one or more short "chucks." Through a series of experiments conducted in Panama, Ryan and his collaborators found that females prefer male calls with the most chucks, but their preference was based on the ratio of the number of chucks. As males elaborate their call by adding more chucks, their relative increase in attractiveness decreases due to a perceptual constraint on the part of females.
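The orange example makes Weber's Law easy to state as arithmetic: what matters is the ratio between two stimulus magnitudes, not their absolute difference. A toy Python sketch (the helper function is hypothetical, written just for this illustration):

```python
def weber_fraction(a, b):
    """Proportional (not absolute) difference between two stimulus magnitudes."""
    return abs(a - b) / min(a, b)

# The same absolute difference of one orange gives very different discriminability:
assert weber_fraction(5, 6) == 0.2        # easy to tell apart at a glance
assert weber_fraction(100, 101) == 0.01   # nearly indistinguishable
```

By the same logic, each chuck a male frog adds buys a proportionally smaller gain in how different his call sounds, just as the 101st orange is harder to notice than the sixth.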
Male túngara frog calls also attract a predator: the frog eating fringe-lipped bat. To confirm that male song elaboration wasn't limited by these predators, the researchers also studied how the bats respond to additional "chucks" in the male call. They discovered that hunting bats choose their prey based on chuck number ratio, just as the female frogs do. So, as males elaborate their call by adding chucks, the relative increase in predation risk decreases with each additional chuck. "What this tells us is that predation risk is unlikely to limit male call evolution," says Karin Akre, lecturer at The University of Texas at Austin. "Instead, it is the females' cognition that limits the evolution of increasing chuck number."
What is global warming? What causes climate change? Since humans started using the Earth's resources in ever more intensive ways, we have seen a rise in global temperatures and changes to weather patterns that could have drastic consequences for future generations - but how much of this is due to the lives we lead today is still heavily debated. With sea levels rising and greenhouse gases increasing, one thing is certain - the environmental impact of climate change can be felt today. Rising sea levels are one of the most worrying consequences of global warming, threatening over 100 million people living in vulnerable coastal areas. But measuring the rate of the rise is fraught with difficulty. Jakobshavn, the world's fastest-moving glacier, loses 12.5 square km of ice in one of the most significant calving events on record.
Common name: channeled applesnail (available through www.itis.gov)
Identification: Typical applesnails are globular in shape. Normal coloration typically includes bands of brown, black, and yellowish-tan, and color patterns are extremely variable. Albino and gold color variations exist (R. Howells, personal communication).
Size: 62.5 mm shell height, 56 mm shell width (Hayes et al. 2012)
Native Range: South America, central portion of the continent, primarily Argentina (northern), Bolivia, Brazil, Paraguay, and Uruguay (Hayes et al. 2012).
Table 1. States with nonindigenous occurrences, the earliest and latest observations in each state, and the tally and names of HUCs with observations†. The list of references for all nonindigenous occurrences of Pomacea canaliculata is found here. Table last updated 5/25/2018. † Populations may not be currently present.
Ecology: Optimal water temperatures for rearing Pomacea canaliculata are between 15 and 35 °C (Seuffert and Martin 2016).
Means of Introduction: Probable aquarium release for initial introductions.
Status: Established in California and Hawaii.
Impact of Introduction: Impacts rice and taro agriculture worldwide where introduced.
References: Hayes, K.A., R.H. Cowie, S.C. Thiengo, and E.E. Strong. 2012. Comparing apples with apples: clarifying the identities of two highly invasive Neotropical Ampullaridae (Caenogastropoda). Zoological Journal of the Linnean Society 166(4):723-753. Howells, R. Personal communication. Texas Parks and Wildlife Department. Savaya-Alkalay, A., Ovadia, O., Barki, A., and A. Sagi. 2018. Size-selective predation by all-male prawns: implications for sustainable biocontrol of snail invasions. Biological Invasions 20:137–149. Seuffert, M. E. and P. R. Martin. 2016.
Thermal limits for the establishment and growth of populations of the invasive apple snail Pomacea canaliculata. Biological Invasions. DOI 10.1007/s10530-016-1305-0. Daniel, W. M. Revision Date: 2/16/2018 Daniel, W. M., 2018, Pomacea canaliculata (Lamarck, 1828): U.S. Geological Survey, Nonindigenous Aquatic Species Database, Gainesville, FL, https://nas.er.usgs.gov/queries/FactSheet.aspx?speciesID=980, Revision Date: 2/16/2018, Access Date: 7/16/2018 This information is preliminary or provisional and is subject to revision. It is being provided to meet the need for timely best science. The information has not received final approval by the U.S. Geological Survey (USGS) and is provided on the condition that neither the USGS nor the U.S. Government shall be held liable for any damages resulting from the authorized or unauthorized use of the information.
- Principles of Chemistry I: Honors Fall 2015, Unique 49310 Homework, Week 6

1. There are two basic ways in which matter (atoms or molecules) interacts with light. 1) Light can be absorbed by the material to move an electron to a state of higher quantum number. 2) Light can be emitted from a material to move an electron to a state of lower quantum number. In words and equations, describe each of these processes. If this event occurs in the visible region of the spectrum, what will be the color of the atom? Although we have only quantified this process for 1-electron atoms (through the Bohr model), we will see that the basic concept is generalizable to multi-electron atoms and molecules. Find an example of a familiar absorption or emission event from your textbook or from our last problem set.

2. Our sun is approximately a 5000 K blackbody. The figure below shows the solar radiation spectrum at the top of the atmosphere (yellow) and at sea level (red), compared to a perfect 5250 ˚C blackbody. The chemical identities responsible for the most significant absorption bands are labeled on the figure. a) How close is the sun to a theoretical 5250 ˚C blackbody? b) Explain the difference between the yellow and red curves. (Don't worry about the differences in the visible region of the spectrum - we can't explain that yet.) c) Molecules in the atmosphere that absorb strongly in the infrared region of the spectrum result in warming the atmosphere, and thus the surface of the earth. These are generally called "greenhouse gases." Based on the data given in the figure, what is the most significant greenhouse gas in our atmosphere? d) High energy light can be damaging to cells and tissues. What would happen if ozone (O3) were eliminated from the atmosphere?

3. Use the Bohr model of the atom to calculate the radius and the energy of the B4+ ion in the n = 3 state. How much energy would be required to remove the electrons from 1 mol of B4+ in this state?
What frequency and wavelength of light would be emitted in a transition from the n = 3 to n = 2 state of the ion?

4. a) In class we discussed the Balmer lines, which are visible emission bands from the H atom. These represent light that is emitted from the H atom from n = 3, 4, 5, and 6 to n = 2. Determine the energy of each of these 4 transitions and the corresponding wavelength of light that is observed. b) Lyman discovered another set of emission lines from the H atom, corresponding to transitions from n = 2, 3, 4, 5, and 6 to n = 1. Determine the energies of these 5 transitions and the region of the electromagnetic spectrum that all 5 lines fall into. c) Paschen discovered yet another set of emission lines from the H atom, this time corresponding to transitions from n = 4, 5, 6, and 7 to n = 3. Determine the energy of each of these 4 transitions and the region of the electromagnetic spectrum that all 4 lines fall into. If you draw all of this information on one (large) figure, you will have essentially recreated Figure 4.15 in your textbook.

5. An electron in a chemical bond can be thought of as a standing wave with fixed ends. a) If the bond is 1 Å long, determine the wavelength of the electron in its ground and first excited states. b) Determine the number of nodes in the ground and first excited states. Draw these waves.

6. The position of an electron is known within 10 Å. What is the minimum uncertainty in its velocity?

7. Photons of green light are used to determine the position of a baseball to the precision of one wavelength. Determine the minimum uncertainty in the velocity of the baseball. Is it likely that this uncertainty would affect a batter's ability to hit the ball? Estimate the mass of a baseball as 150 g.

8. When an intense beam of green light is directed onto a copper surface, no electrons are ejected. What will happen if the green light is replaced with red light?

9. Alarm systems use the photoelectric effect.
A beam of light strikes a piece of metal, which ejects electrons continuously to induce a small electric current. When a person steps into the light beam, the current is interrupted and the alarm is triggered. What is the maximum wavelength of light that can be used in such an alarm system if the metal is sodium, for which the minimum energy required to eject an electron is 5.51 x 10^-19 J?

10. Name a transition in C5+ that will lead to the absorption of green light.
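As a study aid (not part of the assignment), the Bohr-model problems above can be checked numerically. The sketch below uses the standard hydrogen-like formulas E_n = −13.606 eV · Z²/n², r_n = a₀ · n²/Z, and λ = hc/E for photon wavelengths; the constants are rounded CODATA values.

```python
# Worked numerical sketch for problems 3, 4a, and 9 (Bohr model and
# the photoelectric effect). Constants are rounded CODATA values.
H_C = 1239.841984        # h*c in eV·nm
RYDBERG_EV = 13.605693   # hydrogen ground-state binding energy, eV
BOHR_RADIUS_A = 0.529177 # Bohr radius, angstroms
EV_TO_J = 1.602176634e-19
AVOGADRO = 6.02214076e23

def bohr_energy_eV(Z, n):
    """Energy of a one-electron ion with nuclear charge Z in level n."""
    return -RYDBERG_EV * Z**2 / n**2

def bohr_radius_A(Z, n):
    """Orbit radius in angstroms for a one-electron ion."""
    return BOHR_RADIUS_A * n**2 / Z

def transition_wavelength_nm(Z, n_hi, n_lo):
    """Wavelength of the photon emitted in an n_hi -> n_lo transition."""
    dE = abs(bohr_energy_eV(Z, n_lo) - bohr_energy_eV(Z, n_hi))  # photon energy, eV
    return H_C / dE

# Problem 3: B4+ (Z = 5) in the n = 3 state
E3 = bohr_energy_eV(5, 3)                       # energy of the state, eV
r3 = bohr_radius_A(5, 3)                        # radius of the state, angstroms
molar_ionization_J = -E3 * EV_TO_J * AVOGADRO   # energy to ionize 1 mol, J
lam_32 = transition_wavelength_nm(5, 3, 2)      # 3 -> 2 emission wavelength, nm

# Problem 9: maximum wavelength for the sodium photocell
work_function_J = 5.51e-19
lam_max_nm = H_C / (work_function_J / EV_TO_J)
```

Running this gives roughly −37.8 eV and 0.95 Å for B4+ in n = 3, about 3.6 MJ per mole, a 3→2 emission wavelength near 26 nm (extreme ultraviolet), and a maximum alarm wavelength near 360 nm for sodium.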
It is the youngest nodosaur ever discovered, and it establishes a new genus and species that lived approximately 110 million years ago during the Early Cretaceous Period. Nodosaurs have been found in diverse locations worldwide, but they've rarely been found in the United States. The findings are published in the September 9 issue of the Journal of Paleontology. "Now we can learn about the development of limbs and the development of skulls early on in a dinosaur's life," says David Weishampel, Ph.D., a professor of anatomy at the Johns Hopkins University School of Medicine. "The very small size also reveals that there was a nearby nesting area or rookery, since it couldn't have wandered far from where it hatched. We have the opportunity to find out about dinosaur parenting and reproductive biology, as well as more about the lives of Maryland dinosaurs in general." The fossil was discovered in 1997 by Ray Stanford, a dinosaur tracker who often spent time looking for fossils close to his home; this time he was searching a creek bed after an extensive flood. Stanford identified it as a nodosaur and called Weishampel, a paleontologist and expert in dinosaur systematics. Weishampel and his colleagues established the fossil's identity as a nodosaur by identifying a distinctive pattern of bumps and grooves on the skull. They then did a computer analysis of the skull shape, comparing its proportions to those of ten skulls from different species of ankylosaurs, the group that contains nodosaurs. They found that this dinosaur was closely related to some of the nodosaur species, although it had a shorter snout overall than the others. Comparative measurements enabled them to designate a new species, Propanoplosaurus marylandicus. In addition to being the youngest nodosaur ever found, it is the first hatchling of any dinosaur species ever recovered in the eastern United States, says Weishampel.
The area had originally been a flood plain, where Weishampel says the dinosaur drowned. Cleaning the fossil revealed a hatchling nodosaur on its back, much of its body imprinted along with the top of its skull. Weishampel determined the dinosaur's age at time of death by analyzing the degree of development and articulation capability of the ends of the bones, as well as deducing whether the bones themselves were porous, as young bones would not be fully solid. Size was also a clue: the body in the tiny fossil was only 13 cm long, just shorter than the length of a dollar bill. Adult nodosaurs are estimated to have been 20 to 30 feet long. Weishampel also used the position and quality of the fossil to deduce the dinosaur's method of death and preservation: drowning, and being buried by sediment in the stream. Egg shells have never been found preserved in the vicinity; this, along with the layout of the bones and the size of some very small nodosaur footprints found nearby, led Weishampel to believe that the dinosaur was a hatchling, rather than an embryo, because it was able to walk independently. "We didn't know much about hatchling nodosaurs at all prior to this discovery," says Weishampel. "And this is certainly enough to motivate more searches for dinosaurs in Maryland, along with more analysis of Maryland dinosaurs." Stanford has donated the hatchling nodosaur to the Smithsonian's National Museum of Natural History, where it is now on display to the public and also available for research. This study was funded by the Johns Hopkins Center of Functional Anatomy and Evolution.
Valerie DeLeon, also of the Center of Functional Anatomy and Evolution, was an additional author.
Meteor Shower (photo credit: Itamar Hassan) A sparkling set of shooting stars decorated Israel's skies during the late night hours between Sunday and Monday. The Perseids meteor shower, which has occurred every summer for two millennia, began on approximately July 17 and will end August 24 this year. The hourly rate – the number of meteors an observer can see per hour during a dark, clear night – was expected to be 100 meteors per hour, with a velocity of 59 km. per second, according to Tel Aviv University Astronomy Club data. A number of events were held to mark the shower, including an exploratory evening with the Bareket Observatory star observation team at Carmi Har-Hanegev Farm, on the edge of the Ramon Crater. Activities at the Bareket event included a combination lecture and film presentation, training on star observation maps, observation through a sophisticated telescope and explanations about constellations with a special laser-guided celestial tour. The Society for the Protection of Nature presented an "Evening of Shooting Stars," a celebratory night in honor of astronomy. Yet another event, called Stardust, occurred at the Negev Desert Ashram, built on the ruins of what used to be the Shittim Nahal army outpost in the Arava. Nir Lahav, a physics doctoral student at Bar-Ilan University, and Yoav Landsman, an aerospace engineer at Israel Aerospace Industries, led discussions.
Schwarzschild metric

In Einstein's theory of general relativity, the Schwarzschild metric (also known as the Schwarzschild vacuum or Schwarzschild solution) is the solution to the Einstein field equations that describes the gravitational field outside a spherical mass, on the assumption that the electric charge of the mass, angular momentum of the mass, and universal cosmological constant are all zero. The solution is a useful approximation for describing slowly rotating astronomical objects such as many stars and planets, including Earth and the Sun. The solution is named after Karl Schwarzschild, who first published the solution in 1916. According to Birkhoff's theorem, the Schwarzschild metric is the most general spherically symmetric, vacuum solution of the Einstein field equations. A Schwarzschild black hole or static black hole is a black hole that has neither electric charge nor angular momentum. A Schwarzschild black hole is described by the Schwarzschild metric, and cannot be distinguished from any other Schwarzschild black hole except by its mass. The Schwarzschild black hole is characterized by a surrounding spherical boundary, called the event horizon, which is situated at the Schwarzschild radius, often called the radius of a black hole. The boundary is not a physical surface, and if a person fell through the event horizon (before being torn apart by tidal forces), they would not notice any physical surface at that position; it is a mathematical surface which is significant in determining the black hole's properties. Any non-rotating and non-charged mass that is smaller than its Schwarzschild radius forms a black hole. The solution of the Einstein field equations is valid for any mass M, so in principle (according to general relativity theory) a Schwarzschild black hole of any mass could exist if conditions became sufficiently favorable to allow for its formation.
The Schwarzschild metric

In Schwarzschild coordinates (t, r, θ, φ), the line element of the Schwarzschild metric has the form

c² dτ² = (1 − rs/r) c² dt² − (1 − rs/r)⁻¹ dr² − r² (dθ² + sin²θ dφ²),

where:
- when dτ² is positive, τ is the proper time (time measured by a clock moving along the same world line as the test particle),
- c is the speed of light,
- t is the time coordinate (measured by a stationary clock located infinitely far from the massive body),
- r is the radial coordinate (measured as the circumference, divided by 2π, of a sphere centered around the massive body),
- θ is the colatitude (angle from north, in units of radians),
- φ is the longitude (also in radians), and
- rs is the Schwarzschild radius of the massive body, a scale factor which is related to its mass M by rs = 2GM/c², where G is the gravitational constant.

The analogue of this solution in classical Newtonian theory of gravity corresponds to the gravitational field around a point particle. The radial coordinate turns out to have physical significance as the "proper distance between two events that occur simultaneously relative to the radially moving geodesic clocks, the two events lying on the same radial coordinate line".

In practice, the ratio rs/r is almost always extremely small. For example, the Schwarzschild radius rs of the Earth is roughly 8.9 mm, while the Sun, which is 3.3 × 10⁵ times as massive, has a Schwarzschild radius of approximately 3.0 km. Even at the surface of the Earth, the corrections to Newtonian gravity are only one part in a billion. The ratio becomes large only in relatively close proximity to black holes and other ultra-dense objects such as neutron stars.

The Schwarzschild metric is a solution of Einstein's field equations in empty space, meaning that it is valid only outside the gravitating body. That is, for a spherical body of radius R the solution is valid for r > R. To describe the gravitational field both inside and outside the gravitating body the Schwarzschild solution must be matched with some suitable interior solution at r = R, such as the interior Schwarzschild metric.
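As a quick numerical check on the figures quoted above, a minimal Python sketch (the Earth and Sun masses are standard reference values, not taken from this article) evaluates rs = 2GM/c²:

```python
# A minimal sketch computing the Schwarzschild radius r_s = 2GM/c^2
# for the Earth and the Sun.
G = 6.67430e-11    # gravitational constant, m^3 kg^-1 s^-2
C = 2.99792458e8   # speed of light, m/s

def schwarzschild_radius(mass_kg):
    """Schwarzschild radius in metres for a given mass."""
    return 2.0 * G * mass_kg / C**2

M_EARTH = 5.972e24  # kg, standard reference value
M_SUN = 1.989e30    # kg, standard reference value

r_earth = schwarzschild_radius(M_EARTH)  # metres
r_sun = schwarzschild_radius(M_SUN)      # metres
```

This reproduces rs ≈ 8.9 mm for the Earth and rs ≈ 3.0 km for the Sun.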
The Schwarzschild solution is named in honour of Karl Schwarzschild, who found the exact solution in 1915 and published it in January 1916, a little more than a month after the publication of Einstein's theory of general relativity. It was the first exact solution of the Einstein field equations other than the trivial flat space solution. Schwarzschild died shortly after his paper was published, as a result of a disease he contracted while serving in the German army during World War I.

In the early years of general relativity there was a lot of confusion about the nature of the singularities found in the Schwarzschild and other solutions of the Einstein field equations. In Schwarzschild's original paper, he put what we now call the event horizon at the origin of his coordinate system. In this paper he also introduced what is now known as the Schwarzschild radial coordinate (r in the equations above), as an auxiliary variable. In his equations, Schwarzschild was using a different radial coordinate that was zero at the Schwarzschild radius. A more complete analysis of the singularity structure was given by David Hilbert in the following year, identifying the singularities both at r = 0 and r = rs. Although there was general consensus that the singularity at r = 0 was a 'genuine' physical singularity, the nature of the singularity at r = rs remained unclear. In 1921 Paul Painlevé and in 1922 Allvar Gullstrand independently produced a metric, a spherically symmetric solution of Einstein's equations, which we now know is a coordinate transformation of the Schwarzschild metric (Gullstrand–Painlevé coordinates), in which there was no singularity at r = rs. They, however, did not recognize that their solutions were just coordinate transforms, and in fact used their solution to argue that Einstein's theory was wrong.
In 1924 Arthur Eddington produced the first coordinate transformation (Eddington–Finkelstein coordinates) that showed that the singularity at r = rs was a coordinate artifact, although he also seems to have been unaware of the significance of this discovery. Later, in 1932, Georges Lemaître gave a different coordinate transformation (Lemaître coordinates) to the same effect and was the first to recognize that this implied that the singularity at r = rs was not physical. In 1939 Howard Robertson showed that a free-falling observer descending in the Schwarzschild metric would cross the r = rs singularity in a finite amount of proper time even though this would take an infinite amount of time in terms of coordinate time t. In 1950, John Synge produced a paper that showed the maximal analytic extension of the Schwarzschild metric, again showing that the singularity at r = rs was a coordinate artifact and that it represented two horizons. A similar result was later rediscovered by George Szekeres, and independently by Martin Kruskal. The new coordinates, nowadays known as Kruskal–Szekeres coordinates, were much simpler than Synge's, but both provided a single set of coordinates that covered the entire spacetime. However, perhaps due to the obscurity of the journals in which the papers of Lemaître and Synge were published, their conclusions went unnoticed, with many of the major players in the field, including Einstein, believing that the singularity at the Schwarzschild radius was physical. Real progress was made in the 1960s when the more exact tools of differential geometry entered the field of general relativity, allowing more exact definitions of what it means for a Lorentzian manifold to be singular. This led to definitive identification of the r = rs singularity in the Schwarzschild metric as an event horizon (a hypersurface in spacetime that can be crossed in only one direction).
Singularities and black holes

The Schwarzschild solution appears to have singularities at r = 0 and r = rs; some of the metric components "blow up" (entail division by zero or division by infinity) at these radii. Since the Schwarzschild metric is expected to be valid only for those radii larger than the radius R of the gravitating body, there is no problem as long as R > rs. For ordinary stars and planets this is always the case. For example, the radius of the Sun is approximately 700,000 km, while its Schwarzschild radius is only 3 km.

The singularity at r = rs divides the Schwarzschild coordinates in two disconnected patches. The exterior Schwarzschild solution with r > rs is the one that is related to the gravitational fields of stars and planets. The interior Schwarzschild solution with 0 ≤ r < rs, which contains the singularity at r = 0, is completely separated from the outer patch by the singularity at r = rs. The Schwarzschild coordinates therefore give no physical connection between the two patches, which may be viewed as separate solutions. The singularity at r = rs is an illusion, however; it is an instance of what is called a coordinate singularity. As the name implies, the singularity arises from a bad choice of coordinates or coordinate conditions. When changing to a different coordinate system (for example Lemaitre coordinates, Eddington–Finkelstein coordinates, Kruskal–Szekeres coordinates, Novikov coordinates, or Gullstrand–Painlevé coordinates) the metric becomes regular at r = rs and one can extend the external patch to values of r smaller than rs. Using a different coordinate transformation one can then relate the extended external patch to the inner patch.

The case r = 0 is different, however. If one asks that the solution be valid for all r, one runs into a true physical singularity, or gravitational singularity, at the origin. To see that this is a true singularity one must look at quantities that are independent of the choice of coordinates.
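The coordinate-independent check described above can be illustrated numerically. In the sketch below (geometrized units with rs = 1, an assumption made purely for illustration), the metric component g_tt = −(1 − rs/r) vanishes at the horizon, while the Kretschmann curvature invariant K = 12 rs²/r⁶ remains finite there and diverges only as r → 0:

```python
# Coordinate singularity vs. physical singularity, in units where r_s = 1.
R_S = 1.0

def g_tt(r):
    """Schwarzschild-coordinate metric component; vanishes at the horizon."""
    return -(1.0 - R_S / r)

def kretschmann(r):
    """Coordinate-independent curvature invariant K = 12 r_s^2 / r^6."""
    return 12.0 * R_S**2 / r**6

k_horizon = kretschmann(1.000001)  # finite at the horizon: nothing blows up
k_inner = kretschmann(0.01)        # grows without bound approaching r = 0
```

K is about 12 at the horizon but unboundedly large near r = 0, which is why only r = 0 is a genuine physical singularity.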
One such important quantity is the Kretschmann invariant, which is given by

K = 48 G² M² / (c⁴ r⁶) = 12 rs² / r⁶.

At r = 0 the curvature becomes infinite, indicating the presence of a singularity. At this point the metric, and spacetime itself, is no longer well-defined. For a long time it was thought that such a solution was non-physical. However, a greater understanding of general relativity led to the realization that such singularities were a generic feature of the theory and not just an exotic special case.

The Schwarzschild solution, taken to be valid for all r > 0, is called a Schwarzschild black hole. It is a perfectly valid solution of the Einstein field equations, although it has some rather bizarre properties. For r < rs the Schwarzschild radial coordinate r becomes timelike and the time coordinate t becomes spacelike. A curve at constant r is no longer a possible worldline of a particle or observer, not even if a force is exerted to try to keep it there; this occurs because spacetime has been curved so much that the direction of cause and effect (the particle's future light cone) points into the singularity. The surface r = rs demarcates what is called the event horizon of the black hole. It represents the point past which light can no longer escape the gravitational field. Any physical object whose radius R becomes less than or equal to the Schwarzschild radius will undergo gravitational collapse and become a black hole.

The Schwarzschild solution can be expressed in a range of different choices of coordinates besides the Schwarzschild coordinates used above. Different choices tend to highlight different features of the solution. The table below shows some popular choices.
| Coordinates | Properties |
| Eddington–Finkelstein (ingoing) | regular at horizon; extends across future horizon |
| Eddington–Finkelstein (outgoing) | regular at horizon; extends across past horizon |
| Gullstrand–Painlevé coordinates | regular at horizon |
| Isotropic coordinates | valid only for …; isotropic lightcones on constant time slices |
| Kruskal–Szekeres coordinates | regular at horizon; maximally extends to full spacetime |
| Lemaître coordinates | regular at horizon |

In the table above, some shorthand has been introduced for brevity. The speed of light c has been set to one. The notation dΩ² is used for the metric of a two-dimensional sphere. Moreover, in each entry R and T denote alternative choices of radial and time coordinate for the particular coordinates. Note, the R and/or T may vary from entry to entry.

The spatial curvature of the Schwarzschild solution for r > rs can be visualized as follows. Consider a constant time equatorial slice through the Schwarzschild solution (θ = π/2, t = constant) and let the position of a particle moving in this plane be described with the remaining Schwarzschild coordinates (r, φ). Imagine now that there is an additional Euclidean dimension w, which has no physical reality (it is not part of spacetime). Then replace the (r, φ) plane with a surface dimpled in the w direction according to the equation

w = 2 √(rs (r − rs))

(Flamm's paraboloid). This surface has the property that distances measured within it match distances in the Schwarzschild metric, because with the definition of w above,

dw² + dr² + r² dφ² = dr² / (1 − rs/r) + r² dφ².

Thus, Flamm's paraboloid is useful for visualizing the spatial curvature of the Schwarzschild metric. It should not, however, be confused with a gravity well. No ordinary (massive or massless) particle can have a worldline lying on the paraboloid, since all distances on it are spacelike (this is a cross-section at one moment of time, so any particle moving on it would have an infinite velocity).
Even a tachyon would not move along the path that one might naively expect from a "rubber sheet" analogy: in particular, if the dimple is drawn pointing upward rather than downward, the tachyon's path still curves toward the central mass, not away. See the gravity well article for more information.

Flamm's paraboloid may be derived as follows. The Euclidean metric in the cylindrical coordinates (r, φ, w) is written

ds² = dw² + dr² + r² dφ².

Letting the surface be described by the function w = w(r), the Euclidean metric can be written as

ds² = (1 + (dw/dr)²) dr² + r² dφ².

Comparing this with the Schwarzschild metric in the equatorial plane (θ = π/2) at a fixed time (t = constant, dt = 0),

ds² = dr² / (1 − rs/r) + r² dφ²,

yields an integral expression for w(r):

w(r) = ∫ dr / √(r/rs − 1),

whose solution is Flamm's paraboloid.

A particle orbiting in the Schwarzschild metric can have a stable circular orbit with r > 3rs. Circular orbits with r between 1.5rs and 3rs are unstable, and no circular orbits exist for r < 1.5rs. The circular orbit of minimum radius 1.5rs corresponds to an orbital velocity approaching the speed of light. It is possible for a particle to have a constant value of r between rs and 1.5rs, but only if some force acts to keep it there. Noncircular orbits, such as Mercury's, dwell longer at small radii than would be expected classically. This can be seen as a less extreme version of the more dramatic case in which a particle passes through the event horizon and dwells inside it forever. Intermediate between the case of Mercury and the case of an object falling past the event horizon, there are exotic possibilities such as knife-edge orbits, in which the satellite can be made to execute an arbitrarily large number of nearly circular orbits, after which it flies back outward.

The group of isometries of the Schwarzschild metric is the subgroup of the ten-dimensional Poincaré group which takes the time axis (trajectory of the star) to itself. It omits the spatial translations (three dimensions) and boosts (three dimensions).
It retains the time translations (one dimension) and rotations (three dimensions). Thus it has four dimensions. Like the Poincaré group, it has four connected components: the component of the identity; the time reversed component; the spatial inversion component; and the component which is both time reversed and spatially inverted. Components which are obtainable by the symmetries of the Riemann tensor are not displayed.
- Deriving the Schwarzschild solution
- Reissner–Nordström metric (charged, non-rotating solution)
- Kerr metric (uncharged, rotating solution)
- Kerr–Newman metric (charged, rotating solution)
- Black hole, a general review
- Schwarzschild coordinates
- Kruskal–Szekeres coordinates
- Eddington–Finkelstein coordinates
- Gullstrand–Painlevé coordinates
- Lemaitre coordinates (Schwarzschild solution in synchronous coordinates)
- Frame fields in general relativity (Lemaître observers in the Schwarzschild vacuum)
- Tolman–Oppenheimer–Volkoff equation (metric and pressure equations of a static and spherically symmetric body of isotropic material)
- (Landau & Lifshitz 1975).
- Ehlers, Jürgen (January 1997). "Examples of Newtonian limits of relativistic spacetimes" (PDF). Classical and Quantum Gravity. 14 (1A): A119–A126. Bibcode:1997CQGra..14A.119E. doi:10.1088/0264-9381/14/1A/010.
- Gautreau, R., & Hoffmann, B. (1978). The Schwarzschild radial coordinate as a measure of proper distance. Physical Review D, 17(10), 2552.
- Tennent, R.M., ed. (1971). Science Data Book. Oliver & Boyd. ISBN 0-05-002487-6.
- Frolov, Valeri; Zelnikov, Andrei (2011). Introduction to Black Hole Physics. Oxford. p. 168. ISBN 0-19-969229-7.
- Schwarzschild, K. (1916). "Über das Gravitationsfeld eines Massenpunktes nach der Einsteinschen Theorie". Sitzungsberichte der Königlich Preussischen Akademie der Wissenschaften. 7: 189–196. Bibcode:1916AbhKP......189S. For a translation, see Antoci, S.; Loinger, A. (1999).
"On the gravitational field of a mass point according to Einstein's theory". arXiv: [physics]. - O'Connor, John J.; Robertson, Edmund F., "Karl Schwarzschild", MacTutor History of Mathematics archive, University of St Andrews. - Droste, J. (1917). "The field of a single centre in Einstein's theory of gravitation, and the motion of a particle in that field" (PDF). Proceedings of the Royal Netherlands Academy of Arts and Science. 19 (1): 197–215. Bibcode:1917KNAB...19..197D. - Kox, A. J. (1992). "General Relativity in the Netherlands:1915-1920". In Eisenstaedt, J.; Kox, A. J. Studies in the History of General Relativity. Birkhäuser. p. 41. ISBN 978-0-8176-3479-7. - Brown, K. (2011). Reflections On Relativity. Lulu.com. Chapter 8.7. ISBN 978-1-257-03302-7. - Hilbert, David (1924). "Die Grundlagen der Physik". Mathematische Annalen. Springer-Verlag. 92 (1–2): 1–32. doi:10.1007/BF01448427. - Earman, J. (1999). "The Penrose–Hawking singularity theorems: History and Implications". In Goenner, H. The expanding worlds of general relativity. Birkhäuser. p. 236-. ISBN 978-0-8176-4060-6. - Synge, J. L. (1950). "The gravitational field of a particle". Proceedings of the Royal Irish Academy. 53 (6): 83–114. - Szekeres, G. (1960). "On the singularities of a Riemannian manifold". Publicationes Mathematicae Debrecen 7. 7: 285. Bibcode:2002GReGr..34.2001S. doi:10.1023/A:1020744914721. - Kruskal, M. D. (1960). "Maximal extension of Schwarzschild metric". Physical Review. 119 (5): 1743–1745. Bibcode:1960PhRv..119.1743K. doi:10.1103/PhysRev.119.1743. - Hughston, L. P.; Tod, K. P. (1990). An introduction to general relativity. Cambridge University Press. Chapter 19. ISBN 978-0-521-33943-8. - Brill, D. (19 January 2012). "Black Hole Horizons and How They Begin". Astronomical Review. - Ni, Wei-Tou (ed.). One Hundred Years of General Relativity: From Genesis and Empirical Foundations to Gravitational Waves, Cosmology and Quantum Gravity. 1. World Scientific. p. I-126. - Eddington, A. S. 
(1924). The Mathematical Theory of Relativity (2nd ed.). Cambridge University Press. p. 93. - Misner, Charles W., Thorne, Kip S., Wheeler, John Archibald, "Gravitation", W.H. Freeman and Company, New York, ISBN 0-7167-0334-3 - Schwarzschild, K. (1916). "Über das Gravitationsfeld eines Massenpunktes nach der Einsteinschen Theorie". Sitzungsberichte der Königlich Preussischen Akademie der Wissenschaften. 7: 189–196. Bibcode:1916AbhKP1916..189S. - Text of the original paper, in Wikisource - Translation: Antoci, S.; Loinger, A. (1999). "On the gravitational field of a mass point according to Einstein's theory". arXiv: [physics]. - A commentary on the paper, giving a simpler derivation: Bel, L. (2007). "Über das Gravitationsfeld eines Massenpunktesnach der Einsteinschen Theorie". arXiv: [gr-qc]. - Schwarzschild, K. (1916). "Über das Gravitationsfeld einer Kugel aus inkompressibler Flüssigkeit". Sitzungsberichte der Königlich Preussischen Akademie der Wissenschaften. 1: 424. - Flamm, L. (1916). "Beiträge zur Einstein'schen Gravitationstheorie". Physikalische Zeitschrift. 17: 448. - Adler, R.; Bazin, M.; Schiffer, M. (1975). Introduction to General Relativity (2nd ed.). McGraw-Hill. Chapter 6. ISBN 0-07-000423-4. - Landau, L. D.; Lifshitz, E. M. (1951). The Classical Theory of Fields. Course of Theoretical Physics. 2 (4th Revised English ed.). Pergamon Press. Chapter 12. ISBN 0-08-025072-6. - Misner, C. W.; Thorne, K. S.; Wheeler, J. A. (1970). Gravitation. W.H. Freeman. Chapters 31 and 32. ISBN 0-7167-0344-0. - Weinberg, S. (1972). Gravitation and Cosmology: Principles and Applications of the General Theory of Relativity. John Wiley & Sons. Chapter 8. ISBN 0-471-92567-5. - Taylor, E. F.; Wheeler, J. A. (2000). Exploring Black Holes: Introduction to General Relativity. Addison-Wesley. ISBN 0-201-38423-X. - Heinzle, J. M.; Steinbauer, R. (2002). "Remarks on the distributional Schwarzschild geometry". Journal of Mathematical Physics. 43 (3): 1493. arXiv: . 
Bibcode:2002JMP....43.1493H. doi:10.1063/1.1448684.
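The Schwarzschild solution listed above sets a characteristic length scale for any mass: the event-horizon radius r_s = 2GM/c² of a non-rotating body. A quick numerical sketch using standard constants (the solar-mass example is illustrative, not drawn from the article):

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg

def schwarzschild_radius(mass_kg):
    """Event-horizon radius r_s = 2GM/c^2 of a non-rotating mass."""
    return 2 * G * mass_kg / C**2

# For the Sun this comes out to roughly 3 km
print(schwarzschild_radius(M_SUN))  # ≈ 2.95e3 m
```

Because r_s scales linearly with mass, the same function gives about 3 billion km for a billion-solar-mass black hole.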
Rupture process of the Ms 7.0 Lushan earthquake determined by joint inversion of local static GPS records, strong motion data, and teleseismograms

On April 20, 2013, an Ms 7.0 earthquake struck Lushan County in Sichuan Province, China, and caused serious damage in the source region. We investigated the rupture process of the Ms 7.0 Lushan earthquake by jointly inverting teleseismic P waveforms, local strong motion records, and static GPS observations. The inverted results indicate that the rupture was dominated by the failure of an asperity with a triangular shape and that the main shock was dominated by thrust slip. The earthquake released a total seismic moment of 1.01×10^19 N·m, 92% of which was released during the first 11 s. The rupture had an average slip of 0.9 m and produced an average stress drop of 1.8 MPa. Compared with our previous work, which was based mainly on a single dataset, this joint inversion result is more consistent with field observations and with the distribution of the aftershock zones.

Key words: GPS, Longmenshan, Lushan earthquake, rupture process, strong motion

Acknowledgments: The Chinese Earthquake Network Center provided the aftershock data. Teleseismic data were downloaded from IRIS, strong motion records were provided by CSMN, and the GPS data were provided by the Institute of Earthquake Science, China Earthquake Administration. The figures were created with GMT software. Dr. Yingjie Yang of Macquarie University provided substantial assistance and constructive advice. We acknowledge everyone who contributed to this research. This work was supported by a grant from the China Earthquake Administration (No. 201308013), the National Natural Science Foundation of China (Nos. 41604057, 40974034, and 41021003), and a key project from the Institute of Geodesy and Geophysics.
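The quoted seismic moment can be cross-checked against a magnitude using the standard Hanks–Kanamori relation Mw = (2/3)(log10 M0 − 9.1), with M0 in N·m. This conversion is a textbook formula, not part of the paper itself; note also that the Ms 7.0 label is a surface-wave magnitude, which need not equal Mw:

```python
import math

M0 = 1.01e19  # total seismic moment in N·m, as reported in the abstract

# Hanks–Kanamori moment-magnitude relation (M0 in newton-meters)
Mw = (2.0 / 3.0) * (math.log10(M0) - 9.1)

print(f"Mw ≈ {Mw:.1f}")  # prints "Mw ≈ 6.6"
```

The result, Mw ≈ 6.6, agrees with the moment magnitude used for this event elsewhere in the literature (e.g., the "Mw 6.6 Lushan earthquake" of Zhang et al., 2014).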
The final publication is available at Springer via http://dx.doi.org/10.1007/s12583-017-0757-1.
Volume - 8th grade (13y) - examples

- Gasoline tank 2: A gasoline tank is 1/6 full. When 25 liters of gasoline were added, it became 3/4 full. How many more liters are needed to fill it? Show your solution.
- A pipe: The radius of a cylindrical pipe is 2 ft. If the pipe is 17 ft long, what is its volume?
- How many hectoliters of water are in a garden barrel with a 90 cm diameter and a height of 1.3 m, if it is filled to 80% of its capacity?
- Cube into sphere: A sphere as large as possible is carved from a cube. Determine what percentage of the cube was waste.
- Angle of diagonal: The angle between the body diagonal of a regular quadrilateral prism and its base is 60°. The edge of the base has a length of 10 cm. Calculate the volume of the body.
- Scientific notation: Approximately 7.5×10^5 gallons of water flow over a waterfall each second. There are 8.6×10^4 seconds in 1 day. Select the approximate number of gallons of water that flow over the waterfall in 1 day.
- If water flows into the pool through two inlets, the whole pool is filled in 18 hours. The first inlet alone fills the pool 10 hours longer than the second. How long does it take each inlet to fill the pool separately?
- Cube in a sphere: A cube is inscribed in a sphere with volume 3724 cm³. Determine the length of the edges of the cube.
- Axial section: The axial section of a cone is an equilateral triangle with area 208 dm². Calculate the volume of the cone.
- A cuboid has a surface area of 1771 cm²; the lengths of its edges are in the ratio 5:2:4. Calculate the volume of the cuboid.
- The swimming pool is 10 m wide, 22 m long and 191 cm deep. How many hectoliters of water are in it, if the water is 9 cm below its upper edge?
- One cube has an inscribed sphere and the other a circumscribed sphere. Calculate the difference of the volumes of the cubes, if the difference of their surface areas is 254 cm².
- The surface of a sphere is 2820 cm² and its weight is 71 kg. What is its density?
- The lateral surface of two cylinders is the same rectangle of 50 cm × 11 cm. Which cylinder has the larger volume and by how much?
- Fire tank: The fire tank has a cuboid shape with a rectangular floor measuring 13.7 m × 9.8 m. The water depth is 2.4 m. Water was pumped from the tank into barrels with a capacity of 2.7 hl. How many barrels were used, if the water level in the tank fell by 5 cm?
- Density - simple example: A material with a density of 762 kg/m³ occupies a container volume of 99 cm³. What is its mass?
- How much 55% alcohol do we need to pour into 14 liters of 75% alcohol to get 65% alcohol? How much 65% alcohol do we get?
- Cuboid diagonal: Calculate the volume and surface area of the cuboid ABCDEFGH, whose sides a, b, c have dimensions in the ratio 9:3:8, if you know that the wall diagonal AC is 86 cm and the angle between AC and the body diagonal AG is 25 degrees.
- Mix 20 l of water at 53 °C, 27 l at 86 °C and 11 l at 49 °C. What is the temperature of the mixed water immediately after mixing?
- Sea water: Seawater contains about 4.7% salt. How many dm³ of distilled water must we pour into 39 dm³ of sea water to get water with 1.5% salt?

Tip: Our volume units converter will help you with conversion of volume units. See also more information on Wikipedia.
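The first problem above ("Gasoline tank 2") can be worked with exact fraction arithmetic: adding 25 L raises the level from 1/6 to 3/4 of the tank, i.e. by 7/12 of the capacity. A worked sketch (not the site's official solution):

```python
from fractions import Fraction

added = 25                                          # liters poured in
filled_fraction = Fraction(3, 4) - Fraction(1, 6)   # 7/12 of the tank corresponds to 25 L

capacity = added / filled_fraction                  # full tank volume in liters
remaining = (1 - Fraction(3, 4)) * capacity         # liters still needed to top it off

print(capacity)   # 300/7, i.e. the tank holds ≈ 42.9 L
print(remaining)  # 75/7, i.e. ≈ 10.7 L more are needed
```

Using Fraction keeps the answer exact (75/7 L) instead of a rounded decimal.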
The researchers are harnessing the power of the ocean to conduct their experiments, using the up-and-down motion of waves to pump deep water to the surface. Their next step is to create a pump that can withstand the rigors of the rugged Pacific and then see if the biology follows the physics.

"During our first test, the ocean destroyed our pump in one day," said Angelicque "Angel" White, a post-doctoral researcher at Oregon State University and a member of the scientific team. "Initially, the system worked and we were able to bring cold water to the surface and control the depth of its release. Now we need to work on the engineering aspect."

The theory behind the experiment has just been published in the journal Marine Ecology Progress Series. The initial test of the pumps and their effect in the open ocean is the focus of a documentary that is scheduled to be broadcast Sept. 5 on the Discovery Channel. The experiment was funded in part by the National Science Foundation and the Gordon and Betty Moore Foundation.

White and lead investigators Ricardo Letelier of OSU and David Karl of the University of Hawaii are part of the NSF-funded Center for Microbial Oceanography: Research and Education (C-MORE) based in Hawaii, which Karl directs.

The scientists stress that the goal of creating artificially induced upwelling is to understand how marine microbial ecosystems respond to large-scale perturbations, "a critical step if we want to understand the risks of manipulating these large ecosystems in order to solve global greenhouse buildup," said Letelier, a professor in OSU's College of Oceanic and Atmospheric Sciences.

"This is not a new concept," Letelier said. "It was proposed in 1976 that scientists could use wave energy to pump water from the depths to the surface and fuel plankton growth. But there are many nuances; simply bringing nutrients to the surface can result in the wrong kinds of biological growth.
It also can bring water enriched with carbon dioxide, which can de-gas into the atmosphere.

"If you're adding more CO2 than subtracting by fertilizing the ocean," he added, "you're running the wheel in the wrong direction."

The answer, Letelier says, may be to pump water that contains specific ratios of nutrients – particularly nitrogen and phosphorus – to carbon dioxide by targeting different depths. At their research site north of Hawaii, where the ocean is about 4,500 meters deep, the bottom layers of water have too much CO2 because of the decaying organisms that have sunk to the floor. Their studies have shown, however, that water at a depth of 300 to 700 meters has the proper ratio of nitrogen and phosphorus to trigger a two-stage phytoplankton bloom.

The researchers believe that upwelling with water from that depth will first cause a bloom of diatoms, a common type of plankton that is often single-celled. The diatoms will consume the nitrogen, leaving some amount of phosphorus in the water, which will stimulate a second-stage bloom of nitrogen-fixing cyanobacteria. These blooms are often observed during summer months in open ocean waters, Letelier said.

In previous field experiments, the researchers were able to create stage-one diatom blooms by mixing deep and surface water in large incubation bottles, but they need to conduct additional studies in the ocean to see if the second stage of blooms actually occurs following additions of deep water. If the pumps had survived the ocean, White said, they may have been able to generate these blooms.

"We were able to pump about 50 cubic meters of water per hour using the wave energy," she said, "which is a small amount compared to the vastness of the ocean. If we want to generate a bloom in an area of one-square kilometer, we would need to replace about 10 percent of the surface waters with upwelled water, which would take about a month at the rate we pumped."
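The pumping rate White quotes translates into a concrete monthly volume. A simple sketch of that arithmetic (the 30-day month and continuous operation are my assumptions, not stated in the article):

```python
rate_m3_per_hour = 50            # wave-driven pump rate reported by the team
hours_per_month = 24 * 30        # assume continuous pumping over a 30-day month

volume_m3 = rate_m3_per_hour * hours_per_month
print(volume_m3)  # 36000 cubic meters of deep water per pump per month
```

That 36,000 m³ figure underlines White's point: a single pump moves only a tiny volume relative to a square kilometer of surface ocean.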
The scientists used undersea gliders in their Hawaii study to monitor the water from the pump so they have an idea how widely and quickly it disperses, and how much of an impact it can have on surface waters.

"We know a lot about how upwelling works and the physics of the ocean," Letelier said, "but there also are things we don't know, which is why this study is so important. In this open ocean area near Hawaii, for example, phytoplankton blooms occur in the summer when there are almost no nutrients at the surface and the winds generally are calm. What triggers the blooms and where are the nutrients coming from? We need to know.

"These vast, seemingly barren regions comprise more than two-thirds of our oceans and nearly 40 percent of the entire Earth," he added. "It is a large area of exchange between the atmosphere and the ocean and understanding large-scale interactions is critical to understanding climate change."

Some scientists have looked at iron fertilization as a way to trigger biological growth in nutrient-poor areas of the ocean, but "everything responds to iron," Letelier said. "You can't control what grows."

The researchers believe they can control plankton growth by determining which species respond to specific nutrients, and then adjusting the rate of nutrient feeding by the frequency and duration of water pumping.

"These vast regions of the open ocean may be perfect for sequestering carbon," Letelier said, "but before we can begin to seriously consider a large-scale intervention, we must better understand how the biology responds by using perturbations on a small scale. We're getting there."

Ricardo Letelier | EurekAlert!
Kepler Telescope Discovers 41 New Exoplanets

With Curiosity getting so much attention lately, it's easy to forget that NASA has just oodles of other cool stuff going on right now. Yes, most of it does not involve skycrane drops or lasers or HD pictures of the surface of an alien world, but it is still cool, you guys!

Case in point: NASA's Kepler mission announced today that it has found 41 new exoplanets in 20 star systems. The results are preliminary and some are still being peer-reviewed to ensure that they are planets and not blips in the data or just alien civilizations messing with us. If they pan out, though — and there's every reason to believe most of them will — it will raise the number of planets discovered by the Kepler mission by nearly 50%, to a grand total of 116 planets.

Finding more planets helps researchers learn more about gravitation in solar systems and how planets come to be and find their homes around stars. That's especially true when they find planets that share solar systems, which this batch of discoveries from Kepler has in spades. In turn, that information helps us better understand how our own solar system came to be.

The Kepler data is gathered by a space telescope with a wide field of vision and a very sensitive light detector. This sensor measures dips in the brightness of far-off stars. When enough dips are correlated from a certain star on a certain schedule, researchers can deduce that there is a planet orbiting the star, occasionally blocking the light it sends our way. They can even make some educated inferences about the size and composition of the planet, and how far it is from the star it calls home.

Kepler is continuing to outperform researchers' expectations, showing us that planets are more common in the universe than once thought. With that comes the heartening implication that somewhere out there, we are not alone. That thought is one of the few things that makes us sleep better at night.
(via ScienceDaily, image via NASA)
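The brightness dips described above can be quantified: to first order, the fractional drop in starlight during a transit is the square of the planet-to-star radius ratio. A minimal sketch (the Earth and Sun radii are standard reference values, not figures from the article):

```python
# First-order transit photometry: depth ≈ (R_planet / R_star)^2
R_SUN_KM = 695_700    # solar radius in kilometers
R_EARTH_KM = 6_371    # Earth radius in kilometers

def transit_depth(r_planet_km, r_star_km):
    """Fractional dimming when the planet crosses the stellar disk."""
    return (r_planet_km / r_star_km) ** 2

depth = transit_depth(R_EARTH_KM, R_SUN_KM)
print(f"{depth:.2e}")  # ≈ 8.39e-05, a dip of less than 0.01%
```

A dip that small is why Kepler's photometer has to be so sensitive; a Jupiter-size planet, by contrast, dims a Sun-like star by about 1%.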
A rotting carcass unlike anything experts have seen before, measuring a massive six metres, washed up on the shoreline of Dorob National Park. A debate quickly arose over the species of the mysterious beast, with experts unsure whether the carcass came from the whale family or the dolphin family. The creature has similar characteristics to a bottlenose dolphin, but at six metres long, it is almost two metres too big.

Namibian Dolphin Project (NDP) researcher Dr Simon Elwen said: "On first sighting we had no idea what species it was.

"The body of this animal was in an advanced state of decomposition – making it look rather un-whale like – and there were several confused reports on social media."

After a long debate, scientists came to the conclusion that the odd creature probably belongs to the whale family – most likely a Cuvier's beaked whale. The Cuvier's beaked whale can grow up to seven metres long and is a cigar-shaped mammal distributed across more than three-quarters of the world's oceans.

Dr Elwen continued: "Based on the shape of the head and snout and the overall size, the research team are fairly confident that the specimen is a Cuvier's beaked whale."

He added that it was not possible to determine the cause of death because of how badly the whale had rotted and the fact that its head had been severely crushed.

Dr Elwen said: "The lower jawbone was cracked and broken quite severely; however, given the state of the corpse and the absence of any apparent external injuries, the damage to the jaw was possibly post-mortem.

"I was quite surprised. These animals are rarely seen in the water, so to see them on land is very unique."
An ectotherm (from the Greek ἐκτός (ektós) "outside" and θερμός (thermós) "hot") is an organism in which internal physiological sources of heat are of relatively small or quite negligible importance in controlling body temperature. Such organisms (for example frogs) rely on environmental heat sources, which permit them to operate at very economical metabolic rates. Colloquially, some refer to these organisms as "cold-blooded", though such a term is not technically correct, as the blood temperature of the organism varies with ambient environmental temperature. Some of these animals live in environments where temperatures are practically constant, as is typical of regions of the abyssal ocean, and hence can be regarded as homeothermic ectotherms. In contrast, in places where temperature varies so widely as to limit the physiological activities of other kinds of ectotherms, many species habitually seek out external sources of heat or shelter from heat; for example, many reptiles regulate their body temperature by basking in the sun, or seeking shade when necessary, in addition to a whole host of other behavioral thermoregulation mechanisms. For reptiles kept as pets, owners can use a UVB/UVA light system to support the animals' basking behaviour.

In ectotherms, fluctuating ambient temperatures may affect the body temperature. Such variation in body temperature is called poikilothermy, though the concept is not widely satisfactory and use of the term is declining. In small aquatic creatures such as Rotifera, the poikilothermy is practically absolute, but other creatures (like crabs) have wider physiological options at their disposal, and they can move to preferred temperatures, avoid ambient temperature changes, or moderate their effects. Ectotherms can also display the features of homeothermy, especially within aquatic organisms.
Normally their range of ambient environmental temperatures is relatively constant, and few attempt to maintain a higher internal temperature because of the high associated costs. Various patterns of behavior enable certain ectotherms to regulate body temperature to a useful extent. To warm up, reptiles and many insects find sunny places and adopt positions that maximize their exposure; at harmfully high temperatures they seek shade or cooler water. In cold weather, honey bees huddle together to retain heat. Butterflies and moths may orient their wings to maximize exposure to solar radiation in order to build up heat before take-off. Gregarious caterpillars, such as the forest tent caterpillar and fall webworm, benefit from basking in large groups for thermoregulation. Many flying insects, such as honey bees and bumble bees, also raise their internal temperatures endothermally prior to flight, by vibrating their flight muscles without violent movement of the wings (see insect thermoregulation). Such endothermal activity is an example of the difficulty of consistent application of terms such as poikilothermy and homeothermy.

In addition to behavioral adaptations, physiological adaptations help ectotherms regulate temperature. Diving reptiles conserve heat by heat exchange mechanisms, whereby cold blood from the skin picks up heat from blood moving outward from the body core, re-using and thereby conserving some of the heat that otherwise would have been wasted. The skin of bullfrogs secretes more mucus when it is hot, allowing more cooling by evaporation. During periods of cold, some ectotherms enter a state of torpor, in which their metabolism slows or, in some cases, such as the wood frog, effectively stops. The torpor might last overnight or last for a season, or even for years, depending on the species and circumstances.
Pros and cons

Ectotherms rely largely on external heat sources such as sunlight to achieve their optimal body temperature for various bodily activities. Accordingly, they depend on ambient conditions to reach operational body temperatures. In contrast, endothermic animals, as a rule, maintain nearly constant high operational body temperatures largely by reliance on internal heat produced by metabolically active organs (liver, kidney, heart, brain, muscle) or even by specialized heat-producing organs like brown adipose tissue (BAT). Also, as a rule, ectotherms have lower metabolic rates than endotherms at a given body mass. As a consequence, endotherms generally rely on higher food consumption, and commonly on food of higher energy content. Such requirements may limit the carrying capacity of a given environment for endotherms as compared to its carrying capacity for ectotherms.

Because ectotherms depend on environmental conditions for body temperature regulation, they are, as a rule, more sluggish at night and in early mornings. When they emerge from shelter, many diurnal ectotherms need to heat up in the early sunlight before they can begin their daily activities. In cool weather the foraging activity of most vertebrate ectotherms is therefore restricted to the daytime, and in cold climates most cannot survive at all. In lizards, for instance, most nocturnal species are geckos specialising in "sit and wait" foraging strategies. Such strategies do not require as much energy as active foraging and do not, as a rule, require hunting activity of the same intensity. From another point of view, sit-and-wait predation may require very long periods of unproductive waiting. Endotherms cannot, in general, afford such long periods without food, but suitably adapted ectotherms can wait without expending much energy. Endothermic vertebrate species are therefore less dependent on environmental conditions and have developed higher variability (both within and between species) in their daily patterns of activity.

Contrast between thermodynamics and biological terminology

Note that, because of historical accident, students encounter a source of possible confusion between the terminology of physics and biology. Whereas the thermodynamic terms "exothermic" and "endothermic" respectively refer to processes that give out heat energy and processes that absorb heat energy, in biology the sense is effectively inverted. The metabolic term "ectotherm" refers to organisms that rely largely on external heat to achieve a full working temperature, and "endotherm" refers to organisms that produce heat from within as a major factor in controlling their bodily temperature.
The transition elements are the elements that make up Groups 3 through 12 of the periodic table. These elements, all of which are metals, include some of the best-known names on the periodic table—iron, gold, silver, copper, mercury, zinc, nickel, chromium, and platinum among them. A number of other transition elements are probably somewhat less familiar, although they have vital industrial applications. These elements include titanium, vanadium, manganese, zirconium, molybdenum, palladium, and tungsten. One member of the transition family deserves special mention. Technetium (element #43) is one of only two "light" elements that does not occur in nature. It was originally produced synthetically in 1937 among the products of a cyclotron reaction. The discoverers of technetium were the Italian scientists Carlo Perrier and Emilio Segrè (1905–1989). The transition elements share many physical properties. With the notable exception of mercury, the only liquid metal, they all have relatively high melting and boiling points. They also have a shiny, lustrous, metallic appearance that may range from silver to gold to white to gray. In addition, the transition metals share some chemical properties. For example, they tend to form complexes, compounds in which a group of atoms cluster around a single metal atom. Ordinary copper sulfate, for example, normally occurs in a configuration that includes four water molecules surrounding a single copper ion. Transition element complexes have many medical and industrial applications. Another common property of the transition elements is their tendency to form colored compounds. Some of the most striking and beautiful chemical compounds known are those that include transition metals. Copper compounds tend to be blue or green; chromium compounds are yellow, orange, or green; nickel compounds are blue, green, or yellow; and manganese compounds are purple, black, or green.

Words to Know

Amalgam: An alloy that contains mercury.
Basic oxygen process (BOP): A method for making steel in which a mixture of pig iron, scrap iron, and scrap steel is melted in a large steel container and a blast of pure oxygen is blown through the container.
Bessemer converter: A device for converting pig iron to steel in which a blast of hot air is blown through molten pig iron.
Blast furnace: A structure in which a metallic ore (often, iron ore) is reduced to the elemental state.
Cast iron: A term used to describe various forms of iron that also contain anywhere from 0.5 to 4.2 percent carbon and 0.2 to 3.5 percent silicon.
Complex: A chemical compound in which a single metal atom is surrounded by two or more groups of atoms.
Ductile: Capable of being drawn or stretched into a thin wire.
Electrolytic cell: A system in which electrical energy is used to bring about chemical changes.
Electrolytic copper: A very pure form of copper.
Malleable: Capable of being rolled or hammered into thin sheets.
Open hearth process: A method for making steel in which a blast of hot air or oxygen is blown across the surface of a molten mixture of pig iron, hematite, scrap iron, and limestone in a large brick container.
Patina: A corrosion-resistant film that often develops on copper surfaces.
Pig iron: A form of iron consisting of approximately 90 percent pure iron and the remaining 10 percent of various impurities.
Slag: A by-product of the reactions by which iron is produced, consisting primarily of calcium silicate.

The discussion that follows focuses on only three of the transition elements: iron, copper, and mercury. These three elements are among the best known and most widely used of all chemical elements. Iron is the fourth most abundant element in Earth's crust, following oxygen, silicon, and aluminum. In addition, Earth's core is believed to consist largely of iron.
The element rarely occurs in an uncombined form but is usually found as a mineral such as hematite (iron[III] oxide), magnetite (lodestone, a mixture of iron[II] and iron[III] oxides), limonite (hydrated iron[III] oxide), pyrite (iron sulfide), and siderite (iron[II] carbonate). Properties. Iron is a silver-white or gray metal with a melting point of 2,795°F (1,535°C) and a boiling point of 4,982°F (2,750°C). Its chemical symbol, Fe, is taken from the Latin name for iron, ferrum. It is both malleable and ductile. Malleability is a property common to most metals, meaning that a substance can be hammered into thin sheets. Many metals are also ductile, meaning that they can be drawn into a fine wire. In a pure form, iron is relatively soft and slightly magnetic. When hardened, it becomes much more magnetic. Iron is the most widely used of all metals. Prior to its use, however, it must be treated in some way to improve its properties, or it must be combined with one or more other elements (in this case, another metal) to form an alloy. By far the most popular alloy of iron is steel. One of the most common forms of iron is pig iron, produced by smelting iron ore with coke (nearly pure carbon) and limestone in a blast furnace. (Smelting is the process of obtaining a pure metal from its ore.) Pig iron is approximately 90 percent pure iron and is used primarily in the production of cast iron and steel. Cast iron is a term used to describe various forms of iron that also contain carbon and silicon ranging in concentrations from 0.5 to 4.2 percent of the former and 0.2 to 3.5 percent of the latter. Cast iron has a vast array of uses in products ranging from thin rings to massive turbine bodies. Wrought iron contains small amounts of a number of other elements, including carbon, silicon, phosphorus, sulfur, chromium, nickel, cobalt, copper, and molybdenum. Wrought iron can be fabricated into a number of forms and is widely used because of its resistance to corrosion. 
How iron is obtained. Iron is one of the handful of elements that was known to ancient civilizations. Originally it was prepared by heating a naturally occurring ore of iron with charcoal in a very hot flame. The charcoal was obtained by heating wood in the absence of air. There is some evidence that this method of preparation was known as early as 3000 b.c. But the secret of ore smelting was carefully guarded within the Hittite civilization of the Near East for almost 2,000 years. Then, when that civilization fell in about 1200 b.c., the process of iron ore smelting spread throughout eastern and southern Europe. Iron-smiths were soon making ornamental objects, simple tools, and weapons from iron. So dramatic was the impact of this new technology on human societies that the period following 1200 b.c. is generally known as the Iron Age. A major change in the technique for producing iron from its ores occurred around 1709. As trees (and therefore the charcoal made from them) grew increasingly scarce in Great Britain, English inventor Abraham Darby (c. 1678–1717) discovered a method for making coke from soft coal. Since coal was abundant in the British Isles, Darby's technique provided for a consistent and dependable method of converting iron ores to the pure metal. The modern production of iron involves heating iron ore with coke and limestone in a blast furnace, where temperatures range from 400°F (200°C) at the top of the furnace to 3,600°F (2,000°C) at the bottom. Some blast furnaces are as tall as 15-story buildings and can produce 2,400 tons (2,180 metric tons) of iron per day. Inside a blast furnace, a number of chemical reactions occur. One of these involves the reaction of coke (nearly pure carbon) with oxygen to form carbon monoxide. This carbon monoxide then reacts with iron ore to form pure iron and carbon dioxide. Limestone is added to the reaction mixture to remove impurities in the iron ore. 
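The blast-furnace chemistry described above can be summarized in a few standard balanced equations (a textbook simplification; the furnace also hosts intermediate reactions):

2 C + O2 → 2 CO (coke burns in the hot air blast)
Fe2O3 + 3 CO → 2 Fe + 3 CO2 (carbon monoxide reduces the iron ore)
CaCO3 → CaO + CO2 (limestone decomposes in the heat)
CaO + SiO2 → CaSiO3 (lime combines with silica impurities to form slag)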
The product of this reaction, known as slag, consists primarily of calcium silicate. The iron formed in a blast furnace exists in a molten form (called pig iron) that can be drawn off at the bottom of the furnace. The slag also is molten but less dense than the iron. It is drawn off from taps just above the outlet from which the molten iron is removed. Early efforts to use pig iron for commercial and industrial applications were not very successful. The material proved to be quite brittle, and objects made from it tended to break easily. Cannons made of pig iron, for example, were likely to blow apart when they fired a shell. By 1760, inventors had begun to find ways of toughening pig iron. These methods involved remelting the pig iron and then burning off the carbon that remained mixed with the product. The most successful early device for accomplishing this step was the Bessemer converter, named after its English inventor, Henry Bessemer (1813–1898). In the Bessemer converter, a blast of hot air is blown through molten pig iron. The process results in the formation of stronger forms of iron: cast and wrought iron. More importantly, when additional elements such as manganese and chromium are added to the converter, a new product—steel—is formed. Later inventions improved on the production of steel by the Bessemer converter. In the open hearth process, for example, a mix of molten pig iron, hematite, scrap iron, and limestone is placed into a large brick container. A blast of hot air or oxygen is then blown across the surface of the molten mixture. Chemical reactions within the molten mixture result in the formation of either pure iron or, with the addition of alloying metals such as manganese or chromium, a high grade of steel. An even more recent variation on the Bessemer converter concept is the basic oxygen process (BOP). 
In the BOP, a mixture of pig iron, scrap iron, and scrap steel is melted in a large steel container and a blast of pure oxygen is blown through the container. The introduction of alloying metals makes possible the production of various types of steel with many different properties. Uses of iron. Alloyed with other metals, iron is the most widely used of all metallic elements. The way in which it is alloyed determines the uses to which the final product is put. Steel, for example, is a general term used to describe iron alloyed with carbon and, in some cases, with other elements. The American Iron and Steel Institute recognizes 27 standard types of steel. Three of these are designated as carbon steels that may contain, in addition to carbon, small amounts of phosphorus and/or sulfur. Another 20 types of steel are made of iron alloyed with one or more of the following elements: chromium, manganese, molybdenum, nickel, silicon, and vanadium. Finally, four types of stainless and heat-resisting steels contain some combination of chromium, nickel, and manganese alloyed with iron. Steel is widely used in many types of construction. It has at least six times the strength of concrete, another traditional building material, and about three times the strength of special forms of high-strength concrete. A combination of these two materials—called reinforced concrete—is one of the strongest of all building materials available to architects. The strength of steel has made possible some remarkable feats of construction, including very tall buildings (skyscrapers) and bridges with very wide spans. It also has been used in the manufacture of automobile bodies, ship hulls, and heavy machinery and machine parts. Metallurgists (specialists in the science and technology of metals) have invented special iron alloys to meet very specific needs. Alloys of cobalt and iron (both magnetic materials themselves) can be used in the manufacture of very powerful permanent magnets. 
Steels that contain the element niobium (originally called columbium) have unusually great strength and have been used, among other places, in the construction of nuclear reactors. Tungsten steels also are very strong and have been used in the production of high-speed metal cutting tools and drills. The alloying of aluminum with iron produces a material that can be used in AC (alternating current) magnetic circuits since it can gain and lose magnetism very quickly. Metallic iron has other applications as well. Its natural magnetic properties make it suitable for both permanent magnets and electromagnets. It also is used in the production of various types of dyes, including blueprint paper and certain inks, and in the manufacture of abrasives. Biochemical applications. Iron is essential to the survival of all vertebrates. Hemoglobin, the molecule in blood that transports oxygen from the lungs to an organism's cells, contains a single iron atom buried deep within its complex structure. When humans do not take in sufficient amounts of iron in their daily diets, they may develop a disorder known as anemia. Anemia is characterized by a loss of skin color, a weakness and tendency to faint, palpitation of the heart, and a general sense of exhaustion. Iron also is important to the good health of plants. It is found in a group of compounds known as porphyrins (pronounced POUR-fuhrinz) that play an important role in the growth and development of plant cells. Plants that lack iron have a tendency to lose their color, become weak, and die. Copper is one of only two metals with a distinctive color (the other being gold). Copper is often described as having a reddish-brown hue. It has a melting point of 1,985°F (1,085°C) and a boiling point of 4,645°F (2,563°C). Its chemical symbol, Cu, is derived from the Latin name for the element, cuprum.
Copper is one of the elements that is essential to life in tiny amounts (often referred to as trace elements), although larger amounts can be toxic. About 0.0004 percent of the weight of the human body is copper. It can be found in such foods as liver, shellfish, nuts, raisins, and dried beans. Copper also is found in an essential biochemical compound known as hemocyanin. Hemocyanin is chemically similar to the red hemoglobin found in human blood, which has an iron atom in the center of its molecule. By contrast, hemocyanin contains an atom of copper rather than iron in its core. Lobsters and other large crustaceans have blue blood whose color is caused by the presence of hemocyanin. History of copper. Copper was one of the first metals known to humans. One reason for this fact is that copper occurs not only as ores (compounds that must be converted to metal) but occasionally as native copper, a pure form of the element found in the ground. In prehistoric times an early human could actually find a chunk of pure copper in the earth and hammer it into a tool with a rock. Native copper was mined and used in the Tigris-Euphrates valley (modern Iraq) as long as 7,000 years ago. Copper ores have been mined for at least 5,000 years because it is fairly easy to get the copper out of the ore. For example, if an ore of copper oxide is heated in a wood fire, the carbon in the charcoal reacts with oxygen in the oxide and converts it to pure copper metal. Making pure copper. Extremely pure copper (greater than 99.95 percent purity) is generally called electrolytic copper because it is made by the process known as electrolysis. Electrolysis is a reaction by which electrical energy is used to bring about some kind of chemical change. The high purity is needed because most copper is used to make electrical equipment. Small amounts of impurities present in copper can seriously reduce its ability to conduct electricity. 
Even 0.05 percent of arsenic as an impurity, for example, will reduce copper's conductivity by 15 percent. Electric wires must be made of very pure copper, especially if the electricity is to be carried for many miles through high-voltage transmission lines. Uses of copper. By far the most important use of copper is in electrical wiring. It is an excellent conductor of electricity (second only to silver), it can be made extremely pure, it corrodes very slowly, and it can be formed easily into thin wires. Copper is also an important ingredient of many useful alloys. (An alloy is a mixture of one metal with another to improve on the original metal's properties.) Brass is an alloy of copper and zinc. If the brass contains mostly copper, it is a golden yellow color; if it contains mostly zinc, it is pale yellow or silvery. Brass is one of the most useful of all alloys. It can be cast or machined into everything from candlesticks to cheap, gold-imitating jewelry (but this type of jewelry often turns human skin green—the copper reacts with salts and acids in the skin to form green copper chloride and other compounds). Several other copper alloys include: bronze, which is mainly copper plus tin; German silver and sterling silver, which consist of silver plus copper; and silver tooth fillings, which contain about 12 percent copper. Probably the first alloy ever to be made and used by humans was bronze. Archaeologists broadly divide human history into three periods. The Bronze Age (c. 4000–3000 b.c.) is the second of these periods, occurring after the Stone Age and before the Iron Age. During the Bronze Age, both bronze and pure copper were used for making tools and weapons. Because it resists corrosion and conducts heat well, copper is widely used in plumbing and heating applications. Copper pipes and tubing are used to distribute hot and cold water through houses and other buildings. 
Copper's superior ability to conduct heat also makes it useful in the manufacture of cooking utensils such as pots and pans. An even temperature across the pan bottom is important for cooking so food doesn't burn or stick to hot spots. The insides of the pans must be coated with tin, however, to keep excessive amounts of copper from seeping into the food. Copper corrodes only slowly in moist air—much more slowly than iron rusts. First, it darkens in color because of the formation of a thin layer of black copper oxide. Then, as the years go by, the copper oxide is converted into a bluish-green patina (a surface appearance that comes with age) of basic copper carbonate. The green color of the Statue of Liberty, for example, was formed in this way. Mercury, the only liquid metal, has a beautiful silvery color. Its chemical symbol, Hg, comes from the Latin name of the element, hydrargyrum, for "liquid silver." Mercury has a melting point of −38°F (−39°C) and a boiling point of 674°F (357°C). Its presence in Earth's crust is relatively low compared to other elements, equal to about 0.08 parts per million. Mercury is not considered to be rare, however, because it is found in large, highly concentrated deposits. Nearly all mercury exists in the form of a red ore called cinnabar, or mercury(II) sulfide. Sometimes shiny globules of mercury appear among outcrops of cinnabar, which is probably the reason that mercury was discovered so long ago. The metal is relatively easy to extract from the ore. In fact, the modern technique for extracting mercury is nearly identical in principle to the method used centuries ago. Cinnabar is heated in the open air. Oxygen in the air reacts with sulfur in the cinnabar, producing pure mercury metal. The mercury metal vaporizes and is allowed to condense on a cool surface, from which it can be collected. Mercury does not react readily with air, water, acids, alkalis, or most other chemicals.
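The roasting step just described corresponds to a single standard balanced equation:

HgS + O2 → Hg + SO2 (cinnabar roasted in air yields mercury vapor and sulfur dioxide)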
It has a surface tension six times greater than that of water. Surface tension refers to the tendency of a liquid to form a tough "skin" on its surface. The high surface tension of mercury explains its tendency not to "wet" surfaces with which it comes into contact. No one knows exactly when mercury was discovered, but many ancient civilizations were familiar with this element. As long ago as Roman times, people had learned to extract mercury from ore and used it to purify gold and silver. Ore containing gold or silver would be crushed and treated with mercury, which rejects impurities, to form a mercury alloy called an amalgam. When the amalgam is heated, the mercury vaporizes, leaving pure gold or silver behind. Toxicity. Mercury and all of its compounds are extremely poisonous. The element also has no known natural function in the human body. Classified as a heavy metal, mercury is difficult for the body to eliminate. This means that even small amounts of the metal can act as a cumulative poison, collecting over a long period of time until it reaches a dangerous level. Humans can absorb mercury through any mucous membrane and through the skin. Its vapor can be inhaled, and mercury can be ingested in foods such as fish, eggs, meat, and grain. In the body, mercury affects the nervous system, liver, and kidneys. Symptoms of mercury poisoning include tremors, tunnel vision, loss of balance, slurred speech, and unpredictable emotions. (Tunnel vision is a narrowing of the visual field so that peripheral vision—the outer part of the field of vision that encompasses the far right and far left sides—is completely eliminated.) The phrase "mad as a hatter" owes its origin to symptoms of mercury poisoning that afflicted hatmakers in the 1800s, when a mercury compound was used to prepare beaver fur and felt materials. Until recently, scientists thought that inorganic mercury was relatively harmless.
As a result, industrial wastes containing mercury were routinely discharged into large bodies of water. Then, in the 1950s, more than 100 people in Japan were poisoned by fish containing mercury. Forty-three people died, dozens more were horribly crippled, and babies born after the outbreak developed irreversible damage. It was found that inorganic mercury in industrial wastes had been converted to a much more harmful organic form known as methyl mercury. As this substance works its way up the food chain, its quantities accumulate to dangerous levels in larger fish. Today, the dumping of mercury-containing wastes has been largely banned, and many of its industrial uses have been halted. Uses. Mercury is used widely in a variety of measuring instruments and devices, such as thermometers, barometers, hydrometers, and pyrometers. It also is used in electrical switches and relays, in mercury arc lamps, and for the extraction of gold and silver from amalgams. A small amount is still used in the preparation of amalgams for dental repairs. The largest single use of mercury today, however, is in electrolytic cells, in which sodium chloride is converted to metallic sodium and gaseous chlorine. The mercury is used to form an amalgam with sodium in the cells. [See also Alloy]
transition elements or transition metals, in chemistry, group of elements characterized by the filling of an inner d electron orbital as atomic number increases. This includes the elements from titanium to copper, and those lying in the columns below them in the periodic table. Many of the chemical and physical properties of the transition elements are due to their unfilled d orbitals. In the elements of the lanthanide series and the actinide series the inner f orbital is filled as atomic number increases; those elements are often called the inner transition elements.
Transition elements generally exhibit high density, high melting point, magnetic properties, variable valence, and the formation of stable coordination complexes. Their variable valence is due to the electrons in the d orbitals. The study of the complex ions and compounds formed by transition metals is an important branch of chemistry. Many of these complexes are highly colored and exhibit paramagnetism.
Estimating Population Size of a Rare Damselfly to Support Reintroduction Efforts

Over the last year, the National Park Service (NPS), Presidio Trust, and San Francisco Zoo began a captive propagation program for the reintroduction of this damselfly to the Presidio's Mountain Lake. The zoo's propagation protocol has since been honed to perfection and, to date, has resulted in the release of well over 2,000 nymphs from approximately 20 captured adults. Now that this captive breeding program has gotten off the ground, the group is seeking not only to continue the efforts at the lake but also to expand releases to other suitable sites throughout the Presidio. The biggest question now is how many adults can be collected at the source (Fort Point) without significantly impacting that fragile population. A mark-recapture study was carried out in July to address this question. The team, including NPS, Presidio Trust, and zoo staff and interns, focused on males because they are easier to spot and identify. Individuals were captured, their wings delicately numbered with a fine-tipped pen, and then placed in a cooler. The small site was scoured for several hours until all visible males were collected. A total of approximately 40 males were marked and released on the first day. The following day the team again attempted to capture as many males as possible, noting how many were recaptures. From these numbers they were able to estimate the male population size at approximately 117 individuals. Assuming a sex ratio of 1:1 (as supported by local entomologist Dr. John Hafernik), they now have a good idea of the total population size at the source, which will help inform future collection target goals. Contact Jonathan Young with questions about this project.
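The arithmetic behind a two-day mark-recapture survey like this one is the classic Lincoln-Petersen estimator: if M males are marked on day one and a second-day sample of C males contains R recaptures, the population is estimated as N ≈ MC/R. A minimal sketch in Python; note the second-day counts below are hypothetical, chosen only to show how an estimate near 117 could arise from roughly 40 marked males (the article does not report them):

```python
def lincoln_petersen(marked, caught, recaptured):
    """Estimate population size from a single mark-recapture session.

    marked     -- number marked and released on day one (M)
    caught     -- total captured on day two (C)
    recaptured -- how many of the day-two catch carried a mark (R)
    """
    if recaptured == 0:
        raise ValueError("no recaptures: the estimate is undefined")
    return marked * caught / recaptured

# Hypothetical second-day numbers for illustration:
# 40 marked males, 38 caught the next day, 13 of them marked.
estimate = lincoln_petersen(40, 38, 13)
print(round(estimate))  # prints 117
```

The intuition: the fraction of marked animals in the second sample (R/C) is taken as an estimate of the fraction of the whole population that was marked (M/N), and solving for N gives MC/R.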
Thomas J. Goreau Global Coral Reef Alliance, Cambridge, USA Email: firstname.lastname@example.org Received 23 May 2014; revised 26 June 2014; accepted 5 July 2014 Copyright © 2014 by author and Scientific Research Publishing Inc. This work is licensed under the Creative Commons Attribution International License (CC BY). http://creativecommons.org/licenses/by/4.0/ Increasing stress from global warming, sea level rise, acidification, sedimentation, pollution, and unsustainable practices have degraded the most critical coastal ecosystems including coral reefs, oyster reefs, and salt marshes. Conventional restoration methods work only under perfect conditions, but fail nearly completely when the water becomes too hot or water quality deteriorates. New methods are needed to greatly increase settlement, growth, survival, and resistance to environmental stress of keystone marine organisms in order to maintain critical coastal ecosystem functions including shore protection, fisheries, and biodiversity. Electrolysis methods have been applied to marine ecosystem restoration since 1976, with spectacular results (Figures 1(a)-(c)). This paper provides the first overall review of the data. Low-voltage direct current trickle charges are found to increase the settlement of corals 25.86 times higher than uncharged control sites, to increase the mean growth rates of reef-building corals, soft corals, oysters, and salt marsh grass—an average of 3.17 times faster than controls (ranging from 2 to 10 times depending on species and conditions), and to increase the survival of electrically charged marine organisms—an average of 3.47 times greater than controls, with the biggest increases under the most severe environmental stresses. These results are caused by the fundamental biophysical stimulation of natural biochemical energy production pathways, used by all organisms, provided by electrical stimulation under the right conditions.
This paper reviews for the first time all published results from properly designed, installed, and maintained projects, and contrasts them with those that do not meet these criteria.
The wind was only blowing 2 mph on the ground this morning but 10 mph up high, so the cranes and planes did not fly. As the pilots have observed many times this fall, it's typical for the wind to be stronger at higher altitudes. "Winds can be very light near the earth's surface, yet be blowing as much as 150 mph in the 'jet stream' directly above," says Meteorologist Wendell Bechtold. How do scientists know? Weather balloons are one way to measure wind speed and direction. The balloons are launched twice each day at airports and weather stations around the world. As they float up into the atmosphere, the balloons collect data at different altitudes. They can rise as high as 100,000 feet before they burst.
Technology Is Moving Too Slowly to Make Climate-Change Target

The IPCC says we can emit a trillion tons of carbon and still avoid major warming. We'll emit much more.

One of the key findings of the Intergovernmental Panel on Climate Change (IPCC) report released last week was that we need to emit no more than one trillion tons of carbon in order to stand a good chance of limiting global warming to 2 °C. The problem is this: technology is not progressing fast enough to make this happen. The trillion-ton figure is really an estimate, as no one knows precisely how many tons of carbon will raise the temperature of the planet by 2 °C. And less warming than that could cause significant damage, while humans will probably survive higher levels. That said, the number provides one of the clearest ways of thinking about what most climate scientists believe needs to be done to avert serious climate change. Unfortunately, we're on track to hit one trillion tons just 27 years from now. And if we keep increasing the rate of emissions as we have been, we'll hit it even sooner. To avoid this, the world needs to reduce carbon dioxide emissions by 2.5 percent a year, according to data from climate scientists at Oxford University. That figure seems like a nice, manageable one until you realize what order of change would be required to achieve it. One of the most remarkable energy transitions in history happened when, starting in earnest in the 1970s, France went from getting 1 percent of its energy from nuclear power to getting 80 percent over a period of just 30 years. As it replaced fossil-burning plants with nuclear ones, the country reduced emissions by only about 2 percent a year, according to David Victor, co-director of the Laboratory on International Law and Regulation at the University of California, San Diego.
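The reason a fixed percentage cut per year can satisfy a fixed cumulative budget is a geometric series: if annual emissions start at E and fall by a fraction r each year, total future emissions can never exceed E/r. A minimal sketch; the ~10 GtC/yr starting rate is an illustrative assumption on my part, not a figure from this article:

```python
def cumulative_emissions(annual_gtc, decline_rate, years):
    """Sum emissions that start at annual_gtc (GtC/yr) and fall by
    decline_rate each year -- a geometric series whose infinite sum
    is annual_gtc / decline_rate."""
    total, e = 0.0, annual_gtc
    for _ in range(years):
        total += e
        e *= 1.0 - decline_rate
    return total

# Assumed ~10 GtC/yr of fossil carbon today (illustrative round number).
# A sustained 2.5 %/yr decline caps all future emissions at 10 / 0.025
# = 400 GtC, a finite total -- which is why a steady percentage cut,
# started soon enough, can keep cumulative emissions under a budget.
print(cumulative_emissions(10.0, 0.025, 200))  # approaches the 400 GtC ceiling
```

With no decline (r = 0), the sum grows without bound, so the budget is eventually exhausted no matter how large it is; the article's 27-year figure reflects the current, roughly constant emission rate.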
To come in under the limit, the whole world would need to undergo a similar transition even faster. Achieving this in the United States, which is struggling to build any nuclear power plants, is hard to imagine. Its transition toward natural gas power from coal is helping. Increased use of natural gas, plus a recession, led to a reduction in carbon dioxide emissions of 6.7 percent in 2009, well in excess of the rate needed to avoid emitting one trillion tons. Unfortunately, emissions jumped 3.8 percent the following year. Over the last 10 years, on average, emissions decreased by just 1 percent per year. A continued transition to natural gas could help sustain that decrease in the United States for a while. But natural gas power is ultimately a dead end. It can cut emissions in half compared to coal, but it still emits carbon dioxide. “Long-term ‘sustainable’ emissions of fossil carbon are essentially zero,” says Myles Allen, professor of geosystem science at Oxford University, whose research has helped establish the trillion-ton number. Allen doesn’t think we have to shut down all the fossil fuel plants. Instead, he’s for capturing carbon dioxide and storing it. But so far, the technology, which could involve capturing the carbon dioxide from a power plant and injecting it into underground reservoirs, hasn’t been demonstrated at a large scale. “It will take decades to work out which reservoirs leak and which don’t, and we won’t get that information until it is deployed at scale,” he says. Only a few demonstrations of CCS are planned, though, and many of those probably won’t go forward (see “EPA Carbon Regs Won’t Help Advance Technology” and “Cheaper Ways to Capture Carbon Dioxide”). The International Energy Agency has warned that demonstrations are “seriously off pace.” Renewable power is often held up as the way to reduce carbon emissions over the long term. 
But even with fast growth in recent years, wind and solar account for only about 4 percent of electricity in the United States, and reaching much higher levels will bring challenges. The more that wind and solar power are added to the grid, the more utilities need to spend to deal with their intermittency (see “Wind Turbines, Battery Included, Can Keep Power Supplies Stable”). In one sense, wind power isn’t really zero emissions, since it typically requires backup from natural gas power plants. To make matters even more difficult, changes would have to be made not only in the power sector, but also in transportation, where alternatives like nuclear power don’t exist. While nuclear power can offer the same kind of consistent, around-the-clock power as fossil fuels, you can’t yet buy a zero-emissions car with the same performance as a gas-powered one. The closest you can come is probably the Tesla Model S, but that doesn’t go as far per charge as a gas-powered car (265 miles compared to 350 or so), and the existing network of charging options means that in most places in the U.S. you need hours to recharge (“How Tesla Is Driving Electric Car Innovation”). The car also costs $80,000, which means only a few people can afford it. And at any rate, it isn’t really a zero-emissions vehicle because it runs on power from the grid, which comes largely from fossil fuels. Even assuming grid energy gets considerably cleaner over the next two decades, by 2030 electric vehicles will still involve about one-third the emissions of gas-powered cars when you factor in the emissions from power plants, according to a recent analysis from John Heywood, a professor of mechanical engineering at MIT. What’s more, given the performance limitations and costs of electric vehicles, they’re unlikely to go mainstream soon, Heywood says. 
A more realistic scenario, he thinks, is that existing cars will gradually get more efficient as a result of fuel economy regulations, and emissions will be cut in half. But that won’t happen until 2050, in part because cars with existing technology will stay on the roads for decades. Reducing emissions faster will require people to drive less, or expect less from their vehicles. “It’s very difficult because you have to change people’s behavior,” Heywood says. The same probably goes for reducing emissions throughout the economy. But the question isn’t really whether we’ll limit emissions to a trillion tons. It seems inevitable that humankind will blow past that goal. The bigger question is how much more carbon will be emitted, given that several trillion tons remain in the ground, waiting to be extracted and burned.
Absolute Determination of Gravity in Australia for the Purpose of Establishment of Precise Reference Frame for Mean Sea Level Change Monitoring in Southwestern Pacific

Absolute gravity measurements were made at three sites in Australia as the first step of an on-going bilateral collaboration between Japan and Australia for the purpose of monitoring mean sea level change in the southwestern Pacific region. The measurements were taken with 3 Micro-g Solutions FG5 absolute gravimeters: two of the instruments are owned by the Geographical Survey Institute (GSI), Japan, and the third is owned by the Exploration and Mining Division of CSIRO, Australia. Intercomparison of the 3 instruments at Mt. Stromlo over a period of 3 days shows agreement to better than 1 microgal. The standard deviations of a single drop are 12–27 microgals and the formal errors of the determined gravity values are about 1 microgal. The data taken by these instruments clearly show the 2–6 microgal ocean load tidal components, indicating the suitability of these instruments to detect any non-tidal gravity changes on the order of 1 microgal, corresponding to a vertical crustal movement of 3 mm. We intend to continue this project as joint research in the future.

Keywords: tide gauge; gravity change; absolute gravity; Geographical Survey Institute; absolute gravimeter
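The abstract's rule of thumb that a 1-microgal gravity change corresponds to about 3 mm of vertical crustal motion follows from the standard free-air gravity gradient (roughly -0.3086 mGal per metre of elevation gain). A minimal sketch of the conversion:

```python
# Convert a gravity change into an implied height change using the
# standard free-air gradient of about 0.3086 microgal per millimetre.
FREE_AIR_GRADIENT_UGAL_PER_MM = 0.3086

def height_change_mm(delta_g_ugal):
    """Vertical displacement (mm) implied by a gravity change (microgal)."""
    return delta_g_ugal / FREE_AIR_GRADIENT_UGAL_PER_MM

one_ugal = height_change_mm(1.0)  # roughly 3 mm, as quoted in the abstract
```

The real relationship at a given site also depends on local mass redistribution, so this is only the first-order approximation the abstract alludes to.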
Forest Service Study Finds Urban Trees Removing Fine Particulate Air Pollution, Saving Lives

News, Jun 25, 2013

In the first effort to estimate the overall impact of a city’s urban forest on concentrations of fine particulate pollution (particulate matter less than 2.5 microns, or PM2.5), a U.S. Forest Service and Davey Institute study found that urban trees and forests are saving an average of one life every year per city. In New York City, trees save an average of eight lives every year. Fine particulate air pollution has serious health effects, including premature mortality, pulmonary inflammation, accelerated atherosclerosis, and altered cardiac functions. In a study recently published on-line by the journal Environmental Pollution, researchers David Nowak and Robert Hoehn of the U.S. Forest Service and Satoshi Hirabayashi and Allison Bodine of the Davey Institute in Syracuse, N.Y., estimated how much fine particulate matter is removed by trees in 10 cities, their impact on PM2.5 concentrations and associated values and impacts on human health. The study, “Modeled PM2.5 Removal by Trees in Ten U.S. Cities and Associated Health Effects,” is available at: http://www.nrs.fs.fed.us/pubs/43676 “More than 80 percent of Americans live in urban areas containing over 100 million acres of trees and forests,” said Michael T. Rains, Director of the Forest Service’s Northern Research Station and Acting Director of the Forest Products Lab. “This research clearly illustrates that America’s urban forests are critical capital investments helping produce clear air and water; reduce energy costs; and, making cities more livable. Simply put, our urban forests improve people’s lives.” Cities included in the study were Atlanta, Baltimore, Boston, Chicago, Los Angeles, Minneapolis, New York City, Philadelphia, San Francisco, and Syracuse, NY.
Overall, the greatest effect of trees on reducing health impacts of PM2.5 occurred in New York due to its relatively large human population and the trees’ moderately high removal rate and reduction in pollution concentration. The greatest overall removal by trees was in Atlanta due to its relatively high percent tree cover and PM2.5 concentrations. “Trees can make cities healthier,” Nowak said. “While we need more research to generate better estimates, this study suggests that trees are an effective tool in reducing air pollution and creating healthier urban environments.” The removal of PM2.5 by urban trees is substantially lower than for larger particulate matter (particulate matter less than 10 microns – PM10), but the health implications and values are much higher. The total amount of PM2.5 removed annually by trees varied from 4.7 metric tons in Syracuse to 64.5 metric tons in Atlanta, with annual values varying from $1.1 million in Syracuse to $60.1 million in New York City. Most of these values were dominated by the effects of reducing human mortality; the average value per reduced death was $7.8 million. Reduction in human mortality ranged from one person per 365,000 people in Atlanta to one person per 1.35 million people in San Francisco. Researchers used the U.S. Environmental Protection Agency’s BenMAP program to estimate the incidence of adverse health effects, such as mortality and morbidity, and associated monetary value that result from changes in PM2.5 concentrations. Local population statistics from the 2010 U.S. Census were also used in the model. i-Tree, a suite of tools developed by the Forest Service and Davey Institute, was used to calculate PM2.5 removal and associated change in concentrations in the study cities. The mission of the U.S. Forest Service is to sustain the health, diversity, and productivity of the nation’s forests and grasslands to meet the needs of present and future generations. 
The agency has either a direct or indirect role in stewardship of about 80 percent of our nation’s forests; 850 million acres including 100 million acres of urban forests where most Americans live. The mission of the Forest Service’s Northern Research Station is to improve people’s lives and help sustain the natural resources in the Northeast and Midwest through leading-edge science and effective information delivery.
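The kind of health-benefit arithmetic the study reports (mortality reduction rates monetized at an average value per avoided death) can be illustrated with a toy calculation. BenMAP does far more than this; the function names and the example city below are made up for illustration:

```python
# Toy version of the monetized-mortality arithmetic described above.
# The $7.8M figure is the study's average value per reduced death;
# the example population and rate are hypothetical.
VALUE_PER_AVOIDED_DEATH = 7.8e6  # dollars

def avoided_deaths(population, people_per_avoided_death):
    """Expected deaths avoided per year given a per-capita reduction rate."""
    return population / people_per_avoided_death

def monetized_benefit(population, people_per_avoided_death):
    """Annual dollar value of the mortality reduction."""
    return avoided_deaths(population, people_per_avoided_death) * VALUE_PER_AVOIDED_DEATH

# A hypothetical city of 4 million where trees avoid one death
# per 365,000 people (the Atlanta-like rate quoted in the article):
benefit = monetized_benefit(4_000_000, 365_000)
```

This makes clear why the study's dollar values are "dominated by the effects of reducing human mortality": even a handful of avoided deaths per year outweighs the other monetized effects.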
Because water is one of the primary requirements for life as we know it, finding large new reservoirs of frozen water on Mars is an encouraging sign for scientists searching for life beyond Earth. The concealed glaciers extend for tens of miles from edges of mountains or cliffs and are up to one-half mile thick. A layer of rocky debris covering the ice may have preserved the glaciers as remnants from an ice sheet covering middle latitudes during a past ice age. "Altogether, these glaciers almost certainly represent the largest reservoir of water ice on Mars that's not in the polar caps. Just one of the features we examined is three times larger than the city of Los Angeles, and up to one-half-mile thick, and there are many more," said John W. Holt of The University of Texas at Austin's Jackson School of Geosciences, lead author of a report on the radar observations in the Nov. 21 issue of the journal Science. "In addition to their scientific value, they could be a source of water to support future exploration of Mars," said Holt. The gently sloping aprons of material around taller features have puzzled scientists since NASA's Viking orbiters revealed them in the 1970s. One theory contended they were flows of rocky debris lubricated by a little ice. The features reminded Holt of massive ice glaciers detected under rocky coverings in Antarctica, where he has extensive experience using airborne geophysical instruments such as radar to study Antarctic ice sheets. The Shallow Radar instrument on the Mars Reconnaissance Orbiter provided an answer to this Martian puzzle, indicating the features contain large amounts of ice. "These results are the smoking gun pointing to the presence of large amounts of water ice at these latitudes," said Ali Safaeinili, a shallow-radar instrument team member with NASA's Jet Propulsion Laboratory in Pasadena, Calif. The radar's evidence for water ice comes in multiple ways. 
The radar echoes received by the orbiter while passing over these features indicate that radio waves pass through the apron material and reflect off a deeper surface below without significant loss in strength, as expected if the aprons are thick ice under a relatively thin covering. The radar does not detect reflections from the interior of these deposits as would occur if they contained significant rock debris. Finally, the apparent velocity of radio waves passing through the apron is consistent with a composition of water ice. Developers of the Shallow Radar had the mid-latitude aprons in mind, along with Mars' polar-layered deposits, long before the instrument reached Mars in 2006. "We developed the instrument so it could operate on this kind of terrain," said Roberto Seu of Sapienza University of Rome, leader of the instrument science team. "It is now a priority to observe other examples of these aprons to determine whether they are also ice." The buried glaciers reported by Holt and 11 co-authors lie in the Hellas Basin region of Mars' southern hemisphere. The radar has also detected similar-appearing aprons extending from cliffs in the northern hemisphere. "There's an even larger volume of water ice in the northern deposits," said the Jet Propulsion Laboratory's Jeffrey J. Plaut, who reported their presence at a science conference earlier this year. "The fact that these features are in the same latitude bands—about 35 to 60 degrees—in both hemispheres points to a climate-driven mechanism for explaining how they got there." The rocky-debris blanket topping the glaciers has apparently protected the ice from vaporizing as it would if exposed to the atmosphere at these latitudes. "A key question is 'How did the ice get there in the first place?'" said James W. Head of Brown University. 
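The last line of radar evidence — wave velocity consistent with water ice — rests on the fact that radio waves slow down by a factor of the square root of a material's relative permittivity. A minimal sketch with textbook permittivity values (illustrative, not the Shallow Radar team's actual inversion):

```python
# Radio-wave speed in a dielectric: v = c / sqrt(eps_r).
# Water ice has eps_r of roughly 3.15; basaltic rock is much higher,
# so the two produce distinguishable apparent velocities.
import math

C = 299_792_458.0  # vacuum speed of light, m/s

def wave_speed(relative_permittivity):
    """Phase velocity of a radio wave in a low-loss dielectric (m/s)."""
    return C / math.sqrt(relative_permittivity)

ice = wave_speed(3.15)    # about 1.69e8 m/s
basalt = wave_speed(8.0)  # markedly slower, about 1.06e8 m/s
```

Matching the measured two-way travel time through the apron against its thickness gives an apparent velocity; a value near the ice figure rather than the rock figure is what pointed to nearly pure ice under a thin debris cover.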
"The tilt of Mars' spin axis sometimes gets much greater than it is now, and climate modeling tells us that ice sheets could cover mid-latitude regions of Mars during those high-tilt periods," said Head. He believes the buried glaciers make sense as preserved fragments from an ice age millions of years ago. "On Earth," said Head, "such buried glacial ice in Antarctica preserves the record of traces of ancient organisms and past climate history." J.B. Bird | EurekAlert! Subaru Telescope helps pinpoint origin of ultra-high energy neutrino 16.07.2018 | National Institutes of Natural Sciences Nano-kirigami: 'Paper-cut' provides model for 3D intelligent nanofabrication 16.07.2018 | Chinese Academy of Sciences Headquarters For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth. To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength... For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications. Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar... 
Tropical Rainforest
Photo: Biodôme

Distinguishing features
This tamarin sports a large white mane on the top of its head. Its face is black, its back black or brown and its chest whitish. It has a long non-prehensile tail.

Reproduction
In the wild, the female bears a litter of twins once a year. In captivity, reproduction occurs every 28 to 29 weeks. Gestation lasts 140 days. The female reaches sexual maturity at 18 months and the male at 24 months.

Diet
Cotton-top tamarins eat fruit and insects found in the mid-lower strata of the forest. They also lick tree sap. The liquid in their diet comes from the fruit they eat and the morning dew they lick from leaves.

Predators
Their enemies are raptors, snakes and large cats, including ocelots and margays.

Habitat
They are found in three types of habitat in northern Colombia: the Chocó wet tropical forest, the Andean moist forest and the dry thorn forest savannah in the northern coastal plain.

Ecology, behaviour
They live in groups of 2 to 12 individuals, with only one breeding pair per group. The entire family cares for the newborns. These tiny monkeys have a repertory of over 38 different sounds that they use for communicating with one another. They help to disperse the seeds of the fruit they eat. Large numbers of these animals have been exported for biomedical research. Even though they are now protected, preserving their natural habitat, the tropical rainforest, is the only way we can expect to save this species.

French name: Tamarin pinché
Scientific name: Saguinus oedipus
Phylum: Chordata
Class: Mammalia
Order: Primates
Family: Callithricidae
Size: overall length 50 to 67 cm; tail length 30 to 41 cm
Weight: 350 to 510 g
Life span: in captivity, up to 16 years
Status: critically endangered (IUCN); threatened species, protected under a species survival program
Lagrangian and Eulerian specification of the flow field

In classical field theory the Lagrangian specification of the field is a way of looking at fluid motion where the observer follows an individual fluid parcel as it moves through space and time. Plotting the position of an individual parcel through time gives the pathline of the parcel. This can be visualized as sitting in a boat and drifting down a river. The Eulerian specification of the flow field is a way of looking at fluid motion that focuses on specific locations in the space through which the fluid flows as time passes. This can be visualized by sitting on the bank of a river and watching the water pass the fixed location.

The Lagrangian and Eulerian specifications of the flow field are sometimes loosely denoted as the Lagrangian and Eulerian frame of reference. However, in general both the Lagrangian and Eulerian specification of the flow field can be applied in any observer's frame of reference, and in any coordinate system used within the chosen frame of reference. These specifications are reflected in computational fluid dynamics, where "Eulerian" simulations employ a fixed mesh while "Lagrangian" ones (such as meshfree simulations) feature simulation nodes that may move following the velocity field.

In the Eulerian specification, the flow is described by a flow velocity field u(x, t), giving the velocity at each position x and time t. In the Lagrangian specification, on the other hand, individual fluid parcels are followed through time. The fluid parcels are labelled by some (time-independent) vector field x0. (Often, x0 is chosen to be the center of mass of the parcels at some initial time t0. It is chosen in this particular manner to account for the possible changes of the shape over time. Therefore the center of mass is a good parameterization of the flow velocity u of the parcel.) In the Lagrangian description, the flow is described by a function X(x0, t) giving the position of the parcel labelled x0 at time t. The two specifications are related by

    u(X(x0, t), t) = ∂X/∂t (x0, t),

because both sides describe the velocity of the parcel labelled x0 at time t.
Within a chosen coordinate system, x0 and x are referred to as the Lagrangian coordinates and Eulerian coordinates of the flow.

The Lagrangian and Eulerian specifications of the kinematics and dynamics of the flow field are related by the material derivative (also called the Lagrangian derivative, convective derivative, substantial derivative, or particle derivative). Suppose we have a flow field u, and we are also given a generic field with Eulerian specification F(x, t). Now one might ask about the total rate of change of F experienced by a specific flow parcel. This can be computed as

    DF/Dt = ∂F/∂t + (u · ∇)F,

where ∇ denotes the gradient with respect to x, and the operator u · ∇ is to be applied to each component of F. This tells us that the total rate of change of the function F as the fluid parcel moves through a flow field described by its Eulerian specification u is equal to the sum of the local rate of change and the convective rate of change of F. This is a consequence of the chain rule, since we are differentiating the function F(X(x0, t), t) with respect to t.

Conservation laws for a unit mass have a Lagrangian form, which together with mass conservation produce Eulerian conservation; on the contrary, when fluid particles can exchange a quantity (like energy or momentum), only Eulerian conservation laws exist.

See also
- Conservation form
- Contour advection
- Equivalent latitude
- Generalized Lagrangian mean
- Lagrangian particle tracking
- Semi-Lagrangian scheme
- Streamlines, streaklines, and pathlines
- Trajectory (fluid mechanics)

Notes
- Batchelor (1973) pp. 71–73.
- Lamb (1994) §3–§7 and §13–§16.
- Falkovich (2011)

References
- Batchelor, G.K. (1973), An Introduction to Fluid Dynamics, Cambridge University Press, ISBN 0-521-09817-3
- Landau, Lev; Lifshitz, E.M. (1987), Fluid Mechanics, 2nd Edition (Course of Theoretical Physics, Volume 6), Butterworth-Heinemann, ISBN 978-0750627672
- Lamb, H.
(1994), Hydrodynamics (6th ed.), Cambridge University Press, ISBN 978-0-521-45868-9
- Falkovich, Gregory (2011), Fluid Mechanics (A short course for physicists), Cambridge University Press, ISBN 978-1-107-00575-4
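The material-derivative identity DF/Dt = ∂F/∂t + (u · ∇)F can be checked numerically. A minimal one-dimensional sketch, with an illustrative steady flow u(x) = a·x and field F(x) = x², chosen because the pathline X(t) = x0·exp(a·t) is known in closed form:

```python
# Verify DF/Dt = dF/dt + u * dF/dx numerically for u(x) = a*x, F(x) = x**2.
import math

a = 0.5

def u(x):
    """Eulerian velocity field (steady, 1-D)."""
    return a * x

def F(x, t):
    """A generic scalar field with Eulerian specification (steady here)."""
    return x**2

def material_derivative(x, t, h=1e-6):
    """Finite-difference DF/Dt = local rate of change + convective term."""
    dF_dt = (F(x, t + h) - F(x, t)) / h
    dF_dx = (F(x + h, t) - F(x - h, t)) / (2 * h)
    return dF_dt + u(x) * dF_dx

# Follow the parcel labelled x0: its pathline is X(t) = x0 * exp(a*t),
# so F along the parcel is x0**2 * exp(2*a*t), whose exact time
# derivative -- the Lagrangian rate of change -- is 2*a*X(t)**2.
x0, t = 1.0, 0.8
X = x0 * math.exp(a * t)
lagrangian_rate = 2 * a * X**2
difference = abs(material_derivative(X, t) - lagrangian_rate)  # near zero
```

The Eulerian computation (fixed-point derivatives plus the convective term) and the Lagrangian computation (differentiating along the pathline) agree to within finite-difference error, which is the content of the chain-rule argument above.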
The story of an ancient spider and tick would have been lost forever had it not been for the tree resin that dripped from a tree and entombed the creatures. The piece of amber, discovered by a collector, Patrick Müller, who was searching for specimens that might have scientific value, shows a tick that came to an untimely end wrapped in threads of spider silk before being entombed. The specimen was passed on to Jason Dunlop, a scientist at the Museum für Naturkunde in Berlin, who contacted tick expert Lidia Chitimia-Dobler and Paul Selden. Selden, professor of geology at the University of Kansas and director of the Paleontological Institute at the KU Biodiversity Institute and Natural History Museum, commented in a press release: "Ticks already are known from the Burmese amber — but it's unusual to find one wrapped in spider silk. We're not sure if the spider wrapped it in order to eat it later or if it was to get it out of the way and stop it from wriggling and destroying its web." While the team was unable to determine the exact species of spider, the find is exceptionally rare: ticks are seldom found on tree trunks near tree resin, generally waiting on the forest floor for a warm-blooded animal, so there are few natural history specimens like this one, which was recently described in the journal Cretaceous Research. "It's really just an interesting little story — a piece of frozen behaviour and an interaction between two organisms," he said. "Rather than being the oldest thing or the biggest thing, it's nice to be able to preserve some animal interaction and show it was a living ecosystem." Image Credit: University of Kansas
Google once brashly believed its engineers could invent a solution to the world's energy problems. These days, the company has a new strategy: finance less risky clean-energy projects where it can actually make an impact. Last year, Google invested more than ever in renewable power, spending $880 million to underwrite conventional clean energy projects such as solar panels on California rooftops. But that isn't the role Google envisioned for itself in 2007 when co-founder and current CEO Larry Page declared the search company would get into energy research directly to "rapidly" invent cheap ways to generate "renewable electricity at globally significant scale." Google believed its creativity and innovation would make the difference. It created an in-house plan for how to wean the United States off fossil fuel in 22 years. It posted jobs for engineers who could speed up design of renewable energy projects, and put a team to work improving the heliostat, a mirrored device that focuses the sun's rays to make thermal energy. Its philanthropy arm, Google.org, began investing in start-ups with far-out ideas. Google's founders were directly involved. One startup, Makani Power, originally planned to move boats using kites, but Page and co-founder Sergey Brin convinced it to pursue high-altitude flying wind turbines instead. "They were pretty fearless. They said 'This is a risky thing, we don't know yet if it's going to work out, but we think this has promise,'" says Makani CEO Corwin Hardham. The company's speedy ways wowed energy experts, as did its goal of producing a gigawatt of renewable electricity at prices competitive with fossil fuels. "Being at Google, it was fascinating to see how rapidly things could scale. I was enthralled by it," says Dan Reicher, Google's former director of climate change and energy initiatives, who left the company in 2010 to head Stanford University's Steyer-Taylor Center for Energy Policy and Finance.
"That struck me as a very fundamental difference—the software world measures time frames in months." In comparison, he notes, solar panels have been available for 30 years but account for less than one percent of total U.S. electricity production. Last November, however, Google killed the program, known as RE<C (for renewable energy cheaper than coal), along with several other endeavors Google said had hadn't had the results it wanted. Other companies, Google said, were in a better position to advance specific energy technologies. The truth was Google's eclectic bets on potentially disruptive energy innovations never got very far. Take PowerMeter, another canceled project. The software was meant to help homeowners monitor their energy use. Energy entrepreneur Kurt Brown said it had a major flaw: "Their interface was for nerds. It was something mostly a smart Googler would be intrigued by." The cancelled plans show the hazard of believing that success in computing—where products can take days to prototype—can carry over to energy. "The IT attitude is great when combined with humility with what is possible," says Jonathan Koomey, an expert on the environmental effects of computing at Stanford University. "But if you think you are going to overhaul the whole energy industry overnight, just cause you did it in software, that is false, that is hubris." Some people involved directly with the projects said it proved challenging for Google to guide energy research either directly or through startups. "We were aiming for some home runs. I think we got some doubles," said one senior manager who has since left Google. "It's difficult for a company whose sole focus is not innovating in energy to drive really substantial innovation in energy systems." Google hasn't given up on green energy. The company actually spends far more now than it ever did on the engineering projects. In 2011, Google disclosed $880 million in investments in renewables. 
That was about ten times the level it spent in 2010, and puts the search giant among the companies that spend most in the area (BP, by contrast, invested around $1.6 billion). Unlike its earlier engineer-led work that aimed to push forward new technology, Google's strategy now is largely focused on financing the deployment of commercial solar panels and wind turbines through so-called "tax equity" investments. Such investments, typically used by banks or large energy companies, provide a financial return as well as federal tax breaks that can be as much as 30 percent of what's invested. The funding also isn't coming any longer from Google's philanthropic arm, but from its treasury, which is sitting on $44 billion in cash. Google's energy and sustainability director, Rick Needham, is careful to balance how he explains the company's motivations. As an investor, he says Google is looking to make money. But it also still wants to have a transformational impact on "the great American challenge" of securing carbon-free energy, as Google chairman Eric Schmidt once put it. Google's largest single investment to date is the $280 million it agreed to provide to SolarCity, a company based in San Mateo, California that installs residential solar arrays. Lyndon Rive, SolarCity's CEO, says the money is important because his customers only pay small monthly fees. Google's financing – in effect a loan to the project – is what pays for the initial cost of installing the solar panels on homes. Google still works with new energy technology. A number of outside companies pilot or test their technologies at its facilities, and Google continues to invest in some early-stage companies through Google Ventures. It also buys renewable power for its own use.
At its Mountain View headquarters, Google has installed one of the world's largest corporate solar installations and even obtained an energy-trading license from federal regulators so it could directly buy 20-year power contracts with wind farms to power its data centers. "They tried a bunch of things. Some things worked and some things didn't," says Stanford's Koomey. While being a silent partner in a residential solar panel business isn't quite as exciting as solving the world's problems, it's progress. Says Koomey: "What works is the most cost effective way to deliver the end result, which is reduced emissions." This article originally published at MIT Technology Review.
Detecting Pollution with Living Biosensors Color-coded bacteria light the way to oil spills at sea. Last spring, on a research vessel cruising through the North Sea, Swiss scientists examined tiny vials of bacteria mixed with seawater for hints of fluorescent light. By analyzing how brightly the bacteria glowed, and with which colors, they were able to diagnose and characterize the early aftermath of an oil spill. “We were actually very happy that we could do this, and that it turned out so well,” says Jan Van der Meer, an environmental microbiologist at the University of Lausanne, in Switzerland. He announced his team’s results last week at the Society for General Microbiology’s autumn meeting in Dublin. Living biosensors like these bacteria, which are engineered to glow a particular color in response to a given chemical, have graced petri dishes in research laboratories for decades. But it is only recently that they are being put to practical use, as scientists adapt and deploy them to test for environmental contaminants. Sensor bacteria give faster and cheaper–if somewhat less precise–results than traditional chemical tests do, and they may prove increasingly important in detecting pollutants in seawater, groundwater, and foodstuffs. In preparation for their research expedition, Van der Meer and his team created three different strains of bacteria, each tailored to sense a particular kind of toxic chemical that leaches into seawater from spilled oil. They began with different strains of bacteria that naturally feast upon these chemicals, each releasing specialized enzymes when they come in contact with their chemical of choice. By hooking up the gene for a fluorescent or bioluminescent protein to the cellular machinery that makes those enzymes, the scientists effectively created a living light switch: whenever the chemical was present, the bacteria would glow. 
For each class of toxic chemical, Van der Meer used a different color protein, so that he could easily determine which chemicals were present based on the wavelength of emitted light. And whenever possible, he transferred the entire switch mechanism into another strain of bacteria more suited to a highly controlled lab life than its exotic, oil-eating cousins. The research team, working in concert with several other European labs, obtained permission from the Dutch government to create a small, artificial oil spill in the waters of the North Sea. They sampled seawater at various time points after the spill, using a luminometer to measure whether sensor bacteria added to each sample had detected the corresponding chemical. Unlike traditional chemical analyses, which can take weeks and require large, expensive instruments, the biosensor test could be performed on site in a matter of minutes. “Analytical methods can potentially take a long time and a lot of processing,” says Ruth Richardson, a bioenvironmental engineer at Cornell University. “It certainly isn’t something you can do remotely.” Van der Meer adds that bacterial sensing, which is inexpensive compared with chemical methods, could be particularly useful for routine monitoring. “The extreme simplicity of this is that the heart of the sensor is the bacterial cell, and that the cell is a multiplying entity,” says Van der Meer. “It’s extremely simple to reproduce them, and then you have enough for thousands of tests.” Catching an oil leak in its earliest stages is critical for directing appropriate cleanup efforts, says Van der Meer. A spill may not leave a visible trace, in the form of tar, until long after its most toxic effects have come and gone. By allowing for quick and easy detection of spills very soon after they occur, biosensor bacteria may make possible an earlier, more effective intervention. Chemical testing will still likely be necessary, however. 
The bacterial sensors can give a rough estimate of the relative amounts of each chemical class, but only rigorous chemical analysis can determine exactly how much of each substance is present. “We tried to develop this method to be relatively quick, and to give you an overview,” says Van der Meer, adding that biosensors could perhaps identify areas where more-extensive testing is warranted. Van der Meer ultimately hopes to incorporate the glowing bacteria into buoy-based devices, which would continuously monitor seawater for hints of an oil spill and relay pertinent information back to a laboratory. His group is developing microfluidic systems that could maintain a constant, contained population of sensor bacteria to periodically test the waters. Such a device would be subject to the vagaries of living organisms: its usefulness would be entirely dependent on whether the bacteria were alive and thriving. A negative reading could mean that no toxins are present, but it could also mean that the bacteria have died. “If they’re not healthy,” says Richardson, “the system is broken.” Deploying living sensors also raises the risk of releasing genetically altered organisms into the environment. In this case, the chemical-sensing bacteria are theoretically harmless and unlikely to survive long in the harsh open environment. Beyond detecting oil spills, Van der Meer’s group has developed and tested a bacterial strain that detects arsenic in rice. Other potential applications include testing for pollutants in soil and groundwater, and for antibiotics in meat and milk. But for now, his vision for the future of biosensor bacteria remains largely aquatic. “Why not have a robotic fish that swims through the water,” he speculates, “and if it detects something, it could send out a signal by GPS? Technically, I think these things are possible.”
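The "living light switch" logic described above reduces to a simple presence/absence readout per strain. A minimal sketch of that readout step, purely illustrative: the strain names and the detection threshold below are invented for this example and are not from the study.

```python
# Illustrative only: mapping luminometer readings from three hypothetical
# sensor strains to a rough spill report. A strain "lights up" when its
# luminescence rises above a fold-change threshold over an unexposed control.

THRESHOLD = 2.0  # fold-change over control; an assumed, not measured, cutoff

def classify_sample(readings):
    """readings: dict mapping chemical class -> fold-change in luminescence.

    Returns the list of chemical classes whose sensor strain lit up.
    """
    return [strain for strain, fold in readings.items() if fold >= THRESHOLD]

# Hypothetical reading from one seawater sample:
sample = {"alkanes": 5.3, "BTEX": 1.2, "PAHs": 2.8}
print(classify_sample(sample))  # ['alkanes', 'PAHs']
```

As the article notes, this only says which chemical classes are present and roughly how strongly; quantifying them still requires chemical analysis.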
Assume two balls of equal mass, made from the same material, approach each other head on. Both balls have the same speed v. They approach each other with relative speed 2v. As the balls collide, each ball exerts a force on the other. The forces are equal in magnitude but have opposite directions. The balls distort like spherical springs, and the same amount of energy is stored in each ball as elastic potential energy. It will be reconverted into kinetic energy. The force with which ball 1 pushes on ball 2 first decelerates ball 2 to a stop and then accelerates it into a direction opposite its initial velocity. The force with which ball 2 pushes on ball 1 decelerates ball 1 to a stop and then accelerates it into a direction opposite its initial velocity. We expect the two balls to fly apart with equal speeds in opposite directions. If the coefficient of restitution of the two balls is 1, then their speed will not have changed. The total momentum of the two balls is conserved. Now suppose you are looking at this collision from a different reference frame. Suppose as ball 1 approaches ball 2, you are sitting in a chair that rolls with the same velocity right along side ball 1. With respect to you, ball 1 does not move. Ball 2 approaches ball 1 with speed 2v, the relative speed of the balls. After the collision ball 2 has the same velocity as you do. So ball 2 now does not move with respect to you, but ball 1 now moves backward with speed 2v. In your reference frame ball 2 hits a stationary target. It comes to rest and ball 1 leaves the collision with a velocity equal to the incoming velocity of ball 2. Momentum is conserved. Both reference frames are valid reference frames in which to describe the collision. In any reference frame which is not accelerating, i.e. in any inertial frame, Newton's laws are valid. People in different reference frames see different things. They do not agree on the velocity, momentum, or kinetic energy of objects. 
They will, however, always agree on the relative velocity of two objects, and they will always be able to use Newton's laws in their reference frame to explain what they are observing. Assume a baseball hits a stationary bat. If the bat is nailed to the wall of a house and cannot move at all, then the ball will just rebound the same way it rebounds from a hard floor. If the bat is held in the hand of the batter, then the force the ball exerts on the bat will accelerate the bat backwards, and some collision energy will be transferred to the bat and will not appear as rebound energy of the ball. If the batter and the bat are very heavy, they receive little of the collision energy, and the ball rebounds with outgoing speed equal to the coefficient of restitution times the incoming speed. If the batter is swinging the bat forward as the ball hits it, the ball's outgoing speed will be much higher. Assume the bat and the ball each are moving with speed 100 km/h in opposite direction. Their relative speed is 200 km/h. A reference frame in which the bat is stationary is moving with 100 km/h speed with the bat. In this reference frame the ball approaches with 200 km/h and rebounds with speed v = coefficient of restitution times 200 km/h in the forward direction. But this reference frame is moving itself with speed 100 km/h in the forward direction. With respect to the ground the ball is therefore moving with speed vground = 100 km/h + v = 100 km/h + coefficient of restitution * 200 km/h. A baseball heads toward home plate at 100 km/h. The bat heads toward the pitcher at 100 km/h. The relative speed between ball and bat as they are approaching each other is 200 km/h. Assume the baseball's coefficient of restitution is 0.55. Just after the collision the relative speed between ball and bat is 0.55*200 km/h = 110 km/h. The bat still heads towards the pitcher at approximately 100 km/h. The ball moves relative to the bat with a speed of 110 km/h towards the pitcher. 
Relative to the ground, the ball's speed is 210 km/h: the ball heads toward the pitcher at 210 km/h.
Animation: Ball hitting a stationary bat
Animation: Ball hitting a moving bat
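The reference-frame bookkeeping above condenses into a three-line calculation. A minimal sketch, using the same heavy-bat idealization as the text (the bat's speed is taken as unchanged by the collision):

```python
# Outgoing ball speed found by transforming into the bat's rest frame,
# applying the coefficient of restitution, and transforming back.
# Speeds are in km/h; "toward the pitcher" is the positive direction.

def rebound_speed(ball_speed, bat_speed, restitution):
    relative = ball_speed + bat_speed           # closing speed of ball and bat
    rebound_relative = restitution * relative   # rebound speed in the bat's frame
    return bat_speed + rebound_relative         # transform back to the ground frame

# The text's example: 100 km/h ball, 100 km/h bat, coefficient of restitution 0.55.
print(round(rebound_speed(100, 100, 0.55), 2))  # 210.0

# A bat nailed to a wall (stationary, perfectly elastic) just reflects the ball.
print(rebound_speed(100, 0, 1.0))  # 100.0
```

Setting `bat_speed = 0` recovers the stationary-bat case from earlier in the text, where the outgoing speed is just the coefficient of restitution times the incoming speed.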
Principal Features of Chernobyl Hot Particles: Phase, Chemical and Radionuclide Compositions The accident at the Chernobyl Nuclear Power Plant (ChNPP) 4th Unit on 26 April 1986 was accompanied by the destruction of the reactor core and the release of solid and gaseous radioactive products. As a result of the accident, part of the solid radioactive material of the 4th Unit was dispersed by the explosion. The hot particles released settled on the soil surface hundreds of kilometers from ChNPP, in Sweden, Germany [2,7], Poland [2,3,7], Belarus and, in particular, Ukraine [5,6,8]. The size of hot particles varies from one to hundreds of microns. The bulk radioactivity of a single particle, based on the initial activity calculated for 26 April 1986, might differ by hundreds of kBq. While particle size tends to decrease with increasing distance from the 4th Unit, some relatively large particles of 100–300 microns in size were collected 12 km west of ChNPP. Phase, chemical and radionuclide compositions of hot particles are essentially heterogeneous [1–8]. We have suggested dividing all hot particles into two main groups: 1) fuel particles — with a relatively homogeneous matrix consisting of uranium oxides, UO2+x; 2) fuel-constructional particles — with a complex chemical matrix and/or multi-phase composition that is the result of high-temperature interaction between nuclear fuel (UO2+x) and cladding materials such as zircaloy and stainless steel composed of Fe-Cr-Ni. The temperature could have exceeded 2600°C. In some places of the Western Plume in the Chernobyl region, these particles make up to 40% of all hot particles. The radionuclide composition of hot particles depends on the chemical and phase composition of their matrices [1,2,7,8]. Keywords: Chernobyl Accident, Uranium Oxide, Fuel Particle, Chernobyl Nuclear Power Plant, Radioactive Particle - 2. Schubert, P. and Behrend, U. 
(1987) Investigation of radioactive particles from the Chernobyl fall-out, Radiochimica Acta 41, 149–155.
- 4. Petryaev, E.R., Leinova, S.L., Sokolik, G.A., Danilchenko, E.M. and Duksina, V.V. (1993) Composition and properties of radioactive particles detected in the southern district of Belarus, Geohimiya 7, 930–939 (in Russian).
- 5. Burakov, B.E., Anderson, E.B., Galkin, B.Ya., Pazukhin, E.M. and Shabalev, S.I. (1994) Study of Chernobyl “hot” particles and fuel containing masses: implications for reconstruction of the initial phase of the accident, Radiochimica Acta 65, 199–202.
Demands on forests will become stronger and spatially more diversified. Production of wood and other traditional forest resources will have to be balanced against other kinds of goods and services from the forest ecosystems. Europe must develop frameworks capable of addressing all these demands to create optimal forest landscapes in the future while preserving biodiversity. Although preliminary assessments show that the 2010 target of halting the loss of biodiversity will not be met entirely in the forests, Europe has the institutional, legal, financial and information framework in place to make a real difference. The new European Environment Agency report was released last week during a side event at the 9th Conference of the Parties to the Convention on Biological Diversity in Bonn, Germany. The report identifies the state, trends and major pressures on the forest ecosystems across Europe and suggests needed actions and capacity-building for sustainable forest management and safeguarding biodiversity. The European Forest Institute largely contributed to the report as partner in the European Environment Agency’s European Topic Centre on Biological Diversity.
Diel migrations between habitats containing different levels of food abundance are a common phenomenon among marine organisms, both vertebrate and invertebrate. We hypothesize that in many cases this behavior constitutes a response to diel changes in the relationship between potential feeding rates and predation risks in the different habitats. For planktivores that locate their prey by sight (such as juvenile sockeye salmon) and that in turn are subject to predators that use sight to locate them, the diel time profiles of potential feeding rate and predation risk in near-surface waters may be determined largely by the relative densities of prey at the two trophic levels. A simple model of aquatic predation leads us to hypothesize the existence of brief "antipredation windows" for feeding at dawn and dusk. If this hypothesis is valid, then the optimal behavior for pelagic planktivores is to migrate into surface waters to feed during these two daily windows and to migrate to deeper, less illuminated waters during daylight hours. (Our model does not predict any specific nocturnal migration pattern.) Our arguments can also be used to predict optimal migration patterns for contact-feeding zooplankton subject to visual predation. The resulting predictions agree qualitatively with many observed patterns of diel vertical migration.
Is it possible to achieve absolute zero? So it is... Only if you do very poorly on your thermo exam. But how is it possible that there is no place below -273 centigrade degrees in the universe? why do you feel the need for there to be something at absolute zero ? Did you read the link given in post #3 above explaining the conditions at absolute zero ? There are empty areas between galaxies. I mean no sun nearby, not even 100000000000000000000 light years away. Aren't they even below -273.15? Just because of a chart that shows that volume is zero at absolute zero. I can't be convinced there isn't a -273 centigrade degrees out there. This volume thing at 0 point is not even experimental but a chart assumption. I'll ask again ... did you read that wiki link ? about the coldest thing you will find out there is the cosmic microwave background which is about 3 Kelvin (i.e. 3 deg above absolute zero) This permeates the universe everywhere between the galaxies So it pretty much predetermines the coldest temperature you will find You don't need to humiliate me. That's why students don't ask questions of their mentors. Anyway, yeah, I've read it, but then is it not polynomial degradation when you leave the sun for outer space? Because you would reach 0 or even below if it were. Probably you are thinking of exponential degradation. That pattern arises when you have a hot object that transfers heat to an infinitely large cold reservoir and the rate of heat loss is proportional to the temperature difference. The hot object cools down and its temperature asymptotically approaches the temperature of the cold reservoir. Given the cosmic microwave background at 3 degrees Kelvin, the temperature that is approached is 3 degrees Kelvin, not absolute zero. If you were asking questions, I'd have a little more sympathy. But you're not - you are making statements like "This volume thing at 0 point is not even experimental but a chart assumption." 
If you don't like being told your statements are wrong, maybe you should stop making wrong statements. Our lecturer said it is because of the V-T chart. But maybe he didn't want to mention more. I am sorry for my behaviour, I didn't mean to make statements but tried to understand. There could be some exceptions...? ha ha, very funny answer. But seriously, didn't absolute zero come from a theoretical vanishing 'volume' of gas or object? How can anyone justify zero volume of anything? This tells us the theory breaks down before temperature reaches absolute zero. Another way of saying it: temperature below absolute zero is possible in an unknown state of matter. No. Absolute zero has to do with temperature, not volume. It's perfectly possible to describe mathematically a system at absolute zero with finite volume. What chart are you talking about? Do you have a reference? Because there has to be a state of minimum possible energy. Absolute zero is that state. I think they are arguing that all we know about absolute zero came out of Charles' Law, as if no progress was made in the intervening two centuries. Zero kelvin, or -273.15 centigrade, is the temperature at which the atoms/molecules that compose an object would be solely in their ground states. The ground state is the minimum energy level for that atom/molecule, so at absolute zero an object would have the least internal energy possible. There is literally no other internal energy for it to give up. I think that's not the proper way of saying it. It's better to say that, at absolute zero, by definition, particles will have zero energy. But because any system has a ground state energy which is greater than zero, no system can reach absolute zero. I'm sorry, but that's not correct. Drakkith's message is right. No need to be sorry! What I understand from his message is that absolute zero is where the system (of course, a system for which we can define temperature) is in its ground state. 
Now if we say absolute zero can't be reached, we're actually saying that we can't have such a system in its ground state even in principle. That seems wrong to me, but maybe I'm just getting things wrong. Also, as far as I remember from thermodynamics, it's the definition of absolute zero that the motion of all particles stops at this temperature, which means zero energy. No, that's pretty much correct. It isn't possible for objects larger than atoms and perhaps small molecules to reach their ground state through thermodynamic means. If you're talking about the 3rd law of thermodynamics (at least in Nernst's statement), then it's about absolute zero, not the ground state. So what you're saying is equivalent to defining absolute zero as the temperature the object has when it is in its ground state. So this post actually is equivalent to your last post. The chart I was talking about: it is not possible to reach that temperature in labs, but we can estimate zero volume from the chart at -273.15 centigrade degrees if we lengthen the line. Source: http://www.avogadro.co.uk/miscellany/t-and-p/t-and-p.htm [Broken]
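For what it's worth, the chart extrapolation being argued about is easy to reproduce numerically. A quick sketch with made-up volume-temperature data points that follow the ideal-gas trend (illustrative numbers, not real measurements):

```python
# Extrapolating a V-T line to V = 0, as in the Charles's-law chart argument.
# The data are illustrative: V = c * (t_celsius + 273.15) with c = 0.01 L/degree.

temps_c = [0.0, 25.0, 50.0, 100.0]                # temperatures in Celsius
volumes = [2.7315, 2.9815, 3.2315, 3.7315]        # volumes in litres

# Fit a least-squares line V = slope*t + intercept, then solve for V = 0.
n = len(temps_c)
mean_t = sum(temps_c) / n
mean_v = sum(volumes) / n
slope = sum((t - mean_t) * (v - mean_v) for t, v in zip(temps_c, volumes)) \
        / sum((t - mean_t) ** 2 for t in temps_c)
intercept = mean_v - slope * mean_t
t_zero_volume = -intercept / slope

print(round(t_zero_volume, 2))  # -273.15
```

Of course, as the thread points out, real gases liquefy long before this point and the ideal-gas model breaks down; the extrapolated intercept is where absolute zero was historically located, not evidence that volume actually vanishes.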
Researchers report a new ion detector, MARIO. Using it, they show that changes in the intracellular concentration of free magnesium ions (Mg2+) are critical for the chromosome folding that must occur for cells to divide. The findings, which can be read in Current Biology, provide a new mechanism for chromosome organization. Cell division is essential for new cells to form in the body, be it during normal growth or to repair lost or damaged cells. During cell division, chromosomes begin to condense and remain so until the division is complete. A number of proteins in the cell control the condensation, but so too do free ions such as Mg2+. "Chromosomes are negatively charged. Free cations like Mg2+ neutralize the charge so that the chromosomes can condense during cell division," explains the study lead. Although it is known that Mg2+ might have an important role in chromosome rearrangement, quantitatively measuring Mg2+ concentration during cell division has been a challenge. MARIO is based on a calcium indicator known as YC3.60 and is composed of enhanced cyan fluorescent protein, the yellow fluorescent protein VENUS, and a Mg2+-binding domain found in bacteria known as CorA. Mg2+ binding to CorA causes a structural change in MARIO that changes the fluorescence signal. "We could improve MARIO's performance, both in terms of Mg2+ affinity and dynamic range, by truncating CorA and by introducing random mutations into the structure," says the author. Mg2+ itself is abundant in the cell, but not in free form. Instead it is usually captured by ATP. Using MARIO, the researchers found that during cell division, free Mg2+ greatly increases, enabling the chromosomes to condense. The increase peaked during the transition from metaphase to anaphase, which marks the period in cell division that the cell membrane begins showing signs of breaking into two cells. "We found a clear relationship between ATP levels and free Mg2+," says another author. 
"The less ATP, the more Mg2+ and the more chromosome condensation. If we decrease ATP levels, then chromosomes even more condense. We propose a novel mechanism by which dynamics are regulated during cell division, in which ATP-bound Mg2+is released by the hydrolysis of ATP." Because cell division is an energy intensive event, it is presumed that the cell will consume more ATP. "We are not sure what causes the ATP demand. But a number of actions are taken during cell division, so we are not surprised to see Mg2+ levels increase. A number of diseases like cancer are caused by abnormalities in cell division. We expect that understanding how chromosome condensation is regulated will help us understand how these diseases develop and possible ways to treat them," says the author. Free Mg2+ ions regulate chromosome shape - 446 views
It seems another observed galaxy cluster collision has shown the motion of dark matter via lensing. OK, so perhaps this dark matter stuff does actually exist. What was interesting about this observation is that it seems that dark matter not only doesn't interact with normal matter, but seems to barely interact with itself, other than the gravitational effect. So imagine two galaxy clusters colliding, mostly gas friction I suppose, causing the normal matter to crash, or at least change its course. And then there are these two clouds of dark matter which pass right through the event, and themselves, and I suppose start swinging round back from the other side as gravity pulls them back in. It's like shadow mass.
NASA has unveiled the first radar video of an asteroid flyby that sent a space rock half the size of a football field buzzing by Earth last week. The new asteroid flyby video, released Tuesday, shows the asteroid 2012 DA14 as it headed away from Earth over the weekend. The asteroid zipped close by Earth on Feb. 15, when it approached closer to the planet than many communications satellites. Before Friday's flyby, astronomers suspected asteroid 2012 DA14 was about 150 feet across. At its closest point, the asteroid came within 17,200 miles of Earth, but never posed a threat of impacting the planet. Based on the new radar observations, scientists now think asteroid 2012 DA14 is about 130 feet wide at its largest point, NASA officials said in a statement. The new video of the asteroid was made by combining radar observations of 2012 DA14 by NASA's Deep Space Network radio antenna in Goldstone, Calif. The 230-foot antenna captured 72 images of asteroid 2012 DA14, which was about 74,000 miles from Earth at the time, during observing windows on Friday and Saturday. The images have a resolution of about 13 feet per pixel. "The images span close to eight hours and clearly show an elongated object undergoing roughly one full rotation," NASA officials explained. During that eight hours, asteroid 2012 DA14 moved even farther from Earth to a point about 195,000 miles away. Astronomers Lance Benner and Marina Brozovic at NASA's Jet Propulsion Laboratory in Pasadena, Calif., led the radar observing campaign for the asteroid flyby. They planned to conduct a series of follow-up observations on Feb. 18, 19 and 20. Asteroid 2012 DA14 was discovered in February 2012 by amateur astronomers at the La Sagra Observatory in Spain. Its close flyby was determined soon afterward, and astronomers ultimately found that it posed no chance of hitting the Earth. 
NASA scientists and astronomers around the world tracked asteroid 2012 DA14 as it approached Earth over the last week, with the space agency and several groups holding public webcasts to chronicle the space rock's close shave. It was the closest flyby of an asteroid the size of 2012 DA14 that astronomers have known about in advance. Asteroid 2012 DA14 came about 5,000 miles closer to Earth than the fleet of communications satellites that fly in geosynchronous orbits about 22,400 miles above the planet. NASA provided satellite operators with regular updates on the asteroid's position and path in case the satellites would have to be moved clear of the space rock. NASA and a network of scientists around the world regularly monitor the night sky for signs of asteroids that could pose an impact threat to the Earth. By coincidence, a 55-foot meteor exploded over Russia just hours before asteroid 2012 DA14's flyby. The meteor explosion injured nearly 1,200 people in Russia's Chelyabinsk region, shattered windows and damaged thousands of buildings in the area. NASA scientists have said the trajectory of the meteor was different than that of asteroid 2012 DA14, and that the two events were unrelated. Subsequent fireball sightings over the San Francisco Bay Area and Miami were also unrelated. Image courtesy of NASA/JPL-Caltech This article originally published at Space.com.
A group of scientists from the Siberian Federal University (SFU, Krasnoyarsk, Russia) and the Nikolaev Institute of Inorganic Chemistry (NIIC, Novosibirsk, Russia) combined the useful properties of metal phthalocyanines and palladium membranes in order to create active layers in hydrogen detectors. This approach significantly increases the sensitivity of the sensors. The study is reported in the journals Dyes and Pigments and International Journal of Hydrogen Energy. High-sensitivity sensors for detecting various gases are very important for the environment, as they allow qualitative and quantitative assessment of the content of various gases in the air (for example, hazardous carbon monoxide or ammonia). The data obtained help to combat pollution. These sensors also play an important role in medicine. There is a condition called malabsorption: those diagnosed with it exhale more hydrogen. If we make high-sensitivity sensors capable of detecting a small increase in the concentration of hydrogen, this condition can be successfully diagnosed. The detectors discussed in the paper have a three-layered structure. At the bottom lies a substrate (which is also a conducting electrode); a film of phthalocyanines (heterocyclic compounds of dark blue color) is applied to it, and finally palladium is deposited over this film. It is not easy to produce such a sensor. It is necessary first to obtain a thin film of phthalocyanines and then deposit a layer of palladium on top. To deposit this metal, precursors are used (organic compounds that contain palladium atoms). After heating they decompose, the organic fragments evaporate, and the metal atoms form a layer with the required structure and thickness. The sensor works like this: hydrogen easily penetrates palladium and, acting on the surface of the phthalocyanine film, changes its conductivity. "Thin phthalocyanine films are semiconductors themselves. 
And it is from the change in conductivity that we can judge whether hydrogen is "clinging" or not, and in what concentration it is contained in the air", said Pavel Krasnov, Ph.D. in Physics and Mathematics, senior researcher at the Institute of Nanotechnology, Spectroscopy and Quantum Chemistry of SFU. The authors of these articles for the first time obtained and investigated the crystal structure of thin films of palladium phthalocyanines, as well as the way in which this structure is altered by fluorine atoms (acting as substituents). Phthalocyanine is a flat molecule with hydrogen atoms at its edges. The authors have previously shown that the introduction of fluorine atoms into the phthalocyanine structure increases the sensory response (a sensitivity indicator) of these compounds as they interact with gas molecules. Fluorine is a more electronegative element than hydrogen, as a result of which it is able to "pull" more electrons from the other atoms of the phthalocyanine, including the metal atom located in the center. An increase in the positive charge of the metal atom promotes stronger binding of gas molecules, since such a bond arises predominantly through the donor-acceptor mechanism. A gas molecule is an electron donor (it gives electrons), and the metal atom is their acceptor (it attaches them). This hypothesis was confirmed by the SFU scientists with the help of quantum chemical calculations, and by their colleagues from NIIC through experimental work that eventually allowed prototype sensors to be built. Now the scientists plan to continue the project. They would like to test the possibility of using different substrates -- to "plant" phthalocyanines not on electrodes but on carbon structures, i.e., graphene or carbon nanotubes. Such a replacement should give a stronger response and make the sensor more sensitive to hydrogen. How much the sensitivity will grow, only experiments can show. 
The second promising line of research is to make the palladium layer thinner (also in order to improve the response of the sensor).
Like their human hosts, bacteria need iron to survive and they must obtain that iron from the environment. While humans obtain iron primarily through the food they eat, bacteria have evolved complex and diverse mechanisms to allow them access to iron. A Syracuse University research team led by Robert Doyle, assistant professor of chemistry in The College of Arts and Sciences, discovered that some bacteria are equipped with a gene that enables them to harvest iron from their environment or human host in a unique and energy-efficient manner. Doyle's discovery could provide researchers with new ways to target such diseases as tuberculosis. The research will be published in the August issue (volume 190, issue 16) of the prestigious Journal of Bacteriology, published by the American Society for Microbiology. "Iron is the single most important micronutrient bacteria need to survive," Doyle says. "Understanding how these bacteria thrive within us is a critical element of learning how to defeat them." Doyle's research group studied Streptomyces coelicolor, a Gram-positive bacterium that is closely related to the bacterium that causes tuberculosis. Streptomyces is abundant in soil and in decaying vegetation, but does not affect humans. The TB bacteria and Streptomyces are both part of a family of bacteria called Actinomycetes. These bacteria have a unique defense mechanism that enables them to produce chemicals to destroy their enemies. Some of these chemicals are used to make antibiotics and other drugs. Actinomycetes need lots of iron to wage chemical warfare on their enemies; however, iron is not easily accessible in the environments in which the bacteria live, e.g., human hosts or soil. Some iron available in the soil is bonded to citrate, making a compound called iron-citrate. Citrate is a substance that cells can use as a source of energy. Doyle and his research team wondered if the compound iron-citrate could be a source of iron for the bacteria. 
In a series of experiments that took place over more than two years, the researchers observed that Streptomyces could ingest iron-citrate, metabolize the iron, and use the citrate as a free source of energy. Other experiments demonstrated that the bacteria ignored citrate when it was not bonded to iron; likewise, the bacteria ignored citrate when it was bonded to other metals, such as magnesium, nickel, and cobalt. The next task was to uncover the mechanism that triggered the bacteria to ingest iron-citrate. Computer modeling predicted that a single Streptomyces gene enabled the bacteria to identify and ingest iron-citrate. The researchers isolated the gene and added it to E. coli bacteria (which is not an Actinomycete bacteria). They found that the mutant E. coli bacteria could also ingest iron-citrate. Without the gene, E. coli could not gain access to the iron. "It's amazing that the bacteria could learn to extract iron from their environment in this way," Doyle says. "We went into these experiments with no idea that this mechanism existed. But then, bacteria have to be creative to survive in some very hostile environments; and they've had maybe 3.5 billion years to figure it out." The Streptomyces gene enables the bacteria to passively diffuse iron-citrate across the cell membrane, which means that the bacteria do not expend additional energy to ingest the iron. Once in the cell, the bacteria metabolize the iron and, as an added bonus, use the citrate as an energy source. Doyle's team is the first to identify this mechanism in a bacteria belonging to the Actinomycete family. The team plans further experiments to confirm that the gene performs the same signaling function in tuberculosis bacteria. If so, the mechanism could potentially be exploited in the fight against tuberculosis. "TB bacteria have access to an abundant supply of iron-citrate flowing through the lungs in the blood," Doyle says. 
"Finding a way to sneak iron from humans at no energy cost to the bacteria is as good as it gets. Our discovery may enable others to figure out a way to limit TB's access to iron-citrate, making the bacteria more vulnerable to drug treatment." Sara Miller | EurekAlert! World’s Largest Study on Allergic Rhinitis Reveals new Risk Genes 17.07.2018 | Helmholtz Zentrum München - Deutsches Forschungszentrum für Gesundheit und Umwelt Plant mothers talk to their embryos via the hormone auxin 17.07.2018 | Institute of Science and Technology Austria For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth. To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength... For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications. Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar... Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. 
Yesterday he excelled himself, launching a project to create, for the first time, a new life form in the lab. Mr Venter has spoken for years of his dream of designing and building his own living creatures, declaring that he sees no practical scientific obstacle. Yesterday, the dream acquired fiscal reality, with Mr Venter announcing a $3m (£1.9m) US government grant. Ostensibly, the goal is to design a new living organism capable of turning raw materials into hydrogen to solve the world's energy problems. But success would bring extraordinary new possibilities for scientists to alter and augment the natural world, together with difficult ethical, security and philosophical questions about the nature of life. The work will be carried out at the Institute of Biological Energy Alternatives (IBEA) in Maryland. A genome is the collective word for the set of genes carried in the cells of an organism. "We could potentially engineer an organism with the ideal qualities to begin to cope with our energy issues," said Mr Venter. Hamilton Smith, a Nobel prize-winning scientist, will head the lab work. "We have just begun what will probably be long but intellectually challenging work in trying to create a synthetic genome. I am convinced this project can succeed," he said. The programme does not quite meet the Frankenstein test of creating life, as set out by Mary Shelley's hero in the novel of that name: "After days and nights of incredible labour and fatigue, I succeeded in discovering the cause of generation and life; nay, more, I became myself capable of bestowing animation upon lifeless matter." What Mr Venter and Mr Smith plan to achieve is not so much the creation of life from scratch as the creation of a new life form by stripping down the genome of an existing creature. The humble creature in question is Mycoplasma genitalium, a single-celled organism that makes its home in the human genital tract. It is so small that it can only be seen with an electron microscope. 
The plan is to remove more than 200 of its 517 genes so that it becomes, effectively, a new kind of creature - but one that can live and reproduce. The work builds on previous experiments carried out on M. genitalium by Mr Venter and colleagues in the 1990s, when they methodically chipped away the creature's genes one by one to find out the minimum necessary for survival. He then commissioned an ethics panel, including a rabbi and a priest and led by Mildred Cho of Stanford University, to look into whether it was right to create life in this way. They concluded in 1999 that it was. "I'm less worried about the minimal genome project taking off and creating some kind of monster bug than I would be, partly because I have a sense that the scientists are aware of the possible risks of what they're doing," Ms Cho told the Washington Post. Ultimately, another altered organism may be used for hydrogen generation. In the meantime, for satirists, the notion of man solving his thirst for oil and coal using a microorganism living inside his genitals has dizzying possibilities. But the project to create the new life form (Mycoplasma venteri?) has deep scientific and social implications. Once scientists discovered the mechanism of inheritance and reproduction - DNA, the chemical script in which genes are written - it was inevitable that curiosity would drive them to find out what would happen if they altered it. As a result, genetic modification of plants and animals has become commonplace, but a mouse with a single gene removed is still considered a mouse. Mr Venter is taking research to a new phase - making an organism so genetically altered that, although this is bound to be a subjective judgement, it is likely to be considered a new life form. The ultimate step would be to use raw DNA building blocks and chemicals to generate a human-designed, reproducing organism from scratch, a possibility filled with grave dangers for traditional life forms, including humans. 
Even if Mr Venter and Mr Smith succeed, some scientists will question the validity of their achievement, given that the new life form will be a delicate, enfeebled creature, stripped of the abilities acquired over millions of years of evolution, and incapable of living outside laboratory conditions. For the same reasons, however, it is unlikely that any new organism would pose a threat to the public. One of the first genes they will delete is the one enabling M. genitalium to attach itself to human cells. Nevertheless, Mr Venter said he believed there was a risk of giving dangerous details to the malicious. "We'll have a debate on what should be published and what shouldn't," he told the Post. "We may not disclose all the details that would teach somebody else how to do this."
On creating languages: The easy way to do this is to use LEX and YACC, after first specifying your 'basic' grammar. YACC will spit out C code to parse your 'basic' language. You can use this code as your cross compiler, which will run on windoze, Linux, etc. You also need to write the routines to perform the actions that the tokens in your grammar will cause to be called. People can kick out compilers like this in an afternoon. You get a serious compiler this way that can handle any level of parenthetical expressions. There are good tutorials on the web (http://epaperpress.com/lexandyacc/). The point is -- a couple of days with a book on YACC and LEX to understand it will pay off any time you have to parse something -- for the rest of your programming life. The comp.compilers newsgroup, archive, links "Every program has at least one bug and can be shortened by at least one instruction -- from which, by induction, one can deduce that every program can be reduced to one instruction which doesn't work." -- Anon. "The most important thing in the programming language is the name. A language will not succeed without a good name. I have recently invented a very good name and now I am looking for a suitable language." -- D. E. Knuth, 1967 A computer scientist is someone who, when told to "Go to Hell," sees the "go to," rather than the destination, as harmful. -- Dr. Roger M. Firestone
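To make the "handles any level of parenthetical expressions" claim concrete, here is a minimal sketch of a yacc grammar for integer arithmetic. It is not from the original post: the token name NUMBER, the build commands, and the companion calc.l lexer it assumes are all illustrative.

```yacc
/* calc.y -- minimal yacc calculator sketch.
   Assumes a companion lexer (calc.l) that returns NUMBER tokens with
   yylval set to the integer value. Build with something like:
     yacc -d calc.y && lex calc.l && cc y.tab.c lex.yy.c -o calc   */
%{
#include <stdio.h>
int yylex(void);
void yyerror(const char *s) { fprintf(stderr, "%s\n", s); }
%}
%token NUMBER
%left '+' '-'        /* lower precedence */
%left '*' '/'        /* higher precedence */
%%
input : expr '\n'        { printf("= %d\n", $1); }
      ;
expr  : NUMBER
      | expr '+' expr    { $$ = $1 + $3; }
      | expr '-' expr    { $$ = $1 - $3; }
      | expr '*' expr    { $$ = $1 * $3; }
      | expr '/' expr    { $$ = $1 / $3; }
      | '(' expr ')'     { $$ = $2; }   /* recursion handles any nesting depth */
      ;
%%
```

Because `expr` is defined recursively and `'(' expr ')'` is just another production, expressions like `((1+2)*(3+(4*5)))` parse without any explicit nesting limit; the `%left` declarations resolve the operator-precedence ambiguities.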
For years scientists have been working to fundamentally understand how nanoparticles move throughout the human body. One big unanswered question is how the shape of nanoparticles affects their entry into cells. Now researchers have discovered that under typical culture conditions, mammalian cells prefer disc-shaped nanoparticles over those shaped like rods. Researchers are developing a system that uses tiny magnetic beads to quickly detect rare types of cancer cells circulating in a patient's blood, an advance that could help medical doctors diagnose cancer earlier than now possible and monitor how well a patient is responding to therapy. Work undertaken by the project 'Nano health-environment commented database' (NHECD) has culminated in a completely open-access database that incorporates a mechanism for updating the knowledge repository. With high-tech optical tools and sophisticated mathematics, researchers have found a way to pinpoint the location of specific sequences along single strands of DNA, a technique that could someday help diagnose genetic diseases. Atomically thin sheets of hexagonal boron nitride (h-BN) have the handy benefit of protecting what's underneath from oxidizing even at very high temperatures, Rice University researchers have discovered. Nanotechnology is an emerging area that engages almost every technical discipline - from chemistry to computer science - in the study and application of extremely tiny materials. This short course allows any technically savvy person to go one layer beyond the surface of this broad topic to see the real substance behind the very small.
Three landing sites were shortlisted for NASA's Mars 2020 mission. The mission aims to collect rock samples from the red planet for eventual return to Earth. NASA tested the new landing vision system of the Mars 2020 rover, a system that will enable the new rover to react to threats. NASA selected five aerospace companies to develop a Mars orbiter concept each that can be built in time for the 2020 mission. NASA is ready to build the new Mars 2020 rover, which will be more advanced than its predecessor. It can navigate better and identify danger zones; it is also equipped with a microphone, and it will be designed to collect and deposit Martian samples for potential future retrieval missions.
Video length: 6:05. Learn more about Teaching Climate Literacy and Energy Awareness» See how this Video supports the Next Generation Science Standards» Middle School: 1 Disciplinary Core Idea High School: 3 Disciplinary Core Ideas About Teaching Climate Literacy Other materials addressing 7b Other materials addressing 7e Notes From Our Reviewers The CLEAN collection is hand-picked and rigorously reviewed for scientific accuracy and classroom effectiveness. Read what our review team had to say about this resource below or learn more about how CLEAN reviews teaching materials. Teaching Tips - This video would be good as an introduction to a design process or engineering PBL, especially focusing on the adaptations needed in order to remain economically viable in a given region. About the Science - Video addresses Bangladesh's vulnerability to climate change, especially flooding, cyclones, and drought, and how communities are adapting. - Comments from expert scientist: Excellent video showcasing that even though we don't all feel the effects of climate change in the US, there are countries around the world who are truly affected on a day-to-day basis, challenging their survival. - Explains, geographically, how Bangladesh is susceptible to flooding - Describes Bangladesh's adaptations: aquaculture (switching to farming crabs with salt water), drinking water (installing desalination plants and rainwater harvesting systems), and agriculture (growing pumpkins because they are more resilient than other crops) - Explains that climate change adaptation is possible, but is very difficult with the rate at which climate change is occurring About the Pedagogy - Video describes specific strategies to adapt to changing environmental conditions and shows the human side of this aspect of climate change by centering on Bangladesh -- a poor, coastal country that is especially vulnerable to impacts of climate change. 
- Helpful to show social and economic adaptation strategies in response to climate changes. - Could be used as an ethics lesson, since some of the people are purposely flooding the land with saltwater so they can raise more shrimp but, in the process, are further impairing the groundwater. - Shows that climate impacts are happening now and are everywhere, not just in polar regions. Technical Details/Ease of Use - Visual quality good, even in full-screen mode. - A related article accompanies the video. Related URLs These related sites were noted by our reviewers but have not been reviewed by CLEAN. Also available from YouTube: https://www.youtube.com/watch?v=xWG_uzLmuug Next Generation Science Standards See how this Video supports: Disciplinary Core Ideas: 1 MS-ESS3.A1: Humans depend on Earth's land, ocean, atmosphere, and biosphere for many different resources. Minerals, fresh water, and biosphere resources are limited, and many are not renewable or replaceable over human lifetimes. These resources are distributed unevenly around the planet as a result of past geologic processes. Disciplinary Core Ideas: 3 HS-ESS3.C1: The sustainability of human societies and the biodiversity that supports them requires responsible management of natural resources. HS-ESS3.C2: Scientists and engineers can make major contributions by developing technologies that produce less pollution and waste and that preclude ecosystem degradation. HS-ETS1.A2: Humanity faces major global challenges today, such as the need for supplies of clean water and food or for energy sources that minimize pollution, which can be addressed through engineering. These global challenges also may have manifestations in local communities.
A series of tests run by Chinese scientists on an experimental thermonuclear reactor has found "the artificial sun" to be a reliable energy-generating process. Designed to replicate the sun's energy-generating process, the Experimental Advanced Superconducting Tokamak fusion reactor recently garnered positive results in tests being conducted at China's Institute of Plasma Physics, the Chinese news agency Xinhua reported. "The new tests show the reactor is very reliable, and we can repeat the experiments," institute official Wu Songtao said. With tests set to continue until Feb. 10, the experiments will reveal exactly how far the project is from its final goal of creating plasma that can last for 1,000 seconds while giving off its own energy. While many have disputed the project's ability to create such an energy source, Xinhua said many scientists maintain such a fusion reactor could lessen China's energy crisis by providing cleaner, endless energy at a significantly lower cost. Copyright 2007 by United Press International
The view back in time—way back to the origins of the universe—just got clearer. Much clearer. A team of U.S. cosmologists using the BICEP2 telescope at the South Pole announced this week that they have discovered the first direct evidence of the rapid inflation of the universe at the dawn of time, thanks in part to technology developed and built by the National Institute of Standards and Technology (NIST). NIST chip identical to the 16 chips integrated into the BICEP2 telescope camera at the South Pole. Each custom superconducting circuit chip amplifies the electrical signals generated by 32 microwave detectors and assembles them into a sequential time stream. The BICEP2 camera relies, in part, on the extraordinary signal amplification made possible by NIST's superconducting quantum interference devices (SQUIDs). The team of cosmologists from Harvard University, the University of Minnesota, the California Institute of Technology/Jet Propulsion Laboratory (JPL) and Stanford University/SLAC used BICEP2 to observe telltale patterns in the cosmic microwave background—the afterglow of the Big Bang almost 14 billion years ago—that support the leading theory about the origins of the universe. The patterns, so-called "B-mode polarization," are the signature of gravitational waves, or ripples in space-time. These waves are direct evidence that the currently observable universe expanded rapidly from a subatomic volume in the first tiny fraction of a second after the Big Bang. The project was funded by the National Science Foundation. Researchers at NIST's campus in Boulder, Colo., made the custom superconducting circuits, or chips, that amplify electrical signals generated by microwave detectors measuring primordial particles of light. JPL made the detectors. The NIST chips, which along with the detectors are chilled to cryogenic temperatures, also assemble the signals into a sequential time stream that can be read by conventional room-temperature electronics. 
"This is an exciting and important new result, and we are pleased that technology developed at NIST played a role," said physicist Gene Hilton, who was responsible for production of the NIST chips. The 16 NIST chips contain a total of more than 2,000 SQUIDs, which measure the magnetic fields created in coils that carry and amplify the very small currents generated by the detectors. NIST researchers invented a method for wiring hundreds of SQUID signal amplifiers together to make large arrays of superconducting detectors practical—part of the cutting-edge technology that helps make BICEP2 especially powerful. Physicists just celebrated the 50th anniversary of the SQUID, which has broad applications from medicine to mining and materials analysis—and now more than ever, cosmology. For more on the BICEP2 discovery, see the Harvard announcement, "First Direct Evidence of Cosmic Inflation," at http://www.cfa.harvard.edu/news/2014-05. Laura Ost | EurekAlert! Subaru Telescope helps pinpoint origin of ultra-high energy neutrino 16.07.2018 | National Institutes of Natural Sciences Nano-kirigami: 'Paper-cut' provides model for 3D intelligent nanofabrication 16.07.2018 | Chinese Academy of Sciences Headquarters For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth. To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength... For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. 
Curious about how swarms cooperate and are guided to their new homes, Tom Seeley, a neurobiologist from Cornell University, teamed up with engineers Kevin Schultz and Kevin Passino from The Ohio State University; they published their findings on 3 October 2008 in The Journal of Experimental Biology (http://jeb.biologists.org).

According to Schultz, there are two theories on how swarms find the way. In the 'subtle guide' theory, a small number of scout bees, which had been involved in selecting the new nest site, guide the swarm by flying unobtrusively in its midst; near neighbours adjust their flight path to avoid colliding with the guides, while more distant insects align themselves to the guides' general direction. In the 'streaker bee' hypothesis, bees follow a few conspicuous guides that fly through the top half of the swarm at high speed.

Schultz explains that Seeley already had still photographs of the streaks left by high-speed bees flying through a swarm's upper layers, but what Seeley needed was movie footage of a swarm on the move to see whether the swarm was following high-velocity streakers or being unobtrusively directed by guides. Passino and Seeley decided to film swarming bees with high-definition movie cameras to find out how they were directed to their final destination.

But filming diffuse swarms spread along a 12 m length, with each individual on her own apparently random course, is easier said than done. For a start, you have to locate your camera somewhere along the swarm's flight path, which is impossible to predict in most environments. The team overcame this problem by relocating to Appledore Island, which has virtually no high vegetation for swarms to settle on. By transporting large colonies of bees, complete with queen, to the island, the team could get the insects to swarm from a stake to the only available nesting site: a comfortable nesting box.
Situating the camera on the most direct route between the two sites, the team successfully filmed several swarms' chaotic progress at high resolution. Back in Passino's Ohio lab, Schultz began the painstaking task of analysing over 3,500 frames from a swarm fly-by to build up a picture of the insects' flight directions and vertical positions.

After months of bee-clicking, Schultz was able to find patterns in the insects' progress. For example, bees in the top of the swarm tended to fly faster and generally aimed towards the nest, with bees concentrated in the middle third of the top layer showing the strongest preference to head towards the nest. Schultz also admits that he was surprised at how random the bees' trajectories were in the bottom half of the swarm; 'they were going in every direction,' he says. But the bees that were flying towards the new nest generally flew faster than bees that were heading in other directions; they appeared to latch onto the high-speed streakers. All of which suggests that the swarm was following high-speed streaker bees to its new location.
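The frame-by-frame analysis described above boils down to computing, for each tracked bee, its speed and how well its heading aligns with the nest direction, then comparing the top and bottom halves of the swarm. A minimal sketch of that kind of per-bee statistic follows; the sample tracks and the choice of +x as the nest direction are made-up illustrations, not the study's actual data.

```python
import math

# Sketch of per-bee flight statistics, assuming tracked velocities have
# already been extracted from the video frames. Sample data is invented.

def speed_and_alignment(vx, vy, vz):
    """Return (speed, alignment), where alignment is the cosine of the
    angle between the bee's horizontal velocity and the nest direction
    (taken here to be +x)."""
    speed = math.sqrt(vx * vx + vy * vy + vz * vz)
    horiz = math.hypot(vx, vy)
    alignment = vx / horiz if horiz > 0 else 0.0
    return speed, alignment

# (vx, vy, vz, height_fraction): height 1.0 is the top of the swarm
tracks = [
    ( 3.0,  0.2, 0.0, 0.9),   # fast, nest-directed "streaker"
    ( 2.5, -0.3, 0.1, 0.8),
    ( 0.4,  0.5, 0.2, 0.3),   # slow, wandering bee in the lower half
    (-0.2,  0.6, 0.1, 0.2),
]

for vx, vy, vz, h in tracks:
    speed, align = speed_and_alignment(vx, vy, vz)
    layer = "top" if h > 0.5 else "bottom"
    print(f"{layer:6s} speed={speed:4.2f} nest-alignment={align:+.2f}")
```

With real tracks, averaging speed and alignment within horizontal layers is what reveals the streaker pattern: high speed and alignment near +1 in the upper layers, low speed and scattered alignment below.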
Its surface material composition differs in important ways both from those of the other terrestrial planets and from expectations prior to the MESSENGER mission, calling into question current theories of Mercury's formation. Its magnetic field is unlike any other in the Solar System, and huge expanses of volcanic plains surround the north polar region of the planet, covering more than 6% of Mercury's surface. These findings and other surprises are revealed in seven papers in a special section of the September 30, 2011, issue of Science.

Two of the seven papers indicate that the surface material is more like that expected if Mercury formed from similar, but less oxidized, building blocks than those that formed its terrestrial cousins, perhaps reflecting a variable proportion of ice in the initial accretionary stages of the planets. Measurements of Mercury's surface by MESSENGER's X-Ray and Gamma-Ray Spectrometers also reveal substantially higher abundances of sulfur and potassium than previously predicted. Both elements vaporize at relatively low temperatures, and their abundances thus rule out several popular scenarios in which Mercury experienced extreme high-temperature events early in its history. "Theorists need to go back to the drawing board on Mercury's formation," remarked the lead author of one of the papers, Carnegie's Larry Nittler. "Most previous ideas about Mercury's chemistry are inconsistent with what we have actually measured on the planet's surface."

For decades scientists had puzzled over whether Mercury had volcanic deposits on its surface. MESSENGER's three flybys answered that question in the affirmative, but the global distribution of volcanic materials was not well constrained. New data from orbit show a huge expanse of volcanic plains surrounding the north polar region of Mercury. These continuous smooth plains cover more than 6% of the total surface of Mercury.
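To get a feel for what "more than 6% of the total surface" means in absolute terms, a quick back-of-the-envelope calculation using Mercury's mean radius (about 2,439.7 km) gives the area of the northern plains:

```python
import math

# Rough scale of the northern volcanic plains, using Mercury's mean
# radius (~2,439.7 km) and the >6% surface fraction quoted above.
R_MERCURY_KM = 2439.7
surface_area = 4 * math.pi * R_MERCURY_KM ** 2   # ~7.5e7 km^2
plains_area = 0.06 * surface_area                # ~4.5e6 km^2

print(f"total surface: {surface_area / 1e6:.1f} million km^2")
print(f">6% of it:     {plains_area / 1e6:.1f} million km^2")
```

That is roughly 4.5 million square kilometres of continuous lava plains, about half the land area of the United States, which helps explain why the deposit escaped firm characterization until MESSENGER reached orbit.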
Another lead author, James Head of Brown University, said that the deposits appear typical of flood lavas, like those found in the few-million-year-old Columbia River Basalt Group on Earth. "Those on Mercury appear to have poured out from long, linear vents and covered the surrounding areas, flooding them to great depths and burying their source vents." Scientists have also discovered vents, measuring up to 25 kilometers (15.5 miles) in length, that appear to be the source of some of the tremendous volumes of very hot lava that have rushed out over the surface of Mercury and eroded the substrate, carving valleys and creating teardrop-shaped ridges in the underlying terrain.

MESSENGER has also revealed an unexpected class of landform on Mercury, suggesting that a previously unrecognized geological process is responsible for its formation. Images collected during the Mariner 10 and MESSENGER flybys of Mercury showed that the floors and central mountain peaks of some impact craters are very bright and have a blue color relative to other areas of Mercury. These deposits were considered unusual because no craters with similar characteristics are found on the Moon. But without higher-resolution images, the bright crater deposits remained a curiosity. Now MESSENGER's orbital mission has provided close-up, targeted views of many of these craters. The bright areas are composed of small, shallow, irregularly shaped depressions that are often found in clusters, said David T. Blewett, a planetary scientist at the Johns Hopkins University Applied Physics Laboratory (APL) and lead author of one of the Science reports. "The science team adopted the term 'hollows' for these features to distinguish them from other types of pits that are found on Mercury." Hollows have been found over a wide range of latitudes and longitudes, suggesting that they are fairly common across Mercury.
Many of the depressions have bright interiors and halos, and Blewett says the ones detected so far have a fresh appearance and have not accumulated small impact craters, indicating that they are relatively young. "Analysis of the images and estimates of the rate at which the hollows may be growing lead to the conclusion that they are actively forming today," Blewett says. "The old conventional wisdom was that 'Mercury is just like the Moon.' But from its vantage point in orbit, MESSENGER is showing us that Mercury is radically different from the Moon in just about every way we can measure."

Earth, Mercury, Jupiter, Saturn, Uranus, and Neptune all have intrinsic magnetic fields, but MESSENGER found that Mercury's weak field is different. So too are particle acceleration processes in Mercury's magnetosphere, as described in a paper by lead author George Ho of APL. MESSENGER's observations of energetic electrons indicated that their distribution is not consistent with what are known as Van Allen radiation belts, the bands of charged particles that interact with the magnetic field and surround the planets. Mercury's magnetic equator is also well to the north of the planet's geographic equator: the best-fitting internal dipole magnetic field is located about 480 km (298 miles) north of the planet's center.

The team found that sodium is the most important plasma ion contributed by the planet to the magnetosphere. "We had previously observed neutral sodium from ground observations, but up close we've discovered that charged sodium particles are concentrated near Mercury's polar regions, where they are likely liberated by solar wind ion sputtering, which effectively knocks sodium atoms off Mercury's surface," notes the University of Michigan's Thomas Zurbuchen, author of one of the Science reports.
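The 480 km northward offset of the best-fitting dipole is easier to appreciate as a fraction of the planet itself. Assuming Mercury's mean radius of about 2,439.7 km:

```python
# Express the northward offset of Mercury's best-fitting dipole as a
# fraction of the planet's mean radius (~2,439.7 km, an assumed constant).
OFFSET_KM = 480.0
R_MERCURY_KM = 2439.7
fraction = OFFSET_KM / R_MERCURY_KM

print(f"dipole offset: {fraction:.2f} of Mercury's radius (~{fraction:.0%})")
```

The dipole sits roughly a fifth of a planetary radius off-center, which is why the magnetic equator lies so far north of the geographic one.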
"We were able to observe the formation process of these ions, one that is comparable to the manner by which auroras are generated in the Earth's atmosphere near the polar regions." MESSENGER's Fast Imaging Plasma Spectrometer detected helium ions throughout the entire volume of Mercury's magnetosphere. "Helium must be generated through surface interactions with the solar wind," says Zurbuchen. "We surmise that the helium was delivered from the Sun by the solar wind, implanted on the surface of Mercury, and then fanned out in all directions."

"Our results tell us that Mercury's weak magnetosphere provides very little protection of the planet from the solar wind," he continued. "Extreme space weather must be a continuing activity at the surface of the planet closest to the Sun."

"In the history of exploration of our planetary system, the first spacecraft to orbit a planet has always yielded stunning surprises, and MESSENGER has been true to that pattern," notes Carnegie's Sean Solomon, MESSENGER Principal Investigator. "Our first good views of the polar regions, our first high-resolution images, our first continuous observations of the exosphere and magnetosphere, and our first opportunity to collect time-consuming measurements of surface composition have all returned unexpected results. Mercury is not the planet described in the textbooks. Although a true sibling of Venus, Mars, and Earth, the innermost planet has had a much more exciting life than anyone predicted."

MESSENGER (MErcury Surface, Space ENvironment, GEochemistry, and Ranging) is a NASA-sponsored scientific investigation of the planet Mercury and the first space mission designed to orbit the planet closest to the Sun. The MESSENGER spacecraft launched on August 3, 2004, and entered orbit about Mercury on March 18, 2011, to begin a one-year study of its target planet. Dr. Sean C. Solomon, of the Carnegie Institution of Washington, leads the mission as Principal Investigator.
The Johns Hopkins University Applied Physics Laboratory built and operates the MESSENGER spacecraft and manages this Discovery-class mission for NASA. The Carnegie Institution for Science (carnegiescience.edu) is a private, nonprofit organization headquartered in Washington, D.C., with six research departments throughout the U.S. Since its founding in 1902, the Carnegie Institution has been a pioneering force in basic scientific research. Carnegie scientists are leaders in plant biology, developmental biology, astronomy, materials science, global ecology, and Earth and planetary science.

Sean Solomon | EurekAlert!