The Norwegian professor, writer and filmmaker Terje Tvedt, of the Universities of Oslo and Bergen, argues that water has played a unique and fundamental role in shaping societies throughout human history. Speaking at a European Science Foundation and COST conference in Sicily in October, Tvedt proposed that social scientists and historians have long made a serious error by failing to take natural resources into account in their attempts to understand social structures. Water, according to Tvedt, is a unique natural resource for two reasons. First, it is absolutely essential for all societies, because we cannot live without it. Second, it is always the same: whatever you do with water on the surface of the Earth, it re-emerges. "You can destroy or create rivers and lakes," he says, "but you cannot destroy water itself."

How rivers shaped industry

Tvedt used the example of the industrial revolution to show how water can help us understand human history. Historians have proposed two contrasting theories to explain why the industrial revolution started in Europe, specifically in Britain, and not in China, India or Australia. They debate whether it was driven by the specific political ideologies and social structures of Europe at the time, or by the unequal relationship that already existed between Europe and the rest of the world through slavery and colonialism. The two theories can be termed exceptionalism and exploitation, respectively. But according to Tvedt, the structure of the water system can adequately explain why the industrial revolution began in Britain. The early industrial revolution was enabled by the power of water mills and by bulk transport of goods by canal, and Britain's rivers were perfect for both. They provided a good network across the country; all are fairly close to the sea, with good flows throughout the year and not too much silt. Elsewhere in the world, rivers were too silty, too large and uncontrollable, all flowing in the same direction, or too seasonally variable in their flows.

The exclusion of nature from our understanding of society is not a benign, academic problem. "Since World War II, the dominant theories relating to the international aid system have, without exception, disregarded the role of nature," Tvedt says. "Modernisation theory has told us that all societies could develop modernism in the same way, if they just find the right economic instruments." This, he argues, is simply not right.

Global water trade

Another speaker at the conference demonstrated how social scientists are now thinking analytically about natural resources. Maite M. Aldaya, from the University of Twente in the Netherlands, presented the water footprint concept. The water footprint of a product (commodity, good or service) is the volume of freshwater used to produce it, measured at the place where the product is actually produced. The water footprint of an individual, community or business is the total volume of freshwater used to produce the goods and services consumed by that individual or community, or produced by that business. Water use is measured in terms of water volume consumed (evaporated) and/or polluted per unit of time. Developed by Arjen Hoekstra and based on Tony Allan's idea of virtual water, water footprints allow us to visualise the transfer of water that occurs during global trade. The concept produces some shocking facts: the global trade in virtual water is about 1,600 billion cubic metres a year, equivalent to 16% of world water use.
Australia, the driest inhabited continent on Earth, is one of the world's largest exporters of virtual water, whilst northern hemisphere temperate areas such as northern Europe and Japan, where water is plentiful, are importers. The water footprint concept is already being enshrined in national policies as a way of accounting for water use. "Spain has just approved a regulation that requires water footprint analysis in River Basin Management Plans, which Member States need to send to the European Commission regularly from 2009, according to the Water Framework Directive," says Aldaya. "So it is now compulsory to calculate water footprints of the different socioeconomic sectors in Spain."

Thomas Lau | alfa
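As a rough illustration of the accounting involved, a product's water footprint is commonly tallied as the sum of blue water (surface and groundwater consumed), green water (rainwater consumed) and grey water (the volume needed to assimilate pollutants). The sketch below assumes that decomposition; the figures are invented placeholders, not values from the article.

```python
# Minimal sketch: totalling a product's water footprint from its
# component volumes. All figures are hypothetical illustrations.

def water_footprint(blue_m3, green_m3, grey_m3):
    """Return the total water footprint in cubic metres."""
    return blue_m3 + green_m3 + grey_m3

# Hypothetical example: one tonne of a cereal crop.
total = water_footprint(blue_m3=300.0, green_m3=1100.0, grey_m3=200.0)
print(f"Total water footprint: {total:.0f} m3 per tonne")  # 1600 m3
```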
Species Detail - Rose-ringed Parakeet (Psittacula krameri) - Species information displayed is based on all datasets. Terrestrial Map - 10km: distribution of the number of records recorded within each 10km grid square (ITM). Marine Map - 50km: distribution of the number of records recorded within each 50km grid square (WGS84). Invasive Species >> High Impact Invasive Species. 12 February (recorded in 2013); 11 October (recorded in 2012). National Biodiversity Data Centre, Ireland, Rose-ringed Parakeet (Psittacula krameri), accessed 20 July 2018, <https://maps.biodiversityireland.ie/Species/11749>
AIPMT-NEET Biology aspirants, read the next AIPMT-NEET Biology study material/notes on Classification. Here we will learn about classification and the taxonomic categories/ranks important for AIPMT-NEET Biology. Free online notes for AIPMT-NEET.

In biology, classification is the process of grouping living organisms into convenient categories based on some easily observable characters.
- The basic unit of classification is the species.
- It involves recognition of species and the placing of species in a system of higher categories (taxa).
- Classification is not a single-step process but involves a hierarchy of steps, in which each step represents a rank or category. These categories are called taxonomic categories, and the overall arrangement of all categories together constitutes the taxonomic hierarchy.
- The scientific term for a category is taxon (plural: taxa); that is, a taxon is a taxonomic group of any rank.
- The study of all living organisms has led to the development of seven essential taxonomic categories, namely: Kingdom, Phylum or Division (for plants), Class, Order, Family, Genus, Species.
- Species is the lowest category. A group of individual organisms with fundamental similarities is considered a species.
- Genus is a group of closely related species, or a group of species that have descended from a common ancestor. Species within a genus share more characters with one another than with species of other genera.
- Family is a group of related genera, with fewer shared similarities than a genus or a species.
- Order is a group of related families. Order and the higher taxonomic categories are identified based on aggregates of characters.
- Class is a group of related orders.
- Classes with a few similar characters are placed into a higher category called a Phylum; that is, a phylum is a group of related classes.
- The highest category is Kingdom. A kingdom includes organisms of different phyla; that is, related phyla are grouped as a kingdom. For example, Kingdom Animalia has animals of different phyla.
- As we go higher from species to kingdom, the number of common characteristics decreases.
- The lower the taxon, the more characteristics its members share.
- The higher the category, the greater the difficulty of determining the relationship to other taxa at the same level.
- Classification of humans (binomial name: Homo sapiens): Kingdom — Animalia; Phylum — Chordata; Class — Mammalia; Order — Primata; Family — Hominidae; Genus — Homo; Species — Homo sapiens.
- These seven categories are considered essential to define the relationship of a given organism.
- The name of a family is formed by adding -idae to the stem of the name of one of the genera in the group.
- NOTE: Kingdom > Phylum > COF (Class, Order, Family) > Genus > Species
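Because the hierarchy is strictly ordered, it maps naturally onto a simple data structure. A minimal sketch follows; the rank names and the human classification are taken from the notes above, while the dictionary representation itself is just one convenient choice.

```python
# The seven essential taxonomic categories as an ordered hierarchy,
# illustrated with the classification of humans from the notes above.

TAXONOMIC_RANKS = ["Kingdom", "Phylum", "Class", "Order",
                   "Family", "Genus", "Species"]

human = {
    "Kingdom": "Animalia",
    "Phylum": "Chordata",
    "Class": "Mammalia",
    "Order": "Primata",
    "Family": "Hominidae",
    "Genus": "Homo",
    "Species": "Homo sapiens",
}

# Printing top-down runs from the broadest category (most organisms,
# fewest shared characters) to the narrowest (fundamental similarities).
for rank in TAXONOMIC_RANKS:
    print(f"{rank}: {human[rank]}")
```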
GALLEX, or the Gallium Experiment, was a radiochemical neutrino detection experiment that ran between 1991 and 1997 at the Laboratori Nazionali del Gran Sasso (LNGS). The project was performed by an international collaboration of French, German, Italian, Israeli, Polish and American scientists led by the Max-Planck-Institut für Kernphysik, Heidelberg. It was designed to detect solar neutrinos and to test theories of the Sun's energy generation mechanism. Before this experiment (and the SAGE experiment that ran concurrently), there had been no observation of low-energy solar neutrinos. The experiment's main components, the tank and the counters, were located in the underground astrophysical laboratory Laboratori Nazionali del Gran Sasso in the Italian Abruzzo province, near L'Aquila, beneath the 2,912-metre-high Gran Sasso mountain. Its position under a depth of rock equivalent to 3,200 metres of water was important for shielding from cosmic rays. The laboratory is accessible via the A24 highway, which runs through the mountain. The 54 m³ detector tank was filled with 101 tonnes of gallium trichloride-hydrochloric acid solution, containing 30.3 tonnes of gallium. The gallium in this solution acted as the target for a neutrino-induced nuclear reaction, which transmuted it into germanium through the following reaction:

νe + 71Ga → 71Ge + e−

The threshold for neutrino detection by this reaction is 233.2 keV, and this is the reason gallium was chosen: other reactions (as with chlorine-37) have higher thresholds and are thus unable to detect low-energy neutrinos. In fact, the low energy threshold makes the reaction with gallium suitable for detecting the neutrinos emitted in the initial proton fusion reaction of the proton-proton chain, which have an upper energy limit of 420 keV. The germanium-71 produced was chemically extracted from the detector and converted to germane (71GeH4). Its decay, with a half-life of 11.43 days, was detected by counters; each detected decay corresponded to one detected neutrino. The rate of neutrinos detected by this experiment agreed with standard solar model predictions. Thanks to the use of gallium, it was the first experiment to observe solar initial pp neutrinos. Another important result was the detection of a smaller number of neutrinos than the standard model predicted (the solar neutrino problem); the amount did not change after detector calibration. This discrepancy has since been explained: such radiochemical neutrino detectors are sensitive only to electron neutrinos, not to the second- and third-generation neutrino flavours, and the oscillation of electron neutrinos emitted from the Sun, between the Earth and the Sun, accounts for the deficit. After the end of GALLEX, its successor project, the Gallium Neutrino Observatory (GNO), was started at LNGS in April 1998. A similar experiment detecting solar neutrinos using liquid gallium-71 was the Russian-American Gallium Experiment, SAGE.
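The counting arithmetic follows directly from the quoted half-life. A minimal sketch below uses the 11.43-day figure from the text; the extracted-atom count and the counting windows are invented purely for illustration.

```python
# Decay arithmetic behind the counting stage: the fraction of 71Ge
# atoms that decay within a given window after extraction.

HALF_LIFE_DAYS = 11.43  # germanium-71, as quoted in the text

def fraction_decayed(days):
    """Fraction of 71Ge atoms that decay within `days` of extraction."""
    return 1.0 - 2.0 ** (-days / HALF_LIFE_DAYS)

# Hypothetical: 10 atoms of 71Ge extracted after an exposure run.
n_extracted = 10
for window in (11.43, 30.0):
    print(f"{window:5.2f} days: expected decays = "
          f"{n_extracted * fraction_decayed(window):.1f}")
# Half of the atoms decay within one half-life (~11.4 days), and
# roughly 84% within 30 days, so counting windows of a few weeks
# capture most of the signal.
```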
UMass Amherst researchers find Chilean salt flat drains a surprisingly vast area

A recent research report about one of the largest lithium brine and salt deposits in the world, in Chile's Atacama Desert, by geoscientists from the University of Massachusetts Amherst is the first to show that water and solutes flowing into the basin originate from a much larger than expected portion of the Andean Plateau. UMass Amherst graduate student Lilly Corenthal making notes at one of the largest lithium brine and salt deposits in the world, a deposit 3,900 feet thick in Chile's Atacama Desert, with the Andes Mountains in the background. The basin drains a surprisingly larger area of the Andean Plateau than geoscientists had expected. Credit: UMass Amherst. The astonishingly massive evaporite deposit, 3,900 feet (1,200 m) thick, appears to be draining an area far larger than a map-based or topographic watershed would suggest, says lead hydrologist David Boutt. The brine volume present, contrasted with the relatively small surface drainage in such an arid area, poses fundamental questions about both the hydrologic and solute budgets at plateau margins, that is, the relationship between input and accumulation, the authors say. Their answers should aid understanding of the water and mineral resources in one of the world's driest regions. As Boutt explains, "The amazing finding is the fact that most of the water is originating from outside the topographic watershed, on the Andean Plateau, and it's draining an area four or five times bigger than the watershed. There is no outlet to this basin and it is capturing an unbelievably huge volume of water in an otherwise extremely arid environment." Details appear in a recent early online edition of Geophysical Research Letters. Boutt and first author Lilly Corenthal, his former graduate student, say the physical and chemical connections between active tectonics, slopes, discharge zones and aquifers are not well characterized. In fact, they do not yet understand the conditions under which the massive evaporite deposit formed. Thus, the Chilean salt flat, Salar de Atacama, provides "a unique case-study to investigate questions about sub-surface fluid flow on the margins" of the Central Andean Plateau and others like it where mountain building forces are still active, they point out. A drainage area that is several times larger than the topographic catchment is more common than people think, Boutt notes. "You can't assume that the surface catchment and ground water catchment are the same, and it tends not to happen in humid areas. But in dry areas, and this is the driest non-polar desert in the world, the difference can be extensive, as it is in this case. And, this water is very, very old," he adds. In such closed basins, high concentrations of mineral deposits, in particular lithium brine, represent an increasingly important resource in high global demand. The researchers collected 300 samples of freshwater and brine to analyze how much sodium is entering the basin. Boutt says, "knowing something about how much sodium is there now can help us reconstruct how much water must have been coming in over the 7 to 10 million years as the Andes plateau uplift was taking place. The high elevation regions of the Andes are like wicks pulling water out of the atmosphere and putting it into the basin," he adds.
They also used satellite precipitation data to "backsolve" the brine's origins using sodium concentrations and oxygen and hydrogen isotopes, as the isotopic composition of water reflects the condensation temperature and precipitation rate over time. The main controls are the source of the moisture, the condensation temperature, and whether or not the water has experienced evaporation, Boutt notes.

Janet Lathrop | EurekAlert!
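Boutt's remark that the sodium stored today constrains how much water must have flowed in can be made concrete with a back-of-envelope mass balance: in a closed basin with no outlet, every kilogram of dissolved sodium delivered by inflowing water stays behind as the water evaporates. The sketch below assumes that simple budget; every number in it is a hypothetical placeholder, not a value from the study.

```python
# Back-of-envelope sodium mass balance for a closed basin: infer the
# long-term average water inflow from the sodium now in storage.
# All numbers are hypothetical illustrations, not study values.

SECONDS_PER_YEAR = 3.156e7

def mean_inflow_m3_per_s(na_stored_kg, na_conc_kg_per_m3, years):
    """Average water inflow needed to deliver the stored sodium."""
    total_water_m3 = na_stored_kg / na_conc_kg_per_m3  # water required
    return total_water_m3 / (years * SECONDS_PER_YEAR)

# Hypothetical: 1e14 kg of sodium accumulated over 8 million years,
# carried by inflow containing 0.05 kg of dissolved sodium per m3.
q = mean_inflow_m3_per_s(na_stored_kg=1e14, na_conc_kg_per_m3=0.05,
                         years=8e6)
print(f"Implied mean inflow: {q:.1f} m3/s")  # ~8 m3/s for these inputs
```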
ScienceTake | Building a Rescue Roach

Humans may never be able to exterminate cockroaches, but they may be able to learn something from them. The insects might even offer a new model for robots that crawl through rubble to look for survivors or gather information after disasters, according to scientists at the University of California, Berkeley, who reported Monday in Proceedings of the National Academy of Sciences on the extraordinary abilities of the American cockroach, known affectionately to millions of apartment dwellers as the water bug. Kaushik Jayaram and Robert J. Full, who work on technology inspired by biology, were drawn to the roaches, which average about an inch and a half in length, because of two abilities for which the creatures are renowned. They are fast. And they can get through very small cracks. The scientists put the insects through a kind of roach Olympics. They ran them through crevices and very small spaces, then even smaller and smaller spaces. And they squashed them flat: not completely, absolutely dead flat, but they subjected them to pressure equivalent to 900 times their body weight. The bugs could get through a crevice the height of two stacked pennies in less than a second. They survived the squashing. And even when pressed flat enough that their legs splayed, they could still move about 20 body lengths a second. Obviously, Dr. Full said, not only can they get behind your walls and ceilings, but "they can run at high speeds" once they get there. They can tolerate flattening not because the roach exoskeleton is soft, but because it is composed of rigid plates connected by more flexible tissue. When flattened, they resort to a kind of locomotion that hasn't been studied before, Dr. Full said. The splayed legs get enough grip to move the body forward rapidly against the friction it faces from being pressed between surfaces. Dr. Full said that the value of the research is that the cockroach structure and motion could work better than other, more wormlike designs for robots that explore sites where war or natural disaster has caused buildings to collapse. "This is the model for soft robots," he said. To prove the point, he and Dr. Jayaram built a palm-size prototype with rigid plates connected by flexible membranes and legs that can work in standard running and in the splayed crawling that flattened cockroaches do. No cockroaches were harmed during the experiments, Dr. Full said, not even the ones subjected to the highest pressures. "We actually ran them and flew them before and after the measurements," he said, and there was no difference in performance. That seems only fair; roaches could end up being the source of inspiration for machines that save people's lives. Besides, who wants to hurt a cockroach?
A small asteroid passed by the Earth at a distance 20 times closer than the Moon. The close approach enabled experts to view the space rock through a telescope. Scientists and space fans may have been looking at the skies for so long that they seem to forget Earth has its own share of wonders as well: researchers at NASA have just discovered quite a rare find in the Caspian Sea - a gigantic rare find, at least. NASA handed back precious moon samples to a woman who purchased them at auction; the government said they were mistakenly labeled, causing the confusion. NASA is studying whirlwinds on Mars that were captured by the Curiosity rover. These winds are capable of transporting dust, thus affecting the planet's landscape. Elon Musk is sending two humans to the moon in 2018; the two-person team allegedly paid a significant amount of money for the seats. NASA is planning to send a robot to one of Jupiter's moons, Europa. This time, however, it is not for sight-seeing or assessing the moon's atmosphere, but to actually look for alien life. Cassini's ring-grazing orbits resulted in close-up images of the hardy objects that create a disruptive pattern in Saturn's rings. Orbital collisions are what NASA is trying to avoid in space, and if most missions launch in the 2020 Mars window, there will be more spacecraft to regulate at Mars. NASA unveiled its Trappist-1e retro travel poster, available for free download in different sizes. The first solar eclipse of the year is set to grace the skies this weekend. It may be time to stop talking about Mars and exoplanets for a second and venture further into the inner Solar System: it appears scientists are now turning their sights to exploring Venus once more. Pluto, its features, and its moons will soon get their own official names after the New Horizons team got its naming themes approved. The International Astronomical Union approved the themes and is now waiting for the specific names from New Horizons scientists. The SpaceX Dragon cargo ship arrived a day late at the ISS due to a GPS error; the incorrect reading prevented the ISS robotic arm from capturing the spacecraft. NASA and astronomers worldwide will now work towards proving whether or not the planets of Trappist-1 are indeed habitable. Experts expect various missions and ground-based observations to focus on the newly discovered star system.
Large barrages have been constructed on the main rivers in South Korea to store water and mitigate fluvial flood damage. However, the increase in water levels behind the barrages can potentially lead to a rise in groundwater levels in the riversides. The purpose of this study was to describe the effect of a barrage on groundwater levels and to test the applicability of a numerical model to groundwater inundation in this context. The Shincheon–Baekcheon catchment is characterised mainly by agricultural land use and includes significant greenhouse cultivation. Its two zones, the lower A and upper B basins, mainly yield fine- and coarse-grained deposits, respectively. Trend and distribution analyses of manual and automatic measurements of groundwater levels indicated that: (1) groundwater levels generally increased as river water levels rose after the river was dammed; (2) the significant correlation between groundwater and river water levels means that groundwater levels could be reduced by opening the barrage gates as a control measure; and (3) lowering high groundwater levels during dry seasons is important for preventing soil wetting in the riversides.

Gyoo-Bum Kim, Eun-Jee Cha; Assessment of riverside groundwater flood risk induced by high river water levels using a numerical model and monitoring data. Water Science and Technology: Water Supply, 1 April 2016; 16 (2): 388–401. https://doi.org/10.2166/ws.2015.144
Surface Relief Due to Bainite Transformation at 473 K (200 °C)

Extremely thin plates of bainitic ferrite can now routinely be induced in steels by heat treatment at low homologous temperatures. Given the atomic mechanism by which the transformation occurs, the morphology should be dominated by the minimization of the strain energy due to the displacements necessary to accomplish the change in crystal structure when austenite decomposes into bainite. Experiments were conducted using atomic force microscopy in an attempt to characterize these displacements, with the surprising outcome that the shear strain is much larger than that associated with conventional, coarser bainitic structures. It appears that this might explain why the plates of bainitic ferrite tend to be slender in this new class of nanostructured alloys.

Keywords: Austenite, Shear Strain, Cementite, Bainite, Bainitic Ferrite

1 Introduction
Novel steels containing high concentrations of carbon and silicon were developed by Caballero and co-workers,[1, 2, 3] and the international activity in this developing field was recently reviewed; the most recent publication is by Hu and Wu.[5] The steels are a development of carbide-free bainitic alloys, in which silicon is used to suppress the precipitation of cementite from the residual austenite during the course of the bainite transformation.[6,7] A most significant feature of the alloy design is that the bainite is able to form at unconventionally low temperatures, resulting in a structure of incredibly fine plates of bainitic ferrite in a matrix of carbon-enriched retained austenite, which produces a dramatic increase in hardness and strength while at the same time achieving reasonable fracture toughness and ductility. The material is now commercially available, and the manufacturing process is established to the point where many hundreds of tonnes have been produced successfully. The very fine structure offers an intriguing opportunity to explore the fundamental characteristics of bainite formed at temperatures as low as a quarter of the homologous temperature, where the diffusion distance of atoms on substitutional sites is negligibly small over the time scale of the transformation. A number of revealing experiments have been reported using techniques such as the atom probe and in-situ measurements using high-energy X-ray sources.[9, 10, 11, 12, 13] The plates of bainite that form at the low temperatures are some 20 to 40 nm in thickness; their shape deformations have yet to be characterized and may hold a clue to explaining why they are so thin. Optical interference techniques, which have been used successfully to look at the surface relief due to coarse plates of martensite, cannot be applied even to ordinary bainite, because the plate thickness, typically 0.2 μm, is below the wavelength of the light, so any relief observed is an average due both to the bainite plates and to intervening phases such as cementite and austenite. Sandvik resolved this issue by looking at the displacement of austenite twin boundaries by plates of bainitic ferrite using transmission electron microscopy, and reported the shear strain associated with an individual plate to be 0.22. Swallow and Bhadeshia used atomic force microscopy to determine the shear strain to be about 0.26; theory predicts values in the range 0.22 to 0.28.
Table: observed values of the invariant-plane strain for a variety of transformations (table not reproduced here).

2 Experimental Procedures
Samples were machined in the form of square-sectioned 4 mm × 4 mm × 30 mm rods, from an alloy of composition Fe-0.79C-1.59Si-1.94Mn-1.33Cr-0.3Mo-0.11V wt pct. They were then metallographically polished on all their surfaces to 1 μm, cleaned with high-purity ethanol, and sealed in partially evacuated quartz tubes flushed with argon in order to conduct the austenitization and isothermal transformation heat treatments. All of these measures help reduce contamination and oxidation of the polished surface; to further reduce surface deterioration, titanium powder was added to the tubes in order to getter any remaining traces of oxygen. Austenitization was carried out for 15 minutes at 1473 K (1200 °C) before isothermal transformation at 473 K (200 °C) for 7 days, which is sufficient time that the microstructure can be expected to be a mixture of bainite and austenite based on previous results. A SPA-300 (Seiko Instruments Inc., Chiba, Japan) atomic force microscope was operated in contact mode with a 20-μm scanner table and a force reference of 1.95 nN. Images were acquired with 512 × 512 pixel resolution; the maximum scan speed used for imaging was 1 Hz, and this was reduced in order to prevent artefacts due to excessive "jumping" of the tip caused by scanning over the surface. The voltage sensitivity for the vertical dimension (nm mV−1) was calculated automatically based on the dimensions and resonant frequency of the cantilever, which were supplied by the manufacturer, who measured the dimensions of each tip. The quoted accuracy of the frequency is 10 pct, resulting in a 10 pct error in the measurement of the vertical dimension. Horizontal dimensions were not calibrated; however, it is reasonable to believe that the errors from surface artefacts and vertical calibration will be much larger than the error in measuring the horizontal dimensions.

3 Results and Discussion
The true shear s due to an individual plate, and indeed the thickness of the plate, can only be measured directly if the habit plane of the plate is normal to the plane of observation. The plates, however, will generally be inclined to the surface, so that the measurement represents an apparent shear component s_A, which was determined using the gradient of the scan across the plate, s_A = height/width. There is a further complication that the austenite adjacent to the bainite plate may relax by plastic deformation. The side corresponding to the bainitic ferrite can be identified from the shape of the lenticular plate in the context of adjacent plates (Figure 5). Furthermore, the angle of shear of adjacent plates should be similar, because they should have similar crystallographic orientation within a bainite sheaf. Plastic accommodation in the austenite involves multiple slip systems, resulting in intense dislocation tangles and irregular relief. It should be noted that in the experiments reported here, the observation plane and specimen surface were both horizontal. Because of the different magnifications of the vertical and horizontal scales, simply looking at the 3-D representations and the topographic line scans can give an exaggerated view of the magnitude of the features, as demonstrated by comparing Figures 3(a) and (b).
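As defined above, the apparent shear is simply the gradient read off a topographic line scan across a plate. The following is a minimal sketch of that measurement; the scan data here are synthetic placeholders, not AFM measurements.

```python
# Apparent shear from a topographic line scan across a plate:
# s_A = (surface upheaval height) / (apparent plate width).
import numpy as np

def apparent_shear(x_nm, z_nm):
    """Apparent shear: total rise across the plate over its width."""
    height = z_nm.max() - z_nm.min()
    width = x_nm[np.argmax(z_nm)] - x_nm[np.argmin(z_nm)]
    return height / abs(width)

# Synthetic ramp mimicking a tilted relief ~25 nm high over ~90 nm.
x = np.linspace(0.0, 90.0, 512)
z = np.clip(x * 0.28, 0.0, 25.0)
print(f"s_A = {apparent_shear(x, z):.2f}")  # ~0.28 for this ramp
```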
If the shear component is around 0.25 to 0.28, the features expected due to shearing should be 3.5 to 4 times wider than they are high; it may therefore be more useful, for interpretation of the data, to plot the data with a 1:1 ratio between the scales of height and horizontal displacement, as demonstrated in Figures 4 and 5. Plotting the observed values against the plate width shows the tendency implied by these equations (Figure 6). If all the plates are of equal true thickness, then the plates that are seen as wide in their two-dimensional projections on the plane of observation have a relatively shallow inclination to that plane and hence a smaller apparent shear. This is precisely the trend observed in Figure 6; however, a corrected shear could not be reliably calculated, presumably because there is also a spread in the true plate widths. For this reason, the largest shears measured have to be used as the best indication of the value of the shear component. It is emphasized that these calculations assume that each plate measured makes a substantial intersection with the free surface. The plates have a lenticular morphology,[20,21] so if only the periphery of a deeply located plate intersects the free surface, then the apparent width will be smaller. This, together with the fact that the orientation of the displacement vector of the shape deformation can vary relative to the free surface, could explain why different values of apparent shear are observed for the same apparent width in Figure 6. Neither of these observations detracts from the fact that the maximum observed value of the shear will be closest to the actual shear.

A value of ≈0.46 was determined for the shear component of the displacements due to individual plates of bainitic ferrite transformed at 473 K (200 °C). This is larger than previously reported using the same experimental technique, but for bainite generated by transformation at a higher temperature. The large shear is consistent with the slender character of the bainitic ferrite plates that form at low homologous temperatures, and is not unprecedented in the context of solid-state phase transformations in steels. It is speculated that different modes of lattice-invariant deformation might operate when the transformation occurs in austenite that is strengthened by the large carbon concentration of the alloy studied and by the lower temperature at which transformation was induced.

The authors are grateful to the Engineering and Physical Sciences Research Council and to Corus plc (now Tata Steel) for their support of this work.

References
5. F. Hu and K. Wu: Adv. Mater. Res., 2011, vols. 146–147, pp. 1843–48.
6. H.K.D.H. Bhadeshia and D.V. Edmonds: Met. Sci., 1983, vol. 17, pp. 411–19.
8. Patent No. GB2462197, Intellectual Property Office, London, 2010.
13. F.G. Caballero, M.K. Miller, A.J. Clarke, and C. Garcia-Mateo: Acta Mater., 2010, vol. 63, pp. 442–45.
14. B.P.J. Sandvik and H.P. Nevalainen: Met. Technol., 1981, vol. 8, pp. 213–20.
15. E. Swallow and H.K.D.H. Bhadeshia: Mater. Sci. Technol., 1996, vol. 12, pp. 121–25.
17. B.J.P. Sandvik: Metall. Trans. A, 1982, vol. 13A, pp. 777–87.
19. H.K.D.H. Bhadeshia and D.V. Edmonds: Metall. Trans. A, 1979, vol. 10A, pp. 895–907.
20. G.R. Srinivasan and C.M. Wayman: Trans. TMS-AIME, 1968, vol. 242, pp. 78–81.
Satellites | Earth's Artificial Little Moons!

🛰 Machines Orbiting Earth
A satellite is any object that orbits another. These can be artificial, or natural objects like moons and planets, even the Sun! We typically speak of satellites when talking about artificial machines launched into orbit around the Earth. Since the first satellite, Sputnik-1, nearly 7,000 have been launched, with only about 1,000 still active. Many become space junk once they are no longer useful.

Quick Fun Facts & Summary About Satellites
A satellite is any object that orbits another. These can be natural objects like moons (e.g. the Moon is Earth's natural satellite) or man-made, which is what we typically mean when we say "satellite": a machine launched into orbit around the Earth. Of course, during the exploration of the Solar System we have put many artificial satellites into orbit around other bodies like the Moon, planets, asteroids and comets, but we'd refer to them more as space probes.

Following a satellite's launch aboard a rocket, it will normally stay in orbit without any additional rocket power. This is because its sideways speed is balanced by the downward pull of Earth's gravity, so it is constantly falling around the Earth! However, some satellites in the lowest low Earth orbits, like space stations, need to be periodically 'reboosted' due to the slight atmospheric drag they experience!

The world's first artificial satellite, Sputnik-1, was launched in 1957. Since then, nearly 7,000 artificial satellites have been launched, with about half remaining in orbit today. These satellites fall into several main categories:
- Military – from spy satellites and the Global Positioning System (GPS) to the mysterious X-37B spaceplane!
- Earth Observation – taking pictures and tracking weather, climate and ecological changes
- Space Observatories – from space, these telescopes and observatories are free from the distorting (and absorbing) effects of the atmosphere!
- Space Stations – the biggest of the artificial satellites, from the small Tiangong-1 to Skylab, Mir and the International Space Station (ISS), which is the size of a football pitch!
- Commercial – mainly for communications, such as beaming TV signals and phone calls around the world

Most satellites have a central body, solar panels and an antenna dish to receive and send information to Earth or to other satellites or probes. Some satellites also carry a payload of cameras and scientific sensors; Earth observation satellites point these sensors at Earth to gather information about the land, atmosphere and oceans. Other satellites (such as the Hubble Space Telescope, Chandra X-ray Observatory, TESS and NASA's MMS) point towards space to observe and study the greater universe beyond.

Satellites are launched near Earth's equator (to gain initial speed from Earth's rotation), with some of the rockets' upper stages also becoming satellites, though as space debris/junk, as they too reach orbit. Satellites typically get placed into one of the following orbits:
- Low Earth orbit (LEO), with altitudes of 200 – 2,000 km. The vast majority of satellites have these orbits and circle the Earth quickly, about every 90 minutes!
- Inclined LEO – the most popular orbit
- Polar orbit (often Sun-synchronous) – satellites in these orbits can observe the entire globe, one strip at a time, each day!
- Medium Earth orbit (MEO) – between 2,000 km and 35,700 km. The most famous satellites orbiting here are the GPS satellites.
- Geostationary orbit – 35,700 km altitude. At this altitude the satellite orbits once a day, meaning it stays directly over the same position on Earth all the time! Perfect for weather satellites.

With So Many Satellites In Orbit, Do They Ever Collide?
Yes! But not very often, although there are plenty of close calls! Even though satellites are small and space is very big, in 2009 an American Iridium satellite collided with an old Russian satellite at over 35,000 km/h, destroying both and creating a large debris cloud of space junk!
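The periods quoted above (about 90 minutes for LEO, once a day for geostationary) follow from Kepler's third law, T = 2π√(a³/μ). A quick sketch checking them; the constants are standard textbook values, and the geostationary altitude uses the commonly cited 35,786 km (the text rounds to 35,700 km).

```python
# Orbital period of a circular orbit via Kepler's third law.
import math

MU_EARTH = 3.986e14  # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6    # mean Earth radius, m

def orbital_period_minutes(altitude_km):
    """Period of a circular orbit at the given altitude above Earth."""
    a = R_EARTH + altitude_km * 1e3  # orbital radius, m
    return 2.0 * math.pi * math.sqrt(a**3 / MU_EARTH) / 60.0

for name, alt_km in [("LEO, 400 km", 400.0),
                     ("GPS (MEO), 20,200 km", 20200.0),
                     ("Geostationary, 35,786 km", 35786.0)]:
    print(f"{name}: {orbital_period_minutes(alt_km):.0f} minutes")
# Prints roughly 92 minutes for LEO, ~718 minutes (about 12 hours)
# for GPS, and ~1436 minutes (one day) for geostationary.
```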
A color marker is used to monitor the progress of agarose gel electrophoresis and polyacrylamide gel electrophoresis (PAGE), since DNA, RNA, and most proteins are colourless. Color markers are also referred to as tracking dyes. Commonly used color markers include bromophenol blue, cresol red, Orange G and xylene cyanol. Generally speaking, Orange G migrates faster than bromophenol blue, which migrates faster than xylene cyanol, but the apparent "sizes" of these dyes (compared to DNA molecules) vary with the concentration of agarose and the buffer system used. For instance, in a 1% agarose gel made in TAE buffer (Tris-acetate-EDTA), xylene cyanol migrates at the speed of a 3000 base pair (bp) molecule of DNA and bromophenol blue migrates at 400 bp. However, in a 1% gel made in TBE buffer (Tris-borate-EDTA), they migrate at 2000 bp and 250 bp respectively. For PAGE, some commercially available molecular weight markers (also called "ladders", because they look like the rungs of a ladder after separation) contain pre-stained proteins of different colours, making it possible to determine more accurately where the proteins of interest in the samples might be.
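The dye-versus-buffer figures above lend themselves to a small lookup helper. The sketch below is illustrative only; the apparent sizes are exactly those quoted in the text for a 1% agarose gel, and the helper function itself is an invented convenience.

```python
# Apparent DNA "size" each tracking dye co-migrates with in a 1%
# agarose gel, keyed by buffer system. Figures are from the text above.
APPARENT_SIZE_BP = {
    ("xylene cyanol", "TAE"): 3000,
    ("bromophenol blue", "TAE"): 400,
    ("xylene cyanol", "TBE"): 2000,
    ("bromophenol blue", "TBE"): 250,
}

def comigrating_size(dye, buffer):
    """Apparent size (bp) the dye co-migrates with in a 1% gel."""
    return APPARENT_SIZE_BP[(dye.lower(), buffer.upper())]

print(comigrating_size("Bromophenol blue", "tae"), "bp")  # 400 bp
```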
The University of Arizona Steward Observatory has been given an advanced infrared telescope that is unique because it will be used primarily by students and amateurs enrolled in UA Astronomy Camps. The internationally acclaimed UA Astronomy Camps are popular with youngsters and adults, who include Girl Scout leaders from around the nation. They will study the universe using a truly professional research-class infrared telescope available to them for the first time on Mount Lemmon in the Santa Catalina Mountains, about 45 miles north of the UA campus in Tucson, Ariz. The 20-inch reflector telescope is equipped with a state-of-the-art 256 x 256 pixel mercury-cadmium-telluride infrared detector. That's the same kind of infrared detector that UA scientists developed for the infrared camera flying on the Hubble Space Telescope. Infrared light has longer wavelengths than visible light. It's the light emitted as heat by a burner on a kitchen stove. And it's the light emitted by objects far away and far back in time, near the beginning of the universe. The Jamieson Telescope also has a visible-light CCD (charge-coupled device). Thanks to a beam splitter, the 1,000 x 1,600 pixel visible-light detector can be used simultaneously with the infrared detector to photograph an object at visible and infrared wavelengths. The late John Jamieson worked closely with UA astronomers in developing his telescope. Jamieson constructed the telescope on Orcas Island, Washington state, at his Heron Cove Observatory. He used it in searches for asteroids and studies of such things as red dwarf stars, low-mass stars and planets. After Jamieson died in 1999, his widow, Barbara, and her family donated the telescope to UA's Steward Observatory. The Jamieson family and friends and UA astronomers dedicated the telescope at ceremonies on the 9,160-foot summit of Mount Lemmon on Oct. 29. John Jamieson, a pioneer in infrared detectors, "was passionately interested in astronomy," Steward Observatory Director Peter Strittmatter said at the ceremony. "Dr. Jamieson had many connections to Steward Observatory, and he made significant gifts to assist our graduate students." "The John Jamieson Telescope allows us to expand our educational outreach beyond visible wavelengths, into the infrared," UA astronomer Donald McCarthy said. "Students experience a whole new world up here, where they can see 100 miles in all directions and the sky is very clear. What the students do at these telescopes is very amazing and inspirational to them." McCarthy has directed the UA Astronomy Camps on Mount Lemmon for the past 18 years. Campers are immersed in the real-life adventure of doing astronomy. They become "guest astronomers" who use professional 12-inch, 40-inch, 60-inch and 61-inch telescopes on the mountain at night and sleep in astronomers' dorms during the day. Girl Scout leaders began training as guest astronomers, too, in 2003, after NASA selected UA astronomy Professor Marcia Rieke's proposal to build a near-infrared camera called NIRCam for launch on the James Webb Space Telescope in 2013. McCarthy proposed and directs the $1 million, 10-year education and public outreach effort that is part of the $90 million NIRCam project. The James Webb Space Telescope's science mission is to find the first light-emitting objects that formed in the early universe soon after the Big Bang, roughly 13 billion years ago, McCarthy said.
NIRCam's education and public outreach program is designed to help the public understand the images of the first light in the universe, he added. Linking NIRCam's outreach program to the Girl Scouts could potentially reach thousands of adult leaders and millions of girls in the organization's 317 councils. The Jamieson Telescope will be key in training Girl Scout leaders who, in turn, teach other Girl Scout leaders in making infrared observations. Those at the Jamieson Telescope dedication are promoting the Mount Lemmon astronomy site "as a place to come and study and learn and change your life," Barbara Jamieson said. "John's only grandchild is a little girl, and I think it's just wonderful that women, and the Girl Scouts, can train here to be scientists. I wish John were here. He'd be thrilled." "Our hope is to build a marvelous science center up here that will be used by not just hundreds, but by thousands of people every year," said Nick Hanauer, a successful Seattle-based businessman active in Washington state's education system. Hanauer, who first attended a UA adult astronomy camp in 2001, is helping spearhead development of the 25-acre Mount Lemmon observing site into a financially self-sustaining, world-class science education and research center. "We're working hard to make this place a science center for professional research scientists, for amateurs who come for an astronomical science experience, and for kids to learn about science in a wonderful hands-on way." Catalina Observatories operations manager Bob Peterson and his team moved the telescope (without the dome) from Orcas Island to Mount Lemmon in August 2002. Peterson and his crew recently poured the concrete foundations for the telescope dome, assembled the 18-foot diameter aluminum dome and installed flooring and electrical wiring. UA astronomer Laird Close purchased the new telescope dome and modern computerized infrared camera controller with $105,000 from his $545,000 National Science Foundation (NSF) Faculty Early Career Development Award in 2004. Given the NSF's heavy investment in professional infrared astronomy, Close said, "It will be a great asset to U.S. astronomical outreach and education efforts to have at least one dedicated research class infrared telescope fully devoted to outreach." Close's NSF award supports his research to directly detect planets around young, nearby stars. Five Astronomy Campers have earned doctorates in astronomy in the past 18 years, and more are on the way, McCarthy noted. But training professional astronomers isn't the goal of the camps, he added: "Our most important goal is to develop people who have an appreciation for science, who can do arithmetic, who know what basic research is, who see the value of such research in our lives. Our campers have gone everywhere -- medicine, physics, astronomy, law, psychiatry, hotel management -- we think they're better people as a result of coming to the camps, and they have a long-lasting and significant appreciation for science." Astronomy Camps for teens and adults who include Girl Scout leaders reflect a "pay it forward" philosophy, McCarthy said. "Even a small investment now in the life of one junior high school student can pay off. It's just amazing what one of these students can become in 10 years." Mount Lemmon has been a pioneering site for infrared astronomy. The late Gerard Kuiper, an eminent planetary astronomer who founded UA's Lunar and Planetary Laboratory in 1960, secured a long-term lease from the U.S.
Forest Service so the university could establish its astronomical research and science education station in the Coronado National Forest. Frank J. Low, Regents Professor Emeritus of Steward Observatory, was among those whom Kuiper hired at UA. Low is considered the father of modern infrared astronomy because he developed a low-temperature detector that enabled astronomers to observe throughout the infrared spectrum. Graduate students who developed the observing techniques and technologies on Mount Lemmon a few decades ago include Marcia Rieke and others who head world-class infrared astronomy projects.
Thermodynamic Models for Crystalline Solutions

The compositions of coexisting minerals occurring in rocks or in experiments are the main source of data for obtaining information on the thermodynamic behaviour of silicate crystalline solutions. It is, therefore, necessary to use certain solution models to relate the observed compositional variables to the thermodynamic functions of mixing. Although such models are based on specific statistical theory and employ certain assumptions regarding the molecular forces in the crystalline lattice, the equations obtained for the thermodynamic functions of mixing are mathematically equivalent to those of other mathematical models which are not bound to any special physical interpretation. The regular solution model of Guggenheim (1952) is considered in detail in this chapter.

Keywords: Activity Coefficient, Thermodynamic Model, Thermodynamic Function, Zeroth Approximation, Excess Function
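For orientation, the zeroth approximation (strictly regular) form of the model for a binary solution is sketched below in standard notation. This is the textbook form of the regular solution model, not an excerpt from the chapter: W is the interchange energy, x_1 and x_2 are mole fractions, and the γ_i are activity coefficients.

```latex
% Zeroth-approximation (strictly regular) binary solution: molar
% excess Gibbs energy of mixing and the resulting activity coefficients.
\begin{align}
  G^{\mathrm{ex}} &= W\, x_1 x_2, \\
  RT \ln \gamma_1 &= W\, x_2^{2}, \qquad
  RT \ln \gamma_2  = W\, x_1^{2}.
\end{align}
```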
By: Wynne Parry, LiveScience Senior Writer. Published: 08/31/2012 11:20 AM EDT on LiveScience

About 1,000 northern hairy wood ants are expected to have tiny radio tags, about 0.04 inches (1 millimeter) long, attached to their bodies, allowing researchers to track their movements on a protected English estate. The wood ants, which get their name from the "eyebrows" visible through a microscope, live in colonies housed within nests connected by trails worn into the ground by years of ant traffic. The biologist doing the work, Samuel Ellis of the University of York, intends to examine how the ants interact with one another. The results are expected to help staff at the Longshaw Estate in Derbyshire manage the estate, a natural and archaeological site, with the ants' needs in mind. "I think this is a world first. It has not been done in the wild before," said Ellis in a video produced by the U.K. National Trust, which manages the estate. [See Photos of the Tagged Wood Ants] Ellis is not certain how long the tags will stay attached to the insects. "The tags act like a bar code," he said in the video. "It gives each ant an individual identity, and what this means is you can see which ants are going where and how individual ants' interactions work together to make the colony-level behaviours." An estimated 50 million hairy wood ants, Formica lugubris, inhabit the estate. They are the largest species of ant native to the British Isles, with workers reaching up to 0.4 inches (10 mm) long. To get food for their young, the ants gently stroke sap-sucking aphids, which then produce honeydew; in return, the ants protect the aphids. The ants defend themselves from predators by spraying smelly, vinegar-like formic acid. Some birds, like jays and green woodpeckers, use the formic acid spray as a cleansing agent to get rid of parasites, according to the University of York.
By: Michael H Glantz. 256 pages, photos, figures, tables.

Presents the base of knowledge needed to address the questions surrounding the unknown impacts of climate change, and outlines a new approach to understanding the interactions between climate, society and the environment. Includes: key concepts and terms; the effects of climate around the world; important but overlooked aspects of climate-society-environment interactions; examples of societal uses, misuses, and potential uses of climate-related information such as forecasts; and a research agenda, challenges, and methodologies for future climate research.
Losses from extreme floods in Europe could more than double by 2050, because of climate change and socioeconomic development. Understanding the risk posed by large-scale floods is of growing importance and will be key for managing climate adaptation. Current flood losses in Europe are likely to double by 2050, according to a new study published in the journal Nature Climate Change by researchers from the International Institute for Applied Systems Analysis (IIASA), the Institute for Environmental Studies in Amsterdam, and other European research centers. Socioeconomic growth accounts for about two-thirds of the increased risk, as development leads to more buildings and infrastructure that could be damaged in a flood. The other third of the increase comes from climate change, which is projected to change rainfall patterns in Europe. "In this study we brought together expertise from the fields of hydrology, economics, mathematics and climate change adaptation, allowing us for the first time to comprehensively assess continental flood risk and compare the different adaptation options," says Brenden Jongman of the Institute for Environmental Studies in Amsterdam, who coordinated the study. The study estimated that floods in the European Union averaged €4.9 billion a year from 2000 to 2012. These average losses could increase to €23.5 billion by 2050. In addition, large events such as the 2013 European floods are likely to increase in frequency from an average of once every 16 years to a probability of once every 10 years by 2050. The analysis combined models of climate change and socioeconomic development to build a better estimate of flood risk for the region. IIASA researcher Stefan Hochrainer-Stigler led the modeling work on the study. He says, "The new study for the first time accounts for the correlation between floods in different countries. Current risk-assessment models assume that each river basin is independent. But in actuality, river flows across Europe are closely correlated, rising and falling in response to large-scale atmospheric patterns that bring rains and dry spells to large regions." "If the rivers are flooding in Central Europe, they are likely to also be flooding Eastern European regions," says Hochrainer-Stigler. "We need to be prepared for larger stress on risk financing mechanisms, such as the pan-European Solidarity Fund (EUSF), a financial tool for financing disaster recovery in the European Union." For example, the analysis suggests that the EUSF must pay out funds simultaneously across many regions. This can cause unacceptable stresses to such risk financing mechanisms. Hochrainer-Stigler says, "We need to reconsider advance mechanisms to finance these risks if we want to be in the position to quickly and comprehensively pay for recovery." IIASA researcher Reinhard Mechler, another study co-author, points out the larger implications arising from the analysis. He says, "There is scope for better managing flood risk through risk prevention, such as using moveable flood walls, risk financing and enhanced solidarity between countries. There is no one-size-fits all solution, and the risk management measures have very different efficiency, equity and acceptability implications. These need to be assessed and considered in broader consultation, for which the analysis provides a comprehensive basis." 
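To see why correlation between basins matters for a joint fund like the EUSF, here is a minimal Monte Carlo sketch (illustrative, not the study's model: the loss distributions and the correlation value are assumed) comparing total annual losses for two regions when floods are independent versus correlated:

    import numpy as np

    rng = np.random.default_rng(42)

    def total_losses(rho, n=100_000):
        # Correlated standard normals drive lognormal annual losses
        # (assumed marginals, in billions of euros) for two regions.
        cov = [[1.0, rho], [rho, 1.0]]
        z = rng.multivariate_normal([0.0, 0.0], cov, size=n)
        return np.exp(1.0 + 0.8 * z).sum(axis=1)

    for rho in (0.0, 0.6):
        t = total_losses(rho)
        print(f"rho={rho}: mean={t.mean():.2f}, "
              f"99th percentile={np.quantile(t, 0.99):.2f}")

    # The mean total loss is unchanged by correlation, but the extreme
    # (99th-percentile) year grows when basins flood together -- the
    # added simultaneous stress on risk financing the study describes.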
Hochrainer-Stigler presented testimony based on this research at a recent public hearing on the EUSF with the European Commission.

Reference: Jongman, B., S. Hochrainer-Stigler, et al. (2014). Increasing stress on disaster risk finance due to large floods. Nature Climate Change (letter). doi: 10.1038/nclimate2124
The beauty and fascination of numbers can be summed up by one simple fact: anyone can count 1, 2, 3, 4, but no one knows all the implications of this simple process. Let me elaborate. We all realize that the sequence 1, 2, 3, 4 continues 5, 6, 7, 8, and that we can continue indefinitely adding 1. The objects produced by the counting process are what mathematicians call the natural numbers. Thus if we want to say what it is that 1, 2, 3, 17, 643, 100097801, and 4514517888888856 have in common, in short, what a natural number is, we can only say that each is produced by the counting process. This is slightly troubling when you think about it: the simplest, and most finite, mathematical objects are defined by an infinite process. However, the concept of natural number is inseparable from the concept of infinity, so we must learn to live with it and, if possible, use it to our advantage.

Keywords: Natural Number, Induction Step, Prime Divisor, Integer Solution, Counting Process
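Since the keyword list singles out the induction step, it may help to state the principle that the counting process justifies; a compact rendering (not from the chapter itself) in LaTeX:

    % A property P that holds for 1 and survives the counting step
    % n -> n+1 holds for every natural number produced by that process.
    \[
    \bigl(P(1) \land \forall n\,(P(n) \Rightarrow P(n+1))\bigr)
    \;\Longrightarrow\; \forall n\, P(n)
    \]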
An exactly dated time series almost 900 years in length was established, exhibiting the Medieval Warm Period, the Little Ice Age between the 16th and 19th centuries, as well as the transition into the modern warm phase. Moreover, Ingo Heinrich from the GFZ German Research Centre for Geosciences and colleagues revealed that the modern warming trend cannot be found in the new chronology. "A comparison with seasonal meteorological data also demonstrates that at several places in the Mediterranean the winter and spring temperatures indicate long-term trends which are decreasing or at least not increasing," says Ingo Heinrich. "Our results stress the need for further research of the regional climate variations." It seems that temperature reconstructions derived from extreme sites in particular, such as high mountain zones and high latitudes, do not always correctly reflect the climate of other geographical regions. The past temperature variations in the lowlands of central Europe and in the Mediterranean are not yet well understood. The analysis of carbon isotope ratios (13C/12C) in tree rings aims to close this research gap. By focusing on the months January to May, the researchers detected the period in which the trees shift from dormancy in late winter to re-activation of growth in early spring. The carbon isotope ratio measured in individual tree rings largely depends on the environmental conditions; thus, the varying tree-ring isotope values are good indicators for changes in the environment. The carbon isotope ratios in the trees from Turkey indicate a temperature sensitivity of the trees during late winter to early spring. In cold winters the cambium and the leaves are damaged more than usual, and the following recovery in spring takes longer. Low spring temperatures further delay the onset of photosynthesis or slow down its rate, with negative effects on cambial activity.

Reference: Ingo Heinrich, Ramzi Touchan, Isabel Dorado Liñán, Heinz Vos, Gerhard Helle: "Winter-to-spring temperature dynamics in Turkey derived from tree rings since AD 1125", Climate Dynamics, October 2013, Volume 41, Issue 7-8, pp 1685-1701, DOI 10.1007/s00382-013-1702-3

Pictures in a printable resolution can be found here: http://www.gfz-potsdam.de/medien-kommunikation/bildarchiv/klimaforschung/dendrochronologie/
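The 13C/12C measurements behind such chronologies are conventionally reported as delta-13C values relative to the VPDB standard; here is a minimal sketch of that conversion (the sample ratio below is an illustrative assumption, not data from the study):

    R_VPDB = 0.0112372  # 13C/12C ratio of the VPDB reference standard

    def delta13c_permil(r_sample):
        # delta-13C (per mil) = (R_sample / R_standard - 1) * 1000
        return (r_sample / R_VPDB - 1.0) * 1000.0

    # A hypothetical measured tree-ring ratio:
    print(round(delta13c_permil(0.0110), 1))  # about -21.1 per mil

Small interannual shifts in this value, once non-climatic trends are removed, are what carry the winter-to-spring temperature signal described above.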
Calcium cyanamide

IUPAC name: calcium cyanamide
Other names: cyanamide calcium salt
Molar mass: 80.102 g/mol
Appearance: white solid (often gray or black from impurities)
Melting point: 1,340 °C (2,440 °F; 1,610 K)
Solubility: insoluble in organic solvents
Vapor pressure: ~0 mmHg
Safety data sheet: Sigma-Aldrich
Related compound: calcium carbide

Except where otherwise noted, data are given for materials in their standard state (at 25 °C [77 °F], 100 kPa).

Calcium cyanamide is an inorganic compound with the formula CaCN2, widely used as a fertilizer in agriculture. Hydrolysis of CaCN2 yields cyanamide as well as ammonia:

- CaCN2 + H2O + CO2 → CaCO3 + H2NCN
- CaCN2 + 3 H2O → 2 NH3 + CaCO3

Fusing calcium cyanamide with sodium carbonate and carbon gives sodium cyanide:

- CaCN2 + Na2CO3 + 2 C → 2 NaCN + CaO + 2 CO

Calcium cyanamide is a white (gray or black if impure) solid which reacts with water. It is sold as fertilizer, though in some places it is hard to find, as the compound readily hydrolyzes in the presence of moisture and water-sensitive materials are generally not sold in most stores. It can also be bought from chemical suppliers.

One preparation route proceeds via urea and calcium oxide:

- 3 CO(NH2)2 → 3 HOCN + NH3
- CaO + 2 HOCN → Ca(OCN)2 + H2O
- Ca(OCN)2 → CaCN2 + CO2

Calcium cyanamide is obtained industrially by heating calcium carbide powder at 1,000 °C, usually in an electric furnace, while injecting nitrogen gas over the hot carbide, which is recirculated. The reaction takes several hours to complete. Another route involves heating calcium cyanide with nitrogen gas at 600 °C for at least one hour. Reducing calcium nitride with carbon at 800-900 °C in a nitrogen atmosphere will also give calcium cyanamide. Since calcium cyanamide reacts with oxygen at high temperatures, all of these routes must be carried out in a low-oxygen environment, under an inert gas.

Calcium cyanamide is harmful and should be handled with care. It is known to cause alcohol intolerance, before or after the consumption of alcohol.

Storage: in closed containers, away from moisture. Disposal: can be dumped in the ground.
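As a worked example of the hydrolysis stoichiometry above (CaCN2 + 3 H2O → 2 NH3 + CaCO3), here is a minimal sketch computing the theoretical ammonia yield; the CaCN2 molar mass comes from the data listed above and the NH3 molar mass is standard:

    M_CACN2 = 80.10  # g/mol, from the data above
    M_NH3 = 17.03    # g/mol, standard value

    def ammonia_yield(grams_cacn2):
        # 2 mol of NH3 are released per mol of CaCN2 hydrolyzed
        return 2.0 * (grams_cacn2 / M_CACN2) * M_NH3

    print(round(ammonia_yield(100.0), 1))  # about 42.5 g NH3 per 100 g CaCN2

This nitrogen release on contact with soil moisture is what makes the compound useful as a fertilizer, and also why it must be stored dry.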
The area of a circle can be determined easily using a formula. But before finding the area of a circle, let us understand the perimeter of a circle.

Perimeter of Circle/Circumference of Circle: The perimeter of a closed figure is defined as the length of its boundary. When it comes to circles, the perimeter is given a different name: it is called the circumference of the circle. To define the circumference of a circle, knowledge of a constant known as 'pi' is required. Consider the circle shown in fig. 1, with center at O and radius r. The circumference of the above circle is equal to the length of its boundary: a rope which wraps around its boundary perfectly will have length equal to its circumference.

π, read as 'pi', is defined as the ratio of the circumference of a circle to its diameter. This ratio is the same for every circle. Consider a circle with radius 'r' and circumference 'C'. For this circle, C = 2πr. The same is shown in fig. 2.

Area of a Circle: Take a circle with radius r. Its area A is given by \(A = πr^2\).

The area of a circle can be visualized and proved using two methods, namely:
- determining the area of a circle using rectangles
- determining the area of a circle using triangles

Let us understand both methods one by one.

Using areas of rectangles: The circle is divided into 16 equal sectors, and the sectors are arranged as shown in fig. 3. The area of the circle will be equal to that of the parallelogram-shaped figure formed by the sectors cut out from the circle. Since the sectors have equal area, each sector will have equal arc length. The green sectors contribute half of the circumference, and the blue sectors the other half. If the number of sectors cut from the circle is increased, the parallelogram will eventually look like a rectangle with length equal to πr and breadth equal to r. The area of this rectangle will also be the area of the circle, so we have \(A = πr × r = πr^2\).

Using areas of triangles: Fill the circle of radius r with concentric circles. After cutting the circle along the indicated line in fig. 4 and straightening the lines, the result is a triangle. The base of the triangle will be equal to the circumference of the circle, and its height will be equal to the radius. So the area of the triangle, which equals the area of the circle, is \(A = \frac{1}{2} × 2πr × r = πr^2\).

Thus, the area of a circle with radius r units is \(πr^2\) square units.
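The sector-rearrangement argument can also be checked numerically: treating the circle as n thin triangular sectors, each of area \(\frac{1}{2}r^2\sin(2π/n)\), the total approaches \(πr^2\) as n grows. A small sketch (illustrative, not part of the original lesson):

    import math

    def area_by_sectors(radius, n):
        # n congruent sectors, each approximated by a triangle with
        # apex angle 2*pi/n: area = (1/2) * r^2 * sin(2*pi/n) per sector
        return 0.5 * n * radius ** 2 * math.sin(2 * math.pi / n)

    r = 3.0
    for n in (16, 64, 256, 1024):
        print(n, round(area_by_sectors(r, n), 5))
    print("pi*r^2 =", round(math.pi * r ** 2, 5))  # the sums converge here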
TeachMeFinance.com - explain Microburst

Microburst: The term 'microburst', as it applies to the area of weather, can be defined as 'a convective downdraft with an affected outflow area of less than 2½ miles wide and peak winds lasting less than 5 minutes. Microbursts may induce dangerous horizontal/vertical wind shears, which can adversely affect aircraft performance and cause property damage.'
Prof. Shigeru Watanabe of the Graduate School of Human Relations of Keio University and Tsukuba University graduate student Kohji Toda trained pigeons to discriminate real-time self-images shown in mirrors as well as videotaped self-images, and showed that pigeons can recognize video images that reflect their movements as self-images. Self-recognition is found in large primates such as chimpanzees, and recent findings show that dolphins and elephants also have such intelligence. Proving that pigeons also have this ability shows that high intelligence such as self-recognition can be seen in various animals, and is not limited to primates and dolphins with large brains.

1. EXPERIMENT METHOD AND RESULTS (These findings will be introduced in "Animal Cognition", a journal for comparative cognitive science. The electronic version of "Animal Cognition" has been released.)

2. METHOD OF TESTING SELF-RECOGNITION ON ANIMALS

3. SELF-COGNITIVE ABILITIES OF PIGEONS ARE HIGHER THAN THOSE OF 3-YEAR-OLDS

Through various experiments, it is known that pigeons have great visual cognitive abilities. For example, research at Harvard University showed that pigeons could discriminate photographs of people from other photographs. At Prof. Shigeru Watanabe's laboratory, pigeons could discriminate paintings by one painter (such as Van Gogh) from those by another (such as Chagall). Furthermore, pigeons could discriminate other pigeons individually, and could also discriminate pigeons given stimulant drugs from those given none. In this experiment, pigeons could discriminate video images that reflected their movements, even with a 5-7 second delay, from video images that did not. This ability is higher than that of an average human 3-year-old. According to research by Prof. Hiraki of the University of Tokyo, 3-year-olds have difficulty recognizing their self-image with even a 2 second delay.(*1)
Find the maximum linear speed of a person sitting on the merry-go-round 6.50 m from the center. Then find the person's maximum radial acceleration, the angular acceleration of the merry-go-round, and the person's tangential acceleration.
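The post omits the merry-go-round's rotation data, so here is a hedged sketch with assumed numbers (the maximum angular speed and spin-up time below are hypothetical) showing the standard relations v = ωr, a_rad = ω²r, α = Δω/Δt, and a_tan = αr:

    r = 6.50          # m, from the question
    omega_max = 0.60  # rad/s, assumed maximum angular speed
    t_spinup = 8.0    # s, assumed time to reach omega_max from rest

    v_max = omega_max * r         # maximum linear speed
    a_rad = omega_max ** 2 * r    # maximum radial (centripetal) acceleration
    alpha = omega_max / t_spinup  # angular acceleration (uniform spin-up)
    a_tan = alpha * r             # tangential acceleration

    print(v_max, a_rad, alpha, a_tan)
    # 3.9 m/s, 2.34 m/s^2, 0.075 rad/s^2, 0.4875 m/s^2

With the actual problem data, only the two assumed inputs change; the formulas are the same.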
Genetic divergence among Dectes texanus texanus LeConte (Coleoptera: Cerambycidae) from sunflower and soybean

Madhura Siddappaji* and Srini Kambhampati, Department of Entomology, Kansas State University, Manhattan, KS 66506. Photo by J. P. Michaud.

ABSTRACT
The cerambycid beetle, Dectes texanus texanus LeConte, an indigenous, univoltine species, is a pest of cultivated sunflower. During the 1960s this species emerged as a new pest on soybean. This led to speculation as to whether the populations infesting sunflower and soybean are panmictic or isolated. Our goal is to quantify genetic variation patterns among samples collected from soybeans and sunflowers to assess whether D. texanus populations are undergoing sympatric genetic divergence. We isolated 10 polymorphic microsatellite markers from genomic DNA. These loci were amplified from populations of D. texanus collected from soybeans and sunflowers in Kansas. There is a moderate amount of genetic variation among the beetles infesting soybean and sunflower, suggesting a partial barrier to gene flow.

DISCUSSION AND CONCLUSIONS
Our preliminary results indicated that microsatellite markers can be used to detect genetic variation on a fine scale, i.e., among samples collected a few hundred meters apart. Whether the genetic variation we detected reflects host plant usage or random variation remains to be seen with the inclusion of more samples. The presence of private alleles in both sunflower and soybean samples, however, is suggestive of incipient genetic divergence. Additionally, only adult beetles were included in the present assay; therefore, we cannot be certain that individuals collected in sunflower and soybean fields did in fact develop in the respective host plants. We are therefore presently collecting larvae (to determine whether host fidelity or oviposition preference exists among adults feeding in diverse habitats) that overwinter in the stubble of the host plant in which they developed. The initial results from larvae at one site indicated high genetic variation (FST = 0.27) among the populations infesting soybean and sunflower, indicating a degree of host plant fidelity and mate preference, even though adults feed in diverse habitats. Analysing larval samples from other sites will help us to understand the dynamics of the sympatric divergence. The behavioral differences among D. texanus developing in soybean and sunflower (Michaud and Grant 2005) suggest a degree of host-plant fidelity, which is a prerequisite for genetic divergence through assortative mating.

Fig 1: PAGE gel showing allelic variation at microsatellite locus (D. texanus 6, 190 bp). Each lane represents the amplification of the locus from an individual beetle. Lane 1 is a molecular weight marker (600 bp ladder). A: a homozygote for allele c; B: a heterozygote for alleles c and b; C: a heterozygote for alleles a and b. (a = 188; b = 190; c = 192.)

A primary focus of evolutionary biology is the origin of biodiversity through speciation (Darwin 1859). While allopatric speciation is widely accepted, sympatric speciation has been controversial (Berlocher and Feder 2002).
However, sympatric speciation may have played a major role in the diversification of phytophagous insects (Drès and Mallet 2002). The beetle, Dectes texanus texanus, historically a pest of cultivated sunflowers, began utilizing soybeans about 40 years ago (Laster and Thorn 1981). It is not known whether the beetles on soybean and sunflower form a panmictic population or whether there is host specialization. Michaud and Grant (2005) suggested that behavioral differences exist among D. texanus infesting sunflower and soybean. Thus, our objective is to quantify levels and patterns of genetic variation among samples of D. texanus collected from soybeans and sunflowers in Kansas using polymorphic microsatellite markers. Here, we report on population genetic data from adult D. texanus collected from soybean and sunflower fields that were located next to each other. An example of PAGE with microsatellite alleles of different sizes for D. texanus is shown in Fig. 1.

Allele composition: A total of 48 and 53 alleles at the ten loci were recorded for the samples collected in sunflower and soybean fields, respectively. Of these, four alleles were unique to samples from sunflower and nine alleles were unique to samples from soybean. The mean number of alleles was 5.0 ± 1.93 for soybean and 6.0 ± 2.50 for sunflower.

Allele frequency: Allele frequencies of samples collected from the two crops at a given location (Pair 1 and Pair 2) were compared using the Mann-Whitney test. The frequencies were not significantly different among samples from soybean and sunflower crops, suggesting little or no genetic divergence.

Genetic divergence: Measures of genetic variation, genetic divergence and gene flow are given in Table 1. The samples from soybeans and sunflowers showed similar levels of genetic variation as measured by observed heterozygosity. Both samples also exhibited lower than expected heterozygosity. There was a low to moderate level of genetic divergence between soybean and sunflower samples from a given location, as indicated by the FST estimates (Table 1).

SIGNIFICANCE OF THE STUDY
D. texanus is an excellent system for understanding the early dynamics of a host shift in a phytophagous insect. Previous studies (Berlocher and Feder 2002) on host shifts and sympatric divergence generally involved plant-insect systems in which the host shift had taken place hundreds of generations ago. In contrast, the D. texanus host shift is fewer than 50 generations old and therefore facilitates the investigation of genetic changes that occur during the early phases of a host shift.

REFERENCES
Berlocher, S.H. and Feder, J.L. 2002. Annu. Rev. Entomol. 47: 773-815.
Darwin, C. 1859. The origin of species. London: Murray, 1st Ed. 513 pp.
Dieringer, D. and Schlötterer, C. 2003. Mol. Ecol. Notes 3: 167-169.
Drès, M. and Mallet, J. 2002. Philos. Trans. R. Soc. Lond. Ser. B-Biol. Sci. 357: 471-492.
Grace et al. 2005. Mol. Ecol. Notes 5: 321-322.
Laster, M.L., and Thorn, W.O. 1981. Res. Highlights Miss. Agric. For. Exp. Stn. The Station 6: 3.
Michaud, J.P. and Grant, A.K. 2005. J. Ins. Sci. (in press).

MATERIALS AND METHODS
INSECT SAMPLING: Adult D. texanus were collected in fields near Belleville, KS. A soybean and a sunflower field (approx. 30 acres each) are located next to each other 0.8 miles north of Belleville (Pair 1). Another pair of soybean and sunflower fields (35 acres each) is located 0.1 miles west of Belleville (Pair 2). Thirty-seven to 48 individuals were collected from each field.
MICROSATELLITES: Ten polymorphic microsatellite markers were isolated, characterized and optimized as described by Grace et al. (2005). DNA was isolated from individual insects and microsatellite loci were PCR-amplified. The PCR products were electrophoresed on a PAGE gel, which was stained with ethidium bromide to visualize the fragments.

DATA ANALYSIS: Allele frequencies, F-statistics, and observed and expected heterozygosities were calculated using MSAnalyzer (Dieringer and Schlötterer 2003).

Table 1: Estimates of genetic divergence and genetic variation for samples of D. texanus collected from soybean and sunflower in Kansas.

ACKNOWLEDGEMENTS
We thank Tony Grace and Benjamin Aldrich for help during the study. This study was funded by the Regional Soybean Project (S1010).
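For readers unfamiliar with the FST statistic quoted above, here is a minimal sketch of Wright's FST computed from allele frequencies at a single locus; the frequencies are hypothetical, not the poster's data:

    def expected_heterozygosity(freqs):
        # H = 1 - sum(p_i^2) at one locus
        return 1.0 - sum(p * p for p in freqs)

    def fst_two_pops(p_a, p_b):
        # Wright's F_ST = (H_T - H_S) / H_T for two equally weighted samples
        h_s = 0.5 * (expected_heterozygosity(p_a) +
                     expected_heterozygosity(p_b))
        pooled = [0.5 * (x + y) for x, y in zip(p_a, p_b)]
        h_t = expected_heterozygosity(pooled)
        return (h_t - h_s) / h_t

    # Hypothetical three-allele frequencies, soybean vs. sunflower samples
    print(round(fst_two_pops([0.7, 0.2, 0.1], [0.3, 0.3, 0.4]), 3))  # 0.104

Values near 0 indicate free gene flow between the host-plant populations; values like the 0.27 reported for larvae indicate substantial divergence.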
Standard Deviation Formula

To understand the standard deviation formula, let us first understand the meaning of standard deviation. For a set of data, the measure of dispersion about the mean, when expressed as the positive square root of the variance, is called the standard deviation. It is denoted by \(\sigma\).

For a discrete frequency distribution with values \(x_1, x_2, \ldots, x_n\) occurring with frequencies \(f_1, f_2, \ldots, f_n\), the standard deviation is given as:

\(\sigma = \sqrt{\frac{1}{N}\sum_{i=1}^{n} f_i (x_i - \bar{x})^2}\), where \(N = \sum_{i=1}^{n} f_i\)

For a continuous frequency distribution, the mid-point of each class is considered for calculating the standard deviation. If a frequency distribution of n classes is defined by its mid-points \(x_i\) with frequencies \(f_i\), the standard deviation is given by the same formula.

Now let us obtain another form of the standard deviation formula. Expanding \((x_i - \bar{x})^2\) and simplifying gives the standard deviation \(\sigma\) as:

\(\sigma = \frac{1}{N}\sqrt{N\sum f_i x_i^2 - \left(\sum f_i x_i\right)^2}\)

This is the standard deviation formula for a given set of observations. But sometimes the values \(x_i\) in a given data set or the mid-points of the classes in a frequency distribution are very large. In such cases, determination of the mean, median or variance becomes hefty and time-consuming. To solve this problem we make use of the step deviation method, a shortcut for determining variance and standard deviation.

Let the assumed mean be A and the width of the class interval be h. Let the step deviation be

\(y_i = \frac{x_i - A}{h}\), i.e., \(x_i = A + h y_i\) ... (1)

The mean of the data set is given by

\(\bar{x} = \frac{1}{N}\sum f_i x_i\) ... (2)

Substituting the values of \(x_i\) from equation (1) into equation (2), we get

\(\bar{x} = A + h\bar{y}\) ... (3)

For the variance of the variable x, substituting the values from equations (1) and (3), we have

\(\sigma_x^2 = h^2 \sigma_y^2\) ... (4)

that is, the variance of x is \(h^2\) times the variance of the variable y. From equations (3) and (4) we can conclude that \(\sigma_x = h\,\sigma_y\).

Let us look into an example for a better insight.

Example: Find the mean, variance and standard deviation for the following data representing the age groups of employees working in XYZ Company.

Solution: Let the assumed mean A = 30 and h = 10. From the table given above we can obtain the sums \(\sum f_i y_i\) and \(\sum f_i y_i^2\); the variance of the data then follows from equation (4), and the standard deviation is its positive square root.
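A minimal sketch of the step-deviation shortcut (the class mid-points and frequencies below are hypothetical, since the article's data table is not reproduced here):

    import math

    def step_deviation_stats(midpoints, freqs, a, h):
        # y_i = (x_i - A) / h; mean_x = A + h * mean_y; sigma_x = h * sigma_y
        n = sum(freqs)
        y = [(x - a) / h for x in midpoints]
        y_bar = sum(f * yi for f, yi in zip(freqs, y)) / n
        var_y = sum(f * (yi - y_bar) ** 2 for f, yi in zip(freqs, y)) / n
        return a + h * y_bar, h * math.sqrt(var_y)

    # Hypothetical age-class mid-points and frequencies:
    mean, sigma = step_deviation_stats([25, 35, 45, 55], [6, 10, 8, 4],
                                       a=40, h=10)
    print(round(mean, 2), round(sigma, 2))  # 38.57, 9.72

Because the y values are small integers, the sums are easy to compute by hand, which is exactly the point of the shortcut.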
API Testing

What is an API? API is an acronym for Application Programming Interface: a set of functions and procedures that allow the creation of applications which access the features or data of an operating system, application, or other service. An API enables communication and data exchange between two separate software systems. A software system implementing an API contains functions/sub-routines which can be executed by another software system.

What is API testing? API testing is performed on a system which exposes a collection of APIs that ought to be tested. During testing, the following things are examined:
- Exploring boundary conditions and ensuring that the test harness varies parameters of the API calls in ways that verify functionality and expose failures.
- Generating value-added parameter combinations to verify calls with two or more parameters.
- Verifying the behaviour of the API under external environment conditions such as files, peripheral devices, and so forth.
- Verifying the sequence of API calls and checking whether the APIs produce useful results from successive calls.

API testing requires an application to interact with the API. In order to test an API, you will need to either:
- use a testing tool to drive the API, or
- write your own code to test the API.

Set-up of the API test environment: API testing differs from other testing types in that no GUI is available, yet you are required to set up an initial environment that invokes the API with a required set of parameters and then examines the test result. Hence, setting up a testing environment for API testing can seem a little complex. The database and server should be configured as per the application requirements. Once installation is done, an API function should be called to check whether the API is working.

Types of output of an API. The output of an API could be:
- any type of data,
- a status (say, Pass or Fail), or
- a call to another API function.

Common tests performed on APIs:
- Return value based on input condition: the return values from the APIs are checked against the input conditions.
- Verify whether the API returns nothing when it should.
- Verify whether the API triggers some other event or calls another API; the event output should be tracked and verified.
- Verify whether the API is updating any data structure.

What to test for in API testing. API testing should cover at least the following testing methods, apart from the usual SDLC process:
- Discovery testing: the test group should manually execute the set of calls documented in the API, for example verifying that a specific resource exposed by the API can be listed, created and deleted as appropriate.
- Usability testing: this verifies whether the API is functional and user-friendly, and whether it integrates well with other platforms.
- Security testing: this includes what type of authentication is required and whether sensitive data is encrypted over HTTP, or both.
- Automated testing: API testing should culminate in the creation of a set of scripts or a tool that can be used to execute the API regularly (see the sketch after the lists below).
- Documentation: the test team has to make sure that the documentation is adequate and provides enough information to interact with the API. Documentation should be a part of the final deliverable.

Best practices of API testing:
- Test cases should be grouped by test category.
- On top of each test, include the declarations of the APIs being called.
- Parameter selection should be explicitly mentioned in the test case itself.
- Prioritise API function calls so that it will be easy for testers to test.
- Each test case should be as self-contained and independent from dependencies as possible.
- Avoid "test chaining" in your development.
- Special care must be taken while handling one-time call functions such as Delete, Close Window, etc.
- Call sequencing should be performed and well planned.
- To ensure complete test coverage, create test cases for all possible input combinations of the API.

Types of bugs that API testing detects:
- Fails to handle error conditions gracefully
- Unused flags
- Missing or duplicate functionality
- Reliability issues: difficulty in connecting and getting a response from the API
- Security issues
- Multi-threading issues
- Performance issues: API response time is very high
- Improper errors/warnings to the caller
- Incorrect handling of valid argument values
- Response data is not structured correctly (JSON or XML)

Tools for API testing. Since API and unit testing both target source code, similar tools can be used for both:
- Postman with jet-packs
- Postman with Newman

Challenges of API testing:
- The main challenges in API testing are parameter combination, parameter selection, and call sequencing.
- There is no GUI available to test the application, which makes it difficult to give input values.
- Validating and verifying the output in a different system is a little difficult for testers.
- Parameter selection and categorisation must be known to the testers.
- Exception-handling functions need to be tested.
- Coding knowledge is necessary for testers.
- Check out the top API testing tools list.

An API consists of a set of classes/functions/procedures which represent the business logic layer. If an API is not tested properly, it may cause problems not only in the API application but also in the calling application.
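As an illustration of the practices above, here is a minimal sketch of two automated API checks in Python using the requests library; the endpoint and response fields are hypothetical:

    import requests  # third-party HTTP client, assumed installed

    BASE_URL = "https://api.example.com"  # hypothetical service under test

    def test_get_user_returns_expected_fields():
        # Return value based on input condition: a known id yields 200
        # and a JSON body with the documented fields.
        response = requests.get(f"{BASE_URL}/users/1", timeout=5)
        assert response.status_code == 200
        payload = response.json()
        assert "id" in payload and "name" in payload

    def test_missing_user_fails_gracefully():
        # Boundary condition: a nonexistent id should give a clean 404,
        # not an unhandled server error.
        response = requests.get(f"{BASE_URL}/users/999999", timeout=5)
        assert response.status_code == 404

    # Run with a test runner such as pytest; group cases by category
    # and keep each test self-contained, as recommended above.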
Common Lisp

Paradigm: multi-paradigm (procedural, functional, object-oriented, meta, reflective, generic)
Designed by: Scott Fahlman, Richard P. Gabriel, David A. Moon, Kent Pitman, Guy Steele, Dan Weinreb
Developer: ANSI X3J13 committee
First appeared: 1984; 1994 for ANSI Common Lisp
Typing discipline: dynamic, strong
Scope: lexical, optionally dynamic
Filename extensions: .lisp, .lsp, .l, .cl, .fasl
Major implementations: Allegro CL, ABCL, CLISP, Clozure CL, CMUCL, ECL, GCL, LispWorks, Scieneer CL, SBCL, Symbolics Common Lisp
Dialects: CLtL1, CLtL2, ANSI Common Lisp
Influenced by: Lisp, Lisp Machine Lisp, Maclisp, Scheme, Interlisp
Influenced: Clojure, Dylan, Emacs Lisp, EuLisp, ISLISP, Julia, Moose, R, SKILL, SubL

Common Lisp (CL) is a dialect of the Lisp programming language, published in ANSI standard document ANSI INCITS 226-1994 (R2004) (formerly X3.226-1994 (R1999)). The Common Lisp HyperSpec, a hyperlinked HTML version, has been derived from the ANSI Common Lisp standard.

The Common Lisp language was developed as a standardized and improved successor of Maclisp. By the early 1980s several groups were already at work on diverse successors to MacLisp: Lisp Machine Lisp (aka ZetaLisp), Spice Lisp, NIL and S-1 Lisp. Common Lisp sought to unify, standardise, and extend the features of these MacLisp dialects. Common Lisp is not an implementation, but rather a language specification. Several implementations of the Common Lisp standard are available, including free and open-source software and proprietary products.

Common Lisp is a general-purpose, multi-paradigm programming language. It supports a combination of procedural, functional, and object-oriented programming paradigms. As a dynamic programming language, it facilitates evolutionary and incremental software development, with iterative compilation into efficient run-time programs. This incremental development is often done interactively without interrupting the running application.

It also supports optional type annotation and casting, which can be added as necessary at the later profiling and optimization stages, to permit the compiler to generate more efficient code. For instance, fixnum can hold an unboxed integer in a range supported by the hardware and implementation, permitting more efficient arithmetic than on big integers or arbitrary precision types. Similarly, the compiler can be told on a per-module or per-function basis which type safety level is wanted, using optimize declarations.

Common Lisp is extensible through standard features such as Lisp macros (code transformations) and reader macros (input parsers for characters).

History

Work on Common Lisp started in 1981 after an initiative by ARPA manager Bob Engelmore to develop a single community standard Lisp dialect. Much of the initial language design was done via electronic mail. In 1982, Guy L. Steele, Jr. gave the first overview of Common Lisp at the 1982 ACM Symposium on LISP and functional programming. The first language documentation was published in 1984 as Common Lisp the Language, first edition. A second edition, published in 1990, incorporated many changes to the language, made during the ANSI Common Lisp standardization process. The final ANSI Common Lisp standard was then published in 1994.
Since then no update to the standard has been published. Various extensions and improvements to Common Lisp (examples are Unicode, Concurrency, CLOS-based IO) have been provided by implementations and libraries (many available via Quicklisp).

Syntax

Common Lisp is a dialect of Lisp; it uses S-expressions to denote both code and data structure. Function calls, macro forms and special forms are written as lists, with the name of the operator first, as in these examples:

(+ 2 2) ; adds 2 and 2, yielding 4. The function's name is '+'. Lisp has no operators as such.

(defvar *x*) ; Ensures that a variable *x* exists,
             ; without giving it a value. The asterisks are part of
             ; the name, by convention denoting a special (global) variable.
             ; The symbol *x* is also hereby endowed with the property that
             ; subsequent bindings of it are dynamic, rather than lexical.

(setf *x* 42.1) ; sets the variable *x* to the floating-point value 42.1

;; Define a function that squares a number:
(defun square (x) (* x x))

;; Execute the function:
(square 3) ; Returns 9

;; the 'let' construct creates a scope for local variables. Here
;; the variable 'a' is bound to 6 and the variable 'b' is bound
;; to 4. Inside the 'let' is a 'body', where the last computed value is returned.
;; Here the result of adding a and b is returned from the 'let' expression.
;; The variables a and b have lexical scope, unless the symbols have been
;; marked as special variables (for instance by a prior DEFVAR).
(let ((a 6) (b 4)) (+ a b)) ; returns 10

Data types

Common Lisp has many data types. Number types include integers, ratios, floating-point numbers, and complex numbers. Common Lisp uses bignums to represent numerical values of arbitrary size and precision. The ratio type represents fractions exactly, a facility not available in many languages. Common Lisp automatically coerces numeric values among these types as appropriate.

The symbol type is common to Lisp languages, but largely unknown outside them. A symbol is a unique, named data object with several parts: name, value, function, property list and package. Of these, the value cell and function cell are the most important. Symbols in Lisp are often used similarly to identifiers in other languages: to hold the value of a variable; however there are many other uses. Normally, when a symbol is evaluated, its value is returned. Some symbols evaluate to themselves; for example, all symbols in the keyword package are self-evaluating. Boolean values in Common Lisp are represented by the self-evaluating symbols T and NIL. Common Lisp has namespaces for symbols, called 'packages'.

A number of functions are available for rounding scalar numeric values in various ways. The function round rounds the argument to the nearest integer, with halfway cases rounded to the even integer. The functions truncate, floor, and ceiling round towards zero, down, or up respectively. All these functions return the discarded fractional part as a secondary value. For example, (floor -2.5) yields -3, 0.5; (ceiling -2.5) yields -2, -0.5; (round 2.5) yields 2, 0.5; and (round 3.5) yields 4, -0.5.

Sequence types in Common Lisp include lists, vectors, bit-vectors, and strings. There are many operations that can work on any sequence type.

As in almost all other Lisp dialects, lists in Common Lisp are composed of conses, sometimes called cons cells or pairs. A cons is a data structure with two slots, called its car and cdr. A list is a linked chain of conses or the empty list. Each cons's car refers to a member of the list (possibly another list).
Each cons's cdr refers to the next cons, except for the last cons in a list, whose cdr refers to the nil value. Conses can also easily be used to implement trees and other complex data structures, though it is usually advised to use structure or class instances instead. It is also possible to create circular data structures with conses.

Common Lisp supports multidimensional arrays, and can dynamically resize adjustable arrays if required. Multidimensional arrays can be used for matrix mathematics. A vector is a one-dimensional array. Arrays can carry any type as members (even mixed types in the same array) or can be specialized to contain a specific type of members, as in a vector of bits. Usually only a few types are supported. Many implementations can optimize array functions when the array used is type-specialized. Two type-specialized array types are standard: a string is a vector of characters, while a bit-vector is a vector of bits.

Hash tables store associations between data objects. Any object may be used as key or value. Hash tables are automatically resized as needed.

Packages are collections of symbols, used chiefly to separate the parts of a program into namespaces. A package may export some symbols, marking them as part of a public interface. Packages can use other packages.

Classes are similar to structures, but offer more dynamic features and multiple inheritance. (See CLOS.) Classes were added late to Common Lisp and there is some conceptual overlap with structures. Objects created from classes are called instances. A special case is generic functions, which are both functions and instances.

Common Lisp supports first-class functions. For instance, it is possible to write functions that take other functions as arguments or return functions as well. This makes it possible to describe very general operations. The Common Lisp library relies heavily on such higher-order functions. For example, the sort function takes a relational operator as an argument and a key function as an optional keyword argument. This can be used not only to sort any type of data, but also to sort data structures according to a key.

;; Sorts the list using the > and < function as the relational operator.
(sort (list 5 2 6 3 1 4) #'>) ; Returns (6 5 4 3 2 1)
(sort (list 5 2 6 3 1 4) #'<) ; Returns (1 2 3 4 5 6)

;; Sorts the list according to the first element of each sub-list.
(sort (list '(9 A) '(3 B) '(4 C)) #'< :key #'first) ; Returns ((3 B) (4 C) (9 A))

The evaluation model for functions is very simple. When the evaluator encounters a form (f a1 a2...) then it presumes that the symbol named f is one of the following:

- A special operator (easily checked against a fixed list)
- A macro operator (must have been defined previously)
- The name of a function (default), which may either be a symbol, or a sub-form beginning with the symbol lambda

If f is the name of a function, then the arguments a1, a2, ..., an are evaluated in left-to-right order, and the function is found and invoked with those values supplied as parameters.

defun defines functions, where a function definition gives the name of the function, the names of any arguments, and a function body:

(defun square (x) (* x x))

Function definitions may include compiler directives, known as declarations, which provide hints to the compiler about optimization settings or the data types of arguments.
They may also include documentation strings (docstrings), which the Lisp system may use to provide interactive documentation:

(defun square (x)
  "Calculates the square of the single-float x."
  (declare (single-float x) (optimize (speed 3) (debug 0) (safety 1)))
  (the single-float (* x x)))

Anonymous functions (function literals) are defined using lambda expressions, e.g. (lambda (x) (* x x)) for a function that squares its argument. Lisp programming style frequently uses higher-order functions for which it is useful to provide anonymous functions as arguments.

Local functions can be defined with flet and labels:

(flet ((square (x) (* x x)))
  (square 3))

There are a number of other operators related to the definition and manipulation of functions. For instance, a function may be compiled with the compile operator. (Some Lisp systems run functions using an interpreter by default unless instructed to compile; others compile every function.)

Defining generic functions and methods

defgeneric defines generic functions. Generic functions are a collection of methods. defmethod defines methods. Methods can specialize their parameters over CLOS standard classes, system classes, structure classes or objects. For many types there are corresponding system classes. When a generic function is called, multiple-dispatch will determine the effective method to use.

(defgeneric add (a b))
(defmethod add ((a number) (b number)) (+ a b))
(defmethod add ((a vector) (b number)) (map 'vector (lambda (n) (+ n b)) a))
(defmethod add ((a vector) (b vector)) (map 'vector #'+ a b))
(defmethod add ((a string) (b string)) (concatenate 'string a b))

(add 2 3) ; returns 5
(add #(1 2 3 4) 7) ; returns #(8 9 10 11)
(add #(1 2 3 4) #(4 3 2 1)) ; returns #(5 5 5 5)
(add "COMMON " "LISP") ; returns "COMMON LISP"

Generic functions are also a first-class data type. There are many more features to generic functions and methods than described above.

The function namespace

The namespace for function names is separate from the namespace for data variables. This is a key difference between Common Lisp and Scheme. For Common Lisp, operators that define names in the function namespace include defun, flet, labels, defmethod and defgeneric.

To pass a function by name as an argument to another function, one must use the function special operator, commonly abbreviated as #'. The first sort example above refers to the function named by the symbol > in the function namespace, with the code #'>. Conversely, to call a function passed in such a way, one would use the funcall operator on the argument.

Scheme's evaluation model is simpler: there is only one namespace, and all positions in the form are evaluated (in any order), not just the arguments. Code written in one dialect is therefore sometimes confusing to programmers more experienced in the other. For instance, many Common Lisp programmers like to use descriptive variable names such as list or string, which could cause problems in Scheme, as they would locally shadow function names.

Whether a separate namespace for functions is an advantage is a source of contention in the Lisp community. It is usually referred to as the Lisp-1 vs. Lisp-2 debate. Lisp-1 refers to Scheme's model and Lisp-2 refers to Common Lisp's model. These names were coined in a 1988 paper by Richard P. Gabriel and Kent Pitman, which extensively compares the two approaches.
Multiple return values

Common Lisp supports the concept of multiple values, where any expression always has a single primary value, but it might also have any number of secondary values, which might be received and inspected by interested callers. This concept is distinct from returning a list value, as the secondary values are fully optional, and passed via a dedicated side channel. This means that callers may remain entirely unaware of the secondary values being there if they have no need for them, and it makes it convenient to use the mechanism for communicating information that is sometimes useful, but not always necessary. For example, the TRUNCATE function rounds the given number to an integer towards zero. However, it also returns a remainder as a secondary value, making it very easy to determine what value was truncated. It also supports an optional divisor parameter, which can be used to perform Euclidean division trivially:

(let ((x 1266778)
      (y 458))
  (multiple-value-bind (quotient remainder)
      (truncate x y)
    (format nil "~A divided by ~A is ~A remainder ~A" x y quotient remainder)))

;;;; => "1266778 divided by 458 is 2765 remainder 408"

GETHASH returns the value of a key in an associative map, or the default value otherwise, and a secondary boolean indicating whether the value was found. Thus code which does not care about whether the value was found or provided as the default can simply use it as-is, but when such distinction is important, it might inspect the secondary boolean and react appropriately. Both use cases are supported by the same call and neither is unnecessarily burdened or constrained by the other. Having this feature at the language level removes the need to check for the existence of the key or compare it to null as would be done in other languages.

(defun get-answer (library)
  (gethash 'answer library 42))

(defun the-answer-1 (library)
  (format nil "The answer is ~A" (get-answer library)))
;;;; Returns "The answer is 42" if ANSWER not present in LIBRARY

(defun the-answer-2 (library)
  (multiple-value-bind (answer sure-p)
      (get-answer library)
    (if (not sure-p)
        "I don't know"
        (format nil "The answer is ~A" answer))))
;;;; Returns "I don't know" if ANSWER not present in LIBRARY

Multiple values are supported by a handful of standard forms, most common of which are the MULTIPLE-VALUE-BIND special form for accessing secondary values and VALUES for returning multiple values:

(defun magic-eight-ball ()
  "Return an outlook prediction, with the probability as a secondary value"
  (values "Outlook good" (random 1.0)))
;;;; => "Outlook good"
;;;; => 0.3187

Other data types in Common Lisp include:
- Pathnames represent files and directories in the filesystem. The Common Lisp pathname facility is more general than most operating systems' file naming conventions, making Lisp programs' access to files broadly portable across diverse systems.
- Input and output streams represent sources and sinks of binary or textual data, such as the terminal or open files.
- Common Lisp has a built-in pseudo-random number generator (PRNG). Random state objects represent reusable sources of pseudo-random numbers, allowing the user to seed the PRNG or cause it to replay a sequence.
- Conditions are a type used to represent errors, exceptions, and other "interesting" events to which a program may respond.
- Classes are first-class objects, and are themselves instances of classes called metaobject classes (metaclasses for short).
- Readtables are a type of object which controls how Common Lisp's reader parses the text of source code. By controlling which readtable is in use when code is read in, the programmer can change or extend the language's syntax.

Like programs in many other programming languages, Common Lisp programs make use of names to refer to variables, functions, and many other kinds of entities. Named references are subject to scope. The association between a name and the entity which the name refers to is called a binding. Scope refers to the set of circumstances in which a name is determined to have a particular binding.

Determiners of scope

The circumstances which determine scope in Common Lisp include:

- the location of a reference within an expression. If it is the leftmost position of a compound, it refers to a special operator or a macro or function binding; otherwise it refers to a variable binding or something else.
- the kind of expression in which the reference takes place. For instance, (go x) means transfer control to the label x, whereas (print x) refers to the variable x. Both scopes of x can be active in the same region of program text, since tagbody labels are in a separate namespace from variable names. A special form or macro form has complete control over the meanings of all symbols in its syntax. For instance, in (defclass x (a b) ()), a class definition, the (a b) is a list of base classes, so these names are looked up in the space of class names, and x is not a reference to an existing binding but the name of a new class being derived from a and b. These facts emerge purely from the semantics of defclass. The only generic fact about this expression is that defclass refers to a macro binding; everything else is up to defclass.
- the location of the reference within the program text. For instance, if a reference to variable x is enclosed in a binding construct such as a let which defines a binding for x, then the reference is in the scope created by that binding.
- for a variable reference, whether or not the variable's symbol has been, locally or globally, declared special. This determines whether the reference is resolved within a lexical environment or within a dynamic environment.
- the specific instance of the environment in which the reference is resolved. An environment is a run-time dictionary which maps symbols to bindings. Each kind of reference uses its own kind of environment. References to lexical variables are resolved in a lexical environment, et cetera. More than one environment can be associated with the same reference. For instance, thanks to recursion or the use of multiple threads, multiple activations of the same function can exist at the same time. These activations share the same program text, but each has its own lexical environment instance.

To understand what a symbol refers to, the Common Lisp programmer must know what kind of reference is being expressed, what kind of scope it uses if it is a variable reference (dynamic versus lexical scope), and also the run-time situation: in what environment is the reference resolved, where was the binding introduced into the environment, et cetera.
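To illustrate the point above about separate namespaces for tagbody labels and variables, the following hypothetical sketch uses the same symbol x as both a label and a variable in one region of code:

(let ((x 0))
  (tagbody
   x                        ; X as a tagbody label
     (print x)              ; X as a variable reference
     (incf x)
     (when (< x 3)
       (go x))))            ; GO resolves X in the label namespace

Evaluating this form prints 0, 1 and 2, then returns NIL; the two uses of x never interfere.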
Kinds of environment

Some environments in Lisp are globally pervasive. For instance, if a new type is defined, it is known everywhere thereafter. References to that type look it up in this global environment.

One type of environment in Common Lisp is the dynamic environment. Bindings established in this environment have dynamic extent, which means that a binding is established at the start of the execution of some construct, such as a let block, and disappears when that construct finishes executing: its lifetime is tied to the dynamic activation and deactivation of a block. However, a dynamic binding is not just visible within that block; it is also visible to all functions invoked from that block. This type of visibility is known as indefinite scope. Bindings which exhibit dynamic extent (lifetime tied to the activation and deactivation of a block) and indefinite scope (visible to all functions which are called from that block) are said to have dynamic scope.

Common Lisp has support for dynamically scoped variables, which are also called special variables. Certain other kinds of bindings are necessarily dynamically scoped as well, such as restarts and catch tags. Function bindings cannot be dynamically scoped using flet (which only provides lexically scoped function bindings), but function objects (a first-class object in Common Lisp) can be assigned to dynamically scoped variables, bound using let in dynamic scope, then called using funcall or apply.

Dynamic scope is extremely useful because it adds referential clarity and discipline to global variables. Global variables are frowned upon in computer science as potential sources of error, because they can give rise to ad-hoc, covert channels of communication among modules that lead to unwanted, surprising interactions.

In Common Lisp, a special variable which has only a top-level binding behaves just like a global variable in other programming languages. A new value can be stored into it, and that value simply replaces what is in the top-level binding. Careless replacement of the value of a global variable is at the heart of bugs caused by use of global variables. However, another way to work with a special variable is to give it a new, local binding within an expression. This is sometimes referred to as "rebinding" the variable. Binding a dynamically scoped variable temporarily creates a new memory location for that variable and associates the name with that location. While that binding is in effect, all references to that variable refer to the new binding; the previous binding is hidden. When execution of the binding expression terminates, the temporary memory location is gone, and the old binding is revealed, with the original value intact. Of course, multiple dynamic bindings for the same variable can be nested.

In Common Lisp implementations which support multithreading, dynamic scopes are specific to each thread of execution. Thus special variables serve as an abstraction for thread-local storage. If one thread rebinds a special variable, this rebinding has no effect on that variable in other threads. The value stored in a binding can only be retrieved by the thread which created that binding. If each thread binds some special variable *x*, then *x* behaves like thread-local storage. Among threads which do not rebind *x*, it behaves like an ordinary global: all of these threads refer to the same top-level binding of *x*.

Dynamic variables can be used to extend the execution context with additional context information which is implicitly passed from function to function without having to appear as an extra function parameter. This is especially useful when the control transfer has to pass through layers of unrelated code, which simply cannot be extended with extra parameters to pass the additional data.
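A minimal sketch of such implicit context passing (the variable *log-prefix* and the functions here are hypothetical names, not from the standard):

(defvar *log-prefix* "[app] ")             ; a special variable with a top-level binding
(defun emit (msg)                          ; deep in some library layer
  (format t "~a~a~%" *log-prefix* msg))
(defun middle-layer ()                     ; intermediate code; no extra parameter needed
  (emit "working"))
(let ((*log-prefix* "[debug] "))           ; dynamic rebinding...
  (middle-layer))                          ; ...prints "[debug] working"
(middle-layer)                             ; prints "[app] working" again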
A situation like this usually calls for a global variable. That global variable must be saved and restored, so that the scheme doesn't break under recursion: dynamic variable rebinding takes care of this. And that variable must be made thread-local (or else a big mutex must be used) so the scheme doesn't break under threads: dynamic scope implementations can take care of this also.

In the Common Lisp library, there are many standard special variables. For instance, all standard I/O streams are stored in the top-level bindings of well-known special variables. The standard output stream is stored in *standard-output*. Suppose a function foo writes to standard output:

(defun foo ()
  (format t "Hello, world"))

To capture its output in a character string, *standard-output* can be bound to a string stream and foo called within that binding:

(with-output-to-string (*standard-output*)
  (foo))
-> "Hello, world" ; gathered output returned as a string

Common Lisp supports lexical environments. Formally, the bindings in a lexical environment have lexical scope and may have either indefinite extent or dynamic extent, depending on the type of namespace. Lexical scope means that visibility is physically restricted to the block in which the binding is established. References which are not textually (i.e. lexically) embedded in that block simply do not see that binding.

The tags in a TAGBODY have lexical scope. The expression (GO X) is erroneous if it is not actually embedded in a TAGBODY which contains a label X. However, the label bindings disappear when the TAGBODY terminates its execution, because they have dynamic extent. If that block of code is re-entered by the invocation of a lexical closure, it is invalid for the body of that closure to try to transfer control to a tag via GO:

(defvar *stashed*) ;; will hold a function

(tagbody
  (setf *stashed* (lambda () (go some-label)))
  (go end-label) ;; skip the (print "Hello")
 some-label
  (print "Hello")
 end-label)
-> NIL

When the TAGBODY is executed, it first evaluates the setf form, which stores a function in the special variable *stashed*. Then the (go end-label) transfers control to end-label, skipping the code (print "Hello"). Since end-label is at the end of the tagbody, the tagbody terminates, yielding NIL. Suppose that the previously remembered function is now called:

(funcall *stashed*) ;; Error!

This situation is erroneous. One implementation's response is an error condition containing the message "GO: tagbody for tag SOME-LABEL has already been left". The function tried to evaluate (go some-label), which is lexically embedded in the tagbody and resolves to the label. However, the tagbody isn't executing (its extent has ended), and so the control transfer cannot take place.

Local function bindings in Lisp have lexical scope, and variable bindings also have lexical scope by default. By contrast with GO labels, both of these have indefinite extent. When a lexical function or variable binding is established, that binding continues to exist for as long as references to it are possible, even after the construct which established that binding has terminated. References to lexical variables and functions after the termination of their establishing construct are possible thanks to lexical closures.

Lexical binding is the default binding mode for Common Lisp variables. For an individual symbol, it can be switched to dynamic scope, either by a local declaration or by a global declaration. The latter may occur implicitly through the use of a construct like DEFVAR or DEFPARAMETER.
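The following hypothetical sketch shows a lexical binding with indefinite extent: the binding of count outlives the call to the function which created it (make-counter is an illustrative name):

(defun make-counter ()
  (let ((count 0))               ; a lexical binding with indefinite extent
    (lambda ()
      (incf count))))            ; the closure keeps COUNT alive
(defparameter *tick* (make-counter))
(funcall *tick*)                 ; returns 1
(funcall *tick*)                 ; returns 2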
It is an important convention in Common Lisp programming that special (i.e. dynamically scoped) variables have names which begin and end with an asterisk sigil *, in what is called the “earmuff convention”. If adhered to, this convention effectively creates a separate namespace for special variables, so that variables intended to be lexical are not accidentally made special.

Lexical scope is useful for several reasons.

Firstly, references to variables and functions can be compiled to efficient machine code, because the run-time environment structure is relatively simple. In many cases it can be optimized to stack storage, so opening and closing lexical scopes has minimal overhead. Even in cases where full closures must be generated, access to the closure's environment is still efficient; typically each variable becomes an offset into a vector of bindings, and so a variable reference becomes a simple load or store instruction with a base-plus-offset addressing mode.

Secondly, lexical scope (combined with indefinite extent) gives rise to the lexical closure, which in turn creates a whole paradigm of programming centered around the use of functions as first-class objects, which is at the root of functional programming.

Thirdly, and perhaps most importantly, even if lexical closures are not exploited, the use of lexical scope isolates program modules from unwanted interactions. Due to their restricted visibility, lexical variables are private. If one module A binds a lexical variable X and calls another module B, references to X in B will not accidentally resolve to the X bound in A. B simply has no access to X. For situations in which disciplined interactions through a variable are desirable, Common Lisp provides special variables. Special variables allow a module A to set up a binding for a variable X which is visible to another module B called from A. Being able to do this is an advantage, and being able to prevent it from happening is also an advantage; consequently, Common Lisp supports both lexical and dynamic scope.

A macro in Lisp superficially resembles a function in usage. However, rather than representing an expression which is evaluated, it represents a transformation of the program source code. The macro gets the source it surrounds as arguments, binds them to its parameters and computes a new source form. This new form can also use a macro. The macro expansion is repeated until the new source form does not use a macro. The final computed form is the source code executed at runtime.

Typical uses of macros in Lisp:

- new control structures (examples: looping constructs, branching constructs)
- scoping and binding constructs
- simplified syntax for complex and repeated source code
- top-level defining forms with compile-time side-effects
- data-driven programming
- embedded domain-specific languages (examples: SQL, HTML, Prolog)
- implicit finalization forms

Various standard Common Lisp features also need to be implemented as macros, such as:

- the standard setf abstraction, to allow custom compile-time expansions of assignment/access operators
- with-open-file and other similar "with-" macros
- depending on the implementation, if or cond is a macro built on the other, which is then the special operator; when and unless consist of macros
- the powerful loop domain-specific language

Macros are defined by the defmacro macro. The special operator macrolet allows the definition of local (lexically scoped) macros. It is also possible to define macros for symbols using define-symbol-macro and symbol-macrolet.
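A small sketch of a local macro defined with macrolet (with-doubled is a hypothetical macro, not a standard one):

(macrolet ((with-doubled ((var expr) &body body)
             `(let ((,var (* 2 ,expr)))
                ,@body)))
  (with-doubled (y 21)
    y))                          ; returns 42

Within the body of the macrolet form, uses of with-doubled are rewritten at compile time into the let form given in the template.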
Paul Graham's book On Lisp describes the use of macros in Common Lisp in detail. Doug Hoyte's book Let Over Lambda extends the discussion on macros, claiming "Macros are the single greatest advantage that lisp has as a programming language and the single greatest advantage of any programming language." Hoyte provides several examples of iterative development of macros.

Example using a macro to define a new control structure

Macros allow Lisp programmers to create new syntactic forms in the language. One typical use is to create new control structures. The example macro provides an until looping construct. The syntax is:

(until test form*)

The macro definition for until:

(defmacro until (test &body body)
  (let ((start-tag (gensym "START"))
        (end-tag (gensym "END")))
    `(tagbody ,start-tag
              (when ,test (go ,end-tag))
              (progn ,@body)
              (go ,start-tag)
              ,end-tag)))

tagbody is a primitive Common Lisp special operator which provides the ability to name tags and use the go form to jump to those tags. The backquote ` provides a notation for code templates, where the values of forms preceded by a comma are filled in. Forms preceded by a comma and at-sign are spliced in. The generated tagbody form tests the end condition. If the condition is true, it jumps to the end tag. Otherwise the provided body code is executed and then it jumps to the start tag.

An example form using the above until macro:

(until (= (random 10) 0)
  (write-line "Hello"))

The code can be expanded using the function macroexpand-1. The expansion for the above example looks like this:

(TAGBODY
 #:START1136
  (WHEN (= (RANDOM 10) 0)
    (GO #:END1137))
  (PROGN (WRITE-LINE "Hello"))
  (GO #:START1136)
 #:END1137)

During macro expansion the value of the variable test is (= (random 10) 0) and the value of the variable body is ((write-line "Hello")). The body is a list of forms. Symbols are usually automatically upcased. The expansion uses the TAGBODY with two labels. The symbols for these labels are computed by GENSYM and are not interned in any package. Two go forms jump to these tags. Since tagbody is a primitive operator in Common Lisp (and not a macro), it will not be expanded into something else. The expanded form uses the when macro, which also will be expanded. Fully expanding a source form is called code walking. In the fully expanded (walked) form, the when form is replaced by the primitive if:

(TAGBODY
 #:START1136
  (IF (= (RANDOM 10) 0)
      (PROGN (GO #:END1137))
      NIL)
  (PROGN (WRITE-LINE "Hello"))
  (GO #:START1136)
 #:END1137)

All macros must be expanded before the source code containing them can be evaluated or compiled normally. Macros can be considered functions that accept and return S-expressions, similar to abstract syntax trees but not limited to them. These functions are invoked before the evaluator or compiler to produce the final source code. Macros are written in normal Common Lisp, and may use any Common Lisp (or third-party) operator available.

Variable capture and shadowing

Common Lisp macros are capable of what is commonly called variable capture, where symbols in the macro-expansion body coincide with those in the calling context, allowing the programmer to create macros wherein various symbols have special meaning. The term variable capture is somewhat misleading, because all namespaces are vulnerable to unwanted capture, including the operator and function namespace, the tagbody label namespace, and the catch tag, condition handler and restart namespaces. Variable capture can introduce software defects.
This happens in one of the following two ways:

- In the first way, a macro expansion can inadvertently make a symbolic reference which the macro writer assumed will resolve in a global namespace, but the code where the macro is expanded happens to provide a local, shadowing definition which steals that reference. Let this be referred to as type 1 capture.
- The second way, type 2 capture, is just the opposite: some of the arguments of the macro are pieces of code supplied by the macro caller, and those pieces of code are written such that they make references to surrounding bindings. However, the macro inserts these pieces of code into an expansion which defines its own bindings that accidentally capture some of these references.

The Scheme dialect of Lisp provides a macro-writing system which provides the referential transparency that eliminates both types of capture problem. This type of macro system is sometimes called "hygienic", in particular by its proponents (who regard macro systems which do not automatically solve this problem as unhygienic).

In Common Lisp, macro hygiene is ensured in one of two different ways.

One approach is to use gensyms: guaranteed-unique symbols which can be used in a macro expansion without threat of capture. The use of gensyms in a macro definition is a manual chore, but macros can be written which simplify the instantiation and use of gensyms. Gensyms solve type 2 capture easily, but they are not applicable to type 1 capture in the same way, because the macro expansion cannot rename the interfering symbols in the surrounding code which capture its references. Gensyms could be used to provide stable aliases for the global symbols which the macro expansion needs. The macro expansion would use these secret aliases rather than the well-known names, so redefinition of the well-known names would have no ill effect on the macro.

Another approach is to use packages. A macro defined in its own package can simply use internal symbols in that package in its expansion. The use of packages deals with type 1 and type 2 capture. However, packages don't solve the type 1 capture of references to standard Common Lisp functions and operators. The reason is that the use of packages to solve capture problems revolves around the use of private symbols (symbols in one package which are not imported into, or otherwise made visible in, other packages), whereas the Common Lisp library symbols are external and frequently imported into or made visible in user-defined packages.

The following is an example of unwanted capture in the operator namespace, occurring in the expansion of a macro:

;; expansion of UNTIL makes liberal use of DO
(defmacro until (expression &body body)
  `(do () (,expression) ,@body))

;; macrolet establishes lexical operator binding for DO
(macrolet ((do (...) ... something else ...))
  (until (= (random 10) 0) (write-line "Hello")))

The until macro will expand into a form which calls do, which is intended to refer to the standard Common Lisp macro do. However, in this context, do may have a completely different meaning, so until may not work properly. Common Lisp solves the problem of the shadowing of standard operators and functions by forbidding their redefinition. Because it redefines the standard operator do, the preceding is in fact a fragment of non-conforming Common Lisp, which allows implementations to diagnose and reject it.
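A classic sketch of type 2 capture (my-or is a hypothetical macro): the macro's own binding of temp steals a reference in the caller-supplied code:

(defmacro my-or (a b)
  `(let ((temp ,a))               ; TEMP is not a gensym: capture risk
     (if temp temp ,b)))
(let ((temp 3))
  (my-or nil temp))               ; returns NIL, not 3: the caller's TEMP was captured

Generating the inner variable with gensym, as in the until example above, avoids this capture.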
The condition system is responsible for exception handling in Common Lisp. It provides conditions, handlers and restarts. Conditions are objects describing an exceptional situation (for example an error). If a condition is signaled, the Common Lisp system searches for a handler for this condition type and calls the handler. The handler can now search for restarts and use one of them to automatically repair the current problem, using information such as the condition type and any relevant data provided as part of the condition object, and then invoke the appropriate restart function. These restarts, if unhandled by code, can be presented to users (as part of a user interface, that of a debugger for example), so that the user can select and invoke one of the available restarts. Since the condition handler is called in the context of the error (without unwinding the stack), full error recovery is possible in many cases, where other exception handling systems would have already terminated the current routine. The debugger itself can also be customized or replaced using the *debugger-hook* dynamic variable. Code found within unwind-protect forms, such as finalizers, will also be executed as appropriate despite the exception.

In the following example (using Symbolics Genera) the user tries to open a file in a Lisp function test called from the read-eval-print loop (REPL), when the file does not exist. The Lisp system presents four restarts. The user selects the Retry OPEN using a different pathname restart and enters a different pathname (lispm-init.lisp instead of lispm-int.lisp). The user code does not contain any error handling code. The whole error handling and restart code is provided by the Lisp system, which can handle and repair the error without terminating the user code.

Command: (test ">zippy>lispm-int.lisp")

Error: The file was not found.
       For lispm:>zippy>lispm-int.lisp.newest
LMFS:OPEN-LOCAL-LMFS-1
   Arg 0: #P"lispm:>zippy>lispm-int.lisp.newest"

s-A, <Resume>: Retry OPEN of lispm:>zippy>lispm-int.lisp.newest
s-B:           Retry OPEN using a different pathname
s-C, <Abort>:  Return to Lisp Top Level in a TELNET server
s-D:           Restart process TELNET terminal

-> Retry OPEN using a different pathname
Use what pathname instead [default lispm:>zippy>lispm-int.lisp.newest]:
   lispm:>zippy>lispm-init.lisp.newest

...the program continues
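In portable user code, the same machinery is reached through standard operators such as restart-case, handler-bind and invoke-restart. A minimal sketch (the function names are hypothetical):

(defun parse-entry (x)
  (restart-case (if (stringp x)
                    x
                    (error "Not a string: ~a" x))
    (use-default ()               ; a restart offering one way to repair
      "N/A")))
(handler-bind ((error (lambda (c)
                        (declare (ignore c))
                        (invoke-restart 'use-default))))
  (parse-entry 42))               ; returns "N/A"

The handler runs before the stack unwinds and chooses a restart, so parse-entry recovers instead of terminating.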
Common Lisp Object System (CLOS)

Common Lisp includes a toolkit for object-oriented programming, the Common Lisp Object System (CLOS), which is one of the most powerful object systems available in any language. For example, Peter Norvig explains how many design patterns are simpler to implement in a dynamic language with the features of CLOS (multiple inheritance, mixins, multimethods, metaclasses, method combinations, etc.). Several extensions to Common Lisp for object-oriented programming were proposed for inclusion into the ANSI Common Lisp standard; eventually CLOS was adopted as the standard object system for Common Lisp.

CLOS is a dynamic object system with multiple dispatch and multiple inheritance, and differs radically from the OOP facilities found in static languages such as C++ or Java. As a dynamic object system, CLOS allows changes at runtime to generic functions and classes: methods can be added and removed, classes can be added and redefined, objects can be updated for class changes, and the class of an object can be changed.

CLOS has been integrated into ANSI Common Lisp. Generic functions can be used like normal functions and are a first-class data type. Every CLOS class is integrated into the Common Lisp type system. Many Common Lisp types have a corresponding class. There is more potential use of CLOS for Common Lisp: the specification does not say whether conditions are implemented with CLOS, and pathnames and streams could be implemented with CLOS. These further usage possibilities are not part of the standard, but actual Common Lisp implementations use CLOS for pathnames, streams, input/output, conditions, the implementation of CLOS itself, and more.

Compiler and interpreter

Several implementations of earlier Lisp dialects provided both an interpreter and a compiler. Unfortunately, the semantics were often different: these earlier Lisps implemented lexical scoping in the compiler and dynamic scoping in the interpreter. Common Lisp requires that both the interpreter and the compiler use lexical scoping by default. The Common Lisp standard describes both the semantics of the interpreter and of the compiler. The compiler can be called using the function compile for individual functions and using the function compile-file for files. Common Lisp allows type declarations and provides ways to influence the compiler's code generation policy. For the latter, various optimization qualities can be given values between 0 (not important) and 3 (most important): speed, space, safety, debug and compilation-speed.

There is also a function to evaluate Lisp code: eval. It takes code as pre-parsed s-expressions and not, as in some other languages, as text strings. This way code can be constructed with the usual Lisp functions for constructing lists and symbols, and then evaluated with eval. Several Common Lisp implementations (like Clozure CL and SBCL) implement eval using their compiler; this way code is compiled even though it is evaluated using the function eval.

The file compiler is invoked using the function compile-file. The generated file with compiled code is called a fasl (from fast load) file. These fasl files, and also source code files, can be loaded with the function load into a running Common Lisp system. Depending on the implementation, the file compiler generates byte code (for example for the Java Virtual Machine), C language code (which is then compiled with a C compiler), or, directly, native code.

Common Lisp implementations can be used interactively, even though the code gets fully compiled. The idea of an interpreted language thus does not apply to interactive Common Lisp.

The language makes a distinction between read-time, compile-time, load-time and run-time, and allows user code to make this distinction as well, to perform the desired type of processing at the desired step.

Some special operators are provided to especially suit interactive development; for instance, defvar will only assign a value to its provided variable if it wasn't already bound, while defparameter will always perform the assignment. This distinction is useful when interactively evaluating, compiling and loading code in a live image.

Some features are also provided to help with writing compilers and interpreters. Symbols are first-class objects and are directly manipulable by user code. The progv special operator allows creating dynamic bindings programmatically, and packages are also manipulable. The Lisp compiler is available at runtime to compile files or individual functions. These features make it easy to use Lisp as an intermediate compiler or interpreter for another language.
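A small sketch of treating code as data with eval and compile (the names *form* and add-one are hypothetical):

(defvar *form* '(+ 1 2 3))               ; code built as an ordinary list
(eval *form*)                            ; returns 6
(compile 'add-one (lambda (x) (+ x 1)))  ; compile an individual function at runtime
(funcall 'add-one 41)                    ; returns 42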
The following program calculates the smallest number of people in a room for whom the probability of completely unique birthdays is less than 50% (the birthday paradox, where for 1 person the probability is obviously 100%, for 2 it is 364/365, etc.). The answer is 23. By convention, constants in Common Lisp are enclosed with + characters.

(defconstant +year-size+ 365)

(defun birthday-paradox (probability number-of-people)
  (let ((new-probability (* (/ (- +year-size+ number-of-people)
                               +year-size+)
                            probability)))
    (if (< new-probability 0.5)
        (1+ number-of-people)
        (birthday-paradox new-probability (1+ number-of-people)))))

Calling the example function using the REPL (read-eval-print loop):

CL-USER > (birthday-paradox 1.0 1)
23

Sorting a list of person objects

We define a class person and a method for displaying the name and age of a person. Next we define a group of persons as a list of person objects. Then we iterate over the sorted list.

(defclass person ()
  ((name :initarg :name :accessor person-name)
   (age  :initarg :age  :accessor person-age))
  (:documentation "The class PERSON with slots NAME and AGE."))

(defmethod display ((object person) stream)
  "Displaying a PERSON object to an output stream."
  (with-slots (name age) object
    (format stream "~a (~a)" name age)))

(defparameter *group*
  (list (make-instance 'person :name "Bob"   :age 33)
        (make-instance 'person :name "Chris" :age 16)
        (make-instance 'person :name "Ash"   :age 23))
  "A list of PERSON objects.")

(dolist (person (sort (copy-list *group*) #'> :key #'person-age))
  (display person *standard-output*)
  (terpri))

It prints the three names in descending order of age:

Bob (33)
Ash (23)
Chris (16)

Exponentiating by squaring

Use of the LOOP macro is demonstrated:

(defun power (x n)
  (loop with result = 1
        while (plusp n)
        when (oddp n) do (setf result (* result x))
        do (setf x (* x x)
                 n (truncate n 2))
        finally (return result)))

CL-USER > (power 2 200)
1606938044258990275541962092341162602522202993782792835301376

Compare with the built-in exponentiation:

CL-USER > (= (expt 2 200) (power 2 200))
T

Find the list of available shells

WITH-OPEN-FILE is a macro that opens a file and provides a stream. When the form returns, the file is automatically closed. FUNCALL calls a function object. The LOOP collects all lines that match the predicate.

(defun list-matching-lines (file predicate)
  "Returns a list of lines in file, for which the predicate applied to
the line returns T."
  (with-open-file (stream file)
    (loop for line = (read-line stream nil nil)
          while line
          when (funcall predicate line)
          collect it)))

The function AVAILABLE-SHELLS calls the above function LIST-MATCHING-LINES with a pathname and an anonymous function as the predicate. The predicate returns the pathname of a shell or NIL (if the string is not the filename of a shell).

(defun available-shells (&optional (file #p"/etc/shells"))
  (list-matching-lines
   file
   (lambda (line)
     (and (plusp (length line))
          (char= (char line 0) #\/)
          (pathname
           (string-right-trim '(#\space #\tab) line))))))

Example results (on Mac OS X 10.6):

CL-USER > (available-shells)
(#P"/bin/bash" #P"/bin/csh" #P"/bin/ksh" #P"/bin/sh" #P"/bin/tcsh" #P"/bin/zsh")

Comparison with other Lisps

Common Lisp is most frequently compared with, and contrasted to, Scheme, if only because they are the two most popular Lisp dialects. Scheme predates CL, and comes not only from the same Lisp tradition but from some of the same engineers: Guy L. Steele, with whom Gerald Jay Sussman designed Scheme, chaired the standards committee for Common Lisp.
Common Lisp is a general-purpose programming language, in contrast to Lisp variants such as Emacs Lisp and AutoLISP, which are extension languages embedded in particular products (GNU Emacs and AutoCAD, respectively). Unlike many earlier Lisps, Common Lisp (like Scheme) uses lexical variable scope by default for both interpreted and compiled code.

Most of the Lisp systems whose designs contributed to Common Lisp, such as ZetaLisp and Franz Lisp, used dynamically scoped variables in their interpreters and lexically scoped variables in their compilers. Scheme introduced the sole use of lexically scoped variables to Lisp, an inspiration from ALGOL 68 which was widely recognized as a good idea. CL supports dynamically scoped variables as well, but they must be explicitly declared as "special". There are no differences in scoping between ANSI CL interpreters and compilers.

Common Lisp is sometimes termed a Lisp-2 and Scheme a Lisp-1, referring to CL's use of separate namespaces for functions and variables. (In fact, CL has many namespaces, such as those for go tags, block names, and loop keywords.) There is a long-standing controversy between CL and Scheme advocates over the tradeoffs involved in multiple namespaces. In Scheme, it is (broadly) necessary to avoid giving variables names which clash with functions; Scheme functions frequently have arguments named lyst so as not to conflict with the system function list. However, in CL it is necessary to explicitly refer to the function namespace when passing a function as an argument, which is also a common occurrence, as in the sort example above.

CL also differs from Scheme in its handling of boolean values. Scheme uses the special values #t and #f to represent truth and falsity. CL follows the older Lisp convention of using the symbols T and NIL, with NIL standing also for the empty list. In CL, any non-NIL value is treated as true by conditionals such as if, whereas in Scheme all non-#f values are treated as true. These conventions allow some operators in both languages to serve both as predicates (answering a boolean-valued question) and as functions returning a useful value for further computation; however, in Scheme the value '(), which is equivalent to NIL in Common Lisp, evaluates to true in a boolean expression.

Lastly, the Scheme standards documents require tail-call optimization, which the CL standard does not. Most CL implementations do offer tail-call optimization, although often only when the programmer uses an optimization directive. Nonetheless, common CL coding style does not favor the ubiquitous use of recursion that Scheme style prefers: what a Scheme programmer would express with tail recursion, a CL user would usually express with an iterative expression in loop or (more recently) with the iterate package.
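To illustrate the boolean conventions described above, a small sketch:

(if '() "yes" "no")          ; returns "no": the empty list is NIL, the false value
(member 3 '(1 2 3))          ; returns (3), a true value that also carries useful data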
See the Category Common Lisp implementations.

Common Lisp is defined by a specification (like Ada and C) rather than by one implementation (like Perl before version 6). There are many implementations, and the standard details areas in which they may validly differ.

In addition, implementations tend to come with extensions, which provide functionality not covered in the standard:

- Interactive Top-Level (REPL)
- Garbage Collection
- Debugger, Stepper and Inspector
- Weak data structures (hash tables)
- Extensible sequences
- Extensible LOOP
- Environment access
- CLOS Meta-object Protocol
- CLOS based extensible streams
- CLOS based Condition System
- Network streams
- Persistent CLOS
- Unicode support
- Foreign-Language Interface (often to C)
- Operating System interface
- Java Interface
- Threads and Multiprocessing
- Application delivery (applications, dynamic libraries)
- Saving of images

Free and open-source software libraries have been created to support extensions to Common Lisp in a portable way, and are most notably found in the repositories of the Common-Lisp.net and Common Lisp Open Code Collection projects.

Common Lisp implementations may use any mix of native code compilation, byte code compilation or interpretation. Common Lisp has been designed to support incremental compilers, file compilers and block compilers. Standard declarations to optimize compilation (such as function inlining or type specialization) are proposed in the language specification. Most Common Lisp implementations compile source code to native machine code. Some implementations can create (optimized) stand-alone applications. Others compile to interpreted bytecode, which is less efficient than native code but eases binary-code portability. There are also compilers that compile Common Lisp code to C code. The misconception that Lisp is a purely interpreted language persists most likely because Lisp environments provide an interactive prompt and because code is compiled incrementally, one form at a time. With Common Lisp, incremental compilation is widely used.

List of implementations

- Allegro Common Lisp - for Microsoft Windows, FreeBSD, Linux, Apple macOS and various UNIX variants. Allegro CL provides an Integrated Development Environment (IDE) (for Windows and Linux) and extensive capabilities for application delivery.
- Liquid Common Lisp - formerly called Lucid Common Lisp. Only maintenance, no new releases.
- LispWorks - for Microsoft Windows, FreeBSD, Linux, Apple macOS, iOS, Android and various UNIX variants. LispWorks provides an Integrated Development Environment (IDE) (available for all platforms, but not for iOS and Android) and extensive capabilities for application delivery.
- mocl - for iOS, Android and macOS.
- Open Genera - for DEC Alpha.
- Scieneer Common Lisp - designed for high-performance scientific computing.

Freely redistributable implementations

- Armed Bear Common Lisp (ABCL) - a CL implementation that runs on the Java Virtual Machine. It includes a compiler to Java byte code, and allows access to Java libraries from CL. It was formerly just a component of the Armed Bear J Editor.
- CLISP - a bytecode-compiling implementation, portable and runs on a number of Unix and Unix-like systems (including macOS), as well as Microsoft Windows and several other systems.
- Clozure CL (CCL) - originally a free and open-source fork of Macintosh Common Lisp. As that history implies, CCL was written for the Macintosh, but Clozure CL now runs on macOS, FreeBSD, Linux, Solaris and Windows. 32- and 64-bit x86 ports are supported on each platform. Additionally there are PowerPC ports for Mac OS and Linux. CCL was previously known as OpenMCL, but that name is no longer used, to avoid confusion with the open source version of Macintosh Common Lisp.
- CMUCL - originally from Carnegie Mellon University, now maintained as free and open-source software by a group of volunteers. CMUCL uses a fast native-code compiler. It is available on Linux and BSD for Intel x86; Linux for Alpha; macOS for Intel x86 and PowerPC; and Solaris, IRIX, and HP-UX on their native platforms.
- Corman Common Lisp - for Microsoft Windows. In January 2015 Corman Lisp was published under the MIT license.
- Embeddable Common Lisp (ECL) - ECL includes a bytecode interpreter and compiler. It can also compile Lisp code to machine code via a C compiler: ECL translates Lisp code to C, compiles the C code with a C compiler, and can then load the resulting machine code. It is also possible to embed ECL in C programs, and C code in Common Lisp programs.
- GNU Common Lisp (GCL) - the GNU Project's Lisp compiler. Not yet fully ANSI-compliant, GCL is however the implementation of choice for several large projects, including the mathematical tools Maxima, AXIOM and (historically) ACL2. GCL runs on Linux under eleven different architectures, and also under Windows, Solaris and FreeBSD.
- Macintosh Common Lisp (MCL) - version 5.2 for Apple Macintosh computers with a PowerPC processor running Mac OS X is open source. RMCL (based on MCL 5.2) runs on Intel-based Apple Macintosh computers using the Rosetta binary translator from Apple.
- ManKai Common Lisp (MKCL) - a branch of ECL. MKCL emphasises reliability, stability and overall code quality through a heavily reworked, natively multi-threaded runtime system. On Linux, MKCL features a fully POSIX-compliant runtime system.
- Movitz - implements a Lisp environment for x86 computers without relying on any underlying OS.
- Poplog - implements a version of CL, with POP-11, and optionally Prolog and Standard ML (SML), allowing mixed-language programming. For all of these, the implementation language is POP-11, which is compiled incrementally. It also has an integrated Emacs-like editor that communicates with the compiler.
- Steel Bank Common Lisp (SBCL) - a branch from CMUCL. "Broadly speaking, SBCL is distinguished from CMU CL by a greater emphasis on maintainability." SBCL runs on the platforms CMUCL does, except HP/UX; in addition, it runs on Linux for AMD64, PowerPC, SPARC, MIPS, and Windows x86, and has experimental support for running on Windows AMD64. SBCL does not use an interpreter by default; all expressions are compiled to native code unless the user switches the interpreter on. The SBCL compiler generates fast native code, according to a previous version of The Computer Language Benchmarks Game.
- Ufasoft Common Lisp - a port of CLISP for the Windows platform, with its core written in C++.

Other implementations

- Austin Kyoto Common Lisp - an evolution of Kyoto Common Lisp by Bill Schelter
- Butterfly Common Lisp - an implementation written in Scheme for the BBN Butterfly multi-processor computer
- CLICC - a Common Lisp to C compiler
- CLOE - Common Lisp for PCs by Symbolics
- Codemist Common Lisp - used for the commercial version of the computer algebra system Axiom
- ExperCommon Lisp - an early implementation for the Apple Macintosh by ExperTelligence
- Golden Common Lisp - an implementation for the PC by GoldHill Inc.
- Ibuki Common Lisp - a commercialized version of Kyoto Common Lisp
- Kyoto Common Lisp - the first Common Lisp compiler that used C as a target language. GCL, ECL and MKCL originate from this Common Lisp implementation.
- L - a small version of Common Lisp for embedded systems developed by IS Robotics, now iRobot
- Lisp Machines (from Symbolics, TI and Xerox) - provided implementations of Common Lisp in addition to their native Lisp dialect (Lisp Machine Lisp or Interlisp). CLOS was also available. Symbolics provided an enhanced version of Common Lisp.
- Procyon Common Lisp - an implementation for Windows and Mac OS, used by Franz for their Windows port of Allegro CL
- Star Sapphire Common LISP - an implementation for the PC
- SubL - a variant of Common Lisp used for the implementation of the Cyc knowledge-based system
- Top Level Common Lisp - an early implementation for concurrent execution
- WCL - a shared library implementation
- Vax Common Lisp - Digital Equipment Corporation's implementation that ran on VAX systems running VMS or ULTRIX
- XLISP - an implementation written by David Betz

See the Category Common Lisp software.

Common Lisp is used to develop research applications (often in artificial intelligence), for rapid development of prototypes, and for deployed applications.

Common Lisp is used in many commercial applications, including the Yahoo! Store web-commerce site, which originally involved Paul Graham and was later rewritten in C++ and Perl. Other notable examples include:

- ACT-R, a cognitive architecture used in a large number of research projects.
- Authorizer's Assistant, a large rule-based system used by American Express for analyzing credit requests.
- Cyc, a long-running project with the aim of creating a knowledge-based system that provides a huge amount of common-sense knowledge.
- Gensym G2, a real-time expert system and business rules engine.
- Genworks GDL, based on the open-source Gendl kernel.
- The development environment for the Jak and Daxter video game series, developed by Naughty Dog.
- ITA Software's low-fare search engine, used by travel websites such as Orbitz and Kayak.com and airlines such as American Airlines, Continental Airlines and US Airways.
- Mirai, a 3D graphics suite. It was used to animate the face of Gollum in the film The Lord of the Rings: The Two Towers.
- Prototype Verification System (PVS), a mechanized environment for formal specification and verification.
- PWGL, a sophisticated visual programming environment based on Common Lisp, used in computer-assisted composition and sound synthesis.
- Piano, a complete aircraft analysis suite, written in Common Lisp, used by companies such as Boeing, Airbus and Northrop Grumman.
- Grammarly, an English-language writing-enhancement platform, whose core grammar engine is written in Common Lisp.
- The Dynamic Analysis and Replanning Tool (DART), which is said to have single-handedly paid back, during the years 1991 to 1995, all thirty years of DARPA investment in AI research.
- NASA's (Jet Propulsion Lab's) Remote Agent, an award-winning Common Lisp program for autopiloting the Deep Space 1 spacecraft.
- SigLab, a Common Lisp platform for signal processing used in missile defense, built by Raytheon.
- NASA's Mars Pathfinder Mission Planning System.
- SPIKE, a scheduling system for Earth- or space-based observatories and satellites, notably the Hubble Space Telescope, written in Common Lisp.
- Common Lisp has been used for prototyping the garbage collector of Microsoft's .NET Common Language Runtime.

There also exist open-source applications written in Common Lisp, such as:

- ACL2, a full-featured automated theorem prover for an applicative variant of Common Lisp.
- Axiom, a sophisticated computer algebra system.
- Maxima, a sophisticated computer algebra system, based on Macsyma.
- OpenMusic, an object-oriented visual programming environment based on Common Lisp, used in computer-assisted composition.
- StumpWM, a tiling, keyboard-driven X11 window manager written entirely in Common Lisp.

Since 2011 Zach Beane, with the support of the Common Lisp Foundation, has maintained the Quicklisp library manager. It allows automatic downloading, installation and loading of over 3,600 libraries, all of which are required to work on more than one implementation of Common Lisp and to have a license that allows their redistribution.

- Quoted from the cover of the cited standard. ANSI INCITS 226-1994 (R2004), for sale on the standard's document page. Archived 2014-01-01 at the Wayback Machine.
- "CLHS: About the Common Lisp HyperSpec (TM)". www.lispworks.com.
- "CLHS: Section 1.1.2". www.lispworks.com.
- Common Lisp Implementations: A Survey
- "Old LISP programs still run in Common Lisp". Retrieved 2015-05-13.
- "Roots of "Yu-Shiang Lisp", Mail from Jon L White, 1982". cmu.edu.
- "Mail Index". cl-su-ai.lisp.se.
- Knee-jerk Anti-LOOPism and other E-mail Phenomena: Oral, Written, and Electronic Patterns in Computer-Mediated Communication, JoAnne Yates and Wanda J. Orlikowski, 1993. Archived 2012-08-08 at the Wayback Machine.
- Steele, Guy L., Jr. (15 August 1982). "An overview of COMMON LISP". ACM. pp. 98–107. doi:10.1145/800068.802140 – via dl.acm.org.
- Reddy, Abhishek (2008-08-22). "Features of Common Lisp".
- "Unicode support". The Common Lisp Wiki. Retrieved 2008-08-21.
- Richard P. Gabriel, Kent M. Pitman (June 1988). "Technical Issues of Separation in Function Cells and Value Cells". Lisp and Symbolic Computation. 1 (1): 81–101. doi:10.1007/bf01806178.
- "Common Lisp Hyperspec: Section 3.1.7".
- "Common Lisp Hyperspec: Function FLOOR".
- "Common Lisp Hyperspec: Accessor GETHASH".
- "Let Over Lambda". letoverlambda.com.
- Peter Seibel (7 April 2005). Practical Common Lisp. Apress. ISBN 978-1-59059-239-7.
- "Design Patterns in Dynamic Programming". norvig.com.
- "32.6. Quickstarting delivery with CLISP". clisp.cons.org.
- "Armed Bear Common Lisp".
- "Corman Lisp sources are now available".
- "History and Copyright". Steel Bank Common Lisp.
- "Platform Table". Steel Bank Common Lisp.
- "Which programs are fastest? - Computer Language Benchmarks Game". archive.org. 20 May 2013.
- "Package: lang/lisp/impl/bbn/". www.cs.cmu.edu.
- "Recent Developments in Butterfly Lisp, 1987, AAAI Proceedings" (PDF). aaai.org.
- Burkart, O.; Goerigk, W.; Knutzen, H. (22 June 1992). "CLICC: A New Approach to the Compilation of Common Lisp Programs to C". psu.edu.
- "codemist.co.uk". lisp.codemist.co.uk.
- Axiom, the 30 year horizon, page 43.
- "Golden Common Lisp Developer". goldhill-inc.com.
- Golden Common LISP: A Hands-On Approach, David J. Steele, June 2000, Addison Wesley Publishing Company.
- Brooks, Rodney A.; et al. (22 June 1995). "L -- A Common Lisp for Embedded Systems". psu.edu.
- TI Explorer Programming Concepts
- TI Explorer Lisp Reference
- Medley Lisp Release Notes
- "Symbolics Common Lisp Dictionary" (PDF). trailing-edge.com.
- "Symbolics Common Lisp Language Concepts" (PDF). trailing-edge.com.
- "Symbolics Common Lisp Programming Constructs" (PDF). trailing-edge.com.
- "SubL Reference – Cycorp". www.cyc.com.
- "Top Level Inc. - Software Preservation Group". www.softwarepreservation.org.
- WCL: Delivering efficient Common Lisp applications under Unix, Proceedings of the 1992 ACM conference on LISP and functional programming, pp. 260–269.
- "commonlisp.net :: WCL". pgc.com.
- "Package: lang/lisp/impl/xlisp/". www.cs.cmu.edu.
- "Beating the Averages". www.paulgraham.com.
- "Authorizer's Assistant" (PDF). aaai.org.
- American Express Authorizer's Assistant. Archived 2009-12-12 at the Wayback Machine.
- Real-time Application Development. Gensym. Retrieved on 2016-08-16.
- PWGL - Home. Retrieved on 2013-07-17.
- "Aerospace - Common Lisp". lisp-lang.org.
- Piano Users, retrieved from the manufacturer's page.
- Grammarly.com, Running Lisp in Production.
- "Remote Agent". ti.arc.nasa.gov.
- "Franz Inc Customer Applications: NASA". franz.com.
- Spike Planning and Scheduling System. Stsci.edu. Retrieved on 2013-07-17.
- "Franz Inc Customer Applications: Space Telescope Institute". franz.com.
- "How It All Started…AKA the Birth of the CLR". microsoft.com.
- Quicklisp description
- Library count can be directly obtained by executing (length (ql:system-list)) in a Lisp REPL that has loaded the Quicklisp system. The count as of 2018-04-26 is 3622 packages.
- Getting a library into Quicklisp

A chronological list of books published (or about to be published) about Common Lisp (the language) or about programming with Common Lisp (especially AI programming):

- Guy L. Steele: Common Lisp the Language, 1st Edition, Digital Press, 1984, ISBN 0-932376-41-X
- Rodney Allen Brooks: Programming in Common Lisp, John Wiley and Sons Inc, 1985, ISBN 0-471-81888-7
- Richard P. Gabriel: Performance and Evaluation of Lisp Systems, The MIT Press, 1985, ISBN 0-262-57193-5, PDF
- Robert Wilensky: Common LISPcraft, W.W. Norton & Co., 1986, ISBN 0-393-95544-3
- Eugene Charniak, Christopher K. Riesbeck, Drew V. McDermott, James R. Meehan: Artificial Intelligence Programming, 2nd Edition, Lawrence Erlbaum, 1987, ISBN 0-89859-609-2
- Wendy L. Milner: Common Lisp: A Tutorial, Prentice Hall, 1987, ISBN 0-13-152844-0
- Deborah G. Tatar: A Programmer's Guide to Common Lisp, Longman Higher Education, 1987, ISBN 0-13-728940-5
- Taiichi Yuasa, Masami Hagiya: Introduction to Common Lisp, Elsevier Ltd, 1987, ISBN 0-12-774860-1
- Christian Queinnec, Jerome Chailloux: Lisp Evolution and Standardization, Ios Pr Inc., 1988, ISBN 90-5199-008-1
- Taiichi Yuasa, Richard Weyhrauch, Yasuko Kitajima: Common Lisp Drill, Academic Press Inc, 1988, ISBN 0-12-774861-X
- Wade L. Hennessey: Common Lisp, McGraw-Hill Inc., 1989, ISBN 0-07-028177-7
- Tony Hasemer, John Dominque: Common Lisp Programming for Artificial Intelligence, Addison-Wesley Educational Publishers Inc, 1989, ISBN 0-201-17579-7
- Sonya E. Keene: Object-Oriented Programming in Common Lisp: A Programmer's Guide to CLOS, Addison-Wesley, 1989, ISBN 0-201-17589-4
- David Jay Steele: Golden Common Lisp: A Hands-On Approach, Addison Wesley, 1989, ISBN 0-201-41653-0
- David S. Touretzky: Common Lisp: A Gentle Introduction to Symbolic Computation, Benjamin-Cummings, 1989, ISBN 0-8053-0492-4, Web/PDF. Dover reprint (2013) ISBN 978-0486498201
- Christopher K. Riesbeck, Roger C. Schank: Inside Case-Based Reasoning, Lawrence Erlbaum, 1989, ISBN 0-89859-767-6
- Patrick Winston, Berthold Horn: Lisp, 3rd Edition, Addison-Wesley, 1989, ISBN 0-201-08319-1, Web
- Gerard Gazdar, Chris Mellish: Natural Language Processing in LISP: An Introduction to Computational Linguistics, Addison-Wesley Longman Publishing Co., 1990, ISBN 0-201-17825-7
- Patrick R. Harrison: Common Lisp and Artificial Intelligence, Prentice Hall PTR, 1990, ISBN 0-13-155243-0
- Timothy Koschmann: The Common Lisp Companion, John Wiley & Sons, 1990, ISBN 0-471-50308-8
- W. Richard Stark: LISP, Lore, and Logic, Springer Verlag New York Inc., 1990, ISBN 978-0-387-97072-1, PDF
- Molly M. Miller, Eric Benson: Lisp Style & Design, Digital Press, 1990, ISBN 1-55558-044-0
- Guy L. Steele: Common Lisp the Language, 2nd Edition, Digital Press, 1990, ISBN 1-55558-041-6, Web
- Robin Jones, Clive Maynard, Ian Stewart: The Art of Lisp Programming, Springer Verlag New York Inc., 1990, ISBN 978-3-540-19568-9, PDF
- Steven L. Tanimoto: The Elements of Artificial Intelligence Using Common Lisp, Computer Science Press, 1990, ISBN 0-7167-8230-8
- Peter Lee: Topics in Advanced Language Implementation, The MIT Press, 1991, ISBN 0-262-12151-4
- John H. Riley: A Common Lisp Workbook, Prentice Hall, 1991, ISBN 0-13-155797-1
- Peter Norvig: Paradigms of Artificial Intelligence Programming: Case Studies in Common Lisp, Morgan Kaufmann, 1991, ISBN 1-55860-191-0, Web
- Gregor Kiczales, Jim des Rivieres, Daniel G. Bobrow: The Art of the Metaobject Protocol, The MIT Press, 1991, ISBN 0-262-61074-4
- Jo A. Lawless, Molly M. Miller: Understanding CLOS: The Common Lisp Object System, Digital Press, 1991, ISBN 0-13-717232-X
- Mark Watson: Common Lisp Modules: Artificial Intelligence in the Era of Neural Networks and Chaos Theory, Springer Verlag New York Inc., 1991, ISBN 0-387-97614-0, PDF
- James L. Noyes: Artificial Intelligence with Common Lisp: Fundamentals of Symbolic and Numeric Processing, Jones & Bartlett Pub, 1992, ISBN 0-669-19473-5
- Stuart C. Shapiro: COMMON LISP: An Interactive Approach, Computer Science Press, 1992, ISBN 0-7167-8218-9, Web/PDF
- Kenneth D. Forbus, Johan de Kleer: Building Problem Solvers, The MIT Press, 1993, ISBN 0-262-06157-0
- Andreas Paepcke: Object-Oriented Programming: The CLOS Perspective, The MIT Press, 1993, ISBN 0-262-16136-2
- Paul Graham: On Lisp, Prentice Hall, 1993, ISBN 0-13-030552-9, Web/PDF
- Paul Graham: ANSI Common Lisp, Prentice Hall, 1995, ISBN 0-13-370875-6
- Otto Mayer: Programmieren in Common Lisp (in German), Spektrum Akademischer Verlag, 1995, ISBN 3-86025-710-2
- Stephen Slade: Object-Oriented Common Lisp, Prentice Hall, 1997, ISBN 0-13-605940-6
- Richard P. Gabriel: Patterns of Software: Tales from the Software Community, Oxford University Press, 1998, ISBN 0-19-512123-6, PDF
- Taiichi Yuasa, Hiroshi G. Okuno: Advanced Lisp Technology, CRC, 2002, ISBN 0-415-29819-9
- David B. Lamkins: Successful Lisp: How to Understand and Use Common Lisp, bookfix.com, 2004, ISBN 3-937526-00-5, Web
- Peter Seibel: Practical Common Lisp, Apress, 2005, ISBN 1-59059-239-5, Web
- Doug Hoyte: Let Over Lambda, Lulu.com, 2008, ISBN 1-4357-1275-7, Web
- George F. Luger, William A. Stubblefield: AI Algorithms, Data Structures, and Idioms in Prolog, Lisp and Java, Addison Wesley, 2008, ISBN 0-13-607047-7, PDF
- Conrad Barski: Land of Lisp: Learn to program in Lisp, one game at a time!, No Starch Press, 2010, ISBN 1-59327-200-6, Web
- Pavel Penev: Lisp Web Tales, Leanpub, 2013, Web
- Edmund Weitz: Common Lisp Recipes, Apress, 2015, ISBN 978-1-484211-77-9, Web
- Patrick M. Krusenotto: Funktionale Programmierung und Metaprogrammierung, Interaktiv in Common Lisp (in German), Springer Fachmedien Wiesbaden, 2016, ISBN 978-3-658-13743-4, Web

- Lisp Lang Resources for Common Lisp.
- The Awesome CL list, a curated list of Common Lisp frameworks and libraries.
- The Common Lisp Cookbook, a collaborative project.
- The CLiki, a wiki for free and open-source Common Lisp systems running on Unix-like systems.
- Common Lisp software repository.
- "History". Common Lisp HyperSpec.
- Lisping at JPL
- The Nature of Lisp, an essay that examines Lisp by comparison with XML.
- Common Lisp Implementations: A Survey, a survey of maintained Common Lisp implementations.
- Common Lisp Quick Reference
- Planet Lisp, articles about Common Lisp.
- Quickdocs, which summarizes documentation and dependency information for many Quicklisp projects.
The Milky Way Galaxy, in which we live, is a giant disk about 20 kpc in diameter. The center of the galaxy lies about 8 kpc away from the Sun in the constellation Sagittarius. The interstellar matter (ISM) contains a significant amount of dust that hides most of the optical objects in the galactic disk from us. The optical absorption rate in the solar vicinity is about 1 mag per kpc, and the galactic extinction amounts to ~20-30 mag for light from the galactic center (GC); that is, the light is weakened to ~100^(-(1/5)(20-30)) = 10^(-8) to 10^(-12) of its original intensity. Namely, the GC cannot be observed in visible light, even with the world's largest telescopes. The GC and the regions beyond are a world hidden from our eyes. These deepest parts of the galaxy become easily visible if we observe them through the radio window. Radio waves pass transparently through the interstellar dust because their wavelengths are much longer than the dust grains, so they avoid scattering by dust particles. This situation mimics the reception of radio and TV waves even on a foggy and rainy day. Seen in radio, the galaxy appears quite different from its optical appearance. The change in appearance is not only because of the difference in opacity, but also because of the different locations and matter that emit the radiation, as well as the emission mechanisms themselves. Knowledge about the emission mechanism can in turn be used to investigate the origin and physics of the emitting regions. In this chapter the fundamental physics of various emission mechanisms in interstellar space is described, starting with basic formulae for electromagnetic waves. Readers who feel this chapter is too basic may skip it and proceed to the chapters more concerned with astrophysics and astronomy, and come back when more fundamental knowledge is necessary to understand the underlying physics of the phenomena. In this book, we use the cgs (cm, gram, second) unit system.

Keywords: Optical Depth, Brightness Temperature, Radio Source, Molecular Cloud, Column Density

1. Brown, R.L., Lockman, F.J., Knapp, G.R.: Radio recombination lines. ARAA 16, 445 (1978). [Recombination lines]
2. Cox, A. (ed.): Allen's Astrophysical Quantities. Springer, New York (1999). [General astrophysical quantities]
3. Ginzburg, V.L., Syrovatskii, S.I.: Cosmic Magnetobremsstrahlung (synchrotron radiation). ARAA 3, 297 (1965). [Synchrotron radiation]
4. Kraus, J.D.: Radio Astronomy. Cygnus-Quasar Books, Powell (1986). [Introduction to radio astronomy]
5. Landau, L.D., Lifshitz, E.M.: Classical Theory of Fields. Pergamon Press, New York (1960). [Emission mechanisms]
6. Pacholczyk, A.G.: Radio Astrophysics. Pergamon Press (1977). [Emission mechanisms]
7. Rybicki, G.B., Lightman, A.P.: Radiative Processes in Astrophysics. Wiley, Weinheim (2004). [Radiation from plasma]
8. Shklovsky, I.S.: Cosmic Radio Waves. Harvard University Press, Cambridge (1960). [Synchrotron emission]
9. Syrovatskii, S.I.: Pinch sheets and reconnection in astrophysics. ARAA 19, 163 (1981). [Synchrotron emission]
<urn:uuid:31311741-c6f8-48ce-a9e7-70f40b359ddd>
3.9375
783
Truncated
Science & Tech.
57.954084
95,493,015
Oceans and atmosphere interlinked

10 January 2018

ESA's Climate Change Initiative is compiling data from Earth observation satellites across multiple ocean and atmosphere parameters to help improve our knowledge of how the climate is changing. This is not a one-way process, however, because the atmosphere also affects the oceans and helps to drive ocean circulation, which plays an important role in moderating our climate.

Top story on Copernicus

13 July 2018

The Copernicus Sentinel-1 mission has revealed that, on average, Greenland's glaciers are now flowing more slowly into the Arctic Ocean. While glacial flow may have slowed overall, in summer glaciers flow 25% faster than they do in the winter.
<urn:uuid:fba9035e-4503-4de0-8399-b4be8e187fbf>
2.796875
143
Content Listing
Science & Tech.
22.8075
95,493,043
The transformation of terrestrial and coastal ecosystems by humans is well known, but only recently have the impacts of anthropogenic forces in the open ocean been recognized. In particular, intense exploitation by industrial fisheries is rapidly changing oceanic ecosystems by drastically reducing populations of many marine species. For most oceanic species we lack a historical perspective. In an important article to appear shortly in Ecology Letters, Baum and Myers demonstrate that the initial abundance of large apex predator populations, sharks, was enormously greater than is currently recognized. They estimate that since the onset of intense exploitation in the Gulf of Mexico in the 1950s, the pelagic shark assemblage has declined by over 80%, and the oceanic whitetip shark, initially the most common species, by over 99%. Remarkably, there is no conservation attention focused on this species. Rather it is all but forgotten in the Gulf of Mexico, with no recognition of its former prevalence in the ecosystem. That declines of this magnitude in these conspicuous species could go virtually unnoticed demonstrates how little we understand about the ocean.

Kate Stinchcombe | EurekAlert!
<urn:uuid:e54e33b3-ff3d-42cd-b67e-fa469c2f25b1>
3.671875
818
Content Listing
Science & Tech.
32.235408
95,493,048
The SQL AND and LIKE operators work together in a WHERE clause to search for a specified pattern in a table column while also enforcing a second condition: a row is returned only when both conditions are true.

Understand with Example

This tutorial illustrates the SQL AND and LIKE operators. The code below first creates a table Stu_Table with a create table statement, adds records to it with insert into statements, and finally retrieves records from it with a select statement.

Create Table Stu_Table

create table Stu_Table(Stu_Id integer(2), Stu_Name varchar(15), Stu_Class varchar(10))

Insert data into Stu_Table

insert into Stu_Table values(1,'Komal',10)
insert into Stu_Table values(2,'Ajay',10)
insert into Stu_Table values(3,'Rakesh',10)
insert into Stu_Table values(4,'Bhanu',10)
insert into Stu_Table values(5,'Santosh',10)

The syntax below fetches records from a table. The LIKE operator finds rows whose column value matches a specified pattern:

SELECT ColumnName(s) FROM TableName WHERE ColumnName LIKE pattern

The query below fetches records from the table Stu_Table using a WHERE clause that combines a LIKE condition with an equality condition. The % wildcard stands for any sequence of characters (including none) before or after the rest of the pattern. Here we select the students whose stu_name begins with 'k' AND whose stu_id is 1; the AND operator requires both conditions to be true for a row to be returned.

select * from Stu_Table where stu_name like 'k%' AND stu_id = 1
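For reference, a minimal sketch of the result this query should return, assuming the five rows inserted above and a case-insensitive collation (as in MySQL's default, so 'k%' matches 'Komal'):

Stu_Id | Stu_Name | Stu_Class
-------|----------|----------
1      | Komal    | 10

Only this row satisfies both conditions: the name begins with 'k' and the id equals 1.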
<urn:uuid:0592fb45-c276-4fdc-b684-ba5ebbc5025e>
3.1875
486
Documentation
Software Dev.
45.105813
95,493,052
Anisotropy Models of Sedimentary Sections and Characteristics of Wave Propagation

A material is said to be anisotropic if its properties, when measured at the same point, change with direction. Anisotropy is exhibited in its pure form in crystals. A crystal's properties are governed by the periodic structure of its atoms. The study of anisotropic behavior of crystals has greatly helped understanding seismic anisotropy of sedimentary rocks and fractured zones. With this perspective, some salient points of anisotropy in crystals are introduced first.

Keywords: Anisotropy Model, Stiffness Constant, Sedimentary Section, Shear Wave Splitting, Azimuthal Anisotropy
<urn:uuid:b3942bb0-9e11-45a0-a302-52cf3e44f9f1>
2.859375
154
Truncated
Science & Tech.
8.750396
95,493,077
About microscopic forms of life, including Bacteria, Archaea, protozoans, algae and fungi. Topics relating to viruses, viroids and prions also belong here.

They also can evolve more quickly than most other organisms due to a shorter generation time (just 20 minutes for E. coli). An example of this is resistance to antibiotics, which can result from a single course of medicine.

mith wrote: fast growth, evolution (they have like 3 billion years of head start), simple nutrition requirements.

Another reason they are successful is that they have adjusted to living within humans by evolving ways to evade the immune system, for example by displaying structures on their surfaces similar to those of their host, so the immune system does not perceive them as foreign. There are also bacteria which, unlike animals, can live without oxygen (anaerobic); some can alternate between many different respiratory pathways, using electron acceptors other than oxygen, such as nitrite, nitrate and even sulphate, which also allows some to live deep within the human gut. There are also special appendages, such as fimbriae, which allow attachment to the host surface, and bacteria can form biofilms, which offer protection against the immune system and antibiotics. Bacteria can also produce antibiotics of their own to compete with their own and other species of bacteria.
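As a rough illustration of why a 20-minute generation time matters for evolution, a minimal sketch with hypothetical numbers, assuming unrestricted exponential growth:

def population(n0, minutes, doubling_time=20):
    """Bacterial population after `minutes` of unrestricted binary fission."""
    return n0 * 2 ** (minutes / doubling_time)

# One E. coli cell after 8 hours: 2**24 = ~16.8 million cells, i.e. 24
# generations in a single working day -- ample opportunity for rare
# resistance mutations to appear and be selected.
print(f"{population(1, 8 * 60):,.0f}")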
<urn:uuid:ac1002a0-6d91-4a84-9b04-7287ff6b87a3>
3.375
297
Comment Section
Science & Tech.
24.435237
95,493,079
Two days ago, Juno, a pinwheel-shaped spacecraft, zoomed over Jupiter, coming within just 5,600 miles of its best-known feature, the Great Red Spot. The spacecraft’s camera stared at the oval-shaped storm as it soared above, capturing a few images of its orangey-red coils. The photo shoot lasted nine minutes. Juno travels at tens of thousands of miles per hour, and it doesn’t slow down. The spacecraft’s orbit flung it beyond Jupiter, toward Callisto, one of its moons, and away from the worst of the planet’s radiation. Nine minutes, and humanity managed to capture the closest-ever photograph of a storm on another world, one that’s bigger than the entire Earth and has been churning for decades.

“We will be seeing the Great Red Spot at a resolution that’s never been seen before,” said Candice Hansen, a senior scientist at the Planetary Science Institute who leads the JunoCam team, on Monday, hours before the spacecraft made its pass.

The JunoCam data reached Earth on Wednesday. The raw images were posted online, where amateur image processors got to work touching them up, producing enhanced shots like the one at the top of this story. Here’s the full version:

Hansen said enhanced colors like this make details in Jupiter’s atmosphere pop. “We don’t turn up our noses at artificial color,” she said. “We love artificial color.”

This is how things have worked since Juno entered Jupiter’s orbit a year ago: JunoCam sends a bunch of images back to Earth, scientists and engineers upload them, and image-software gurus enhance them. The process has produced dozens of detailed, high-resolution photos of Jupiter’s puffy clouds and swirling storms, unlike anything found in science textbooks before. But there was something special about getting this close to the Great Red Spot, which Juno hadn’t passed over until this week.

The earliest observations of a massive spot on Jupiter date back to the 1660s, but historians and scientists don’t know whether people were actually looking at the Great Red Spot. The feature is large enough to be seen with Earth-based telescopes, and as technology improved, so did humanity’s image of the mysterious storm. The earliest photographs from the late 1800s and early 1900s showed a grainy, gray sphere. It wasn’t until 1979 that humanity got its first real, close-up look. In January of that year, the Voyager 1 spacecraft started sending back photos of Jupiter on its journey through the solar system. It was still about 27.5 million miles from the planet then, but the level of detail in the imagery was astounding.

“We’re extending our eyes into the outer solar system and into the unknown,” Bradford Smith, the head of Voyager’s imaging team, told The New York Times as Voyager approached the planet. “Not since Mariner 4 to Mars, some 15 years ago, have we been less prepared, less certain of what we expect to see.”

Voyager returned thousands of beautiful, detailed images of the gas giant and its famous spot, which were turned into a time-lapse that revealed the motions of its cloud tops. Hansen has been inching closer and closer to Jupiter for nearly 40 years. She was fresh out of college and working on Voyager’s imaging team at the time as “the assistant to the assistant to the assistant,” she explained jokingly. Scientists today aren’t as in the dark as they were in the early days of the Voyager mission, she said. But Jupiter can still surprise.
“That’s part of the fun, really,” she said, before the latest Jupiter photos came back, “is not knowing what to expect.”

Scientists still don’t understand exactly what drives this giant storm, where winds can reach 400 miles an hour. The Juno mission is in a good place to provide some answers.

I asked Hansen what it feels like to get within a few thousand miles of one of the solar system’s biggest storms after decades of observations. She pointed me to one of the images on JunoCam’s website, but it wasn’t of the planet. It was an illustration by an artist. A brown-haired girl with feathers down her back stands on a colorful lookout, staring up at a glowing, yellow-green planet.

“I would say, emotionally, this captures it for me,” Hansen said. “Just moving in closer and closer and seeing this world. And as you get closer, you don’t know what you’re gonna see. But you know it’s gonna be fantastic.”
<urn:uuid:49c81b1a-f4a5-4abe-8578-4b2093798fe9>
3.328125
1,035
Truncated
Science & Tech.
59.274783
95,493,117
Environmental pollution. Ocean pollution graphs. Goodbye, darkness: light pollution is making us forget the night sky. Light pollution green momcom. Light pollution. 25 best ideas about sea turtle facts on pinterest turtle facts, facts about turtles and facts. Goodbye, darkness: light pollution is making us forget the night sky. Clean air kids activity book. Introduction to light pollution for young people. Light pollution facts. Ecological light pollution #1: an introduction towards a (b)righter future?. Infographic: what is light pollution?. Prevent light pollution, save a star, save a life. International dark sky association ida light pollution. Light pollution and its effects on the night sky. Fun facts on pollution for kids. Marina times light pollution is a glaring problem. Light pollution. Inviroment: light pollution facts. 5 ways you can reduce light pollution mnn mother nature network. Preserving our dark skies astrobites. What is noise pollution for children. Light pollution through the eyes of civil twilight make: diy projects, how tos, electronics. Dark sky preservation: april 2015. Noise pollution youtube. Perils of pollution kids b teacher angelyn. They are replacing street lights with new led lamps it is supposed to reduce the electricity. Nasa climate kids :: play games. Nps: explore nature air resources air quality basics visibility effects of air pollution. Singapore has most light pollution in the world the new paper. Light pollution. Pinterest the worlds catalog of ideas. What is light pollution pdf bittorrenthidden. Light pollution and its effects on the night sky. States with laws in place to reduce outdoor light pollution table of laws included in comments.
<urn:uuid:9985c8d6-d94f-4f9d-8218-1a7c6bfa4095>
2.734375
342
Spam / Ads
Science & Tech.
51.977952
95,493,120
The aim of astrophysics is to describe, to understand and to predict the physical phenomena that occur in the Universe. The physical content of the Universe — dense or rarefied, hot or cold, stable or unstable — can be classified into categories, such as planets, stars, galaxies, and so on. The information received by observers and transformed into signals is the basis for these classifications, for the physical models and for the predictions, which together make up the science of astrophysics.

Keywords: Solar Wind, Gravitational Wave, Angular Resolution, European Space Agency, Analyse Information
<urn:uuid:4bdff2d6-949d-4ee7-8f69-33523493fb44>
2.71875
125
Truncated
Science & Tech.
18.816165
95,493,146
Water purification breakthrough uses sunlight and ‘hydrogels’

According to the United Nations, 30,000 people die each week from the consumption and use of unsanitary water. Although the vast majority of these fatalities occur in developing nations, the U.S. is no stranger to unanticipated water shortages, especially after hurricanes, tropical storms and other natural disasters that can disrupt supplies without warning.

Led by Guihua Yu, associate professor of materials science and mechanical engineering at The University of Texas at Austin, a research team in UT Austin's Cockrell School of Engineering has developed a cost-effective and compact technology using combined gel-polymer hybrid materials. Possessing both hydrophilic (water-attracting) qualities and semiconducting (solar-absorbing) properties, these "hydrogels" (networks of polymer chains known for their high water absorbency) enable the production of clean, safe drinking water from any source, whether it's from the oceans or contaminated supplies. The findings were published in the most recent issue of the journal Nature Nanotechnology.

"We have essentially rewritten the entire approach to conventional solar water evaporation," Yu said.

The Texas Engineering researchers have developed a new hydrogel-based solar vapor generator that uses ambient solar energy to power the evaporation of water for effective desalination. Existing solar steaming technologies used to treat saltwater involve a very costly process that relies on optical instruments to concentrate sunlight. The UT Austin team developed nanostructured gels that require far less energy, needing only naturally occurring levels of ambient sunlight to run while also being capable of significantly increasing the volume of water that can be evaporated.

"Water desalination through distillation is a common method for mass production of freshwater. However, current distillation technologies, such as multi-stage flash and multi-effect distillation, require significant infrastructure and are quite energy-intensive," said Fei Zhao, a postdoctoral researcher working under Yu's supervision. "Solar energy, as the most sustainable heat source to potentially power distillation, is widely considered to be a great alternative for water desalination."

The hydrogels allow water vapor to be generated under direct sunlight and then pumped to a condenser for freshwater delivery. The desalinating properties of these hydrogels were even tested on water samples from the salt-rich Dead Sea and passed with flying colors. Using water samples from one of the saltiest bodies of water on Earth, UT engineers were able to reduce the salinity of Dead Sea samples significantly after putting them through the hydrogel process. In fact, they achieved levels that met accepted drinking water standards as outlined by the World Health Organization and the U.S. Environmental Protection Agency.

"Our outdoor tests showed daily distilled water production of up to 25 liters per square meter, enough for household needs and even disaster areas," said Yu. "Better still, the hydrogels can easily be retrofitted to replace the core components in most existing solar desalination systems, thereby eliminating the need for a complete overhaul of desalination systems already in use."
Because salt is one of the most difficult substances to separate from water, researchers have also successfully demonstrated the hydrogels' capacity for filtering out a number of other common contaminants found in water that are considered unsafe for consumption. Yu believes the technology can be commercialized and is preparing his research team in anticipation of requests from industry to conduct scalability tests. The potential impact of this technology could be far-reaching, as global demand for fresh, clean water outpaces existing natural supplies. A patent application has been filed, and Yu has teamed up with the university's Office of Technology Commercialization to assist with the licensing and commercialization for this novel class of hydrogels. This research was funded by the Alfred P. Sloan Foundation, the Camille & Henry Dreyfus Foundation and the National Science Foundation.
<urn:uuid:e9befb94-10d6-4e61-a6e8-a64bd77baf1a>
3.609375
807
News Article
Science & Tech.
14.30195
95,493,147
Describe how black holes are believed to form. Assess the various tools and techniques that are used to study and observe black holes, as well as the ranges of the electromagnetic spectrum that are best suited for these studies.

Black holes are, according to NASA (2014), "a region in space where the pulling force of gravity is so strong that light is not able to escape. The strong gravity occurs because matter has been pressed into a tiny space. This compression can take place at the end of a star's life. Some black holes are a result of dying stars. Because no light can escape, black holes are invisible." Black holes come in a number of sizes, but according to NASA (2014) there are three main ones. Primordial black holes are the smallest: one "is as small as a single atom but with the mass of a large mountain." Next is a stellar, or medium, black hole, which according to NASA (2014) "can be up to 20 times greater than the mass of the sun and can fit inside a ball with a diameter of about 10 miles. Dozens of stellar mass black holes may exist within the Milky Way galaxy." And lastly, the biggest are the supermassive black holes, which (NASA, 2014) "have masses greater than 1 ...

The solution provides information, assistance and advice in tackling the task (see above) on the topic of black holes: their origins, how they are studied, the electromagnetic spectrum in which they can be seen, etc. Resources are listed for further exploration of the topic.
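To connect the masses and sizes quoted above, a minimal sketch using the standard Schwarzschild-radius formula r_s = 2GM/c^2 (the 20-solar-mass figure comes from the NASA description; everything else is textbook physics):

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # one solar mass, kg

def schwarzschild_radius_km(mass_kg):
    """Radius below which a mass becomes a black hole (light cannot escape)."""
    return 2 * G * mass_kg / c**2 / 1000.0

# A 20-solar-mass stellar black hole has a radius of roughly 59 km,
# consistent with "fits inside a ball" a few tens of miles across.
print(f"{schwarzschild_radius_km(20 * M_SUN):.0f} km")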
<urn:uuid:ff055c26-f926-4ff4-8089-921b5651197a>
4.03125
334
Truncated
Science & Tech.
59.870479
95,493,161
NASA’s Opportunity Mars rover is examining the edge of a crater on the red planet that may once have been a lake of liquid water. The Opportunity rover found rocks at the edge of Endeavour Crater that were either transported by a flood or eroded in place by wind. The features were seen just outside the crater rim’s crest above “Perseverance Valley,” which is carved into the inner slope of the rim. Researchers plan to drive Opportunity down Perseverance Valley after completing a “walkabout” survey of the area above it.

The Opportunity mission has been investigating sites on and near the western rim of Endeavour Crater since 2011. The crater is about 22 kilometres across.

“The walkabout is designed to look at what’s just above Perseverance Valley,” said Ray Arvidson, from Washington University in St Louis. “We see a pattern of striations running east-west outside the crest of the rim,” said Arvidson, Deputy Principal Investigator of the Opportunity mission.

A portion of the crest at the top of Perseverance Valley has a broad notch. Just west of that, elongated patches of rocks line the sides of a slightly depressed, east-west swath of ground, which might have been a drainage channel billions of years ago.

“We want to determine whether these are in-place rocks or transported rocks,” Arvidson said. “One possibility is that this site was the end of a catchment where a lake was perched against the outside of the crater rim,” he said. “A flood might have brought in the rocks, breached the rim and overflowed into the crater, carving the valley down the inner side of the rim,” he added. “Another possibility is that the area was fractured by the impact that created Endeavour Crater, then rock dikes filled the fractures, and we’re seeing effects of wind erosion on those filled fractures,” Arvidson said.

In the hypothesis of a perched lake, the notch in the crest just above Perseverance Valley may have been a spillway. Weighing against that hypothesis is an observation that the ground west of the crest slopes away, not toward the crater. The science team is considering possible explanations for how the slope might have changed. A variation of the impact-fracture hypothesis is that water rising from underground could have favoured the fractures as paths to the surface and contributed to weathering of the fracture-filling rocks.

The team is analysing images of Perseverance Valley, taken from the rim, to plot Opportunity’s route. The valley extends down from the crest into the crater at a slope of about 15 to 17 degrees for a distance of about two football fields.
<urn:uuid:82f34a44-df43-4a2f-87aa-8fe73e4915ad>
3.46875
586
News Article
Science & Tech.
42.8095
95,493,173
Video credit: University of Wisconsin–Madison

When does a (typically) vegetarian caterpillar become a cannibalistic caterpillar, even when there is still plenty of plant left to eat? When the tomato plant it’s feeding on makes cannibalism the best option.

“It often starts with one caterpillar biting another one in the rear, which then oozes. And it goes downhill from there,” says University of Wisconsin–Madison integrative biology professor John Orrock, author of a new study published July 10 in Nature Ecology & Evolution that examines how plants, in defending themselves from insect predation, can encourage insects to become cannibals. “At the end of the day, somebody gets eaten,” he says.

It started when Orrock wondered whether a tomato plant could ever taste so horrible that an herbivore that would typically munch on its green leaves would instead turn to its buddy and begin to consume him or her instead. “Many insects are known to become cannibalistic when the going gets tough,” says Orrock.

So Orrock, his postdoctoral researcher Brian Connolly, and Anthony Kitchen, an undergraduate student in the lab, devised a set of experiments to test their idea using tomato plants and a species of caterpillar called the beet armyworm. “Beet armyworms are important agricultural pests, in part because they can feed on a variety of plants,” Connolly says. “And early, influential work describing plant responses to herbivore attacks used tomato and beet armyworm. We build on that work here.”

Unlike animals that can flee from hungry predators, plants are rooted in place. However, plants aren’t defenseless. When danger looms, many plants can produce chemicals like methyl jasmonate that act like a chemical scream. Other plants can detect this scream and begin to invest in their own defenses, producing chemicals that deter herbivores, in case they are next on the menu.

To test the effect of plant defenses on herbivore behavior, the researchers sprayed tomato plants in plastic containers with either a control solution or a range of concentrations of methyl jasmonate—low, medium and high—and then added eight caterpillar larvae to each container. They counted the number of caterpillars remaining each day to determine how many had been eaten, and after eight days they weighed how much plant material each treatment group had managed to preserve.

Photo credit: Brian Connolly

In the control and lower-concentration treatment groups, the caterpillars ate the entire plant before turning to cannibalism, but the plants sprayed with the highest levels of methyl jasmonate stayed mostly intact. Caterpillars living with the well-defended plants became cannibalistic much sooner than their leaf-eating counterparts with access to the less-well-defended plants.

“Not only do these guys become predators, which is a victory for the plant, they are getting a lot of food by eating one another,” says Orrock. “We struck upon a way that plants defend themselves that nobody had really appreciated before.”

“It’s grisly and macabre,” Connolly adds, “but it’s energy transfer.”

In a second experiment Orrock conducted while on sabbatical at Virginia Commonwealth University, he added a single caterpillar larva to containers holding leaves from plants that were not sprayed with methyl jasmonate or containing leaves from plants sprayed with a moderate level of the chemical. To some containers he also added freshly frozen-and-thawed caterpillars that appeared alive.
It was important to ensure the flash-frozen caterpillars looked enticing enough to serve as a potential meal for a living caterpillar, but were not actually alive to consume plant material. Once again, caterpillars with access only to well-defended plant leaves and lifelike dead caterpillars turned to cannibalism sooner than caterpillars for whom less-nasty plant leaves were available, and they ate far less leaf material.

“From the plant’s perspective, this is a pretty sweet outcome, turning herbivores on each other,” Orrock says. “Cannibals not only benefit the plant by eating herbivores, but cannibals also don’t have as much appetite for plant material, presumably because they’re already full from eating other caterpillars.”

The cannibalistic caterpillars on defended plants grew at similar rates to caterpillars given access to undefended plants, which consumed the plant material available to them before turning to cannibalism. Meanwhile, caterpillars housed with well-defended plants and no fresh caterpillar carcasses ate less plant material and had very low rates of growth.

“The next step in this work is to figure out whether accelerated cannibalism would slow, or increase, the rate of spread of insect pathogens,” says Orrock, who says the researchers also hope to better understand whether caterpillars are as quick to turn to cannibalism when they are not trapped with a single plant, as they were in the lab.

Regardless, Orrock says, “the research suggests that we may need to give plants a little more credit. Instead of being wallflowers who sit and wait for life to happen, plants respond to their environment with potent defenses, and these defenses make caterpillars more likely to eat other caterpillars.”
<urn:uuid:f6bb5bcd-84eb-4034-841e-5cb0d4dfff22>
3.546875
1,160
Truncated
Science & Tech.
29.377815
95,493,183
Daily news articles on environmental engineering, the Earth and pollution, with daily updates on breaking news. Stay informed, and learn how you can take action to reverse global warming.

Environmental engineering is the branch of engineering concerned with the application of scientific and engineering principles to protect human populations from the effects of adverse environmental factors; to protect environments, both local and global, from the potentially deleterious effects of natural and human activities; and to improve environmental quality. Environmental engineering can also be described as a branch of applied science and technology that addresses energy preservation, protection of assets and control of waste from human and animal activities. Furthermore, it is concerned with finding plausible solutions in the field of public health, such as waterborne diseases, and with implementing laws that promote adequate sanitation in urban, rural and recreational areas. It involves wastewater management, air pollution control, recycling, waste disposal, radiation protection, industrial hygiene, animal agriculture, environmental sustainability, public health and environmental engineering law. It also includes studies on the environmental impact of proposed construction projects.

Environmental engineers study the effect of technological advances on the environment. To do so, they conduct studies on hazardous-waste management to evaluate the significance of such hazards, advise on treatment and containment, and develop regulations to prevent mishaps. Environmental engineers design municipal water supply and industrial wastewater treatment systems. They address local and worldwide environmental issues such as the effects of acid rain, global warming, ozone depletion, water pollution and air pollution from automobile exhausts and industrial sources.

Many universities offer environmental engineering programs through either the department of civil engineering or the department of chemical engineering at their engineering faculties. Environmental "civil" engineers focus on hydrology, water resources management, bioremediation and water treatment plant design. Environmental "chemical" engineers, on the other hand, focus on environmental chemistry, advanced air and water treatment technologies and separation processes. Subdivisions of environmental engineering include natural resources engineering and agricultural engineering. More engineers are obtaining specialized training in law (J.D.) and are using their technical expertise in the practice of environmental engineering law. Most jurisdictions also impose licensing and registration requirements.
<urn:uuid:ffdcb08c-3d80-4ff1-828f-b32221175d17>
3
457
Knowledge Article
Science & Tech.
-4.545187
95,493,185
By Felix Schlenk

Symplectic geometry is the geometry underlying Hamiltonian dynamics, and symplectic mappings arise as time-1 maps of Hamiltonian flows. The remarkable rigidity phenomena for symplectic mappings found in the last twenty years show that certain things cannot be done by a symplectic mapping. For instance, Gromov's famous "non-squeezing" theorem states that one cannot map a ball into a thinner cylinder by a symplectic embedding. The aim of this book is to show that certain other things can be done by symplectic mappings. This is achieved by several elementary and explicit symplectic embedding constructions, such as "folding", "wrapping", and "lifting". These constructions are carried out in detail and are used to solve some specific symplectic embedding problems. The exposition is self-contained and addressed to students and researchers interested in geometry or dynamics.

Read or Download Embedding Problems in Symplectic Geometry (De Gruyter Expositions in Mathematics) PDF

Similar geometry & topology books

Based on a series of lectures for adult students, this lively and entertaining book proves that, far from being a dusty, dull subject, geometry is in fact full of beauty and fascination. The author's infectious enthusiasm is put to use in explaining many of the key concepts in the field, starting with the Golden Number and taking the reader on a geometrical journey via Shapes and Solids, through the Fourth Dimension, winding up with Einstein's Theories of Relativity.

This unique book on modern topology looks well beyond traditional treatises and explores spaces that may, but need not, be Hausdorff. This is essential for domain theory, the cornerstone of the semantics of computer languages, where the Scott topology is almost never Hausdorff. For the first time in a single volume, this book covers basic material on metric and topological spaces, advanced material on complete partial orders, Stone duality, stable compactness, quasi-metric spaces and much more.

Differential geometry and topology are essential tools for many theoretical physicists, particularly in the study of condensed matter physics, gravity, and particle physics. Written by physicists for physics students, this text introduces geometrical and topological methods in theoretical physics and applied mathematics.

Stiefel manifolds are an interesting family of spaces much studied by algebraic topologists. These notes, which originated in a course given at Harvard University, describe the state of knowledge of the subject, as well as the outstanding problems. The emphasis throughout is on applications (within the subject) rather than on theory.

Additional resources for Embedding Problems in Symplectic Geometry (De Gruyter Expositions in Mathematics)

Embedding Problems in Symplectic Geometry (De Gruyter Expositions in Mathematics) by Felix Schlenk
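For reference, the non-squeezing theorem mentioned in the blurb has the following standard statement (a textbook formulation, not quoted from the book itself):

\[
  \text{If } \varphi : B^{2n}(r) \hookrightarrow Z^{2n}(R) = B^{2}(R) \times \mathbb{R}^{2n-2}
  \text{ is a symplectic embedding, then } r \le R,
\]

where \(B^{2n}(r)\) denotes the open ball of radius \(r\) in \((\mathbb{R}^{2n}, \omega_0)\) and the cylinder carries the restriction of the standard symplectic form \(\omega_0\).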
<urn:uuid:2088afd9-8004-44a2-af29-458a49e69ae5>
2.65625
627
Truncated
Science & Tech.
20.223574
95,493,209
Global warming has become a major threat to the global economy, and climate change is the biggest face of it. Climate is the kind of problem that lies at the root of every other environmental problem. Protecting the climate is not the task of any one country; it is a problem for the whole world, and it demands a shared global strategy. As a result of climate change the ozone hole is expanding, and the temperature values of the Earth keep rising steadily. Controlling this has become a challenge for the world. The continuous use of fossil fuels in the process of industrialization is increasing the amount of carbon dioxide in the atmosphere. According to scientists, the temperature of the Earth has increased by 1.5 degrees Celsius over the last century. According to another estimate, the Earth's temperature is expected to rise by 1 to 3.5 degrees Celsius by the year 2080. A rise in the Earth's temperature by itself may not seem like a big event, but if it comes together with an increase in carbon dioxide and other gases in the atmosphere, it will be very harmful.

How to prevent pollution: Increased carbon dioxide and other gases raise the likelihood of disasters such as floods, storms and severe heat. Every human being is now watching global warming, and its effect will only grow in the coming time. That is a grave warning for the coming generation. To prevent this, every government in the world should take major steps toward its prevention.

The Earth's temperature is set by the thermal balance between radiation coming from the Sun and heat leaving the Earth: roughly 6% of the incoming solar heat is reflected by the atmosphere, another 14% is absorbed by atmospheric gases, and the rest heats the surface, which radiates heat back in turn. Because of today's increasing pollution, the amount of carbon dioxide is rising. This has caused a hole in the ozone layer, through which ultraviolet rays come directly to the Earth. Their impact falls directly on the environment, and this is why the heat has started to increase.

How to prevent pollution: Every government in the world must make its public aware. If every government prohibits the cutting down of trees, works to keep the growing population within limits, and controls pollution, we can be rid of this problem to some extent.
<urn:uuid:ddd3cb96-521d-4246-a6cc-7f03a1107c28>
3.375
511
Personal Blog
Science & Tech.
55.4189
95,493,214
ANU biologists have found the first evidence of mass extinction of Australian animals caused by a dramatic drop in global temperatures 35 million years ago.

ANU chemists have found a way to use sunlight to purify wastewater rapidly and cheaply, and to make self-cleaning materials for buildings.

Physicists have designed a handheld device that will use the power of MRI and mass spectrometry to perform a chemical analysis of objects.

Humans are causing the climate to change 170 times faster than natural forces, new research co-led by The Australian National University (ANU) has found.

Scientists at The Australian National University (ANU) have controlled wave-generated currents to make previously unimaginable liquid materials.

People who work more than 39 hours a week are putting their health at risk, new research from The Australian National University (ANU) has found.

A new study has shown that the female Superb Fairy-Wren has the ability to change the size of the eggs it lays in a biological feat which could buffer against...

By 2050, the population of the world will be nine billion. That's a lot of mouths to feed.

Researchers at The Australian National University (ANU) have found a new breeding population of the critically endangered regent honeyeater.

Five per cent of the Australian population suffer from autoimmune diseases, such as multiple sclerosis and lupus.

It might be hard to believe, but scientists currently rely on one type of coral found in one place only—the Bahamas—as a source of the ingredients to make...

It took a mathematician to find an answer to how life on Earth actually began, and to discover the answer in the most surprising place: the hairdresser's.
<urn:uuid:a342dcaf-8ef5-415b-9b55-c97b8aeac22f>
2.78125
358
Content Listing
Science & Tech.
39.245625
95,493,227
A new theory for the breaking of (bio-)chemical bonds under load may help to predict the strength and performance of synthetic nanostructures and proteins on a molecular level. Theoretical physicists from Leipzig University have published their findings in „Nature Communications“.

The fundamental question of how a molecular bond breaks is of interest in many fields of science and has been studied extensively. Yet, now writing in Nature Communications, a group of theoretical physicists from the University of Leipzig, Germany, has put forward a more powerful analytical formula for forcible bond breaking than previously available. It predicts how likely a bond is to break at a given load, if probed with a prescribed loading protocol. This so-called rupture force distribution is the most informative and most commonly measured quantity in modern single-molecule force spectroscopy experiments (which may roughly be thought of as nanoscopic versions of the conventional crash or breaking tests employed in materials science and engineering). Such experiments are nowadays performed in large numbers in molecular biology and biophysics labs to probe the mechanical strength of individual macromolecular bonds.

Recent methodological advances have pushed force spectroscopy assays to ever higher loading rates (the equivalent of the speed employed in the macroscopic crash test). This provided a strong incentive for the Leipzig team to improve on current state-of-the-art theories for forcible bond breaking, which are limited to comparatively low speeds.

Moreover, the new equation solves another problem that has bothered experts in the field for many years. Force spectroscopy experiments are often simulated with sophisticated all-atom computer models to supplement the experimental data with information on internal molecular details that cannot be resolved in a laboratory setting. However, because of their enormous complexity, such computer simulations operate at extremely high loading rates to cut down on the runtime. As a consequence, simulation and experiment were so far two essentially distinct branches of force spectroscopy. The new equation, which gives exact results for both low and high loading rates, will thus suit both experimentalists and computer scientists, and help them to systematically analyze and compare their results. This should eventually improve our microscopic understanding of the strength of synthetic materials and of how proteins attain and maintain their three-dimensional structure and perform conformational changes, which are core features determining the function and dysfunction of these amazing engines of life.

Article in „Nature Communications“: „Theory of rapid force spectroscopy“, by Jakob T. Bullerjahn, Sebastian Sturm and Klaus Kroy

Prof. Dr. Klaus Kroy
Phone: +49 341 97 32436

Carsten Heckmann | Universität Leipzig
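For orientation, here is a minimal sketch of the classic Bell–Evans rupture-force distribution that force-spectroscopy data are commonly fit with; this is the older low-loading-rate theory the release says the new formula improves upon, not the new equation itself, and all parameter values are hypothetical:

import numpy as np

# Bell-Evans picture: the off-rate of a bond grows exponentially with force,
#   k(F) = k0 * exp(F * x / kT),
# and under a constant loading rate Fdot the survival probability is
#   S(F) = exp(-(k0 * kT) / (Fdot * x) * (exp(F * x / kT) - 1)),
# so the rupture-force density is p(F) = k(F) / Fdot * S(F).

kT = 4.1      # thermal energy at room temperature, pN nm
k0 = 1e-2     # zero-force off-rate, 1/s (hypothetical)
x = 0.5       # distance to the transition state, nm (hypothetical)
Fdot = 1e3    # loading rate, pN/s (hypothetical)

F = np.linspace(0.0, 120.0, 601)                                  # force grid, pN
k = k0 * np.exp(F * x / kT)                                       # off-rate at force F
S = np.exp(-(k0 * kT) / (Fdot * x) * (np.exp(F * x / kT) - 1.0))  # survival probability
p = k / Fdot * S                                                  # rupture-force density

# Most probable rupture force; the analytic value is
# F* = (kT / x) * ln(Fdot * x / (k0 * kT)) ~ 77 pN for these numbers.
print(f"most probable rupture force: {F[np.argmax(p)]:.1f} pN")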
<urn:uuid:6cab0adc-6894-48e0-ad6c-aa815635d1bd>
2.625
1,192
Content Listing
Science & Tech.
31.567406
95,493,252
Making Fuel from Leftovers Researchers have designed a process to generate hydrogen from organic materials. Unsure what to do with your Thanksgiving leftovers? According to Penn State University (PSU) researchers, feeding table scraps to bacteria may be a clean and efficient way to produce hydrogen that can be used as fuel. Bruce Logan, Kappe professor of environmental engineering, and his colleagues at PSU have designed a tabletop reactor that uses bacteria to break down biodegradable organic material. Adding a small jolt of energy to the system causes hydrogen gas to bubble up to the surface. Logan says that this biological process–compared with today’s existing techniques–may be a more sustainable and efficient alternative for generating hydrogen. The promise of hydrogen as a fuel source has led major automakers like BMW, Daimler Chrysler, Ford, and Toyota to develop test cars that run on hydrogen-powered fuel cells. These fuel cells convert hydrogen and oxygen into electricity, giving off water as a byproduct. It’s a zero-emissions model that could vastly reduce reliance on polluting fossil fuels. But there’s a catch: generating hydrogen itself can involve the burning of fossil fuels like natural gas. Cleaner methods of producing hydrogen include using geothermal, wind, and solar energy to separate water into hydrogen and oxygen, in a process called electrolysis. However, these processes are expensive and require large amounts of electricity. If scaled up, these methods could prove very inefficient. Some scientists have concentrated on creating microbial fuel cells–reactors that use bacteria to catalyze reactions that produce electricity. Logan’s lab found a way to improve on existing microbial fuel cells by breaking down end products, such as acetic acid. The researchers grew bacteria in a specially designed, oxygen-free reactor: a bioelectrochemically assisted microbial reactor, which they dubbed BEAMR. The reactor comprises two compartments. The first houses a negatively charged anode, composed of granulated graphite, which Logan sprayed with ammonia gas to help bacteria stick better. The second compartment contains a positively charged cathode of carbon, with a platinum catalyst. An ion-exchange membrane sits between the compartments. Logan used a small wire to connect both electrodes to a small external power source. The researchers then fed the microbial reactor a varied diet of acetic acid and cellulose. They found that as bacteria fed, the reactor released protons and electrons. The electrons were immediately taken up by the anode, while the protons crossed the membrane to the cathode. The energy from the electrons (which amounted to 0.3 volts), coupled with a short jolt of external voltage (0.2 volts), passed into the cathode compartment, joining with the protons to produce hydrogen gas, which researchers captured and measured in a tube. Penn State researchers have developed a microbial electrolysis cell, which they call BEAMR, to produce hydrogen. The process uses bacteria to break down organic material, such as acetic acid and cellulose. A small external burst of voltage aids in boosting hydrogen production. Credit: Zina Deretsky, National Science Foundation The entire process generated 288 percent more energy than the electricity required to produce the reaction. Logan and his colleagues estimate that, compared with conventional electrolysis, which has a 60 percent efficiency rate, BEAMR achieved an 82 percent efficiency rate. 
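As a back-of-the-envelope check on those numbers, a minimal sketch (simple bookkeeping consistent with the figures quoted above, not data or code from the study):

# Energy bookkeeping for the BEAMR reactor described above.
# "288 percent more energy than the electricity required" means the
# energy in the captured hydrogen is about 3.88x the electrical input;
# the surplus comes from the biomass, since the bacteria contribute
# 0.3 V and the external supply only 0.2 V of the cell voltage.

electric_in = 1.0                        # normalized electrical input
hydrogen_out = electric_in * (1 + 2.88)  # energy recovered as hydrogen
print(f"energy out per unit electricity in: {hydrogen_out:.2f}")

bacteria_v, external_v = 0.3, 0.2        # volts from microbes vs. power supply
share = external_v / (bacteria_v + external_v)
print(f"externally supplied share of the cell voltage: {share:.0%}")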
Logan says that his recent experiment shows that acetic acid could be a rich source of hydrogen-generating material under specific conditions. This suggests that researchers may be able to get more hydrogen out of biomass than was previously thought. Another implication from the study: cellulose may turn out to be better suited for hydrogen production than ethanol fuel is because using cellulose for ethanol involves a more complicated process. “If you think of cellulose as a starting material to make ethanol, people have to add enzymes to break it down to sugars, and then those can be fermented into ethanol,” says Logan. “But we can use cellulose directly to make hydrogen.” He says that a potential first application for the technology may be in powering farms, wastewater treatment plants, and other facilities with large amounts of unused biomass.

However, scaling BEAMR up to commercial applications may take some rejiggering. The materials used in the system, particularly the platinum cathode in the reactor, would be very expensive if manufactured at a large scale. In the future, Logan’s lab plans to reduce the cost of the reactor’s components, and it has already started looking for alternatives to platinum.

Lars Angenent, assistant professor in the Department of Energy, Environmental, and Chemical Engineering at Washington University, works to optimize fermentation processes to produce bioenergy. He says that while Logan’s technology successfully “circumvents biological limitations of hydrogen production,” bringing it to a commercial level may pose challenges. “Scale-up will be the problem,” says Angenent. “This must be commercially viable while sustaining high efficiencies.”
<urn:uuid:fddb3af9-7705-4d9d-9324-786220a995fa>
4.0625
1,069
News Article
Science & Tech.
27.506988
95,493,299
Modelling the UK NHx, S and NOx Budgets for 2010 with an Atmospheric Transport Model

The Gothenburg Protocol to Abate Acidification, Eutrophication and Ground-level Ozone was adopted on 30 November 1999. The Protocol sets emission ceilings for 2010 for four pollutants: SO2, NOx, VOCs and NH3. Emission reductions of SO2, NOx and NH3 for the period 1990 to 2010 have been estimated over Europe (UN/ECE, 1999). To consider the effect of these changes on the British Isles, the FRAME model is applied here to model the deposition of sulphur and nitrogen under the Gothenburg Protocol emissions scenario for 2010 throughout the British Isles and Europe.

Keywords: Emission Reduction, British Isles, Model Deposition, Ground-level Ozone, Total Deposition

- UN/ECE (1999) Protocol to the 1979 Convention on Long-range Transboundary Air Pollution to Abate Acidification, Eutrophication, and Ground-level Ozone. United Nations Economic Commission for Europe, Geneva, Switzerland.
<urn:uuid:ef35ad49-d796-4308-87cc-b7f08ef82b29>
2.59375
227
Academic Writing
Science & Tech.
24.260385
95,493,305
What happens when a quantum dot looks in a mirror?

The 2014 chemistry Nobel Prize recognized important microscopy research that enabled greatly improved spatial resolution. This innovation, resulting in nanometer resolution, was made possible by making the source (the emitter) of the illumination quite small and by moving it quite close to the object being imaged.

This is an illustration of the interference between light from the quantum dot (black sphere) and radiation from the mirror dipole (black sphere on the wire). This interference will slightly distort the perceived location of the diffraction spot as imaged on a black screen at the top. The distortion is different depending on whether the quantum dot dipole is oriented perpendicular (red) or parallel (blue) to the wire surface, a difference that can be visualized by imaging the diffraction spot along different polarizations.

One problem with this approach is that in such proximity, the emitter and object can interact with each other, blurring the resulting image. Now, a new JQI study has shown how to sharpen nanoscale microscopy (nanoscopy) even more by better locating the exact position of the light source.

Traditional microscopy is limited by the diffraction of light around objects. That is, when a light wave from the source strikes the object, the wave will scatter somewhat. This scattering limits the spatial resolution of a conventional microscope to no better than about one-half the wavelength of the light being used. For visible light, diffraction limits the resolution to no better than a few hundred nanometers.

How, then, can microscopy using visible light attain a resolution down to several nanometers? By using tiny light sources that are no larger than a few nanometers in diameter. Examples of these types of light sources are fluorescent molecules, nanoparticles, and quantum dots. The JQI work uses quantum dots, which are tiny crystals of a semiconductor material that can emit single photons of light. If such tiny light sources are close enough to the object meant to be mapped or imaged, nanometer-scale features can be resolved. This type of microscopy, called "super-resolution imaging," surmounts the standard diffraction limit.

JQI fellow Edo Waks and his colleagues have performed nanoscopic mappings of the electromagnetic field profile around silver nanowires by positioning quantum dots (the emitter) nearby. (Previous work summarized at http://jqi.

They discovered that sub-wavelength imaging suffered from a fundamental problem, namely that an "image dipole" induced in the surface of the nanowire was distorting knowledge of the quantum dot's true position. This uncertainty in the position of the quantum dot translates directly into a distortion of the electromagnetic field measurement of the object.

The distortion results from the fact that an electric charge positioned near a metallic surface will produce just such an electric field as if a ghostly negative charge were located as far beneath the surface as the original charge is above it. This is analogous to the image you see when looking at yourself in a mirror; the mirror object appears to be as far behind the mirror as you are in front.

The quantum dot does not have a net electrical charge, but it does have a net electrical dipole, a slight displacement of positive and negative charge within the dot. Thus when the dot approaches the wire, the wire develops an "image" electrical dipole whose emission can interfere with the dot's own emission.
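The mirror analogy here is the electrostatic method of images; as a minimal sketch (the standard textbook construction for a perfect conductor at z = 0, not the JQI team's actual analysis), the image dipole is built like this:

import numpy as np

# Method of images for a point dipole above a perfect conductor at z = 0:
# the image sits at the mirrored position, with the in-plane dipole
# components flipped in sign and the perpendicular component preserved.

def image_dipole(position, moment):
    """Return (image_position, image_moment) for a conductor filling z < 0."""
    x, y, z = position
    px, py, pz = moment
    return np.array([x, y, -z]), np.array([-px, -py, pz])

# A dot 10 nm above the surface with its dipole parallel to the surface:
pos_img, p_img = image_dipole((0.0, 0.0, 10.0), (1.0, 0.0, 0.0))
print(pos_img, p_img)   # [0. 0. -10.], [-1. 0. 0.]

The sign flip for the parallel component is why the perceived distortion differs for dipoles oriented parallel versus perpendicular to the wire surface, as in the figure caption above.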
Since the measured light from the dot is the substance of the imaging process, light coming from the "image dipole" can interfere with light coming directly from the dot. This distorts the perceived position of the dot by an amount roughly ten times larger than the expected spatial accuracy of the imaging technique, as if the nanowire were acting as a sort of funhouse mirror. The JQI experiment successfully measured the image-dipole effect and showed that it can be corrected under appropriate circumstances. The resulting work provides a more accurate map of the electromagnetic fields surrounding the nanowire. The JQI scientists published their results in the journal Nature Communications.

Lead author Chad Ropp (now a postdoctoral fellow at the University of California, Berkeley) says that the main goal of the experiment was to produce better super-resolution imaging: "Any time you use a nanoscale emitter to perform super-resolution imaging near a metal or high-dielectric structure image-dipole effects can cause errors. Because these effects can distort the measurement of the nano-emitter's position they are important to consider for any type of super-resolved imaging that performs spatial mapping."

"Historically scientists have assumed negligible errors in the accuracy of super-resolved imaging," says Ropp. "What we are showing here is that there are indeed substantial inaccuracies and we describe a procedure on how to correct for them."

Reference report: "Nanoscale probing of image dipole interactions in a metallic nanostructure," Chad Ropp, Zachary Cummins, Sanghee Nah, John T. Fourkas, Benjamin Shapiro, and Edo Waks, Nature Communications, 19 March 2015; doi:10.1038/ncomms7558

Phillip F. Schewe | EurekAlert!
This article is misleading at best. Article title: "NASA animation shows Arctic sea ice approaching normal levels". Here's their quote:

However, late season winter storms over the Bering and Barents Seas allowed it to continue to enlarge. “By the end of March, total extent approached 1979 to 2000 average levels for this time of year,” the NSIDC said.

Notice: "for this time of year." And that's it. That's all they say. They leave you with a pretty animation without any context. What normally happens? What was expected to happen? The article leaves a casual reader with the impression that Arctic ice is "approaching normal levels." Their link is to the National Snow and Ice Data Center (good for them!) but they fail to link to the actual news release about the observed maximum ice late in the season. If they had, they would see this little detail:

The average ice extent for March 2010 was 670,000 square kilometers (260,000 square miles) higher than the record low for March, observed in 2006. The linear rate of decline for March over the 1978 to 2010 period is 2.6% per decade.

So we had one good March compared to a steady decline for the last thirty years. And if they would manage to keep reading...

The late date of the maximum extent, though of special interest this year, is unlikely to have an impact on summer ice extent. The ice that formed late in the season is thin, and will melt quickly when temperatures rise.

Well put.

These articles always seem to hide that they are talking about ice extent, not mass. I think a lot of this "Arctic ice is returning to normal levels" stuff is based on satellite data. A researcher from the University of Manitoba actually took an icebreaker out to look at the ice. The satellites are being fooled into thinking the thin, rotten ice is thick, multi-year ice. Or so says this article, anyway.

Interesting; I wonder what has caused that. If memory serves, last year was the warmest year in recorded history (average temp, I believe), so without a lot of knowledge on how this stuff works I would assume it would be a bad year for that. Very interesting; can't wait to read some comments by people who work in this field or know a lot about it.

I think the hottest years run: 2005, 2007 = 2009, 1998 - but they're all so close it barely makes a difference. The more important stat is that the ten warmest years on record all occurred in the last ~decade. As for this story from the Examiner, it's misleading - see http://www.guardian.co.uk/environment/2010/apr/07/arctic-sea-ice-recovers-slightly Or better still, go to the source. Also, consider that ice extent - i.e. area - is only part of the story, and often used by deniers to pretend everything is OK. They ignore volume and age of ice - both of which are in a clear and rapid downward trend.

The 2000-2010 period is very warm, but the northern polar winter is still sufficiently cold to freeze ice in polar waters. I think we can all be assured that there will always be polar sea ice formation during the polar winter. The real story in northern polar ice is the steady decrease in summer ice extent and the inevitability of ice-free polar summers at some point in the not-too-distant future.

"warmest year in recorded history" - I don't know; in NYC I had to use my AC a lot less than in the year before, not that that means anything. But what temp data do you speak of? NASA said it was '98, then '36, then '98 again.
Rereading some of the stuff, it looks like you were right: http://www.ncdc.noaa.gov/sotc/?report=global&year=2009&month=13&submitted=Get+Report Perhaps I was thinking of the quote about it being the warmest decade. Regardless, I hope some people chime in here soon :)

This question of 1936 vs 1998 was for US average land surface temperature in the lower 48 states. 1998 remains the warmest globally averaged year.

No, it isn't normal. It's much thinner with far less multi-year ice. The simple extent may "approach" normal but it will be gone very quickly.

"will" does not equal "may", unless you are from the future? Give it another year like this, and it will be thicker.
A plasma is, at a basic level, an ionized gas. When electrons are stripped from ions, the gas becomes a quasi-neutral system of two or more interpenetrating fluids. In a gas, microscopic behavior is governed by collisions between atoms or molecules. The macroscopic behavior is determined by taking volumes with large numbers of particles within and taking statistical averages over known particle distributions (i.e. a Maxwellian). A volume element of fluid must be small relative to the system size and contain a large number of particles for this to be valid. In a plasma, electrons and ions still undergo collisions, but additionally they interact via the long-range Coulomb force. This allows the plasma to screen both DC and AC fields, as we will see in subsequent sections. This introduces two more conditions: the screening length must be small compared to the system size, and the number of particles within a screening length must be large.

Here we seek to define the typical length scale for DC electric field screening using simple arguments. In an infinite homogeneous plasma, we can take the electron distribution function as

f(v) = A \exp\left(-\frac{m_e v^2}{2 k_B T}\right)    (1.1)

where $A$ is a constant of proportionality, $v$ is the velocity of a particle, $T$ is the temperature, and $k_B$ is Boltzmann's constant. In the presence of an electrostatic potential, $\phi$, this becomes

f(v) = A \exp\left(-\frac{m_e v^2/2 - e\phi}{k_B T}\right)    (1.2)

where $-e$ is the electron charge. This is somewhat intuitive and also can be derived rigorously by taking the partition function for Boltzmann statistics and using $-e\phi$ as the 'chemical potential'. That derivation is omitted here for simplicity.

Now consider the effect of a small deviation in density between electrons and ions, using the Poisson equation:

\nabla^2\phi = \frac{e}{\epsilon_0}\left(n_e - n_i\right)

where we have taken $Z = 1$, a hydrogen plasma, for simplicity. Next we assume that only the electrons move in response to the applied potential, which is reasonable given the small electron mass. In this case the ion density is simply the equilibrium value, $n_i = n_0$, and we can rewrite Poisson's equation as

\nabla^2\phi = \frac{e}{\epsilon_0}\left(n_e - n_0\right)

We can calculate the perturbed electron density from the distribution functions by taking the ratio of Eq. 1.2 to Eq. 1.1:

n_e = n_0 \exp\left(\frac{e\phi}{k_B T_e}\right)

Substituting into the Poisson equation,

\nabla^2\phi = \frac{e n_0}{\epsilon_0}\left[\exp\left(\frac{e\phi}{k_B T_e}\right) - 1\right]

In general this cannot be solved analytically, but in the limit $e\phi \ll k_B T_e$ we can Taylor expand:

\exp\left(\frac{e\phi}{k_B T_e}\right) \approx 1 + \frac{e\phi}{k_B T_e}

which we put into the Poisson equation to get:

\nabla^2\phi = \frac{n_0 e^2}{\epsilon_0 k_B T_e}\,\phi

which has solutions

\phi \propto \exp\left(-|x|/\lambda_{De}\right), \qquad \lambda_{De} = \sqrt{\frac{\epsilon_0 k_B T_e}{n_0 e^2}}

which is the electron Debye length. (As a one-line example: for $T_e = 10\ \mathrm{keV}$ and $n_0 = 10^{20}\ \mathrm{m^{-3}}$, typical of magnetic fusion plasmas, this gives $\lambda_{De} \approx 7\times10^{-5}\ \mathrm{m}$, far smaller than any device scale.) Rigorously, the ion motion must be treated too, in which case

\frac{1}{\lambda_D^2} = \frac{1}{\lambda_{De}^2} + \frac{1}{\lambda_{Di}^2}

Physically, the Debye length represents the distance an electrostatic field can penetrate into a plasma before it is screened. Electrons and ions move to oppose the imposed field. At zero temperature the screening is perfect, but at finite temperature the Debye length is finite due to thermal ions and electrons having enough energy to sample the screened potential. The density dependence is logical - if there are more available charges, the screening length will be short.

In the previous section we considered the plasma DC response; now we consider the AC response. A simple intuitive approach is taken here, which is rigorously confirmed later. If we consider a high-frequency oscillation, we can consider the ion inertia to be infinite. A simple intuitive system is as follows. Consider semi-infinite slabs of rigid electron and ion fluids, with finite length in the $x$ direction. Take the ion slab as stationary and the electron slab as displaced by a distance $x$. There will be a restoring Coulomb force between the two slabs. First we need the field of a charge slab. Using Poisson's equation, the charge density is $\rho = \pm n_0 e$ in the regions of uncompensated charge, with $Z = 1$ taken for simplicity (hydrogen plasma).

Using Gauss's law, $\oint \mathbf{E}\cdot d\mathbf{A} = Q_{enc}/\epsilon_0$, for a Gaussian pillbox, the field outside a single slab of surface charge density $\sigma$ is

E = \frac{\sigma}{2\epsilon_0}

Now we consider the restoring force on the electron slab. The displacement exposes surface charge $\sigma = n_0 e x$ of opposite sign at the two ends, so the equation of motion

m_e \ddot{x} = -eE = -\frac{n_0 e^2}{\epsilon_0}\,x

which of course has well-known oscillatory solutions of the form $x \propto e^{-i\omega t}$ with characteristic frequency

\omega_{pe} = \sqrt{\frac{n_0 e^2}{\epsilon_0 m_e}}

which is the electron plasma frequency. At lower frequencies, the ion plasma frequency is also important, which is obtained by taking $m_e \to m_i$.

First, we consider the motion of a single charged particle in a constant uniform magnetic field - the cyclotron motion. So we take $\mathbf{E} = 0$ and $\mathbf{B} = B\hat{z}$, where we can choose the magnetic field along the $z$ axis without loss of generality. Then the Lorentz force,

m\frac{d\mathbf{v}}{dt} = q\,\mathbf{v}\times\mathbf{B}

leads to equations of motion

m\dot{v}_x = qB\,v_y, \qquad m\dot{v}_y = -qB\,v_x, \qquad m\dot{v}_z = 0

The $z$ equation, of course, is trivial: constant motion in the $\hat{z}$ direction. The other two axes can be addressed by taking the derivative of the $v_x$ equation of motion to substitute in from the $v_y$:

\ddot{v}_x = \frac{qB}{m}\,\dot{v}_y = -\left(\frac{qB}{m}\right)^2 v_x

which simplifies to $\ddot{v}_x = -\omega_c^2 v_x$, where we have defined the cyclotron frequency $\omega_c \equiv |q|B/m$. This clearly allows oscillatory solutions; without loss of generality we take $v_x = v_\perp\cos(\omega_c t)$. Combined with the original equations of motion, and using the initial conditions, we arrive at the velocity

v_x = v_\perp\cos(\omega_c t), \qquad v_y = \mp v_\perp\sin(\omega_c t)

(upper/lower sign for positive/negative charge). Clearly the total perpendicular kinetic energy is conserved, $\frac{1}{2}m v_\perp^2 = \mathrm{const}$, as expected since the magnetic force does no work. Next we consider the particle position. We simply need to integrate the velocity equations once more, which gives:

x = x_0 + r_L\sin(\omega_c t), \qquad y = y_0 \pm r_L\cos(\omega_c t)

The average position $(x_0, y_0)$ is generally referred to as the 'guiding center' position. The quantity $r_L \equiv v_\perp/\omega_c$ is the Larmor radius, which gives the size of gyrations due to the Lorentz force. These equations describe the motion of any charged particle in a constant and uniform magnetic field. The motion is generally helical gyration about the magnetic field lines.

Next we consider the motion in constant and uniform $\mathbf{E}$ and $\mathbf{B}$ fields. First, consider the situation where $E_\parallel \neq 0$. Clearly this will lead to continuous acceleration along the magnetic field lines, where the magnetic force is zero. This is not generally an interesting phenomenon, except in the case of runaway electrons, which will be considered later. So for now, we consider the case $\mathbf{E}\perp\mathbf{B}$. Without loss of generality we take $\mathbf{B} = B\hat{z}$ and $\mathbf{E} = E\hat{x}$. In this case the general Lorentz force,

m\frac{d\mathbf{v}}{dt} = q\left(\mathbf{E} + \mathbf{v}\times\mathbf{B}\right)

reduces to equations of motion,

m\dot{v}_x = qE + qB\,v_y, \qquad m\dot{v}_y = -qB\,v_x

Once again, the equation of motion along the magnetic field lines is trivial. Focusing on the $x$-$y$ plane, we start by taking the derivative of the $v_y$ equation, which allows us to substitute the first equation of motion, giving,

\ddot{v}_y = -\frac{qB}{m}\,\dot{v}_x = -\omega_c^2\left(\frac{E}{B} + v_y\right)

Rearranging terms, we get that,

\frac{d^2}{dt^2}\left(v_y + \frac{E}{B}\right) = -\omega_c^2\left(v_y + \frac{E}{B}\right)

This clearly suggests a coordinate transformation, $v_y' = v_y + E/B$, in which case the equation of motion becomes

\ddot{v}_y' = -\omega_c^2\,v_y'

which is simply cyclotron motion in the primed coordinate system. But now that we have introduced the primed system, we note that it is drifting relative to the original reference frame with a constant velocity:

\mathbf{v}_{drift} = -\frac{E}{B}\,\hat{y}

More generally, if we allow $\mathbf{E}$ to lie anywhere in the plane perpendicular to $\mathbf{B}$, we get that

\mathbf{v}_E = \frac{\mathbf{E}\times\mathbf{B}}{B^2}

where this is called the $E\times B$ drift for obvious reasons. We note the important result that this drift does not depend on the particle mass or the particle charge, which means that in a plasma electrons and ions will drift with the same velocity and the same direction. The $E\times B$ drift can therefore cause net motion of the plasma, and in certain situations can be problematic for plasma confinement in magnetic fusion devices. We observe that in the derivation of the $E\times B$ drift there is nothing 'special' about the electrostatic force $q\mathbf{E}$, and we notice that we can substitute a general force $\mathbf{F}$ in the previous result to get the general force drift.
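Written out explicitly in the conventional form, with a general force $\mathbf{F}$ standing in for $q\mathbf{E}$ (this is the standard textbook expression, reproduced here since the following sections use it repeatedly):

\mathbf{v}_F = \frac{1}{q}\,\frac{\mathbf{F}\times\mathbf{B}}{B^2}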
We note that in the general case there is a dependence on the particle charge , meaning that electrons and ions will drift in opposite directions due to the ‘general force’. Up until now, we have only considered uniform fields. That will now change. Take and non-uniform but constant . The magnetic field points in the direction but has a gradient in the direction (taken without loss of generality). As usual we start with the Lorentz force, which gives equations of motion Again, the equation of motion is trivially solved. Focusing on the plane, the equations of motion are now more complicated than previously due to the position dependence of the magnetic field strength. If we take the limit , or that the particle Larmor radius is much smaller than the gradient length scale for changes in , then we can Taylor expand the magnetic field around the particle guiding center position: substituting this into the equation of motion, In general this would be very difficult to solve, but if we treat it perturbatively and use the general cyclotron motion as a 0th order solution, we can rewrite this as Now consider averaging over a gyration. The first term averages to , and the second term becomes (using over a whole gyration) using our previous definitions of , Combined with our result for the general force drift, we can simply write the drift due to the magnetic field gradient as where we have made the generalization to arbitrary directions of the magnetic field gradient by intuition. We note that the drift depends on both the particle charge and mass, which means that electrons and ions will drift in opposite directions at potentially different rates due to magnetic field gradients. The sign of the above equation is the sign of the charge. If the species temperatures are equal () then is equal and the electrons and ions drift in opposite directions but at the same rate. We can use our generalized force drift equation to examine two other interesting situations. First, consider the effect of a gravitational field on the plasma. In terrestrial experiments there is an unavoidable force which leads to a drift velocity So a gravitational field will induce a drift, which depends on both the particle mass and charge. In real experiment, however, this is usually neglected due to the smallness of relative to other forces (i.e. the electromagnetic force). Now consider a curved plasma. If the magnetic field lines are bent with some curvature radius, how will that affect the particle motion? The simple answer is to consider the particles as primarily streaming along the field lines. If the field lines are bent into a circle, as in a tokamak, then the curvature is essentially equivalent to considering the motion in a rotating reference frame. This induces a centrifugal force opposite the direction of curvature and perpendicular to the magnetic field: this is written by inspection and intuition but could be derived rigorously. The velocity parallel to the magnetic field lines is denoted by . If we substitute into the general force drift equation, then we get The curvature is perpendicular to the radius of curvature vector and the magnetic field. Its magnitude depends on the radius of curvature, the magnetic field, and particle info. In particular we note that it is proportional to the ratio which means that electrons and ions will drift in opposite directions and differing rates (unless ). 
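For reference, the drifts derived in this section, collected in the conventional symbols used above ($r_L$ the Larmor radius, $v_\parallel$ the velocity along the field, $\mathbf{R}_c$ the radius-of-curvature vector; the $\pm$ and $1/q$ factors carry the sign of the charge):

\mathbf{v}_{\nabla B} = \pm\frac{1}{2}\,v_\perp r_L\,\frac{\mathbf{B}\times\nabla B}{B^2}, \qquad \mathbf{v}_g = \frac{m}{q}\,\frac{\mathbf{g}\times\mathbf{B}}{B^2}, \qquad \mathbf{v}_R = \frac{m v_\parallel^2}{q}\,\frac{\mathbf{R}_c\times\mathbf{B}}{R_c^2 B^2}

In a vacuum field, where $\nabla\times\mathbf{B} = 0$, the grad-B and curvature drifts combine into a single expression proportional to $v_\parallel^2 + \frac{1}{2}v_\perp^2$, so in practice they are often treated together.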
Consider a time-varying electric field given by Also allow a constant uniform magnetic field , so that the equations of motion become as usual the equation is trivial. Taking a derivative of the first equation, where we have used Fourier analysis and the over-tilde denotes time varying quantities. If we define two drift velocities The latter drift is simply the drift but with a time-varying electric field; the drift velocity will vary sinusoidally with the imposed field as we might expect. The first drift is the so-called polarization drift. Rewriting the equations of motion, the motion in the direction is simply a superposition of the cyclotron motion and the polarization drift. In the direction, the motion is a superposition of cyclotron motion and the drift. In the case of an electric field which cannot be simply decomposed into Fourier components, we can rewrite a more general expression by inspection, This is the first adiabatic invariant. We want to start from the action integral, in general form, which will be an adiabatic invariant of the motion (in certain limits). First we consider the cyclotron motion of a particle, so let us take and and integrate over one gyration Now we need to rewrite this quantity as The first part is constant if the charge-to-mass ratio is not changing over the motion (a good assumption). In that case, the second part of the above equation is conserved, leading to This quantity is an invariant of the motion. However, we have assumed that pure cyclotron motion is a good approximation of the particle motion over short time scales, or equivalently, we have assumed slow motion . In cases where this is not true, then is not an invariant of the motion. The adiabatic invariant can be directly applied to a machine of fusion (historical) interest - the magnetic mirror. Consider a linear cylindrical machine where the magnetic field is weakest at the center and reaches a peak field value near the ends. This could be arranged, for example, by a two-coil configuration. A particle near the center of the machine, i.e. a low-field region, is heading towards the high-field region. The problem is to define when the particle is confined. The initial particle kinetic energy can be decomposed into parallel and perpendicular components which will satisfy where is the pitch angle relative to the magnetic field. Based on the previous derivation, in the slow motion limit, we will have the adiabatic invariant of the motion We can see from the definition of that as increases, the particle’s perpendicular velocity must increase to conserve . For a given particle’s kinetic energy, then, there is a maximum value of that can be reached before the particle is reflected. Consider the extremes of the motion, and setting we get using and the pitch angle definition, Next we define the ‘mirror ratio’ of the experimental machine as the ratio of minimum to maximum field strength, . In this case there is a critical pitch angle for reflection, Any particles with are confined, while particles with are not confined. This leads to a ‘loss cone’ in phase space. The mirror machine was originally proposed as a scheme for fusion energy. Unfortunately, the loss cone particle loss is too extreme for an efficient machine. Coulomb scattering continually scatters particles into the loss cone, and they are lost out the ends of the machine. A rigorous derivation finds that the maximum theoretical value for the mirror machine is . 
While this is actually slightly greater than unity, a real machine will not achieve the theoretical result, and even if it did a machine will never be economical for energy generation. Consider a single electron moving in a tokamak. In today’s machines, large transformers are used to inductively drive a toroidal current in the machine. In the (realistic) event that the plasma has finite resistivity, this will create a non-zero toroidal EMF. The value of this field is not important for the problem; we simply consider the case of a toroidal electric field. The electron equation of motion will be where is the collision frequency for momentum transfer, generally . It is convenient to introduce the form where the denominator is the electron thermal velocity . If we rewrite the electron equation of motion as, then the parts of the right hand side are normalized to which we will not derive here. The important part is that with the velocity normalization we have done, neither or depend on . We can then immediately see that for the electron will continuously accelerate. This is the runaway condition. In particular, we can write down a ‘critical velocity’ where all electrons starting with a velocity greater than will continuously accelerate. This phenomenon arises because the electron collision frequency for momentum transfer is proportional to so that fast electrons collide very rarely. Runaway electrons are potentially problematic for tokamak systems. This is more a physical argument than a strict derivation. We start from the well-known Navier-Stokes equation, i.e. the momentum equation for ordinary hydrodynamics: where is the mass density, is the ordinary fluid velocity, and is the scalar pressure. in this equation is the fluid kinematic viscosity. Right off the bat, we know to make the substitution for plasma species . The kinematic viscosity term is absorbed with the scalar pressure term into a tensor pressure term, giving us: where the bold-face denotes a tensor, and the ellipses are included to show that we are still missing terms in this equation. The first obvious addition that must be included when going from ordinary hydrodynamics to plasma magnetohydrodynamics is the effect of the Lorentz force. While ordinary fluids are composed of charge neutral particles, plasmas of course consist of electrons and ions and the fluid species generally has non-zero charge . In this case each particle feels a Lorentz force due to the local fields: which is generalized to a force on a fluid element by multiplying by the particle density and allowing the particle velocity to go to the average fluid velocity. This term is added to the momentum equation to obtain There is one more effect we must include. In ordinary hydrodynamics the fluid motion is determined by collisions within the fluid, and by external forces represented by the pressure term. But in a plasma, multiple fluids can be interpenetrating, most obviously the electron and ion fluids which must be co-located to preserve charge quasi-neutrality. This allows for a transfer of momentum between fluids via collisions, which will generally obey a relation for collisions between fluids and , with collision frequency for momentum transfer . This is added to the momentum equation to give us its final two-fluid form: If there are more than two fluid species (i.e. in a multi ion species plasma) then we must sum the last term with going over all other species in the plasma, though primarily we will be interested in electron+ion two-fluid plasmas. 
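Before moving fully into fluid models, it is convenient to collect, in standard notation, the remaining single-particle results of the preceding sections (the polarization drift, the first adiabatic invariant, and the mirror loss-cone condition, using the minimum-to-maximum mirror-ratio convention adopted above) together with the final two-fluid momentum equation for a species $j$:

\mathbf{v}_p = \pm\frac{1}{\omega_c B}\,\frac{d\mathbf{E}_\perp}{dt}, \qquad \mu = \frac{m v_\perp^2}{2B}, \qquad \sin^2\theta_c = \frac{B_{\min}}{B_{\max}}

m_j n_j\left[\frac{\partial\mathbf{v}_j}{\partial t} + (\mathbf{v}_j\cdot\nabla)\mathbf{v}_j\right] = q_j n_j\left(\mathbf{E} + \mathbf{v}_j\times\mathbf{B}\right) - \nabla\cdot\mathbf{P}_j + \sum_{k\neq j} m_j n_j\,\nu_{jk}\left(\mathbf{v}_k - \mathbf{v}_j\right)

Particles with pitch angle $\theta > \theta_c$ are mirror-confined; those with $\theta < \theta_c$ lie in the loss cone and escape.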
In ordinary hydrodynamics the continuity equation is: If we simply make the same substitutions, and we get that This is not as often quoted in fluid mechanics. Basically, we must think about how the internal energy of a fluid element changes. It can change via the first-order flow of the fluid, via compressive () work, and via heat conduction. Additionally, there may be any number of ‘external’ sources or sinks of energy. The average internal energy of a particle in an ideal gas, equilibrated to a Maxwellian, can be given by . The overall internal energy is the particle density multiplied by this quantity. Changes in density are included in the term; here we consider the change in temperature due to the flow. We can simply write this via the convective derivative as: Next, we consider the compressive work done on (or by) a fluid element, which is simply the pressure multiplied by the divergence of the fluid velocity: Finally, we can simply write down the heat flux in terms of the thermal conductivity and temperature gradient, We combine these various terms together, with the total change in internal energy from the three equal to the external sources or sinks of energy: where represents several sources and sinks of energy: For completeness we have to include Maxwell’s equations for electromagnetism to the fluid equations above, since the electric and magnetic fields must be determined self-consistently in MHD. In plasma physics we prefer to use the vacuum equations with plasma serving as the source terms. Anyways, in SI units the four Maxwell equations are: where is the total charge density: and is the total current density: In many plasmas, the generalized two-fluid system described above can be simplified considerably to a one-fluid model. The goal of this section is to present a derivation of the one-fluid model with a discussion of when it is applicable. For simplicity we will use a single-species plasma with the usual assumptions. First, we need to define the single-fluid variables which will be used throughout this section: We start off by writing the momentum equations for both electron and ion species. For simplicity we take the pressure tensor as isotropic, meaning that the pressure appears only as the usual scalar pressure. The viscosity terms are small if the ion Larmor radius is much smaller than scale lengths for typical variations, for instance this condition can be written as using the density gradient length scale. We also drop the convective terms . Chen describes this as ‘hard to justify’. I think that the best justification is that in motions where the fluid velocity is ‘small’, and implicitly the 0th-order velocity is zero, then this convective term is second-order in the small velocity and can be ignored. A valid question is ‘small relative to what?’ If we consider the ratio of the convective term to the partial time derivative: If the plasma response is, say, acoustic (the slowest case), then and the convective term is negligible if the fluid motion is sufficiently sub-sonic. In any event, we can now write the ion momentum equation as and similarly for the electrons: where in the collisional momentum transfer term (the last one) we note that , which leads to the physically intuitive result that momentum lost by one species is gained by the other (terms are equal and opposite). We start off by taking the sum of the two momentum equations, which will lead to the single-fluid equation of motion. 
We get that, The left-hand side is simply the time derivative of the single-fluid velocity multiplied by the density. The electric field terms have canceled. The magnetic field part of the Lorentz force reduces to using the single-fluid total current. We take the single-fluid total pressure as the sum of the individual ion and electron pressures, which is sensible. By taking the sum, the collision term has also canceled (what is lost by one fluid is gained by the other). We are thus left with the single-fluid equation of motion: In some cases a gravitational force term is added to the right hand side, which is not included in this derivation but can be seen via physical intuition. To get to the various Ohm’s Laws, we must take the less-obvious step of calculating the difference of the ion and electron momentum equations. But in particular, we multiply the ion equation by the electron mass and vice versa, so that the difference becomes which is kind of a mess but simplifies considerably. The left hand side can be written in terms of the current, The electric field term can easily be simplified by recognizing that it is just . The magnetic field term is trickier. We have to write: Then the entire magnetic field term is: The collision term (momentum transfer) is re-written in terms of the resistivity so that it becomes Combining all of these together we get An immediate and obvious simplification is to take the limit which means that the terms and . We also observe that, since the electron and ion pressures are for in which case so we can drop the ion pressure gradient term. We can also rearrange terms and divide by to get that We then generally take the limit of slow motions, i.e. where inertial (cyclotron motion) effects are unimportant. Explicitly this is the limit where is the plasma scale size. We can get this result by considering the ratio of the current time derivative term to the term within square brackets: So, if we can neglect the time derivative term. In any event, this limit reduces the equation to: This is known as the Generalized Ohm’s Law. The term is the Hall current term. In many physical systems it turns out that the Hall current and pressure gradient terms are small, in which case this expression reduces to the Resistive MHD Ohm’s Law: In some plasmas the resistivity is small enough to be neglected (i.e. high temperature and low density). In this case the resistive Ohm’s law reduces to the Ideal MHD Ohm’s Law: In summary, the single-fluid MHD model is: with the various options given for Ohm’s Law (more complex ones can be derived but are not included). In this section we consider a few cases of MHD equilibria analysis, using the single-fluid MHD theory just derived. First, we note that in equilibrium several simplifications can be made to the single-fluid equations, in particular by taking time derivatives to zero (no change in an equilibrium situation). For simplicity we take ideal MHD. The continuity equation can be rewritten as: in equilibria this implies that , which means also that = 0. There are no particle or current sources/sinks in equilibrium solutions in our simple model (which does not include, for instance, current drive or neutral-beam heating). 
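For reference, the single-fluid system summarized above can be written out in its standard form, using the single-fluid variables defined earlier,

\rho \equiv m_i n_i + m_e n_e, \qquad \mathbf{v} \equiv \frac{m_i n_i\mathbf{v}_i + m_e n_e\mathbf{v}_e}{\rho}, \qquad \mathbf{J} \equiv e\left(n_i\mathbf{v}_i - n_e\mathbf{v}_e\right), \qquad p \equiv p_e + p_i

as the continuity and momentum equations

\frac{\partial\rho}{\partial t} + \nabla\cdot(\rho\mathbf{v}) = 0, \qquad \rho\,\frac{\partial\mathbf{v}}{\partial t} = \mathbf{J}\times\mathbf{B} - \nabla p

closed by one of the Ohm's laws derived in this section:

\mathbf{E} + \mathbf{v}\times\mathbf{B} = \eta\mathbf{J} + \frac{1}{en}\left(\mathbf{J}\times\mathbf{B} - \nabla p_e\right) \quad \text{(generalized)}

\mathbf{E} + \mathbf{v}\times\mathbf{B} = \eta\mathbf{J} \quad \text{(resistive)}

\mathbf{E} + \mathbf{v}\times\mathbf{B} = 0 \quad \text{(ideal)}

In equilibrium the time derivatives drop out, which is the specialization made next.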
The momentum equation becomes simply: The complete set of equations which define simple MHD equilibrium systems is combined with two of Maxwells’ equations; in total we have that: One important qualitative feature of solutions to this set of equations is that both the lines of force ( field lines) and the current must lie in a plane perpendicular to the pressure gradient. The rest of this section is devoted to identifying a few useful solutions of the MHD equilibrium problem. Consider a cylindrical plasma which has a current flow in the azimuthal () direction, thus the name. This can be created via the diamagnetic effect of plasmas by an imposed axial field. In any event, the unknowns are , and with the total field and current given by and . By symmetry the condition is automatically satisfied. Ampère’s Law reduces to And the pressure balance equation becomes this can be rewritten, combining the derivatives with respect to r, to get that which can be trivially integrated, yielding the pressure balance equation: where we have automatically solved for the constant of integration by noting that the field must become the vacuum value for radii outside the plasma where , i.e. . We now consider the complementary system, the Z pinch. In this case, the current flows along the axis of a cylindrical plasma which creates an azimuthal magnetic field . There is generally assumed to be an imposed magnetic field along the axis as well, given by the term. The current in this pinch could be driven by a set of electrodes at either end of the plasma. Once again we have the condition trivially satisfied by the geometry. And Ampère’s Law becomes: Combining this with the pressure balance equation, If we expand the derivative we get that: which can be simplified by combining the terms: At this point one must typically assume a current profile to solve analytically for the pressure profile. A screw pinch is a linear combination of and Z- pinch, named because the magnetic field lines wrap around the cylindrical plasma like the threads on a screw (or the stripes on a barber’s pole). The defining pressure balance equation can be written by combining Eqs. 3.50 and 3.55: The equation has three unknowns - . In general solutions are obtained by assuming profiles for two of the three; MHD will then define the third. In our previous discussion of particle drifts, we left out one important drift - the diamagnetic drift. This is because the diamagnetic drift arises as a fluid effect and not from single-particle motion. We start out with the momentum equation for an arbitrary fluid (omitting subscripts for clarity): First, consider the effect of the time derivative term. Let and take the ratio of the time derivative term to the term considering only the motion perpendicular to the magnetic field: For drifts slow compared to the cyclotron motion, we can neglect the time derivative term. What about the term ? Observe that if the drift velocity we will derive ends up being then this term will be 0. Therefore we will assume off the bat that this is the case, and at the end of the derivation we can check to make sure that this assumption is valid. So what we have is that for slow drifts with the drift velocity perpendicular to the gradient, If we take the equation’s cross product with , Using the vector identity for the middle term: By definition of course . If we take this equation, rearrange and solve for we get that: The first term is the familiar drift. 
The second term is the Diamagnetic Drift: We observe that the diamagnetic drift is indeed perpendicular to the pressure gradient, as we had assumed before. Due to the dependence on , electrons and ions will drift in opposite directions and thus the diamagnetic current can induce azimuthal currents in cylindrical plasmas. For and , we can write the diamagnetic current as The diamagnetic drift of the previous section can lead to an unstable situation in cylindrical or toroidal plasmas, leading to so-called drift waves. The basic idea is as follows. We have a plasma magnetically confined by the 0th order field, shown as out of the page (or in the direction). Due to the density gradient as shown, the diamagnetic drift can set up electric fields if there is a perturbation in the isobaric surface (shown in red). This electric field causes drifts, shown as . A detailed analysis follows. The zeroth-order drifts are due to the diamagnetic drift: and also for the ions. Since the electrons can flow along the magnetic field, and we assume that this wave is slow relative to the electron motion, we can use the Boltzmann relation for the electron density: which is to first order, in the limit . Where the plasma is perturbed to higher pressure (higher density), as in the top of the figure, the potential is positive. The electric field thus points from peak to trough of the wave, as shown. We can now write the first-order velocity due to the drift as: which can be obtained from . We can also use a sort of ‘continuity equation’ for guiding centers to write: Doing some Fourier analysis on this continuity equation, Where represents the density perturbation due to the warped isobar. Using the Boltzmann relation from earlier we can also write (separately) that Equation the RHS of these two gives: The waves travel at the electron diamagnetic drift velocity. This is the behavior in the azimuthal direction. There is a small component which causes these perturbations to propagate. Along the way we implicitly assumed that electron currents cannot simply flow along to neutralize the electric field; this means that the plasma must be resistive (and these are sometimes called ‘resistive drift waves’). We have also neglected the instability analysis. One can see via qualitative arguments that this situation is very similar to the gravitational instability, and thus it can be argued that the perturbations in isobars tend to grow. The full analysis requires treating the polarization drift and non-uniform drift as well. We wish to consider the following scenario: an infinite uniform plasma, where the ions are stationary in the lab frame but the electrons have a non-zero uniform . The plasma is unmagnetized () and cold () for simplicity. The fluid momentum equation is: In a 1-D flow the first-order terms are zero for the electrons (ions have no first-order contribution from this term). Further, the terms are zero because we assumed a cold plasma. The collisional momentum transfer term also goes to because we note that . For the ions, contains no first-order terms since . For the electrons, a term remains. For simplicity we take . We thus get an ion equation: and an electron equation: Assuming electrostatic waves of the form: we can use Fourier analysis to simplify the ion equation to: And the electron equation similarly: Next we use the fluid equation of continuity, which is generally: So for the ions, employing our previous assumptions and Fourier analysis, where we have used the previously-derived relation between and . 
Following through the same steps for the electrons: We need one more equation to close these relations. We note that this is a high-frequency oscillations so we can use Poisson’s equation to eliminate the electric field: Using the first order density perturbations we have already derived on the right side, and Fourier analysis again on the left side, we get: We can now eliminate the electric field, and recognizing the plasma frequency on the right hand side, An important question is whether the plasma is stable or unstable to perturbations. If we define the dimensionless variables and we can define: There are singularities at and . But we note that solutions to are solutions to the dispersion relation. If is sufficiently large then there are 4 real roots for and thus , which all correspond to stable though oscillatory solutions. On the other hand, if there are only two real solutions for then there are necessarily two complex, one of which will be a damped oscillation and the other will be unstable (growing). This occurs for sufficiently small values of , or more physically, for small or large-wavelength perturbations. This is an important question in the dynamics of magnetized ideal plasmas. We want to know how plasma elements move with respect to the magnetic field lines, or vice versa. Consider the magnetic flux through a surface of plasma: where is the unit vector normal to the surface . We now want to explore how the magnetic flux through the plasma changes with time. The time derivative of the previous equation can be written as: The left hand side is a simple time derivative. The right hand side has been decomposed: flux can change due to a changing magnetic field (the first term) or due to motion of the plasma surface relative to the magnetic field (the second term). Using Faraday’s Law, and the ideal Ohm’s Law, The important result is that for then the flux is unchanging with time. Physically this occurs when the magnetic field lines are moving with the plasma, and we call this ”frozen in flux”. The obvious analogy is a superconductor. It is a well-known result that magnetic flux cannot penetrate a superconducting volume. If it did, and the flux was changing in time (as it must) then it would induce an EMF in the superconductor, which would cause an infinite current due to the zero resistivity. A real plasma will have finite resistivity but in many scenarios the resistivity is small enough that ideal MHD is a decent approximation. This is particularly true in certain high-temperature magnetically confined plasmas and in very low-density space plasmas. First we discuss the classification schemes generally used for MHD instabilities, then discuss qualitatively a few important examples. First an instability is generally described as internal or external mode. An internal mode instability is one in which the plasma surface does not move. Generally these affect transport and impose operational limits but do not impact confinement. Conversely, an external mode instability is one in which the surface of the plasma moves. These can cause loss of confinement. Next we classify the source of the instability. Generally a non-equilibrium current flows, which modifies the MHD. If a current flows perpendicular the magnetic field, these instabilities are often called pressure-driven since . On the other hand, if a parallel current flows to drive the instability we call it a current-driven mode. Finally, the wall characteristics are often important. 
We can see this simply as follows: If the plasma surface moves towards the wall in an ideal MHD system (frozen flux), the magnetic flux between the plasma surface and wall will be compressed. This will affect the plasma motion and feed back. If the wall is resistive then the compressed flux will ohmically dissipate, but if it is superconducting then this cannot happen. We can therefore discuss no wall, conducting wall, or superconducting wall configurations (though the last is operationally difficult). We start with a qualitative description of the Z-pinch interchange or ‘sausage’ instability, which is depicted in Fig. 3.2. As a reminder, the 0th order current flows axially and thus left to right (or vice versa) in the figure. Consider a perturbation where the surface of the plasma is rippled such that the radius varies with axial position, in particular shown for . In this case we know that which implies that the magnetic confinement is higher at than at . This causes the plasma to expand at and contract further at . Continuing with the Z-pinch equilibrium confinement scheme, consider the scenario in Fig. 3.3. In this case the cylindrical symmetry is broken by allowing the axis (dotted line) to be twisted. Since the current is flowing axially in the plasma, this creates a magnetic field scheme as shown in blue. At the top of the indicated twist, the azimuthal magnetic field is stronger than the equilibrium value. Below the twist it is weaker. This creates a force on the plasma that tends to reinforce the kink because the confinement is coming from the MHD equilibrium term . This configuration is very similar to the previous. We simply note that a screw pinch suffers from the same sensitivity to kinking or twisting as the Z-pinch. See Fig. 3.4. The mechanism is exactly the same as in the previous section, but now we have to remember that there is a potentially large axial/toroidal field as well as the field. The previous sections raise a general question - when is curvature of the plasma surface favorable versus unfavorable? Consider Fig. 3.5. The equilibrium surface of the plasma is denoted in black. We follow the general coloring scheme (plasma is red, magnetic field is blue) used in the previous sections. Consider a perturbation from the equilibrium, as shown. The general picture follows again from considering the confinement in MHD equilibria systems. When the plasma surface is perturbed as shown on the left of Fig. 3.5, it will tend to grow because the confinement is not good at the peak. More generally, it turns out that when the surface of the plasma is curved towards the plasma then the surface is unstable, so we call that ‘unfavorable’ curvature. Conversely, when the curvature is away from the plasma the system is well-confined and we thus call that ‘favorable’ curvature. This instability is named the ‘gravitational’ instability but really can occur for any non-electromagnetic force applied to the plasma. Consider the plasma configuration shown in Fig. 3.6. The plasma is confined by a magnetic field, shown as out of the page, with a well-defined surface horizontal on the page. There is a vertical force away from the plasma surface denoted by . We take to be any generalized non-electromagnetic force. Gravity is one potential application. 
As we saw in the single particle motion derivations, there is a generalized force drift which can be written: We can immediately see that the electrons and ions drift horizontally on the page due to the force , and in opposite directions as denoted by and in the figure. Now we have to consider what will happen if the surface of the plasma is perturbed. Consider Fig. 3.7, in which we have assumed an imposed sinusoidal horizontal perturbation in the plasma surface. The imposed force is still vertically down and is still out of the page. Because of the drift, the ions tend to ‘pile up’ on the left side of a plasma protrusion while the electrons pile up on the right hand side of a protrusion. This creates a first-order electric field which is horizontal in this scheme. On the right where the plasma is protruding from its equilibrium position, the first-order drift points down, as shown. As we know the drift affects ions and electrons equally, and thus this creates a net force on the plasma that reinforces the perturbation. In cases where the plasma is depressed from its equilibrium location the first-order electric field is reversed in direction, and thus the drift also reinforces the perturbation. Therefore the plasma configuration explored in this section is unstable to perturbations in the surface as shown. It is worth noting that this can be thought of as a magnetically-confined plasma version of the well-known hydrodynamic Rayleigh-Taylor instability. In this case the plasma is a ‘heavy’ fluid supported by the magnetic field, a ‘light’ fluid. Qualitatively one can arrive at the same sort of instability without an imposed external force if instead there is a density gradient. Recall that the diamagnetic drift is defined as: If instead of the external force, the plasma in Fig. 3.6 had a vertical density gradient (high density at the top) then the zero order are instead replaced by the electron and ion diamagnetic drifts. The remaining analysis is exactly the same. This is a very important instability because in cylindrical systems, there is necessarily a radial pressure gradient and thus the plasma is susceptible to drift instabilities. In a cylindrical plasma this is sometimes referred to as the flute instability instead of the drift instability since the resulting plasma surface looks like a fluted column. In toroidal systems the same instability occurs. The result of detailed analysis is that there is a small component along the toroidal axis. In this case the perturbations curve slightly and wrap around the plasma like screw threads or stripes on a barber’s pole. This causes the perturbations to propagate and they are called true ‘drift waves’. The final instability we consider is the Weibel instability. This situation is illustrated in Fig. 3.8 and described here. Consider a plasma where the electron temperature is much hotter in one direction than the other two. This can arise in magnetically confined systems (where and directions have different behavior) or in laser plasmas, particularly in direct laser illumination between the ablation surface and the critical surface. Anyways, in the figure we take . as shown. Consider a randomly-arising magnetic field in the plane as shown in blue. Due to the field, electrons will tend to curve as shown for a few different locations. A detailed analysis shows that the electrons tend to curve to form current sheets in the vertical () direction. These current sheets end up reinforcing the imposed field. 
The plasma is therefore unstable to randomly-arising magnetic fields in the plane. Before we get into plasma physics, we start with ordinary gas dynamics. Starting with the Navier-Stokes equation: we can derive sound waves as follows. Neglecting kinematic viscosity (), assuming and taking small waves (so keep only first order terms), we get that Using Fourier analysis, We know that the group velocity of waves is so we can rewrite this as We will also need to use the continuity equation: Using this with the momentum equation derived dispersion relation, we get that which directly implies We implicitly assumed earlier, a more general result is that Before we introduce plasma effects, it is useful to derive electromagnetic waves without plasma first. First we should write down Maxwell’s equations: In a vacuum we can immediately drop the source terms For plane waves the first two equations will be automatically satisfied. Taking the curl of Faraday’s law, and combining with Ampère’s law, If we assume plane wave Fourier solutions of the form and use the vector identity We get that which reduces to, using Fourier analysis, which leads to the group and phase velocities: In media we generally define the index of refraction, and the group / phase velocities become In the derivation of waves we neglected the geometry of the system, which leads to a discussion of the wave polarization. Starting off with the Fourier component solution for the electric field, we note that the solutions here are waves with but the wave propagation direction is . We can also find the magnetic field for this configuration, it is obtained from and the solution is with . So to summarize, in this nice plane polarized solution we have , , and finally . The choice of coordinate system does not cause loss of generality. It is important to note that we will always have in a vacuum. We could have chosen a wave solution in which the electric field vector rotated in the plane as it propagated along . In this case the magnetic field vector will also have to rotate. These are circularly or elliptically polarized solutions. We can characterize these solutions by taking the ratio when this , the wave is right-hand circularly polarized. When it is the wave is left-hand circularly polarized instead. We start off with the MHD picture of an electrostatic plasma oscillation. We will see at the end why it is an ’oscillation’ and not a wave. We need to start with the electron momentum equation: We need to make several simplifications and approximations in this section to make the problem tractable. First, we take meaning that . Next, we neglect collisions, . Next we take and keep only first-order quantities: for convenience we have dropped the subscripts. If we assume sinusoidally oscillatory solutions, then we can use Fourier analysis to get that We have also assumed plane waves. This is the basis of our dispersion relation. Next we can invoke the continuity equation: Using the same set of simplifications, we get: We can use this to rewrite the dispersion relation to eliminate : So that the dispersion relation becomes: Next we need to eliminate both and . We can do this by the use of Poisson’s equation. As we know we have to be careful in plasma physics as to when we can invoke Poisson’s equation, but in this case since we assumed high frequency waves and that the ion inertia is infinite, there must be a resulting electric field. 
Using and plus our usual set of assumptions, this simplifies to Using this result the dispersion relation becomes which can be rewritten as These disturbances do not propagate, , but instead are stationary oscillatory solutions with a given frequency, known as the plasma frequency: The electron oscillations we derived in the previous result can become true waves (i.e. ) when there is a finite electron temperature. They are called Electron Plasma Waves, or sometimes Bohm-Gross Waves or Langmuir Waves. Revisiting the electron momentum equation, with subscripts omitted: Again we neglect collisions, and take plane waves with small perturbations (1st order terms only) but keep the pressure gradient, and take : Generally we must make an assumption about the plasma equation of state, for example adiabatic: if we then substitute the ideal gas law into the above, and in this case, if we take 1-D isothermal compression and expansion of the plasma then . since there is no gradient in the equilibrium density . Plugging this into the momentum equation, taking for simplicity of notation, which we can use Fourier analysis to simplify further to: We need two relations between to eliminate them. First we use the continuity equation: which, with first-order terms Fourier analyzed: Solving for : Next, we can use the Poisson equation (valid because these are fast oscillations, and ion momentum can be approximated as infinite), to get a relation between and . To start: with and , and using , or solving for : Using these expressions for (continuity eq) and (Poisson eq) the momentum equation becomes: using the definition of the thermal velocity , we get the electron plasma wave dispersion relation: We can see immediately that as , then and this dispersion relation reduces the electron oscillation found previously. Furthermore, we note that propagation requires for the electron plasma wave. Next we consider the ion acoustic wave, which as we will see is the analog of traditional hydrodynamic sound waves. Since the ions are involved, it is by necessity a slow wave. We therefore take , since the electron response will be very fast compared to the wave. This requires that we do not use Poisson’s equation in the derivation! Starting with the ion momentum equation: I note that we have omitted the collision term. Taking , keeping only first-order terms, and using , we simplify this to: Doing our traditional simplification with Fourier analysis and , and taking plane waves, we get that: To simplify this further requires that we obtain relations between , , and so that they can be eliminated. First, we use the continuity equation as always: Next we note that in the presence of an electric potential, the electrons can be written as obeying the Boltzmann distribution: since the electron response is much faster than changes in . In the last step we used the assumption to Taylor expand the exponential. Recognizing the contribution from the perturbation in density, , we write: Or equivalently, we can eliminate using: Now we can use these two results to solve the momentum equation for the dispersion relation. We can eliminate now and further simplify algebraically to get that: with the equivalent of the sound speed in a plasma being: We note that the electrons essentially have while the ion contribution is the typical one and actually will reduce to the hydrodynamic result in that limit. Next we turn to the critical question of electromagnetic waves in plasmas. 
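Before doing so, it is worth collecting the electrostatic dispersion relations obtained so far, in the notation used above ($v_{th}^2 = k_B T_e/m_e$; the factor of 3 comes from the one-dimensional adiabatic electron response, $\gamma_e = 3$; $v_0$ is the electron drift speed of the two-stream problem):

\omega^2 = \omega_{pe}^2 \quad \text{(plasma oscillation)}

\omega^2 = \omega_{pe}^2 + 3k^2 v_{th}^2 \quad \text{(electron plasma / Bohm-Gross wave)}

\frac{\omega}{k} = c_s = \sqrt{\frac{k_B T_e + \gamma_i k_B T_i}{m_i}} \quad \text{(ion acoustic wave)}

1 = \omega_{pe}^2\left[\frac{m_e/m_i}{\omega^2} + \frac{1}{(\omega - k v_0)^2}\right] \quad \text{(two-stream, stationary ions)}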
In vacuum, of course, this would be a typical light wave propagating at the speed of light, but in a plasma the and fields can induce plasma 'sources' of charge and current that feed back to the wave. The plasma response can be written in terms of the induced charge and current densities: and thus Maxwell's equations are: We start by taking the curl of Faraday's Law: and substitute in Ampère's Law to get the dispersion relation: Of course we know our vector identity: Unlike the vacuum scenario, . We use this result with our proto-dispersion-relation equation, collecting 'vacuum' terms on the left and 'plasma' terms on the right to get: At this point it is useful to note that if we set we recover the vacuum result . If we assume transverse waves, then , and

We know that light waves are very fast, so we can treat the ion inertia as infinite and consider the current as coming from the electron response only. In this case we have to solve the electron equation of motion, i.e. the momentum equation: At this point we assume , and take the case . Furthermore, assuming that the wave-induced perturbation is small and keeping only first-order terms, we get that: Using Fourier analysis: We can substitute this result into the dispersion relation to get: Now we have to use Fourier analysis on the whole thing to get: We can now cancel the electric field, use and , and rearrange to get the dispersion relation for EM waves in plasmas: The most important thing to note from this dispersion relation is that if , then is imaginary and the wave is evanescent. We call this a cutoff for the wave. It is also useful to calculate the group velocity: If we directly take the derivative of the dispersion relation, we get that: Doing some algebraic manipulation of the original dispersion relation and using this, we get a group velocity of: For propagating waves, we must have which nicely gives us the result that . We also note that, since , the wave propagation depends on the plasma density in this case.

In this section we consider two electromagnetic ion waves: the Alfvén wave and the magnetosonic wave. The first of these is a fundamental plasma wave. We wish to consider low-frequency ion oscillations in the presence of a magnetic field. We must consider a magnetized plasma with non-zero , which we take as . The geometry of this wave is , , and perpendicular to both and . By convention we take without loss of generality. Starting with Maxwell: Since by geometry, the only non-trivial equation is for the direction: We have to calculate the ion response from the momentum equation. For simplicity we take the zero temperature limit, and use . We also neglect second-order terms. This gives us Splitting into and components, and using Fourier analysis, we get Starting with the equation, we can write that We can immediately use this result to get the form for : In summary, then, the ion motion in the plane is given by We can immediately obtain the electron equations of motion by substituting , , and taking the limit . There is no electron motion in the direction because the cyclotron motion overpowers it; however, the electrons have the usual drift which is in this geometry. Since the current of interest is in the direction, it must be from the ion motion only.
Going back to the dispersion relation, Since , we can eliminate them from the dispersion relation and simplify to: We recognize the plasma frequency, and simplify further to: In the limit , then: Rearranging, after some algebra one obtains the dispersion relation for Alfvén waves: We recognize the Alfvén velocity: In the limit then: Basically, this wave is one where the lines of force (magnetic field lines) and the plasma move together in the plane perpendicular to the initial magnetic field.

In the last section we considered waves along the initial magnetic field; we now consider low-frequency electromagnetic waves that propagate across . Again we take , , but now to satisfy the above. At the beginning of this derivation, we note that , implying that the drifts will compress and expand the plasma as the wave propagates. This means that we have no choice but to keep the pressure gradient term in the ion momentum equation: We let , keep only first-order terms, and use ; employing Fourier analysis gets us to: Next we need to split this into and components, which will make it tractable: We now need to use the continuity equation: We use this with the equation of motion to get We can now plug this into the equation of motion for : For the final current, we also need to know the electron velocity, which can be obtained by and . We also take the limit in which case the term in square brackets can be simplified by the binomial approximation, and we get: Similarly to the last section, we know that the dispersion relation here will be: using the fluid velocities we just derived, the plasma frequency, and taking the limit . After a bit of algebra, we can rearrange this to be where we have used the definition of the Alfvén speed, . From the derivation of the ion acoustic wave, recall the sound speed is: so the dispersion relation becomes, after some algebraic hammering, or after rearranging, the magnetosonic wave dispersion relation: This wave is essentially an acoustic (sound) wave but the compression/expansion is created by drifts. In the limit , this wave becomes the ion acoustic wave. In the limit the pressure gradient forces drop out, , and this wave becomes a modified Alfvén wave. Because the phase velocity of this wave is almost always higher than , it is sometimes called the 'fast' hydromagnetic wave.

We give a brief summary of the rest of the menagerie of plasma waves here, with dispersion relations given from general plasma physics references (e.g. Chen). Next we consider electron plasma waves. It turns out that the electron plasma wave along is unaffected by the magnetic field. The perpendicular case, however, is the upper hybrid wave. The easiest situation to consider is the zero temperature limit. If we consider longitudinal waves, we can immediately see that there will be a drift for the electron motion in the wave. This affects the electron equation of motion. At the end of the derivation, we would arrive at the upper hybrid frequency: Once again, we immediately see that the limit corresponds to the usual electron plasma oscillation.

In the derivation of the ion acoustic wave we assumed an unmagnetized plasma. Here we instead consider the ion acoustic wave in a magnetized plasma. It is tempting to set , but the problem for this situation is that the electrons are unable to move between wave fronts because of their small gyroradii. Instead, if and are almost perpendicular but not quite, then this wave can propagate. It turns out that the critical angle is which is indeed small.
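Before completing the derivation of that near-perpendicular wave, a brief numerical aside may help fix the scales of the magnetized waves introduced so far. The sketch below is illustrative only; it assumes the standard SI expressions for the Alfvén speed, the low-frequency magnetosonic phase speed, and the upper hybrid frequency, with arbitrary example parameters.

```python
import numpy as np

# Constants (SI)
e, m_e, m_i = 1.602e-19, 9.109e-31, 1.673e-27
eps0, mu0 = 8.854e-12, 4.0e-7 * np.pi

# Illustrative magnetized hydrogen plasma (arbitrary choices)
n = 1e19                 # density [m^-3]
B0 = 1.0                 # magnetic field [T]
T_e = T_i = 100.0 * e    # temperatures: 100 eV, in joules

# Alfvén speed: v_A = B0 / sqrt(mu0 n m_i)
v_A = B0 / np.sqrt(mu0 * n * m_i)

# Magnetosonic phase speed in the v_A << c limit: v_ph ~ sqrt(c_s^2 + v_A^2)
c_s = np.sqrt((T_e + 3.0 * T_i) / m_i)
v_ms = np.sqrt(c_s**2 + v_A**2)

# Upper hybrid frequency: omega_UH^2 = omega_pe^2 + omega_ce^2
omega_pe = np.sqrt(n * e**2 / (eps0 * m_e))
omega_ce = e * B0 / m_e
omega_UH = np.sqrt(omega_pe**2 + omega_ce**2)

print(f"v_A = {v_A:.3e} m/s, v_ms = {v_ms:.3e} m/s (the 'fast' wave)")
print(f"omega_UH = {omega_UH:.3e} rad/s (-> omega_pe as B0 -> 0)")
```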
When one goes through the derivation, analogously to the ion acoustic wave, one arrives at the electrostatic ion cyclotron wave dispersion relation: we can see that this reduces to the ion acoustic wave in the limit (equivalently ). So what happens to the ion acoustic wave when the propagation angle is exactly , i.e. the wave is exactly perpendicular to the magnetic field? In this case, keeping finite electron mass, it turns out that the compression/expansion of the normal ion acoustic wave is unimportant because the electron motion is constrained. Starting with the electron and ion equations of motion, it turns out that there is an oscillation at the lower hybrid frequency:

We now move on to considering electromagnetic waves in magnetized plasmas, first for the waves with . First, if then there is no change to the plasma response, and we have the Ordinary ('O') Wave: which is the same as the unmagnetized result. On the other hand, if then there is a drift which alters the plasma response. Generally the wave's electric field is taken as elliptically polarized in the plane perpendicular to . Working through the math is somewhat involved, but at the end of the day the dispersion relation is which includes the previously-encountered upper-hybrid frequency; this is the Extraordinary ('X') wave. The last set of waves are the electromagnetic waves with . In general the wave's electric field can lie anywhere in the plane perpendicular to the initial magnetic field. This results in two solutions, which are elliptically polarized waves with right-hand or left-hand polarization, and which are respectively the R wave and the L wave.

If we consider a weakly-ionized gas, then the diffusion of plasma species is dominated by collisions with the neutral atoms. This is the simplest diffusion case, so we start with it. We begin with the momentum equation (for an arbitrary plasma species): if we consider a steady-state equilibrium system, then the term is zero. We take . If is small or is large then we can likewise neglect the convective derivative term as a small term, in which case, where we have also used . We can simplify this expression by defining the mobility: and diffusion coefficients: so that the total flux of a plasma species is a generalized Fick's law: of course if or then this expression reduces to the typical Fick's law from classical physics. To start off with, we rewrite the continuity equation in terms of the particle flux: It is clear that the particle flux can cause time-varying concentrations of plasma species. If we consider a hydrogen ion - electron plasma, with just two species, the immediate question is whether the diffusion process violates quasi-neutrality. We note that both the mobility and diffusion coefficients depend inversely on the mass, and so naïvely one might ask what stops the electrons from all diffusing away and leaving an ion-only plasma. The answer is that an induced positive potential on the high-density regions of plasma creates an electric field which enhances ion diffusion and suppresses electron diffusion. The plasma will quickly charge to essentially force . This process is known as ambipolar diffusion. We can derive the spontaneously-generated ambipolar electric field by setting the diffusion rates for electrons and ions equal to each other: where we have taken . We can solve this equation for the ambipolar electric field: with definitions for the (electron and ion) mobility and diffusion coefficients given in the previous section.
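As a small sanity check on these definitions, the sketch below assumes the standard forms of the mobility and diffusion coefficient (mu = |q|/(m nu) and D = kT/(m nu), which is presumably what the stripped expressions above contain) and verifies the implied Einstein relation D/mu = kT/|q|:

```python
# Minimal check of the mobility and diffusion coefficient definitions,
# assuming the standard forms mu = |q|/(m nu) and D = kT/(m nu).
e = 1.602e-19        # elementary charge [C]
m_e = 9.109e-31      # electron mass [kg]

T_e = 2.0 * e        # electron temperature: 2 eV, in joules (illustrative)
nu = 1.0e9           # electron-neutral collision frequency [1/s] (illustrative)

mu_e = e / (m_e * nu)      # electron mobility [m^2 / (V s)]
D_e = T_e / (m_e * nu)     # electron diffusion coefficient [m^2 / s]

# Einstein relation: D/mu = kT/|q|, i.e. 2 V for a 2 eV plasma
print(D_e / mu_e, T_e / e)   # both print ~2.0
```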
We can also derive the common resulting diffusion as: which is simply Fick's Law with a new ambipolar diffusion coefficient:

We consider the simple application to diffusion in a slab geometry. If the ambipolar diffusion coefficient is constant, then the continuity equation becomes: We can solve this via a separation of variables technique. Take solutions of the form: plugging this into the continuity equation, we get that: which we can rearrange to get: We note that the left-hand side depends only on and the right-hand side depends only on , so by the separation of variables technique they must both be equal to a common constant. We can therefore solve them individually. Starting with the time equation, we set the constant equal to for reasons that will be apparent later. This equation can be simply solved: The plasma decays exponentially in time, with time constant . Next, we consider the solution in the spatial () direction. The equation to solve is: which has sinusoidal solutions: We require that, for a plasma extending to , the solution must satisfy so we can immediately set and . We also now have the requirement that: is the timescale for the plasma to decay by ambipolar diffusion. Combining these results, the solution for the plasma density as a function of time and space is:

We revisit the problem of diffusion in weakly-ionized plasmas but now introduce a magnetic field, which will have the effect of reducing the diffusion. Of course, this is the main goal of magnetic confinement fusion! In the direction parallel to , there is no effect on the diffusion rates. If we take , then: For the perpendicular direction, we have to return to the fluid momentum equation: Assuming that the collisions are fast enough that the velocity derivative term (left hand side) can be neglected, and also assuming an isothermal plasma, the two components are: which can be simplified immediately to: We can solve these for via substitution of . This gives us: and similarly for : We notice that the last two terms are the drift and the diamagnetic drift, respectively, which are important generally but not for this problem. The first two terms are the standard mobility and diffusion terms, but because of the factor on the left-hand side the coefficients are reduced from the no-field case:

We give a brief qualitative discussion of the difference between diffusion in partially- and fully-ionized plasmas. The previous sections dealt with diffusion in partially-ionized plasmas, in which both electrons and ions diffused via collisions with the background neutral particles, and we did not care about the concentrations of neutrals. In the case of a fully-ionized plasma, the collisions which matter are ion-ion, electron-electron, or ion-electron / electron-ion. These can be lumped into like-particle and unlike-particle collisions. In the case of a Coulomb collision between two identical (like) particles, it turns out that the individual particle velocities can be changed quite significantly in the collision - i.e. they can be reversed, and a collision can change the velocity vector direction by . However, in a magnetized plasma, the important quantity is actually the particle's guiding center. It turns out that the 'center of mass of the guiding centers' does not change in like-particle collisions. This means that like-particle collisions cannot lead to diffusion. All diffusion in fully-ionized plasmas therefore comes from unlike-particle collisions.
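A quick numerical aside on the magnetized diffusion result above (a sketch assuming the reduction factor takes the standard form 1/(1 + omega_c^2 tau^2) with tau = 1/nu, consistent with the discussion; the parameters are illustrative):

```python
# Suppression of cross-field diffusion by a magnetic field, assuming
# D_perp = D / (1 + (omega_c * tau)^2) with tau = 1/nu.
e, m_e = 1.602e-19, 9.109e-31

T_e = 2.0 * e     # 2 eV, in joules (illustrative weakly-ionized plasma)
nu = 1.0e8        # electron-neutral collision frequency [1/s] (illustrative)
B = 0.1           # magnetic field [T]

D_par = T_e / (m_e * nu)              # field-free (or parallel) diffusion
omega_c = e * B / m_e                 # electron cyclotron frequency
D_perp = D_par / (1.0 + (omega_c / nu)**2)

print(f"omega_c/nu = {omega_c / nu:.0f}")
print(f"D_parallel = {D_par:.2e} m^2/s, D_perp = {D_perp:.2e} m^2/s")
# Even a modest field suppresses cross-field transport by ~(omega_c/nu)^2
# when omega_c/nu >> 1, which is the point of magnetic confinement.
```

We now return to collisions in fully-ionized plasmas.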
In a collision between an electron and an ion, for instance, there can be net momentum exchange between the species, which leads to diffusion. The ion itself is much less affected because of its large mass, of course, but at the end of the day momentum conservation requires that both species will diffuse based on these unlike-particle collisions.

We present a simple and intuitive derivation of the plasma resistivity, which is of course an important quantity. From the momentum equations, we know that the rate of momentum transfer between electrons and ions is: We can also write this another way as follows. The momentum exchange for two interpenetrating fluids with relative flow will be proportional to the relative velocity, , the scattering species density ( ), the electron density , and the scattering force, which is proportional to the two charges ( ). There is also going to be a constant of proportionality, so in total: Setting these two expressions for the momentum exchange equal to each other, we get an expression which reduces to: The constant of proportionality is the plasma resistivity. We now need to come up with an expression for the electron-ion collision rate. The simple derivation is as follows. Consider a collision between an electron and an ion. We can define this as the case where the initial kinetic energy is equal to half the potential energy at the initial impact parameter: There is some degree of arbitrariness, by about a factor of 2, in this definition. More rigorous derivations exist but are much lengthier. Anyways, we can write the impact parameter as: A simple expression for the scattering cross section is thus: and the electron-ion collision rate can be written: It turns out that in many real plasmas, small-angle collisions actually dominate Coulomb interactions, or are at the very least important and cannot be neglected. We will see this more rigorously later. The relative importance of small- and large-angle scattering is characterized by the Coulomb logarithm, and the electron-ion collision rate is enhanced by : We can now write down an expression for the plasma resistivity: Further, if the electrons have a Maxwellian distribution then and this becomes: Since the Coulomb logarithm is very insensitive to plasma conditions, we can see that the plasma resistivity is mostly dependent on the electron temperature. We also note that the resistivity decreases (conductivity increases) at high temperature, which places limits on processes like Ohmic heating.

We must treat the collision of two charged particles: the classical Coulomb collision, in which the two particles interact only through the electrostatic potential: In the lab frame we can define the kinematics by the positions and velocities of the two individual particles: Of course each of the previous four equations is in three dimensions, generally, and we must simplify from 12 equations before this is a tractable problem. The first step is to notice that conservation of momentum: suggests a new coordinate system moving with the center of mass velocity: which is a constant of the motion, plus the relative velocity: Of course we can transform back to the original lab frame coordinates via: The motion in the center-of-mass frame is simply the relative motion: where we have introduced the reduced mass: The problem is now equivalent to a single particle of mass moving around a central potential at the center of mass. At this point we will need to introduce additional constraints via conserved quantities, in particular energy and angular momentum.
One can derive these rigorously, but for simplicity: One nice result of conservation of angular momentum is that we can see that the motion is entirely in a plane; we can therefore simplify from three to two dimensions for the remainder of the derivation. We can write the conserved quantities in terms of the original velocity and impact parameter (relative to the center of mass). Finally, we use cylindrical coordinates for this problem: and we remember that: so that the conservation of energy and momentum, respectively, give us that: the second equation can be used to eliminate in the first to get: where we have defined the impact parameter as: The next task is to solve for the scattering angle as a function of the impact parameter and relative velocity. If we define the point of closest approach by an angle and distance we can see via geometric arguments that We will solve for using our previously-derived equation of motion, starting with: with a change of variables from to in the last step. Next we need to use: we can plug our previous relation for into this equation and then into the expression for to get that: Actually evaluating this integral requires a non-obvious trick substitution (see, e.g., Freidberg). Omitting some algebra, we end up with:

Now that we have the general result in the previous section for Coulomb collisions, the immediate question is how often a particle in a plasma, a 'test particle', collides with other particles. In this derivation, which follows along with Freidberg Section 9.3, we explicitly consider test electrons but the results can be applied generally. First, we can simply write the change in the test electron's momentum as: where in the last step we have essentially defined the electron-ion collision frequency as a rate of momentum loss. In general the cross section is not a constant and we in fact must rewrite this as: where the momentum loss must be integrated over all possible impact parameters , scattering angles in the plane perpendicular to the scattering, and also over the scattering body distribution, in this case the ions. The first task is to evaluate and then we must tackle the integral. If the initial velocity is in the direction in the center-of-mass frame, then we can write the change in momentum as: In terms of the scattering angle used previously we can see by intuition that the change in -directed momentum will be and based on what we previously derived for Coulomb scattering: Now we can tackle the integrations over impact parameter and :

There is an important point to be made here. In a naïve sense one might expect to take the integral over the impact parameter to infinity since the Coulomb interaction has infinite range, and we have previously argued that long-range interactions are fundamental aspects of a plasma. However, taking the integration to infinity here would lead to a logarithmic divergence, which cannot be allowed. We can make a simple physical argument that as the impact parameter is taken to the large limit, it cannot exceed the Debye length because at that point the plasma will screen the Coulomb interaction. We therefore typically take . We typically use the definition such that . In magnetically-confined plasmas we typically have , and this quantity is called the Coulomb logarithm. It is essentially a metric of the relative importance of small- and large-angle collisions. In the regime the behavior is dominated by small-angle collisions, the classical plasma regime.
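A minimal numerical sketch of this quantity may be useful (assuming ln Lambda = ln(lambda_D / b_90), with the standard Debye length and a rough 90-degree impact parameter b_90 ~ e^2 / (4 pi eps0 * 3T); the factor-of-2 arbitrariness noted earlier applies to b_90, and the parameters are illustrative):

```python
import numpy as np

# Constants (SI)
e, eps0 = 1.602e-19, 8.854e-12

# Illustrative magnetic-fusion-grade plasma (arbitrary choices)
n_e = 1e20           # electron density [m^-3]
T_e = 1.0e4 * e      # 10 keV, in joules

# Debye length: lambda_D = sqrt(eps0 T / (n e^2))
lam_D = np.sqrt(eps0 * T_e / (n_e * e**2))

# Rough 90-degree impact parameter (order-of-magnitude definition)
b_90 = e**2 / (4.0 * np.pi * eps0 * 3.0 * T_e)

ln_Lambda = np.log(lam_D / b_90)
print(f"lambda_D = {lam_D:.2e} m, b_90 = {b_90:.2e} m")
print(f"ln(Lambda) = {ln_Lambda:.1f}")   # ~20, in the 10-20 range quoted
```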
If then we are in a moderately- or strongly-coupled plasma, which is more applicable to ICF plasmas. We now use this in the expression for the collision frequency: we now have to tackle the integration over target velocities, which is best done in spherical coordinates: in which case we can write that: and returning to the expression for the collision frequency, some algebra reveals that it reduces to with velocities normalized to the ion thermal value: and . We have also introduced a dimensionless integral: This can be evaluated in terms of the error function, after which appropriate limits can be taken. We skip the details and go straight to the result: Using this to complete the expression for the collision frequency, we get: For electron-ion collisions in particular we can take the limits and , in which case: One can also examine the expression for the collision frequency and come up with a hierarchy of various species colliding with each other, in terms of the mass ratio :

In the previous sections we have often explicitly or implicitly assumed that the particles obey a Maxwellian distribution, but in many plasmas of interest this is not the case, which leads to full-fledged kinetic theory. First we discuss some properties of the distribution function. In general the particle distribution can be a function of full phase space: position , velocity , and time : One can obtain functions such as the density via integration over part of phase space: In this particular case we sometimes use a normalized distribution function: Of course the most famous and commonly-used distribution function is the Maxwellian: with and . The basic problem of kinetic theory is to define how the distribution function of a plasma changes with time. One can derive this simply by taking the chain-rule total time derivative of : The first term is simply the explicit time dependence of . The second through fourth terms are observed to be: and the fifth through seventh terms are, using , simply Overall, the total change in time of the distribution function is due to external interactions, which we call 'collisions', and now we can write the Boltzmann Equation: Of course in plasmas one often has the situation that the forces present are simply due to the particle's Coulomb interaction, and furthermore, collisions can often be neglected in hot plasmas. In this limit we obtain the Vlasov Equation: If there are collisions with neutral atoms, then one can use the Krook collision term: Another familiar limit is that of Coulomb collisions, for which we get the Fokker-Planck Equation: this is too complex to delve into at this level.

One of the most important and famous results of kinetic theory is Landau damping, or more generally collisionless wave-plasma coupling. In this case we consider electron plasma oscillations in an initially uniform plasma with . We also want to treat the wave perturbatively, i.e. let: the first-order Vlasov equation for electrons is: we will assume infinite-inertia ions and use plane wave perturbations in the direction, so that: so the Vlasov equation becomes, to first order, Now we have to use Poisson's Equation: Using these two results we can eliminate to get: If is a Maxwellian, then we can easily separate the three coordinates and reduce this to a one-dimensional integral: We notice immediately the pole at . While the pole is not necessarily on the contour of integration if is complex, it matters nevertheless.
In the simplest limit of large phase velocity and weak damping, the pole lies near the real axis and we can take the contour of integration as along the axis except for a small semi-circle around the pole. In this case, by the residue theorem, The first term in square brackets ends up being the electron plasma wave dispersion relation: see Chen section 7.4 for a derivation of this. The interesting part is the second term, which serves as a small and imaginary correction to the dispersion relation in the limit of small k: Or rearranging, and using again the smallness of the second term in parentheses: Essentially, we can see that this derivation has introduced damping to the electron plasma wave. The damping depends on the wavenumber, and most critically on the shape of the distribution function where the particle velocity matches the wave's phase velocity. At this point there is coupling predicted between the wave and the plasma particles. If the distribution function has negative slope at the phase velocity, then we can see that the wave is damped and transfers energy to the particles. On the other hand, if the slope there is positive, the wave is actually unstable and grows in amplitude.

One can use the refractive index of a plasma, or dispersion, as a density measurement via interferometry. Consider the following scheme, shown in Fig. 6.1, where we stick an unmagnetized plasma in one leg of an interferometer (Mach-Zehnder scheme shown). In this case, the detectors will measure the phase shift between the two legs, so we must calculate the phase shift due to the plasma. Consider the source as a monochromatic source of electromagnetic radiation that obeys: Remember the back-of-the-envelope rule Hz, with . The dispersion relation for electromagnetic waves in a plasma is which leads to a refractive index We know that we can write the phase shift for propagating waves as So the difference between plasma and vacuum (with ) is: Based on our initial assumption that we can use the binomial expansion: Using the definition of the plasma frequency: we can write the phase shift as: The phase shift is directly proportional to the line-integrated electron number density.

Now consider a somewhat different system, where we have a broadband source of radiation that traverses a plasma of scale length and known density to a detector. Given some information, for example the difference in arrival time between radiation of frequencies and , what is the length ? We start off by stating: so for two different frequencies: The electromagnetic radiation propagates at the group velocity: so we need the dispersion relation. We know that for electromagnetic radiation in an unmagnetized plasma: taking the derivative gives us: Using the original dispersion relation to substitute for , We typically take the limit , which can be checked via the back-of-the-envelope relation with in Hz and . In that limit we can binomially approximate the last term. Returning to the original expression between and , we can rewrite it as: since the terms are small, when we substitute in for the group velocities we obtain: at which point one must take some values for , , , and and plug them into this expression.

Next we consider using electromagnetic radiation as a probe of magnetic field conditions. This topic is similar to the first considered in Diagnostics, in that we will calculate a phase shift due to the plasma and consider that it is being compared to the vacuum value in an interferometer. We consider the electromagnetic source of radiation as producing monochromatic plane-polarized waves, which propagate in the plasma such that .
In this case the two fundamental solutions are the R and L waves, or right- and left-hand circularly polarized waves. The initial wave can be written as a superposition of these two, following Hutchinson's notation: where we are denoting the polarizations by from the basic: for circular polarization. Also note that we have taken without loss of generality. In general, some distance later, the electric field will be given by: where the two circular polarizations can have different wave numbers. Using , which we can rewrite using our notation as: there is a phase difference between the RHCP and LHCP components, which leads to a rotation of the plane of polarization as the wave propagates along . In general, for a wave propagating in a plasma with small between and , the dispersion relation can be written as: the trick here is to write taking the square root, and approximating the second term as small ( is small): taking the difference between the two polarizations: We now plug in for and : in the limit of small , i.e. : we note that the polarization plane rotation is . The Faraday rotation angle is typically defined as The Faraday rotation angle is linearly proportional to the propagation distance , the plasma density (through the plasma frequency), and the initial magnetic field strength through the cyclotron frequency. In the common event that the plasma is not perfectly uniform, a WKB-style analysis can be adopted: some algebraic manipulation and our previous definition of lead to: the Faraday rotation depends on both density and magnetic field profiles. To measure one, the other must be known a priori.

One of the most common and simplest probes for magnetic fields is just a coil. The schematic is shown in Fig. 6.3. The magnetic field is sampled via flux through a coil, which induces an EMF and voltage. We know from Faraday's law that where is the induced voltage, is the number of turns, and is the area of one turn. Typically these probes are used with an RC circuit integrator, in which case the post-integrator voltage is:

The Hall probe is shown schematically in Fig. 6.4. The concept is that a current, , is passed through the probe. There is a Lorentz force on the current carriers. Since these are typically electrons, there is an induced charge separation and electric field as shown in the figure. A Hall probe utilizes this effect by measuring the potential induced across the probe, which can be related to the magnetic field and current. In most plasma experiments, however, there is too much pickup for this to be a useful technique.

The last simple diagnostic considered in this section is actually a current diagnostic, but the concept is very similar to the magnetic probe. Consider the schematic in Fig. 6.5. In this case the coil surfaces are perpendicular to the magnetic field induced by the current . By Ampère's law the induced flux on the probe is where is the number of turns and is the area of each turn. We can thus write the voltage induced as Once again we typically would pair a Rogowski coil with an integrating circuit, in which case the final signal is:

The last diagnostic considered is the important and basic Langmuir probe. Consider an electrically conducting probe placed into the plasma at arbitrary potential. Fig. 6.6 shows a schematic of the probe potential. A sheath develops around the probe; over the sheath the probe's potential is Debye screened. In this example the probe potential is less than the plasma potential. The first important case is when the probe potential is much less than the plasma potential.
This could arise, for example, if the probe is grounded and the plasma is positively charged due to ambipolar diffusion. If the difference in potentials is greater than a few times , then the thermal electrons do not have enough energy to overcome the potential and the probe collects only ions. This is called the ion saturation current. We can estimate this as: where is the sheath density, is the ion drift velocity, and is the probe area. If we take the drift velocity as and the sheath density as given by Debye screening: where we have approximated the sheath boundary as . In this case the ion saturation current is given by: By measuring the ion saturation current, one can determine the plasma density if the electron temperature is known.

Of course one must also have a way to measure the electron temperature. This can actually be done with the same probe! Consider qualitatively what happens as the probe potential is increased. When it reaches the plasma potential there is no potential difference to drive a current, so the current must go to zero. As the potential is further increased, the probe potential exceeds the plasma potential. Now ions are repelled by the probe potential and electrons are preferentially collected instead. The rate at which this transition occurs depends on the temperature, since the probe is essentially sampling an increasing fraction of the electron distribution as the voltage is increased. The derivation is omitted here, but the qualitative behavior of the curve is shown in Fig. 6.7.

In this section we attempt to give a brief motivation of the basic design of a tokamak and its general properties. In previous sections we have considered only linear machines. If one wishes to build a fusion reactor, then end losses in a linear pinch (Z or ) are too great to overcome. The magnetic mirror was proposed as a way around this limit, yet it turns out that the mirror scheme is not enough to overcome the end losses and the absolute theoretical maximum for a mirror machine is . For magnetic confinement, we must design a scheme with no end losses. In the 1950s Soviet physicists Sakharov and Tamm proposed a scheme in which a linear pinch plasma is bent into a torus, thus eliminating the end losses. The basic geometry is shown in Fig. 7.1; here we call the major radius of the torus and the minor radius. If one imagines using coils around the torus to create a toroidal magnetic field, the confinement scheme will be very similar to the pinch MHD equilibrium. The basic field geometry thus far is shown in Fig. 7.2. In the top half of the figure we show the toroidal current-carrying coils which create a toroidal field . This field configuration is equivalent, for this simple analysis, to a single infinite current-carrying wire along the machine's central axis. We know that the magnetic field due to an infinite wire can be derived from Ampère's Law: The important point to take away from this is that . So there is a magnetic field gradient. We know from Section 2.3 that a gradient in the magnetic field causes particle drifts: In our simple torus geometry, the result of the drift is shown in Fig. 7.3. Because the drift velocity depends on the sign of the charge, the electrons and ions drift in opposite directions. Working through the cross product with the right hand rule, the reader can verify that ions drift up and electrons drift down in this geometry. After some time has passed, the electron and ion drifts will necessarily induce charge separation.
Positive charge will gather at the top of the machine and negative charge at the bottom. This is shown in Fig. 7.4. Naïvely one might expect that this will simply continue until the induced electric field cancels out the drift and thus quiescence can be achieved. There is, however, a major problem for this design. As shown in Fig. 7.4, this induced electric field is perpendicular to the toroidal field. There is therefore a drift, which is outwards as shown in the figure. As we know from Section 2.2, both electrons and ions will drift in the same direction at the same velocity due to this drift. The entire plasma is therefore shoved radially outwards. Numerical estimates show that this process will happen very fast (sub-ms) relative to necessary energy confinement times in tokamaks (s). One must therefore prevent the drift from causing charge separation.

The solution is shown in Fig. 7.5. If we add a toroidal current within the plasma itself, it will create a poloidal field and the overall field lines will wrap around the torus like stripes on a barber's pole (or screw threads - indeed this is the toroidal analog of the screw pinch). In magnetic confinement machines we are typically limited to , for instance , in which case we know that transport along the field lines is much faster than perpendicular to them. In this case then, we know that the electron conductivity along the lines of force is very high. With the field lines wrapped around the plasma, the top and bottom of the machine are essentially electrically shorted and no potential difference can be generated between them. The drift can therefore generate some currents within the plasma but it cannot cause a charge separation, and we therefore avoid the disastrous drift.

The immediate question for any confinement scheme is how well it can perform relative to the demands set by fusion ignition and energy gain. We therefore follow a simple derivation for tokamaks by J. Lawson [Proc. Phys. Soc. B70, 6 (1957)]. We obviously consider a DT plasma confined in a tokamak. The fusion power which can be used for self-heating is generated by the alpha particles from the DT fusion reaction: The total fusion power is: where the factor of comes from , MeV, and is the fusion reactivity. Using that the total density is and that the total pressure is : The fusion power must overcome loss mechanisms. In this simple analysis, we consider only bremsstrahlung and heat conduction losses. The bremsstrahlung power derivation won't be treated until later, so we simply give the result: where is a constant of proportionality. The last term to derive is a heat conduction loss term. In general, this would be Unfortunately the thermal conductivity is not generally well-known, and neither is the temperature gradient at the edge of the plasma. It is therefore convenient to define a general 'energy confinement time' such that this loss term is where characterizes the loss timescale for thermal energy due to thermal conduction, or other loss mechanisms not considered. So, in general the power balance equation is: where we also include an arbitrary heating term , which is generally taken to be in a steady-state ignited plasma. Substituting in the terms we have derived above: We can further simplify the constant coefficients: this can be simplified algebraically to obtain a requirement on the pressure-confinement product: Since the fusion reactivity is just a function of the temperature, the right-hand side is a function of temperature only.
This is the Lawson Criterion. For a given temperature that we can achieve in a fusion tokamak, the Lawson Criterion tells us what pressure-confinement product we must achieve. For example, a detailed calculation reveals that if one can operate a tokamak at 15 keV then one must have atm-s. Typical energy confinement times are of order one second, which means that typical pressures in reactors will have to be of order ten atmospheres. A plot of the minimum is shown in Fig. 7.6.

Recall the work done previously in Section 3.3. Now that we have added the toroidal current, and thus a poloidal field, we can see that the field configuration in the tokamak is at a most basic level the toroidal analog of the screw pinch: In general, with two of the three unknowns specified (e.g. and ), MHD determines the third (e.g. ). However, because of the toroidal geometry there are several complications that arise, which are qualitatively discussed in the following sections.

First we consider the hoop force on a tokamak plasma. The situation is shown schematically in Fig. 7.7. The field of interest in this case is poloidal, and this is therefore applicable to Z-pinch or screw-pinch plasmas. The plasma is shown in red in the figure, the current flows in or out of the page, and the poloidal field thus wraps around the plasma as shown. Because of the toroidal geometry, the magnetic field induced has a dependence and is therefore stronger on the inside of the plasma relative to the outside. On the other hand, the surface area on the inner half, , is smaller than the outer surface area since they are going like . But the total pressure on one half surface of the plasma due to the magnetic pressure will be and the quadratic dependence on the magnetic field wins. The force on the inside of the plasma is thus greater than on the outside, and this effect creates a net 'Hoop' force directed outwards. The name comes from the fact that this is analogous to the tension in a circular current-carrying wire loop.

The next situation to consider is analogous to the pressure exerted on the walls of a tire. Consider that the plasma surface is an isobar. In that case the force exerted on the surface of the plasma can be thought of as . Since and by assumption, this creates a net force, which is also directed outwards. This is shown schematically in Fig. 7.8. This force does not depend on what the magnetic field is doing, and thus will occur in toroidal plasmas of all types (Z-pinch, -pinch, screw-pinch).

The next toroidal force balance problem comes from the dependence of the toroidal field. Because this requires a toroidal field, it occurs in the -pinch or screw-pinch problems only. This situation is shown in Fig. 7.9. The coils carry a current and the plasma has an induced opposite current since it is diamagnetic. We assume that the plasma current is carried in an infinitesimal sheath on the outer surface. We remember Ampère's Law in integral form, which we can apply over a toroidal loop: Just outside the plasma, the toroidal field is: and just inside the inner plasma surface: The net force on a plasma surface is given by: depending on the jump in field and the surface area. We now need to consider the force on the inside of the plasma relative to the outside. Inside, is smaller and the surface area is also smaller, but because of the quadratic dependence on and the fact that , the total force on the inside of the plasma is actually greater than on the outside. This is analogous to the hoop force.
The net result is an outward-directed net force on the plasma. No matter which type of pinch we choose, the above effects all act to push the plasma radially outwards in a toroidal machine or tokamak. Obviously this needs to be avoided in a confinement or fusion machine, since the plasma must remain isolated from the first wall. The problem of stabilization thus arises.

One way to do this is to consider the effect of the wall. This is shown schematically in Fig. 7.10. In particular, consider a perfectly conducting (superconducting) wall. In this case we know that no magnetic flux can penetrate the wall. The initial situation is shown in the top half of the figure. The poloidal field is greater on the inside edge of the plasma, and due to the effects described in the previous sections we know that there is a net force on the plasma directed radially outwards. After some time, the plasma will be moved closer to the wall (bottom half of the figure). Since magnetic flux cannot penetrate a superconductor, the flux piles up on the outside of the plasma, increasing the strength of the poloidal magnetic field, which counteracts the outward force, until equilibrium is achieved. This seems like an excellent stabilization property. Of course, the problem is that a superconducting first wall is technologically infeasible. In a real machine the first wall will be resistive. In this case the magnetic flux can penetrate the wall and resistively dissipate. The wall effect might slow down the outward drift of the plasma, but it will not indefinitely stop it and provide stable confinement.

We note that many of the above effects depend on the fact that the poloidal field is stronger on the inside edge of the plasma. This is shown schematically in Fig. 7.11. A simple way to counteract this is to add a constant vertical magnetic field, shown below the schematic. This adds to the poloidal field on the plane and tends to increase the field strength on the outer part of the plasma while decreasing it on the inner edge. This counteracts the toroidal force effects and can negate the radial force. If we need to counteract effects like the tire tube force, which do not depend on a poloidal field, the vertical field could be further increased until the field on the outer edge is actually greater than the field on the inner edge, thus providing confinement. This technique is used in many real machines and is part of the design for ITER.

This section gives a brief overview of potential heating mechanisms in tokamak systems. We recall that the Lawson criterion requires that we heat the plasma to keV temperatures before fusion self-heating can take over. We have already seen that for several reasons we will need to have a toroidal current in the plasma. Since the plasma has finite resistivity, the toroidal current will provide some Ohmic heating: the power is proportional to the square of the toroidal current and the plasma resistivity. This looks great but we recall that the plasma resistivity is . This means that Ohmic heating becomes less efficient as the plasma is heated. A detailed analysis reveals that Ohmic heating is great for initial heating of the plasma but cannot get a plasma to fusion-relevant temperatures on its own. For initial start-up, most machines use a transformer scheme (with the plasma as the secondary winding) to drive current inductively for Ohmic heating.

Another method for heating a plasma is to use beams of energetic neutral atoms.
For example, one could use an accelerator to generate beams of energetic D which impinge on the plasma. As the atoms hit the plasma they are ionized, and then slow quickly via Coulomb collisions, transferring their energy to both plasma species. The required ion energy is set by the temperature and also by the areal density of the tokamak plasma - ideally the ions stop near the center of the plasma. Neutral beams are required so that the particles do not undergo cyclotron motion until they are collisionally ionized within the plasma itself. The scheme to create a neutral beam is a typical electrostatic ion accelerator plus a beam neutralizer. In current machines, beam energies are typically of order 100 keV. A reactor-scale machine would require energies of order 1 MeV. Neutral beam heating is expected to play a part in any magnetic fusion machine.

Another very common scheme is to use high-frequency electromagnetic waves to heat the plasma. Generally the wave frequency is chosen to match either the electron or ion cyclotron resonance. We know that the electron cyclotron frequency rule-of-thumb is GHz/T. So for a toroidal field in the machine of T, we must produce radiation at GHz, which is technically challenging. Ion cyclotron heating, on the other hand, is at much lower frequency MHz/T. So in a 10 T field with (D resonance), the ICH system must produce MHz radiation. The absorption is collisionless so these processes are still efficient at high temperature. This is simple, but it turns out that the launching antenna must be very close to the plasma surface, which is technically challenging. ECH and/or ICH is expected to play a role in any fusion machine.

A related question is how we drive current in a tokamak. In the Ohmic heating section we discussed the use of a pulsed transformer to induce large toroidal currents which can Ohmically heat the plasma early in the discharge. However, we must have a scheme to drive steady-state toroidal currents in a fusion machine. Neutral beams and ECH/ICH can be used to do this but it turns out the efficiency is low. The problem of 'current drive' is an active area of investigation. One way to do this is to launch waves at the lower-hybrid resonance such that the of the radiation is toroidally directed. It turns out that these waves tend to collisionlessly Landau damp on the electrons, which drives a toroidal current. Detailed analysis shows that this is a relatively efficient way to induce a steady-state toroidal current but it cannot provide quite enough. A real fusion machine will depend on a large amount of 'bootstrap' current for steady-state operation.

We finish this section with a brief qualitative description of an effect of toroidal geometry on single-particle motion. We recall that the field strength exhibits a dependence, which is shown schematically at the top of Fig. 7.12. Consider a particle which is confined by cyclotron motion to one line of force. Because a tokamak is a screw-pinch style confinement scheme, as we have argued previously, the lines of force wrap around the plasma. If we project a single line of force into the poloidal plane, though, it will be circular. This is shown by the dashed line in Fig. 7.12. Naïvely, one might expect that the particles will simply gyrate around the line of force all the way around the machine. However, as the particle moves from the outside of the plasma towards the inside of the machine the magnetic field strength increases.
If the particle has non-zero transverse momentum, then it will exhibit mirror-machine-style confinement and reflection. Recall Section 2.8, where we derived that: any particles at the location of with pitch angle greater than will be confined to a banana-style orbit as shown in green. Of course particles have a distribution (typically Maxwellian) and thus there is a wide range of banana orbits. This is shown very schematically in Fig. 7.13. The green orbit is a highly confined particle, i.e. with high pitch angle. The blue orbit is moderately confined, and the red orbit is barely confined. Of course there are also unconfined orbits, which transit the entire machine. In a real machine, the banana orbits will affect transport properties in the plasma. We also note that, depending on the initial direction of the velocity, the banana orbits are qualitatively different. See Fig. 7.14. This can also affect transport properties.

In this chapter we treat a few scattered plasma physics topics related to inertial confinement fusion. For simplicity we begin with the cross section for emission of a photon with energy by an electron with velocity interacting with an ion via the Coulomb force. This is the Kramers cross section: The next task is to calculate the spectral emissivity per unit mass. To do this we have to integrate the Kramers cross section over the electron distribution: where the lower limit of integration is the minimum electron velocity needed to create a photon of energy , given by . Using the Kramers cross section together with the Maxwellian distribution: we can write the spectral emissivity as After doing the integral, only the lower limit remains and we substitute . With some algebraic cleanup, the emissivity expression becomes: The total power radiated into is obtained by integrating over all frequencies: We note that ignoring the constants gives a proportionality:

In many ICF-relevant situations the electron heat conduction is critically important for the implosion dynamics. The classical heat conductivity is derived from the Fokker-Planck equation: where we have introduced , with the Coulomb log for electron-ion collisions. The term represents a formal treatment of electron-electron collisions, which are neglected in this derivation. Here we will assume a temperature gradient in the direction, and consider a perturbative approach in which the distribution function becomes: where is the isotropic equilibrium Maxwellian and is the pitch angle between the electron velocity and the temperature gradient. We are also implicitly assuming that the electron mean free path is much less than the gradient length scale: The 0-th order distribution function is written: If we insert the perturbed distribution function into the Fokker-Planck equation and keep only terms that are first order in , we get: We need another equation at this point. We use the plasma property of quasi-neutrality to require that the current due to vanishes in steady state, or: using the previous expressions for and , and glossing over some algebra, this yields: It is an interesting aside that we have shown here that a gradient in temperature generates an electric field. Anyways, this can be replaced in the equation for which depends only on derivatives of and can be shown to be: It follows from this that the heat flux is: where we have defined the thermal conductivity: where the function takes into account the effects of electron-electron collisions, which are more important in low-Z plasmas.
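Before noting the properties of this result, a minimal numerical sketch may make the strong temperature scaling concrete. The prefactor below is purely schematic (it is not the precise Spitzer coefficient, and the Z-dependent factor is omitted); only the kappa proportional to T^(5/2) scaling is the point.

```python
# Schematic Spitzer-style conduction: kappa(T) = kappa0 * T^(5/2).
# kappa0 is a placeholder constant, NOT the precise Spitzer coefficient.
def heat_flux(T_eV: float, dT_dx: float, kappa0: float = 1.0) -> float:
    """Conductive heat flux q = -kappa(T) dT/dx, in arbitrary units."""
    return -kappa0 * T_eV**2.5 * dT_dx

# Doubling the temperature at fixed gradient boosts the flux by 2^(5/2):
q1 = heat_flux(100.0, -1.0e3)
q2 = heat_flux(200.0, -1.0e3)
print(q2 / q1)   # ~5.66
```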
We note that the conductivity is primarily a function of temperature, and that a detailed look at the velocity integral reveals that electrons with end up doing the brunt of the work in conducting thermal energy. It is therefore important to revisit our assumption that was a small perturbation to . If we write , the real heat flux must satisfy this inequality. We can define a free-streaming flux: physically, this represents the entire thermal energy of the plasma moving with the thermal velocity, and represents an upper limit on the flux. The real conductivity thus must satisfy: The heat flux cannot exceed of the free-streaming limit. Generally in hydrodynamics simulations one takes: which is the minimum of the Spitzer conductivity and the free-streaming flux limit multiplied by a flux limiter , which is generally taken in the range . Detailed Fokker-Planck simulations reveal that the Spitzer conductivity is accurate when , i.e. in the long gradient length limit. As the gradient length scale is decreased, flux-limited conduction must be used. This is particularly important in calculations of direct-drive ICF targets, where the thermal conduction between the critical surface and the ablation front is fundamental to the entire dynamics of the problem, and the temperature gradient is very steep between the hot plasma at the critical surface and the cold plasma at the ablation front.

In Section 4.2 we derived the dispersion relation for electromagnetic waves in unmagnetized plasmas, which we reproduce here: We can immediately see that there is a cutoff (, no propagation) when the wave frequency is equal to the electron plasma frequency. Since the latter depends on the plasma density, this presents a limit on density: where is the critical density. Electromagnetic waves can only propagate for densities . If we consider a laser beam propagating from low-density plasma towards a higher density region ( ), at some point the laser energy must be either scattered or absorbed in the plasma. The best situation for ICF is when the laser wave is absorbed collisionally, or more colloquially via the inverse bremsstrahlung mechanism. If we treat the laser electric field as: and write the electron momentum equation as: In previous derivations of EM waves in plasmas we neglected collisions, i.e. we had assumed . In this problem we keep a finite collision frequency, since that is the effect we are looking for. If we assume that and keep only first-order terms: where we have implicitly assumed that the ion inertia is infinite over the timescales of this process. Since the field is harmonic, we can use Fourier analysis on this equation: we can rearrange this equation to acquire the plasma response: which leads to the current: We now go back to Maxwell's equations for a moment: Combining these and using Fourier analysis, we get that: Using the vector identity for the left hand side: and using Fourier analysis we get that Now we need to substitute into this equation for the current to get: If the damping is weak, we can use the binomial approximation: Clearly we now have wave damping via the imaginary part of the dispersion relation. If we take as imaginary with weak damping (i.e. small ), we can isolate the imaginary part of the above: From the undamped dispersion relation we can approximate: we follow Kruer and write the damping coefficient as: the energy damping length for inverse bremsstrahlung is simply .

We start off a series of qualitative descriptions of non-collisional absorption processes with resonance absorption.
For p-polarized light, the schematic is shown in Fig. 8.1. At the top we show a schematic density profile where the plasma has a gradient from 0 past the critical density, with . Next we show the p-polarized wave propagating in the plasma in green. The cross-hatches represent the electric field's direction. The wave is propagating at an angle relative to the density gradient. Due to refraction, the wave bends, and reaches a maximum in density at where is the gradient length scale. However, since the electric field has a component in the direction there will be an evanescent component in the direction. This reaches the surface of critical density, and can resonantly couple to the plasma there; this is illustrated in the bottom part of Fig. 8.1. The resonant field coupling to the plasma at the critical density can excite electron plasma waves, or 'plasmons'. For more detail see Kruer.

Now we discuss a few parametric instabilities. The essential process is three-wave coupling: An incident laser photon decays into two daughter waves, 1 and 2. These can be several combinations of scattered photons, plasmons (electron plasma waves), and phonons (ion acoustic waves). First up is the two-plasmon decay, which is shown schematically in Fig. 8.2. The dispersion relation for light waves is shown in green and the dispersion relation for electron plasma waves, 'plasmons', is shown in blue. A single incident photon can decay into two plasmons via the frequency and matching conditions. As shown, one plasmon propagates in the direction of the original photon and the other goes backwards. Because the electron plasma wave dispersion relation is so much flatter than the light wave, by a ratio , we can approximate . Since , this implies that the two-plasmon decay can occur only very close to a plasma density of , the 'quarter-critical' density.

The next possibility we consider is called 'Stimulated Raman Scattering' (SRS). The schematic for this decay is shown in Fig. 8.3. Once again the green curve shows the light wave dispersion relation, and the blue curve shows the plasmon dispersion relation. An incident photon, labelled 'inc', decays into a scattered photon and a plasmon. The situation shown is a backscattered photon, but this is not necessarily the case. SRS-scattered photons can go forwards, sideways, etc. The maximum density at which SRS can occur is quarter-critical. However, it can also occur at much lower densities, and is thus distinct from TPD.

The last parametric decay instability is Stimulated Brillouin Scattering (SBS). Once again, this is shown schematically in Fig. 8.4. Yet again green denotes light waves. SBS consists of an incident laser photon decaying into a backscattered photon and an Ion Acoustic Wave (IAW), or 'phonon'. These are shown in red. Because IAWs obey and are low frequency, the wave matching conditions require that the scattered photon is backscattered and the phonon goes nearly in the same direction as the original wave to conserve momentum . The SBS process can occur anywhere in under-dense plasma.

We close with a brief discussion of hot electrons. In ICF implosions, it is important to keep the fuel adiabat low so that it is easily compressible to very high densities during the implosion. This means that any unintended heating of the fuel early on in the target illumination is undesirable, since it will raise the adiabat. The relevance here is that many of these processes can generate very high-energy electrons.
Any time a laser-plasma instability generates plasmons, or electron plasma waves, those waves can Landau damp on the plasma electrons. When the wave is very strong, there is a non-linear wave-breaking process which efficiently couples wave energy to particle energy and generates a two-temperature electron distribution. The hot electrons can have a temperature of many tens of keV. These electrons are energetic enough to penetrate the fuel layer in an implosion target and preheat it. It is therefore important to understand and control laser-plasma instabilities (LPI) to avoid preheat, in addition to controlling LPI to increase laser coupling to the target.

We now discuss a few miscellaneous but important topics of relevance to ICF. Field generation in plasmas is an important topic, particularly the spontaneous generation of fields in ICF. If we recall Ohm's law from the discussion of magnetohydrodynamics, there is a pressure-gradient term. This discussion will follow that of Haines. To first order we can write, from Ohm's law,
$$ \mathbf{E} = -\frac{\nabla p_e}{e\,n_e}. $$
Physically, if there is a pressure gradient in the plasma, the electrons quickly leave until an electrostatic field is set up to mitigate this; see also the previous discussion of ambipolar diffusion. Other terms in the generalized Ohm's law are omitted for clarity. This also ignores any fields from incident lasers in an ICF scheme. Using Faraday's law we can get the magnetic field,
$$ \frac{\partial \mathbf{B}}{\partial t} = -\nabla\times\mathbf{E} = \nabla\times\frac{\nabla p_e}{e\,n_e}, $$
and using $p_e = n_e T_e$,
$$ \frac{\partial \mathbf{B}}{\partial t} = \frac{\nabla T_e \times \nabla n_e}{e\,n_e}. $$
This is the most basic mechanism for spontaneous magnetic field generation: if the temperature and density gradients are not parallel in a plasma, then a magnetic field is generated. This is also known as the Biermann battery.

In a situation where a heavy fluid is supported against a gravitational potential by a lighter fluid, it is clearly energetically, and thus dynamically, favorable to exchange material of the heavier fluid for the lighter one. Doing this from first principles is non-trivial. However, consider two homogeneous fluids with an interface at $z = 0$ and densities $\rho_1$ (lower fluid) and $\rho_2$ (upper fluid). Stealing a result from Drake, the equation of motion for perturbations of this interface can be solved by taking the perturbation in each fluid to vary as $e^{ikx \pm kz}e^{\gamma t}$. Plugging in, we find the growth rate, which we (Drake) have called $\gamma$:
$$ \gamma = \sqrt{A k g}, $$
where $g$ is the gravitational acceleration, $k$ is the wave number, and $A$ is the Atwood number:
$$ A = \frac{\rho_2 - \rho_1}{\rho_2 + \rho_1}. $$
Obviously $A \le 0$ is the stable situation, and larger $A$ corresponds to larger differences in density across the interface and thus higher growth rates.

In this chapter we simplify a few of the previously-derived relations to 'back of the envelope' or Formulary-style equations which can be used to quickly calculate important plasma properties. The NRL Plasma Formulary (available online) is also an excellent resource for this sort of calculation. We start off with the plasma frequency. Way back in Section 1.3 we derived the general result for the electron plasma frequency:
$$ \omega_{pe} = \sqrt{\frac{n_e e^2}{\epsilon_0 m_e}} = \sqrt{\frac{4\pi n_e e^2}{m_e}}, $$
where the first expression is in the MKS system of units (generally used in this work) and the second is in the CGS system (commonly used in older physics materials, and also in ICF). We also note that often $f = \omega/2\pi$ is a more desired quantity than $\omega$, e.g. if one wishes to compare to a source of electromagnetic radiation. We can therefore generate Table 9.1. The ion plasma frequency is not as widely used; however, it is useful to note that $\omega_{pi} = \omega_{pe}\sqrt{Z m_e/m_i}$, and therefore we can write the ion plasma frequency in terms of the electron values, where the fraction out front comes from the square root of the ion-to-electron mass ratio (for protons, $\sqrt{m_e/m_p} \approx 1/43$).
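As a quick sanity check on the Rayleigh-Taylor growth rate, a short sketch with illustrative numbers of our choosing (water over air at a 1 cm wavelength under Earth gravity):

```python
import numpy as np

def rt_growth_rate(rho_heavy, rho_light, k, g):
    """Classical Rayleigh-Taylor growth rate gamma = sqrt(A * k * g)."""
    A = (rho_heavy - rho_light) / (rho_heavy + rho_light)
    return np.sqrt(A * k * g)

k = 2 * np.pi / 0.01                          # 1 cm wavelength, in m^-1
gamma = rt_growth_rate(1000.0, 1.2, k, 9.81)  # kg/m^3, m/s^2
print(f"gamma = {gamma:.1f} 1/s")             # ~78 1/s: e-folds in ~13 ms
```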
Recall the result of Section 1.2:
$$ \lambda_D = \sqrt{\frac{\epsilon_0 T_e}{n_e e^2}}. $$
In addition to the constants, we note that the Debye length depends on both temperature and density. Once again, we can give the result in both systems of units for convenience, though in both equations we absorb Boltzmann's constant into the temperature and take $T$ in eV:
$$ \lambda_D \simeq 7.43\times 10^{3}\sqrt{T_e/n_e}\ \mathrm{m}\ (n_e\ \mathrm{in\ m^{-3}}) \simeq 7.43\times 10^{2}\sqrt{T_e/n_e}\ \mathrm{cm}\ (n_e\ \mathrm{in\ cm^{-3}}). $$

For the gyroradius, refer back to Section 2.1 for derivations. In this section, for convenience, we always take $B$ in Tesla. We recall the result, in MKS units,
$$ r_L = \frac{m v_\perp}{|q| B}, $$
which, taking $v_\perp$ to be the thermal velocity, leads to the following:
$$ r_e \simeq 2.4\times 10^{-4}\,\frac{\sqrt{T}}{B}\ \mathrm{cm}, \qquad r_i \simeq 1.0\times 10^{-2}\,\frac{\sqrt{T}}{B}\ \mathrm{cm}\ \ (\mathrm{protons}), $$
where the results are given in cm, but the conversion to meters is trivial since we have used 'convenient' units for both $T$ (eV) and $B$ (Tesla). From the previous results we can immediately write that
$$ \omega_c = \frac{|q| B}{m}. $$
The cyclotron frequency only depends on the particle type and the magnetic field. By far the easiest values to remember are
$$ f_{ce} \simeq 28\ \mathrm{GHz/T}, \qquad f_{ci} \simeq 15.2\ \mathrm{MHz/T}\ (\mathrm{protons}); $$
one simply has to multiply by the magnetic field strength in Tesla to get the gyrofrequencies.

Here we will use the results of Section 5.8. For a thermal distribution, the expression simplifies to the Formulary-style results
$$ \nu_e \simeq 2.9\times 10^{-6}\,n_e \ln\Lambda\, T_e^{-3/2}\ \mathrm{s^{-1}}, \qquad \nu_i \simeq 4.8\times 10^{-8}\,n_i \ln\Lambda\, T_i^{-3/2}\ \mathrm{s^{-1}}\ (\mathrm{protons}), $$
where of course $T$ is in eV and the only difference between systems of units is the numerical coefficient. Also recall the scaling result
$$ \frac{\nu_i}{\nu_e} \sim \sqrt{\frac{m_e}{m_i}}, $$
so the ion rate is down by roughly the square root of the species mass ratio, and there is an assumption buried in here that we are working in a hydrogen plasma with $T_e \approx T_i$. For non-hydrogenic ions, the ion collision rate is enhanced by a factor of $Z^4$.

For a Maxwellian distribution function, we know from basic physics that the mean thermal velocity is
$$ \bar{v} = \sqrt{\frac{8T}{\pi m}}. $$
This leads to the simple expressions
$$ \bar{v}_e \simeq 6.7\times 10^{7}\sqrt{T_e}\ \mathrm{cm/s}, \qquad \bar{v}_i \simeq 1.6\times 10^{6}\sqrt{T_i}\ \mathrm{cm/s}\ (\mathrm{protons}), $$
which are obviously in CGS units. Again we have used eV. The extension to MKS is trivial. We just derived the collision frequencies and thermal velocities. We can simply write that
$$ \lambda_{mfp} = \frac{\bar{v}}{\nu}, $$
which leads to the following for electrons,
$$ \lambda_e \simeq 2.3\times 10^{13}\,\frac{T_e^2}{n_e \ln\Lambda}\ \mathrm{cm}, $$
and for ions (protons),
$$ \lambda_i \simeq 3.3\times 10^{13}\,\frac{T_i^2}{n_i \ln\Lambda}\ \mathrm{cm}; $$
all of these expressions are in CGS units with our usual convention of eV.

We use the derivation presented in Sec. 4.6. In particular,
$$ c_s = \sqrt{\frac{Z T_e + 3 T_i}{m_i}} $$
for the ion sound speed. We can simplify this somewhat by writing
$$ c_s \simeq \sqrt{\frac{Z T_e}{m_i}}, $$
where we have to be somewhat careful about what exactly is meant by the temperature entering this formula. Anyways, plugging in some numbers gets us
$$ c_s \simeq 9.8\times 10^{5}\sqrt{Z T_e/\mu}\ \mathrm{cm/s}, $$
which is obviously in CGS units, with $\mu = m_i/m_p$. Once again, eV.

The drift velocity is
$$ \mathbf{v} = \frac{\mathbf{E}\times\mathbf{B}}{B^2}, $$
where we assume the electric and magnetic fields are perpendicular. A handy formula, with $E$ in V/m and $B$ in T, is
$$ v\ [\mathrm{m/s}] = \frac{E\ [\mathrm{V/m}]}{B\ [\mathrm{T}]}. $$

The plasma parameter is defined as the number of particles in a Debye sphere:
$$ N_D = \frac{4\pi}{3}\,n\,\lambda_D^3; $$
this is useful to know, since for classical plasma behavior we typically require $N_D \gg 1$. For simple calculations,
$$ N_D \simeq 1.7\times 10^{9}\,\frac{T^{3/2}}{\sqrt{n}} $$
in CGS units. The plasma beta is defined as the ratio of thermal to magnetic pressure in a plasma,
$$ \beta = \frac{8\pi n T}{B^2}, $$
and is an important parameter for magnetically-confined plasmas. In our convenient units,
$$ \beta \simeq 4.0\times 10^{-19}\,\frac{n T}{B^2}, $$
where as usual $n$ is in cm$^{-3}$, $T$ in eV and $B$ in T.

From Statistical Mechanics we know that the Fermi energy is
$$ E_F = \frac{\hbar^2}{2 m_e}\left(3\pi^2 n_e\right)^{2/3}; $$
in CGS units, this reduces to
$$ E_F \simeq 3.7\times 10^{-15}\,n_e^{2/3}\ \mathrm{eV}. $$
We can also calculate the Fermi pressure:
$$ P_F = \frac{2}{5}\,n_e E_F. $$
We just discussed the Fermi energy. In some ICF-relevant plasmas, the thermal energy can be comparable to or less than the Fermi energy, in which case we call it a degenerate plasma. It is useful to characterize these plasmas by
$$ \Theta = \frac{T_e}{E_F}; $$
so, for quick back-of-the-envelope calculations in CGS units,
$$ \Theta \simeq 2.7\times 10^{14}\,\frac{T_e}{n_e^{2/3}}; $$
the result, $\Theta$, is a dimensionless quantity. In a similar vein, it is useful to think of the plasma coupling. We want to compare the thermal energy of plasma ions to the strength of the Coulomb coupling between them. We define
$$ \Gamma = \frac{Z^2 e^2}{a\,T_i}, \qquad a = \left(\frac{3}{4\pi n_i}\right)^{1/3}, $$
so, once again for CGS units,
$$ \Gamma \simeq 2.3\times 10^{-7}\,\frac{Z^2 n_i^{1/3}}{T_i}. $$
We previously discussed the fact that the cutoff or critical density is very important for laser-plasma interactions, and thus for ICF. Recall, from Section 8.2.1,
$$ n_c = \frac{\epsilon_0 m_e \omega_0^2}{e^2}. $$
It is most useful to write this in terms of the laser wavelength $\lambda$.
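Several of these back-of-the-envelope formulas are easy to wrap in a tiny Python helper; a sketch using the coefficients quoted above (T in eV, n in cm^-3; our own module, not part of the notes):

```python
import math

def debye_length_cm(T_eV, n_cm3):
    """Debye length, lambda_D ~ 743 * sqrt(T/n) cm."""
    return 7.43e2 * math.sqrt(T_eV / n_cm3)

def plasma_freq_Hz(n_cm3):
    """Electron plasma frequency f_pe = omega_pe / (2 pi) ~ 8980 * sqrt(n) Hz."""
    return 8.98e3 * math.sqrt(n_cm3)

def plasma_parameter(T_eV, n_cm3):
    """Particles in a Debye sphere, N_D = (4 pi / 3) * n * lambda_D^3."""
    return (4 * math.pi / 3) * n_cm3 * debye_length_cm(T_eV, n_cm3) ** 3

# Example: a 1 keV, 1e21 cm^-3 coronal ICF plasma.
print(f"lambda_D = {debye_length_cm(1e3, 1e21):.2e} cm")
print(f"N_D      = {plasma_parameter(1e3, 1e21):.0f}")   # ~1700: still classical
```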
Then, getting rid of the constants, we get
$$ n_c \simeq \frac{1.1\times 10^{21}}{\lambda_{\mu m}^2}\ \mathrm{cm^{-3}}; $$
the result is in CGS, and we have written the wavelength in units of microns for convenience. Thus, for a Nd:glass laser at $\lambda = 1.054\ \mu$m, the critical density is $n_c \approx 1.0\times 10^{21}\ \mathrm{cm^{-3}}$. If we frequency triple it ($\lambda = 0.351\ \mu$m), then the critical density is nine times higher, $n_c \approx 9\times 10^{21}\ \mathrm{cm^{-3}}$.
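The corresponding two-line calculation in Python (the coefficient is the fit just quoted; the wavelengths are the usual 1-omega and 3-omega Nd:glass values):

```python
def critical_density(lambda_um):
    """Critical density in cm^-3 for a laser wavelength given in microns."""
    return 1.1e21 / lambda_um**2

for lam in (1.054, 0.351):   # 1-omega and 3-omega Nd:glass
    print(f"lambda = {lam:5.3f} um -> n_c = {critical_density(lam):.2e} cm^-3")
```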
Paper report (Open Access): Arabidopsis chromosome 2 sequence. Todd Richmond. © BioMed Central Ltd 2000. Received: 8 February 2000; Published: 27 April 2000.

Members of the Arabidopsis Genome Initiative, primarily from The Institute for Genomic Research, have completed sequencing one of the first two plant chromosomes.

Significance and context

Arabidopsis thaliana is the model organism of choice for modern plant biologists. Its small genome, the excellent genetic and physical maps of the genome and the lack of large amounts of repetitive DNA made it the first choice for plant genome sequencing. In 1996, a multinational organization, the Arabidopsis Genome Initiative (AGI), was formed to coordinate the worldwide effort to sequence the first higher plant. Made up of labs from the United States, Europe and Japan, AGI set a goal for the completion of the Arabidopsis genome by 2004. The members divided up the five chromosomes between them and began sequencing. Advances in sequencing and computing technology have pushed forward their initial timetable, however, and two papers report the completion of the first two plant chromosomes. Lin et al. report the sequencing of chromosome 2, which represents approximately 15% of the Arabidopsis genome. In a companion paper in the same issue of Nature, Mayer et al. report the sequencing of chromosome 4, which represents about 17% of the genome. These sequences are two of the largest pieces of DNA sequence ever assembled and together represent almost a third of the Arabidopsis genome. The complete sequences of chromosomes 2 and 4 offer unique insights into large-scale genomic organization, plant heterochromatic DNA, non-coding regions, gene duplication events, and gene family organization. The paper summarizes years of work by hundreds (if not thousands) of people in dozens of labs spread over three continents.

The key features of chromosome 2 are as follows. The long arm of chromosome 2 is 16.0 Mb, the short arm 3.5 Mb. Nearly 50% of the sequence codes for protein, with a total of 4,037 predicted proteins. The average gene is about 4.4 kb in length and contains an average of 4.6 exons. The largest gene contains 52 predicted exons and is 50% identical to a human protein. The actual or potential cellular function for approximately 52% of the genes can be predicted on the basis of similarity to other characterized proteins. Only 33% of the predicted genes are represented among the 45,000 available Arabidopsis expressed sequence tags (ESTs). After classifying the predicted proteins into functional classes, the largest functional groups were genes involved in regulatory function and signal transduction (including DNA-binding proteins, transcription factors and protein kinases). The most frequent protein domains were leucine-rich repeats, protein kinases and zinc-finger domains. More than 60% of the predicted gene products (2,542) on chromosome 2 have significant similarity to another Arabidopsis protein. The products of most of the genes that have paralogs (83%) within the Arabidopsis genome are more similar to their paralog than to proteins from other completed sequences. Of these, 593 are found in tandem duplications that range in size from two to nine genes. The same phenomenon was seen in the analysis of chromosome 4. Lin et al. present a number of graphs that show the distribution of features along chromosome 2, summarizing predicted gene density, EST density, tandem duplications and repetitive elements.
As expected, gene density decreases as you approach the centromere and the amount of repetitive DNA increases. There is a table of the transposable elements found on chromosome 2, broken down into class, subclass and family. Unfortunately, there is no equivalent table for other types of repetitive DNA element. Most interesting, however, are the discoveries that can only come from the analysis of the genome as a whole. Of these, the large duplication events between chromosomes are the most unexpected. The largest is a 4.6 Mb region in chromosomes 2 and 4, in which 39% of the genes (430 out of 1,100) are duplicated between the two chromosomes. Another duplication, 0.7 Mb long, occurs on chromosomes 1 and 2, in which 33% of the genes (57 out of 170) are duplicated. These duplications account for part, but not all, of the high percentage of genes with paralogs. Chromosome 2 has another unusual feature. It is well established that individual genes can be transferred from organelles to the nucleus. It is nonetheless surprising to find a stretch of 270 kb of mitochondrial sequence in the genetically defined centromere region of chromosome 2. This inserted sequence is larger than any sequence previously reported to have undergone organelle-to-nuclear transfer (almost 75% of the mitochondrial genome). The sequence identity (99%) suggests that this transfer event was very recent. When the sequence of the Landsberg Arabidopsis ecotype is made available later this year, it will be interesting to see if this same insertion event is present. Information on Arabidopsis and its genome sequence is available from The Arabidopsis Information Resource (TAIR), MIPS Arabidopsis thaliana database (MATDB), Kazusa Arabidopsis data opening site (KAOS) and TIGR's Arabidopsis thaliana annotation database. Sequence-based, genetic and physical maps of the Arabidopsis genome are available from the Cold Spring Harbor Laboratory. This paper, and its companion, just begin to scratch the surface of the overwhelming amount of information contained in the sequence of two plant chromosomes. Comparing this paper with the chromosome 4 paper, the interest of the primary authors is clear. While Lin et al. emphasize chromosome structure and organization, Mayer et al. appear more interested in protein-coding sequences and the functional classification of the predicted proteins. Once the entire Arabidopsis genome (ecotype Columbia) is completed and the complete sequence of the Landsberg ecotype is released to researchers, a flood of papers is likely to overwhelm the plant community. In a sense, this is somewhat frustrating, as we will be forced to rely on others to analyze and summarize the important information. Not all researchers agree on what data is important, how to present it or how to interpret it. As a case in point, both the chromosome 2 and the chromosome 4 papers make reference to the large duplication event shared by the two chromosomes. Lin et al. report that this duplication is 4.6 Mb long, with several translocations or inversions, encompassing a total of 1,100 genes. Mayer et al. report that the two chromosomes share four blocks of conserved sequence, two of which are inverted, totaling 2.5 Mb in length. Which interpretation is correct? Another source of frustration is the lack of consistency in reporting results. While Lin et al., reporting on chromosome 2, place more emphasis on the overall chromosome structure and the organization and distribution of various elements, Mayer et al. 
place more emphasis on the genes, predicted functions and structural components. It makes this reporter wish that the two teams had coordinated with one another, divided up the various areas of interest and done a complete report on those areas for both chromosomes. For now, we must be satisfied with a partial analysis. The upcoming completion of the Arabidopsis genome will truly be a landmark. For the first time, we will have the complete genetic blueprint for a flowering plant. The initial data, especially from Lin et al., suggest that all plants have a common set of genes for many functions. It is already clear that many of the genes found in other plants are present in Arabidopsis, even when comparing across the monocot/dicot and angiosperm/gymnosperm divisions. But until another plant, such as rice, is completely sequenced, it will be difficult to evaluate the size of that set of common genes. With the Arabidopsis genome expected to be finished by the end of the year, we can then move on to the more complex area of functional genomics and begin to elucidate the function of the estimated 25,000 proteins that make up a flowering plant.
About the analyses and illustrated trends

Canadian Migration Monitoring Network trends are displayed only for species at each site that meet the following criteria:
- The species is a regular migrant with minimal stopover (i.e. excluding partial, irruptive and non-migrants, and roosting or staging species).
- Standardized count protocols are followed consistently through time.
- At least 75% of the species' migratory period is well-sampled in 2/3 or more of all years.
- Relatively few individuals are detected before or after clearly recognizable migratory influxes, such that trends are minimally affected by wintering or locally breeding species recorded regularly outside the migratory period.
- The species is regularly observed within and between seasons (averaging 10+ detections/season and detection on 5+ days/season).

Trends for species meeting these criteria can be interpreted as representing change in population size within a large area of the station's catchment area (the portion of breeding range sampled by that station). Trend maps show the estimated population trend over the most recent ten-year period for sites that meet the criteria for that species.

About seasonal abundance

Seasonal abundance graphs are plots showing the phenology and abundance of all species regularly sampled at the selected location. Migration windows show the boundaries of the spring and fall migration windows used in analyses (only for species with trends displayed on the website). The bounds of spring and fall migration windows were restricted to those days of the year when the station operated during at least two-thirds of total years in operation, and are in some cases restricted further (e.g. to omit likely summer residents from analysis). The graphs plot the daily mean log(species count), the percent of years the species was present on each day, the percent of years the station was in operation on each day, and the spring and/or fall migration window boundaries.

Analysis methods for Annual Indices and Population Trends

Long-term trends in count were estimated independently for each species, site and season using a Bayesian framework with Integrated Nested Laplace Approximation (R-INLA, Rue et al. 2014) in R (version 3.1.3; R Core Team 2014). We estimated trends using log-linear regression, which included 1) a continuous effect for year (i) to estimate log-linear change in population size over time, 2) first- and second-order effects for day of year (j) to model the seasonal distribution of counts, and 3) hierarchical terms to account for random variation in counts among years and among days. The number of observation days each year was included as an offset to account for variation in daily effort. Schematically,

log E[count_ij] = β_0 + β_1 year_i + β_2 day_j + β_3 day_j² + γ_i + η_j + log(ndays_i),

where γ_i is a first-order autoregressive (AR1) random effect for year to account for temporal autocorrelation among years, and η_j is an independent and identically distributed (IID) hierarchical term to account for random variation in counts among days of the year. For monitoring stations with more than one site (e.g., Long Point Bird Observatory), the regression also included a fixed site effect, as well as interactions between site and the first- and second-order day of year effects.
While we recognize that an AR1 random effect for day of year nested within year might have been more appropriate to account for temporal autocorrelation among daily counts, we found that specifying the random day effect as IID had no noticeable effect on trend bias or on the probability of estimating a precise trend (probability that the simulated trend fell within confidence limits of the estimated trend; T. L. Crewe, unpublished data). Specifying the random day effect as IID did, however, significantly increase the speed of analysis, and reduced the probability of errors using INLA. We assumed a Poisson distribution of counts, unless the proportion of 0-observation days across years was >= 0.65. This cut-off is somewhat arbitrary, and should be examined in greater detail, but see Crewe et al. 2016. For both data distributions, year estimates and 95% credible intervals were back-transformed to annual rates of population change using 100*(exp(estimate) - 1). Trends were calculated using the full dataset, as well as for all 10-year subsets, to estimate 10-, 20-, 30-year (etc., where appropriate) trends for comparison among years over time. Trends are presented as %/year with lower and upper 95% credible intervals, which suggest that there is a 95% probability that the true trend falls within that range. A posterior distribution was also calculated to estimate the support for an increasing or declining trend. A value near 0.5 suggests equal probability of an increasing and a declining trend (little evidence for a change in migration counts over time), whereas a posterior probability near 1 suggests strong support for the observed change in counts. The posterior probability can be used as a pseudo p-value, such that trends with a posterior probability > 0.9 could be considered to have strong support. Annual indices of population size were estimated as the mean daily count from the posterior distribution of the above model. Plots of annual indices show 95% credible intervals (vertical lines), and the black line and grey shading display a loess fit across indices and upper and lower credible intervals.

Crewe, T. L., P. D. Taylor, and D. Lepage. 2016. Temporal aggregation of migration counts can improve accuracy and precision of trends. Avian Conservation and Ecology 11(2): 8. <http://dx.doi.org/10.5751/ACE-00907-110208>
R Core Team. 2014. R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria.
Rue, H., S. Martino, F. Lindgren, D. Simpson, and A. Riebler. 2014. INLA: Functions which allow to perform full Bayesian analysis of latent Gaussian models using Integrated Nested Laplace Approximation. [online] URL: http://www.r-inla.org.
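For readers without R-INLA, the fixed-effects core of this model (year trend, quadratic day-of-year seasonality, log-effort offset, Poisson errors) can be sketched with an ordinary GLM in Python. The random year/day terms and credible intervals are omitted, and the data frame here is entirely synthetic with hypothetical column names:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical daily-count table: one row per station-day.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "count": rng.poisson(5, 300),
    "year":  rng.integers(2000, 2015, 300),
    "doy":   rng.integers(90, 160, 300),
    "ndays": rng.integers(20, 40, 300),   # observation days that year
})

model = smf.glm(
    "count ~ year + doy + I(doy**2)",
    data=df,
    family=sm.families.Poisson(),
    offset=np.log(df["ndays"]),
).fit()

beta = model.params["year"]
print(f"trend = {100 * (np.exp(beta) - 1):.2f} %/year")  # back-transform
```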
In the Aug. 13 issue of the journal Nature, climate researchers including Jonathan Woodruff of the University of Massachusetts Amherst show that the frequency of intense hurricanes in the Atlantic Ocean over the last 1,500 years has been closely linked to long-term changes in the El Niño/Southern Oscillation (ENSO) and sea surface temperature. The finding could help with hurricane modeling and prediction in the future. Establishing the link between hurricane variability and climate change over these longer timescales “is a new viewpoint for us,” Woodruff explains. “There’s a randomness to hurricanes. But the fact that we can see trends that rise above that randomness is significant and a bit of a surprise. Our work indicates that hurricane activity has responded noticeably to past climate shifts. When considering future climate change over the next century, our results indicate that measurable changes in hurricane activity could occur, rising above the noise in the system.” A relationship between ENSO, sea surface temperature and hurricane activity is seen in modern times, Woodruff says, but the historical record based on ships’ logs and other observations is not long enough to assess variability on timescales longer than a few decades at best. “Given the possible effects of continued climate warming on intense tropical cyclone activity, it’s essential that we develop an understanding of how past climate change has affected tropical cyclone frequency, intensity and track on longer timescales,” the geologist says. “This work is another step forward in understanding the complex relationship between climate variability and Atlantic hurricane activity.” Woodruff and colleagues’ study shows that a statistical climate model and actual paleoclimate data cross-validate each other over the last 1,500 years during key intervals of climatic change. Specifically, UMass Amherst’s Woodruff and colleagues at Penn State and Woods Hole Oceanographic Institute prepared sedimentary reconstructions of hurricane-induced flooding, preserved in coastal ponds and salt marshes and collected as core samples from eight representative sites throughout the western North Atlantic, an approach known as paleotempestology. These environments are usually protected from the sea by barrier beach systems. They enjoy sustained quiet periods during which only fine-grained mud and organic materials build up on pond floors and marsh surfaces. But during hurricanes and other storms, these normally calm environments are overrun with ocean waves and storm surges that carry in coarser sand from the barrier beaches. The sedimentary record is thus one of fine-grained organic mud, interbedded with coarse-grained, storm-induced deposits. Such deposits serve as natural archives of past hurricanes, with storm reconstructions that can extend back for many thousands of years, Woodruff points out. Although still limited to a few reconstructions at present, he and colleagues have now observed statistically significant trends in tropical cyclone activity emerging from paleo-hurricane records. They compared these trends to data from a statistical model which independently predicted hurricane variability using paleo-reconstructions of climate factors known to influence hurricane activity, such as sea surface temperature, ENSO and the North Atlantic Oscillation. 
The model predicted similar trends to those observed in the paleo-storm reconstructions, with an observed decrease in hurricane activity during the "Little Ice Age" around 300 years ago, a time when sea surface temperatures were lower than today and El Niño events appear to have occurred more frequently. Likewise, a period of increased hurricane activity similar to present levels also occurred around 1,000 years ago during an interval known as the "Medieval Climate Anomaly," driven predominantly by increases in both sea surface temperature and the frequency of La Niña events. Woodruff says the new finding "is like so much in science―in hindsight it makes sense. When the evidence is supplied, it's simple enough to see the relationships, as in this case with the two independent records telling the same story. But until we had this evidence, things were much less clear."
Scientists from the I.P. Pavlov Institute of Physiology, Russian Academy of Sciences, have compared the intellectual abilities of chimpanzees with those of children from a nursery school in Koltushy near St. Petersburg. They asked both to build pyramids out of cubes of different sizes. Children at various stages of speech development, aged 2 to 7, and two chimpanzees, a female (aged 7) and a male (aged 11), took part in the experiment. The children and the apes were shown a pyramid and had to build an identical one. The only difference was that the task was explained to the children, while the apes were obviously told nothing and had to work out how to do it without any assistance. At first, the construction consisted of only two cubes: a big one and a small one. The number of cubes was then increased to nine. The cubes had different sizes and, in addition, the pile contained one unnecessary cube that the example pyramid did not. The scientists analyzed how long the children and the animals spent choosing each piece and how much time it took them to copy the whole pyramid. In the early stage, both the children and the apes took longer to complete the task as the number of cubes increased. However, they gained useful experience with time and started working more quickly. The chimpanzees built pyramids of two or three elements as quickly as children aged 3 or 4. When the number of cubes grew, the apes began to make mistakes and often could not solve the task themselves. This was especially true of the male chimpanzee: in most cases it needed help, and sometimes it simply refused to attempt the task. The female was younger and, perhaps for this reason, managed without any assistance more often. Both the apes and the children made mistakes once the number of cubes exceeded 4 or 5. Interestingly enough, success in completing the task depended strongly not on age but on the degree of speech development. Children who, even at the age of 6 or 7, could not properly describe what they had seen or recount an event solved the task not much better than the chimpanzees.
R809 Fall 2018 History of the Universe (Classical) Thursdays, 11:50–1:15, Sept. 20–Oct. 11 Instructor: Mark Dodge This course is an introduction to the classical view of time and the universe. Why do we have 24 hours in a day and 60 minutes in an hour? Why not some other number? Why do we have the calendar we have? What are planets, and why did they cause such confusion to the ancients? How did we get from the perfectly obvious idea that the sun goes around the earth to the weird idea that the earth goes around the sun? And why is Pluto not a planet anymore? This class is an introduction to the subject of cosmology, the study of the universe. It begins with the Babylonians and extends to the time of Isaac Newton. Almost no math—but lots of ideas! There will be hands-on demonstrations and plenty of conversation as we explore our understanding of the universe. Mark Dodge taught high school physics for 24 years in Arlington. He has been fascinated by astronomy since gazing through his first telescope when he was in seventh grade. Dodge is also fascinated by ancient cultures (he can babble on about Babylon) and how these ancient cultures still influence us today. This course is a collection of several of his most popular presentations from his high school teaching days.
The hot weather and dry lightning common in Florida between the end of March and the middle of June is often a recipe for wildfires. On March 23, 2018, the Moderate Resolution Imaging Spectroradiometer (MODIS) on NASA’s Aqua satellite detected smoke from multiple lightning-triggered fires in southwest Florida. The fires had not caused much damage to homes as of March 29, but they are likely the first flare-up in what the Florida Forest Service expects to be an active fire season. Much of the southern part of the state faces abnormally dry conditions, and there are many downed trees that are primed to burn after last summer’s hurricanes. NASA images by Jeff Schmaltz, LANCE/EOSDIS Rapid Response. Caption by Adam Voiland.
Rain dropped 40 hl of water onto a garden with an area of 8 ares. To what height did the water level rise?

Next similar examples:
- Children pool: The bottom of the children's pool is a regular hexagon with side a = 60 cm. The distance between opposite sides is 104 cm, and the height of the pool is 45 cm. A) How many liters of water can fit into the pool? B) The pool is made of a double layer of plastic film.
- Annual rainfall in our country averages 797 mm. How many m³ of water fall on average per hectare?
- Sand pile: A truck sprinkled sand into an approximately conical shape. Workers wanted to determine the volume (amount of sand) and therefore measured the circumference of the base and the length of both sides of the cone (over the top). What is the volume of the sand cone?
- How high can a vintner fill a keg with crushed red grapes if the grapes occupy 20 percent of its volume? The keg is cylindrical with a base diameter of 1 m and a volume of 9.42 hl. Start from the premise that fermentation will fill the keg.
- Cuboid and eq2: Calculate the volume of a cuboid with a square base and height 6 cm if the surface area is 48 cm².
- Triangular prism: Calculate the volume and surface of the triangular prism ABCDEF whose base is an isosceles triangle. The base is 16 cm, the leg 10 cm, and the base height v_c = 6 cm. The prism height is 9 cm.
- Axial section: The axial section of a cone is an equilateral triangle with area 208 dm². Calculate the volume of the cone.
- Cone A2V: The surface of a cone unrolled into the plane is a circular arc with a central angle of 126° and an area of 415 dm². Calculate the volume of the cone.
- Tetrahedral pyramid: Calculate the volume and surface area of a regular tetrahedral pyramid whose height is b cm and whose base edges are 6 cm long.
- Precipitation - meteo: Do you have a tip for calculating precipitation on land? For example, if 5 mm falls on 2 ha, how many cubic meters is that?
- Sphere slices: Calculate the volume and surface of a sphere, if the radii of two parallel cuts are r1 = 31 cm and r2 = 92 cm and their distance is v = 25 cm.
- Tetrahedral prism - rhomboid base: Calculate the area and volume of a tetrahedral prism whose base is a rhomboid with dimensions a = 12 cm, b = 70 mm, v_a = 6 cm, v_h = 1 dm.
- Calculate the volume of a pillar in the shape of a regular truncated tetrahedral pyramid, if its square bases have sides a = 19 and b = 27 and its height is h = 48.
- A circular cone of height 15 cm and volume 10598 cm³ is cut, at one third of its height (measured from the bottom), by a plane parallel to the base. Calculate the radius and circumference of the circular cut.
- Cuboid - volume and areas: The cuboid has a volume of 250 cm³, a surface of 250 cm² and one side 5 cm long. How do I calculate the remaining sides?
- One cube has an inscribed sphere and the other a circumscribed sphere. Calculate the difference of the volumes of the cubes, if the difference of their surfaces is 254 cm².
- Cube 5: The area of one face of a cube is 32 square centimeters. Determine the length of its edges, its surface and volume.
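A worked solution for the garden problem (our own, assuming the standard conversions 1 are = 100 m² and 1 hl = 0.1 m³): the water height is simply the volume spread over the area,
$$ h = \frac{V}{A} = \frac{40\ \mathrm{hl}}{8\ \mathrm{ares}} = \frac{4\ \mathrm{m^3}}{800\ \mathrm{m^2}} = 0.005\ \mathrm{m} = 5\ \mathrm{mm}. $$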
Wednesday, May 11, 2016

A quasiparticle collider

Mark Sherwin and an international team prove that basic collider concepts from particle physics can be transferred to solid-state research.

In the early 1900s, Ernest Rutherford shot alpha particles onto gold foils and concluded from their scattering properties that atoms contain their mass in a very small nucleus. A hundred years later, modern scientists took that concept to a new level, building the Large Hadron Collider in Switzerland to smash protons into each other, which led to the discovery of the Higgs boson. However, what worked for particles like the Higgs hasn't translated to solids -- until now. Experiments conducted by UCSB physicist Mark Sherwin and an international team prove that basic collider concepts from particle physics can be transferred to solid-state research. Their findings appear in the journal Nature.

"Ultimately, this approach might lead to the clarification of some of the most outstanding enigmas of condensed matter physics," said co-author Sherwin, director of UCSB's Institute for Terahertz Science and Technology and a professor in the Department of Physics. "This is a fundamentally new concept that could lead to better-designed modern materials. Our results also may one day provide a better understanding of important phases of matter such as those found in high-temperature superconductors."

Despite the fact that modern technology depends on knowing the structural and electronic properties of solids, a parallel to the atomic-level collider has been lacking in solid-state research. Within a solid, the most useful analogs to particles like protons are called quasiparticles. Think of them this way: if each person in a very large stadium is like an atom in a solid, then the audience doing the "wave" is akin to a quasiparticle.

Earlier experiments by the Sherwin group at UCSB have created quasiparticles called excitons -- pairs of electrons and holes (electron vacancies) bound by the electrical force between them -- and continuously accelerated them using laser beams that remain on during the entire process. But without short pulses of laser light, actual collision events were not previously observable as distinct flashes of light.

This new research employed a unique laser source at the terahertz high-field lab in Regensburg, Germany, which enabled the investigators to directly observe quasiparticle collision events. Since the quasiparticle exists for an extremely short amount of time, it was crucial to operate on ultrashort timescales. If one second were stretched to the age of the universe, a quasiparticle would only exist for a few hours.

The scientists produced collisions within excitons in a thin flake of tungsten diselenide. A light wave of the terahertz pulse accelerated the electrons and holes of the exciton within a period shorter than a single oscillation of light (1 terahertz means 1 trillion oscillations per second). The experiment demonstrates that only excitons created at the right time lead to electron-hole collisions, just as in conventional accelerators.
However, this process of recollision generates ultrashort light bursts that encode key aspects of the solid. These laboratory observations have been supported and explained by a quantum mechanical simulation performed by co-authors at the University of Marburg in Germany.

"These time-resolved collision experiments in a solid prove that the basic collider concepts that have transformed our understanding of the subatomic world can be transferred from particle physics to solid-state research," Sherwin said. "They also shed new light on quasiparticles and many-body excitations in condensed matter systems."
e-books in Nuclear Physics category

by Sinya Aoki - arXiv.org, 2010: From the table of contents: Introduction - Nuclear Forces; Phase Shift from Lattice QCD - Luescher's formula in the finite volume; Nuclear Potential from Lattice QCD; Repulsive core and operator product expansion in QCD; Concluding Remarks.

by Bo N. Nilsson - De Gruyter Open Ltd, 2015: The textbook includes more than a hundred exercises and solutions to applied problems suitable for courses in basic radiation physics. Each chapter begins with a summary of important definitions and relations useful for the subject.

by J. Pearson - University of Manchester, 2008: From the table of contents: Basic Terminology; Nuclear Properties; Nuclear Models; Collective Excitations (Nuclear Vibrations, Nuclear Rotations); Alpha-decay (Fine Structure of alpha-decay); Beta-decay; Gamma-ray Emission; and more.

by Neal O. Hines - United States Atomic Energy Commission, 1966: This book describes the environmental investigations that have been conducted with the aid of the atom since the first atomic detonation near Alamogordo, in 1945. The story is one of beginnings that point the way to a new understanding of the world.

- U.S. Congress, Office of Technology Assessment, 1995: While the focus of the study is on the TPX and alternate concepts, it also provides a history of the overall fusion energy program. With this context, the study identifies (but does not answer) some underlying questions that must be addressed.

by Jenny Thomas - University College London, 2000: These notes will provide introductory coverage of modern particle physics, complete coverage of nuclear physics principles relevant to modern-day nuclear physics usage, and comprehensive coverage of principles of particle detection and measurement.

by E. B. Podgorsak (ed.) - International Atomic Energy Agency, 2005: This book is dedicated to students and teachers involved in programmes that train professionals for work in radiation oncology. It provides a compilation of facts on the physics as applied to radiation oncology and it is useful to graduate students.

- International Atomic Energy Agency, 2012: This reference book for graduate students provides an introduction to nuclear fusion and its prospects, and features specialized chapters written by leaders in the field, presenting the main research and development concepts in fusion physics.

by Loyce McIlhenny - US Atomic Energy Commission, 1964: Nuclear energy is playing a vital role in the life of every man today. It is essential that all Americans gain an understanding of this vital force if they are to realize fully the myriad benefits that nuclear energy offers them.

by John M. Blatt, Victor F. Weisskopf - Wiley, 1952: A clear and cogent investigation of key aspects of theoretical nuclear physics by leading experts: the nucleus, nuclear forces, nuclear spectroscopy, two-, three- and four-body problems, nuclear reactions, beta-decay and nuclear shell structure.

by A. K. Chaudhuri - arXiv, 2012: Some concepts in relativistic heavy ion collisions are discussed. To a large extent, the discussions are non-comprehensive and non-rigorous. It is intended for graduate students who are intending to pursue a career in high energy nuclear physics.

by Maarten Golterman - arXiv, 2010: These notes provide a pedagogical introduction to the subject. Topics covered include an introduction, inclusion of scaling violations in chiral perturbation theory, partial quenching and mixed actions, chiral perturbation theory with heavy kaons.
by Feriz Adrovic - InTech, 2012: This book brings new research insights on the properties and behavior of gamma radiation, with studies from a wide range of options for gamma radiation applications in nuclear physics, industrial processes, environmental science, radiation biology, etc.

by Amir Zacarias Mesquita - InTech, 2012: A comprehensive review of nuclear reactor technology from authors across the globe. Topics include: Flow Instability in Material Testing Reactors; Decay Heat and Nuclear Data; Improving the Performance of the Power Monitoring Channel; and more.

by Nirmal Singh - InTech, 2011: The book Radioisotopes - Applications in Physical Sciences is divided into three sections, namely: Radioisotopes and Some Physical Aspects, Radioisotopes in Environment and Radioisotopes in Power System Space Applications.

by Niels Walet - UMIST, 2003: In these lecture notes the author shall discuss nuclear and particle physics on a somewhat phenomenological level. The mathematical sophistication shall be rather limited, with an emphasis on the physics and on symmetry aspects.

by C. Sharp Cook - Van Nostrand, 1961: The general coverage of the subject of atomic and nuclear physics provides the student who is planning to continue his studies of physics a reasonable base for an understanding of many of the advanced texts and courses that he encounters later.

by Howard Matis - CPEP, 2003: You don't have to be a nuclear physicist to understand nuclear science. The Wall Chart was created to explain to a broad audience the basic concepts of nuclear structure, radioactivity, and nuclear reactions as well as to highlight current research.

by Walter Pfeifer - arXiv, 2003: This work introduces the Interacting Boson Model, created in 1974 and then extended by numerous papers. Many-body configurations with s- and d-boson states are described, and creation and annihilation operators for bosons are introduced.

by S. Scherer - arXiv, 2002: This book provides a pedagogical introduction to the basic concepts of chiral perturbation theory and is designed as a text for a two-semester course on that topic. Various examples with increasing chiral orders and complexity are given.

by F. Smarandache, V. Christianto - InfoLearnQuest, 2007: The book covers a wide range of issues from alternative hadron models to their likely implications for New Energy research, including alternative interpretations of cold fusion phenomena. The authors explored some new approaches to particle physics.

by Jay Orear - Cornell University, 2004: Cornell Emeritus Professor Jay Orear discusses his relationship with his former professor and mentor Enrico Fermi. The book also includes discussions of Fermi by other scientists, most of whom presented their papers honoring Fermi's career.

by Joseph P. Hornak - Rochester Institute of Technology, 1999: Nuclear magnetic resonance, or NMR as it is abbreviated by scientists, is a phenomenon which occurs when the nuclei of certain atoms are immersed in a static magnetic field and exposed to a second oscillating magnetic field.

by Gennady Gorelik - American Institute of Physics, 2005: Sakharov, the father of the Soviet Union's hydrogen bomb, went on to struggle for human rights, peace and democracy, sacrificing his high position for exile and repression. His pilgrimage is explained by his biographer and illuminated with photos.

by Naomi Pasachoff - American Institute of Physics, 2005: Marie Curie opened up the science of radioactivity.
She is best known as the discoverer of polonium and radium and as the first person to win two Nobel prizes. Her radium was a key to a change in our understanding of matter and energy. by Kieran Maher - Wikibooks , 2006 Nuclear Medicine is a fascinating application of nuclear physics. The first ten chapters of this book support a basic introductory course in an early semester of an undergraduate program. Additional chapters cover more advanced topics in this field.
Since the star's light exerts radiation pressure that would easily blow such small grains out of orbit, any dust near the star must have arrived there recently, perhaps from a swarm of comets, asteroids, or a dusty planetesimal. While we don't know the cause of the tumbling, we predict that it was most likely sent tumbling by an impact with another planetesimal in its system, before it was ejected into interstellar space. Dust trapping is one potential solution to a major stumbling block in our theories of how planets form, which predict that particles should drift into the central star and be destroyed before they have time to grow to planetesimal size. Among specific topics are the chemical classification of nearby active galaxies, the chemical variation in the Orion A cloud cores, highlighting the dynamical interaction between planets and planetesimal belts with ALMA, and Keplerian and infall motions around the late-phase protostar TMC-1A. See was critical of the planetesimal hypothesis, as it was known, and in 1909 attacked it in terms that drew a sharp response from Moulton, implying amongst other things that See, one of his former teachers at Chicago, had plagiarised his work; a charge he reiterated with biting sarcasm in a 1912 February critique 'Capture theory and capture practice', published in Popular Astronomy. Nevertheless, we have come to realize that these impacts not only contributed to forming our planet (through planetesimal collision and accretion), but also by modifying it through time, and even punctuating the path of life's evolution. Two prominent examples of this process are planetesimal interactions with the gaseous component of the protoplanetary disk during the formation of the solar system and orbital decay of ring particles as a result of drag caused by extended planetary atmospheres. Shortly afterwards, in astronomical terms, some 4,450 or 4,460 million years ago, the Earth suffered another collision; on that occasion it was with a planetesimal. Chamberlin had advanced the planetesimal theory (see 1905), which required a near collision that would draw out solar matter by gravitational pull and form the planets. Ward has also contributed fundamental insights to humankind's understanding of planetesimal formation, the dynamical evolution of the moon, planet migration, planetary spin axis orientations, and the formation of planetary and satellite systems. The resulting paper, "Identification of a primordial asteroid family constrains the original planetesimal population," appears in the August 3, 2017, online edition of Science. Antarctic Meteorite Program (a collaboration among Cassidy, NASA, the National Science Foundation, and the Smithsonian Institution), as well as current field and curatorial practices, the nebular and planetesimal history of our solar system through key specimens, the significance of key samples from larger bodies, and much more.
Glyoxal (CHOCHO) is produced in the atmosphere by oxidation of isoprene. It has been proposed as an important source of organic aerosol. We examine here its potential importance in the United States.

4.1 Glyoxal has mean atmospheric lifetimes of 2 hours against photolysis, 8 hours against oxidation by OH, and 8 hours against uptake by aqueous particles to form aerosol. What fraction of atmospheric glyoxal will form aerosol?

4.2 Isoprene emission in the U.S. in summer is estimated to be 5×10¹¹ molecules cm⁻² s⁻¹. The glyoxal molar yield from isoprene oxidation is 10%. Assume a mixing depth of 1 km and an aerosol lifetime of 3 days, and further assume that glyoxal is in steady state. Calculate the resulting mean concentration of organic aerosol (in units of µg carbon m⁻³) from the glyoxal formation pathway. Compare to typical U.S. observations of 2 µg C m⁻³ for the concentration of organic aerosol.
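A sketch of one way to work the problem (our own solution, not part of the original problem set; it assumes simple competing first-order losses and that all aqueous uptake ends up as aerosol carbon):

```python
# 4.1: competing first-order losses; fraction to aerosol = k_aer / k_total.
k_phot, k_oh, k_aer = 1/2, 1/8, 1/8           # loss rates, h^-1
f_aer = k_aer / (k_phot + k_oh + k_aer)       # = 1/6 ~ 0.17

# 4.2: steady-state aerosol column = production * lifetime, then mass conc.
E_isoprene = 5e11                # molecules cm^-2 s^-1
P_aer = 0.10 * E_isoprene * f_aer             # aerosol-bound glyoxal source
column = P_aer * 3 * 86400       # molecules cm^-2 over 3-day aerosol lifetime
n = column / 1e5                 # molecules cm^-3 in a 1 km (1e5 cm) layer
mass = n * 2 * 12 / 6.022e23     # g C cm^-3 (2 carbon atoms per glyoxal)
print(f"fraction = {f_aer:.3f}")              # ~0.17
print(f"conc     = {mass * 1e12:.2f} ug C m^-3")  # ~0.86
```

Under these assumptions glyoxal would supply a bit under half of the observed 2 µg C m⁻³, i.e. a potentially important but not dominant source.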
I have always been interested in sea slugs because of their intense color patterns and unique lifestyle. In this project I hope to shed light on and learn more about the lifestyle and characteristics that make them so interesting and spectacular. I will also go into detail on the different ways in which current pollution of the coral reefs is affecting sea slugs, as well as the combined biota that inhabit the same ecosystem.

I- Introduction to Sea Slugs
1. Physical Characteristics
2. Abnormalities in body shape
3. Coral feeding
4. Aposematic Coloration
II- Pollution as a threat to coral reef life
A. Types of pollution
B. Direct effects on Sea Slugs
C. What changes must be made now to save coral reef life?

(2005). Sea Sickness. Canada and the World Backgrounder, 70(4), 7-14.
Christopher, J. (2002). Pollution Puts Coral Reefs off Florida's Coast in Peril. Christian Science Monitor, 49(218), 12.
Raloff, J. (2005). Paint additive hammers coral. Science News, 167(13), 206-207.
Smithers, S. (2003). Coral Reefs in Crisis: Can the World's Coral Reefs Survive? Geodate, 16(4), 1-4.
Rudman, W. (2004). Sea Slug Forum. Retrieved Apr. 19, 2005, from www.seaslugforum.net.
Sea slug. (n.d.). Retrieved Apr. 4, 2005, from www.encyclopedia.com.
Weber, P. (1993). Coral Reefs Face the Threat of Extinction. USA Today Magazine, 121(2576), 62-66.
<urn:uuid:5a8f95d4-2f8d-450c-87c0-69c992a999a9>
2.765625
525
Product Page
Science & Tech.
73.789302
95,493,541
A series of field and laboratory experiments was conducted to examine whether natural levels of insect herbivory affect the arbuscular mycorrhizal (AM) colonization of two plant species. The plant species were the highly mycorrhizal (mycotrophic) Plantago lanceolata, which suffers small amounts of insect damage continuously over a growing season, and the weakly mycorrhizal (non-mycotrophic) Senecio jacobaea, which is frequently subject to rapid and total defoliation by moth larvae. Herbivory was found to reduce AM colonization in P. lanceolata, but had no effect in S. jacobaea. Similarly, AM colonization reduced the level of leaf damage in P. lanceolata, but had no such effect in S. jacobaea. AM fungi were found to increase growth of P. lanceolata, but this effect was only clearly seen when insects were absent. AM fungi reduced the growth of S. jacobaea irrespective of whether insects were present. It is concluded that the reduction of AM fungal colonization by herbivory in P. lanceolata is due to the reduced amount of photosynthate available to the symbiont. This may only become apparent at threshold levels of insect damage and, below these, increased photosynthesis elicited by the mycorrhiza is able to compensate for foliage loss to the insects. However, in S. jacobaea, the mycorrhiza appears to be an aggressive parasite and insect attack only exacerbates the reduction in biomass. In mycotrophic plants, insect herbivores may be responsible for poor functioning of the symbiosis in field conditions and there is a symmetrical interaction between insects and fungi. However, in non-mycotrophic plants, the interaction is strongly asymmetrical, being entirely in favour of the mycorrhiza.
<urn:uuid:eece3b13-709c-4706-8943-ba6dc1825237>
3.453125
409
Academic Writing
Science & Tech.
23.402865
95,493,598
Scientific evidence concludes that global warming and climate change are happening now. There is overwhelming scientific agreement on human-caused global warming. More than 97 percent of publishing climate scientists and a synthesis of peer-reviewed studies confirm this. Virtually all national and international science academies and societies have issued statements or assessments affirming humans' role in recent climate change. This includes the academies of science from 80 countries. No scientific body in the U.S. or internationally formally dissents from this consensus. Global warming refers to the upward temperature trend across the entire Earth since the early 20th century, and most notably since the late 1970s, due to the increase in fossil fuel emissions since the industrial revolution. Worldwide since 1880, the average surface temperature has gone up by about 0.8 degrees C (1.4 degrees F), relative to the mid-20th-century baseline (1951-1980). Climate change refers to a broad range of global phenomena created predominantly by burning fossil fuels, which adds heat-trapping gases to Earth's atmosphere. These phenomena include the increased temperature trends described by global warming, but also encompass changes such as sea level rise; ice mass loss in Greenland, Antarctica, the Arctic and mountain glaciers worldwide; shifts in flower/plant blooming; and extreme weather events.
<urn:uuid:ea1ac8df-b29e-4ecc-8713-aa6e12b15079>
3.375
413
About (Org.)
Science & Tech.
36.991241
95,493,599
Astro-QA #3: How many moons in our solar system can cause a total solar eclipse?
According to a study based on 141 moons whose orbits are known, around 32 moons can hide the Sun's disk completely as seen from their planet's surface. Many of these are extreme: Pluto's Charon looms 200 times larger than the Sun in Pluto's sky. Next is Neptune's Triton, 27 times larger than the Sun in Neptune's sky. Other whopping-big moons include Neptune's Despina, 17 times larger, and Uranus's Ariel, 15 times larger. Jupiter's Galilean moon Callisto can appear just 1.3 times larger than the Sun when low in the Jovian sky. The only moons that seem to be capable of a near-perfect fit like our Moon are Saturn's Epimetheus, Prometheus, and Pandora. Each has an irregular shape, so a total eclipse by one of them is not likely to show the Sun's corona in all its glory, nor the beautiful array of prominences and Baily's Beads that we often behold from Earth. Many other moons in the solar system would always appear smaller than the Sun's disk. Although the size of Neptune's Nereid is still poorly known, when near the low point of its highly elongated orbit it might cover up to 80 percent of the Sun's diameter. And Mars's Phobos is wide enough to span 69 percent of the Sun's diameter when Mars is at aphelion, but for less than 10 seconds as this tiny moon glides by! Yes, we are very lucky.
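A small sketch of how such figures can be reproduced. The moon diameters, orbital radii, and planet-Sun distances below are approximate published values, not from the article, so modest disagreement with its quoted ratios is expected:

```python
SUN_DIAM = 1.392e6        # km

def size_ratio(moon_diam_km, orbit_km, sun_dist_km):
    """How many times larger than the Sun the moon appears from the planet."""
    moon_angle = moon_diam_km / orbit_km     # small-angle approximation
    sun_angle = SUN_DIAM / sun_dist_km
    return moon_angle / sun_angle

# Charon seen from Pluto (Pluto taken at ~39.5 AU)
print(f"Charon: {size_ratio(1212, 19591, 39.5 * 1.496e8):.0f}x")    # ~260x
# Triton seen from Neptune (~30.1 AU)
print(f"Triton: {size_ratio(2707, 354800, 30.1 * 1.496e8):.0f}x")   # ~25x
```

The exact multiples depend on where the planet is in its orbit (Pluto's distance from the Sun varies by a factor of 1.7), which is presumably why the article's numbers differ slightly.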
<urn:uuid:e8822cf6-441e-4188-bcd8-dfbc379f8e5c>
3.5
389
News Article
Science & Tech.
69.39
95,493,611
Feral cat distribution, abundance, management and impacts on threatened species: collation and analysis of data
This project will improve our understanding of feral cat impacts and how to mitigate them. At national scale, it will collate and analyse large and diverse sets of data to estimate cat distribution and abundance, to measure predation rates by cats on birds, reptiles and mammals, and to identify the ecological traits that make some species more susceptible to cat predation than others.
Can culling noisy miners benefit threatened woodland birds?
In recent decades across eastern Australia, noisy miner populations have expanded in fragmented agricultural landscapes. A communal, non-migratory bird of considerable size (approximately 70 g), the noisy miner has aggressively outcompeted many other, smaller species of native woodland birds. So concerning is this decline that in 2014, aggressive exclusion of woodland birds from potential habitat by noisy miners was listed as a Key Threatening Process under the EPBC Act.
Combatting an emerging disease threatening endangered Christmas Island reptiles
The blue-tailed skink and Lister's gecko are critically endangered, currently extinct in the wild, and persist only within a captive breeding program. Recently, a new bacterial disease which causes facial deformity and death has emerged in the two species. This project will build on preliminary research to develop a critical understanding of the disease, how it interacts with the reptiles and their environments, and if and how it can be managed.
Translocation, reintroduction and conservation fencing for threatened fauna
Whether moving species into fenced areas, intensively managed habitats or outside their previous habitat, translocating threatened species presents a number of challenges. This project will research the most feasible and cost-effective translocation strategies to boost the size and long-term viability of wild populations. This will include improved planning for, and implementation of, translocations of mammals, birds, reptiles and frogs.
Theme 6.00 - Using social and economic opportunities for threatened species recovery
<urn:uuid:d9ffa2e1-4ee4-4bd6-9dbd-6fcaa3f41a9b>
3.109375
398
Content Listing
Science & Tech.
11.817107
95,493,623
Explanation: Last week, a Tesla orbited the Earth. The car, created by humans and robots on the Earth, was launched by the SpaceX Company to demonstrate the ability of its Falcon Heavy rocket to place spacecraft out in the Solar System. Purposely fashioned to be whimsical, the iconic car was thought a better demonstration object than concrete blocks. A mannequin clad in a spacesuit, dubbed the Starman, sits in the driver's seat. The featured image is a frame from a video taken by one of three cameras mounted on the car. These cameras, connected to the car's battery, are now out of power. The car, attached to a second-stage booster, soon left Earth orbit and will orbit the Sun between Earth and the asteroid belt indefinitely, perhaps until billions of years from now when our Sun expands into a red giant. If ever recovered, what's left of the car may become a unique window into technologies developed on Earth in the 20th and early 21st centuries.
The universe is the biggest and oldest thing we know. It contains all existing matter and space, and its origin marks the beginning of time as far as we understand it. We don't know what made the formation of the universe possible, nor why it occurred. The visible universe is currently about 93 billion light-years wide. A light-year is the distance that light travels in a year, which makes the universe about 880 trillion trillion metres wide. The visible universe is, however, still expanding, and we can measure that rate of expansion. Then, working backwards, we can figure out when the universe would have begun. To the best of our knowledge, the universe formed about 13.8 billion years ago in what is commonly referred to as the Big Bang. This image shows the universe about 370,000 years after the Big Bang, which is the oldest light that we've been able to record with the greatest precision. The image records ancient light, or cosmic microwave background. The colours show tiny temperature fluctuations from an average temperature. These indicate areas of different densities, which became the stars and galaxies of today. Red spots are a bit hotter and blue spots a bit cooler. The image was recorded between 2009 and 2013, during the Planck mission, when the space observatory was operated by the European Space Agency in conjunction with NASA, the National Aeronautics and Space Administration. Today, the universe is very cold: on average, it is 2.7 Kelvin. Kelvin is a measure of temperature with the same magnitude as degrees Celsius, but 0 Kelvin equals minus 273.15 degrees Celsius. In the universe, the hot parts, such as stars, make up only a tiny fraction. If we wind the clock backwards, the universe gets smaller, and this means the universe was hotter in the past. When matter gets hot, solids melt and liquids boil. The hot matter glows: red at first, but it becomes bluer as the temperature goes up. Eventually, all matter is gas. So we have a bright, glowing blob of gas. Going further back in time, as the gas gets hotter, the electrons are separated from the nuclei and a plasma is made. The temperature at this point is about 3000 to 6000 Kelvin and the glowing blob is white hot. As we go back further in time, the universe gets even smaller and hotter. The nuclei themselves, containing protons and neutrons, are broken up. The reason for the breakup of nuclei is that the individual particles and the energy of the radiation are so great that the collisions of all this hot stuff are incredibly violent. The light is no longer in the visible spectrum.
It is energetic enough to be X-rays and even gamma rays. Between just 10 seconds and 1000 seconds after the Big Bang, subatomic particles, including neutrons and protons, were formed. Free neutrons live for only about 15 minutes, hence only those that stuck to protons during this period survived. All of the ordinary matter present today formed in this short window of time. At about 1 microsecond after the Big Bang, the universe was very hot, at about 10^10 Kelvin, and quarks formed stable particles called hadrons. Before 1 picosecond, or 10^-12 seconds, the universe was an exotic place. The gas was hotter still and the laws of physics appeared different to how we see them today. The distinction between matter and radiation, such as light, cannot be detected. The forces of electromagnetism and the weak nuclear force also become indistinguishable. At the very earliest times, the universe was so hot and dense that we cannot yet describe them accurately.
The Earth is an oblate spheroid, being slightly flattened at the poles: the equatorial radius is 6378 km and the polar radius 6357 km. These measurements are calculated on the assumption that the Earth's surface is smooth, but this is only an approximation since it disregards mountains and ocean depths. However, the difference between the height of Mount Everest and the depth of the Marianas Trench is only about 20 km. Most land is concentrated in seven continents, each fringed by shallow seas (flooded continent). Separating these are a number of major oceans, including the Pacific, Atlantic and Indian oceans. It was Cavendish in 1798 who first calculated the mass of the Earth, as 5.977 × 10^24 kg, and since its volume is known (from V = (4/3)πr^3, where r is the radius of the Earth), it can be calculated that the average density is 5.516 g/cm^3. However, most rocks exposed at the surface have densities of less than 3 g/cm^3. Therefore, a material of greater density must exist at deeper levels within the Earth. The Earth has a series of layers or "shells", but only the outer few kilometres of the Earth can be directly observed: the upper crust, via the deepest boreholes, which reach only about 12.5 km. Earthquakes provide the key to the structure at depth. Stresses which develop in the Earth may become great enough to break the rocks and cause slip along the resulting fractures (faults). Although the slip distance in a given earthquake may be small (centimetres to metres), the rock masses involved are large and so the energy released is great. The resulting shock waves, or earthquakes, may cause great damage: greatest near the centre, or focus, and less further away. The epicentre is the point on the surface of the Earth vertically above the focus.
Detection of seismic waves. Earthquake energy is transmitted by several types of waves. Two types will be described: P waves (primary or compressional), which are transmitted by vibrations oscillating in the direction of propagation (push/pull), and S waves (secondary or shear), which vibrate at right angles to the direction of propagation. S waves cannot be transmitted through liquids because liquids have no elastic strength. The arrival of earthquake waves is recorded by a seismograph. A mass is loosely coupled to the Earth by a spring, while a chart is firmly coupled to the Earth. A pen linking them traces the difference in motion between the mass and the Earth's surface. The arrival of waves from a distant earthquake is recorded as a seismogram on the rotating drum.
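Before moving on to the wave paths, a quick check of Cavendish's average-density figure quoted above; a minimal sketch, assuming a mean Earth radius of 6371 km (a value not stated in the text):

```python
from math import pi

mass = 5.977e24                    # kg, Cavendish's value quoted above
radius = 6.371e6                   # m, mean Earth radius (assumed)
volume = (4 / 3) * pi * radius**3  # m^3
density = mass / volume            # kg/m^3
print(f"{density / 1000:.3f} g/cm^3")   # ~5.52, matching the 5.516 above
```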
Consider what happens to P and S waves as they travel through the Earth. The most important property of seismic waves is their speed of propagation. The velocity is governed by the physical properties (density, compressibility, rigidity) of the medium through which the wave is travelling. Earlier in this lecture, it was deduced that the density of the Earth increases with depth. The wave propagation velocity must, therefore, change with depth, and this causes the wave to refract. If a wave travelling through a medium with a fixed density encounters a new medium with a different density, the wave will change its direction. This "bending" of the wave is called refraction. Data from seismometers located around the world can record waves from any given earthquake. The differences between recordings at different seismometers reveal properties of the sub-surface and hence the internal structure of the Earth. For example, it has been discovered that the mantle is solid rock, but the outer core is a liquid. This was discovered because, for any given earthquake, both P and S waves are recorded by seismometers at distances of up to 103° from the epicentre, while at distances greater than 103°, no S waves are recorded. This means that S waves that would have reappeared at > 103° have not propagated. The material at the depths travelled by such waves must be liquid and unable to transmit S waves. Also, it has been discovered that the outer core must have a lower P wave velocity than the mantle. This is because at distances of 103° to 142°, no strong P waves are recorded. The liquid outer core has a lower P wave velocity, causing the P waves to be refracted to a steeper angle, so they cannot re-emerge between 103° and 142°; they actually re-emerge at distances greater than 142°. There is one small caveat to this observation. The inner core appears to be solid, because some weak P wave arrivals do occur between 103° and 142°. This is thought to be due to a slight increase in P wave velocity as waves enter the inner core, causing them to be refracted to a shallower angle and to re-emerge between 103° and 142°. If the inner core is solid, S waves could propagate there. The graph shows some calculations of what the expected S wave velocities would be, but the inner core structure is still a source of controversy.
In the early 20th century a Croatian seismologist by the name of Mohorovicic was studying seismograms from shallow-focus earthquakes (< 40 km deep) that were nearby (< 800 km). He noticed that there were two distinct sets of P waves and S waves involved. He interpreted these as a direct set and a refracted set. In the refracted set, waves travel down and are refracted at a boundary by a medium of higher velocity. This boundary separates the crust, with a P wave velocity of 6-7 km/s, from the upper mantle, where the P wave velocity starts at 8 km/s. It is called the Mohorovicic discontinuity but is commonly known as the MOHO. Today, seismologists use artificial explosions to determine the structure beneath the surface, and it is from these data that the depth of the MOHO, and thus the thickness of the crust, can be calculated. The MOHO is at 5-15 km under ocean crust and 35 km beneath normal-thickness continental crust. It can be as much as 70 km deep beneath mountain belts where converging plates have caused an orogeny, or mountain-building event.
The Structure of the Earth
Recent advances in seismology now allow tomographic images of the interior of the Earth to be produced from P and S wave velocity data.
Just as tomographic images of the interior of human bodies are produced by density contrasts in human tissue and bone subject to wave propagation, density contrasts in the Earth can be mapped by combining wave velocity data from large numbers of earthquakes. The basic idea is that where the solid mantle is relatively hot, the P and S wave velocities should be anomalously low, because the heat will result in a density decrease. One should be able to image hot, ascending plumes of mantle asthenosphere by looking for areas of anomalously low seismic velocity. Conversely, where the solid mantle is relatively cool, the P and S wave velocities should be anomalously fast, because the lack of heat will result in a relatively high density. One should be able to image cool, descending slabs of mantle lithosphere by looking for areas of anomalously high seismic velocity. Such images allow us to study subduction zones and constrain how deep the slabs penetrate. It appears that some slabs do not penetrate beneath 670 km, whereas others continue down to the core-mantle boundary. This is an area of controversy in geology.
Orionid Meteors Over Turkey
Credit & Copyright: Tunc Tezel
Explanation: Meteors have been flowing out from the constellation Orion. This was expected, as mid-October is the time of year for the Orionid meteor shower. Pictured above, over a dozen meteors were caught in successively added exposures over three hours taken this past weekend from a town near Bursa, Turkey. The above image shows brilliant multiple meteor streaks that can all be connected to a single point in the sky just above the belt of Orion, called the radiant. The Orionid meteors started as sand-sized bits expelled from Comet Halley during one of its trips to the inner Solar System. Comet Halley is actually responsible for two known meteor showers, the other known as the Eta Aquarids and visible every May. Next month, the Leonid meteor shower from Comet Tempel-Tuttle might show an even more impressive shower from some locations.
Eclipsosaurus Rex
Image Credit & Copyright: Fred Espenak (MrEclipse.com)
Explanation: We live in an era where total solar eclipses are possible because at times the apparent size of the Moon can just cover the disk of the Sun. But the Moon is slowly moving away from planet Earth. Its distance is measured to increase about 1.5 inches (3.8 centimeters) per year due to tidal friction. So there will come a time, about 600 million years from now, when the Moon is far enough away that the lunar disk will be too small to ever completely cover the Sun. Then, at best, only annular eclipses, a ring of fire surrounding the silhouetted disk of the too-small Moon, will be seen from the surface of our fair planet. Of course the Moon was slightly closer and loomed a little larger 100 million years ago. So during the age of the dinosaurs there were more frequent total eclipses of the Sun. In front of the Tate Geological Museum at Casper College in Wyoming, this dinosaur statue posed with a modern total eclipse, though. An automated camera was placed under him to shoot his portrait during the Great American Eclipse of August 21.
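The 600-million-year figure can be sanity-checked with a constant-recession-rate model. This is a rough sketch only; the lunar diameter, orbital eccentricity, solar diameter, and aphelion distance below are round reference values not given in the text:

```python
MOON_DIAM = 3475e3        # m
SUN_DIAM = 1.392e9        # m
MOON_DIST = 384400e3      # m, present mean Earth-Moon distance
MOON_ECC = 0.0549         # lunar orbital eccentricity (perigee = a*(1-e))
SUN_APHELION = 1.521e11   # m, Earth-Sun distance at aphelion
RECESSION = 0.038         # m per year (3.8 cm/yr, as in the text)

# Totality remains possible while the Moon at perigee can still appear
# at least as large as the Sun at aphelion (small-angle approximation).
sun_min_angle = SUN_DIAM / SUN_APHELION
critical_dist = MOON_DIAM / (sun_min_angle * (1 - MOON_ECC))
years = (critical_dist - MOON_DIST) / RECESSION
print(f"Last total eclipses in roughly {years / 1e6:.0f} million years")
```

This crude estimate lands at roughly 450 million years, the right order of magnitude; the ~600-million-year figure quoted above rests on more careful orbital modelling.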
<urn:uuid:7ba9a194-d9a7-4721-bd4d-dad8249bf6b8>
3.984375
2,951
Nonfiction Writing
Science & Tech.
50.988008
95,493,662
As many of us prepare to travel to lakes and other bodies of water this summer for relaxation and recreation, now is the perfect time to consider what we can do to help protect the lakes we love. Scientists have long studied the ecological impact of humans on lakes, but a new study led by researchers at Virginia Tech explores how those ecological impacts can cycle back to affect humans. The study, published in the journal Ecosphere, offers a new model for those invested in protecting and maintaining lakes. “Lakes provide so much in terms of drinking water, recreation, aesthetic value, and more,” said Kelly Cobourn, assistant professor of water resource policy in Virginia Tech’s College of Natural Resources and Environment, and project lead. “People derive a lot of value from connecting with lakes. We also understand that humans degrade the quality of lakes with some of the choices they make. We provide a roadmap for understanding and approaching these problems that hasn’t been used before.” The study, which is in its third year, brings together researchers from Virginia Tech, The Pennsylvania State University, University of Wisconsin, Cornell University, Michigan State University, and Cary Institute of Ecosystem Studies. The team uses coupled natural and human systems modeling to understand how humans and the environment affect one another. “Similar research in the past has looked at the effects humans have on lakes, but rarely has anyone completed the loop of looking at how lakes affect humans,” said Cayelan Carey, assistant professor in the Department of Biological Sciences in Virginia Tech’s College of Science. “This paper focuses on completing the feedback loop that captures human actions, the consequences of those actions for water quality in lake ecosystems, and the effect of ecosystem change on human behavior.” The researchers examined three lake catchments, or areas of land where water runs into a freshwater lake — Lake Mendota in Wisconsin, Oneida Lake in New York, and Lake Sunapee in New Hampshire. The team collected data on land-use and management decisions, how water and sediment are transported to the lake, and how those things affect water quality by changing levels of chemicals that may affect the color or clarity of the lake. “Agriculture is a huge source of nitrogen and phosphorus loaded into lakes and watersheds,” explained Cobourn, a faculty member in the Department of Forest Resources and Environmental Conservation. “We are looking at how farmers decide which crops to plant, how much land is used, and how fertilizer is applied. While fertilizers can help improve crop yields, they also leach into lakes.” Carey, who is working on understanding how human decisions like land development or the application of fertilizer affect the ecosystem, added, “With this data, we can run ‘what if?’ scenarios to see how things might change under different conditions. For instance, what would happen to algal blooms if farmers don’t use any fertilizer at all? What would happen if they use 50 percent more fertilizer?” From there, the researchers moved on to focus on how changes in water quality affect shorefront property values and whether those changes motivate civic action by lake associations and other concerned groups. Kevin Boyle, professor in the Department of Agricultural and Applied Economics in Virginia Tech’s College of Agriculture and Life Sciences, studied the factors that affect the value of properties surrounding lakes.
“Lakes provide important and unique ecosystems,” Boyle said. “A lake that is clear and healthy is going to hold more value to people than one that isn’t. The property-value impacts can be used as an educational tool, because we now have information about the economic benefits of protecting lakes, in addition to the ecological ones.” A critical relationship between lakes and people arises when citizens participate in civic action to protect ecosystems. According to Michael Sorice, associate professor in the Department of Forest Resources and Environmental Conservation, volunteer organizations like lake associations are common in areas surrounding lake catchments because of the nonmonetary value people associate with lakes. “Lakes are magnetic to people,” he said. “They are important for outdoor recreation, aesthetic value, historical significance, and more. We are trying to understand how lake associations encapsulate all of those values and use them to influence lake water quality.” Added Carey, “We were actually hosted by the Lake Sunapee Protective Association in New Hampshire recently. They were excited to use the models we came up with to help improve their lake.” This project aligns with Virginia Tech’s Global Systems Science Destination Area, which fosters transdisciplinary study of the dynamic interplay between natural and social systems. Research in this area involves faculty collaboration as a means to discover creative solutions to critical social problems emergent from human activity and environmental change in areas such as freshwater systems.
<urn:uuid:2166089a-b39a-4f1e-9d5e-45788089429d>
3.734375
1,002
News Article
Science & Tech.
21.935441
95,493,710
Is quantum technology the future of the 21st century? On the occasion of the 66th Lindau Nobel Laureate Meeting, this is the key question to be explored today in a panel discussion with the Nobel Laureates Serge Haroche, Gerardus ’t Hooft, William Phillips and David Wineland. In the following interview, Council Member Professor Rainer Blatt, internationally renowned quantum physicist, recipient of numerous honours, and Scientific Co-Chairman of the 66th Lindau Meeting, talks about what we can expect from the “second quantum revolution”. Blatt has no doubt: quantum technologies are driving forward a technological revolution, the future impact of which is still unclear. Nothing stands in the way of these technologies becoming the engine of innovations in science, economics and society in the 21st century. Early laboratory prototypes have shown just how vast the potential of quantum technologies is. Specific applications are expected in the fields of metrology, computing and simulations. However, substantial funding is required to advance from the development stage.
Professor Blatt, the first quantum revolution laid the physical foundations for trailblazing developments such as computer chips, lasers, magnetic resonance imaging and modern communications technology. In the Quantum Manifesto published in mid-May, researchers now talk about the advent of a second quantum revolution. What exactly does this mean?
This second quantum revolution, as it is sometimes called, takes advantage of the phenomenon of entanglement. It’s a natural phenomenon that basic researchers recognized as early as the 1930s. Until now, all the technologies you mentioned have derived their utility from the wave property upon which quantum physics is based. In the quantum world, its associated phenomena are often discussed in the context of wave-particle duality. Though they are not recognized as such, quantum technologies are therefore already available, and without them, many of our instruments would not be possible. By contrast, the nature of entanglement, which has been known for 85 years, has only been experimentally investigated in the past four decades, based on findings by John Bell in the 1960s. Today, entanglement forms the basis for many new potential applications such as quantum communications, quantum metrology and quantum computing. The second quantum revolution is generally understood to be the realization of these new possibilities.
How long will it take for the second quantum revolution to produce marketable applications and products?
Marketable applications and products are already available in the field of quantum communications, meaning that such devices can already be purchased and commercially used. The use of entanglement for matter, not just for photons, will transform metrology by providing more sensitive and faster-responding sensors. Initially, it will produce small and later large quantum processors for a broad range of applications, for example simulations. Quantum processors will initially be used to solve a few (yet important) special problems, but in the more distant future also for universal calculations. There’s actually no discernible obstacle to realizing quantum technologies. Increasingly complex systems are being devised. This includes the development and use of new, previously unavailable technologies and methods. As quantum technologies become more widely available, ideas for their use and applications will rapidly follow.
What far-ranging changes to society and economics do you expect from the second quantum revolution?
At first, such technologies will lead to expanded and improved computing applications, which will continuously advance improvements in the sciences. It’s difficult to predict how far-reaching the impact on society and economics will be. Changes brought about by the development of the laser were similarly unpredictable. In the early 1960s, the laser was still seen as a solution to an unknown problem. Today, just over fifty years later, lasers have become an indispensable part of our lives. I expect quantum technologies to develop along similar lines.
Will the second quantum revolution only benefit highly developed countries or regions in the world that invest heavily in cutting-edge research?
Ultimately, everyone will benefit. But like all developments, only those countries and regions that play a role in the development and refinement of these technologies early on will really derive a benefit, including profit in the commercial sense. We will need cutting-edge research for some decades to come, and this entails a degree of financial, institutional and above all personnel commitment in order to tap the potential of quantum technologies.
Professor Rainer Blatt is a Member of the Council for the Lindau Nobel Laureate Meetings and Scientific Co-Chairman of the 66th Lindau Nobel Laureate Meeting, affiliated with the Institute for Experimental Physics, University of Innsbruck, Austria, and the Institute for Quantum Optics and Quantum Information (IQOQI), Austrian Academy of Sciences, Austria.
Gero von der Stein | idw - Informationsdienst Wissenschaft
<urn:uuid:c1f020a8-3cee-437e-995a-996a1b1432bb>
2.796875
1,638
Content Listing
Science & Tech.
28.980837
95,493,711
HATNet (Hungarian Automated Telescope Network) is a set of automated (robotic) telescopes searching for transiting planets. HAT-1, which began operation in 2001 at Konkoly Observatory in Budapest, originated with a 512 x 768 CCD, and was later upgraded to a 65 mm aperture telescope with a field of view of 8 degrees by 8 degrees and a 2k x 2k CCD. Currently, HATNet consists of 7 telescopes at Fred Lawrence Whipple Observatory and Mauna Kea Observatories. A further effort, HAT-South, comprises six telescopes in Australia, Namibia, and Chile. As of 2017, 29 extrasolar planet discoveries are credited to them.
Fred Lawrence Whipple Observatory (FLWO)
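For a sense of scale, the quoted field of view and detector size imply a plate scale of roughly 14 arcseconds per pixel; a one-line sketch, assuming "2k" means 2048 pixels (an assumption, since the exact pixel count is not stated above):

```python
field_deg = 8                              # field of view, degrees
pixels = 2048                              # assumed detector width in pixels
scale_arcsec = field_deg * 3600 / pixels   # arcseconds per pixel
print(f"~{scale_arcsec:.1f} arcsec per pixel")   # ~14.1
```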
<urn:uuid:82865baa-bbfd-4973-a39c-07831e3a600a>
2.53125
157
Knowledge Article
Science & Tech.
39.262283
95,493,717
Triangle - 9th grade (14y) - examples
A metre pole perpendicular to the ground casts a shadow 40 cm long; the house casts a shadow 6 metres long. What is the height of the house?
- Is right-angled: Can a triangle with sides √3, √5 and √8 be a right triangle?
- Isosceles trapezium: ABCD with AB = 12, angle ABC = 40°, b = 6. Calculate the circumference and area.
- Three sides: Side b is 2 cm longer than side c, and side a is 9 cm shorter than side b. The triangle's circumference is 40 cm. Find the lengths of sides a, b, c.
- Double ladder: The double ladder is 8.5 m long. It is built so that its lower ends are 3.5 metres apart. How high does the upper end of the ladder reach?
- Right triangle: Calculate the lengths of the remaining two sides and the angles in the right triangle ABC if a = 10 cm and angle alpha = 18°40'.
- The perimeter: The perimeter of equilateral △PQR is 12. The perimeter of regular hexagon STUVWX is also 12. What is the ratio of the area of △PQR to the area of STUVWX?
- The area of a square garden is 6/4 of a triangular garden with sides 56 m, 35 m and 35 m. How many metres of fencing are needed to fence the square garden?
- Trapezoid MO: The rectangular trapezoid ABCD has a right angle at point B, |AC| = 12, |CD| = 8, and diagonals perpendicular to each other. Calculate the perimeter and area of the trapezoid.
- Right Δ: A right triangle has one leg of length 51 cm and a hypotenuse of length 85 cm. Calculate the height of the triangle.
- Axial section: The axial section of a cone is an equilateral triangle with area 208 dm^2. Calculate the volume of the cone.
- Circle chord: What is the length d of a chord of a circle of diameter 36 m, if its distance from the centre of the circle is 16 m?
- Trigonometric functions: In a right triangle: ? Determine the values of s and c: ? ?
- The rectangle is 11 cm long and 45 cm wide. Determine the radius of the circle circumscribing the rectangle.
- Triangle SAS: Calculate the area and perimeter of a triangle if two sides are 51 cm and 110 cm long and the angle clamped between them is 130°.
- Gimli Glider: A Boeing 767 aircraft loses both engines at 45,000 feet. The captain maintains optimum gliding conditions. Every minute, the plane loses 1,870 feet while maintaining a constant speed of 212 knots. Calculate how long it takes the plane to hit the ground after the engine failure.
- Short cut: Imagine that you are going to a friend's. That path is 270 metres long. Then you turn left, go another 1,810 metres, and you are at your friend's. How much shorter would the journey be if you went straight across the field?
- From an observatory 14 m high and 32 m from the river bank, the width of the river appears under the visual angle φ = 20°. Calculate the width of the river.
- Regular 5-gon: Calculate the area of a regular pentagon with side 7 cm.
- Right triangle Alef: The area of a right triangle is 294 cm^2 and the hypotenuse is 35 cm long. Determine the lengths of the legs.
See also our trigonometric triangle calculator. See also more information on Wikipedia.
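As a worked illustration (not part of the original problem list), the first problem reduces to a single similar-triangles ratio, since the pole and the house cast shadows at the same Sun angle:

```latex
\[
  \frac{h}{6\,\mathrm{m}} = \frac{1\,\mathrm{m}}{0.4\,\mathrm{m}}
  \quad\Longrightarrow\quad
  h = 6 \cdot \frac{1}{0.4} = 15\,\mathrm{m}.
\]
```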
<urn:uuid:a21523ad-71b8-4445-9034-3209e57109f7>
3.484375
780
Content Listing
Science & Tech.
79.808333
95,493,772
A swimming pool is being filled with water from a garden hose at a rate of \(6\) gallons per minute. If the pool already contains \(100\) gallons of water and can hold \(160\) gallons, after how long will the pool overflow? Assume \(m\) minutes later, the pool would overflow. Write an equation to model this scenario. There is no need to solve it.
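One natural way to set up the model, with the solution shown only as a check (the problem itself asks just for the equation):

```latex
\[
  100 + 6m = 160
  \quad\Longrightarrow\quad
  m = \frac{160 - 100}{6} = 10 \ \text{minutes}.
\]
```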
<urn:uuid:c903bd52-e33f-4951-a5ed-286b66e80e68>
3.078125
80
Tutorial
Science & Tech.
70.503143
95,493,790
Often curvilinear, but sometimes diffuse, surface features that are characteristically high albedo, optically immature, and associated with magnetic anomalies (Kramer et al. 2011b). A type of albedo feature. Sinuous to irregular patterns, often appearing as curvilinear or looping patterns, sometimes exhibiting simpler shapes such as a single loop or diffuse bright spot. The bright appearance and curvilinear shape of lunar swirls are often accentuated by low-albedo regions that wind between the bright swirls, called dark lanes (Bell and Hawke 1981). Swirls show anomalously high albedo and the spectral characteristics of immature material (little darkening and reddening of the spectrum due to space weathering). Lunar swirls are associated with magnetic anomalies (Fig. 1), although not every magnetic anomaly (especially weaker ones) has an identified swirl (Kramer et al. 2009). The swirls have no topographic expression (they overprint the surface on which they...
Keywords: Solar Wind; Magnetic Anomaly; Solar Wind Proton; High Albedo; Solar Wind Interaction
- Bell JF, Hawke BR (1981) The Reiner Gamma formation: composition and origin as derived from remote sensing observations. Proc Lunar Planet Sci Conf, 12th, Houston, pp 679–694
- Blewett DT (2011) Correction to "Lunar swirls: examining crustal magnetic anomalies and space weathering trends". J Geophys Res 116, E06002. doi:10.1029/2011JE003852
- Blewett DT, Hawke BR, Lucey PG (2005) Lunar optical maturity investigations: a possible recent impact crater and a magnetic anomaly. J Geophys Res 110:E04015
- Blewett D, Coman EI, Hawke BR, Gillis-Davis J, Purucker M, Hughes C (2011) Lunar swirls: examining crustal magnetic anomalies and space weathering trends. J Geophys Res 116:E02002. doi:10.1029/2010JE003656
- El-Baz F (1972) The Alhazen to Abul Wafa Swirl Belt: an extensive field of light-colored sinuous markings, Apollo 16: preliminary science report. NASA Spec Publ SP 315:29–93
- Hemingway D, Garrick-Bethell I (2012) Insights into lunar swirl morphology and magnetic source geometry: models for the Reiner Gamma and Airy anomalies. 43rd Lunar Planet Sci Conf, abstract #1735, Houston
- Hood LL, Williams CR (1989) The lunar swirls: distribution and possible origins. Proc Lunar Planet Sci Conf 19:99–113, Houston
- Hood LL, Artemieva NA (2008) Antipodal effects of lunar basin-forming impacts: initial 3D simulations and comparisons with observations. Icarus 193:485–502
- Kramer G, Blewett D, Hood L, Halekas J, Noble S, Hawke BR, Kletetschka G, Harnett E, Garrick-Bethell I (2009) The lunar swirls: a white paper to the NASA Decadal Survey
- Kramer GY, Combe J-P, Harnett EM, Hawke BR, Noble SK, Blewett DT, McCord TB, Giguere TA (2011a) Characterization of lunar swirls at Mare Ingenii: a model for space weathering at magnetic anomalies. J Geophys Res 116:E04008. doi:10.1029/2010JE003669
- Kramer GY, Besse S, Dhingra D, Nettles J, Klima R, Garrick-Bethell I, Clark RN, Combe J-P, Head JW III, Taylor LA, Pieters CM, Boardman B, McCord TB (2011b) M3 spectral analysis of lunar swirls and the link between optical maturation and surface hydroxyl formation at magnetic anomalies. J Geophys Res 116:E00G18. doi:10.1029/2010JE003729
<urn:uuid:7c8ff9fc-c5f5-4078-9edb-984822c576fe>
2.921875
893
Academic Writing
Science & Tech.
47.804332
95,493,798
Figure 6 is an excerpt from the Lwarp v0.58 Manual (1.3 MB pdf). This manual is a combined user’s manual and source-code documentation, an example of “literate programming”. For more information, see LaTeX-HTML5 Generation — lwarp package.
• Elaborate software operating procedures benefit from the inclusion of additional diagrams to help explain the logical connections of the various functions and processes which are involved. Typesetting is used to indicate user-interface functions, and screen images are used to highlight important selections.
– Sample: How to set up sales-tax handling in the SQL-Ledger® double-entry accounting system: Figure 7 demonstrates a conceptual-logic diagram, showing the relationship between the various legislated sales taxes, their software accounts, and their software tax-rate settings. Also shown is how several at a time may be selected/deselected for a particular customer/vendor account, and also for a collection of parts/services on a particular invoice. The flowing arrows show the application of individual sales taxes through the various accounts and selections for each individual item on the invoice.
• Diagrams are also used to illustrate the changing state of the system as the user progresses through the required operations. Typesetting is used to highlight user-entry typing, display, warnings and notes.
– Sample: How to move the Debian operating system to a new hard drive: Copying an entire operating system to a new hard drive can involve several steps, during which entire groups of directories are added and removed at different times. Figure 8 is one of several which help the user keep track of what is going where during the transfer process.
Each time software is changed, it should be validated for proper operation before being released for general use. This important function must be carefully thought out. A thorough test procedure will test each software function, including all hardware interfaces plus associated noise and error-handling conditions, and the proper software response to each possible input given each possible current state.
• Sample: Software test procedure: Illustrated in the pdf and in Figure 9 are:
– an overview of the product,
– the use of a state machine in tabular form (also see Figure 10 for the same information in diagram form),
– statements of specification,
– a checklist for each state’s actions,
– additional tests to perform where necessary,
– ESD noise and power-loss recovery testing, and
– typeset user-interface buttons and displays.
It is useful to create a state machine including every possible combination of input, output, and software state. The creation of this state machine can, in itself, reveal design flaws or force decisions about combinations which nobody had thought of before. The state machine, if created and incorporated into the initial design process, can be used as a guide for the software engineers to ensure that they have a complete description of the correct action of the program. When described in graphical format, the state machine makes a valuable part of the software test procedure, describing in an easy-to-use visual format the correct operation of the program. When converted to a table format, the state machine may be implemented in software, resulting in an easily maintained piece of code, readily adaptable to design changes or future product versions.
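To make the table-format idea concrete, here is a minimal, hypothetical sketch in Python (not code from the manual): the entire behaviour lives in one transition table, so a design change is an edit to the table rather than to nested conditionals. All state, event, and action names are invented for illustration.

```python
# (state, event) -> (next_state, action); every name here is hypothetical.
TRANSITIONS = {
    ("idle",    "power_on"):  ("ready",   "show_ready_led"),
    ("ready",   "start_key"): ("running", "start_motor"),
    ("running", "stop_key"):  ("ready",   "stop_motor"),
    ("running", "fault"):     ("error",   "sound_alarm"),
    ("error",   "reset_key"): ("idle",    "clear_alarm"),
}

def step(state, event):
    """Return the next state; unknown events leave the state unchanged."""
    next_state, action = TRANSITIONS.get((state, event), (state, None))
    if action:
        print(f"action: {action}")   # stand-in for real hardware calls
    return next_state

state = "idle"
for event in ["power_on", "start_key", "fault", "reset_key"]:
    state = step(state, event)
print(f"final state: {state}")       # idle
```

The same structure ports directly to C as an array of {state, event, next_state, action} rows scanned at runtime, which suits the embedded setting discussed here.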
A software state machine also avoids the nightmare of large blocks of heavily nested conditional code and its associated mysterious functional glitches.
• Sample: State machines and user-interface: A sample state machine in diagram form is in Figure 10. States are shown in green, machine actions in red, and movement to/from other states in blue. Key icons or text show the user-initiated or other actions required to move to another state. The same information may be presented in tabular form, as shown previously in Figure 9.
During the process of creating a state machine to describe a piece of software, certain functional and user-interface design improvements can become evident, especially in embedded software with minimal front-panel interfaces, resulting in a cleaner and easier-to-use product. Universal icons instead of English-language text, consistent state-transition actions, simpler key combinations and editing methods, more meaningful visual and audible feedback, unplanned special-case situations, error handling, and in some cases a reduction in the total number of keys or feedback LEDs — all are possible improvements from a full design review. Even something as simple as a change of the icon on a key’s label can make it more obvious what that key does — such as converting a right arrow into a curved clockwise arrow to illustrate that the key causes something to rotate, or using a small arrow icon for a key which produces smaller changes, and a larger arrow icon for a key which produces larger changes.
<urn:uuid:b803306f-ceca-4ebd-9e66-72fed743e474>
2.71875
993
Documentation
Software Dev.
24.03575
95,493,817
Dynamics of Faulting and Fault Plane Solution
Explaining the mechanism of faulting, that is, the mechanism of an earthquake, is one of the most fascinating and significant problems in seismology. The physical process of elastic strain accumulation and the triggering mechanism are basic to understanding earthquake kinematics. The term focal mechanism, or fault-plane solution, conventionally refers to the fault orientation, the displacement and stress-release patterns, and the dynamic process of seismic wave generation.
Keywords: Fault Plane; Focal Mechanism; Hanging Wall; Moment Tensor; Nodal Plane
<urn:uuid:63e2212e-9d6f-4ff4-b9f9-947077a67008>
3
119
Truncated
Science & Tech.
14.815191
95,493,827
By Adam Landry Onita Basu still vividly remembers the exact moment she decided to devote her career to sustainable water solutions and practices. “I was in a second-year Chemical Engineering lab working with a solution of water that looked relatively clean,” she recalls. “When I passed the water through a treatment process I was shocked to see an incredible amount of dissolved copper emerge from the solution and begin coating onto various surfaces. It was an eye-opening experience to realize that we cannot always tell what is in our water.” Now an Environmental Engineering professor and Associate Chair of Graduate Studies in Carleton’s Department of Civil and Environmental Engineering, Basu still believes that startling realization hasn’t lost its impact. “The pressure that is placed on our natural resources has never been greater than it is today,” she says. “The more people we have on our planet, the more difficult it becomes to manage the health of our water systems.” In an effort to help alleviate some of the pressure caused by a surging population, Basu is currently engaged in bio-filtration research, which utilizes the growth of beneficial microbes in filter systems to help remove organic contaminants. Employing this technique provides a viable alternative to chemical treatment options, resulting in a cleaner and more sustainable water treatment process. Beyond the health of water itself, Basu also understands the importance of improving the sustainability of activities related to its treatment. “Removing pollution from water requires energy, but energy production also requires water, which in many cases results in its contamination,” she explains. “It creates a vicious cycle between water and energy needs.” To stabilize this systemic flaw, Basu has been investigating both direct and indirect methods of decreasing energy requirements for water treatment, focusing on elements such as how pumps are operated or how chemical treatments can be reduced, as they too require energy to produce, transport and deploy into the water treatment process. Developing Sustainable Engineering Systems While water may be the key to life, there’s no denying that the call for sustainability extends beyond our lakes and rivers to include the energy sector. With that in mind, Prof. Cynthia Cruickshank in Carleton’s Department of Mechanical and Aerospace Engineering has dedicated her research towards developing advanced building energy systems and optimizing the applications of solar energy. In addition to the obvious environmental benefits of reducing our society’s dependency on non-renewable energy sources, Cruickshank emphasizes how increased investment in sustainable energy will help to develop long-term energy security within Canada and beyond. “Currently, we rely heavily upon finite resources that will eventually run out or reach a point where they are no longer financially viable to retrieve or refine,” she explains. “Shifting towards sustainable and renewable sources will help to establish greater diversity in our energy infrastructure and enable us to mitigate or even circumvent that inevitability.” Cruickshank also serves as one of several key researchers working with the Urbandale Centre for Home Energy Research, a 1,600-square-foot, two-storey solar-powered house located at the north end of Carleton’s campus. Headed by Prof. 
Ian Beausoleil-Morrison, the facility acts as a test bed for innovative concepts that challenge the traditional way houses are designed and built, focusing largely on seasonal thermal storage, or how to store energy collected by the house’s solar panels during the summer for use during the darker winter months. “We’re working towards establishing sustainable communities,” Cruickshank says. “Whether by the renewal of building codes or the use of energy efficient materials and insulation, we’re seeing a conscious shift towards net-zero ready buildings and prefabricated retrofit solutions, which will ultimately reduce greenhouse emissions in the residential sector.” Cruickshank offers her advice on a number of simple ways people can make their homes more sustainable, such as upgrading or installing insulation in attics and basements or replacing old drafty windows with those that are more energy efficient. “Approximately half of the energy used in the average home is attributed to heating and cooling,” she says. “Older homes and buildings are the biggest contributor to Canada’s energy spending, as they are less insulated and prone to air leakage. The majority of structures built before 1990 actually average 30 to 60 percent higher utility costs compared to newer homes.” While Cruickshank also recommends replacing incandescent lights with fluorescent or LED bulbs and switching to efficient appliances, she admits one of the best things people can do is watch their energy consumption. “See what you actually use by investing in an energy monitor,” she suggests. “Set thermostats to regulate a temperature that is cost effective when the timing is appropriate, such as at night and when no one is home.”
Inspiring Young Girls
In addition to her research duties, Cruickshank is also helping to inspire young girls to pursue careers in science, technology, engineering and math (STEM) through Carleton’s youth outreach programs. “I’ve had the chance to give talks through the Carleton Women in Science and Engineering group, speaking with high school students about different careers that exist for women within engineering,” she says. “It’s important for them to hear about more than just the soft sciences because there are so many amazing opportunities where those sciences can be applied in areas such as electrical or mechanical engineering. There are many paths to choose from and female engineers are flourishing in all disciplines.” Another outreach program that Cruickshank has been involved with is Virtual Ventures, a youth summer camp at Carleton that features programming just for girls and includes hands-on activities such as coding and game design. “It’s really important for female students to be exposed to engineering, or at least the concept of it, at a young age,” she says. “It helps them build an appreciation of what it can offer, but also demonstrates how it can improve people’s lives.” Given that both Cruickshank and Basu have dedicated their careers to a field which is often seen as predominantly male, each of them certainly sees the value of increasing the female perspective within engineering. “Just as multi-disciplinary projects benefit from multiple viewpoints, the personal experiences of different individuals can result in diverse ideas and influence how we approach engineering challenges,” says Basu.
“Including more female engineers in the discussion allows for a wider range of perspectives, which can contribute to new concepts and alternative solutions.”
<urn:uuid:7d759b37-5c0f-43f8-9824-67f11837de07>
3.0625
1,550
Content Listing
Science & Tech.
25.381284
95,493,855
Collision of Rigid Bodies. Multiple Collisions
In Chap. 7, the contact of two deformable solids was investigated. It was seen that it is difficult to describe non-smooth evolutions involving collisions because shock waves appear. The wish to have both simplicity and efficiency is difficult to fulfil when dealing with collisions. Thus the necessities of engineering have given rise to rather simple, crude theories based on rigid body mechanics. They can be traced back to Newton, to whom the notion of a restitution coefficient is attributed. In this chapter, the same approach is followed in order to produce a simple, usable collision theory. The only improvement is a step toward continuum mechanics: the idea is developed that a system made of two rigid bodies is deformable, because the distance between the two solids can change.
Keywords: Rigid Body; Virtual Work; Multiple Collision; Landing Gear; Dissipative Force
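A minimal sketch of the Newton restitution idea mentioned above, for a head-on collision of two rigid bodies in one dimension; the notation is mine, not the book's. Momentum is conserved, and the separation speed equals e times the approach speed:

```python
def collide_1d(m1, u1, m2, u2, e):
    """Post-collision velocities for a restitution coefficient 0 <= e <= 1."""
    p = m1 * u1 + m2 * u2                      # total momentum (conserved)
    v1 = (p + m2 * e * (u2 - u1)) / (m1 + m2)
    v2 = (p + m1 * e * (u1 - u2)) / (m1 + m2)
    return v1, v2

# e = 1 is a perfectly elastic collision, e = 0 a perfectly plastic one.
v1, v2 = collide_1d(m1=2.0, u1=3.0, m2=1.0, u2=0.0, e=0.5)
print(v1, v2)   # 1.5 3.0 -> separation speed 1.5 = 0.5 x approach speed 3.0
```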
<urn:uuid:0dd67814-8900-4b0f-b516-019e36359c7e>
3.359375
188
Truncated
Science & Tech.
34.5355
95,493,856
Heavy winter rainfall produces double-peak hydrographs at the Slapton Wood catchment, Devon, UK. The first peak is saturation-excess overland flow in the hillslope hollows and the second (i.e. the delayed peak) is subsurface stormflow. The physically-based spatially-distributed model SHETRAN is used to try to improve the understanding of the processes that cause the double peaks. A three-stage (multi-scale) approach to calibration is used: (1) water balance validation for vertical one-dimensional flow at arable, grassland and woodland plots; (2) two-dimensional flow for cross-sections cutting across the stream valley; and (3) three-dimensional flow in the full catchment. The main data are for rainfall, stream discharge, evaporation, soil water potential and phreatic surface level. At each scale there was successful comparison with measured responses, using as far as possible parameter values from measurements. There was some calibration but all calibrated values at one scale were used at a larger scale. A large proportion of the subsurface runoff enters the stream from three dry valleys (hillslope hollows), and previous studies have suggested convergence of the water in the three large hollows as being the major mechanism for the production of the delayed peaks. The SHETRAN modelling suggests that the hillslopes that drain directly into the stream are also involved in producing the delayed discharges. The model shows how in the summer most of the catchment is hydraulically disconnected from the stream. In the autumn the catchment eventually ‘wets up’ and shallow subsurface flows are produced, with water deflected laterally along the soil-bedrock interface producing the delayed peak in the stream hydrograph. Copyright © 2007 John Wiley & Sons, Ltd.
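The first calibration stage above is essentially water-balance bookkeeping. As a toy illustration of that check (a minimal sketch, not SHETRAN's actual numerics), the residual of precipitation minus evaporation, discharge and storage change should stay near zero over a validation period:

def water_balance_residual(rainfall, evaporation, discharge, storage_change):
    """Residual of the catchment water balance P - E - Q - dS (all in mm).

    A consistent simulation drives this residual toward zero over the
    validation period; a large residual flags a model or data problem.
    """
    return sum(rainfall) - sum(evaporation) - sum(discharge) - storage_change

# Hypothetical monthly totals (mm) for a wet winter quarter:
print(water_balance_residual(rainfall=[120, 95, 140],
                             evaporation=[10, 12, 18],
                             discharge=[80, 70, 110],
                             storage_change=55))  # -> 0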
<urn:uuid:4bbd1bf2-0966-4f38-87fb-65f230028079>
2.984375
401
Academic Writing
Science & Tech.
33.980799
95,493,859
How many different circles is determined by 9 points at the plane, if 6 of them lie in a straight line? Leave us a comment of example and its solution (i.e. if it is still somewhat unclear...): Showing 0 comments: Be the first to comment! To solve this example are needed these knowledge from mathematics: Next similar examples: Prove that k1 and k2 is the equations of two circles. Find the equation of the line that passes through the centers of these circles. k1: x2+y2+2x+4y+1=0 k2: x2+y2-8x+6y+9=0 In the circle with a radius 7.5 cm are constructed two parallel chord whose lengths are 9 cm and 12 cm. Calculate the distance of these chords (if there are two possible solutions write both). In how many points will intersect 14 different lines, where no two are parallel? Circle touch two parallel lines p and q; and its center lies on a line a, which is secant of lines p and q. Write the equation of circle and determine the coordinates of the center and radius. p: x-10 = 0 q: -x-19 = 0 a: 9x-4y+5 = 0 Straight line passing through points A [-3; 22] and B [33; -2]. Determine the total number of points of the line which both coordinates are positive integers. - Points collinear Show that the point A(-1,3), B(3,2), C(11,0) are col-linear. - XY triangle Determine area of triangle given by line 7x+8y-69=0 and coordinate axes x and y. Determine the slope of the line perpendicular to the line p: y = -x +4. In the triangle ABC is point D[1,-2,6], which is the center of the |BC| and point G[8,1,-3], which is the center of gravity of the triangle. Find the coordinates of the vertex A[x,y,z]. Line p passing through A[-10, 6] and has direction vector v=(3, 2). Is point B[7, 30] on the line p? A straight line p given by the equation ?. Calculate the size of angle in degrees between line p and y-axis. What is the slope of a line with an inclination -221°? - Square side Calculate length of side square ABCD with vertex A[0, 0] if diagonal BD lies on line p: -4x -5 =0. What is the slope of the line defined by the equation -2x +3y = -1 ? - Angle between lines Calculate the angle between these two lines: ? ? What is the slope of the perpendicular bisector of line segment AB if A[-4,-5] and B[1,-1]? If the segment of the line y = -3x +4 that lies in quadrant I is rotated about the y-axis, a cone is formed. What is the volume of the cone?
<urn:uuid:3150d9e7-1513-4b2c-9d6d-070c6dc7446c>
3.390625
681
Tutorial
Science & Tech.
85.963102
95,493,869
Dating of non-carbonaceous materials

All allotropic forms of carbon—graphite, glassy carbon, amorphous carbon, fullerenes, nanotubes, and doped diamond—are used as important electrode materials in all fields of modern electrochemistry. Examples include graphite and amorphous carbons as anode materials in high-energy-density rechargeable Li batteries, porous carbon electrodes in sensors and fuel cells, nano-amorphous carbon as a conducting agent in many kinds of composite electrodes (e.g., cathodes based on intercalation inorganic host materials for batteries), and glassy carbon and doped diamond as robust, highly stable electrode materials for all aspects of basic electrochemical studies.

This chapter will primarily focus on the development of porous carbons for application as chromatographic stationary phases. Porous carbons in the separation sciences occupy an important niche owing to their unique retention characteristics, chemical stability and the ability to control pore structure through template strategies. However, these same synthetic processes utilise oil-based carbonising resins and high-temperature, energy-intensive pyrolysis steps to ensure the carbon product has pore-size regularity, minimal micropore content and homogeneous surface chemistry.

Mechanical injuries are injuries caused to the body by physical violence. Mechanism of wounding: the body absorbs natural forces (gravity, movement, and routine actions such as sitting and walking) through the resilience and elasticity of its soft tissues and its rigid skeletal framework. Legally, injuries are classified into (1) simple and (2) grievous.
<urn:uuid:0886ba02-0cbf-4865-b0dc-57201486f881>
3
380
Content Listing
Science & Tech.
7.048012
95,493,872
With the help of space telescopes like Hubble, astronomers are looking deeper and deeper into the cosmic web of the Universe. After all, the farther out you look, the further back in time you see, which makes it possible to observe how the universe looked billions of years ago. With the launch and operation of ever more advanced telescopes and observatories, scientists are able to study the history and evolution of the cosmos in increasing detail.

Recently, an international team of astronomers using the Gemini North telescope of the Gemini Observatory, located on Mauna Kea on the island of Hawaii, discovered a spiral galaxy located 11 billion light years from us. Using a new technique combining gravitational lensing and spectroscopy, the scientists were able to see the object as it existed just 2.6 billion years after the Big Bang. The galaxy has been named A1689B11 and is currently the oldest spiral galaxy ever discovered. The astronomers describe their discovery in the latest issue of The Astrophysical Journal.

The galaxy A1689B11 was found using gravitational lensing. This method is used very often by astronomers and involves using very massive objects (e.g. galaxy clusters) as a kind of lens that bends the light of an object located behind it around the lens and toward us; it is this light that astronomers observe. In a press release, Swinburne University of Technology astronomer Tiantian Yuan, head of the research group, writes: "This method allows us to study ancient galaxies with very high visual resolution and an unprecedented level of detail. We were able to observe the galaxy as it was 11 billion years ago, and became direct witnesses of the formation of the first primitive spiral galactic arms."

The researchers then used the NIFS (Near-infrared Integral Field Spectrograph) instrument, mounted on the Gemini North telescope, to confirm the structure and nature of this most ancient spiral galaxy. The instrument was created by astronomer Peter McGregor of the Australian National University, who was responsible for its calibration and proper operation.

Thanks to this discovery, scientists have obtained additional information about how galaxies acquire their form. According to the famous astronomer Edwin Hubble's classification of galaxies (the Hubble sequence), there are three main types of these objects – elliptical, lenticular and spiral – and a fourth, additional category of irregular galaxies. According to this classification, galaxies begin as elliptical structures and are later modified into spiral, lenticular or irregular forms. The discovery of such an old spiral galaxy is therefore particularly important for understanding when and how the earliest galaxies began to change shape from elliptical to more modern forms.

"The study of ancient spiral galaxies such as A1689B11 is an important link in understanding the mysteries of how and when the Hubble sequence emerged. Spiral galaxies were very rare in the early Universe, and this discovery opens the door for us to study the transition of galaxies from very chaotic, turbulent disks to the much more settled, thin galactic disks of galaxies such as our Milky Way," says study co-author Renyue Cen, an astronomer at Princeton University.
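For a sense of the geometry behind lensing, the characteristic angular scale of a lens is its Einstein radius, theta_E = sqrt((4GM/c^2) * D_ls/(D_l * D_s)). The sketch below is purely illustrative: it uses the simplified point-mass formula with round-number distances treated as angular-diameter distances, not values from the study.

import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
MPC = 3.086e22     # one megaparsec in metres
M_SUN = 1.989e30   # solar mass, kg

def einstein_radius_arcsec(mass_kg, d_lens, d_source, d_lens_source):
    """Einstein radius (arcseconds) of a point-mass lens; distances in metres."""
    theta = math.sqrt(4 * G * mass_kg / C**2 * d_lens_source / (d_lens * d_source))
    return math.degrees(theta) * 3600.0

# Illustrative numbers: a ~1e15 solar-mass cluster lens (Abell 1689 is of
# this order) about 1000 Mpc away, with a source far behind it.
print(einstein_radius_arcsec(1e15 * M_SUN, 1000 * MPC, 2000 * MPC, 1000 * MPC))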
But the most interesting aspect of the discovery of A1689B11 is that this spiral galaxy shows some truly surprising features, which reveal more detail about this period in the history of the cosmos and raise new questions about it. As Yuan notes, these features contrast vividly with those of more modern galaxies of the same type. "Compared to younger galaxies of the same type, this ancient galaxy is forming new stars 20 times faster, at almost the same rate as young galaxies of similar mass that existed in the early Universe," said Yuan. "However, unlike other galaxies of the same era, A1689B11 has a very cool, thin disk that rotates calmly and with surprisingly low turbulence. Spiral galaxies of this type have never before been seen at such an early epoch."

In the future, the team of astronomers hopes to conduct additional studies of this galaxy in order to analyze its structure and nature in more detail, and to compare it with other spiral galaxies of the same era. Of particular interest to the scientists is the question of the origin of the spiral arms, which serve as a kind of marker of the transition from the ancient elliptical galaxies to the more modern spiral, lenticular and irregular types.
<urn:uuid:1b736791-31e8-4aea-90d4-8b82dbeedc45>
3.640625
1,209
Content Listing
Science & Tech.
36.780892
95,493,876
What powers America? Since we are trying to figure out ways to reduce our electric energy usage, I thought I might try to inform you all of where that electricity comes from. Well, about 48.9% comes from coal, a fossil fuel. I think we have all seen the articles trying to deter people from using fossil fuels for power, mainly pointing out that fossil fuels are a non-renewable energy source. If we want to power the world more cleanly, I believe we need to look at renewable resources instead of fossil fuels - things such as solar, wind, water, and biomass power. As of 2006 only 9.5% of the United States' energy came from renewable sources such as water, solar, wind, etc.

Also, since electricity is the highest source of greenhouse gas emissions in the U.S., it is essential not only to reduce our dependence on the fossil fuels that provide our electricity, but also to lower our individual dependence on electricity. I was actually really surprised to find that electricity was the highest producer of emissions; I thought for sure transportation would be number one in production of harmful emissions.

Although some of the renewable resources, such as solar, may be expensive ways to collect power, we will most likely save ourselves from a huge future expense. It always seems harder to fix a problem after it has already been done, which is especially true when dealing with the crisis of global warming our emissions have caused. A good motto to live by would be: why put off until tomorrow what can be done today, especially when the expense of putting something off could be significantly greater?

If you really want to do your part in saving the earth, you should be looking at ways to reduce your consumption of electricity. Now I see why stopping even the littlest bits of power from being wasted is important to our environment. Information collected from: http://en.wikipedia.org/wiki/Electricity_generation
<urn:uuid:97c8f405-2b3f-4812-a788-2f8104bdfbdf>
2.53125
386
Personal Blog
Science & Tech.
47.156009
95,493,897
The temperature was 36 degrees, and we found ourselves attempting a walk of several miles in the afternoon sun. We were hot, we were sweaty, and we were only a third of the way through our walk. The Cypriot sun was unforgiving, yet, for some bizarre reason, we had thought this walk a good idea. All consumed by our hot and bothered misery, nothing could distract us. Or so we thought. Suddenly, something on the beach front caught our attention: a large form lying by the water's edge. A passing French couple saw it just as we did, and though we were slightly lost in translation (they were better at English than we were at French), the four of us stood debating what was going on.

So what was this mysterious form that had us so captivated? A green sea turtle. After much debate, and to our heartbreak, we realised that our sea turtle was, in fact, dead. Even dead, it is the closest I have ever come to seeing a wild sea turtle. Our specimen was rather large, and after clambering across the rocks to perform a brief inspection, our French friend concluded there was no ‘obvious’ reason for his demise. It was devastating, especially when we were looking at such an endangered species.

Sea turtles have roamed our oceans for over 100 million years, but in this day and age, they find themselves fighting for their very survival. In our oceans, we have seven species of sea turtle: leatherback, green, loggerhead, Kemp’s ridley, hawksbill, flatback and olive ridley. All of them are either endangered or critically endangered species.

What’s the problem then? Why do we find all of our sea turtle species teetering precariously in such a perilous situation? Well, sea turtles are extremely sensitive and are immensely vulnerable to anthropogenic impacts throughout all stages of their life-cycle. The most problematic and detrimental? The ‘harvesting’ of eggs, adults and juveniles from beaches and foraging grounds. Harvesting is predominantly for meat for human consumption, but trade in turtle body parts is also big business in many countries. Surely this sounds illegal? Unfortunately, not in all countries. According to research by Blue Ventures Conservation in 2014, 42,000 sea turtles are harvested annually in 42 countries, and this is despite increased protection for these species. Although this number is thought to have fallen by 60% since the 1980s, it is still worryingly high.

Another problem is very common among many of our marine animals: entanglement in fishing nets. Drift netting, shrimp trawling, long-lining and dynamite fishing all pose a significant threat to sea turtles, and this is likely to become more prevalent as the fishing industry continues to grow.

But it doesn’t end there. Degradation of nesting and marine habitats is also a significant threat. Beach development activities, such as beach armouring, nourishment and sand extraction, all threaten turtle nesting beaches. Such activities can be a direct threat through building construction but also an indirect one, through changes in beach thermal profiles and erosion levels. This impacts not only the availability of nesting beaches, but also their quality. Threats to the marine environment come in the form of increased sedimentation and pollution levels, increased boat traffic and the harvesting of algae. All of this contributes to the degradation that our marine environments experience, and this can have a knock-on effect on turtle health.
Some research has linked such poor conditions within the marine environment with increases in cases of Fibropapilloma disease, a disease among turtles that causes the growth of benign tumours. Though not fatal in itself, complications can arise when tumours grow in areas that affect swimming, breathing and swallowing.

So, problems identified. But putting aside our responsibility to protect our species, why does it matter? Would our ecosystems really miss sea turtles if they were to disappear? The short answer? Yes. Sea turtles are important because they are a vital ingredient in two of Earth’s ecosystems: the marine ecosystem and the beach/dune system.

In the marine world, sea turtles are among the few animals that like to chomp on seagrass. Like all grass in the world, seagrass needs to be kept short in order for it to be healthy and to induce growth. Along with the manatee, sea turtles graze this marine ‘lawn’ and maintain healthy beds of seagrass. OK, so why is seagrass important? Seagrass beds provide breeding and nursery grounds for numerous fish, shellfish and crustacean species (think of Finding Nemo). Without these beds, we would see a decline in many species, impacting the wider marine food chain and leading to the possible loss of some species. Indeed, recent declines in the health and number of seagrass beds could easily be linked to a reduction in sea turtle numbers.

So, what of the beach dune? Well, as you may know, dune systems are very lacking in nutrients, and only the toughest and hardiest plant species can grow there. As we also know, sea turtles lay their eggs in nests on beaches. A sea turtle will lay approximately 100 eggs in a nest, and she can lay up to 7 nests in one season! But not every one of these nests will be successful, with many not hatching. These unhatched nests provide vital sources of nutrients for the beach/dune vegetation system, and even empty eggshells provide nutrients. Consequently, this has a positive impact on dune vegetation, making it healthier and stronger. This improves the condition of the entire beach/dune ecosystem, creating stronger root systems that hold sand together and protect against beach erosion. As turtles decline, egg numbers decline, reducing nutrient levels on beaches and increasing the threat from accelerated erosion.

Lose one part of an ecosystem and we will likely lose another, and then another, until the ecosystem declines so greatly in health that we lose entire species. We have seen it before. Ecosystem damage across our globe is already prevalent and, in many cases, irreversible. If we lose our sea turtles, our marine ecosystems will find themselves in a perilous situation. Save the sea turtle, save the oceans.
<urn:uuid:29471f14-2dd0-472a-89c1-6d8a19b89d9d>
2.796875
1,290
Personal Blog
Science & Tech.
49.474654
95,493,910
Their report -- the first analysis of long-term stability of wind over the U.S. -- appears in this week's Proceedings of the National Academy of Sciences Early Edition.

"The greatest consistencies in wind density we found were over the Great Plains, which are already being used to harness wind, and over the Great Lakes, which the U.S. and Canada are looking at right now," said Provost's Professor of Atmospheric Science Sara Pryor, the project's principal investigator. "Areas where the model predicts decreases in wind density are quite limited, and many of the areas where wind density is predicted to decrease are off limits for wind farms anyway."

Coauthor Rebecca Barthelmie, also a professor of atmospheric science, said the present study begins to address a major dearth of information about the long-term stability of wind as an energy resource. Questions have lingered about whether a warmer atmosphere might lead to decreases in wind density or changes in wind patterns. "We decided it was time someone did a thorough analysis of long-term patterns in wind density," Barthelmie said. "There are a lot of myths out there about the stability of wind patterns, and industry and government also want more information before making decisions to expand it."

Pryor and Barthelmie examined three different regional climate models in terms of wind density changes in a future U.S. experiencing modest but noticeable climate change (warming of about 2 degrees Celsius relative to the end of the last century). The scientists found the Canadian Regional Climate Model (CRCM) did the best job of modeling the current wind climate, but included results from Regional Climate Model 3 (created in Italy but now developed in the U.S.) and the Hadley Centre Model (developed in the U.K.) for the sake of academic robustness and to see whether the different models agreed or disagreed when seeded with the same parameters.

When model predictions for 2041-2062 were compared with past observations of wind density (1979-2000), most areas were predicted to see little or no change. The areas expected to see continuing high wind density -- and therefore greater opportunities for wind energy production -- are atop the Great Lakes, eastern New Mexico, southwestern Ohio, southern Texas, and large swaths of several Mexican states, including Nuevo Leon, Tamaulipas, Chihuahua, and Durango. "There was quite a bit of variability in predicted wind densities, but interestingly, that variability was very similar to the variability we observe in current wind patterns," Pryor said. The Great Lakes -- Lakes Michigan, Superior, and Erie in particular -- consistently showed high wind density no matter what model was used.

Such predictions should prove crucial to American policymakers and energy producers, many of whom have pledged to make wind energy 20 percent of America's total energy production by 2030. Currently only about 2 percent of American energy comes from wind. "There have been questions about the stability of wind energy over the long term," Barthelmie said. "So we are focusing on providing the best science available to help decision makers." Pryor added: "This is the first assessment of its type, so the results have to be considered preliminary. Climate models are evolving and improving all the time, so we intend to continue this assessment as new models become available."

Wind farms are nearly carbon neutral, and studies show that a turbine pays for itself after only three months of energy production.
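Wind energy scales strongly with wind speed: the kinetic power density through a turbine's swept area goes as (1/2) * rho * v^3, which is why even modest shifts in the wind climate matter for yield. A small illustrative sketch (my own, not from the study):

def wind_power_density(v, rho=1.225):
    """Kinetic power flux of wind in W/m^2 at speed v (m/s); rho is air density (kg/m^3)."""
    return 0.5 * rho * v**3

# A 10% drop in mean wind speed costs roughly 27% of the available power,
# since 0.9**3 is about 0.73.
for v in (6.0, 8.0, 10.0):
    print(f"{v:4.1f} m/s -> {wind_power_density(v):7.1f} W/m^2")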
A typical turbine lasts about 30 years, Pryor says, not because parts break, but because advances in technology make it desirable to replace turbines with newer versions. "Wind speed increases with height, so turbines are also getting taller," Pryor said. "One of our future projects will be to assess the benefit of deploying bigger turbines that extend farther from the ground."

This is also the week of the annual Offshore Technology Conference in Houston, the largest such energy conference in the world, which has increasingly focused on offshore wind energy production in recent years.

Last month, Pryor was appointed to the National Climate Assessment and Development Committee, convened by the U.S. Department of Commerce's National Oceanic and Atmospheric Administration to help the U.S. government prepare for and deal with climate change. She also contributed to a special report used by the Intergovernmental Panel on Climate Change (IPCC). Barthelmie is a widely respected expert on wind energy, particularly in northern Europe, whose wind farms she has studied for years. She was the winner of the European Academy of Wind Energy's 2009 Academy Science Award. Both Pryor and Barthelmie are faculty in the IU Bloomington Department of Geography, a division of the College of Arts and Sciences, and the Center for Research in Environmental Science.

Pryor and Barthelmie's work was supported by grants from the National Science Foundation (BCS 1019603), the International Atomic Energy Authority, and the IU Center for Research in Environmental Sciences. The model output they analyzed was provided by the North American Regional Climate Change Assessment Program (NARCCAP). NARCCAP is funded by the National Science Foundation, the U.S. Department of Energy, the National Oceanic and Atmospheric Administration, and the U.S. Environmental Protection Agency Office of Research and Development.
<urn:uuid:2fa8e9cf-ed01-4211-9c85-c71c1a4fd6f0>
3.09375
1,812
Content Listing
Science & Tech.
37.60134
95,493,931
fcache is a simple, persistent, file-based cache module for Python. It uses cPickle to store objects into a cache file and appdirs to ensure that cache files are stored in platform-appropriate, application-specific directories. It supports optional, time-based data expiration. >>> import fcache >>> cache = fcache.Cache("population", "statistics-fetcher") >>> cache.set("chicago", 9729825) >>> print cache.get("chicago") 9729825 Using fcache is as simple as creating a Cache object, setting data, and getting data back. >>> exit() $ python >>> import fcache >>> cache = fcache.Cache("population", "statistics-fetcher") >>> print cache.get("chicago") 9729825 Cached data doesn't disappear when you stop using a Cache object. When you create a new object with the same arguments, your data is still there, just like you left it. >>> print cache.filename /Users/tsr/Library/Caches/statistics-fetcher/248081ecb337c85ec8e4330e6099625a Cached data is stored in a file, plain and simple. You can see it on the file system. You can delete it, copy it, or write your own library to open it. >>> import time >>> cache.set("chicago", 9729825, 30) >>> print cache.get("chicago") 9729825 >>> time.sleep(30) >>> print cache.get("chicago") None Just like an orange, some data goes bad after awhile. fcache can keep track of when data should expire. fcache's documentation contains an introduction along with an API overview. For more information on how to get started with fcache, be sure to read the documentation. fcache uses its GitHub Issues page to track bugs, feature requests, and support questions. fcache is released under the OSI-approved MIT License. See the file LICENSE.txt for more information. 1 year, 4 months ago passed .. image:: http://readthedocs.org/projects/fcache/badge/?version=stable :target: https://fcache.readthedocs.io/en/stable/?badge=stable :alt: Documentation Status <a href='https://fcache.readthedocs.io/en/stable/?badge=stable'> <img src='http://readthedocs.org/projects/fcache/badge/?version=stable' alt='Documentation Status' /> </a> Project Privacy Level
<urn:uuid:35f9574f-176b-4642-b293-bc9aa99cb845>
2.546875
561
Documentation
Software Dev.
51.364062
95,493,958
Polarization Sensitivity in Spiders and Scorpions

Finding the way back to their web, burrow or shelter, or orienting within the web, are complex tasks for which spiders also rely on their visual system. Spiders have two sets of simple eyes: a pair of anterior-median principal eyes directed forward, and three pairs of secondary eyes, usually with a reflecting tapetum lining the back of the eye (Fig. 24.1A). These eyes are specialized for different tasks. All spider eyes possess microvillar photoreceptors, in certain species with orthogonally arranged microvilli (e.g. Schröer 1974; Dacke et al. 1999). Kovoor et al. (1993) studied the anatomy of the anterior median eyes and its possible relation to polarization sensitivity (PS) in Lycosa tarentula. They suggested that PS in Lycosa tarentula is mediated by the ventral part of the retina, where the photoreceptors bear rhabdomeres aligned in parallel series and successive lines of rhabdoms are orthogonal to each other. They also hypothesized that the analysis of polarization may be a successive process using a twisting of the retina due to the action of two muscles: the alternating contraction of these muscles can generate rotation and, to some extent, up and down movements of the retinal cup. The E-vector analysis by such a successive mechanism in spiders was first proposed by Schröer (1974) for Agelena gracilens.

Keywords: Polarization Sensitivity · Wolf Spider · Animal Vision · Lycosid Spider · Skylight Polarization
<urn:uuid:3f91567a-ac72-4c36-9b70-5d55a86513c2>
3.34375
343
Truncated
Science & Tech.
33.69
95,493,998
Mysterious 'night-shining' clouds that glow BLUE and only appear during the summer are becoming more common with climate change
- More water vapor in the atmosphere is making high-altitude clouds more visible
- Clouds only seen on summer nights are due to human-caused climate change
- They are noctilucent ('night-shining') clouds, the highest clouds in the atmosphere
- Study found methane emissions have increased water vapor concentrations by around 40 percent since the 1800s, which has doubled ice in the mesosphere

'Night-shining' clouds forming 50 miles above Earth's surface are becoming more common due to climate change, a study has found. The clouds give off a shocking blue haze when ice crystals fixed to tiny particles of 'meteor dust' high in the atmosphere reflect sunlight. Noctilucent clouds are the highest in Earth's atmosphere, and are only seen during summer nights.

The stunning night-shining clouds form in the middle of the atmosphere, in a layer known as the mesosphere. Researchers have noted an increase in noctilucent clouds as human-induced climate change increases water vapor in the atmosphere. 'We speculate that the clouds have always been there, but the chance to see one was very, very poor, in historical times,' said Franz-Josef Lübken, an atmospheric scientist at the Leibniz Institute of Atmospheric Physics in Kühlungsborn, Germany and lead author of the new study in Geophysical Research Letters.

Humans first spotted the clouds in 1885, after the eruption of the Krakatoa volcano in Indonesia spewed massive amounts of water vapor into the air. Sightings became more common in the 20th century, and by the 1990s researchers began wondering if climate change was making the clouds more visible. The researchers used satellite observations and climate models to simulate how the effects of increased greenhouse gases have contributed to noctilucent cloud formation over the past 150 years. Extracting and burning fossil fuels delivers carbon dioxide, methane and water vapor into the atmosphere, all of which are greenhouse gases. (A diagram in the original article shows the major layers of Earth's atmosphere; noctilucent clouds form in the mesosphere, high above where normal weather clouds form.)

The results found that methane emissions have increased water vapor concentrations in the mesosphere by about 40 percent since the late 1880s. This has more than doubled the amount of ice that forms in the mesosphere. The researchers believe that human activities are the main reason why noctilucent clouds are significantly more visible now than they were 150 years ago. 'Our methane emissions are impacting the atmosphere beyond just temperature change and chemical composition,' said Ilissa Seroka, an atmospheric scientist at the Environmental Defense Fund in Washington, D.C. who was not connected to the new study. 'We now detect a distinct response in clouds.'

WHAT ARE NOCTILUCENT (NIGHT-SHINING) CLOUDS?
Noctilucent clouds, also called polar mesospheric clouds, form between 47-53 miles above Earth's surface (76-85 km), according to NASA. Here, water vapor freezes into clouds of ice crystals, which are illuminated when the sun is below the horizon. They are seeded by debris from disintegrating meteors, giving them a 'shocking' blue hue when they reflect sunlight. The clouds form during the summer of both the northern and southern hemispheres.
Carbon dioxide warms Earth's surface and the lower part of the atmosphere, but actually cools the middle atmosphere where noctilucent clouds form. In theory, this cooling effect should make noctilucent clouds form more readily. According to NASA, they are observed seasonally, during summer in both the Northern and Southern hemispheres, when the mesosphere is at its most humid, with water vapor sent up from lower altitudes. At this time, the mesosphere is also the coldest place on Earth, hitting temperatures as low as -210 degrees Fahrenheit as a result of air flow patterns. Studying these clouds provides insight into the behavior of this layer of the atmosphere, and its role relative to other layers, weather, and climate, the researchers say. The start of the season has been recorded anywhere from Nov 17 to Dec 16.
<urn:uuid:c8348d0b-ff4f-42aa-b010-803243110002>
3.328125
1,125
Truncated
Science & Tech.
31.095657
95,494,003
In 1963 a fiery underwater volcanic eruption just off the Iceland coast pushed the island of Surtsey above the waves. At first, it was a black cinder cone, barren of life. Surtsey grew to about a square mile in area. Ecologists seized on the chance to see how life would go about colonizing Surtsey. Within forty years, hundreds of species were living on Surtsey. Surtsey was the first really well-studied process of succession, which of course is still taking place.

The ecologists seized the chance to see how life would colonize the island of Surtsey by having it declared a nature preserve in 1965. Interestingly, this was done while the eruption was still going on, so that the island could proceed through natural ecological succession without any human intervention. Only a few scientists are allowed on the island to observe the colonization. As a result, scientists have been able to study and document the different plants, animals, insects, and so on that have found their way to the island. One excellent resource for gathering information on the succession study can be found on the Surtsey Research Society's website at http://www.surtsey.is/index_eng.htm. This website offers information on the colonization of the island based on observations made by the scientists ...

In about 515 words, this solution discusses the island of Surtsey and how this island is related to the subject of succession. References are also included for additional information and images/maps.
<urn:uuid:7ec2deab-7a19-489a-ab40-54580bbb306c>
3.734375
342
Truncated
Science & Tech.
53.032652
95,494,005
Interstellar Molecules and Interstellar Chemistry

Radio astronomers have to date detected some 60 or so molecules in interstellar space. These molecules have been identified, in the majority of cases, by recording the microwave spectrum received by radio telescope from an interstellar or circumstellar dust cloud and comparing it with the spectra produced in laboratory experiments. This provides a reliable technique for the identification of molecules in space that are well known in the laboratory. However, radio astronomers and microwave spectroscopists recognized that some of the spectra received from space did not correspond to molecules known on Earth. This is perhaps not very surprising, since the prevalent conditions in space are very different from those in the terrestrial laboratory. It is in the study of interstellar molecules which are not observed on Earth that we find the main applications of the techniques of computational chemistry in astronomy.

Keywords: Interstellar Medium · Equilibrium Geometry · Rotational Constant · Interstellar Cloud · Carbon Star
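Identification works by matching observed line frequencies against laboratory spectra. A toy sketch of that matching step (my own illustration, using a few well-known rest frequencies; the tolerance value is arbitrary):

# Laboratory rest frequencies in GHz for some common interstellar lines
lab_lines = {"CO (J=1-0)": 115.271, "HCN (J=1-0)": 88.632, "CS (J=2-1)": 97.981}

def identify(observed_ghz, tolerance=0.01):
    """Return laboratory lines lying within `tolerance` GHz of an observed frequency."""
    return [name for name, f in lab_lines.items() if abs(f - observed_ghz) <= tolerance]

print(identify(115.27))  # -> ['CO (J=1-0)']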
<urn:uuid:9e8c98e3-8fe9-4eb7-b0c8-efd5b96ad447>
3.578125
193
Truncated
Science & Tech.
7.093025
95,494,007
From the Back Cover

Professional Assembly Language

Every high-level language program (such as C and C++) is converted by a compiler into assembly language before it is linked into an executable program. This book shows you how to view the assembly language code generated by the compiler and understand how it is created. With that knowledge you can tweak the assembly language code generated by the compiler or create your own assembly language routines. This code-intensive guide is divided into three sections: basics of the assembly language program development environment, assembly language programming, and advanced assembly language techniques. It shows how to decipher the compiler-generated assembly language code, and how to make functions in your programs faster and more efficient to increase the performance of an application.

What you will learn from this book:
- The benefits of examining the assembly language code generated from your high-level language program
- How to create stand-alone assembly language programs for the Linux Pentium environment
- Ways to incorporate advanced functions and libraries in assembly language programs
- How to incorporate assembly language routines in your C and C++ applications
- Ways to use Linux system calls in your assembly language programs
- How to utilize Pentium MMX and SSE functions in your applications

About the Author

Richard Blum has worked for a large U.S. government organization for more than 15 years. During that time, he has had the opportunity to program utilities in various programming languages: C, C++, Java, and Microsoft VB.NET and C#. With this experience, Rich has often found the benefit of reviewing assembly language code generated by compilers and utilizing assembly language routines to speed up higher-level language programs. Rich has a bachelor of science degree in electrical engineering from Purdue University, where he worked on many assembly language projects. (Of course, this was back in the eight-bit processor days.) He also has a master of science degree in management from Purdue University, specializing in Management Information Systems.
<urn:uuid:6382e035-1af4-4073-865c-9f44adfbc9c0>
2.765625
397
Product Page
Software Dev.
22.228855
95,494,014
100 amps of electricity crackle in a vacuum chamber, creating a spark that transforms carbon vapor into tiny structures. Depending on the conditions, these structures can be shaped like little, 60-atom soccer balls, or like rolled-up tubes of atoms, arranged in a chicken-wire pattern, with rounded ends. These tiny carbon nanotubes, discovered by Sumio Iijima at NEC labs in 1991, have amazing properties. They are 100 times stronger than steel, but weigh only one-sixth as much. They are incredibly resilient under physical stress; even when kinked to a 120-degree angle, they will bounce back to their original form, undamaged. And they can carry electrical current at levels that would vaporize ordinary copper wires. Learn more about carbon nanotubes from the many resources on this site, listed below. More information on carbon nanotubes can be found here.

Carbon Nanotube Worksheet 27 Apr 2018 | Posted by Tanya Faltens
How to combine the negative capacitance model with the carbon nanotube transistor model in a Verilog-A file? Closed | Responses: 9 Sumit Kumar Sinha
CNRS - Carbon Nanotube Interconnect RC Model 06 Oct 2017 | Compact Models | Contributor(s): Jie LIANG, Aida Todri This CNT Interconnect Compact Model includes a solid physics understanding and electrical modeling for pristine and doped SWCNT as interconnect applications. SWCNT resistance and capacitance are...
23 May 2017 | Contributor(s): Luca Bergamasco, Matteo Fasano, Eliodoro Chiavazzo, Pietro Asinari, Annalisa Cardellini, Matteo Morciano Compute thermal conductivity of single-walled carbon nanotubes via the NEMD method
02 Aug 2017 | Posted by Terrence Warren McGinnis This worksheet has students describe the geometries and conductivity type of several different carbon nanotubes of differing chirality. CNT Bands can be used to simulate the structures...
18 Jul 2017 | Posted by Tanya Faltens CNT Creating Python script
04 Jul 2017 | Contributor(s): Saksham Soni It can work by running a Python script directly on a PC without using the Internet. Just download and install NanoTCAD ViDES, and then CNTs and GNRs can be simulated without using nanoHUB or the internet.
Coherent Nonlinear Optical Propagation Processes in Hyperbolic Metamaterials 07 Jun 2017 | Contributor(s): Alexander K. Popov Coherence and interference play an important role in classic and quantum physics. Processes to be employed can be significantly enhanced and the unwanted ones suppressed through deliberately tailored constructive and destructive interference at quantum transitions and at nonlinear optical...
Guruprasad S Hegde
Muhammad Ihsan Ul Haq
The Role of Dimensionality on Phonon-Limited Charge Transport: from CNTs to Graphene 21 Oct 2016 | Contributor(s): Jing Li, Yann-Michel Niquet IWCE 2015 presentation.
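The chirality indices (n, m) mentioned in the worksheet above fix a nanotube's geometry and conductivity type: the diameter is d = a * sqrt(n^2 + nm + m^2) / pi with the graphene lattice constant a of about 0.246 nm, and a tube is (approximately) metallic when n - m is divisible by 3. A small sketch of both rules (mine, not from the worksheet):

import math

A_LATTICE = 0.246  # graphene lattice constant in nm

def cnt_diameter_nm(n, m):
    """Diameter of an (n, m) single-walled carbon nanotube in nanometres."""
    return A_LATTICE * math.sqrt(n*n + n*m + m*m) / math.pi

def is_metallic(n, m):
    """A nanotube is approximately metallic when n - m is a multiple of 3."""
    return (n - m) % 3 == 0

for n, m in [(10, 10), (17, 0), (12, 8)]:
    kind = "metallic" if is_metallic(n, m) else "semiconducting"
    print(f"({n},{m}): d = {cnt_diameter_nm(n, m):.2f} nm, {kind}")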
<urn:uuid:c61e285f-2321-4962-8737-5d7062b9fb22>
3.625
726
Content Listing
Science & Tech.
33.978797
95,494,035
20 July 2018

Celestial diamonds offer clarity on protoplanets

Published online 17 April 2018

Diamonds in meteorites, remnants of a lost Mercury-to-Mars-sized planet, offer astronomers a glimpse of the early solar system.

In 2008, an asteroid entered Earth’s atmosphere above Sudan and exploded in a fireball tens of kilometres over the Nubian Desert, raining fragments that would become known as the Almahata Sitta meteorites, after the Arabic name of the remote train station near which the fragments were found. In the decade since, analysis of the Almahata Sitta meteorites has informed scientists about the protoplanets – the building blocks of today’s terrestrial planets – that inhabited the early solar system. New research analysing the tiny crystal inclusions embedded within diamonds in the meteorites, published today in Nature Communications [1], provides solid evidence of the protoplanets’ existence.

Most of the fragments are ureilites: rare, carbon-enriched meteorites whose origin remains a mystery. The carbon in ureilites is in the form of graphite and diamond, and in 2010 an international collaboration showed that the Almahata Sitta ureilite MS-170 contains unusually large diamonds [2]. Based on this, the researchers argued that the diamonds did not form in the heat and pressure of an impact, as widely supposed, but rather inside the ureilite parent body (UPB), a hypothetical object in the early solar system about the size of a large asteroid (around 1000 km).

Now, a team led by Philippe Gillet of the Ecole Polytechnique Fédérale de Lausanne in Switzerland, who was also involved in the earlier research, has further refined the picture based on information from minerals embedded in the diamonds. Farhang Nabiei, the study’s lead author, “discovered the [mineral] inclusions while he was looking at the detailed relationships between graphite and diamond,” according to Gillet. Electron microscopy and spectroscopic analysis showed that most of these inclusions are iron-rich sulfide crystals which could only have formed at high pressures. Together with the earlier evidence from the diamond crystals, this shows that the UPB was probably a Mercury-to-Mars-sized protoplanet.

Gillet says the findings were a surprise, adding that models “have shown that such bodies populated the early solar system, but no clear evidence of their remnants has been found before.” Matthias Meier, a meteorite expert at ETH Zurich who was not involved in this study, says that the paper “certainly makes a strong case that the diamonds found in the Almahata Sitta ureilites (and their exotic inclusions) formed inside a planet-sized body, a discovery that adds an exciting new perspective on the origin of the ureilites.” However, he points out that other mineral signatures in the meteorites suggest lower temperatures and pressures, raising questions about the history of ureilites and the UPB.

Gillet’s team is examining other Almahata Sitta meteorites to see how broadly their findings apply and what other secrets these celestial diamonds hold. “Almahata Sitta is continuing to amaze the community,” says Meier. “That meteorite really turned into a scientific treasure trove.”

1. Nabiei, F. et al. A large planetary body inferred from diamond inclusions in a ureilite meteorite. Nature Communications. http://dx.doi.org/10.1038/s41467-018-03808-6 (2018)
2. Miyahara, M. et al. Unique large diamonds in a ureilite from Almahata Sitta 2008 TC3 asteroid. Geochimica et Cosmochimica Acta. http://dx.doi.org/10.1016/j.gca.2015.04.035 (2015)
<urn:uuid:ba0d9cbb-a826-48b1-bec2-aec5fa4531e5>
3.796875
836
Truncated
Science & Tech.
36.14461
95,494,054
Their maps, combined with climate models, will project how climate change will alter biodiversity and help to shape policy for setting aside conservation easements. Wildlife, people and livestock have weathered past variation in climate by shifting their seasonal migration patterns through the varied ecological zones of the Great Rift Valley, which runs through the center of Kenya and Tanzania.

“When you go from the bottom of the rift, it’s almost desert. By the time you get up to the top, no more than 15-20 km away, it’s rainforest,” said David Western, adjunct professor of biology at UCSD, director of the African Conservation Centre in Nairobi and former director of the Kenya Wildlife Service. “Previously this was communal land where people moved with the seasons and they moved with changing climates.”

Now, as climate change is expected to shift the balance between habitats in this region, increased farming has fragmented the landscape, Western said. “It’s removed the highland grazing for both livestock and wildlife. The crop residues can keep the livestock going, but it’s a complete lockout for wildlife.” The project will identify areas that, if protected, would allow both wildlife and pastoralists to move to more favorable conditions as climate shifts. “What we want to do is identify key pathways where, working with landowners, you can actually keep the land open, through a conservation easement,” Western said.

To determine how the centers of biological richness are likely to shift, UCSD biology assistant professor Walter Jetz and Daniel Kissling, a postdoctoral fellow, have mapped the ranges of 2,700 species of birds, mammals, amphibians and reptiles across all of Kenya, Tanzania, Uganda, Rwanda and Burundi. For each species, they have plotted an ‘ecological envelope.’ “Within those boundaries, we are likely to encounter those species,” Jetz said. “With the distribution map, we can determine the species’ climatic niche.”

The next step, Jetz said, is to revise their maps to a finer scale using satellite images and field notes. Their current maps are drawn at a 100-kilometer resolution. “You need some refinement if you want to be able to make predictions, so we are taking global maps and refining them to the scale of actual conservation decision-making: to a 10 or 20 km resolution,” Jetz said. Jetz’s group’s maps of animal diversity will be combined with those for plants and human land use to gain a fuller picture of how ranges and interactions between species are likely to shift under different climate changes. A plant and an animal may respond differently to the same climate shift, for example, causing their ranges to diverge until the two species no longer co-exist.

At a recent meeting at the University of York in the UK, participants in the project agreed to join their completed distribution maps in a single database, and to combine that multilayered map with two climate models – one based on the minimum expected change and another that anticipates larger climate shifts – to develop six future ecological scenarios for East Africa: two each for the years 2025, 2055 and 2085. These scenarios will inform decisions about setting aside additional reserves in Kenya and Tanzania. “The organization David Western represents has close ties to the stakeholders in Kenya, and therefore there is hope that some of the findings will actually be implemented,” Jetz said. “We sense a clear willingness in Kenya and Tanzania to put more reserves in place to mitigate the impacts of global change.
For that they are looking to scientists for guidance. So we have a situation where good science can lead to significant basic insights and also make a difference for the people and their wildlife. We are very excited to be involved.”

The project is funded by the Liz Claiborne Art Ortenberg Foundation. Other participating institutions include the Missouri Botanical Garden; the University of York, UK; and Clark University.
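The mapping step Jetz describes, overlaying many species' range maps to obtain richness, has a simple core. A toy sketch of that overlay (not the team's actual pipeline, and with made-up presence grids):

import numpy as np

# Hypothetical 4x4 presence/absence grids (1 = the species' ecological
# envelope covers the cell); real maps would be rasters at 10-100 km resolution.
rng = np.random.default_rng(0)
species_ranges = [rng.integers(0, 2, size=(4, 4)) for _ in range(5)]

# Species richness per cell is just the sum of the presence layers.
richness = np.sum(species_ranges, axis=0)
print(richness)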
<urn:uuid:01aab9c5-48cc-47c8-abda-b412a80ac670>
3.828125
1,439
Content Listing
Science & Tech.
38.372333
95,494,069
Earthwatch provides citizens with the opportunity to work alongside leading scientists to combat some of the planet's most pressing environmental issues. With Earthwatch, you'll experience hands-on science in some of the most astounding locations in the world. You'll meet a community of like-minded travelers and return home with stories filled with adventure.

Expeditions include:
- Help gather critical data on the sustainable use of one of Mexico City's last wetlands.
- Join Earthwatch in South Africa to help protect the second largest colony of African penguins on the planet.
- In Northern Scotland, scientists are working to rewild the Scottish Highlands – from planting native trees and plants, to reinstating native...
- Explore interactions between people and chimpanzees and other primates in the rainforest of Uganda to improve human–primate relationships.
- When did ancient Portuguese societies shift to agriculture? Hunter-gatherers and farmers may have coexisted for a brief time here. Unearth t...
- Dig for clues into resource use, sustainability, and long-term impacts on ancient communities.
- How did the people of the Khmer Empire manage a changing climate and what can their resilience teach us today?
- How much can the lowly caterpillar tell us about the world we live in? More than you might imagine.
- What does climate change mean for one of America's most famous national parks?
- How is a national treasure being reshaped by the changing climate? Help scientists search for clues in Acadia National Park.
- Uncover the role sea otters play in maintaining the health of critical seagrass habitat in Southeast Alaska.
- Help scientists to understand the occurrence of giant manta rays in Peru in order to strengthen protections for this species.
- Snorkel in the waters of Coral Bay to help protect the habitat of manta rays and reef sharks.
- What makes a coral reef resilient? On this beautiful island, find out how we can help reefs survive climate change.
- Help conserve wildlife within the Amazon Basin, while seeking pink river dolphins, primates, macaws, caiman, giant river otters, piranha and exotic fish.
- How can we keep shark and ray populations strong? Find answers while exploring some of the world's most beautiful reefs.
- What can we learn about Italy's ancient people from the ruins they left along the coast of Tuscany? Help us dust off clues.
- Scientists expect to observe the greatest effects of global warming in the Arctic. But what, exactly, will these effects be?
<urn:uuid:402a692c-f84c-46a4-8cb5-cdf5255ca1bc>
2.765625
615
Content Listing
Science & Tech.
52.199082
95,494,080
UVM physicists discover how to create the thinnest liquid films ever

A team of physicists at the University of Vermont has discovered a fundamentally new way surfaces can get wet. Their study may allow scientists to create the thinnest films of liquid ever made—and engineer a new class of surface coatings and lubricants just a few atoms thick.

"We've learned what controls the thickness of ultra-thin films grown on graphene," says Sanghita Sengupta, a doctoral student at UVM and the lead author on the new study. "And we have a good sense now of what conditions—like knobs you can turn—will change how many layers of atoms will form in different liquids."

The results were published June 8 in the journal Physical Review Letters.

A third way

To understand the new physics, imagine what happens when rain falls on your new iPhone: it forms beads on the screen. They're easy to shake off. Now imagine your bathroom after a long shower: the whole mirror may be covered with a thin layer of water.

"These are two extreme examples of the physics of wetting," says UVM physicist Adrian Del Maestro, a co-author on the new study. "If interactions inside the liquid are stronger than those between the liquid and surface, the liquid atoms stick together, forming separate droplets. In the opposite case, the strong pull of the surface causes the liquid to spread, forming a thin film."

More than 50 years ago, physicists speculated about a third possibility—a strange phenomenon called "critical wetting," in which atoms of liquid would start to form a film on a surface but then stop building up when the film was just a few atoms thick. These scientists of the 1950s, including the famed Soviet physicist Evgeny Lifshitz, weren't sure whether critical wetting was real, and they certainly didn't think it would ever be observable in the laboratory.

Then, in 2010, the Nobel Prize in physics was awarded to two Russian-born scientists for their groundbreaking experiments with a bizarre form of carbon called graphene. It's a honeycombed sheet of carbon just one atom thick. It's the strongest material in the world and has many quirky qualities that materials scientists have been exploring ever since.

Graphene turns out to be the "ideal surface to test for critical wetting," says Del Maestro—and with it the Vermont team has now demonstrated mathematically that critical wetting is real.

Harnessing the van der Waals force

The scientists explored how three light gases—hydrogen, helium and nitrogen—would behave near graphene. In a vacuum and other conditions, they calculated that a liquid layer of these gases will start to form on the one-atom-thick sheet of graphene. But then the film stops growing when "it is ten or twenty atoms thick," says Valeri Kotov, an expert on graphene in UVM's Department of Physics and the senior author on the study.

The explanation can be found in quantum mechanics. Though a neutral atom or molecule—like the light gases studied by the UVM team—has no overall electric charge, the electrons constantly circling the far-off nucleus (OK, "far-off" only on the scale of an electron) form momentary imbalances on one side of the atom or another. These shifts in electron density give rise to one of the pervasive but weak powers in the universe: the van der Waals force. The attraction it creates between atoms only extends a short distance. Because of the outlandish, perfectly flat geometry of graphene, there is no electrostatic charge or chemical bond to hold the liquid, leaving the puny van der Waals force to do all the heavy lifting.
This is why the liquid attached to the graphene stops attracting additional atoms out of the vapor once the film has grown to be only a few atoms away from the surface. In comparison, even the thinnest layer of water on your bathroom mirror—which is formed by forces far more powerful than the quantum-scale van der Waals effects—would be "in the neighborhood of 10⁹ atoms thick," says Del Maestro; that's 1,000,000,000 atoms thick.

Engineering a surface where this kind of weak force can be observed has proven very challenging. But the explosion of scientific interest in graphene has allowed the UVM scientists to conclude that critical wetting appears to be a universal phenomenon in the numerous forms of graphene now being created, and across the growing family of other two-dimensional materials.

The scientists' models show that, in a vacuum, a suspended sheet of graphene could be manipulated to create a liquid film that stops growing at a thickness of as much as 50 nanometers, down to a thickness of just three nanometers. "What's important is that we can tune this thickness," says Sengupta. By stretching the graphene, doping it with other atoms, or applying a weak electric field nearby, the researchers have evidence that the number of atoms in an ultra-thin film can be controlled. The mechanical adjustment of the graphene could allow real-time changes in the thickness of the liquid film. It might be a bit like turning a "quantum-sized knob," says Nathan Nichols—another UVM doctoral student who worked on the new study—on the outside of an atomic-scale machine in order to change the surface coating on moving parts inside.

Now this team of theoretical physicists—"I'm starting to call what I do dielectric engineering," says Sengupta—is looking for a team of experimental physicists to test their discovery in the lab.

Much of the initial promise of graphene as an industrial product has not yet been realized. Part of the reason is that many of its special properties—like being a remarkably efficient conductor—go away when thick layers of other materials are stuck to it. But with control of critical wetting, engineers might be able to customize nanoscale coatings that wouldn't blot out the desired properties of graphene but could, says Adrian Del Maestro, offer lubrication and protection for "next-generation wearable electronics and displays."
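The "short reach" of the van der Waals attraction can be made concrete with a textbook scaling argument (this estimate is illustrative background, not taken from the paper). The non-retarded van der Waals potential of an atom at height $d$ above a flat surface falls off as the inverse cube of the distance,

\[
U(d) \approx -\frac{C_3}{d^{3}},
\]

where $C_3$ is a material-dependent coefficient. An atom arriving on top of a film ten layers thick therefore feels only about

\[
\frac{U(10a)}{U(a)} = 10^{-3}
\]

of the substrate's pull on the first layer, with $a$ one atomic spacing. Once that residual attraction drops below the liquid's own cohesive energy scale, adding further layers no longer pays, and the film stops growing at a thickness of a few atoms.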
<urn:uuid:0a2bce84-e9c0-4beb-bb9f-62a7f83ae374>
3.546875
1,326
News Article
Science & Tech.
42.146802
95,494,106
Press Release Summary: Researchers at the Joint Quantum Institute (JQI) created synthetic magnetic fields for ultracold gas atoms. In effect, this tricks neutral atoms into acting as if they were electrically charged particles subjected to a real magnetic field. The demonstration opens up possibilities for exploring complex natural phenomena involving charged particles in magnetic fields and may also contribute to a new form of quantum computing.

Original Press Release: JQI Researchers Create 'Synthetic Magnetic Fields' for Neutral Atoms

Achieving an important new capability in ultracold atomic gases, researchers at the Joint Quantum Institute, a collaboration of the National Institute of Standards and Technology (NIST) and the University of Maryland, have created "synthetic" magnetic fields for ultracold gas atoms, in effect "tricking" neutral atoms into acting as if they are electrically charged particles subjected to a real magnetic field. The demonstration, described in the latest issue of the journal Nature, not only paves the way for exploring the complex natural phenomena involving charged particles in magnetic fields, but may also contribute to an exotic new form of quantum computing.

As researchers have become increasingly proficient at creating and manipulating gaseous collections of atoms near absolute zero, these ultracold gases have become ideal laboratories for studying the complex behavior of material systems. Unlike ordinary crystalline materials, they are free of obfuscating properties, such as impurity atoms, that exist in normal solids and liquids. However, studying the effects of magnetic fields is problematic because the gases are made of neutral atoms and so do not respond to magnetic fields the way charged particles do. So how would you simulate, for example, such an important exotic phenomenon as the quantum Hall effect, in which electrons can "divide" into quasiparticles carrying only a fraction of the electron's electric charge?

[Figure caption: A harbinger of the synthetic magnetic fields is the formation of vortices, visible as spots. These spots, whose number increases with increasing synthetic field, mark the points about which the atoms swirl in a whirlpool-like motion. The measurement units in each panel indicate the size of the external magnetic field gradient applied to the gas of atoms, with larger external fields producing more vortices.]

The answer Ian Spielman and his colleagues came up with is a clever physical trick to make the neutral atoms behave in a way that is mathematically identical to how charged particles move in a magnetic field. A pair of laser beams illuminates an ultracold gas of rubidium atoms already in a collective state known as a Bose-Einstein condensate. The laser light ties the atoms' internal energy to their external (kinetic) energy, modifying the relationship between their energy and momentum. Simultaneously, the researchers expose the atoms to a real magnetic field that varies along a single direction, so that the alteration also varies along that direction. In a strange inversion, the laser-illuminated neutral atoms react to the varying magnetic field in a way that is mathematically equivalent to the way a charged particle responds to a uniform magnetic field. The neutral atoms experience a force in a direction perpendicular to both their direction of motion and the direction of the magnetic field gradient in the trap.
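The claim of a "mathematically identical" description can be made concrete with a standard minimal-coupling sketch (this is generic textbook form; the press release itself gives no equations). After the laser dressing, the atoms' effective Hamiltonian takes the shape

\[
H_{\mathrm{eff}} = \frac{\left(\mathbf{p} - \mathbf{A}^{*}(\mathbf{r})\right)^{2}}{2m},
\]

which is exactly the Hamiltonian of a charged particle in a magnetic vector potential. When the engineered $\mathbf{A}^{*}$ varies in space, the atoms behave as if they feel an effective field and a Lorentz-like force,

\[
\mathbf{B}^{*} = \nabla \times \mathbf{A}^{*},
\qquad
\mathbf{F} = \mathbf{v} \times \mathbf{B}^{*},
\]

perpendicular to their velocity: the force described above, and the reason vortices appear in the condensate.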
By fooling the atoms in this fashion, the researchers created vortices in which the atoms swirl in whirlpool-like motions in the gas clouds. The vortices are the "smoking gun," Spielman says, for the presence of synthetic magnetic fields. Previously, other researchers had physically spun gases of ultracold atoms to simulate the effects of magnetic fields, but rotating gases are unstable and tend to lose atoms at the highest rotation rates.

In their next step, the JQI researchers plan to partition a nearly spherical system of 20,000 rubidium atoms into a stack of about 100 two-dimensional "pancakes" and increase their currently observed 12 vortices to about 200 per pancake. At a one-vortex-per-atom ratio, they could observe the quantum Hall effect and control it in unprecedented ways. In turn, they hope to coax atoms to behave like a class of quasiparticles known as "non-abelian anyons," a required component of "topological quantum computing," in which anyons dancing in the gas would perform logical operations based on the laws of quantum mechanics.

* Y.J. Lin, R.L. Compton, K. Jimenez-Garcia, J.V. Porto and I.B. Spielman. Synthetic magnetic fields for ultracold neutral atoms. Nature, Dec. 3, 2009.

Media Contact: Ben Stein, email@example.com, (301) 975-3097
<urn:uuid:d2e28957-f449-4678-9f84-db4ac4116b2e>
3.203125
967
News (Org.)
Science & Tech.
31.220193
95,494,121
Zoology

Zoon = animal; logos = study of. Zoology is the study of animal diversity: the way animals function, live, reproduce and interact.

History and Evolution

Animal life existed more than 600 million years ago. All forms of life descended from a common ancestor through a branching of lineages. An opposing argument, that different forms of life arose independently and descended to the present in linear, unbranched genealogies, has been refuted by comparative studies of organismal form, cell structure and macromolecular structure.
<urn:uuid:9ddf6e32-677f-466e-be24-8e6b454c5453>
2.625
206
Truncated
Science & Tech.
10.91543
95,494,128
Notes, quizzes, blog posts and videos on C++ programming for computer science engineering. C++ Programming is a free app for beginners who want to learn from scratch. It is a handy notes and e-book app that presents the basic concepts of C++ alongside example programs and their output, making it easier to become a programmer. It is also aimed at computer science engineering, IT, BE, B-Tech, BCA, B.Sc. (CS), B.Sc. (IT) and MCA students. The app offers concise C++ tutorials with programs covering the basics of C++: data types, modifiers, storage classes, loops, operators, functions, arrays, control statements, classes and objects, inheritance, abstract classes, polymorphism, encapsulation, overloading (operator and function), multithreading and more.
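As a taste of one topic on that list, function overloading, here is a minimal, self-contained example of the kind of program such a tutorial typically walks through (the function names and values are illustrative, not taken from the app):

```cpp
#include <iostream>

// Function overloading: several functions share one name and are
// distinguished by their parameter lists at compile time.
int area(int side) { return side * side; }                        // square
int area(int width, int height) { return width * height; }       // rectangle
double area(double radius) { return 3.14159 * radius * radius; } // circle

int main() {
    std::cout << area(4) << '\n';      // calls area(int): prints 16
    std::cout << area(3, 5) << '\n';   // calls area(int, int): prints 15
    std::cout << area(2.0) << '\n';    // calls area(double): prints 12.5664
    return 0;
}
```

The compiler picks the overload whose parameters best match the argument types, which is why `area(2.0)` selects the `double` version rather than converting to `int`.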
<urn:uuid:e52f1e91-157a-4962-b994-682df468ef9b>
2.671875
174
Product Page
Software Dev.
59.401102
95,494,135
Mathis, Jeremy T. Ocean Acidification Research Center, University of Alaska, Fairbanks, Alaska. Last reviewed: April 2018

Contents: The CO2 problem; Why is the ocean more acidic?; What are the effects of a more acidic ocean?; Ocean acidification effects and outlook; Links to Primary Literature; Additional Readings

Ocean acidification is the decrease in the pH of the ocean over time, as carbon dioxide (CO2) from the atmosphere is absorbed by and dissolves in seawater. Since the Industrial Revolution, rising carbon dioxide levels in the atmosphere and increased absorption of CO2 by the oceans have created an unprecedented ocean acidification (OA) phenomenon that is altering pH levels and threatening a number of marine ecosystems (Fig. 1). Although the average oceanic pH can vary on interglacial time scales, the changes are usually of the order of about 0.002 unit per 100 years; however, the current observed rate of change is about 0.1 unit per 100 years, or roughly 50 times faster. Even more disconcerting, regional factors such as coastal upwelling, changes in riverine and glacial discharge rates, and loss of sea ice have created OA "hotspots" where changes are occurring at even faster rates. See also: Acid and base; Carbon dioxide; Glaciology; Marine ecology; pH; Sea ice; Upwelling

The content above is only an excerpt.
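The "roughly 50 times faster" figure follows directly from the two rates quoted in the excerpt:

\[
\frac{0.1\ \text{pH unit per century}}{0.002\ \text{pH unit per century}} = 50.
\]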
<urn:uuid:e2c721b3-8954-464c-9800-d1f486eb254f>
3.359375
595
Truncated
Science & Tech.
21.587322
95,494,146
A new way of modifying the dipole moment of cholesteric liquid crystals allows researchers to select between the different band-edge modes experimentally for the first time.

Since lasers were first developed, the demand for more adaptable lasers has only increased. Chiral nematic liquid crystals (CLCs), also known as cholesterics, are an emerging class of lasing devices that are poised to shape how lasers are used in the future because of their low thresholds, ease of fabrication, and ability to be tuned across wider swaths of the electromagnetic spectrum. New work on how to select band-edge modes in these devices, which determine the lasing energy, may shed light on how lasers of the future will be tuned.

The laser cavities are formed of a chiral nematic liquid crystal doped with a fluorescent dye. The liquid crystal creates a photonic bandgap in the laser cavity. An international team of researchers demonstrated a technique that allows the laser to electrically switch emission between the long- and short-wavelength edges of the photonic bandgap simply by applying a voltage of 20 V. They report their work this week in Applied Physics Letters, from AIP Publishing.

"Our contribution is to find a way to change the orientation of the transition dipole moment of the gain medium [the fluorescent dye] in the CLC structure and achieve mode selection between long- and short-wavelength edges without tuning the position of the photonic bandgap," said Chun-Ta Wang, an author of the paper. "We also demonstrated a polymer-stabilized CLC system, which improved the laser's stability, lasing performance and threshold voltage."

CLC lasers work through a collection of liquid crystals that self-assemble into helix-shaped patterns, which then act as the laser's cavity. These helices are chiral, meaning they corkscrew in the same direction, which allows them to be tuned across a wide range of wavelengths. While many lasers, like the laser diodes used in DVD players, are fixed at one color, many CLC lasers can be tuned to multiple colors in the visible spectrum and beyond.

In addition to tuning the lasing wavelength, one hot area of inquiry is finding ways of switching the lasing mode from one edge of the photonic bandgap to the other. Some attempts so far have suggested it is possible to switch between the long- and short-wavelength edges. The work of Wang's team demonstrates that this mode switching is possible by applying a direct-current electric field to the fluorescent dye, altering its order parameter without affecting the spectral position of the bandgap.

The researchers tested three mixtures by varying the ratios of liquid crystals and dyes and recording their laser outputs through fiber-optic spectrometry. They found that all the samples could shift from lasing at the short-wavelength edge to lasing at the long-wavelength edge, a shift of nearly 40 nanometers, with as little as 20 volts. Moreover, a polymer-stabilized planar CLC sample was able to leverage its extra structural stability to reversibly switch between the two modes, and it showed improved performance and threshold voltage.

"There have been many calculations for how to achieve this phenomenon in this field, but to our knowledge, this is the first time it was proven experimentally," Wang said.

Looking ahead, Wang said widespread use of CLC lasers is still slated for the future.
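For context, the location of the two band edges follows from standard cholesteric optics; this relation is textbook background, not a formula given in the article. For light propagating along the helix axis, a CLC with pitch $p$ and ordinary and extraordinary refractive indices $n_o$ and $n_e$ reflects a band whose edges sit at

\[
\lambda_{\text{short}} = n_o\,p, \qquad \lambda_{\text{long}} = n_e\,p, \qquad \Delta\lambda = (n_e - n_o)\,p,
\]

and band-edge lasing occurs at these two wavelengths. With illustrative values $\Delta n \approx 0.08$ and $p \approx 500$ nm, the edges are separated by about 40 nm, consistent in scale with the mode shift reported here.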
In the meantime, he and his team are hoping to expand our understanding of electrically assisted band-edge mode selection in other types of photonic crystals.

The article, "Electrically assisted bandedge mode selection of photonic crystal lasing in chiral nematic liquid crystals," is authored by Chun-Ta Wang, Chun-Wei Chen, Tzu-Hsuan Yang, Inge Nys, Cheng-Chang Li, Tsung-Hsien Lin, Kristiaan Neyts and Jeroen Beeckman. The article appeared in Applied Physics Letters Jan. 22, 2018 (DOI: 10.1063/1.5010880) and can be accessed at http://aip.

ABOUT THE JOURNAL: Applied Physics Letters features concise, rapid reports on significant new findings in applied physics. The journal covers new experimental and theoretical research on applications of physics phenomena related to all branches of science, engineering, and modern technology. See http://apl.

Julia Majors | EurekAlert!
<urn:uuid:ea23a954-e314-480a-a3c8-a0c1dedb06b7>
3.171875
1,563
Content Listing
Science & Tech.
41.229087
95,494,151