Relativity is a physical theory, introduced by Albert Einstein, that discards the concept of absolute motion and instead treats only relative motion between two systems or frames of reference. One consequence of the theory is that space and time are no longer viewed as separate, independent entities but rather are seen to form a four-dimensional continuum called space-time. Full comprehension of the mathematical formulation of the theory can be attained only through a study of certain branches of mathematics, e.g., tensor calculus. Both the special and general theories have been established and accepted into the structure of physics. Einstein also sought unsuccessfully for many years to incorporate the theory into a unified field theory valid also for subatomic and electromagnetic phenomena.
The modern theory is an extension of the simpler Galilean or Newtonian concept of relativity, which holds that the laws of mechanics are the same in one system as in another system in uniform motion relative to it. Thus, it is impossible to detect the motion of a system by measurements made within the system, and such motion can be observed only in relation to other systems in uniform motion. The older concept of relativity assumes that space and time are correctly measured separately and regards them as absolute and independent realities. The system of relativity and mechanics of Galileo and Newton is perfectly self-consistent, but the addition of Maxwell's theory of electricity and magnetism to the system leads to fundamental theoretical difficulties related to the problem of absolute motion.
It seemed for a time that the ether, an elastic medium thought to be present throughout space, would provide a method for the measurement of absolute motion, but certain experiments in the late 19th cent. gave results unexplained by or contradicting Newtonian physics. Notable among these were the attempts of A. A. Michelson and E. W. Morley (1887) to measure the velocity of the earth through the supposed ether as one might measure the speed of a ship through the sea. The null result of this measurement caused great confusion among physicists, who made various unsuccessful attempts to explain the result within the context of classical theory.
The validity of the classical concepts of absolute and independent time and space was challenged by H. A. Lorentz and others. Since absolute motion cannot be confirmed by objective measurement, Einstein suggested that it be discarded from physical reasoning; he explained the results of the Michelson-Morley experiment by means of the special relativity theory, which he enunciated in 1905. This theory takes the hypothesis that the laws of nature are the same in different uniformly moving systems and applies it also to the propagation of light, so that the measured speed of light is constant for all observers regardless of the motion of the observer or of the source of the light. Einstein deduced from these hypotheses their full logical consequences and reformulated the mathematical equations of physics, basing them in part on equations of H. A. Lorentz (see Lorentz contraction) by which measurements made in one uniformly moving system can be correlated with measurements in another system if the velocity of one relative to the other is known.
The theory resolves the conflict between Newton's mechanics and Maxwell's electrodynamics by introducing fundamental changes in Newton's theory. In most phenomena of ordinary experience the results obtained from the application of the special theory approximate those based on Newtonian dynamics, but the results deviate greatly for phenomena occurring at velocities approaching the speed of light. In innumerable cases where the results predicted by these theories are incompatible, experimental evidence supports the Einstein theory. Among its assertions and consequences are the propositions that the maximum velocity attainable in the universe is that of light; that mass and energy are equivalent and interchangeable properties (this is spectacularly confirmed by nuclear fission, on which the atomic bomb is based); that objects appear to contract in the direction of motion; that the rate of a moving clock seems to decrease as its velocity increases; that events that appear simultaneous to an observer in one system may not appear simultaneous to an observer in another system; and that, since absolute time is excluded from physical reasoning because it cannot be measured, the results of observers in different systems are equally correct.
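The quantitative heart of several of these assertions is the Lorentz factor γ = 1/√(1 − v²/c²), which governs time dilation and length contraction. The following minimal Python sketch (an illustration added here, not part of the original article) shows how the relativistic correction is negligible at everyday speeds, recovering Newtonian dynamics, but grows without bound as v approaches c:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def lorentz_factor(v: float) -> float:
    """Gamma = 1 / sqrt(1 - v^2/c^2) for a speed v in m/s."""
    beta = v / C
    if not 0 <= beta < 1:
        raise ValueError("speed must be non-negative and below c")
    return 1.0 / math.sqrt(1.0 - beta * beta)

def time_dilation(proper_time: float, v: float) -> float:
    """Elapsed time a stationary observer measures for a clock moving at v."""
    return proper_time * lorentz_factor(v)

# At everyday speeds the correction is negligible (the Newtonian limit)...
print(lorentz_factor(300.0))    # roughly 1 + 5e-13 for an airliner-like speed
# ...but a clock moving at 0.6c runs slow by a factor of about 1.25.
print(lorentz_factor(0.6 * C))
```

The same factor governs length contraction (a moving rod appears shortened by 1/γ) and the relativity of simultaneity discussed below.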
Einstein expanded the special theory of relativity into a general theory (completed c.1916) that applies to systems in nonuniform (accelerated) motion as well as to systems in uniform motion. The general theory is principally concerned with the large-scale effects of gravitation and therefore is an essential ingredient in theories of the universe as a whole, or cosmology. The theory recognizes the equivalence of gravitational and inertial mass. It asserts that material bodies produce curvatures in space-time that form a gravitational field and that the path of a body in the field is determined by this curvature. The geometry of a given region of space and the motion in the field can be predicted from the equations of the general theory.
Details of the motions of the planet Mercury had long puzzled astronomers; Einstein's computations explained them. He stated that the path of a ray of light is deflected by a gravitational field; observations of starlight passing near the sun, first made by A. S. Eddington during an eclipse of the sun in 1919, confirmed this. He predicted that in a gravitational field spectral lines of substances would be shifted toward the red end of the spectrum. This has been confirmed by observation of light from white dwarf stars. Further confirmation has been obtained in more recent years from precision measurements using artificial satellites, the Viking lander on Mars, and Gravity Probe B (designed specifically to test the theory) as well as from detailed observations of pulsars.
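The light-bending result Eddington confirmed can be checked with the standard general-relativistic formula for the deflection of a grazing ray, α = 4GM/(c²R). The short Python sketch below (added here for illustration, using commonly quoted values for the solar constants) reproduces the famous figure of about 1.75 arcseconds:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg
R_SUN = 6.957e8      # solar radius, m

def deflection_arcsec(mass: float, radius: float) -> float:
    """General-relativistic deflection angle 4GM/(c^2 R), in arcseconds,
    for a light ray grazing a body at closest-approach radius R."""
    angle_rad = 4.0 * G * mass / (C**2 * radius)
    return math.degrees(angle_rad) * 3600.0

print(round(deflection_arcsec(M_SUN, R_SUN), 2))  # ≈ 1.75 arcseconds
```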
- See A. Einstein, The Meaning of Relativity (6th ed. 1956) and, with others, The Principle of Relativity (1923, repr. 1958; a collection of original papers on the theory).
- Relativity for the Million (1962).
- The Special Theory of Relativity (1965).
- Einstein's Legacy (1986).
- Olivier Darrigol, "The Electrodynamic Origins of Relativity Theory," Historical Studies in the Physical and Biological Sciences, ...
Arrays, sets, and dictionaries are called collections, because they collect values together in one place.
If you want to create an empty collection, just write its type followed by opening and closing parentheses. For example, we can create an empty dictionary with strings for keys and values like this:
var teams = [String: String]()
We can then add entries later on, like this:
teams["Paul"] = "Red"
Similarly, you can create an empty array to store integers like this:
var results = [Int]()
The exception is creating an empty set, which is done differently:
var words = Set<String>()
var numbers = Set<Int>()
This is because Swift has special shorthand syntax only for dictionaries and arrays; other types, such as sets, must use the angle-bracket syntax.
If you wanted, you could create arrays and dictionaries with similar syntax:
var scores = Dictionary<String, Int>()
var results = Array<Int>()
Secrets behind high temperature superconductors revealed
Scientists from Queen Mary, University of London and the University of Fribourg (Switzerland) have found evidence that magnetism is involved in the mechanism behind high temperature superconductivity.
23 February 2009
Writing in the journal Nature Materials, Dr Alan Drew from Queen Mary's Department of Physics and his colleagues at the University of Fribourg report on the investigation of a new high temperature superconductor, the so-called oxypnictides. They found that these exhibit some striking similarities with the previously known copper-oxide high temperature superconductors: in both cases superconductivity emerges from a magnetic state. Their results go some way to explaining the mechanisms behind high temperature superconductors.
Superconductors are materials that can conduct electricity with no resistance, but only at low temperatures. High temperature superconductors were first discovered in 1986 in copper-oxides, which increased the operational temperature of superconductors by more than 100°C, to -130°C and opened up a wealth of applications. The complex fundamental physics behind these high temperature superconductors has, however, remained a mystery to scientists.
Dr Drew said “Last year, a new class of high-temperature superconductor was discovered that has a completely different make-up to the ones previously known - containing layers of Arsenic and Iron instead of layers of Copper and Oxygen.”
"Our hope is that by studying them both together, we may be able to resolve the underlying physics behind both types of superconductor and design new superconducting materials, which may eventually lead to even higher temperature superconductors."
Professor Bernhard, of the University of Fribourg, added: "Despite the mysteries of high-temperature superconductivity, their applications are wide-ranging. One exciting application is using superconducting wire to provide lossless power transmission from power stations to cities. Superconducting wire can carry a much higher current density than existing copper wire and, being lossless, is energy saving."
An electrical current flowing round a loop of superconducting wire can also continue indefinitely, producing some of the most powerful electromagnets known to man. These magnets are used in MRI scanners, to ‘float’ the MagLev train, and to steer the proton beam of the Large Hadron Collider (LHC) at CERN. Envisaged future applications of superconductors exist also in ultrafast electronic devices and in quantum computing.
For media information, contact: Neha Okhandiar
Public Relations Manager
Queen Mary University of London
Type: Function
Object: GroupObject
Library: display.*
Return value: none
Revision: 2018.3333
Keywords: insert, group insert
See also: Group Programming (guide)
Inserts an object into a group.
Inserting a display object into a group also removes the object from its current group (objects cannot be in multiple groups). All display objects are part of the stage object when first created. At this time, Corona has only one stage, which is the entire screen area.
group:insert( [index,] child [, resetTransform] )
index (optional)
Number. Inserts child at index into group, shifting up other elements as necessary. The default value of index is n+1, where n is the number of children in the group.
An easy way to move an object above all its siblings (top) is to re-insert it:
object.parent:insert( object ).
If a group has 3 display objects:
- group[1] is at the bottom of the group.
- group[2] is in the middle of the group.
- group[3] is at the top of the group.
Objects at the higher index numbers will be displayed on top of objects with lower index numbers (if the objects overlap).
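To make the index and stacking behaviour concrete, here is a hedged Lua sketch (it assumes the Corona/Solar2D runtime and its display API, so it only runs inside the simulator; the rectangle positions are illustrative) combining default insertion order, re-insertion to move an object to the top, and insertion at index 1 to send an object to the bottom:

```lua
local group = display.newGroup()

-- Three overlapping rectangles; later inserts land on top by default.
local red   = display.newRect( 50, 50, 80, 80 )
local green = display.newRect( 70, 70, 80, 80 )
local blue  = display.newRect( 90, 90, 80, 80 )
group:insert( red )
group:insert( green )
group:insert( blue )   -- blue now draws above green and red

-- Re-inserting an object moves it above all of its siblings...
group:insert( red )    -- red is now on top

-- ...while inserting at index 1 sends an object to the bottom.
group:insert( 1, blue )
print( group[1] == blue, group[group.numChildren] == red )  -- true  true
```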
child (required)
DisplayObject. Object to be inserted into the group.
resetTransform (optional)
Boolean. Determines what happens to child's transform. When false, child's local position, rotation, and scale properties are preserved, except that the local origin is now relative to the new parent group, not its former parent. When true, child's transform is reset (i.e. the x, y, rotation, xScale, and yScale properties of child are reset to 0, 0, 0, 1, and 1, respectively). The default value for resetTransform is false.
local txt = display.newText( "Hello", 0, 0 )
local g1 = display.newGroup()
local g2 = display.newGroup()

-- Insert text object into g1
g1:insert( txt )

-- Insert same text object into g2
g2:insert( txt )

print( "g1: " .. tostring(g1) ) -- prints nil
print( "g2: " .. tostring(g2) ) -- prints textObject
print( "number of children in g1 and g2: " .. g1.numChildren, g2.numChildren )
In 2013, during a visit by French president François Hollande to Morocco’s capital Rabat, a deal was made in secret for France to build a satellite for the Moroccan government. On November 7, 2017, four years after this deal was signed, Mohammed VI-A, Africa’s first high-resolution imaging satellite, was launched, giving Morocco a new kind of power in North Africa.
The launch of Mohammed VI-A reflects a new way nations in Africa are growing their economic, social and military capabilities. Instead of conforming to the status quo of the past or accepting one-sided trade and geopolitical agreements, they are turning to space. For Morocco, the satellite, which is nominally to be used for mapping, natural disaster management and more, raised eyebrows in Algeria and Spain over its potential for spying as well.
Several other countries in Africa are also making moves in space. South Africa orbited its first satellite in 1999 and recently launched the continent’s first private satellite, developed in part by high-school girls. (South Africa was also home to one of NASA’s stations for its Deep Space Network, built in the 1960s.) Ethiopia opened East Africa’s first observatory in 2015 and has set a timeline of launching its own satellite within a few years. Nigeria has launched several since 2003 and is planning to launch Africa’s first nanosatellite. Egypt will loft a new Russian-built satellite in 2019, and in 2016 the African Union approved a proposal to connect the different space agencies operating throughout the continent.
FOREIGN COUNTRIES EYE THE GRAND PRIZE
While countries in Africa pursue their own individual plans for space, foreign nations continue to play an important role in helping Africa go orbital. The U.S., Japan, China, India and Russia are offering their know-how and infrastructure throughout the continent. For example, in June 2017, Ghana launched its first CubeSat satellite, called GhanaSat-1, to crack down on illegal mining and theft of resources. While the probe was designed by Ghana’s All Nations University College, it was launched thousands of miles away, from NASA’s Kennedy Space Center in Florida. And Japan’s space agency, JAXA, provided training and resources to move the project forward.
In Nigeria, meanwhile, China is helping turn the capital city Abuja’s space dreams into reality. It began in 2007 when Chinese engineers built and launched a commercial satellite for the African nation—the first time China had done that for another nation. It followed with a communications satellite in 2011 and in 2016 entertained a delegation from Nigeria to talk about logistics and investment for the country’s plan to send an astronaut into space in the 2030s. India has launched four satellites for Algeria, while Russia launched a satellite for Egypt in 2014 and is helping to develop a second satellite.
While experienced spacefaring nations are trying to extend their influence, African nations themselves could use their increasing sophistication with space technology to grow their own influence, hoping to change the continental balance of power. For Morocco, Mohammed VI-A is both a public and a private tool—a way to map areas for agriculture and disaster management, and a way to assess military bases and troop numbers in neighboring countries. But the nation may well go beyond this and offer its existing and future capabilities to its neighbors for both political and economic ends. Rabat could, in principle, offer imagery of protests by dissidents or of areas that might have untapped diamond, gold or oil deposits. Through its satellite program, Morocco could thus become far more influential across Africa.
As African nations gain sophisticated space capabilities, they might even end up competing with their more experienced counterparts. For example, although the African Union has been unable to launch a continent-wide space agency, it can begin setting continent-wide space goals, such as having a network of African internet satellites in service by 2035. That could bring it into conflict with the U.S. company SpaceX, which has proposed its own internet satellites as early as 2019.
In short, what African nations expect to achieve in space could influence the geopolitical agenda of countries across Asia, North America and Europe. For every satellite and craft that is launched, an array of nations will be affected positively and negatively. In that way, space is not just changing the balance of power in Africa, but could ultimately be changing the destinies of entire countries.
The geysers of Yellowstone National Park owe their existence to the "Yellowstone hotspot," a region of molten rock buried deep beneath Yellowstone, geologists have found.
But how hot is this "hotspot," and what's causing it?
In an effort to find out, Derek Schutt of Colorado State University and Ken Dueker of the University of Wyoming took the hotspot's temperature.
The scientists published results of their research, funded by the National Science Foundation (NSF)'s Division of Earth Sciences, in the August 2008 issue of the journal Geology.
"Yellowstone is located atop of one of the few large volcanic hotspots on Earth," said Schutt. "But though the hot material is a volcanic plume, it's cooler than others of its kind, such as one in Hawaii."
When a supervolcano last erupted at this spot more than 600,000 years ago, its plume covered half of today's United States with volcanic ash. Details of the cause of the Yellowstone supervolcano's periodic eruptions through history are still unknown.
Thanks to new seismometers in the Yellowstone area, however, scientists are obtaining new data on the hotspot.
Past research found that in rocks far beneath southern Idaho and northwestern Wyoming, seismic energy from distant earthquakes slows down considerably.
Using the recently deployed seismometers, Schutt and Dueker modeled the effects of temperature and other processes that affect the speed at which seismic energy travels. They then used these models to make an estimate of the Yellowstone hotspot's temperature.
They found that the hotspot is "only" 50 to 200 degrees Celsius hotter than its surroundings.
"Although Yellowstone sits above a plume of hot material coming up from deep within the Earth, it's a remarkably 'lukewarm' plume," said Schutt, comparing Yellowstone to other plumes.
Although the Yellowstone volcano's continued existence is likely due to the upwelling of this hot plume, the plume may have become disconnected from its heat source in Earth's core.
"Disconnected, however, does not mean extinct," said Schutt. "It would be a mistake to write off Yellowstone as a 'dead' volcano. A hot plume, even a slightly cooler one, is still hot."
A characteristic feature of L. monocytogenes is its ability to grow at refrigeration temperatures and in the presence of high concentrations of salt—traditional food preservation techniques, which arrest the growth of most other pathogens.
Work in the Sleator lab has shown that the bacterium protects itself from such stresses by twisting into a protective corkscrew type shape in an effort to reduce its exposure to the stress—in the same way a human might wrap up tight—hugging the core to reduce the effects of the cold.
Furthermore, Sleator and colleagues have identified a single point mutation (out of a total of 3 million or so nucleotides that constitute the entire listerial genome), which dramatically improves the growth of the pathogen in the refrigerator.
The research paper, "A single point mutation in the listerial betL σA-dependent promoter leads to improved osmo- and chill-tolerance and a morphological shift at elevated osmolarity," will be published in the November/December 2013 issue of Bioengineered. It is available open access ahead of press: http://www.landesbioscience.com/journals/bioe/article/24094/
Sleator claims that this mutation represents "a double edged sword;" "from a food safety perspective, a single point mutation with the potential to induce such dramatic shifts in cell growth and survival at low temperatures—making an already dangerous pathogen even more formidable—raises significant food-safety concerns which need to be addressed." However, from a synthetic biology point of view, such a boosted-stress resistance gene represents a useful BioBrick (or building block) for the design of more physiologically robust probiotics or, indeed, plants that are more resistant to cold arid conditions.
Published by Landes Bioscience since 2010, Bioengineered publishes relevant and high-impact original research with a special focus on genetic engineering that involves the generation of recombinant strains and their metabolic products for beneficial applications in food, medicine, industry, environment and bio-defense. Established in 2002, Landes Bioscience is an Austin, Texas-based publisher of biology research journals and books. For more information on Landes Bioscience, please visit http://www.landesbioscience.com/.
Electrical Potential MCQs Quiz Online PDF Download
Practice electrical potential MCQs with this A-level physics test for online course learning and test prep. The AS-level physics quiz has multiple choice questions (MCQs) with answers on electrical potential.
A sample GCE physics practice-test MCQ: if a charge is placed at infinity, its potential is (options: zero, infinite, 1, and -1). These problem-solving questions suit viva, competitive exam prep, and interviews, and come with an answer key. A free study guide is available for practising the electrical potential quiz questions online.
MCQs on Electrical Potential Quiz PDF Download
MCQ. If a charge is placed at infinity, its potential is
MCQ. If we move a positive charge to a positive plate, then the potential energy of the charge is
- remains constant
MCQ. Graph of potential energy against distance is
- straight line
MCQ. In an electric field, energy per unit positive charge is
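These questions all rest on the point-charge potential V = kq/r, which tends to zero as r approaches infinity (the first question) and which defines potential as energy per unit positive charge (the last one). A minimal Python sketch, added here for illustration with the standard Coulomb constant:

```python
K = 8.988e9  # Coulomb constant, N*m^2/C^2

def point_charge_potential(q: float, r: float) -> float:
    """Electric potential V = kq/r of a point charge q (coulombs)
    at distance r (metres)."""
    return K * q / r

q = 1e-9  # a 1 nC charge
for r in (0.1, 1.0, 10.0, 1e6):
    # V falls off as 1/r, approaching zero as r -> infinity.
    print(r, point_charge_potential(q, r))

# Potential is potential energy per unit positive charge: U = qV.
test_charge = 2e-9
U = test_charge * point_charge_potential(q, 1.0)
print(U)  # potential energy in joules of the test charge at r = 1 m
```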
The Stirling cycle is a thermodynamic cycle that describes the general class of Stirling devices. This includes the original Stirling engine that was invented, developed and patented in 1816 by Robert Stirling with help from his brother, an engineer.
The ideal Otto and Diesel cycles are not totally reversible because they involve heat transfer through a finite temperature difference during the irreversible isochoric/isobaric heat-addition and heat-rejection processes. The irreversibility renders the thermal efficiency of these cycles less than that of a Carnot engine operating within the same limits of temperature. Another cycle that features isothermal heat-addition and heat-rejection processes is the Stirling cycle, which is an altered version of the Carnot cycle in which the two isentropic processes featured in the Carnot cycle are replaced by two constant-volume regeneration processes.
The cycle is reversible, meaning that if supplied with mechanical power, it can function as a heat pump for heating or cooling, and even for cryogenic cooling. The cycle is defined as a closed regenerative cycle with a gaseous working fluid. "Closed cycle" means the working fluid is permanently contained within the thermodynamic system. This also categorizes the engine device as an external heat engine. "Regenerative" refers to the use of an internal heat exchanger called a regenerator which increases the device's thermal efficiency.
The cycle is the same as most other heat cycles in that there are four main processes: compression, heat addition, expansion, and heat removal. However, these processes are not discrete, but rather the transitions overlap.
The Stirling cycle is a highly advanced subject that has defied analysis by many experts for over 190 years, and sophisticated thermodynamics is required to describe it. Professor Israel Urieli writes: "...the various 'ideal' cycles (such as the Schmidt cycle) are neither physically realizable nor representative of the Stirling cycle".
The analytical problem of the regenerator (the central heat exchanger in the Stirling cycle) is judged by Jakob to rank "among the most difficult and involved that are encountered in engineering".
Idealized Stirling cycle thermodynamics
- 1-2 ISOTHERMAL Heat addition (expansion).
- 2-3 ISOCHORIC Heat removal (constant volume).
- 3-4 ISOTHERMAL Heat removal (compression).
- 4-1 ISOCHORIC Heat addition (constant volume).
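Under the ideal-gas assumption, each isothermal leg transfers heat nRT ln(V_max/V_min), and with a perfect regenerator the two isochoric heats cancel internally, so the thermal efficiency equals the Carnot limit 1 − T_c/T_h. The following Python sketch illustrates this (the mole count, temperatures, and volumes are illustrative values, not from the article):

```python
import math

R = 8.314  # universal gas constant, J/(mol*K)

def ideal_stirling(n, t_hot, t_cold, v_min, v_max):
    """Per-cycle energies for the idealized Stirling cycle with a
    perfect regenerator: only the two isothermal legs exchange heat
    with the surroundings."""
    ratio = math.log(v_max / v_min)
    q_in = n * R * t_hot * ratio      # heat added, isothermal expansion (1-2)
    q_out = n * R * t_cold * ratio    # heat rejected, isothermal compression (3-4)
    w_net = q_in - q_out              # net work output per cycle
    return q_in, q_out, w_net, w_net / q_in

q_in, q_out, w_net, eta = ideal_stirling(1.0, 700.0, 300.0, 1e-3, 3e-3)
print(round(eta, 4))                # 0.5714
print(round(1 - 300.0 / 700.0, 4))  # Carnot limit between the same reservoirs: identical
```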
Piston motion variations
Most thermodynamics textbooks describe a highly simplified form of the Stirling cycle consisting of four processes. This is known as the "ideal Stirling cycle", because it is an idealized model, not necessarily an optimized cycle. Theoretically, the ideal cycle does have a high net work output, but it is rarely used in practical applications, in part because other motions are simpler or reduce peak stresses on bearings and other components. For convenience, the designer may instead elect to use piston motions dictated by system dynamics, such as mechanical linkage mechanisms; the efficiency and cycle power of such an implementation are nearly as good as those of the idealized case. A typical piston crank or linkage in a so-called "kinematic" design often results in near-sinusoidal piston motion, and some designs will cause the piston to "dwell" at either extreme of travel.
Many kinematic linkages, such as the well-known "Ross yoke", exhibit near-sinusoidal motion, while others, such as the "rhombic drive", produce more non-sinusoidal motion. The ideal cycle also introduces complications of its own, since it would require somewhat higher piston acceleration and higher viscous pumping losses of the working fluid. In an optimized engine, however, the material stresses and pumping losses would become intolerable only when approaching the ideal cycle and/or at high cycle rates. Other issues include the time required for heat transfer, particularly for the isothermal processes; in an engine whose cycle approaches the ideal cycle, the cycle rate might have to be reduced to address these issues.
In the most basic model of a free piston device, the kinematics will result in simple harmonic motion.
Volume variations

In beta and gamma engines, the phase angle difference between the piston motions is generally not the same as the phase angle of the volume variations. In the alpha Stirling, however, they are the same. The rest of the article assumes sinusoidal volume variations, as in an alpha Stirling with co-linear pistons, a so-called "opposed-piston" alpha device.
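As a sketch of what "sinusoidal volume variations" means for an opposed-piston alpha device, the following assumes invented swept and clearance volumes and a 90-degree phase angle between the two pistons:

```python
import math

# Sinusoidal volume variations for an opposed-piston ("alpha") layout.
# Swept volume, clearance volume, and the 90-degree phase angle are
# illustrative assumptions, not values taken from the article.
V_SWEPT = 100.0      # cm^3 swept by each piston
V_CLEAR = 20.0       # cm^3 clearance volume per space
PHASE = math.pi / 2  # compression piston lags the expansion piston by 90 deg

def expansion_volume(theta):
    """Expansion-space volume at crank angle theta (radians)."""
    return V_CLEAR + 0.5 * V_SWEPT * (1.0 + math.cos(theta))

def compression_volume(theta):
    """Compression-space volume, phase-shifted by PHASE."""
    return V_CLEAR + 0.5 * V_SWEPT * (1.0 + math.cos(theta - PHASE))

def total_volume(theta):
    return expansion_volume(theta) + compression_volume(theta)

# The total volume also varies sinusoidally, but with amplitude
# 0.5 * V_SWEPT * sqrt(2): the sum of two equal sinusoids 90 deg apart.
volumes = [total_volume(math.radians(d)) for d in range(360)]
print(round(min(volumes), 2), round(max(volumes), 2))
```

The peak-to-peak swing of the total volume is `V_SWEPT * sqrt(2)`, smaller than the `2 * V_SWEPT` one might naively expect, precisely because the two sinusoids are out of phase.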
Pressure-versus-volume graph

This type of plot is used to characterize almost all thermodynamic cycles. The result of sinusoidal volume variations is the quasi-elliptical cycle shown in Figure 1. Compared to the idealized cycle, this is a more realistic representation of most real Stirling engines. The four points in the graph indicate the crank angle in degrees.
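The quasi-elliptical loop can be reproduced with a minimal isothermal (Schmidt-style) model: sinusoidal space volumes plus the ideal-gas law at a uniform instantaneous pressure. All numbers here are illustrative assumptions, and the model ignores dead volume in the heat exchangers:

```python
import math

# Minimal Schmidt-style (isothermal) sketch of the quasi-elliptical p-V
# loop.  Temperatures, volumes, and gas charge are invented examples.
T_HOT, T_COLD = 900.0, 300.0     # K
V_CLEAR, V_SWEPT = 2e-5, 1e-4    # m^3
MR = 0.05 * 8.314                # moles of gas times R, in J/K

def state(theta):
    """Total volume (m^3) and pressure (Pa) at crank angle theta."""
    v_exp = V_CLEAR + 0.5 * V_SWEPT * (1 + math.cos(theta))
    v_com = V_CLEAR + 0.5 * V_SWEPT * (1 + math.cos(theta - math.pi / 2))
    # Uniform pressure; each space sits at its own temperature.
    p = MR / (v_exp / T_HOT + v_com / T_COLD)
    return v_exp + v_com, p

points = [state(math.radians(d)) for d in range(360)]

# Net work per cycle is the area enclosed by the p-V loop: integrate
# p dV once around the cycle with the trapezoid rule.
work = sum(
    0.5 * (points[i][1] + points[(i + 1) % 360][1])
    * (points[(i + 1) % 360][0] - points[i][0])
    for i in range(360)
)
print(round(work, 2))
```

Plotting `points` traces a rounded loop rather than the sharp-cornered ideal cycle; the positive enclosed area is the net cycle work, since expansion happens at higher pressure than compression.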
The adiabatic Stirling cycle is similar to the idealized Stirling cycle; however, the four thermodynamic processes are slightly different (see graph above):
- 180° to 270°, pseudo-isothermal expansion. The expansion space is heated externally, and the gas undergoes near-isothermal expansion.
- 270° to 0°, near-constant-volume (or near-isometric or isochoric) heat removal. The gas is passed through the regenerator, thus cooling the gas, and transferring heat to the regenerator for use in the next cycle.
- 0° to 90°, pseudo-isothermal compression. The compression space is intercooled, so the gas undergoes near-isothermal compression.
- 90° to 180°, near-constant-volume (near-isometric or isochoric) heat addition. The compressed air flows back through the regenerator and picks up heat on the way to the heated expansion space.
Particle/mass motion

With the exception of a Stirling thermoacoustic engine, none of the gas particles actually flows through the complete cycle, so this approach is not amenable to further analysis of the cycle. However, it provides an overview and indicates the cycle work.
Figure 2 shows the streaklines which indicate how gas flows through a real Stirling engine. The vertical colored lines delineate the volumes of the engine. From left to right, they are: the volume swept by the expansion (power) piston, the clearance volume (which prevents the piston from contacting the hot heat exchanger), the heater, the regenerator, the cooler, the cooler clearance volume, and the compression volume swept by the compression piston.
Heat-exchanger pressure drop
Also referred to as "pumping losses", the pressure drops shown in Figure 3 are caused by viscous flow through the heat exchangers. The red line represents the heater, green is the regenerator, and blue is the cooler. To properly design the heat exchangers, multivariate optimization is required to obtain sufficient heat transfer with acceptable flow losses. The flow losses shown here are relatively low, and they are barely visible in the following image, which will show the overall pressure variations in the cycle.
Pressure versus crank angle
Figure 4 shows results from an "adiabatic simulation" with non-ideal heat exchangers. Note that the pressure drop across the regenerator is very low compared to the overall pressure variation in the cycle.
Temperature versus crank angle
Figure 5 illustrates the adiabatic properties of a real heat exchanger. The straight lines represent the temperatures of the solid portion of the heat exchanger, and the curves are the gas temperatures of the respective spaces. The gas temperature fluctuations are caused by the effects of compression and expansion in the engine, together with non-ideal heat exchangers which have a limited rate of heat transfer. When the gas temperature deviates above and below the heat exchanger temperature, it causes thermodynamic losses known as "heat transfer losses" or "hysteresis losses". However, the heat exchangers still work well enough to allow the real cycle to be effective, even if the actual thermal efficiency of the overall system is only about half of the theoretical limit.
Cumulative heat and work energy
Figure 6 shows a graph of alpha-type Stirling engine data, where 'Q' denotes heat energy and 'W' denotes work energy. The blue dotted line shows the work output of the compression space. Where the trace dips down, work is done on the gas as it is compressed; during the expansion part of the cycle, some work is actually done on the compression piston, as reflected by the upward movement of the trace. At the end of the cycle this value is negative, indicating that the compression piston requires a net input of work. The blue solid line shows the heat flowing out of the cooler heat exchanger. Over the cycle, the heat rejected by the cooler and the work done on the compression piston have the same magnitude, which is consistent with the zero net heat transfer of the regenerator (solid green line). As would be expected, the heater and the expansion space both have positive energy flow. The black dotted line shows the net work output of the cycle; this trace ends higher than it started, indicating that the heat engine converts energy from heat into work.
- Robert Sier (1999). Hot Air Caloric and Stirling Engines. Vol. 1: A History (1st revised ed.). L.A. Mair. ISBN 0-9526417-0-4.
- Organ, "The Regenerator and the Stirling Engine", p. xxii, Foreword by Urieli
- Organ, "The Regenerator and the Stirling Engine", p. 7
- Jakob, M. (1957). Heat Transfer, Vol. II. John Wiley, New York, USA, and Chapman and Hall, London, UK
- A. Romanelli, "Alternative thermodynamic cycle for the Stirling machine", American Journal of Physics 85, 926 (2017)
- Organ, "The Regenerator and the Stirling Engine"
- Israel Urieli (Dr. Iz), Associate Professor Mechanical Engineering: Stirling Cycle Machine Analysis Archived 2010-06-30 at the Wayback Machine.
- Polytropic cycle inside Stirling engine
Although it would be natural enough to assume that the root of a Flex application is an Application object (because the root tag of the runnable application is an Application tag), it turns out that the default root object is, in fact, of type SystemManager.
In order to understand SystemManager and the bootstrapping process, you have to understand just a little about a Flash Player class called MovieClip.
The MovieClip class is a display object type which allows you to work with timelines programmatically. Timelines are a feature often used in Flash applications because Flash authoring allows developers to work with timelines through the program interface. Timelines are not used frequently in Flex applications because there is no programmatic way to add frames (the basic units of a timeline) to a timeline. However, timelines and frames are an essential part of SystemManager, and in order to understand how Flex applications work, you must understand a few things about timelines.
A timeline is composed of frames. A frame represents a point in time during the playback of a timeline. This is similar to timeline concepts used in any sort of animation or video program. Because there's no way to programmatically add frames, almost all display objects in Flex applications consist of just one frame. However, SystemManager is the one exception to this rule.
SystemManager consists of two frames. This is essential because it enables the ...
By: SPACE.com Staff
Published: 02/21/2012 06:16 PM EST on SPACE.com
Scientists have measured the fastest winds yet observed from a stellar-mass black hole, shedding light on the behavior of these curious cosmic objects.
The winds, clocked by astronomers using NASA's Chandra X-ray Observatory, are racing through space at 20 million mph (32 million kph), or about 3 percent the speed of light. That's nearly 10 times faster than had ever been seen from a stellar-mass black hole, researchers said.
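A quick arithmetic check of the quoted figures (conversion factors only, no new data):

```python
# Sanity-check the quoted numbers: 20 million mph against "32 million kph"
# and "about 3 percent the speed of light".
MPH_TO_MS = 0.44704      # metres per second per mile per hour
MPH_TO_KPH = 1.609344    # kilometres per hour per mile per hour
C = 299_792_458.0        # speed of light in m/s

wind_ms = 20e6 * MPH_TO_MS
print(round(20e6 * MPH_TO_KPH / 1e6))   # millions of kph
print(round(100 * wind_ms / C, 2))      # percent of light speed
```

Both quoted figures check out: 20 million mph is about 32 million kph and about 2.98 percent of the speed of light.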
"This is like the cosmic equivalent of winds from a Category 5 hurricane," study lead author Ashley King, of the University of Michigan, said in a statement. "We weren't expecting to see such powerful winds from a black hole like this."
A stellar-mass black hole, which is born when an extremely massive star collapses, typically contains about five to 10 times the mass of our sun. The stellar-mass black hole powering this super wind is known as IGR J17091-3624, or IGR J17091 for short.
IGR J17091 is a binary system in which a sun-like star orbits a black hole. It's found in the central bulge of our Milky Way galaxy, about 28,000 light-years from Earth.
IGR J17091's wind matches some of the fastest generated by supermassive black holes, which are millions or billions of times more massive. Supermassive black holes are thought to reside at the heart of most if not all active galaxies, including our own Milky Way.
"It's a surprise this small black hole is able to muster the wind speeds we typically only see in the giant black holes," said co-author Jon Miller, also from the University of Michigan. "In other words, this black hole is performing well above its weight class."
Another surprising finding from the new study is that the wind, which comes from a disk of gas surrounding the black hole, may be blasting more material into space than the black hole is capturing.
"Contrary to the popular perception of black holes pulling in all of the material that gets close, we estimate up to 95 percent of the matter in the disk around IGR J17091 is expelled by the wind," King said.
Unlike hurricane winds on Earth, the wind from IGR J17091 is blowing in many different directions at once. This pattern distinguishes it from a jet, in which material flows in focused beams perpendicular to a black hole's disk, often at nearly the speed of light.
Jets have been seen coming from IGR J17091 before. But observations made with the National Radio Astronomy Observatory's Expanded Very Large Array in New Mexico showed that a radio jet from the system was not present when the super-fast wind was blowing.
This agrees with observations of other stellar-mass black holes, suggesting that ultra-speedy winds can quash jet production, researchers said.
Scientists estimated IGR J17091's wind speeds using a spectrum made by Chandra in 2011. Observations made by the space telescope two months earlier showed no such winds, meaning the black hole's gale likely switches on and off over time.
Astronomers think that magnetic fields in the accretion disks of black holes are responsible for producing both winds and jets. Characteristics of the magnetic fields and the rate at which material falls toward the black hole are thought to determine whether jets or winds are produced, researchers said. | <urn:uuid:c5b6c623-f8e7-4fe8-a003-d4e9d8676a2f> | 3.421875 | 732 | Truncated | Science & Tech. | 55.289808 | 95,611,487 |
This weekend, members of the community braved the chilly morning for the annual Boneyard Creek Community Day! It was a rewarding morning spent collecting waste from various parts of Urbana-Champaign, making our public spaces a better place for both humans and wildlife to spend their time. A special thank you to the Vet Med students who took the time even though midterms are just around the corner! We’re already looking forward to next year.
Nesting season for most Illinois wild bird species ranges from early March to the end of August. There is a surprising variety of nesting strategies among different species. For example, Belted Kingfishers build nests up to 15 feet deep in riverbanks, while hummingbirds and Blue Jays take the more typical route of weaving twigs, bark, and leaves into nests tucked among tree branches. Unfortunately, in residential or rural areas, certain strategies may put nests in danger of accidental destruction from everyday human activities. We've compiled some tips for the backyard conservationist to help protect these native birds while they raise their young.
Indiana Bat (Myotis sodalis)
From US Fish and Wildlife Service: The Indiana bat was listed as endangered in 1967 due to episodes of people disturbing hibernating bats in caves during winter, resulting in the death of large numbers of bats. Indiana bats are vulnerable to disturbance because they hibernate in large numbers in only a few caves (the largest hibernation caves support from 20,000 to 50,000 bats). Other threats that have contributed to the Indiana bat’s decline include commercialization of caves, loss of summer habitat, pesticides, and other contaminants, and most recently, the disease White-Nose Syndrome.
Indiana bats are quite small, weighing only one-quarter of an ounce (about the weight of three pennies) although in flight they have a wingspan of 9 to 11 inches. Their fur is dark- brown to black. They hibernate during winter in caves or, occasionally, in abandoned mines. During summer they roost under the peeling bark of dead and dying trees. Indiana bats eat a variety of flying insects found along rivers or lakes and in uplands. Click here for more information on endangered species in Illinois.
Continue reading: WMC Conservation Newsletter March 2018
By: Kate Keets, WMC Conservation Chair, Class of 2021
When NASA’s Aqua satellite passed over Yagi on June 11 it was a tropical storm with strong thunderstorms on its eastern side. An infrared image of the storm was taken from the Atmospheric Infrared Sounder (AIRS) instrument aboard NASA’s Aqua satellite on June 11 at 12:05 a.m. EDT. The areas with the coldest cloud top temperatures and strongest thunderstorms were near -63F/-52C around the center and indicative of heavy rainfall.
This infrared image of Tropical Depression Yagi was taken from the AIRS instrument aboard NASA’s Aqua satellite on June 11 at 12:05 a.m. EDT. The areas with the coldest cloud top temperatures and strongest thunderstorms (purple) were near -63F/-52C around the center and eastern quadrant of the storm. Those very cold cloud temperatures are indicative of heavy rainfall. Credit: NASA JPL/Ed Olsen
Since June 11, an upper-level low pressure area from the north of Yagi has moved closer to the storm’s center, suppressing thunderstorm development. Animated infrared satellite imagery showed that the storm’s low-level center is now fully exposed to outside winds and has elongated. Whenever a storm loses its circulation and becomes elongated it weakens (like a tire going flat that can’t spin properly). There is no recognizable convection (rising air that forms thunderstorms that make up the tropical cyclone) around the storm’s center.
In addition to the upper-level low squelching the storm, Yagi has moved into cooler waters that cannot sustain a tropical cyclone. Sea surface temperatures of at least 80F (26.6 C) are needed to keep a tropical cyclone going; the sea surface temperatures around Yagi are as cool as 22 to 23 degrees Celsius, too cool to maintain the storm.
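The Fahrenheit/Celsius pairs quoted in the article can be verified with the standard conversion:

```python
# Unit check for the temperatures in the article: the 80 F sea-surface
# threshold and the -63 F cloud-top reading.  Pure conversions.
def f_to_c(deg_f):
    """Convert degrees Fahrenheit to degrees Celsius."""
    return (deg_f - 32.0) * 5.0 / 9.0

threshold_c = f_to_c(80.0)    # matches the quoted 26.6 C
cloud_top_c = f_to_c(-63.0)   # matches the quoted -52 C
print(round(threshold_c, 1), round(cloud_top_c, 1))
```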
On June 12 at 1500 UTC (11 a.m. EDT) Tropical depression Yagi was spinning down in the northwestern Pacific Ocean. Yagi’s maximum sustained winds were down to 30 knots. It was centered near 31.2 north latitude and 138.9 east longitude, about 250 miles south of Yokosuka, Japan. Yagi is moving to the east-northeast at 8 knots.
According to the Joint Typhoon Warning Center (JTWC), the organization that forecasts storms in the northwestern Pacific, Yagi was still a "weakly symmetric warm-core system," but it is expected to become a cold-core low pressure area by the end of June 12. JTWC forecasters expect Yagi to dissipate by June 13.

Text credit: Rob Gutro
Rob Gutro | EurekAlert!
A team of scientists from the University of Tübingen and the Senckenberg Center for Human Evolution and Palaeoenvironment Tübingen was able to recover fossils of two previously unknown mammal species that lived about 37 million years ago.
The newly described mammals show a surprisingly close relationship to prehistoric species known from fossil sites in Europe.
The location: The open lignite-mining Na Duong in Vietnam. Here, the team of scientists was also able to make a series of further discoveries, including three species of fossilized crocodiles and several new turtles.
Southeast Asia is considered a particularly species-rich region, even in prehistoric times – a so-called hotspot of biodiversity. For several decades, scientists have postulated close relationships between the late Eocene (ca. 38-34 million years ago) faunas of this region and those of Europe. The recent findings by the research team under the leadership of Prof. Dr. Madelaine Böhme serve as proof that some European species originated in Southeast Asia.
Rhinoceros and Coal beast
One of the newly described mammals is a rhinoceros, Epiaceratherium naduongense. The anatomy of the fossil teeth allows identifying this rhinoceros as a potential forest dweller. The other species is the so-called “Coal Beast”, Bakalovia orientalis.
This pig-like ungulate, closely related to hippos, led a semi-aquatic lifestyle, preferring the water close to the banks. At that time, Na Duong was a forested swampland surrounding Lake Rhin Chua. The mammals' remains bear signs of crocodile attacks; indeed, the excavation site at Na Duong contains the fossilized remains of crocodiles up to 6 meters in length.
From island to island toward Europe
In the Late Eocene, the European mainland presented a very different aspect than it does today. Italy and Bulgaria were part of an island chain in the Tethys Sea. These islands spanned several thousand kilometers between what later became Europe and India. European fossils from that epoch are very rare, since little material has been preserved due to the folding of mountains and erosion. Yet, the two new species had relatives in this area:
A rhinoceros Epiaceratherium bolcense closely resembling the one from Na Duong was found in Italy (Monteviale). Fossil finds of Epiaceratherium magnum from Bavaria indicate that rhinoceroses reached continental Europe no later than 33 million years ago and colonized the landmass. The coal beast did not quite make it to the European mainland – but it certainly reached the so-called Balkano-Rhodopen Island: a fossilized coal beast very similar to Bakalovia orientalis was unearthed in present-day Bulgaria.
Research among coal dust and excavators
The open mining pit Na Duong is still active. While the scientists conduct their excavations, lignite is being extracted nearby. Since 2008, the international research team around Prof. Dr. Madelaine Böhme from the Senckenberg Center for Human Evolution and Palaeoenvironment (HEP) at the University of Tübingen has studied the prehistoric ecosystem and the fossils of Na Duong in Vietnam.
This research revealed that the lignite seams contained a globally important fossil deposit from the Paleogene interval. Originally, scientists had expected to find fossils from the younger Cenozoic (up to 23 million years ago) at the site.
This ecosystem, which the scientists from Vietnam, France and Germany explore and reconstruct in ever more detail from one excavation season to the next, is a 37 million year-old swamp forest in a tropical to subtropical climate. Up to 600 trees grew there per hectare, and their crowns reached heights of up to 35 meters.
Böhme, M. et al.; Na Duong (northern Vietnam) – an exceptional window into Eocene ecosystems from Southeast Asia, Zitteliana A 53, 120 A 5 (2014).
Prof. Dr. Madelaine Böhme
Department of Geosciences
Senckenberg Center for Human Evolution and Palaeoenvironment
(currently away on expedition)
Senckenberg Gesellschaft für Naturforschung
Phone 069- 7542 1434
University of Tübingen
Phone 07071 – 29-76789
Dr. Sören Dürr | Senckenberg
An introduction to a library operating system for Linux
By Thom Holwerda on 2015-03-25
Our objective is to build the kernel network stack as a shared library that can be linked to by userspace programs to provide network stack personalization and testing facilities, and allow researchers to more easily simulate complex network topologies of linux routers/hosts.
Although the architecture itself can virtualize various things, the current design focuses only on the network stack. You can use network-stack features such as TCP, UDP, SCTP, DCCP (IPv4 and IPv6), Mobile IPv6, Multipath TCP (IPv4/IPv6, out-of-tree at the present moment), and netlink with various userspace applications (quagga, iproute2, iperf, wget, and thttpd).
NASA has released audio captured by its Cassini probe, just before it plunged head-first into the ringed planet on a death dive last year.
And it sounds like there’s something whistling down there.
But before you head to the doomsday bunkers to prepare for the Saturnian invasion, we should point out that they're not 'sounds' in the traditional sense – they're plasma waves.
The probe captured plasma waves moving from Saturn to its rings and its moon Enceladus. The observations show for the first time that the waves travel on magnetic field lines connecting Saturn directly to Enceladus.
The field lines are like an electrical circuit between the two bodies, with energy flowing back and forth.
Researchers converted the recording of plasma waves into a “whooshing” audio file that we can hear – in the same way a radio translates electromagnetic waves into music.
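As a toy illustration of that kind of conversion, the sketch below renders a descending frequency sweep (a synthetic "whoosh") as audio samples. The sample rate, duration, and frequencies are all invented; this shows the general sonification move, not NASA's actual processing chain.

```python
import math

# Toy sonification: render a descending frequency sweep as PCM-style
# samples in [-1, 1], the same basic move as turning a frequency-vs-time
# plasma-wave record into audible sound.  All parameters are invented.
RATE = 8000                      # samples per second
DURATION = 2.0                   # seconds
F_START, F_END = 900.0, 120.0    # Hz, sweeping downward

samples = []
phase = 0.0
for i in range(int(RATE * DURATION)):
    t = i / RATE
    f = F_START + (F_END - F_START) * (t / DURATION)  # linear sweep
    phase += 2.0 * math.pi * f / RATE                 # integrate frequency
    samples.append(math.sin(phase))

print(len(samples))
```

Writing `samples` out through the standard-library `wave` module (after scaling to 16-bit integers) would produce a playable descending whoosh.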
Much like air or water, plasma (the fourth state of matter) generates waves to carry energy.
The recording was captured by the Radio Plasma Wave Science (RPWS) instrument Sept. 2, 2017, two weeks before Cassini was deliberately plunged into the atmosphere of Saturn.
Ali Sulaiman, a planetary scientist at the University of Iowa, said: 'Enceladus is this little generator going around Saturn, and we know it is a continuous source of energy.
‘Now we find that Saturn responds by launching signals in the form of plasma waves, through the circuit of magnetic field lines connecting it to Enceladus hundreds of thousands of miles away.’
The interaction of Saturn and Enceladus is different from the relationship of Earth and its Moon. Enceladus is immersed in Saturn’s magnetic field and is geologically active, emitting plumes of water vapor that become ionized and fill the environment around Saturn.
Our own Moon does not interact in the same way with Earth. Similar interactions take place between Saturn and its rings, as they are also very dynamic.
The coronal mass ejection that flew by Earth at some 900 miles per second produced a minor geomagnetic storm, which in-turn produced some particularly lovely northern lights.
U.S. Secretary of State John Kerry announced yesterday that he plans to take action in protecting the Ross Sea.
Tourists feeding stingrays have changed the creatures' feeding, behavior, and mating habits, a new study finds. Researchers say the finding raises questions about the impact of interactive tourism on marine wildlife, reports e!sciencenews.
Plankton near the surface of the water contain more carbon than was previously believed, according to a new study. Researchers argue that the scientific models that estimate the amount of carbon dioxide in water need to be revised, as data show that phytoplankton soak up double the estimated amount of carbon dioxide.
A high-speed coronal mass ejection erupted from the sun and is blazing in the direction of Earth at speeds of 900 miles per second.
New York adopts a new plan to sterilize the city's rat population.
It isn't just genetics or diet that raises the risk of high blood pressure: exposure to certain chemicals can raise the risk of hypertension even before birth. A new study has shown that DDT exposure in the womb raises high blood pressure risk in women.
Sri Lankans have December 29, 2012, deeply engraved in their memory. That day - better yet, that night - many witnessed a fireball lighting up the skies over the province of Polonnaruwa, spraying fragments across the countryside.
Australian authorities announced Monday that they found and destroyed a giant African snail in Brisbane.
Tropical forests won't lose as many plants due to global warming as were projected by previous research, says a new study. Researchers say that tropical forests are more resilient to global warming than previously thought.
Russian researchers are withdrawing earlier statements that they had found a new type of bacteria, seemingly unrelated to all known organisms on the planet, from the ancient subglacial Lake Vostok. It turns out they are just contaminants.
An earthquake with a preliminary magnitude of 4.7 shook a wide swath of Southern California on Monday morning. There have been no reports of damage caused by the earthquake.
Delegates at an international conservation meeting in Bangkok this week have voted to clamp down on the shark trade, used primarily for the infamous shark fin soup popular in Asia, and added five shark species to a protected list in an effort to save them from being wiped out by overfishing.
What is the Distributive Property?
Learn what the distributive property is, how to picture it, and what it means to say that multiplication is distributive over addition.
A while back, we talked about the commutative property of addition and we used this property to figure out how to add quickly. But the commutative property isn’t the only math property, which might lead you to wonder what great tricks the others have to offer. Well, your wait is over. Because today we’re talking about another one of those properties: the distributive property.
Review: What is Area?
Do you remember when we talked about area? Well, that topic is going to make another appearance in this article, so let’s take a minute to make sure we’re all on the same page. The idea of area is used all the time in everyday life. For example, let’s say you need to figure out how much money to save for new carpeting in your bedroom. When you go to the store, you see that carpet is sold by the square foot. So, to estimate the total cost for your room, you need to multiply the price per square foot by the size of your bedroom—and by size, I mean the area in square feet. So how do you calculate the area? Well, you just measure the length and the width of the room in feet, and then multiply these two numbers together. That means that for a rectangle, the area is just the length of the rectangle times its width.
How to Picture the Distributive Property
Okay, now that we’ve got a handle on the idea of area, I’m going to describe a drawing that I want you to picture in your mind—or follow along with a pencil and paper and make the sketch as I describe it. First, draw a large rectangle. It doesn’t matter if it’s wider than it is tall, or taller than it is wide—picture it however you want. Now, draw a vertical line inside this large rectangle that extends all the way from its top edge to its bottom edge. Again, it doesn’t matter exactly where you position it from left to right—that’s entirely up to you. So what do you have? Well, you have a picture of a large rectangle that contains 2 smaller rectangles inside of it.
Now, let’s name some of the features in our drawing. Why? Well, just as with people, it will allow us to talk about them without having to say awkward things like: “look at the tall guy with the short red hair and glasses.” We can just say: “look at Bob.” But in our rectangles case, let’s not use names like “Bob.” Let’s use letters of the alphabet—just because they’re a lot shorter to write.
Okay, let’s call the height of the rectangles “a” (they all have the same height, right?), and the width of the 2 smaller rectangles “b” and “c” (from left to right). Got all that? All right, now let’s see what we can do with these names.
The “Left Side” of the Distributive Property
Here’s what I have in mind. Let’s first calculate the area of the large rectangle. As we talked about earlier, the area of a rectangle is just its height times its width. Remember, we named the height of the large rectangle “a,” and the width is…well, we actually didn’t name the total width, did we? But we did name the widths of the 2 smaller rectangles: “b” and “c.” So that means that the total width of the big rectangle is just the sum of these 2 small widths: “b” + “c”. Make sense? And with that, we can write the area of the big rectangle as height times width—that is:
“a” x ( “b” + “c” )
Okay, so that’s the area of the large rectangle. For future reference, let’s go ahead and call this “the ‘left side’ of the distributive property.” I know that might seem like a mysterious name right now, but hold on a minute and it’ll make sense.
The “Right Side” of the Distributive Property
Now, think about this: Couldn’t we also have written the total area of the large rectangle as the sum of the areas of each of the small rectangles? If you think about it, you’ll realize that you absolutely can—the total area has to equal the sum of the individual areas. So what’s the area of the 2 smaller rectangles? Well, for each of them, their area is just their height—which is “a” just like for the large rectangle—times their width. The width of the rectangle on the left is “b,” so its area is “a” x “b”. Similarly, for the rectangle on the right, its area is “a” x “c”. So if we add these 2 areas together, the total area of the large rectangle must be
“a” x “b” + “a” x “c”
As before, let’s enigmatically call this “the ‘right side’ of the distributive property” for future reference. But wait—hold on a minute! This expression is different than what we got before for the total area! Before we said it was “a” x (“b” + “c”). And now we’ve said that it’s “a” x “b” + “a” x “c”. But we said they had to be the same since they both represent the total area. So which is right? Well, actually, they’re both correct and they’re not really different—in fact, they’re exactly the same.
What is the Distributive Property?
And the reason they’re the same is called the distributive property. We can combine the “left” and “right” sides of the distributive property that we calculated from the area of the large rectangle and the sum of the areas of the small rectangles, and we can write the distributive property like this:

“a” x ( “b” + “c” ) = “a” x “b” + “a” x “c”
This says that multiplication is distributive over addition. And that means that if we take the sum of some numbers—in our case “b” + “c” (although it doesn’t have to be 2 numbers, there could be as many as we want), and multiply this sum by some other number—in our case “a,” the result is the same as if we first individually multiplied each number in the sum by “a” and then added these all up.
How Does the Distributive Property Work?
Just to make sure this really works, let’s try it with a few numbers. How about 2 x (3 + 5)? Well, we have to do whatever is in the parentheses first, so start by adding the 3 and 5 to get 3+5=8, and then multiply this by 2 to get 2x8=16. Now let’s do it the other way. It’s just 2 x 3 + 2 x 5, which simplifies to 6+10=16. So, yes, we get the exact same answer either way—the distributive property works. Of course, we already knew that since our picture with rectangles showed us that it had to be true!
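If you happen to know a little programming, you can let a computer run this same check. Here is a tiny Python sketch (not part of the original episode) that compares the two sides of the distributive property:

```python
def left_side(a, b, c):
    # The "left side": a x (b + c)
    return a * (b + c)

def right_side(a, b, c):
    # The "right side": a x b + a x c
    return a * b + a * c

# The example from the text: 2 x (3 + 5)
print(left_side(2, 3, 5))   # 16
print(right_side(2, 3, 5))  # 16
```

Trying any other trio of numbers gives the same agreement, just as the rectangle picture says it must.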
Okay, that’s all the math we have time for today. But that’s not all we have to say about the distributive property. So be sure to check in on future episodes to hear about a real life interpretation of the distributive property, and a way that you can use it to perform lightning-fast multiplication in your head.
Please email your math questions and comments to firstname.lastname@example.org. You can get updates about the Math Dude podcast, the “Video Extra!” episodes on YouTube, and all my other musings about math, science, and life in general by following me on Twitter. And don’t forget to join our great community of social networking math fans by becoming a fan of the Math Dude on Facebook.
Until next time, this is Jason Marshall with The Math Dude’s Quick and Dirty Tips to Make Math Easier. Thanks for reading, math fans!
The agricultural zone of southwestern Australia is an extensively modified landscape. Ninety percent of the perennial native vegetation has been cleared and replaced by annual cereal crops and pasture. Consequently, groundwater has risen and much of the region is affected by dryland salinity. River geomorphology and water quality have been severely impacted by land clearing, anthropogenic patterns of land use, and secondary salinization. The objectives of this study were to determine patterns of distribution of aquatic macroinvertebrates in the region, and to identify environmental variables influencing these patterns. Aquatic macroinvertebrates were sampled at 176 river sites during spring 1997 and a range of environmental data were collected at each site. Eighty-one families were collected, with the fauna being dominated by insects. At the family level, macroinvertebrate communities were homogeneous and depauperate, and consisted of families that tolerated a broad range of environmental conditions. The fauna was particularly resilient to high salinities, with some families tolerating salinities orders of magnitude greater than previously reported for lotic waters. The most significant environmental factors influencing the distribution of aquatic invertebrates were rainfall, salinity, land use, and instream habitat.
Using the Visual Studio Integrated Development Environment
In this tutorial, we will look at the Visual Studio Integrated Development Interface (IDE) and see how it makes our software development much easier. We will look at syntax highlighting, Intellisense and the toolbars. We will also look at different methods of running an application.
- Getting Started with Visual Studio
- Using the Visual Studio Integrated Development Environment
- Using the Visual Studio Debugger
- XML Documentation in Visual Studio
- Using Microsoft .Net Command Line Tools
- What is an Assembly in Microsoft .Net?
- Creating and Using a .NET Assembly in C#
- Creating and Implementing Satellite Assemblies
- Creating Strong Named Assemblies in Microsoft .Net
- Visual Studio Task List
- Resources and Localisation in C# Projects
- Localising Microsoft .Net Applications with C#
- Using Resource Files in C# Applications
- Using XSD Tool to Generate Classes from XML
- 10 Visual Studio Tools You May Not Know About
This article relates to an old version of the .Net Framework. While the contents are still relevant, future versions of the framework may change, or improve upon, the contents of this article.
For this tutorial, we will be using the free version of Microsoft Visual C# Express to demonstrate the features of the development interface, although there is not a great difference between Visual C# Express and the full Visual Studio.
You have probably noticed that as you type the code in, Visual Studio changes the colour of the words. This is syntax highlighting, and it is there purely to aid readability.
- Keywords are blue
- Comments are green
- Strings are red/brown
- Classes are gray
You can change these colours from within the program settings dialogue box if you would prefer different colours.
Using our hello world application from the last tutorial, we are going to invoke IntelliSense and let it write code for us. If you typed in the code in the last tutorial you will have already seen IntelliSense; however, if you used copy and paste to insert the code, then you will see IntelliSense now for the first time.
Start off by deleting the Console.WriteLine line, then just type 'C'. Notice a pop-up window with a list of items beginning with 'C'; also note that Console is the highlighted item. This is the item that Visual Studio thinks you are most likely to use in the current context. We can now press Enter, or just '.', and IntelliSense will type the rest of the word for you and bring up another selection box, this time with WriteLine selected.
You can use the arrow keys to select different items on the list. Scroll up to C and notice that Console is not on the list anymore. This is because IntelliSense will only offer suggestions based on the current context. For this little example, it will show all the methods and properties of the Console.
As we are looking at 'C' on the list, type in 'W' and IntelliSense will jump back down to WriteLine. We can see by the icon (see below for examples) that WriteLine is a method we can call, so now type an open parenthesis and IntelliSense will give you the method name, a description of what it does and the parameters that it takes. Notice in the top left it says "1 of 19": this method has been overloaded and can take many different sets of parameters. Don't worry about this just yet, but you can use the up and down keys to look through them if you wish.
If for some reason IntelliSense does not automatically pop-up, you can use (Ctrl+Space) to have it pop-up at the cursor.
Here are a few of the common icons used in IntelliSense and what they mean. Don't worry if you don't understand the terms, they are all explained in later tutorials.
Regions and Blocks
Visual Studio and Visual C# allow sections of code to be hidden away or collapsed. A section of code between two braces is called a region and can be collapsed or expanded by using the plus or minus icon near the gutter. The gutter is the grey column down the left-hand side of the code window. We will see what the gutter is used for when we look at Debugging.
You can collapse block level code that is nested within your methods, you can collapse the namespace, or you can collapse any block of code between the two.
The code does not get deleted, just hidden from view. This is very useful because it can hide away code that you may not be interested in and de-clutter the code window, so all your attention is on the code you are writing.
You can also define your own collapsible blocks of code by using the region and endregion keywords. To start a region, type #region, and at the end of the block of code type #endregion. The editor will now allow you to collapse this code.

    #region myRegion
    Console.WriteLine("Please Enter Your Name:");
    string myName = Console.ReadLine();
    Console.WriteLine("You Entered: " + myName);
    #endregion
Summary and Conclusions
In this tutorial, we saw how the Visual C# IDE is very clever at predicting what you are going to type, and it is able to improve your productivity by reducing the amount of time spent coding. We also saw how regions and blocks can be collapsed to de-clutter the code window.
In the next tutorial, we will have a look at the task list and see how we can use it to help plan our development.
Last updated on: Saturday 24th June 2017
Better Unit Tests
Over the last few years, we have been adding unit tests to our existing product to improve its internal quality. During this period, we always had the challenge of choosing unit versus integration tests. I would like to mention some of the approaches we have applied to improve the quality of our existing system.
At its core, unit testing is about testing a single component at a time by isolating its dependencies. The classical unit tests have these properties: "Fast, Independent, Repeatable, Self-Validating, Timely". Typically in Java, a method is considered a unit. So, the traditional (and most common) approach is to test the single method of a class separated from all its dependencies.
Interestingly, there is no straight definition of what makes a unit. Many times a combination of methods which spread across multiple classes can form a single behavior. So, in this context, the behavior could be considered as a unit. I have seen people breaking these units and writing multiple tests for the sake of testing a single method. If the intermediate results are not significant, this will only increase the complexity of the system. The best way to test a system is to test with its dependencies wherever we can accommodate them. Hence, we should try to use the actual implementation and not mocks. Uncle Bob puts this point very well, "Mock across architecturally significant boundaries, but not within those boundaries..." in this article.
If the software is built using a TDD approach, it might not be a challenge to isolate dependencies or add a test for your next feature. But, not all software is built like this. Unfortunately, we have systems where there are only a few or no tests written. When working with these systems, we can make use of the above principle and use tests at different levels. Terry Yin provides an excellent graphic (which is shown below) in his presentation titled Misconceptions of unit testing. This shows how different tests can add values and what the drawbacks are.
Many of our projects use Java and the Spring framework. We have used Spring's @RunWith and SpringJUnit4ClassRunner to create app-level tests, which give you the objects with all their dependencies initialized. You can selectively mock certain dependencies if you would like to isolate them. This sets a nice platform for writing unit tests with multiple collaborating objects. We call them app-level tests. These are still fast-running tests with no external dependencies. A different term was chosen to differentiate this from the classical unit test. We also had integration tests, which would connect with external systems. So, the overall picture of developer tests can be summarized as below:
| Tests | Naming convention | Runs at | When to use | Exec time |
| --- | --- | --- | --- | --- |
| Unit test | Ends with Test | Every build | Rule-based implementations where the logic can be tested in isolation | Few milliseconds |
| App-level test | Ends with TestApp | Every build / nightly builds (team's choice) | Tests the service layers in connection with others. Frees you from creating mock objects. The application context is loaded in the tests. | Few seconds |
| Integration test | Ends with TestIntg | Runs on demand when a special profile is used in the build | All the above + use when you need to connect to external points like a DB, web services, etc. | Depends on the integration points |
| Manually run test | Ends with TestIntgManual | Run manually; used when debugging a specific problem locally | All the above - can't be automated | Depends on the integration points |
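The suffix convention in the table above is easy to check mechanically, for example when wiring tests into build profiles. A minimal plain-Java sketch (the class and test names here are hypothetical, not from the article):

```java
// Classify a developer-test class by its name suffix, following the
// Test / TestApp / TestIntg / TestIntgManual convention from the table.
public class TestSuffixClassifier {
    public static String classify(String className) {
        // Check the most specific suffixes first.
        if (className.endsWith("TestIntgManual")) return "manual";
        if (className.endsWith("TestIntg"))      return "integration";
        if (className.endsWith("TestApp"))       return "app-level";
        if (className.endsWith("Test"))          return "unit";
        return "not a test";
    }

    public static void main(String[] args) {
        String[] names = {
            "InvoiceServiceTest", "InvoiceServiceTestApp",
            "InvoiceServiceTestIntg", "InvoiceServiceTestIntgManual"
        };
        for (String name : names) {
            System.out.println(name + " -> " + classify(name));
        }
    }
}
```

A build could use the same suffix matching to include only *Test and *TestApp classes in every build, and run *TestIntg classes under a dedicated profile.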
This approach gives the developers the ability to choose the right level of abstraction to test and helps in optimizing their time. Nowadays, my default choice is app-level tests, and I go to unit tests if I have complicated logic to implement.
Published at DZone with permission of Manu Pk . See the original article here.
How To Write An Error Analysis
The major difference between this estimate and the definition is the N - 1 in the denominator instead of N. If it is only just outside the range (say, if the discrepancy is less than twice the error), then you can still regard your experiment as satisfactory. Also, when taking a series of measurements, sometimes one value appears "out of line". Another approach to error propagation is the Data and Datum constructs, which EDA provides as an additional mechanism for error propagation.
The pH of the solution can be determined by looking at the color of the paper after it has been dipped in the solution. For numbers without decimal points, trailing zeros may or may not be significant. The role of error analysis is to quantify what "reasonably" means. The knowledge we have of the physical world is obtained by doing experiments and making measurements.
Common sense should always take precedence over mathematical manipulations. The function AdjustSignificantFigures will adjust the volume data. For instance, what is the error in Z = A + B, where A and B are two measured quantities with errors ΔA and ΔB respectively? Similarly, the perturbation in Z due to a perturbation in B is ΔZ = ΔB.
Since the correction is usually very small, it will practically never affect the error of precision, which is also small. Would the error in the mass, as measured on that $50 balance, really be the following? A first thought might be that the error in Z would be just the sum of the errors in A and B.
Nonetheless, you may be justified in throwing it out. Defined numbers are also like this. Comparing a measured value with an accepted value: if the result of your measurement is written the first way, with a probable range, you can immediately see whether the accepted value falls within that range. However, the smaller the uncertainties, the better the experiment.
But, as already mentioned, this means you are assuming the result you are attempting to measure. These rules may be compounded for more complicated situations. This week we will use a more powerful method of verifying a different physical law. In most cases, a percent error or difference of less than 10% will be acceptable.
Errors combine in the same way for both addition and subtraction. Propagating errors for e = |v_f / v_i|. Suppose there are two measurements, A and B, and the final result is Z = F(A, B) for some function F.
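For Z = A + B with independent random errors, the standard procedure combines the errors in quadrature rather than simply summing them. The helper below is a sketch of that rule (an assumption on my part, since the page's own three rules are not fully reproduced in this excerpt):

```python
import math

def err_sum(delta_a, delta_b):
    # For Z = A + B (and likewise Z = A - B) with independent random
    # errors, the error in Z is the quadrature sum of the two errors.
    return math.sqrt(delta_a**2 + delta_b**2)

print(err_sum(3.0, 4.0))  # 5.0, smaller than the naive sum 3 + 4 = 7
```

This is why the "first thought" of simply adding the errors overestimates the combined uncertainty when the errors are random and independent.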
This is often the case for experiments in chemistry, but certainly not all. So one would expect the value to be 10. Thus 549 has three significant figures and 1.892 has four significant figures. The two quantities are then balanced and the magnitude of the unknown quantity can be found by comparison with the reference sample.
By default, TimesWithError and the other *WithError functions use the AdjustSignificantFigures function. Comparing two measured values predicted to be equal. In this graph, μ is the mean and σ is the standard deviation. For simple combinations of data with random errors, the correct procedure can be summarized in three rules.
Best-fit lines. Error Analysis Lab Report Chemistry The theorem shows that repeating a measurement four times reduces the error by one-half, but to reduce the error by one-quarter the measurement must be repeated 16 times. If the errors were random then the errors in these results would differ in sign and magnitude.
Our best estimate is in the middle, 46.5cm.
In daily life, we usually deal with errors intuitively. A valid measurement from the tails of the underlying distribution should not be thrown out. You would find that the string is slightly stretched when the weight is on it, and the length even depends on the temperature or moisture in the room. Here is an example.
Although they are not proofs in the usual pristine mathematical sense, they are correct and can be made rigorous if desired. This means that the length of an object can be measured accurately only to within 1 mm. In the theory of probability (that is, using the assumption that the data has a Gaussian distribution), it can be shown that this underestimate is corrected by using N - 1 instead of N.
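This Bessel's correction (dividing by N - 1 rather than N) is easy to demonstrate directly. The function below is a sketch, not code from the original page:

```python
def variance(data, ddof=0):
    # ddof=0: divide by N (tends to underestimate the true variance);
    # ddof=1: divide by N - 1, the Bessel-corrected estimate from the text.
    n = len(data)
    mean = sum(data) / n
    return sum((x - mean) ** 2 for x in data) / (n - ddof)

data = [9.8, 10.1, 10.0, 9.9, 10.2]
print(variance(data))          # biased estimate (divide by N)
print(variance(data, ddof=1))  # corrected estimate (divide by N - 1)
```

The corrected value is always slightly larger, compensating for the fact that the deviations are measured from the sample mean rather than the true mean.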
The difference between each measurement and the mean of many measurements is called the "deviation". However, even before doing the next one, you know that it won't be exactly the same. For the Philips instrument we are not interested in its accuracy, which is why we are calibrating the instrument.
Clouds and the Earth's Radiant Energy System
Clouds and the Earth's Radiant Energy System (CERES) is an ongoing NASA climatological experiment in Earth orbit. The CERES instruments are scientific satellite radiometers, part of NASA's Earth Observing System (EOS), designed to measure both solar-reflected and Earth-emitted radiation from the top of the atmosphere (TOA) to the Earth's surface. Cloud properties are determined using simultaneous measurements by other EOS instruments such as the Moderate Resolution Imaging Spectroradiometer (MODIS). Results from CERES and other NASA missions, such as the Earth Radiation Budget Experiment (ERBE), could lead to a better understanding of the role of clouds and the energy cycle in global climate change.
The CERES experiment has four main objectives:
- Continuation of the ERBE record of radiative fluxes at the top of the atmosphere (TOA) for climate change analysis.
- Doubling the accuracy of estimates of radiative fluxes at TOA and the Earth's surface.
- Provide the first long-term global estimates of the radiative fluxes within the Earth's atmosphere.
- Provide cloud property estimates that are consistent with the radiative fluxes from surface to TOA.
Each CERES instrument is a radiometer with three channels: a shortwave (SW) channel to measure reflected sunlight in the 0.2–5 µm region, a "window" (WN) channel to measure Earth-emitted thermal radiation in the 8–12 µm atmospheric window, and a Total channel to measure the entire spectrum of outgoing Earth radiation (>0.2 µm). The CERES instrument was based on the successful Earth Radiation Budget Experiment, which used three satellites to provide global energy budget measurements from 1984 to 1993.
Ground Absolute Calibration
For a Climate Data Record (CDR) mission like CERES, accuracy is of high importance; for pure infrared nighttime measurements it is achieved by use of a ground laboratory SI-traceable blackbody to determine the Total and WN channel radiometric gains. This, however, was not the case for the CERES solar channels, such as SW and the solar portion of the Total telescope, which have no direct unbroken chain to SI traceability. This is because CERES solar responses were measured on the ground using lamps whose output energy was estimated by a cryo-cavity reference detector, which used a silver Cassegrain telescope identical to the CERES devices to match the satellite instrument field of view. The reflectivity of this telescope, built and used since the mid-1990s, was never actually measured, only estimated on the basis of witness samples (see slide 9 of Priestley et al. (2014)). Such difficulties in ground calibration, combined with suspected on-ground contamination events, have resulted in the need to make unexplained ground-to-flight changes in SW detector gains as big as 8%, simply to make the ERB data seem somewhat reasonable to climate science (note that CERES currently claims a one-sigma SW absolute accuracy of 0.9%).
CERES spatial resolution at nadir view (equivalent diameter of the footprint) is 10 km for CERES on TRMM, and 20 km for CERES on Terra and Aqua satellites. Perhaps of greater importance for missions such as CERES is calibration stability, or the ability to track and partition instrumental changes from Earth data so it tracks true climate change with confidence. CERES onboard calibration sources intended to achieve this for channels measuring reflected sunlight include solar diffusers and tungsten lamps. However the lamps have very little output in the important Ultra-Violet wavelength region where degradation is greatest and they have been seen to drift in energy by over 1.4% in ground tests, without a capability to monitor them on-orbit (Priestley et al. (2001)). The solar diffusers have also degraded greatly in orbit such that they have been declared un-usable by Priestley et al. (2011). A pair of blackbody cavities that can be controlled at different temperatures are used for the Total and WN channels, but these have not been proved stable to better than 0.5%/decade. Cold space observations and internal calibration are performed during normal Earth scans.
The first CERES instrument Proto-Flight Module (PFM) was launched aboard the NASA Tropical Rainfall Measuring Mission (TRMM) in November 1997 from Japan. However, this instrument failed to operate after 8 months due to an on-board circuit failure.
CERES on the EOS and JPSS Mission Satellites
An additional six CERES instruments were launched on the Earth Observing System and the Joint Polar Satellite System. The Terra satellite, launched in December 1999, carried two (Flight Module 1 (FM1) and FM2) and the Aqua satellite, launched in May 2002, carried two more (FM3 and FM4). A fifth instrument (FM5) was launched on the Suomi NPP satellite in October 2011 and a sixth (FM6) on NOAA-20 in November 2017. With the failure of the PFM on TRMM and the 2005 loss of the SW channel of FM4 on Aqua, there are five of the CERES Flight Modules that are fully operational as of 2017.
Radiation Budget Instruments
The measurements of the CERES instruments will be furthered by the Radiation Budget Instrument (RBI), to be launched on Joint Polar Satellite System-2 (JPSS-2) in 2021, JPSS-3 in 2026, and JPSS-4 in 2031. The Trump administration, however, seems set to cancel the RBI project, despite most of the money for it already having been spent.
CERES operates in three scanning modes: across the satellite ground track (cross-track), along the direction of the satellite ground track (along-track), and in a Rotating Azimuth Plane (RAP). In RAP mode, the radiometers scan in elevation as they rotate in azimuth, thus acquiring radiance measurement from a wide range of viewing angles. Until February 2005, on Terra and Aqua satellites one of CERES instruments scanned in cross-track mode while the other was in RAP or along-track mode. The instrument operating in RAP scanning mode took two days of along-track data every month. However the multi-angular CERES data allowed to derive new models which account for anisotropy of the viewed scene, and allow TOA radiative flux retrieval with enhanced precision.
Imagine a new and improved biorefinery, one that produces advanced biofuels as environmentally sustainable as they are economically viable.
Ten Michigan State University professors have been named University Distinguished Professors in recognition of their achievements in the classroom, laboratory and community.
The Board of Trustees voted on and approved the recommendations on June 21. The designations were effective immediately.
Sometimes, when a science experiment doesn’t work out, unexpected opportunities open up. That’s what Yang Yang and the Benning lab have found in their latest work on sustainable biofuels.
In the world of biofuels research, the baker’s yeast Saccharomyces cerevisiae gets a lot of love, with scientists commonly tweaking the yeast’s fermentative qualities to enhance ethanol production. Researchers at the Great Lakes Bioenergy Research Center (GLBRC), however, are expanding that focus to a broad range of wild yeasts in the genus Saccharomyces.
EAST LANSING, Mich. – Michigan State University scientists have pinpointed a new source of nitrous oxide, a greenhouse gas that’s more potent than carbon dioxide. The culprit?
Tiny bits of decomposing leaves in soil.
Winter is no time to flower, which is why so many plants have evolved the ability to wait for the snow to melt before investing precious resources in blooms.
When Max Haase set out for a walk in Green Bay’s Baird Creek Nature Preserve on a May day in 2015, it was pretty normal stuff. Baird Creek was practically in his backyard, and it was a good day for a hike — sunny and unseasonably warm for so early in the UW–Madison biology major’s summer break.
If you want to create sustainable biofuels from less and for less, you’ve got a range of options. And one of those options is to go microbial, enlisting the help of tiny but powerful bacteria in creating a range of renewable biofuels and chemicals.
Technologies for converting non-edible biomass into chemicals and fuels traditionally made from petroleum exist aplenty. But when it comes to attracting commercial interest, these technologies compete financially with a petroleum-based production pipeline that has been perfected over the course of decades.
How well a plant matures depends on how it grows during its early life stages, which is no surprise to anyone who has raised children.
In the face of mounting pressures, such as erratic temperature patterns or the demand to produce more food despite a lack of new arable land, plant health may be taking a beating.
The Great Lakes Bioenergy Research Center (GLBRC), one of three bioenergy research centers established in 2007 by the Biological and Environmental Research program in the U.S. Department of Energy’s (DOE) Office of Science, recently published its 1,000th scientific paper. | <urn:uuid:4668f64e-4691-43a9-a92c-874f59d6dfb4> | 2.78125 | 599 | Content Listing | Science & Tech. | 37.189984 | 95,611,717 |
When Martin McLaughlin ’15 arrived at MIT as a freshman in the fall of 2011, he had a plan in mind. McLaughlin wanted to work in the lab of Catherine Drennan, an MIT professor of biology and chemistry, and Howard Hughes Medical Institute (HHMI) investigator, who uses X-ray crystallography to study proteins.
And so McLaughlin, with Drennan’s approval, started doing research in addition to taking a normal course load, as part of MIT’s Undergraduate Research Opportunities Program (UROP). The project he focused on was challenging: figuring out precisely how an enzyme called lipoyl synthase (LipA) acts as a catalyst in reactions that produce lipoic acid. Our metabolisms need lipoic acid to convert food into energy, but the process through which it is naturally produced has been unclear.
Specifically, McLaughlin, as part of a larger research team featuring scientists from MIT and Penn State University, was trying to understand one thing above all. LipA inserts sulfur into the reaction that produces lipoic acid. But where does the sulfur come from in the first place?
Now McLaughlin’s work has produced a notable answer, in a paper published today. LipA, in an unusual chemical arrangement, removes the sulfur from an iron-sulfur cluster that it already contains. In effect, LipA “cannibalizes” itself to catalyze the reaction that produces lipoic acid.
“The enzyme is actually cannibalizing its own cluster, pulling it out and putting in sulfur,” Drennan explains. “The definition of a catalyst is that it’s not being consumed. So this goes against all the fundamentals really, that the enzyme would just destroy itself.” Yet that is what the results show.
The finding could have long-term applications in medicine and agriculture, and is also generally significant within biochemistry research, since solving the LipA mystery suggests a means by which other enzymes use sulfur in similar settings.
“It just wasn’t understood how nature inserts sulfur into unactivated carbon centers,” says Squire Booker, a professor of chemistry and of biochemistry and molecular biology at Penn State, and an HHMI investigator, whose own research group made essential contributions to the finding. Booker, who as it happens received his PhD from MIT in 1994, adds: “We knew how the process takes place for incorporation of oxygen, for example. But we didn’t know how the sulfur goes in, and we didn’t know what the source of the sulfur was.”
The new paper, “Crystallographic snapshots of sulfur insertion by lipoyl synthase,” is being published today in the Proceedings of the National Academy of Sciences (PNAS). The authors are McLaughlin; Nicholas D. Lanz, a graduate student at Penn State; Peter J. Goldman, a former Drennan lab graduate student; Kyung-Hoon Lee, a researcher in Booker’s lab; Booker; and Drennan.
Arrive at MIT on Thursday; start research on Monday
Remarkably, McLaughlin’s work on LipA predates his time at MIT. McLaughlin was a student at State College High School in State College, Pennsylvania, and already interested in science, when he decided to see if he could volunteer in a lab at nearby Penn State. Before long, McLaughlin had connected with Booker, who was amenable to showing high school students the research ropes.
“Squire said, ‘Sure, you can work in my lab,’” McLaughlin recounts. “So we met and he told me I’d be setting up crystallization trials in an anaerobic chamber. I had no idea what that meant.”
It meant McLaughlin would be using a biology “glove box” — putting on gloves and reaching into a small, oxygen-free box to try to crystallize proteins. That is, researchers put proteins in solutions which evaporate, and under certain circumstances the proteins will crystallize in a way that allows them to be further analyzed.
“My job was to set up all of these crystallization experiments,” says McLaughlin. “I got lucky and got crystals for a few of those proteins, and one of them was lipoyl synthase.”
“Martin really was somebody very different,” Booker says. “He was aggressive, in a good way, incredibly motivated. He was so excited about science. Within a week, he said, ‘I’m going to need a key to the lab.’”
By the time he graduated from high school, McLaughlin had become proficient in doing the lab work, and had also gotten accepted to MIT. Booker and Drennan were already collaborating on the project, so Booker, acting as a catalyst, suggested to McLaughlin he could work on the sulfur problem with Drennan at MIT.
“Martin emailed me that he’s coming to MIT for undergrad, and asked if he could work in my research group,” Drennan recalls. “And I said ‘Absolutely.’ He said, ‘Well, okay, I might need a little time to settle into MIT.’ So I’m thinking sophomore year, or something. Then he said, ‘I arrive on Thursday, I unpack on Friday. Could I wait until the following Monday to start in the lab?’ Which is a week before classes start. He shows up in the lab apologizing for how long it took him to arrive.”
In the meantime, an important advance had been made by Nicholas Lanz, a Penn State graduate student, who found that in certain circumstances, molecules containing carbon form a bond with iron-sulfur clusters in such a way that an iron atom disappears — leaving an “extra” sulfur atom available for another reaction. In a sense, this showed that the conditions for the chemical cannibalization existed.
“For us, this was an important discovery, because it showed that the iron-sulfur cluster actually can be cannibalized in the reaction,” Booker says. “We saw it.”
Lanz prepared a version of this molecule and turned it over to McLaughlin, who then crystallized it and was able to perform the analysis of the structures and mechanisms showing that LipA, a bit counterintuitively, was indeed using its own sulfur atoms to help produce lipoic acid.
“Crystallography is a little unusual in that it’s very difficult to tell if you’re going to get any interesting results until you get them,” McLaughlin says. “You spend months or years working on getting a single crystal. I always hoped it would work, but I definitely wouldn’t say I knew it would work. It was an interesting enough system that I was willing to spend years on it, if that’s what was needed.”
Notably, when McLaughlin started at MIT, Drennan adds, her lab workers had been trying to get high-quality crystals of LipA for many years. “My graduate students had all but given up, and then Martin arrived,” she says.
No boring conversations allowed
The researchers emphasize that there are still many things about LipA that must be studied further — including how the iron-sulfur clusters get rebuilt after being cannibalized. That said, there are many potential applications that could come from understanding the natural production of lipoic acid.
“Lipoic acid is an incredibly important cofactor,” Booker says. “You can’t have aerobic life without lipoic acid.”
A synthetic version of lipoic acid is currently manufactured and used as a medical supplement in some countries, to combat diabetes, among other conditions. But it is also possible to envision drugs that target the reaction in order to stop multiple diseases, including some cancers and tuberculosis. (The molecule used in the research came from a tuberculosis bacterium, in fact.) Lipoic acid is also a livestock feed supplement manufactured in a “costly multistep synthesis,” the researchers point out in the paper, which could become simplified.
For now, the researchers are pleased to have made the current advance, and McLaughlin — who is now a doctoral student at the University of Illinois — emphasizes his good fortune in having been in the middle of the LipA story.
“I’m so grateful to Squire and to Cathy,” McLaughlin says. “They let a high school and undergraduate student work on some of their coolest projects. Both of those labs are great places to become a scientist.” And, he adds: “MIT is a very intellectually rich environment. It’s very difficult to have a boring conversation at MIT.”
The research was supported by the National Institutes of Health, the National Science Foundation, the Meryl and Stewart Robertson UROP Fund, the MIT Energy Initiative, and the DeFlorez Endowment Fund. The work was also based on research supported by the National Institute of General Medical Sciences, and the U.S. Department of Energy.
Warm North Atlantic Ocean promotes extreme winters in US and Europe
The extreme cold weather observed across Europe and the east coast of the US in recent winters could be partly due to natural, long-term variations in sea surface temperatures, according to a new study published today.
Researchers from the University of California Irvine have shown that a phenomenon known as the Atlantic Multidecadal Oscillation (AMO)—a natural pattern of variation in North Atlantic sea surface temperatures that switches between a positive and negative phase every 60–70 years—can affect an atmospheric circulation pattern, known as the North Atlantic Oscillation (NAO), that influences the temperature and precipitation over the Northern Hemisphere in winter.
When the AMO is in its positive phase and the sea surface temperatures are warmer, the study has shown that the main effect in winter is to promote the negative phase of the NAO which leads to “blocking” episodes over the North Atlantic sector, allowing cold weather systems to exist over the eastern US and Europe.
To arrive at their results, the researchers combined observations from the past century with climate simulations of the atmospheric response to the AMO.
According to their observations, sea surface temperatures in the Atlantic can be up to 1.5 °C warmer in the Gulf Stream region during the positive phase of the AMO compared to the negative, colder phase. The climate simulations suggest that these specific anomalies in sea surface temperatures can play a predominant role in promoting the change in the NAO.
Lead authors of the study Yannick Peings and Gudrun Magnusdottir said: “Our results indicate that the main effect of the positive AMO in winter is to promote the occurrence of the negative phase of the NAO. A negative NAO in winter usually goes hand-in-hand with cold weather in the eastern US and north-western Europe.”
The observations also suggest that it takes around 10–15 years before the positive phase of the AMO has any significant effect on the NAO. The reason for this lag is unknown; one explanation might be that AMO phases take time to develop fully. As the AMO has been in a positive phase since the early 1990s, it may have contributed to the extreme winters that both the US and Europe have experienced in recent years.
The researchers warn, however, that the future evolution of the AMO remains uncertain, with many factors potentially affecting how it interacts with atmospheric circulation patterns, such as Arctic sea ice loss, changes in solar radiation, volcanic eruptions and concentrations of greenhouse gases in the atmosphere.
The AMO also shows strong variability from one year to the next in addition to the changes seen every 60–70 years, which makes it difficult to attribute specific extreme winters to the AMO’s effects.
Responding to the extreme weather that gripped the eastern coast of the US this winter, Yannick Peings continued: “Unlike the 2012/2013 winter, this winter had rather low values of the AMO index and the pattern of sea surface temperature anomalies was not consistent with the typical positive AMO pattern. Moreover, the NAO was mostly positive with a relatively mild winter over Europe”.
“Therefore it is unlikely that the positive AMO played a defining role on the east coast of the US, although further work is necessary to answer this question. Such an event is consistent with the large internal variability of the atmosphere, and other external forcings may have played a role.
“Our future studies will look to compare the role of the AMO compared to Arctic sea ice anomalies, which have also been shown to affect atmospheric circulation patterns and promote colder, more extreme winters.” | <urn:uuid:19fdcaa0-f99f-4379-9722-27f67b5a9484> | 3.5 | 752 | News Article | Science & Tech. | 20.187882 | 95,611,761 |
Vol. 27, No. 5
From the Editor
If metrology has never before tickled your interest, this issue of CI may change your mind. Several features should help convince you of the importance of the “scientific study of measurement” and its impact not only on all disciplines of science, but also on the world at large.
First, Ian Mills reviews in detail the problems related to our current kilogram standard and prototype. Surely, changing the kilogram standard—as scientists are considering—will not make your favorite recipe turn out any different, nor will it cause your scale to reveal changes in your body weight. Instead, the proposed definition of the kilogram will allow masses, and also related fundamental constants, to be determined with greater accuracy and precision, down to parts in 10⁸. Mills explains the importance of making the change and how it would benefit fields such as quantum metrology, in which the unit of mass finally would be based on a standard “invariant of nature” and referenced to quantum properties, as are the units of length and time.
As we have come to appreciate, international trade, human health and safety, and environmental protection measures depend on metrology. From a qualitative point of view, there is also the need for a shared measurement terminology. In a position paper, Paul De Bièvre shows how ambiguous terminology can create barriers to trade. Reviewing typical terms such as “quantity” or “measurand,” he points out pitfalls that could lead to misunderstanding.
The importance of precision and accuracy in measurement is echoed by K. Racke et al. in their article on crop protection chemistry. Results from a recent workshop held in Costa Rica highlight the importance of regulatory harmonization and control of residues and human exposure; these also depend on metrology.
So, as you glance at C.P. Casey’s article listing the challenges facing chemists, perhaps you will add to your own list the challenge of being precise and accurate?
General: Thick pink body, black-edged dorsal and anal fins, black caudal fin. Small pelvic disc is used to attach to animals like tanner crabs, and frequently to research equipment.
Size: to 54 cm
Ocean range (global): South of Aleutian Islands to Baja California; western North Pacific (Honsu), southeastern Kamchatka to Bering Sea.
Habitat description: Demersal, soft sediment.
Published depth range: 61 - 2286 m, mostly > 400 m.
Verified MBARI depth distribution: to 2062 m (March 2016).
References
Encyclopedia of Life
Tree of Life
World Register of Marine Species
National Center for Biotechnology Information
Love, M.S. (2011). Certainly more than you want to know about the fishes of the Pacific coast. Really Big Press, Santa Barbara CA. 649 pp.
Stein, D.L., J.C. Drazen, K.L. Schlining, J.P. Barry, and L.A. Kuhnz (2006). Snailfishes of the central California coast: video, photographic and morphological observations. Journal of Fish Biology, 69: 970-986.
Fitch, J.E. and R.J. Lavenberg (1968). Deep-water teleostean fishes of California. California Natural History Guides:25, University of California Press, Berkeley and Los Angeles, California. 115 p.
Citation: Careproctus melanurus (Gilbert, 1892) Deep-Sea Guide (DSG) at http://dsg/mbari.org/dsg/view/concept/Careproctus%20melanurus. Monterey Bay Aquarium Research Institute (MBARI). Consulted on 2018-07-19. | <urn:uuid:2f5912bc-7050-489a-9562-1259bae2c4b1> | 2.71875 | 418 | Knowledge Article | Science & Tech. | 64.374444 | 95,611,774 |
|MLA Citation:||Bloomfield, Louis A. "Question 770"|
How Everything Works 20 Jul 2018. 20 Jul 2018 <http://howeverythingworks.org/print1.php?QNum=770>.
A wet-bulb/dry-bulb system measures humidity by looking at the temperature drop that occurs when water evaporates. As water evaporates from the bulb of the wet thermometer, the bulb's temperature drops and the rate at which water molecules leave the bulb's surface decreases. The bulb temperature drops until the rate at which water molecules leave the bulb is equal to the rate at which water molecules return to the bulb from the air. At that point, there is no net evaporation going on. In humid air, water molecules return to the bulb more often, so this balance is reached at a higher temperature than in dry air. The wet bulb temperature is thus warmer on a humid day than it is on a dry day.
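The balance described above is what a psychrometric formula turns into a number. The sketch below estimates relative humidity from the two bulb readings using the standard psychrometer equation together with a Magnus approximation for saturation vapor pressure. The coefficient values (Magnus constants and a psychrometer coefficient of about 6.6 × 10⁻⁴ K⁻¹ for a ventilated instrument) are typical textbook numbers, not taken from this article.

```python
import math

def saturation_vapor_pressure(t_celsius):
    """Magnus approximation for saturation vapor pressure over water, in hPa."""
    return 6.112 * math.exp(17.62 * t_celsius / (243.12 + t_celsius))

def relative_humidity(t_dry, t_wet, pressure_hpa=1013.25):
    """Estimate relative humidity (%) from dry- and wet-bulb temperatures (C).

    Uses the psychrometer equation: the actual vapor pressure equals the
    saturation vapor pressure at the wet-bulb temperature minus a term
    proportional to the wet-bulb depression (t_dry - t_wet).
    """
    gamma = 6.6e-4 * pressure_hpa  # psychrometer "constant", hPa per K
    e = saturation_vapor_pressure(t_wet) - gamma * (t_dry - t_wet)
    return 100.0 * e / saturation_vapor_pressure(t_dry)

# A large wet-bulb depression means dry air; no depression means saturation.
rh_example = relative_humidity(25.0, 18.0)   # roughly 50% for these readings
```

When the wet bulb reads the same as the dry bulb, no net evaporation is occurring and the formula returns 100% humidity, matching the physical picture above.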
Experiments led by Nicolas Dauphas of the University of Chicago and Chicago's Field Museum have validated some controversial rocks from Greenland as the potential site for the earliest evidence of life on Earth.
"The samples that I have studied are extremely controversial," said Dauphas, an Assistant Professor in Geophysical Sciences at the University of Chicago and a Field Museum Associate. Some scientists have claimed that these rocks from Greenland's banded iron formations contain traces of life that push back the biological record of life on Earth to 3.85 billion years ago. Others, however, dismiss the claim. They argue that the rocks originally existed in a molten state, a condition unsuitable for the preservation of evidence for life.
"My results show unambiguously that the rocks are sediment deposited at the bottom of an ocean," Dauphas said. "This is an important result. It puts the search for life on the early Earth on firm foundations."
Steve Koppes | EurekAlert!
NASA's Marshall Space Flight Center (MSFC) is the lead center for Materials Science Microgravity Research. The Materials Science Research Facility (MSRF) is a key development effort underway at MSFC. The MSRF will be the primary facility for microgravity materials science research on board the International Space Station (ISS) and will implement the NASA Materials Science Microgravity Research Program. It will operate in the U.S. Laboratory Module and support U.S. microgravity materials science investigations. This facility is being designed to maintain the momentum of the U.S. role in microgravity materials science and support NASA's Human Exploration and Development of Space (HEDS) Enterprise goals and objectives for materials science. The MSRF as currently envisioned will consist of three Materials Science Research Racks (MSRR), which will be deployed to the ISS in phases. Each rack is being designed to accommodate various Experiment Modules, which comprise processing facilities for peer-selected materials science experiments. Phased deployment will enable early opportunities for the U.S. and international partners, and support the timely incorporation of technology updates to the Experiment Modules and sensor devices.
Hubble explores the dark side of cosmic collisions
Astronomers using observations from the NASA/ESA Hubble Space Telescope and NASA's Chandra X-ray Observatory have studied how dark matter in clusters of galaxies behaves when the clusters collide. The results, published in the journal Science on 27 March 2015, show that dark matter interacts with itself even less than previously thought, and narrows down the options for what this mysterious substance might be.
This collage shows NASA/ESA Hubble Space Telescope images of six different galaxy clusters. The clusters were observed in a study of how dark matter in clusters of galaxies behaves when the clusters collide. 72 large cluster collisions were studied in total.
The clusters shown here are, from left to right and top to bottom: MACS J0416.1-2403, MACS J0152.5-2852, MACS J0717.5+3745, Abell 370, Abell 2744 and ZwCl 1358+62.
NASA, ESA, D. Harvey (École Polytechnique Fédérale de Lausanne, Switzerland), R. Massey (Durham University, UK), the Hubble SM4 ERO Team, ST-ECF, ESO, D. Coe (STScI), J. Merten (Heidelberg/Bologna), HST Frontier Fields, Harald Ebeling(University of Hawaii at Manoa), Jean-Paul Kneib (LAM)and Johan Richard (Caltech, USA)
Dark matter is a giant question mark looming over our knowledge of the Universe. There is more dark matter in the Universe than visible matter, but it is extremely elusive; it does not reflect, absorb or emit light, making it invisible. Because of this, it is only known to exist via its gravitational effects on the visible Universe (see e.g. heic1215a).
To learn more about this mysterious substance, researchers can study it in a way similar to experiments on visible matter -- by watching what happens when it bumps into things. For this reason, researchers look at vast collections of galaxies, called galaxy clusters, where collisions involving dark matter happen naturally and where it exists in vast enough quantities to see the effects of collisions.
Galaxies are made of three main ingredients: stars, clouds of gas and dark matter. During collisions, the clouds of gas spread throughout the galaxies crash into each other and slow down or stop. The stars are much less affected by the drag from the gas and, because of the huge gaps between them, do not have a slowing effect on each other -- though if two stars did collide the frictional forces would be huge.
"We know how gas and stars react to these cosmic crashes and where they emerge from the wreckage. Comparing how dark matter behaves can help us to narrow down what it actually is," explains David Harvey of the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland, lead author of a new study.
Harvey and his team used data from the NASA/ESA Hubble Space Telescope and NASA's Chandra X-ray Observatory to study 72 large cluster collisions. The collisions happened at different times, and are seen from different angles -- some from the side, and others head-on.
The team found that, like the stars, the dark matter continued straight through the violent collisions without slowing down. However, unlike the stars, this is not because dark matter particles are far from one another during the collisions: the leading theory holds that dark matter is spread evenly throughout the galaxy clusters, so dark matter particles frequently pass very close to each other. The dark matter doesn't slow down because it not only fails to interact with visible particles, it also interacts even less with other dark matter than previously thought.
"A previous study had seen similar behaviour in the Bullet Cluster," says team member Richard Massey of Durham University, UK. "But it's difficult to interpret what you're seeing if you have just one example. Each collision takes hundreds of millions of years, so in a human lifetime we only get to see one freeze-frame from a single camera angle. Now that we have studied so many more collisions, we can start to piece together the full movie and better understand what is going on."
By finding that dark matter interacts with itself even less than previously thought, the team have successfully narrowed down the properties of dark matter. Particle physics theorists have to keep looking, but they now have a smaller set of unknowns to work with when building their models.
Dark matter could potentially have rich and complex properties, and there are still several other types of interaction to study. These latest results rule out interactions that create a strong frictional force, causing dark matter to slow down during collisions. Other possible interactions could make dark matter particles bounce off each other like billiard balls, causing dark matter to be thrown out of collisions or for dark matter blobs to change shape. The team will be studying these next.
To further increase the number of collisions that can be studied, the team are also looking to study collisions involving individual galaxies, which are much more common.
"There are still several viable candidates for dark matter, so the game is not over, but we are getting nearer to an answer," concludes Harvey. "These 'Astronomically Large' particle colliders are finally letting us glimpse the dark world all around us but just out of reach."
On Earth scientists use particle accelerators to find out more about the properties of different particles. Physicists can investigate what substances are made of by accelerating particles into a collision, and examining the properties and trajectory of the resulting debris.
A cluster of galaxies is a swarm of galaxies permeated by a sea of hot, X-ray-emitting ionised hydrogen gas, all embedded in a massive cloud of dark matter. It is the interactions of these, the most massive structures in the Universe, that are observed in order to test dark matter's properties.
The gas-gas interaction in cluster collisions is very strong, while the gas-star drag is weak. The situation is similar to a soap bubble and a bullet in the wind: the bubble interacts a great deal more with the wind than the bullet does.
To find out where the dark matter was located in the cluster the researchers studied the light from galaxies behind the cluster whose light had been magnified and distorted by the mass in the cluster. Because they have a good idea of the visible mass in the cluster, the amount the light is distorted tells them how much dark matter there is in a region.
A favoured theory is that dark matter might be constituted of "supersymmetric" particles. Supersymmetry is a theory in which all particles in our Standard Model -- electrons, protons, neutrons, and so on -- have a more massive "supersymmetric" partner. While there has been no experimental confirmation for supersymmetry as yet, the theory would solve a few of the gaps in our current thinking. One of supersymmetry's proposed particles would be stable, electrically neutral, and only interact weakly with the common particles of the Standard Model -- all the properties required to explain dark matter.
Notes for editors
The Hubble Space Telescope is a project of international cooperation between ESA and NASA.
The research paper, entitled "The non-gravitational interactions of dark matter in colliding galaxy clusters", will be published in the journal Science on 27 March 2015.
The international team of astronomers in this study consists of D. Harvey (École Polytechnique Fédérale de Lausanne, Switzerland; University of Edinburgh, UK), R. Massey (Durham University, UK), T. Kitching (University College London, UK), A. Taylor (University of Edinburgh, UK), and E. Tittley (University of Edinburgh, UK).
Image credit: NASA, ESA, D. Harvey (École Polytechnique Fédérale de Lausanne, Switzerland) and R. Massey (Durham University, UK)
Images of Hubble - http://www.
Link to science paper - http://www.
École Polytechnique Fédérale de Lausanne
Tel: +41 22 3792475
Cell: +41 7946 38283
Tel: +44 7740 648080
ESA/Hubble, Public Information Officer
Garching bei München, Germany
Tel: +44 7816 291261
Georgia Bladon | EurekAlert!
When to use surrogate keys in InnoDB tables
Posted in Databases on May 10, 2006
InnoDB is a special case among MySQL storage engines because it has clustered indexes, which means surrogate keys have to be treated differently in InnoDB. This article gives a quick overview of clustered indexes, and explains why they make it even more important to do careful analysis before making decisions about surrogate keys on InnoDB tables.
A clustered index is just like any other index, except the index holds the data itself, in index order. That is, the index’s leaf nodes are the rows of the table, and the rows are sorted by the index. Because the rows are sorted by the index, there can be only one clustered index per table.
This means when a query uses an index seek to find a row, the seek moves through the index and lands on the data itself. By contrast, non-clustered indexes store a pointer to the data, and the query must then do a “bookmark lookup” to get to the data.
You probably see now why clustered indexes are important. They can create huge performance increases. Once the query finds the data, it has the data – there’s no need to read through more pages (i.e. wait for the hard disk to respond) and do bookmark lookups to find the data. And since the rows are stored in index order, queries that work with ranges of data can use the clustered index to great effect. For example, if a table’s data is clustered on date, it’s highly efficient to select all rows newer than a certain date. The query just seeks into the index and finds the first row; then everything else in the table is guaranteed to be newer, so the query can blindly read every remaining row.
MySQL’s storage engines are all different. Only InnoDB offers clustered indexes, and InnoDB makes the primary key the clustered index. This means the choice of primary key is critical to performance on the InnoDB engine, especially as tables become large.
Another important factor is the way InnoDB handles non-clustered (also known as secondary) indexes. Instead of pointing directly to the row, each leaf node in a secondary index contains a tuple from the clustered index (otherwise, maintaining secondary indexes would be extremely expensive in the case of a page split). That means secondary indexes are actually at a slight disadvantage in InnoDB compared to other storage engines, because using the index requires navigating two indexes. It also means the size of each secondary index is dependent on the size of the clustered index.
What does this have to do with surrogate keys? Since MySQL doesn't allow an AUTO_INCREMENT column unless it is part of an index, and it is almost always made the primary key, the clustered index ends up wasted on a meaningless number.
Unfortunately, many people seem to instinctively add an AUTO_INCREMENT column as a primary key by default. Search around the web and you'll see people frequently giving that advice when telling a beginner how to design tables. Choosing a primary key by examining the data and finding its inherent primary key can help avoid a performance killer.
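For illustration, the two approaches might look like this in DDL (the table and column names here are hypothetical, a sketch rather than a recommendation for any particular schema):

```sql
-- Surrogate key: the clustered index is spent on a meaningless number,
-- and a secondary index is still needed for the common lookup.
CREATE TABLE order_item_surrogate (
    id          INT UNSIGNED NOT NULL AUTO_INCREMENT,
    order_id    INT UNSIGNED NOT NULL,
    product_id  INT UNSIGNED NOT NULL,
    quantity    INT NOT NULL,
    PRIMARY KEY (id),
    KEY (order_id)
) ENGINE=InnoDB;

-- Inherent key: rows are clustered by (order_id, product_id), so fetching
-- all items for an order is a single clustered-index range scan.
CREATE TABLE order_item_natural (
    order_id    INT UNSIGNED NOT NULL,
    product_id  INT UNSIGNED NOT NULL,
    quantity    INT NOT NULL,
    PRIMARY KEY (order_id, product_id)
) ENGINE=InnoDB;
```

In the second table, a query such as `SELECT * FROM order_item_natural WHERE order_id = 42` can read the rows directly from the clustered index, with no bookmark lookups.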
There is an important exception to the "avoid surrogate keys" principle. If the table's inherent primary key is large, the non-clustered indexes are also large, so they become much less efficient. Not only is each non-clustered index less efficient, but the value returned by a non-clustered index's seek is large too, so navigating the primary key is slower as well. In these cases, using a surrogate key may actually improve performance. It depends on the table.
Heidelberg researchers show that nerve cell centralisation does begin in multicellular animals
While searching for the origin of our brain, biologists at Heidelberg University have gained new insights into the evolution of the central nervous system (CNS) and its highly developed biological structures. The researchers analysed neurogenesis at the molecular level in the model organism Nematostella vectensis.
Using certain genes and signal factors, the team led by Prof. Dr. Thomas Holstein of the Centre for Organismal Studies demonstrated how the origin of nerve cell centralisation can be traced back to the diffuse nerve net of simple and original lower animals like the sea anemone. The results of their research will be published in the journal “Nature Communications”.
Like corals and jellyfish, the sea anemone – Nematostella vectensis – is a member of the Cnidaria family, which is over 700 million years old. It has a simple sack-like body, with no skeleton and just one body orifice. The nervous system of this original multicellular animal is organised in an elementary nerve net that is already capable of simple behaviour patterns.
Researchers previously assumed that this net did not evidence centralisation, that is, no local concentration of nerve cells. In the course of their research, however, the scientists discovered that the nerve net of the embryonic sea anemone is formed by a set of neuronal genes and signal factors that are also found in vertebrates.
According to Prof. Holstein, the origin of the first nerve cells depends on the Wnt signal pathway, named for its signal protein, Wnt. It plays a pivotal role in the orderly evolution of different types of animal cells. The Heidelberg researchers also uncovered an initial indication that another signal path is active in the neurogenesis of sea anemones – the BMP pathway, which is instrumental for the centralisation of nerve cells in vertebrates.
Named after the BMP signal protein, this pathway controls the evolution of various cell types depending on the protein concentration, similar to the Wnt pathway, but in a different direction. The BMP pathway runs at a right angle to the Wnt pathway, thereby creating an asymmetrical pattern of neuronal cell types in the widely diffuse neuronal net of the sea anemone. “This can be considered as the birth of centralisation of the neuronal network on the path to the complex brains of vertebrates,” underscores Prof. Holstein.
While the Wnt signal path triggers the formation of the primary body axis of all animals, from sponges to vertebrates, the BMP signal pathway is also involved in the formation of the secondary body axis (back and abdomen) in advanced vertebrates. “Our research results indicate that the origin of a central nervous system is closely linked to the evolution of the body axes,” explains Prof. Holstein.
H. Watanabe, A. Kuhn, M. Fushiki, K. Agata, Y. Kocagöz, S. Özbek, T. Fujisawa & T.W. Holstein: "Sequential actions of β-catenin and Bmp pattern the oral nerve net in Nematostella vectensis". Nature Communications 5:5536 (23 December 2014), doi:10.1038/ncomms6536
Prof. Dr. Thomas Holstein
Centre for Organismal Studies
Phone: +49 6221 54-5679
Communications and Marketing
Phone: +49 6221 54-2311
Marietta Fuhrmann-Koch | idw - Informationsdienst Wissenschaft
The Biot-Savart law describes the magnetic field generated by an electric current. It relates the magnetic field to the magnitude, direction, length and proximity of the electric current present. The law is used in the magnetostatic approximation.
The Biot-Savart law is used for calculating the resultant magnetic field B at position r generated by a steady current I. A steady current is a continual flow of charge that is constant in time; the charge neither accumulates nor depletes at any point. The equation is:
B = (μ₀/4π) ∫_C (I dl × r)/|r|³
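As a quick numerical check of the law, the line integral can be discretized into a sum over short current elements. The sketch below (the function name and test geometry are my own) evaluates the field at the centre of a circular current loop and compares it with the closed-form result B = μ₀I/(2R):

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, T*m/A

def loop_field_center(current=1.0, radius=0.5, segments=2000):
    """Discretize the Biot-Savart integral for a circular loop in the
    z = 0 plane and evaluate B at the loop's centre (the origin)."""
    theta = np.linspace(0.0, 2.0 * np.pi, segments, endpoint=False)
    dtheta = 2.0 * np.pi / segments
    # Positions of the current elements on the loop.
    pos = np.stack([radius * np.cos(theta),
                    radius * np.sin(theta),
                    np.zeros_like(theta)], axis=1)
    # dl vectors are tangent to the loop, of length R*dtheta.
    dl = np.stack([-radius * np.sin(theta),
                   radius * np.cos(theta),
                   np.zeros_like(theta)], axis=1) * dtheta
    # r points from each source element to the field point (the origin).
    r = -pos
    r3 = np.linalg.norm(r, axis=1, keepdims=True) ** 3
    dB = MU0 / (4.0 * np.pi) * current * np.cross(dl, r) / r3
    return dB.sum(axis=0)

B = loop_field_center()
analytic = MU0 * 1.0 / (2.0 * 0.5)  # closed form mu0*I/(2R) at the centre
```

By symmetry the x and y components cancel, and the z component converges to the analytic value.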
Maxwell considered the magnetic permeability μ to be a measure of the density of the vortex sea. Hence:
- Magnetic induction current
B = μH

was essentially a rotational analogy to the linear electric current relationship
- Electric Convection current
J = ρv
where ρ is the electric charge density. B was seen as a kind of magnetic current of vortices aligned in their axial planes, with H being the circumferential velocity of the vortices.
The electric current equation can be viewed as a convective current of electric charge involving linear motion.
In spacecraft propulsion, a Hall-effect thruster (HET) is a type of ion thruster in which the propellant is accelerated by an electric field. Hall-effect thrusters trap electrons in a magnetic field and then use the electrons to ionize propellant, efficiently accelerate the ions to produce thrust, and neutralize the ions in the plume. Hall-effect thrusters (based on the discovery by Edwin Hall) are sometimes referred to as Hall thrusters or Hall-current thrusters. Hall thrusters are often regarded as a moderate specific impulse (1,600 s) space propulsion technology. The Hall-effect thruster has benefited from considerable theoretical and experimental research since the 1960s.
Hall thrusters are able to accelerate their exhaust to speeds between 10 and 80 km/s (1,000–8,000 s specific impulse), with most models operating between 15 and 30 km/s (1,500–3,000 s specific impulse).
The thrust produced by a Hall thruster varies depending on the power level. Devices operating at 1.35 kW produce about 83 mN of thrust. High-power models have demonstrated up to 5.4 N in the laboratory. Power levels up to 100 kW have been demonstrated by xenon Hall thrusters.
As of 2009, Hall-effect thrusters ranged in input power levels from 1.35 to 10 kilowatts and had exhaust velocities of 10–50 kilometers per second, with thrust of 40–600 millinewtons and efficiency in the range of 45–60 percent.
Hall thrusters were studied independently in the United States and the Soviet Union. They were first described publicly in the US in the early 1960s. However, the Hall thruster was first developed into an efficient propulsion device in the Soviet Union. In the US, scientists focused instead on developing gridded ion thrusters.
Two types of Hall thrusters were developed in the Soviet Union:
- thrusters with wide acceleration zone, SPT (Russian: СПД, стационарный плазменный двигатель; English: SPT, Stationary Plasma Thruster) at Design Bureau Fakel
- thrusters with narrow acceleration zone, DAS (Russian: ДАС, двигатель с анодным слоем; English: TAL, Thruster with Anode Layer), at the Central Research Institute for Machine Building (TsNIIMASH).
The SPT design was largely the work of A. I. Morozov. The first SPT to operate in space, an SPT-50 aboard a Soviet Meteor spacecraft, was launched in December 1971. The thrusters were mainly used for satellite stabilization in the North-South and East-West directions. From then until the late 1990s, 118 SPT engines completed their missions and some 50 continued to be operated. Thrust of the first generation of SPT engines, the SPT-50 and SPT-60, was 20 and 30 mN respectively. In 1982, the SPT-70 and SPT-100 were introduced, their thrusts being 40 and 83 mN respectively. In post-Soviet Russia, the high-power (a few kilowatts) SPT-140, SPT-160, SPT-200 and T-160, and the low-power (less than 500 W) SPT-35, were introduced.
Soviet and Russian TAL-type thrusters include the D-38, D-55, D-80, and D-100.
Soviet-built thrusters were introduced to the West in 1992 after a team of electric propulsion specialists from NASA's Jet Propulsion Laboratory, Glenn Research Center, and the Air Force Research Laboratory, under the support of the Ballistic Missile Defense Organization, visited Russian laboratories and experimentally evaluated the SPT-100 (i.e., a 100 mm diameter SPT thruster). Over 200 Hall thrusters have been flown on Soviet/Russian satellites in the past thirty years. No failures have ever occurred on orbit. Hall thrusters continue to be used on Russian spacecraft and have also flown on European and American spacecraft. Space Systems/Loral, an American commercial satellite manufacturer, now flies Fakel SPT-100's on their GEO communications spacecraft.
Since their introduction to the west in the early 1990s, Hall thrusters have been the subject of a large number of research efforts throughout the United States, France, Italy, Japan, and Russia (with many smaller efforts scattered in various countries across the globe). Hall thruster research in the US is conducted at several government laboratories, universities and private companies. Government and government funded centers include NASA's Jet Propulsion Laboratory, NASA's Glenn Research Center, the Air Force Research Laboratory (Edwards AFB, CA), and The Aerospace Corporation. Universities include the US Air Force Institute of Technology, University of Michigan, Stanford University, The Massachusetts Institute of Technology, Princeton University, Michigan Technological University, and Georgia Tech. A considerable amount of development is being conducted in industry, such as IHI in Japan, Aerojet and Busek in the USA, SNECMA in France, LAJP in Ukraine, and SITAEL in Italy.
The first use of Hall thrusters on lunar orbit was the European Space Agency (ESA) lunar mission SMART-1 in 2003.
On a western satellite, Hall thrusters were first demonstrated on the Naval Research Laboratory (NRL) STEX spacecraft, which flew the Russian D-55. The first American Hall thruster to fly in space was the Busek BHT-200 on the TacSat-2 technology demonstration spacecraft. The first flight of an American Hall thruster on an operational mission was the Aerojet BPT-4000, which launched in August 2010 on the military Advanced Extremely High Frequency GEO communications satellite. At 4.5 kW, the BPT-4000 is also the highest-power Hall thruster ever flown in space. Besides the usual stationkeeping tasks, the BPT-4000 is also providing orbit-raising capability to the spacecraft. Several countries worldwide continue efforts to qualify Hall thruster technology for commercial uses.
The essential working principle of the Hall thruster is that it uses an electrostatic potential to accelerate ions up to high speeds. In a Hall thruster, the attractive negative charge is provided by an electron plasma at the open end of the thruster instead of a grid. A radial magnetic field of about 100–300 G (0.01–0.03 T) is used to confine the electrons, where the combination of the radial magnetic field and axial electric field cause the electrons to drift in azimuth thus forming the Hall current from which the device gets its name.
The central spike forms one pole of an electromagnet and is surrounded by an annular space, and around that is the other pole of the electromagnet, with a radial magnetic field in between.
The propellant, such as xenon gas, is fed through the anode, which has numerous small holes in it to act as a gas distributor. Xenon propellant is used because of its high atomic weight and low ionization potential. As the neutral xenon atoms diffuse into the channel of the thruster, they are ionized by collisions with circulating high-energy electrons (typically 10–40 eV, or about 10% of the discharge voltage). Once ionized, the xenon ions typically have a charge of +1, though a small fraction (~20%) have +2.
The xenon ions are then accelerated by the electric field between the anode and the cathode. For discharge voltages of 300 V, the ions reach speeds of around 15 km/s (9.3 mi/s) for a specific impulse of 1,500 seconds (15 kN·s/kg). Upon exiting, however, the ions pull an equal number of electrons with them, creating a plasma plume with no net charge.
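The quoted exit speed can be sanity-checked from energy conservation: a singly charged ion falling through the full discharge potential V reaches v = √(2qV/m). This ideal value (a sketch; the function and variable names are illustrative) overestimates the real ~15 km/s figure, because actual ions are typically born part-way down the potential and suffer other losses:

```python
import math

E_CHARGE = 1.602176634e-19   # elementary charge, C
AMU = 1.66053906660e-27      # atomic mass unit, kg
XENON_MASS = 131.293 * AMU   # mean atomic mass of xenon, kg

def ideal_ion_speed(discharge_voltage, charge_state=1, mass=XENON_MASS):
    """Upper-bound exhaust speed from 0.5*m*v**2 = q*V."""
    return math.sqrt(2.0 * charge_state * E_CHARGE * discharge_voltage / mass)

v_300 = ideal_ion_speed(300.0)   # roughly 21 km/s for a 300 V discharge
isp_300 = v_300 / 9.80665        # equivalent specific impulse, seconds
```

The gap between the ideal ~21 km/s and the observed ~15 km/s is one way of seeing the acceleration efficiency of a real thruster.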
The radial magnetic field is designed to be strong enough to substantially deflect the low-mass electrons, but not the high-mass ions, which have a much larger gyroradius and are hardly impeded. The majority of electrons are thus stuck orbiting in the region of high radial magnetic field near the thruster exit plane, trapped in E×B (axial electric field and radial magnetic field). This orbital rotation of the electrons is a circulating Hall current, and it is from this that the Hall thruster gets its name. Collisions with other particles and walls, as well as plasma instabilities, allow some of the electrons to be freed from the magnetic field, and they drift towards the anode.
About 20–30% of the discharge current is an electron current, which does not produce thrust, thus limiting the energetic efficiency of the thruster; the other 70–80% of the current is in the ions. Because the majority of electrons are trapped in the Hall current, they have a long residence time inside the thruster and are able to ionize almost all of the xenon propellant, allowing mass utilizations of 90–99%. The mass utilization efficiency of the thruster is thus around 90%, while the discharge current efficiency is around 70%, for a combined thruster efficiency of around 63% (= 90% × 70%). Modern Hall thrusters have achieved efficiencies as high as 75% through advanced designs.
Compared to chemical rockets, the thrust is very small, on the order of 83 mN for a typical thruster operating at 300 V, 1.5 kW. For comparison, the weight of a coin like the U.S. quarter or a 20-cent Euro coin is approximately 60 mN. As with all forms of electrically powered spacecraft propulsion, thrust is limited by available power, efficiency, and specific impulse.
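The numbers in this paragraph can be related through the ideal jet relations T = ṁ·v_e and P_jet = ½·ṁ·v_e², which combine to T = 2ηP/v_e for a thruster of overall efficiency η. A sketch with the article's operating point (the ~42% overall efficiency is an illustrative assumption of mine, lower than the 63% discharge figure because it also folds in voltage and divergence losses):

```python
def ideal_thrust(power_w, exhaust_speed_m_s, efficiency):
    """T = 2*eta*P/v_e, from T = mdot*v_e and P_jet = 0.5*mdot*v_e**2."""
    return 2.0 * efficiency * power_w / exhaust_speed_m_s

# Text's operating point: 1.5 kW input, ~15 km/s exhaust speed.
# 0.42 overall efficiency is an assumed, illustrative value.
thrust_n = ideal_thrust(1500.0, 15000.0, 0.42)
```

With these inputs the result lands near the quoted 83 mN, which is the sense in which thrust is "limited by available power, efficiency, and specific impulse".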
However, Hall thrusters operate at the high specific impulses that are typical of electric propulsion. One particular advantage of Hall thrusters over gridded ion thrusters is that the generation and acceleration of the ions take place in a quasi-neutral plasma, so there is no Child-Langmuir space-charge saturated-current limitation on the thrust density. This allows much smaller thrusters compared to gridded ion thrusters.
Another advantage is that these thrusters can use a wider variety of propellants supplied to the anode, even oxygen, although something easily ionized is needed at the cathode.
Cylindrical Hall thrusters
Although conventional (annular) Hall thrusters are efficient in the kilowatt power regime, they become inefficient when scaled to small sizes, because of the difficulty of holding the performance scaling parameters constant while decreasing the channel size and increasing the applied magnetic field strength. This led to the design of the cylindrical Hall thruster, whose nonconventional discharge-chamber geometry and associated magnetic field profile allow it to be scaled to smaller sizes, lending itself more readily to miniaturization and low-power operation than a conventional (annular) Hall thruster. The primary motivation is that it is difficult for a regular Hall thruster to operate over a broad envelope from ~1 kW down to ~100 W while maintaining an efficiency of 45-55%.
External discharge Hall thruster
Sputtering erosion of the discharge channel walls and of the pole pieces that protect the magnetic circuit causes failure of thruster operation. Therefore, annular and cylindrical Hall thrusters have a limited lifetime. Although magnetic shielding has been shown to dramatically reduce discharge channel wall erosion, pole piece erosion is still a concern. As an alternative, an unconventional Hall thruster design called the external discharge Hall thruster or external discharge plasma thruster (XPT) has been introduced. The external discharge Hall thruster does not possess any discharge channel walls or pole pieces. The plasma discharge is produced and sustained completely in open space outside the thruster structure, and thus erosion-free operation is achieved.
Hall thrusters have been flying in space since December 1971 when the Soviets launched an SPT-50 on a Meteor satellite. Over 240 thrusters have flown in space since that time with a 100% success rate. Hall thrusters are now routinely flown on commercial GEO communications satellites where they are used for orbital insertion and stationkeeping.
The solar electric propulsion system of the European Space Agency's SMART-1 spacecraft used a Snecma PPS-1350-G Hall thruster. SMART-1 was a technology demonstration mission that orbited the Moon. This use of the PPS-1350-G, starting on September 28, 2003, was the first use of a Hall thruster outside geosynchronous earth orbit (GEO). Unlike most Hall thruster propulsion systems used in commercial applications, the Hall thruster on SMART-1 could be throttled over a range of power, specific impulse, and thrust.
- Discharge power: 0.46–1.19 kW
- Specific impulse: 1,100–1,600 s
- Thrust: 30–70 mN
The methods of computerized tomography (CT), developed for medical x-ray applications, can be adapted for use in studying plasma x-ray emissivity distributions in tokamaks and other magnetic confinement devices. Current generation CT scanners reconstruct maps of x-ray absorptivity on body cross sections by processing transmission data from a number of fan-shaped beams of x rays. Analogous fan beam emission data can be obtained from confined plasmas by collimating emitted soft x rays with a "pin hole" or slit and detecting them with a linear array of solid-state detectors. Data from a number of such one-dimensional views of the plasma can be used to reconstruct a two-dimensional "photograph" of the absolute x-ray emission in cross section. No a priori assumptions about the nature of the emissivity distribution are necessary. In this paper we demonstrate the feasibility of the technique by reconstructing test patterns with data simulated for a number of different types of detector arrangements. We also use the technique with real data to reconstruct a rotating emissivity feature on a cross section of Massachusetts Institute of Technology's Alcator A tokamak.
R. C. Chase and F. H. Seguin, "Application Of Computerized Tomography Techniques To Tokamak Diagnostics," Optical Engineering 20(3), 203486 (1 June 1981). https://doi.org/10.1117/12.7972746
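As a toy illustration of the reconstruction idea (not the authors' algorithm): build the line-integral operator for a few view directions on a small grid, simulate the 1-D "views" of a known emissivity pattern, and invert with least squares. The grid size, view geometry, and solver choice here are illustrative assumptions.

```python
import numpy as np

n = 8
yy, xx = np.mgrid[0:n, 0:n]
# "True" emissivity: a bright blob on a dark background
true_emissivity = np.exp(-((xx - 3.0) ** 2 + (yy - 4.0) ** 2) / 4.0)

# Line-integral operator: each row of A sums the pixels along one chord,
# for four view directions (0, 45, 90 and 135 degrees in index space).
rows = []
for k in range(n):                          # horizontal chords
    m = np.zeros((n, n)); m[k, :] = 1.0; rows.append(m.ravel())
for k in range(n):                          # vertical chords
    m = np.zeros((n, n)); m[:, k] = 1.0; rows.append(m.ravel())
for k in range(-(n - 1), n):                # both diagonal directions
    rows.append(np.eye(n, k=k).ravel())
    rows.append(np.fliplr(np.eye(n, k=k)).ravel())
A = np.vstack(rows)

views = A @ true_emissivity.ravel()         # simulated detector data
recon, *_ = np.linalg.lstsq(A, views, rcond=None)   # least-squares inversion
recon = recon.reshape(n, n)
print("peak of reconstruction at pixel:",
      np.unravel_index(recon.argmax(), recon.shape))
```

With only four view directions the system is under-determined, so `lstsq` returns the minimum-norm map consistent with the data; real CT codes use many more views (and iterative or filtered-backprojection solvers) for sharper reconstructions.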
Researchers at the University of Rochester have overcome experimental challenges to demonstrate a new way for getting a full picture of twisted light: characterizing the Wigner distribution.
Twisted light has raised researchers' interest for its potential for quantum communication applications. The discrete nature of one of the defining parameters of twisted light, orbital angular momentum (OAM), makes it attractive for encoding quantum information.
There is also no known fundamental limit to the maximum OAM value that can be coded into a photon, which could allow for quicker communication than with other systems.
But before any particular system can be used in quantum communication, researchers need to be able to measure it and describe it. Other methods to obtain the wavefunction, a property that describes a quantum system in full - such as quantum tomography or direct measurements - have been demonstrated in the past.
However, in a Physical Review Letters paper published this week, the Rochester researchers state that their technique is particularly "suitable for quantum information applications involving a large number of OAM states."
The Wigner distribution is a mathematical construct that completely describes a system in terms of two conjugate variables, that is, two variables linked by Heisenberg's uncertainty principle. Mohammad Mirhosseini, a postdoctoral associate in Professor of Optics Robert W. Boyd's group, and his collaborators at the Institute of Optics have now shown how the Wigner distribution can be obtained for twisted light. The work also represents the first characterization of the Wigner distribution that involves a discrete variable, as is the case with OAM.
"Apart from the potential uses in quantum communication, our work might offer a good way for describing atomic systems with quantized levels," said Mirhosseini. "The Wigner distribution of twisted light is a very complete way to understand the system: not only does it tell us about the relation between these two linked variables, but it also tells us about the system's behavior. We showed that the Wigner distribution for twisted light superpositions contains negative values, which reveals wave-like behavior."
Mirhosseini thinks their work could also show a possible path forward for other experiments.
"Measuring time in quantum systems is not as simple as using a watch - it can prove challenging," says Mirhosseini. "The conjugate variable of OAM, angle, is in many ways similar to phase, which is itself similar to time. So perhaps the lessons learned here can be applied, in other experiments, to systems where we need to measure time."
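The negative values mentioned in the quote can be illustrated with the familiar continuous-variable Wigner function, W(x, p) = (1/π) ∫ ψ*(x+y) ψ(x−y) e^{2ipy} dy (ħ = 1), evaluated numerically for a superposition ("cat") state. This is a generic sketch, not the paper's discrete-OAM construction; the grid sizes and the test state are my own choices.

```python
import numpy as np

def wigner(psi, xs, ps, ys):
    """W(x,p) = (1/pi) * Int dy psi*(x+y) psi(x-y) exp(2ipy), hbar = 1."""
    dy = ys[1] - ys[0]
    W = np.empty((len(xs), len(ps)))
    for i, x in enumerate(xs):
        corr = np.conj(psi(x + ys)) * psi(x - ys)
        for j, p in enumerate(ps):
            W[i, j] = (dy / np.pi) * np.real(np.sum(corr * np.exp(2j * p * ys)))
    return W

a = 3.0                                   # separation of the two wavepackets
norm = (2.0 * np.sqrt(np.pi) * (1.0 + np.exp(-a * a))) ** -0.5
cat = lambda x: norm * (np.exp(-(x - a) ** 2 / 2) + np.exp(-(x + a) ** 2 / 2))

xs = np.linspace(-8, 8, 81)
ps = np.linspace(-4, 4, 81)
ys = np.linspace(-8, 8, 161)
W = wigner(cat, xs, ps, ys)
print("most negative value:", W.min())    # < 0: non-classical interference
```

The interference fringes around x = 0 dip well below zero, which is exactly the wave-like signature a classical probability distribution cannot show.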
Leonor Sierra | EurekAlert!
The equatorial coordinate system is a celestial coordinate system widely used to specify the positions of celestial objects. It may be implemented in spherical or rectangular coordinates, both defined by an origin at the centre of Earth, a fundamental plane consisting of the projection of Earth's equator onto the celestial sphere (forming the celestial equator), a primary direction towards the vernal equinox, and a right-handed convention.
The origin at the centre of Earth means the coordinates are geocentric, that is, as seen from the centre of Earth as if it were transparent. The fundamental plane and the primary direction mean that the coordinate system, while aligned with Earth's equator and pole, does not rotate with the Earth, but remains relatively fixed against the background stars. A right-handed convention means that coordinates increase northward from and eastward around the fundamental plane.
This description of the orientation of the reference frame is somewhat simplified; the orientation is not quite fixed. A slow motion of Earth's axis, precession, causes a slow, continuous turning of the coordinate system westward about the poles of the ecliptic, completing one circuit in about 26,000 years. Superimposed on this is a smaller motion of the ecliptic, and a small oscillation of the Earth's axis, nutation.
In order to fix the exact primary direction, these motions necessitate the specification of the equinox of a particular date, known as an epoch, when giving a position. The three most commonly used are:
A position in the equatorial coordinate system is thus typically specified as true equinox and equator of date, mean equinox and equator of J2000.0, or similar. Note that there is no "mean ecliptic", as the ecliptic is not subject to small periodic oscillations.
A star's spherical coordinates are often expressed as a pair, right ascension and declination, without a distance coordinate. The direction of sufficiently distant objects is the same for all observers, and it is convenient to specify this direction with the same coordinates for all. In contrast, in the horizontal coordinate system, a star's position differs from observer to observer based on their positions on the Earth's surface, and is continuously changing with the Earth's rotation.
Telescopes equipped with equatorial mounts and setting circles employ the equatorial coordinate system to find objects. Setting circles in conjunction with a star chart or ephemeris allow the telescope to be easily pointed at known objects on the celestial sphere.
The declination symbol δ (lower case "delta", abbreviated dec) measures the angular distance of an object perpendicular to the celestial equator, positive to the north, negative to the south. For example, the north celestial pole has a declination of +90°. The origin for declination is the celestial equator, which is the projection of the Earth's equator onto the celestial sphere. Declination is analogous to terrestrial latitude.
The right ascension symbol α (lower case "alpha", abbreviated RA) measures the angular distance of an object eastward along the celestial equator from the vernal equinox to the hour circle passing through the object. The vernal equinox point is one of the two where the ecliptic intersects the celestial equator. Analogous to terrestrial longitude, right ascension is usually measured in sidereal hours, minutes and seconds instead of degrees, a result of the method of measuring right ascensions by timing the passage of objects across the meridian as the Earth rotates. There are 360°/24h = 15° in one hour of right ascension, and 24h of right ascension around the entire celestial equator.
When used together, right ascension and declination are usually abbreviated RA/Dec.
Alternatively to right ascension, hour angle (abbreviated HA or LHA, local hour angle), a left-handed system, measures the angular distance of an object westward along the celestial equator from the observer's meridian to the hour circle passing through the object. Unlike right ascension, hour angle is always increasing with the rotation of Earth. Hour angle may be considered a means of measuring the time since upper culmination, the moment when an object contacts the meridian overhead.
A culminating star on the observer's meridian is said to have a zero hour angle (0h). One sidereal hour (approximately 0.9973 solar hours) later, Earth's rotation will carry the star to the west of the meridian, and its hour angle will be 1h. When calculating topocentric phenomena, right ascension may be converted into hour angle as an intermediate step.
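The RA-to-hour-angle conversion described above is a simple modular subtraction. A minimal sketch (function names are my own; the local sidereal time is assumed to be already known):

```python
def hour_angle(lst_hours, ra_hours):
    """Local hour angle in sidereal hours, in [0, 24).

    HA = LST - RA: it is 0h when the object culminates on the observer's
    meridian and grows as Earth's rotation carries the object westward.
    """
    return (lst_hours - ra_hours) % 24.0

# A star with RA = 6h 30m culminates when LST = 6.5h:
print(hour_angle(6.5, 6.5))   # 0.0 (on the meridian)
print(hour_angle(7.5, 6.5))   # 1.0 (one sidereal hour past culmination)
```

One sidereal hour before culmination the same call gives 23.0, reflecting the left-handed, always-increasing nature of hour angle.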
There are a number of rectangular variants of equatorial coordinates. All have:
The reference frames do not rotate with the Earth (in contrast to Earth-centered, Earth-fixed frames), remaining always directed toward the equinox, and drifting over time with the motions of precession and nutation.
|Geocentric||ξ, η, ζ||x, y, z||X, Y, Z (Sun)|
|Heliocentric||x, y, z|
This frame is in every way equivalent to the ξ, η, ζ frame, above, except that the origin is removed to the center of the Sun. It is commonly used in planetary orbit calculation. The three astronomical rectangular coordinate systems are related by ξ = x + X, η = y + Y, ζ = z + Z.
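A minimal sketch of the spherical-to-rectangular conversion and of the translation between the heliocentric and geocentric frames. The function names are my own; the geocentric rectangular position of the Sun (X, Y, Z) is taken as given:

```python
import math

def radec_to_unit_vector(ra_hours, dec_deg):
    """Equatorial rectangular unit vector from RA (hours) and Dec (degrees)."""
    ra = math.radians(ra_hours * 15.0)   # 15 degrees per hour of RA
    dec = math.radians(dec_deg)
    return (math.cos(dec) * math.cos(ra),
            math.cos(dec) * math.sin(ra),
            math.sin(dec))

def heliocentric_to_geocentric(xyz, sun_xyz):
    """xi = x + X, eta = y + Y, zeta = z + Z: shift the origin from the
    Sun to the Earth, given the geocentric position of the Sun."""
    return tuple(c + s for c, s in zip(xyz, sun_xyz))
```

For example, the vernal equinox direction (RA = 0h, Dec = 0°) maps to (1, 0, 0), and the north celestial pole (Dec = +90°) to (0, 0, 1).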
Carbon-14, however, is a radioactive isotope, the other two naturally occurring carbon isotopes (carbon-12 and carbon-13) being stable.
Understanding the ages of related fossil species helps scientists piece together the evolutionary history of a group of organisms.
Natural uranium as found in the Earth's crust is a mixture largely of two isotopes: uranium-238 (U-238), accounting for 99.3%, and uranium-235 (U-235), about 0.7%. The isotope U-235 is important because under certain conditions it can readily be split, yielding a lot of energy.
Pioneers: It is often overlooked that throughout the nineteenth century, most of the electrical experimenters, inventors and engineers who made these advances possible had to make their own batteries before they could start their investigations; they did not have the benefit of cheap, off-the-shelf, mass-produced batteries.
As the world was emerging from the Stone Age, Mesopotamians (from modern-day Iraq), who had already been active for hundreds of years in primitive metallurgy, extracting metals such as copper from their ores, led the way into the Bronze Age when artisans in the cities of Ur and Babylon discovered the properties of bronze and began to use it in place of copper in the production of tools, weapons and armour. Bronze is a relatively hard alloy of copper and tin, better suited for the purpose than the much softer copper, enabling improved durability of the weapons and the ability to hold a cutting edge.
The last several decades have witnessed the rapid development of alkaline anion exchange membrane fuel cells (AAEMFCs), which possess a number of advantages over acidic proton exchange membrane fuel cells, such as enhanced electrochemical kinetics of the oxygen reduction reaction and the use of inexpensive non-platinum electrocatalysts, both rendered by the alkaline medium. As an emerging power generation technology, AAEMFCs have seen significant progress in recent years. This review article starts with a general description of the setup of AAEMFCs running on hydrogen and the physical and chemical processes occurring in the multi-layered porous structure. It then introduces the electrocatalytic materials and mechanisms for both hydrogen oxidation and oxygen reduction, including metal-based, metal-oxide-based, and non-metal-based electrocatalysts. In addition, the chemistries of alkaline anion exchange membranes (AAEMs), e.g. polymer backbones and functional groups, are reviewed, as are the effects of pre-treatment, carbonate, and radiation on AAEM performance. The effects of anode and cathode ionomers, structural designs, and water flooding on single-cell performance are explained, and the durability and power output of a single cell are summarized. Afterwards, two innovative system designs, hybrid fuel cells and regenerative fuel cells, are presented, and mathematical modeling of mass transport phenomena in AAEMFCs is highlighted. Finally, challenges and perspectives for the future development of AAEMFCs are discussed.
Progress in Energy and Combustion Science – Elsevier
Published: May 1, 2018
Here we are going to set up Apache Tomcat and Eclipse on Arch Linux for JavaEE development. Before that, let's take a look at what Tomcat and Eclipse are.
Apache Tomcat is a web server (it can handle HTTP requests and responses) created specifically to deploy JSPs and Java Servlets.
Eclipse is an integrated development environment (IDE) used in computer programming, and it is the most widely used Java IDE. It contains a base workspace and an extensible plug-in system for customizing the environment.
Now let's discuss how to install and configure these tools. Continue reading “Set up Apache Tomcat8 and Eclipse on Arch Linux”
Hi guys, today I am going to show you how to add some extra features to our simple HTTP server created using the Python socket module. If you have no idea what that is, please refer to my previous blog post.
In the previous post I described how to send a string as a response. Here I will explain how to send files: .html, .css, .jpg, .png and .js files can all be sent. Again, if you have no idea about this, please read that post first.
First we should identify which file the client requested. Then we can check whether it is available in our container or not. If the file exists we can send it; if not, we should send an error message saying the file does not exist.
Continue reading “How create HTTP-Server using python Socket – Part II”
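To make the steps above concrete, here is a minimal sketch of such a file-serving socket server. The folder name, port and buffer size are arbitrary choices for illustration, not from the original post:

```python
import mimetypes
import os
import socket

DOCROOT = "www"  # assumed folder holding the .html/.css/.js/image files

def build_response(path):
    """Build the raw HTTP response bytes for one requested path."""
    if path.endswith("/"):
        path += "index.html"
    # NOTE: a real server must sanitize the path ("..") before serving!
    full = os.path.join(DOCROOT, path.lstrip("/"))
    if not os.path.isfile(full):
        body = b"<h1>404 Not Found</h1>"
        head = ("HTTP/1.1 404 Not Found\r\n"
                "Content-Type: text/html\r\n"
                f"Content-Length: {len(body)}\r\n\r\n")
        return head.encode() + body
    with open(full, "rb") as f:          # binary mode so images work too
        body = f.read()
    ctype = mimetypes.guess_type(full)[0] or "application/octet-stream"
    head = ("HTTP/1.1 200 OK\r\n"
            f"Content-Type: {ctype}\r\n"
            f"Content-Length: {len(body)}\r\n\r\n")
    return head.encode() + body

def serve(host="localhost", port=8080):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen(5)
        while True:
            conn, _ = srv.accept()
            with conn:
                request = conn.recv(1024).decode(errors="ignore")
                parts = request.split()   # e.g. ["GET", "/index.html", ...]
                if len(parts) >= 2 and parts[0] == "GET":
                    conn.sendall(build_response(parts[1]))

# serve()  # uncomment to run, then open http://localhost:8080/ in a browser
```

The `mimetypes.guess_type` call picks the `Content-Type` header from the file extension, which is what lets the browser render .css, .js and image files correctly.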
Hi guys, today I am going to give a very brief introduction to creating a web server using the Python socket module. There are several ways to do that, but here I will explain how to cover the basic functionalities of a server using Python sockets.
The SimpleHTTPServer module has been merged into http.server in Python 3.
- First, identify what a server is.
A server is a computer program that provides services to other computer programs (and their users) in the same or other computers. ( The computer that a server program runs in is also frequently referred to as a server. )
If you want to know more about that, use these links: What is server, How server works
Now I think you understand how a server does its job and why we need one. Then let's start development. First,
Continue reading “How create HTTP-Server using python Socket – Part I” | <urn:uuid:fddb56c1-1c96-4bde-8853-c08243c3d79f> | 3.375 | 485 | Personal Blog | Software Dev. | 60.925312 | 95,611,918 |
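A bare-bones version of the Part I idea (accept a connection, read the request, and send a fixed HTML string back) might look like this; the port and message are placeholder choices:

```python
import socket

def make_response(body_text):
    """Wrap an HTML string in a minimal HTTP/1.1 response."""
    body = body_text.encode()
    header = ("HTTP/1.1 200 OK\r\n"
              "Content-Type: text/html\r\n"
              f"Content-Length: {len(body)}\r\n"
              "\r\n")
    return header.encode() + body

def run(host="localhost", port=8080):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))     # claim the address
        srv.listen(1)              # start accepting connections
        while True:
            conn, addr = srv.accept()
            with conn:
                conn.recv(1024)    # read (and ignore) the browser's request
                conn.sendall(make_response("<h1>Hello from my socket server!</h1>"))

# run()  # uncomment, then open http://localhost:8080 in a browser
```

The `\r\n\r\n` blank line separating headers from body is what makes the browser accept the reply as valid HTTP.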
Study of solar flare induced D-region ionosphere changes using VLF amplitude observations at a low-latitude site
We obtained and analyzed 26 solar flare events, from class C2.56 to X3.2, at Tay Nguyen University, Vietnam (12.56° N, 108.02° E) during May–December 2013 using Very Low Frequency (VLF) remote sensing to understand the response of the low-latitude D-region ionosphere during solar flares. The observed VLF amplitude perturbations are used as input parameters for the LWPC simulation program, using Wait's model of the lower ionosphere, to calculate two Wait's parameters: the reflection height, H', and the sharpness factor, β. Results reveal that as the X-ray irradiance increased, β increased from 0.3 km⁻¹ to 0.506 km⁻¹, while H' decreased from 74 km to 60 km. The electron density at a height of 74 km increases by 1–3 orders of magnitude during solar flares. These phenomena can be explained by the ionization due to X-ray irradiance becoming greater than that due to cosmic rays and Lyman-α radiation, which increases the electron density profile. The changes in the Wait's parameters and the electron density of the D-region ionosphere in our results agree with results reported by other authors. The 3D representation of the electron density changes with altitude and time helps to fully understand the shape of the electron density changes due to X-ray flares. The shape variation of the electron density roughly follows the variation of the amplitude perturbation and keeps this rule at different altitudes. We also found that the electron density versus height in the low-latitude D-region ionosphere increases more rapidly during solar flares.
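The two retrieved parameters map onto an electron density profile through the commonly used Wait–Spies exponential model, N_e(h) = 1.43×10¹³ · exp(−0.15 H′) · exp[(β − 0.15)(h − H′)] m⁻³. A rough sketch follows; the coefficients are those of the standard Wait–Spies formulation, not necessarily the authors' exact implementation:

```python
import math

def wait_electron_density(h_km, h_prime_km, beta_km):
    """Wait-Spies D-region electron density (m^-3) at height h_km,
    for reflection height H' (km) and sharpness beta (km^-1)."""
    return (1.43e13 * math.exp(-0.15 * h_prime_km)
                    * math.exp((beta_km - 0.15) * (h_km - h_prime_km)))

# Quiet-time vs. flare-time density at 74 km, using the ranges quoted above
quiet = wait_electron_density(74, h_prime_km=74, beta_km=0.3)
flare = wait_electron_density(74, h_prime_km=60, beta_km=0.506)
print(f"enhancement factor at 74 km: {flare / quiet:.0f}")  # about three orders of magnitude
```

The roughly thousand-fold enhancement this gives at 74 km is consistent with the 1–3 orders of magnitude reported in the abstract.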
The work honey bees do is critical for our ecosystems, but it comes at a high personal cost.
The Romans may not have had a symbol for zero, but bees understand what it means beyond just the simple assumption "there's nothing there".
Many fruits, nuts and other crops rely on bees to pollinate their flowers at just the right time of year. Many farmers rent bees to get the job done at pollination time.
Honeybees receive a lot of attention, but the first North American bee to be listed as an endangered species is a wild bumble bee. Wild bees are vital pollinators, and some are declining rapidly.
Meta-analysis studies have made it possible to sort through apparently contradictory research by looking at the bigger picture.
Rather than trying to out-compete each other, flowers may work together to attract bees en masse. It's the sort of approach that is effective in the world of advertising too.
Honeybees are responsible for only a third of crop pollination in Britain.
Hoverflies are helping spread disease among the already declining bee population.
Australian bees have so far avoided the 'colony collapse' devastating hives around the world, but there's growing pressure for a ban on certain insecticides blamed for bee deaths.
Pollinating bees are among the 'natural assets' that have a greater – though less visible – impact than plastic waste on the environment.
To learn about how humans, animals and insects experience vision illusions, we had to find a way to ask bees what they saw.
While bee sting deaths are rare, bees cause more hospitalisations than any venomous creature.
Pollination in South Africa's ecosystems is extremely complex. However new advances such as pollen metabarcoding help us understand interactions between pollinators and pollen.
Elephants have the highest count of olfactory receptor genes of any species tested to date. This suggests that they may be the best smellers in the animal kingdom.
New research shows bees see a blue halo around flowers thanks to nanostructures on its petals.
Urban bees deal with what's known as "habitat patches," discontinuous patches of green like gardens, parks and ravines. Green roofs could offer relief to bees dealing with habitat fragmentation.
New research shows stingless bees will assassinate their queen if she makes the wrong royal match.
Bees sting other animals, including humans, when they think there might be a threat to their hive. But Evie, age 8, wonders if bees ever accidentally sting other bees.
Bees and home security cameras use the same complex techniques to monitor their environments.
Exposure to neonicotinoids could lead to fewer bumblebee colonies, less pollination, and ultimately to population extinctions. | <urn:uuid:fe2f08e8-e939-495b-ab28-fc8064cc8e2c> | 3.5625 | 565 | Content Listing | Science & Tech. | 42.752136 | 95,611,932 |
Most of the thousands of exploding stars classified as type Ia supernovae look similar, which is why astrophysicists use them as accurate cosmic distance indicators. They have shown that the expansion of the universe is accelerating under the influence of an unknown force now called dark energy; yet approximately 20 type Ia supernovae look peculiar.
"They're all a little bit odd," said George Jordan, a research scientist at the University of Chicago's Flash Center for Computational Science. Comparing odd type Ia supernovae to normal ones may permit astrophysicists to more precisely define the nature of dark energy, he noted.
Jordan and three colleagues, including his chief collaborator on the project, Hagai Perets, assistant professor of physics at Technion – Israel Institute of Technology, have found that the peculiar type Ia supernovae are probably white dwarf stars that failed to detonate. "They ignite an ordinary flame and they burn, but that isn't followed by a triggering of a detonation wave that goes through the star," Jordan said. These findings were based on simulations that consumed approximately two million central processing unit hours on Intrepid, the Blue Gene/P supercomputer at Argonne National Laboratory. Full details of the simulations will appear in the Astrophysical Journal Letters.
The triggering of a detonation wave is exactly what happens in normal type Ia supernovae, which incinerate white dwarfs, stars that have shrunk to Earth size after having burned most or all of their nuclear fuel. Most or all white dwarfs occur in binary systems, those that consist of two stars orbiting one another.
Faint, hard to detect
Peculiar type Ia supernovae are anywhere from 10 to 100 times fainter than normal ones, which are brighter and therefore more easily detected. Astrophysicists have estimated that they may account for approximately 15 percent of all type Ia supernovae.
The first in this class of exceptionally dim supernovae was discovered in 2002, noted Robert Fisher, assistant professor of physics at the University of Massachusetts Dartmouth, a co-author of the paper. Called SN 2002cx, it is considered the most peculiar type Ia supernova ever observed.
The dimmest of the lot, however, was discovered in 2008. "If the brightness of a standard supernova could be thought of as a single 60-watt light bulb, the brightness of this 2008 supernova would be equivalent to a small fraction of a single candle or a few dozen fireflies," Fisher noted.
Flash Center scientists have been successfully simulating type Ia supernova explosions following the gravitationally confined detonation (GCD) scenario for years. In this scenario, the white dwarf begins to burn near its center. This ignition point burns outward, floating toward the surface like a bubble. After it breaks the surface, a cascade of hot ash flows around the star and collides with itself on the opposite end, triggering a detonation.
"We took the normal GCD scenario and asked what would happen if we pushed this to the limits and see what happens when it breaks," Jordan said. In the failed detonation scenario, the white dwarf experiences more ignition points that are closer to the core, which fuels more burning than in the detonation scenario.
"The extra burning causes the star to expand more, preventing it from achieving temperatures and pressures high enough to trigger detonation," noted co-author Daniel van Rossum of UChicago's Flash Center.
No incinerated star
Instead of detonating, the white dwarf remains intact, though some of the star's mass burns up and gets ejected from its surface. This failed detonation scenario looks quite similar to the peculiar type Ia explosions. The simulations resulted in phenomena that astronomers now can look for or have already found in their telescopic observations.
These phenomena include white dwarfs that display unusual compositions, asymmetric surface characteristics and a kick that sends the stars flying off at speeds of hundreds of miles per second. "This was a completely new discovery," Perets said. "No one had ever suggested that white dwarfs could be kicked at such velocities."
Normal type Ia supernovae display a relatively uniform appearance, but the asymmetric characteristics of their peculiar cousins mean that the latter will often look much different from one another, depending on their viewing angle from Earth.
The asymmetric explosion also produces the kick, which is possibly powerful enough to release the white dwarf from the gravitational hold of any binary companion it may have had. This can produce a peculiar type of hyper-velocity white dwarf, the fastest of which might even escape the galaxy.
Smaller kicks might leave the binary system intact, but also push the white dwarf into a tight and highly elliptical orbit around its companion. Most white dwarfs orbiting close to their companions display a more circular orbit.
Typical white dwarfs have compositions of carbon and oxygen, yet some of the simulated ones that failed to detonate displayed heavy elements such as calcium, titanium and iron. When the detonation fails to happen, much of the ejected mass falls back onto the surface of the white dwarf, where the heavy elements become synthesized.
"I had never heard of such strange white dwarfs," Perets said. But when he conducted a literature search, he found reports of white dwarfs with properties that an irregular composition could explain. "It is quite rare that a new model brings about so many novel predictions, and potentially solves several distinct, seemingly unrelated puzzles."
Citation: "Failed-detonation supernovae: sub-luminous low-velocity IA supernovae and their kicked remnant white dwarfs with iron-rich cores," by George C. Jordan IV, Hagai B. Perets, Robert T. Fisher, and Daniel R. van Rossum," Astrophysical Journal Letters.
Funding: U.S. Department of Energy, National Science Foundation, Harvard-Smithsonian Center for Astrophysics and the Israel Science Foundation.
Steve Koppes | EurekAlert!
One of the impacts of fossil fuels is an increase in acid deposition (or acid rain as many people refer to it). The following exercise asks you to calculate how much lime (an alkaline rock, not the tasty green fruit) it would take to increase the pH of a small lake.
Consider a small lake in the Adirondack region of New York state; Lake Whatchamacallit (surface area = 4 mi2, average depth = 42 ft). The pH of Lake Whatchamacallit has been measured to be 4.0 (an unfortunate result of acid deposition), which is just a little too acidic to support aquatic life; i.e. Lake Whatchamacallit is for all intents and purposes, dead.
Farmer Jones, whose property adjoins the lake, knows that when the soil on her farm is too acidic she adds lime (crushed limestone i.e. calcium carbonate, CaCO3) to it in order to increase the pH of the soil and make it suitable for planting. She gathers all of her neighbors for a meeting with the state's department of natural resources. She proposes that they lime the lake in order to increase its pH and then to restock the lake with new fish.
How many tons of lime would have to be added to Lake Whatchamacallit in order to raise the pH from 4.0 to 7.0? Use the following facts to help you determine your answer.
- 1 oz of lime will raise the pH of 5,700 liters of lake-water from 4.0 to 7.0
- the cost of lime is about 5.6 per pound
- when lime dissolves in water heat is given off, such that when 100g dissolves in water it gives off enough heat to increase the temperature of 3 liters of water by 1 degree C
- 1 mi = 5280 ft; 16 oz = 1 lb; 2000 lb = 1 ton; 1 ft3 lake-water = 28.3 liters
Hint: Start by finding the volume of the lake.
Discuss the implications of your results.
surface area of lake (A) = 4 mi2 = 4*5280^2 = 1.115136*10^8 ft^2
depth of the lake (h) = 42 ft
hence, the volume of the lake:
V = A*h = 1.115136*10^8 * 42 = 4.6835712*10^9 ft^3
because 1 ft^3 = 28.3 liters,
V = 4.6835712*10^9 * 28.3 = 1.3254506496*10^11 liters
density of water = 1 kg/lit
because, to increase the pH value of 5700 lit of lake ...
This solution analyses lime content in a lake by looking at its surface area, depth and density as well as the implications of the pH findings.
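To tie the steps of the solution together, the whole conversion chain can be scripted. This is a sketch using only the unit factors given in the problem statement; the variable names and rounding are my own.

```python
# Lake-liming estimate using only the conversion factors given above.
MI_TO_FT = 5280          # 1 mi = 5280 ft
FT3_TO_L = 28.3          # 1 ft^3 of lake-water = 28.3 liters
OZ_PER_LB = 16
LB_PER_TON = 2000
L_PER_OZ = 5700          # 1 oz of lime raises 5,700 L from pH 4.0 to 7.0

area_ft2 = 4 * MI_TO_FT ** 2            # surface area: 4 mi^2
volume_L = area_ft2 * 42 * FT3_TO_L     # average depth: 42 ft
tons = volume_L / L_PER_OZ / OZ_PER_LB / LB_PER_TON
print(f"{tons:.0f} tons of lime")       # -> 727 tons of lime
```

At roughly 727 tons, the cost of the lime and the (tiny, given the lake's enormous volume) heat released on dissolution are the natural points for the discussion of implications.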
CO2, Global Warming and Coral Reefs - Prospects for the Future
The hydrocarbon energy industry is attacked from at least four flanks:
1. Global Warming
3. Species endangerment
4. Ocean acidification and destruction of Coral Reefs
SPPI has produced a major paper addressing the alarms and true science on #4 above.
The Summary for Policy Makers follows:
Summary for Policy Makers
One of the long-recognized potential consequences of the ongoing rise in the air’s CO2 content is CO2-induced global warming, which has been predicted to pose a number of problems for both natural and managed ecosystems in the years ahead. Of newer concern, in this regard, are the effects that the ongoing rise in the air’s CO2 content may have on coral reefs. It has been suggested, for example, that CO2-induced global warming will do great damage to corals by magnifying the intensity, frequency, and duration of a number of environmental stresses to which they are exposed. The predicted consequences of such phenomena include ever more cases of coral disease, bleaching, and death.
Increases in the atmosphere's CO2 content have also been postulated to possess the potential to harm coral reefs directly. By inducing changes in ocean water chemistry that can lead to reductions in the calcium carbonate saturation state of seawater, it has been predicted that elevated levels of atmospheric CO2 may reduce rates of coral calcification, possibly leading to slower-growing – and, therefore, weaker – coral skeletons, and in some cases, death.
Because of these many concerns, and the logical desire of individuals and governments to do something about what they perceive to be bona fide threats to the well-being of the biosphere, it is important to have a correct understanding of the scientific basis for the potential problems that have been predicted. Hence, in the following pages we review the scientific literature on CO2, global warming and coral reefs, in an effort to determine if the ongoing rise in the air’s CO2 content does indeed pose a threat to these incomparable underwater ecosystems. The key findings of this review are as follows:
· There is no simple linkage between high temperatures and coral bleaching.
· As living entities, corals are not only acted upon by the various elements of their environment, they also react or respond to them. And when changes in environmental factors pose a challenge to their continued existence, they sometimes take major defensive or adaptive actions to insure their survival.
· A particularly ingenious way by which almost any adaptive response to any type of environmental stress may be enhanced in the face of the occurrence of that stress would be to replace the zooxanthellae expelled by the coral host during a stress-induced bleaching episode by one or more varieties of zooxanthellae that are more tolerant of the stress that caused the bleaching.
· The persistence of coral reefs through geologic time – when temperatures were as much as 10-15°C warmer than at present, and atmospheric CO2 concentrations were 2 to 7 times higher than they are currently – provides substantive evidence that these marine entities can successfully adapt to a dramatically changing global environment. Thus, the recent die-off of many corals cannot be due solely, or even mostly, to global warming or the modest rise in atmospheric CO2 concentration over the course of the Industrial Revolution.
· The 18- to 59-cm warming-induced sea level rise that is predicted for the coming century by the IPCC – which could be greatly exaggerated if predictions of CO2-induced global warming are wrong – falls well within the range (2 to 6 mm per year) of typical coral vertical extension rates, which exhibited a modal value of 7 to 8 mm per year during the Holocene and can be more than double that value in certain branching corals. Rising sea levels should therefore present no difficulties for coral reefs. In fact, rising sea levels may actually have a positive effect on reefs, permitting increased coral growth in areas that have already reached the upward limit imposed by current sea levels.
· The rising CO2 content of the atmosphere may induce changes in ocean chemistry (pH) that could slightly reduce coral calcification rates; but potential positive effects of hydrospheric CO2 enrichment may more than compensate for this modest negative phenomenon.
· Theoretical predictions indicate that coral calcification rates should decline as a result of increasing atmospheric CO2 concentrations by as much as 40% by 2100. However, real-world observations indicate that elevated CO2 and elevated temperatures are having just the opposite effect.
In light of the above observations, and in conjunction with all of the material presented in this review, it is clear that climate-alarmist claims of impending marine species extinctions due to increases in both temperature and atmospheric CO2 concentration are not only not supported by real-world evidence, they are actually refuted by it.
President, Science and Public Policy Institute
209 Pennsylvania. Ave., SE # 299
Washington, D.C. 20003
In mathematics, a Tamari lattice, introduced by Dov Tamari (1962), is a partially ordered set in which the elements consist of different ways of grouping a sequence of objects into pairs using parentheses; for instance, for a sequence of four objects abcd, the five possible groupings are ((ab)c)d, (ab)(cd), (a(bc))d, a((bc)d), and a(b(cd)). Each grouping describes a different order in which the objects may be combined by a binary operation; in the Tamari lattice, one grouping is ordered before another if the second grouping may be obtained from the first by only rightward applications of the associative law (xy)z = x(yz). For instance, applying this law with x = a, y = bc, and z = d gives the expansion (a(bc))d = a((bc)d), so in the ordering of the Tamari lattice (a(bc))d ≤ a((bc)d).
In this partial order, any two groupings g1 and g2 have a greatest common predecessor, the meet g1 ∧ g2, and a least common successor, the join g1 ∨ g2. Thus, the Tamari lattice has the structure of a lattice. The Hasse diagram of this lattice is isomorphic to the graph of vertices and edges of an associahedron. The number of elements in a Tamari lattice for a sequence of n + 1 objects is the nth Catalan number.
The Tamari lattice can also be described in several other equivalent ways:
- It is the poset of sequences of n integers a1, ..., an, ordered coordinatewise, such that i ≤ ai ≤ n and if i ≤ j ≤ ai then aj ≤ ai (Huang & Tamari 1972).
- It is the poset of binary trees with n leaves, ordered by tree rotation operations.
- It is the poset of ordered forests, in which one forest is earlier than another in the partial order if, for every j, the jth node in a preorder traversal of the first forest has at least as many descendants as the jth node in a preorder traversal of the second forest (Knuth 2005).
- It is the poset of triangulations of a convex n-gon, ordered by flip operations that substitute one diagonal of the polygon for another.
In The Art of Computer Programming, T4 is called the Tamari lattice of order 4 and its Hasse diagram K5 the associahedron of order 4.
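The first characterization — groupings of a sequence into pairs — is easy to enumerate directly. The sketch below (helper names are my own) generates all parenthesizations by choosing the top-level split point recursively, and confirms that their number is a Catalan number:

```python
from math import comb

def groupings(objs):
    """All full parenthesizations of a sequence, e.g. '((ab)(cd))'."""
    if len(objs) == 1:
        return [objs[0]]
    out = []
    for i in range(1, len(objs)):          # split point of the top-level product
        for left in groupings(objs[:i]):
            for right in groupings(objs[i:]):
                out.append(f"({left}{right})")
    return out

def catalan(n):
    # nth Catalan number: C(2n, n) / (n + 1)
    return comb(2 * n, n) // (n + 1)

print(groupings(list("abcd")))             # the five groupings of the text (outermost parentheses shown)
print(len(groupings(list("abcde"))), catalan(4))   # 14 14
```

Each grouping corresponds to an element of the Tamari lattice; ordering them by rightward applications of the associative law would then give the covering relation.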
- Chapoton, F. (2005), "Sur le nombre d'intervalles dans les treillis de Tamari", Séminaire Lotharingien de Combinatoire (in French), 55 (55): 2368, arXiv: , Bibcode:2006math......2368C, MR 2264942.
- Csar, Sebastian A.; Sengupta, Rik; Suksompong, Warut (2014), "On a Subposet of the Tamari Lattice", Order, 31 (3): 337–363, arXiv: , doi:10.1007/s11083-013-9305-5, MR 3265974.
- Early, Edward (2004), "Chain lengths in the Tamari lattice", Annals of Combinatorics, 8 (1): 37–43, doi:10.1007/s00026-004-0203-9, MR 2061375.
- Friedman, Haya; Tamari, Dov (1967), "Problèmes d'associativité: Une structure de treillis finis induite par une loi demi-associative", Journal of Combinatorial Theory (in French), 2 (3): 215–242, doi:10.1016/S0021-9800(67)80024-3, MR 0238984.
- Geyer, Winfried (1994), "On Tamari lattices", Discrete Mathematics, 133 (1–3): 99–122, doi:10.1016/0012-365X(94)90019-1, MR 1298967.
- Huang, Samuel; Tamari, Dov (1972), "Problems of associativity: A simple proof for the lattice property of systems ordered by a semi-associative law", Journal of Combinatorial Theory, Series A, 13: 7–13, doi:10.1016/0097-3165(72)90003-9, MR 0306064.
- Knuth, Donald E. (2005), "Draft of Section 7.2.1.6: Generating All Trees", The Art of Computer Programming, IV, p. 34.
- Tamari, Dov (1962), "The algebra of bracketings and their enumeration", Nieuw Archief voor Wiskunde, Ser. 3, 10: 131–146, MR 0146227.
Observing the image of a faint object that lies close to a star is a demanding task as the object is generally hidden in the glare of the star. Characterising this object, by taking spectra, is an even harder challenge. Still, thanks to ingenious scientists and a new ESO imaging spectrograph, this is now feasible, paving the way to an eldorado of many new thrilling discoveries.
These very high contrast observations are fundamental for directly imaging unknown extra-solar planets (i.e. planets orbiting a star other than the Sun), as well as low-mass stars and brown dwarfs, those failed stars that are too small to start burning hydrogen into helium.
Astronomer Niranjan Thatte and his colleagues developed a new method for exactly this purpose. The basis of the concept is relatively simple: while the positions of most of the features associated with the host star and artefacts produced by the telescope and the instrument scale with the wavelength, the location of a faint companion does not. So if the image has an internal reflection of the star masquerading as a planet, this phantom planet will be in one location in the image when looking in red light, and another when looking in blue; a real planet will stay at the same place no matter what colour of light one examines. Therefore, with the combined detection of spectra and position, one can see what is scaling, subtract it, and be left with what is fixed, that is the target dim object. Such observations can be done with specific instruments, called 'integral field spectrographs', such as the SINFONI instrument on ESO's VLT. This technique, termed Spectral Deconvolution (SD), although first proposed in 2002 for space-based applications, has never been applied to obtain spectra of a real object until now.
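The principle can be demonstrated with a one-dimensional toy model (an illustration of the idea only, not the SINFONI pipeline; all numbers here are invented): a "speckle" whose position and width magnify with wavelength, plus a faint companion at a fixed position. Rescaling each slice to a reference wavelength aligns the stellar pattern, whose median can then be subtracted:

```python
import numpy as np

x = np.arange(200.0)                       # 1-D spatial axis (pixels)
wavelengths = np.linspace(1.0, 2.0, 50)    # arbitrary units
ref = wavelengths[0]

def gauss(x0, width, amp=1.0):
    return amp * np.exp(-0.5 * ((x - x0) / width) ** 2)

# Stellar speckle magnifies with wavelength; the companion does not move.
cube = np.array([
    gauss(60 * lam / ref, 3 * lam / ref) + gauss(150.0, 3.0, amp=0.05)
    for lam in wavelengths
])

# 1. Rescale every slice to the reference wavelength: speckles line up.
aligned = np.array([
    np.interp(x * lam / ref, x, f) for f, lam in zip(cube, wavelengths)
])
# 2. The median over wavelength is the (now static) stellar pattern.
stellar = np.median(aligned, axis=0)
# 3. Subtract it and map each slice back to its original scale.
residual = np.array([
    np.interp(x * ref / lam, x, row - stellar)
    for row, lam in zip(aligned, wavelengths)
])
recovered = residual.mean(axis=0)
print(int(np.argmax(recovered)))           # companion recovered at x = 150
```

In the aligned frames it is the companion, not the speckle, that drifts with wavelength, so the median rejects it; the subtraction then leaves the companion's signal behind at its fixed position — the same logic the spectral deconvolution technique applies to real integral-field data cubes.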
"We applied our new technique to a puzzling very small stellar companion - about twice the size of Jupiter - known as AB Doradus C and the outcome was surprising, "says Thatte.
Using SINFONI and this new technique, the astronomers could for the first time obtain a spectrum of the object that is free from the light of the brighter companion and that contains all the information necessary for a complete classification.
The new observations lead to a new temperature for the object and change the results that some of the same scientists derived in 2005 (ESO PR 02/05).
"This is how science progresses," says Laird Close, leader of the science team. "New instruments lead to better techniques and measurements, which often lead to new results, and one must happily change course."
The SINFONI observations were complemented with previous data obtained on ESO's VLT with the NACO instrument, which were stored in the ESO archive.
AB Doradus is a system of 2 pairs of stars (four stars in total: a quadruple system), lying 48 light-years away towards the Doradus constellation (the Swordfish).
AB Doradus A is the young major member of this system and has a faint companion, AB Dor C, just 3 astronomical units (AU) away, or three times the distance between the Earth and the Sun. In our Solar System, this would be within the asteroid belt between the orbits of Mars and Jupiter.
AB Dor C was imaged for the first time, thanks to ESO's VLT, in 2005 (ESO 02/05). The other members of the system are the pair AB Doradus BaBb (also first imaged in the previous work of 2005) located 133 AU from AB Dor A. While AB Doradus A has a mass about 85 % that of the Sun, AB Doradus C is almost 10 times less massive than AB Doradus A and belongs to the category of cool red dwarfs.
Red dwarfs are extremely interesting because their mass is at the border with that of brown dwarfs. A precise knowledge of these stars is therefore a necessary tile in our understanding of the evolution of stars. If AB Doradus C were only slightly less massive than its 93 Jupiter-mass, it would have failed to become a star, being instead a brown dwarf. As it is, the centre of AB Doradus C is slowly heating up, and in about a billion years its core will become hot enough to begin fusing hydrogen into helium, something a brown dwarf will never do.
"This red dwarf is 100 million times closer to its brighter companion than the whole system is from us and about 100 times less bright. It is thus a perfect example where our very high contrast technique is required," says team member Matthias Tecza.
From the previous observations this unique star seemed to be cooler than expected for an object of such a mass and age. The new, more precise observations show that this is not the case, as the observations are in good agreement with theory, in particular with the models developed by the group of Gilles Chabrier from Lyon, France.
With a temperature of about 3 000 degrees (about half as hot as the Sun) and a luminosity about one thousand times dimmer than the Sun, AB Doradus C lies on the exact track expected for a 75 million year old star with 9% the Sun's mass. AB Doradus C is the only such star (young and cool) with an accurate mass, hence the determination of an accurate temperature is critical for validating these models.
In the future one can thus use these tracks to extrapolate the mass of small young stars, once its temperature and luminosity are precisely determined.
"Small stars are back on the expected track," concludes team member Roberto Abuter.
Henri Boffin | alfa
Development of the Chemical Composition of Sand
The chemical compositions of the single types of source rocks and of the sand fraction (63–500 μm) of the soils, the effective compositions of the source rocks and soils, and the compositions of the river-mouth sediments and of the longshore bar are compared. Gain-and-loss and constant-Al2O3 calculations reveal that the composition of the river-mouth sands comes closer to the source rocks than to the soils. Attrition during fluvial transport is the most important source of sand at this active plate margin, chemical weathering and soil formation being less effective. The chemical composition of the average Calabrian river-mouth sand fits almost perfectly that of active-margin sandstones; single rivers, however, deviate greatly from this mean, and it may be hazardous to draw conclusions from only one or even a limited number of rivers.
Keywords: Source Rock, Passive Margin, Relative Gain, Chemical Weathering, Active Margin
Cellular Contamination Pathway for Heavy Elements Identified
News Sep 01, 2015
Scientists at Lawrence Berkeley National Laboratory (Berkeley Lab) have reported a major advance in understanding the biological chemistry of radioactive metals, opening up new avenues of research into strategies for remedial action in the event of possible human exposure to nuclear contaminants.
Research led by Berkeley Lab’s Rebecca Abergel, working with the Fred Hutchinson Cancer Research Center in Seattle, has found that plutonium, americium, and other actinides can be transported into cells by an antibacterial protein called siderocalin, which is normally involved in sequestering iron.
Their results were published online recently in a paper titled, “Siderocalin-mediated recognition, sensitization, and cellular uptake of actinides.” The paper contains several other findings and achievements, including characterization of the first ever protein structures containing transuranic elements and how use of the protein can sensitize the metal’s luminescence, which could lead to potential medical and industrial applications.
Abergel’s group has already developed a compound to sequester actinides and expel them from the body. They have put it in a pill form that can be taken orally, a necessity in the event of radiation exposure amongst a large population. Last year the FDA approved a clinical trial to test the safety of the drug, and they are seeking funding for the tests.
However, a basic understanding of how actinides act in the body was still not well known. “Although [actinides] are known to rapidly circulate and deposit into major organs such as bone, liver, or kidney after contamination, the specific molecular mechanisms associated with mammalian uptake of these toxic heavy elements remain largely unexplored,” Abergel and her co-authors wrote.
The current research described in PNAS identifies a new pathway for the intracellular delivery of the radioactive toxic metal ions, and thus a possible new target for treatment strategies. The scientists used cultured kidney cells to demonstrate the role of siderocalin in facilitating the uptake of the metal ions in cells.
“We showed that this protein is capable of transporting plutonium inside cells,” she said. “So this could help us develop other strategies to counteract actinide exposure. Instead of binding and expelling radionuclides from the body, we could maybe block the uptake.”
The team used crystallography to characterize siderocalin-transuranic actinide complexes, gaining unprecedented insights into the biological coordination of heavy radioelements. The work was performed at the Advanced Light Source (ALS), a Department of Energy synchrotron located at Berkeley Lab.
“These are the first protein structures containing thorium or the transuranic elements plutonium, americium, or curium,” Abergel said. “Until this work there was no structure in the Protein Data Bank that had those elements. That’s an exciting thing for us.”
The researchers also made the unexpected finding that siderocalin can act as a “synergistic antenna” that sensitizes the luminescence of actinides and lanthanides. “We showed that by adding the protein we enhance the sensitization pathways, making it much brighter,” Abergel said. “That is a new mechanism that hasn’t been explored yet and could be very useful; it could have applications down the line for diagnostics and bioimaging.”
Abergel notes that a study like this would have been possible in very few other places. “Very few people have the capabilities to combine the different approaches and techniques—the spectroscopy techniques at the ALS, handling of heavy elements that are radioactive, plus the chemical and biological tools we have onsite,” she said. “The combination of all those techniques here is very unique.”
All air movements have their roots in pressure differentials in the atmosphere, called pressure gradients. Systematic differences in the Earth's land temperature affect air pressure, and significant patterns of pressure that persist over time are called pressure belts, or wind belts. Wind belts depend on temperature, so temperature changes can move the belts and also change wind patterns.
The heat from the sun is strongest at the equator, where solar rays are more intense. This means that the land and ocean surface near the equator tends to be warmer than elsewhere. Other factors lead to differences in surface temperature, such as the geography of the land, and oceans tend to be cooler and more stable in temperature than land. The end result is that there are large, systematic imbalances in surface temperatures on Earth in addition to smaller, local ones.
Surface temperatures affect the temperature of the air above them. Because hotter air is less dense, it tends to rise, while the reverse is true for cool air: it is more dense and tends to sink. Rising warm air creates low pressure, and sinking cool air creates high pressure. The difference in pressure between any two points in the atmosphere is called the pressure gradient. Because air moves from high pressure to low pressure, pressure gradients create wind by inducing air movements from high to low pressure.
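As a rough numerical illustration (the numbers below are invented for the example, not taken from the article), the acceleration a parcel of air feels is simply the pressure gradient divided by the air density, which is why a pressure difference between two points drives wind from high to low pressure:

```python
# Horizontal acceleration of air due to a pressure gradient:
#   a = (1/rho) * dP/dx
# directed from high pressure toward low pressure.

def pressure_gradient_accel(p_high, p_low, distance_m, rho=1.2):
    """Acceleration (m/s^2) of air between two points, given the
    pressure difference (Pa), their separation (m), and the air
    density (kg/m^3, roughly 1.2 near the surface)."""
    dp_dx = (p_high - p_low) / distance_m
    return dp_dx / rho

# A 4 hPa (400 Pa) drop over 100 km, a modest large-scale gradient:
a = pressure_gradient_accel(101_600, 101_200, 100_000)
print(f"{a * 1000:.3f} mm/s^2")  # 3.333 mm/s^2
```

The acceleration looks tiny, but because it acts continuously on the air it is enough to sustain large-scale winds.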
Some air movements are the result of the systematic pressure gradients that arise from latitudinal changes in the Earth's surface temperature. One notable example is the Hadley Cell, a movement of warm air from the tropics that rises, flows toward the poles and then cools and sinks at around 30 degrees north and south of the equator. This movement creates belts of low pressure in the tropics and high pressure in the temperate zone where the air sinks.
Because both small winds and larger pressure belts are driven by temperature differentials, changes in temperature at the surface can alter them. For example, El Niño-Southern Oscillation (ENSO) events, such as El Niño and La Niña, involve unseasonal changes in ocean temperature that can magnify or diminish the strength of wind belts across the globe. Similarly, when centers of low or high pressure move through an area, they can alter the flow of local wind and even create storms. Tropical cyclones arise from low-pressure zones in the tropics, and their powerful winds are some of the strongest on the planet.
SCANNING ELECTRON MICROSCOPY (SEM)
ENERGY DISPERSIVE SPECTROSCOPY (EDS)
Scanning Electron Microscopy (SEM) Energy Dispersive Spectroscopy (EDS) is a combined analytical methodology used to visually capture and represent a detailed image of the substance while qualitatively identifying the elemental make-up of the sample. Some of the information gathered by using SEM-EDS includes qualitative and semi-quantitative chemical analysis, external topography/morphology, chemical composition, crystalline structure and 2-D/3-D image generation.
SEM-EDS instrumentation consists primarily of:
- Electron Source
- Electron Lenses
- Sample Stage
- Solid-state Detector
- Display Monitor
The SEM can provide magnification up to 500,000 times, about 250 times better than the best standard light microscope. This technology uses electron displacement to produce signals related to the topography of the material and converts these into an image. From pollen grains to blood cells, this instrumentation is able to capture a precise picture of a sample and, combined with EDS, can be used to evaluate the exact composition of that sample.
SEM of pollen grains magnified x500 from the Dartmouth Electron Microscope Facility
EDS Spectral Graph
The EDS works by detecting x-rays characteristic of different elements and arranging them in an energy spectrum, which EDS software then analyzes to identify and semi-quantify the composition. This is effective for sample sizes as small as a few microns. The x-rays themselves are difficult to detect because their wavelengths are only about 0.01 to 100 angstroms (one angstrom is one ten-billionth of a meter, or 10^-10 m).
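To make the element-identification step concrete, here is a minimal sketch of how software can match detected peaks against tabulated characteristic lines. The K-alpha energies below are standard reference values; the peak list, the tiny element table, and the matching tolerance are illustrative assumptions, not part of any real EDS package:

```python
# Tabulated K-alpha characteristic x-ray energies (keV), a small
# subset for illustration.
K_ALPHA_KEV = {
    "O": 0.525, "Al": 1.487, "Si": 1.740, "Ti": 4.511,
    "Fe": 6.404, "Ni": 7.478, "Cu": 8.048,
}

def identify_peaks(peak_energies_kev, tol=0.05):
    """For each detected peak, return the element whose K-alpha line
    lies within `tol` keV of it, or None if nothing matches."""
    matches = []
    for e in peak_energies_kev:
        best = min(K_ALPHA_KEV, key=lambda el: abs(K_ALPHA_KEV[el] - e))
        matches.append(best if abs(K_ALPHA_KEV[best] - e) <= tol else None)
    return matches

print(identify_peaks([6.40, 8.05, 1.74]))  # ['Fe', 'Cu', 'Si']
```

Real software also fits peak shapes and corrects for overlapping lines, but the core idea is this energy-to-element lookup.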
SEM-EDS is a particularly useful combination when studying unknown materials as imaging reveals some primary characteristics and the signal analysis provides other elemental specifics. Routinely this is used to create high-resolution images of items and to show variations in chemical compositions. EDS provides elemental maps of the sample. Applications for this instrumentation are broad but SEM-EDS is commonly used in geological studies as well as elemental analysis of unknown samples, or contamination testing and failure analysis of substances such as adhesives and coatings. This technology’s value is also in that it produces data quickly, with many images taking less than 4 min to complete.
How does SEM-EDS work?
SEM-EDS works by capturing data revealed by various atomic interactions. The SEM electron beam penetrates the sample at the atomic level and causes various electron displacements to occur; each type of interaction emits electrons or x-rays, which are in turn separated and analyzed to produce the image and elemental map. The three types of electrons that SEM-EDS is primarily concerned with are:
Secondary Electrons – low-energy electrons knocked out of sample atoms by inelastic scattering. Only those generated near the surface escape without being absorbed, which is why they provide the topographical information for the 2-D and 3-D images.
Backscattered Electrons – beam electrons deflected back out of the sample by elastic collisions, retaining most of their energy. They are emitted at larger angles than inelastically scattered electrons and carry information about the sample's topography, crystallography and composition.
Auger Electrons – emitted when the beam knocks an electron out of an inner shell, creating a vacancy; as an outer electron fills the vacancy, the released energy can eject a second, element-characteristic electron. These provide elemental information about the near-surface region.
Characteristic x-rays are emitted when an electron moves to a vacancy in a lower electron shell. The released electromagnetic radiation is elementally specific and can thereby be used to characterize the elemental make-up of the sample under scrutiny.
The EDS detector is what collects and separates these characteristic x-rays in order to identify and semi-quantify the elements in the sample. The photons of electromagnetic radiation are detected by the EDS and converted into a voltage signal proportional to the original x-ray energy. Though this sounds simple, the conversion involves three separate steps. The first step is to transform the x-ray energy into a charge through the ionization of atoms in the semiconductor: the semiconductor absorbs the incoming x-ray's energy and becomes a conduit for the resulting charge. After the x-ray is absorbed and this charge is created, the charge must then be converted to a voltage by the field-effect transistor preamplifier. Finally, the electric pulse is processed by the pulse processor, which measures the element-characteristic signals and plots the voltage 'ramp' for an analyst to use in identifying the sample's composition.
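As a toy illustration of the final step, the software's mapping from measured pulse heights (detector channels) to x-ray energies is typically a simple linear calibration. The gain and offset values here are invented for the example; real systems are calibrated against known standards:

```python
# Linear energy calibration: E = gain * channel + offset,
# with gain in keV per channel (values are illustrative).

def channels_to_kev(channels, gain=0.010, offset=0.0):
    """Convert detector channel numbers to x-ray energies in keV."""
    return [round(gain * ch + offset, 3) for ch in channels]

# Three pulse heights landing in channels 640, 805 and 174:
print(channels_to_kev([640, 805, 174]))  # [6.4, 8.05, 1.74]
```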
SEM-EDS is considered to be an extremely valuable analytical methodology as the process is “non-destructive”, allowing for the same sample material to be analyzed repeatedly. In addition to there being no loss of sample volume, the SEM-EDS provides a fairly comprehensive understanding of the material under analysis. This specificity is an invaluable resource to chemical analysis especially as the data provides both image rendering and chemical characterization.
Fast radio bursts: What are these ‘insanely powerful,’ unexplained signals from space?
March has been a landmark month for fast radio bursts, with three of the unexplained signals from outer space picked up by radio telescopes, including the strongest one ever recorded. But what's really behind the mysterious phenomena?
The bizarre blasts can emit as much power as 500 million suns but they last only a few milliseconds and arrive without any warning. This makes them impossible to predict or trace to a source.
They were first noticed in 2007 when the ‘Lorimer burst’ was discovered in archived data from 2001. Since then they’ve been detected from 33 sources in total. In a dramatic uptick, three new bursts were picked up by the Parkes radio telescope in Australia this month. The first came on March 1, the second came just over a week later on March 9 and the third followed quickly after that on March 11.
The episodes are labelled FRB 180301, FRB 180309 and FRB 180311 in accordance with the convention of naming the bursts after the date on which they occurred. FRB 180309 is particularly interesting because it is by far the strongest burst that has been detected to date, with a staggering signal-to-noise ratio of 411. The previous strongest FRB had a signal-to-noise ratio of 90 and many of the bursts had ratios of less than 20.
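The naming convention described above (FRB followed by the two-digit year, month and day of detection) can be sketched as a small helper. The function name is mine, not an official tool:

```python
from datetime import date

def frb_name(d: date) -> str:
    """Label a burst after its UTC detection date: FRB YYMMDD."""
    return f"FRB {d:%y%m%d}"

print(frb_name(date(2018, 3, 9)))   # FRB 180309
print(frb_name(date(2012, 11, 2)))  # FRB 121102
```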
“The burst on 9 March was by far the brightest one we’ve seen,” Professor Maura McLaughlin, from West Virginia University, told the New Scientist.
The signals almost always occur as one-off events, with just a single burst recorded from a given location. However, one signal, FRB 121102, first detected in November 2012, has been observed to repeat. Some researchers believe that all of them repeat and that we simply have to wait long enough to observe it.
Scientists hope such observations will help finally crack what exactly is causing the strange astrophysical phenomena, about which remarkably little is still known. As recently as last April, researchers confirmed that the signals are from outer space – before that some thought that local interference was tricking astronomers.
A widely-held explanation for the fascinating signals remains elusive. Because of their remarkable brightness, some experts believe they must be produced by incredibly powerful events.
“FRBs travel billions of years to get to us, and only last a few milliseconds, suggesting the emission mechanism is short-lived. For us to detect them clearly after such a long journey, they have also to be insanely bright,” Danny Price of SETI explained.
The leading theory suggests that they are caused by cataclysmic events like the collision of very dense objects, such as black holes or neutron stars. However, others have offered even more outlandish explanations. One study even went as far as proposing that they might be powering ‘solar sails’ on a colossal alien spacecraft.
Squid-inspired 'invisibility stickers' could help soldiers evade detection in the dark
Squid are the ultimate camouflage artists, blending almost flawlessly with their backgrounds so that unsuspecting prey can't detect them. Using a protein that's key to this process, scientists have designed "invisibility stickers" that could one day help soldiers disguise themselves, even when sought by enemies with tough-to-fool infrared cameras.
The researchers will present their work today at the 249th National Meeting & Exposition of the American Chemical Society (ACS).
"Soldiers wear uniforms with the familiar green and brown camouflage patterns to blend into foliage during the day, but under low light and at night, they're still vulnerable to infrared detection," explains Alon Gorodetsky, Ph.D. "We've developed stickers for use as a thin, flexible layer of camo with the potential to take on a pattern that will better match the soldiers' infrared reflectance to their background and hide them from active infrared visualization."
To work toward this effect, Gorodetsky of the University of California at Irvine (UCI) turned to squid skin for inspiration. Squid skin features unusual cells known as iridocytes, which contain layers or platelets composed of a protein called reflectin. The animal uses a biochemical cascade to change the thickness of the layers and their spacing. This in turn affects how the cells reflect light and thus, the skin's coloration.
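The article says the skin's coloration follows from the thickness and spacing of the reflectin platelets. As a rough illustration (this first-order thin-film formula and the refractive index and thickness values are my assumptions, not from the article), the strongest reflected wavelength at normal incidence scales with layer thickness, so swelling the layers shifts reflection toward the infrared:

```python
# First-order thin-film reflection at normal incidence:
#   lambda = 2 * n * d / m
# where n is the refractive index, d the layer thickness (nm),
# and m the interference order. Values below are hypothetical.

def peak_wavelength_nm(n_refractive, thickness_nm, order=1):
    """Wavelength (nm) of strongest reflection for one thin layer."""
    return 2.0 * n_refractive * thickness_nm / order

# Swelling a layer from 150 nm to 250 nm (n ~ 1.5, illustrative):
print(peak_wavelength_nm(1.5, 150))  # 450.0 nm (visible blue)
print(peak_wavelength_nm(1.5, 250))  # 750.0 nm (edge of the near-infrared)
```

This is only a cartoon of the physics; the real platelet stacks behave more like multilayer reflectors, but the thickness-to-wavelength trend is the same.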
Gorodetsky's group coaxed bacteria to produce reflectin and then coated a hard substrate with the protein. To induce structural—and light-reflecting—changes just like those of iridocytes, the film needed some kind of trigger. An initial search revealed that acetic acid vapors could cause the film to swell and disappear when viewed with an infrared camera. But these conditions won't work for soldiers in the field.
"What we were doing was the equivalent of bathing the film in acetic acid vapors—essentially exposing it to concentrated vinegar," Gorodetsky says. "That is not practical for real-life use."
Now Gorodetsky has fabricated reflectin films on conformable polymer substrates, effectively sticky tape one might find in any household. This tape can adhere to a range of surfaces including cloth uniforms, and its appearance under an infrared camera can be changed by stretching, a mechanical trigger that might more realistically be used in military operations.
Although the technology isn't ready for field use just yet, he envisions soldiers or security personnel could one day carry in their packs a roll of invisibility stickers that they could cover their uniforms with as needed.
"We're going after something that's inexpensive and completely disposable," he says. "You take out this protein-coated tape, you use it quickly to make an appropriate camouflage pattern on the fly, then you take it off and throw it away."
Gorodetsky says that some major challenges remain. The team will have to figure out how to increase the brightness of the stickers and get multiple stickers to respond in the same way at the same time, as part of an adaptive camouflage system.
He's also working on ways to make the stickers more versatile. The current version reflects near-infrared light. Gorodetsky's team is continuing to tweak the materials, so variants of the stickers could also work at mid- and far-infrared wavelengths. These could have applications for thwarting thermal infrared imaging. They also could have uses outside the military—for example, in clothing that can selectively trap or release body heat to keep people comfortable in different environments.
Moreover, in collaboration with Francesco Tombola, Ph.D., and Lisa Flanagan, Ph.D., from the UCI School of Medicine, Gorodetsky's lab has shown that reflectin supports cell growth. This could have implications for making new types of bioelectronic devices and even growing "living" semi-artificial squid skin.
More information: Infrared invisibility stickers inspired by cephalopods, 249th National Meeting & Exposition of the American Chemical Society (ACS).
The skin structure of cephalopods endows them with remarkable dynamic camouflage capabilities. Consequently, much research effort has focused on understanding and emulating these animals' color changing abilities in the visible region of the electromagnetic spectrum. In contrast, despite the importance of infrared signaling and detection for many industrial and military applications, few studies have attempted to translate the principles underlying cephalopod adaptive coloration to infrared camouflage. We have drawn inspiration from nanostructures implicated in cephalopods' camouflage abilities and developed strategies for the self-assembly of unique cephalopod structural proteins into dynamically tunable biomimetic camouflage coatings on both transparent and flexible substrates. Our substrates can adhere to arbitrary surfaces, and their reflectance can be reversibly modulated from the visible to the near-infrared regions of the electromagnetic spectrum with both chemical and mechanical stimuli. Thus, we can endow common objects with any shape or form factor with tunable camouflage capabilities. Our work represents a key step toward the development of wearable biomimetic color and shapeshifting technologies for stealth applications.
Provided by: American Chemical Society | <urn:uuid:1089e77e-2fb1-4a1f-92de-2644ecfbbeec> | 2.921875 | 1,076 | News Article | Science & Tech. | 25.5935 | 95,612,181 |
By: Philip Howse (Author), Kirby Wolfe (Illustrator)
192 pages, colour photos, colour illustrations
The most spectacular wild silkmoths live in tropical and subtropical forests and include the elegant moon moths with delicate pale green wings and long tails, the huge atlas moths with snake patterns embroidered on the edges of their wings, and the "bulls-eye" moths with brightly-coloured eye-spots that resemble the eyes of owls.
The interplay of wing colour and design, behaviour, and ecology in the evolution of these extraordinary insects is explored in a lively, accessible text by award-winning author Philip Howse accompanied by the magnificent photographs of Kirby Wolfe. Many previously unrecognised examples of mimicry of other animals embedded in their wing patterns are described and illustrated, including images of owl eyes, bird wings, claws, teeth, heads of reptiles, birds, rodents, cats ... all designed to frighten the short-sighted, insect-eating birds that seek to prey on them.
The grandeur and the fascinating natural history of the giant silkmoths, and the manner in which they protect themselves, are described and illustrated in this lavishly produced book in such a way as to enthrall scientists, students, artists and all those interested in wildlife and photography.
Philip Howse has published books and research articles on insect behaviour and ecology. He has developed environmentally-friendly methods for the control of insect pests, recognised by a number of awards including the OBE. After a career spent mainly at Southampton University, he has now retired but continues writing about the insects that have fascinated him since childhood.
Kirby Wolfe is a Research Associate of the Natural History Museum of Los Angeles County, California, and has spent more than 25 years studying and photographing moths.
Structured Types in General — The Array Type in Particular
Simple types (ordinal and real types) are unstructured types. The other types in Pascal are structured types and pointer types. As structured statements are compositions of other statements, structured types are compositions of other types. It is the type(s) of the components and — most importantly — the structuring method that characterize a structured type.
Keywords: Structure Type, Component Type, Array Variable, Entire Array, Real Type
Plumes are fluid motions that are produced by continuous sources of buoyancy. Convective currents set up by heated bodies such as a cigarette and a space heater are examples. The fluid in contact with the body attains a higher temperature than its surrounding fluid, rises as a result of its lower density, and in so doing draws ambient fluid radially inwards to mix with the warm fluid in the plume. Whereas the plume generated in this fashion may be laminar near the body, at some distance above it will break up into eddies by virtue of the momentum it derives from the force of buoyancy. Other examples of plumes include sewage effluent from sea outfalls, the localised high temperature fluid discharge from the earth’s crust at the bottom of the deep ocean (hydrothermal vents), the injection of concentrated brine (from desalination plants) into sea water, fire plumes, hot gases from smokestacks and volcanic eruptions.
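A minimal sketch of the classical point-source plume model of Morton, Taylor and Turner can make the entrainment idea above concrete. The top-hat profiles, the closed-form constants, and the entrainment coefficient value are assumptions of that standard model, since the chapter's own formulation is not reproduced here:

```python
# Pure plume from a point source of specific buoyancy flux F
# (top-hat profiles, entrainment coefficient alpha ~ 0.1):
#   radius    b(z) = (6/5) * alpha * z
#   velocity  w(z) = (25*F / (48*alpha**2))**(1/3) * z**(-1/3)
# so the volume flux Q = b^2 * w grows as F**(1/3) * z**(5/3):
# the plume widens and dilutes as it rises by entraining ambient fluid.

def plume_volume_flux(z, F, alpha=0.1):
    """Volume flux (b^2 * w) of a pure plume at height z above the source."""
    b = 1.2 * alpha * z                                     # plume radius
    w = (25.0 * F / (48.0 * alpha**2)) ** (1 / 3) * z ** (-1 / 3)  # velocity
    return b * b * w

# The 5/3 growth law: doubling the height multiplies Q by 2**(5/3)
q1 = plume_volume_flux(1.0, F=1.0)
q2 = plume_volume_flux(2.0, F=1.0)
print(round(q2 / q1, 3))  # 3.175
```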
Keywords: Volume Flux, Buoyancy Flux, Entrainment Coefficient, Densimetric Froude Number, Plume Solution
In a paper published in the Journal of Cell Biology, Sharon Campbell, PhD, professor of Biochemistry and Biophysics and member of UNC Lineberger Comprehensive Cancer Center, and Clare Waterman of the National Heart, Lung and Blood Institute at the National Institutes of Health showed that cell mobility occurs through the interactions between the protein vinculin and the cytoskeletal lattice formed by the protein actin.
By physically binding to the actin that makes up the cytoskeleton, vinculin operates as a form of molecular clutch transferring force and controlling cell motion.
"The hypothesis with the molecular clutch is that you get this kind of treadmilling effect. If you have an analogy with a car, the car is running in neutral. You get something pushing forward and something pulling behind, and you really don't have much effect on the cell. You have a lot of energy that's lost. But if it engages, this slows the retrograde flow and the actin polymerization pushes the leading edge forward so that it can move," said Campbell.
In this context, vinculin localizes to cellular components called focal adhesions, with over a hundred different proteins, and has been postulated to play a critical role as a molecular clutch. These adhesions can be thought of as wheels, taking the energy from the actin cytoskeleton and using it to move the whole cell across a substrate. So how important is this one protein, vinculin, in regulating cell movement?
Studies with knockout models that deactivated vinculin show that the cell still can move without the protein, but the movement becomes more chaotic. This can impact cell processes such as organ development. Embryonic mice without vinculin, for example, do not develop in the womb.
"Vinculin makes cells almost smarter, in a way. It really helps the cells decide if they are going to stay put or if they are going to go. And if they are going to go, it is going to be in a direction where there is a reason to go to. If you knock vinculin out, they lose that. They lose the anchoring effect. They move more easily, but they also move more randomly," said Peter Thompson, paper co-author and graduate student in the Campbell lab.
As vinculin can associate with a number of distinct proteins, Campbell and her lab designed specific vinculin variants that disrupted its ability to bind actin, in an effort to tease out the role of an interaction deemed critical for vinculin function. These impaired vinculin molecules were used by the Waterman group to show that interaction between actin and vinculin is required for proper development of cellular components and coupling of adhesions to actin, which are critical for the process of controlled cell movement.
The clarification of the role of vinculin helps refine understanding of cell movement, an enormously complex process involving multiple protein interactions. By improving the overall understanding of the protein interactions, researchers can create drugs and therapies that finely target the protein interactions and limit side effects.
"What we are trying to do is determine out of all the jobs that vinculin has, which ones are really critical for which cellular responses. Getting this kind of information is important because when we design drugs or therapies to target things, we want to be very specific so we limit side effects. It is still very far away from any sort of treatment, but it is setting the groundwork and foundation upon which we can target very specific aspects of cell movement and force transduction," said Thompson.
Cell movement plays an important role in cancer research because of the role of metastasis in tumor development. In many cancers, the greatest threat to the patient comes not from the original tumor but from the cancer cells that migrate and form new tumors throughout the body.
"By helping us better understand how cell movement occurs, we can better understand metastasis," said Thompson.
William Davis | EurekAlert!
The team scouring off the coast of Washington was positive that they recovered the fragments of the meteorite that crashed on March 7. A NASA scientist said it is the largest meteorite he has seen in 21 years.
Space July 4, 2018
Ancient planets that formed during the early solar system gave birth to the asteroids in the main asteroid belt. This is what scientists at the University of Florida proposed to explain the existence of these bodies between Mars and Jupiter.
Space July 3, 2018
NASA will search the ocean floor for a meteorite that caused a bright flash of light and mysterious boom when it crashed into Earth in March. NASA will be aided by a group of marine researchers and the Nautilus.
Space July 2, 2018
This year's Eta Aquarids are expected to be one of the finest showers of the year. The best way to see them is during early morning hours in locations far enough from city lights.
Space May 5, 2018
A mineral found in a lunar meteorite shows that the moon once had the presence of water on its surface. This mineral only forms in the presence of water, offering clues as to the moon's history with water.
Space May 3, 2018
The diamonds in meteorites that crashed in Sudan provide compelling evidence of a planet lost billions of years ago, according to a new study. Analysis of their composition revealed that they formed much as diamonds are formed on Earth.
Space April 19, 2018
Tiny diamonds from a meteorite that crashed on Earth a decade ago originated from a long lost planet. A new study determined where the space diamonds came from.
Space April 18, 2018
NASA and JPL are going to return the SaU008 Martian meteorite back to its home planet via a rover that’ll fly to Mars in 2020. It’s also being used to calibrate a highly sophisticated tool called SHERLOC.
Space February 16, 2018
Last month a fireball had broken apart over Michigan, pieces of which fell on Lake Zukey. Auction house Christie's is now going to auction the well-preserved Michigan meteorite, which was collected by a resident.
Space February 14, 2018
Dozens of space rock hunters flocked to Hamburg Township looking for meteorites worth thousands of dollars each. Unfortunately, not all of them know exactly where to look for extraterrestrial treasure.
Space January 19, 2018
A meteor caused a loud explosion and a brilliant flash of light to occur in the night skies over Michigan on Jan. 16. The meteor was so intense that its brightness was seen from across five U.S. states including Canada.
Space January 18, 2018
20 years after they landed on earth, meteorite fragments studied by scientists were known to contain organic elements considered as ingredients to life. Monahans and Zag meteorites could answer questions if life existed in the early solar system.
Space January 12, 2018
Bronze Age artifacts revealed nickel levels and ratio of iron to cobalt that suggest these were made using metals from meteorites. The metal state of meteoric iron may also explain why it was used in the Bronze Age.
Ancient December 6, 2017
Charles Darwin's theory that life on Earth began in warm little ponds may have been plausible. A study suggests that the meteorites that brought ingredients needed for life contributed to the process.
Earth/Environment October 3, 2017
Scientists have discovered the origin of stardust by which the solar system was made. The long-standing puzzle was solved by an international team of scientists who ran tests in Italy and traced it to AGB stars.
Space February 1, 2017
Over 460 million years ago, a single huge collision in the solar system produced many of the meteorites that still rain down on Earth. And for the first time, scientists investigated the meteors that predate the asteroid collision and discovered that meteorites considered rare today were much more common back then.
Space January 24, 2017
How did the solar system form? According to some astronomers, it all began when a dust and gas cloud was disturbed, a process possibly triggered by a low-mass supernova.
Space November 29, 2016
A fireball was seen over the skies of Florida. People who reported the event to the police and the meteorological association have had different reactions to the strange occurrence.
Earth/Environment November 25, 2016
After examining a cluster of meteorites in Mars' Victoria crater, researchers have determined that the lasting drought on the red planet has persisted for millions of years.
Space November 13, 2016
The 2016 Taurid meteor shower will be observable in November 2016. To look for these very bright yet rare meteors, you will have to spot the constellation Taurus.
Space November 5, 2016
The moon has about 33 percent more craters than previously estimated, comparison of 14,000 photos taken by the Lunar Reconnaissance Orbiter showed. What does this mean for future moon missions?
Space October 13, 2016
A quarry in Sweden is the resting place of a meteorite unlike any ever seen on Earth. What is the history and geology of this bizarre extinct meteorite?
Ancient June 16, 2016
These 'pet rocks' you can take home are out of this world. Literally. They're meteorites. Christie's London is organizing an auction for the sale of valuable meteorites.
Space April 8, 2016
A giant fireball crashed into the Atlantic Ocean, but it almost went unnoticed. The fireball was said to be the largest to hit Earth since the Chelyabinsk explosion.
Space February 23, 2016
Scientists may have finally discovered why so few iron meteorites are found in Antarctica. The stones are not actually scarce; they are hidden under the Antarctic ice.
Animals February 19, 2016
Indian officials confirmed a human death from a mysterious explosion Saturday, which is speculated to be a meteorite strike. Scientists are currently studying the retrieved specimen, which can verify if it is indeed the first official human fatality from a falling space rock.
February 8, 2016
Researchers from Curtin University found a meteorite that is older than the Earth. The rock is said to give significant clues on how the universe was formed.
Space January 7, 2016
Researchers from the University of Gothenburg in Sweden found two enormous craters in the county of Jämtland that were likely formed by meteorites that hit the Earth around 458 million years ago.
September 14, 2015
Traces of opal have been found on a Martian meteorite. Could this indicate signs of life on the planet?
Space July 7, 2015
Methane, which could support rudimentary forms of life, has been found in Martian meteorites, researchers report. While not necessarily hard evidence of life on the Red Planet, it suggests conditions there may have been friendly to life at some point in the planet's history, they say.
Space June 16, 2015
After studying rare meteorites on Earth, scientists discovered new building blocks of our solar system's birth, sulfide chondrules, that provide proof of a previously unknown region in the rotating gas disk known as the protoplanetary disk.
Space March 16, 2015
Meteorite found in Morocco is from the Red Planet -- but it's black, not red. Analysis suggests a darker crust under much of Mars' red dust cover.
Space February 1, 2015
Peering inside a meteorite yields clues to what goes on in the magnetic core of our own planet, researchers say. The findings hint at the ultimate fate of Earth's magnetic field, they add.
Space January 21, 2015
A new study done by MIT and Purdue University suggests that meteorites, long thought as the building blocks of our solar system's planets, are, in fact, merely byproducts of planet formation.
Geek January 14, 2015
The Earth could have been a wet water planet a hundred million years earlier than previously thought, researchers say. Findings could have implications for when life on our planet got started, they say.
Space November 2, 2014
An iron meteorite nearly 7 feet long has been discovered on the Martian surface by a NASA rover. The iron-rich space rock has been dubbed "Lebanon" by scientists.
Space July 21, 2014
A nickel-iron meteorite has been discovered on Mars by the Curiosity rover.
Space July 18, 2014
Mineral hidden deep within the Earth's interior reveals its secrets through a billion-year-old meteorite. Mineral given the official name bridgmanite after Nobel Prize winner.
Animals June 18, 2014
The asteroid that exploded over Russia in 2013, injuring more than 1,000 people, may have been involved in a collision long ago, say researchers.
Space May 25, 2014
Lyrid meteor shower passes its peak, but skywatchers might still get a glimpse, experts say. Bright moon might make it a bit harder, though.
Space April 24, 2014
Scientists have successfully identified the presence of vitamins in meteorite samples. Traces of vitamin B3, or niacin, have been extracted from small samples of space rocks.
Space April 20, 2014
New research about meteorites from Mars helps scientists understand the composition of the Red Planet's early atmosphere. The new findings also highlight the divergence between the Earth’s atmosphere and the Martian atmosphere over four and a half billion years ago.
Space April 17, 2014
A parachute jump almost became a leap of death for a skydiver in Norway who was nearly hit by a meteorite. Watch it happen on video!
April 5, 2014
Particles arriving on Earth in meteorites could reveal clues to violent events in our universe over billions of years.
Space April 1, 2014
A meteor weighing nearly 900 pounds crashed into the Moon, creating a new crater. The blast could be seen from Earth.
Space February 25, 2014 | <urn:uuid:e2f4df27-135e-4cc8-8248-3a6f8d5da83f> | 3.28125 | 2,027 | Content Listing | Science & Tech. | 51.317585 | 95,612,213 |
Building a Synthetic Brain
Posted on Thursday, February 3, 2011 | Filed Under: News
For electrical engineer Alice Parker ’70, ’75 PhD, simulating the machine on your shoulders is more complicated than building the latest computer chip.
Forget the Pentium. As you read this, you’re using a machine that puts the latest processor to shame–your brain. Its 100 billion neurons each have 10,000 synapses exchanging messages. Alice Parker ’70, ’75 PhD hopes to one day duplicate that power with a synthetic brain. It may take a decade or more, she says, but eventually a synthetic brain could be used in prosthetic devices that could wire around damaged parts of the brain, or in brainlike systems that could drive robotic vehicles.
Other researchers simulate the workings of a few neurons using computer software, but building a whole brain that way would require too many processors, says Parker, a professor of electrical engineering and former vice provost for research at the University of Southern California. "Computer chips like the Pentium are considered comparable to a brainlike structure, but they’re nowhere near the complexity of the brain," she says.
For example, a Swiss university has simulated a brain using software and an IBM multiprocessor with about 8,000 processors. But it can do the work of only 10,000 neurons. "To scale up to the size of a brain, you would need millions of these IBM multiprocessors," Parker says.
In her vision, transistors will stand in for the neurons and synapses, and the electrical properties of voltage and current will simulate brain chemicals (neurotransmitters). Consider that it would take hundreds of transistors to simulate just one synapse, and you’ll get some idea of the complexity of her task. "[The brain has] a quadrillion synapses, if I do the arithmetic right," Parker says. In other words, 10 to the 15th power, or a one with 15 zeros behind it. "It’s a lot of circuitry," she says.
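The synapse count Parker quotes follows directly from the figures at the top of the article; a quick arithmetic check (illustrative only):

```python
# Quick check of the synapse arithmetic quoted in the article:
# ~100 billion neurons, each with ~10,000 synapses.
neurons = 100_000_000_000       # 1e11
synapses_per_neuron = 10_000    # 1e4

total = neurons * synapses_per_neuron
print(f"{total:.0e}")  # 1e+15 -- "a quadrillion synapses"
```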
Parker has tackled daunting tasks before. She studied engineering at NC State in the early 1970s when few women did. In her first electrical engineering class, there were 120 students but just one other woman. "It was a bit isolating," she says. "The men might have been a little afraid to befriend me."
She sometimes made friends with fellow students’ girlfriends, and she made other friends while singing in the chorus. She also got support from professors like Wayland P. Seagraves ’32, ’33 ms, who chose to take Parker to a regional conference sponsored by the Institute of Electrical and Electronics Engineers, making her the first woman in the region to attend. "He was always trying to make opportunities for us. I don’t know if I would have been as successful if it hadn’t been for him," says Parker, who has been interested in synthetic-brain work since she was a graduate student. She began serious work on it several years ago when she saw that nanotechnology and other fields were improving fast enough to make the task feasible.
Parker and her collaborators are starting small, trying to simulate one neuron using transistors made out of carbon nanotubes, which are molecules of carbon that can be as much as 100,000 times narrower than a human hair. Because nanotubes are so small, even if each transistor used several, the synthetic brain wouldn’t be too big to be practical.
"I’m trying to get my students launched so they can ultimately get there," Parker says. She keeps her eye on neuroscience developments that will influence her next move, such as an October 2009 study showing that a particular type of cell helps the retina adapt when light changes from bright to dim. "All of a sudden we know about these cells that nobody was factoring into these circuits before," she says. "It’s like chess, where you’re always thinking about the end game. We can’t build this whole complicated system now, but every little piece that we make today has to make sense later in the final structure."
Angela Spivey, NC State Alumni Magazine
This article originally appeared in the Winter 2009 issue of NC State. The alumni magazine is a benefit of membership in the NC State Alumni Association.
24th May 2013
Amphibian populations are plummeting, U.S. survey finds
The first detailed estimate of how quickly frogs, toads and salamanders in the United States are disappearing from their habitats reveals they are vanishing at an alarming and rapid rate.
According to the study, published this week in the journal PLOS ONE, even the species of amphibians presumed to be relatively stable and widespread are declining. And these declines are occurring in amphibian populations everywhere – from the swamps in Louisiana and Florida, to the high mountains of the Sierras and the Rockies.
The study by USGS scientists and collaborators concluded that U.S. amphibian declines may be more widespread and severe than previously realised, and that significant declines are notably occurring even in protected national parks and wildlife refuges.
"Amphibians have been a constant presence in our planet's ponds, streams, lakes and rivers for 350 million years or so, surviving countless changes that caused many other groups of animals to go extinct," said USGS Director Suzette Kimball. "This is why the findings of this study are so noteworthy; they demonstrate that the pressures amphibians now face exceed the ability of many of these survivors to cope."
On average, populations of all amphibians examined vanished from habitats at a rate of 3.7 percent each year. If the rate observed is representative and remains unchanged, these species would disappear from half of the habitats they currently occupy in about 20 years. The more threatened species, considered "Red-Listed" by the International Union for Conservation of Nature, disappeared from their studied habitats at a rate of 11.6 percent each year. If the rate observed is representative and remains unchanged, these Red-Listed species would disappear from half of the habitats they currently occupy in about six years.
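The "half of their habitats" projections follow from compounding the annual loss rates; a quick check, assuming a constant proportional loss each year:

```python
import math

def years_to_half(annual_loss):
    """Years until occupancy halves, assuming a constant
    proportional loss of `annual_loss` per year."""
    return math.log(0.5) / math.log(1.0 - annual_loss)

print(f"{years_to_half(0.037):.1f}")  # 18.4 -> "about 20 years"
print(f"{years_to_half(0.116):.1f}")  # 5.6  -> "about six years"
```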
"Even though these declines seem small on the surface, they are not," said USGS ecologist Michael Adams, the lead author of the study. "Small numbers build up to dramatic declines with time. We knew there was a big problem with amphibians, but these numbers are both surprising and of significant concern."
For nine years, researchers looked at the rate of change in the number of ponds, lakes and other habitat features that amphibians occupied. In lay terms, this means that scientists documented how fast clusters of amphibians are disappearing across the landscape. In all, they analysed nine years of data from 34 sites spanning 48 species. The analysis did not evaluate causes of declines.
The research was supported by the USGS Amphibian Research and Monitoring Initiative, which studies amphibian trends and causes of decline. This unique program, known as ARMI, conducts research to address local information needs in a way that can be compared across studies to provide analyses of regional and national trends.
Brian Gratwicke, amphibian conservation biologist with the Smithsonian Conservation Biology Institute, said, "This is the culmination of an incredible sampling effort and cutting-edge analysis pioneered by the USGS, but it is very bad news for amphibians. Now, more than ever, we need to confront amphibian declines in the U.S. and take actions to conserve our incredible frog and salamander biodiversity."
The study offered other surprising insights. For example, declines occurred even in lands managed for conservation of natural resources, such as national parks and national wildlife refuges.
"The declines of amphibians in these protected areas are particularly worrisome because they suggest that some stressors – such as diseases, contaminants and drought – transcend landscapes," Adams said. "The fact that amphibian declines are occurring in our most protected areas adds weight to the hypothesis that this is a global phenomenon with implications for managers of all kinds of landscapes, even protected ones."
Amphibians seem to be experiencing the worst declines documented among vertebrates, but all major groups of animals associated with freshwater are having problems, according to Adams. While habitat loss is a factor in some areas, other research suggests that things like disease, invasive species, contaminants and perhaps other unknown factors are related to declines in protected areas.
"This study," said Adams, "gives us a point of reference that will enable us to track what's happening in a way that wasn’t possible before." | <urn:uuid:0943966a-8eb0-46ba-9eed-60e3d29d1bd2> | 2.78125 | 868 | News (Org.) | Science & Tech. | 31.699111 | 95,612,225 |
University of Maryland physicists contribute to identification of second gravitational wave event using data from Advanced LIGO detectors
On December 26, 2015, scientists observed gravitational waves--ripples in the fabric of spacetime--for the second time.
This image depicts two black holes just moments before they collided and merged with each other, releasing energy in the form of gravitational waves. On Dec. 26, 2015, after traveling for 1.4 billion years, the waves reached Earth and set off the twin LIGO detectors. This marks the second time that LIGO has detected gravitational waves, providing further confirmation of Einstein's general theory of relativity and securing the future of gravitational wave astronomy as a fundamentally new way to observe the universe. The black holes were 14 and 8 times the mass of the sun (L-R), and merged to form a new black hole 21 times the mass of the sun. An additional sun's worth of mass was transformed and released in the form of gravitational energy.
Credit: Numerical Simulations: S. Ossokine and A. Buonanno, Max Planck Institute for Gravitational Physics, and the Simulating eXtreme Spacetime (SXS) project. Scientific Visualization: T. Dietrich and R. Haas, Max Planck Institute for Gravitational Physics.
Both of the twin Laser Interferometer Gravitational-Wave Observatory (LIGO) detectors--located in Livingston, Louisiana, and Hanford, Washington--detected the gravitational wave event, named GW151226. The LIGO Scientific Collaboration (LSC) and the Virgo Collaboration used data from the twin LIGO detectors to make the discovery, which is accepted for publication in the journal Physical Review Letters.
Gravitational waves carry information about their origins and about the nature of gravity that cannot otherwise be obtained. Physicists on the LIGO and Virgo teams concluded that the final moments of a black hole merger produced the gravitational waves observed on December 26, 2015.
LIGO's historic first detection on September 14, 2015 resulted from a merger of two black holes 36 and 29 times the mass of the sun. In contrast, the black holes that created the second event were relative flyweights, tipping the scales at 14 and eight times the mass of the sun. Their merger produced a single, more massive spinning black hole that is 21 times the mass of the sun, and transformed an additional sun's worth of mass into gravitational energy.
"It's fabulous that our waveform models have pulled out from the noise such a weak but incredibly valuable gravitational wave signal," said Alessandra Buonanno, a UMD College Park Professor of Physics and LSC principal investigator who also has an appointment as Director at the Max Planck Institute for Gravitational Physics in Potsdam, Germany. Buonanno has led the effort to develop highly accurate models of gravitational waves that black holes would generate in the final process of orbiting and colliding with each other.
"GW151226 perfectly matches our theoretical predictions for how two black holes move around each other for several tens of orbits and ultimately merge," Buonanno added. "Remarkably, we could also infer that at least one of the two black holes in the binary was spinning."
The merger occurred approximately 1.4 billion years ago. The detected signal comes from the last 27 orbits of the black holes before their merger. Based on the arrival time of the signals--the Livingston detector measured the waves 1.1 milliseconds before the Hanford detector--researchers can roughly determine the position of the source in the sky.
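The sky localization works by triangulation: the arrival-time difference fixes the angle between the source direction and the line joining the two detectors, tracing out a ring of possible positions on the sky. A rough sketch (the ~3,002 km Hanford-Livingston separation used below is an assumed figure, not from the article):

```python
import math

c = 299_792_458.0   # speed of light, m/s
d = 3.002e6         # assumed Hanford-Livingston separation, m
dt = 1.1e-3         # measured arrival-time difference, s

# The delay constrains the angle between the source direction
# and the baseline joining the detectors: cos(theta) = c*dt/d.
theta = math.degrees(math.acos(c * dt / d))
print(f"{theta:.0f} degrees from the baseline")  # ~84 degrees
```

A single angle only defines a ring on the sky, which is why adding a third detector such as Virgo sharpens the localization so much.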
"It is very significant that these black holes were much less massive than those observed in the first detection," said Gabriela Gonzalez, LSC spokesperson and professor of physics and astronomy at Louisiana State University. "Because of their lighter masses compared to the first detection, they spent more time--about one second--in the sensitive band of the detectors. It is a promising start to mapping the populations of black holes in our universe."
The first detection of gravitational waves, announced on February 11, 2016, was a milestone in physics and astronomy. It confirmed a major prediction of Albert Einstein's 1915 general theory of relativity and marked the beginning of the new field of gravitational wave astronomy.
"We could tell within minutes that GW151226 was very likely a real event. We all just marveled at it for a while," said Peter Shawhan, an associate professor of physics at UMD and an LSC principal investigator. "By December we were sure that the first event was genuine and we had a fairly mature draft of that paper, which finally came out in February. But it was very satisfying to know, even then, that we already had a second event on our hands."
The second discovery "has truly put the 'O' for Observatory in LIGO," said Albert Lazzarini, deputy director of the LIGO Laboratory at Caltech. "With detections of two strong events in the four months of our first observing run, we can begin to make predictions about how often we might be hearing gravitational waves in the future. LIGO is bringing us a new way to observe some of the darkest yet most energetic events in our universe."
Both discoveries resulted from the enhanced capabilities of Advanced LIGO, a major upgrade that increased the sensitivity of the instruments and the volume of the universe probed compared with the first-generation LIGO detectors.
Advanced LIGO's next data-taking run will begin this fall. By then, scientists expect further improvements in detector sensitivity could allow LIGO to reach as much as 1.5 to 2 times more of the volume of the universe compared with the first run, which has already resulted in two major findings.
The Virgo detector, a third interferometer located near Pisa, Italy, with a design similar to the twin LIGO detectors, is expected to come online during the latter half of LIGO's upcoming observation run. Virgo will improve physicists' ability to locate the source of each new event, by comparing millisecond-scale differences in the arrival time of incoming gravitational wave signals.
The research paper, "GW151226: Observation of Gravitational Waves from a 22 Solar-mass Binary Black Hole Coalescence," by the LIGO Scientific Collaboration and the Virgo Collaboration, has been accepted for publication in the journal Physical Review Letters.
About LIGO and Virgo
LIGO research is carried out by the LIGO Scientific Collaboration (LSC), a group of more than 1,000 scientists from universities around the United States and in 14 other countries. More than 90 universities and research institutes in the LSC develop detector technology and analyze data; approximately 250 students are strong contributing members of the collaboration. The LSC detector network includes the LIGO interferometers and the GEO600 detector.
The LIGO Observatories are funded by the National Science Foundation (NSF), and were conceived, built, and are operated by Caltech and MIT. NSF leads in financial support for Advanced LIGO. Funding organizations in Germany (Max Planck Society), the U.K. (Science and Technology Facilities Council, STFC) and Australia (Australian Research Council) also have made significant commitments to the project.
Several of the key technologies that made Advanced LIGO so much more sensitive have been developed and tested by the German UK GEO collaboration. Significant computer resources have been contributed by the AEI Hannover Atlas Cluster, the LIGO Laboratory, Syracuse University, the ARCCA cluster at Cardiff University, the University of Wisconsin-Milwaukee, and the Open Science Grid. Several universities designed, built, and tested key components and techniques for Advanced LIGO: The Australian National University, the University of Adelaide, the University of Western Australia, the University of Florida, Stanford University, Columbia University in the City of New York, and Louisiana State University. The GEO team includes scientists at the Max Planck Institute for Gravitational Physics (Albert Einstein Institute, AEI), Leibniz Universität Hannover, along with partners at the University of Glasgow, Cardiff University, the University of Birmingham, other universities in the United Kingdom and Germany, and the University of the Balearic Islands in Spain.
Virgo research is carried out by the Virgo Collaboration, consisting of more than 250 physicists and engineers belonging to 19 different European research groups: 6 from Centre National de la Recherche Scientifique (CNRS) in France; 8 from the Istituto Nazionale di Fisica Nucleare (INFN) in Italy; 2 in The Netherlands with Nikhef; the Wigner RCP in Hungary; the POLGRAW group in Poland and the European Gravitational Observatory (EGO), the laboratory hosting the Virgo detector near Pisa in Italy.
Media Relations Contact: Matthew Wright, 301-405-9267, firstname.lastname@example.org
About the College of Computer, Mathematical, and Natural Sciences
The College of Computer, Mathematical, and Natural Sciences at the University of Maryland educates more than 7,000 future scientific leaders in its undergraduate and graduate programs each year. The college's 10 departments and more than a dozen interdisciplinary research centers foster scientific discovery with annual sponsored research funding exceeding $150 million.
Matthew Wright | EurekAlert!
At lengths upwards of 6 feet (1.8 m), Arthropleura was the largest land-dwelling arthropod of all time. Its many legs sprouted from around 30 jointed segments covered in armored plating, some of which have been discovered by paleontologists. Due to its size, Arthropleura probably had few, if any, enemies. Scientists believe that it ate dead plant matter, just like its living millipede descendants. These gigantic bugs became extinct when the climate shifted, drying out the sprawling swamps in which they lived.
Key Facts In This Video
The Carboniferous Period lasted from around 360 to 300 million years ago. 00:04
Fossilized tracks of Arthropleura have been found in North America and Scotland. 00:54
Arthropleura is believed to be the largest terrestrial arthropod that has ever existed. 02:01
The THz band is occupied by oscillatory modes of many biological structures [1]. Of particular interest is its interaction with hydrogen bond networks, as these are ubiquitous in any biological system. Hydrogen bonds form in populations of polar molecules in which hydrogen constitutes the "positive" end of the dipole. The best-known hydrogen bond network is that formed by bulk water, in which the individual molecules are all electrically attracted to one another. However, hydrogen bonds are found in many important intracellular structures, including the cell's information registry (DNA) and workforce (proteins).
Hydrogen bonds, from gene to protein:
A "gene" is simply a sequence of the four DNA bases (A, T, C, G) and serves as the chemical blueprint for protein synthesis. On average (but with high variance), genes in humans are ~3,000 bases in length, implying huge combinatorial complexity in the number of possible genes. DNA is a double-stranded macromolecule found in the cell nucleus; the two strands are wound into a double helix and held together by hydrogen bonds between the bases themselves.
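The combinatorial claim is easy to quantify: with 4 possible bases at each of ~3,000 positions, the number of possible sequences is 4^3000. A quick illustration:

```python
import math

# Number of distinct base sequences of length 3000 over the
# 4-letter DNA alphabet (A, T, C, G).
possible_genes = 4 ** 3000
print(f"~10^{int(math.log10(possible_genes))}")  # ~10^1806
```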
For a gene to be "expressed", a molecule called RNA polymerase wedges in between the two DNA strands, breaking the hydrogen bonds, and travels along the strand making a complementary "transcript" of the gene, which is called messenger RNA (mRNA). The mRNA then travels outside the nucleus, where it is "translated" by another molecule (ribosome) into a linear chain of amino acids, a precursor to a fully functional protein.
Before achieving an active conformational state, proteins begin as a polypeptide of amino acids. These linear (primary) chains organize into secondary structures called "alpha helices" or "beta sheets", both of which arise from hydrogen bonds forming between adjacent amino acids. Proteins then fold into an active tertiary state, dictated by the dominant influence of (you guessed it) hydrogen bonds. In fact, a popular model of protein folding (the "Hydrophobic-Polar Model") predicts that the preferred folded state is one in which hydrophobic amino acids are sequestered inside the structure, with the polar amino acids outside to interact with the surrounding water. Importantly, the structure of a protein (i.e., how it folds) is intimately related to the function it performs, meaning hydrogen bonds play a functional as well as structural role.
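The Hydrophobic-Polar model mentioned above scores a conformation by counting contacts between hydrophobic (H) residues that sit next to each other on the lattice without being consecutive in the chain; the more such H-H contacts, the lower the energy. A minimal 2D sketch (illustrative only, not tied to any particular folding software):

```python
def hp_energy(sequence, coords):
    """Energy of a 2D lattice conformation in the HP model:
    -1 for each pair of H residues that are lattice neighbours
    but not consecutive along the chain.
    sequence: string of 'H'/'P'; coords: one (x, y) per residue."""
    energy = 0
    for i in range(len(sequence)):
        for j in range(i + 2, len(sequence)):  # skip chain neighbours
            if sequence[i] == 'H' == sequence[j]:
                dx = abs(coords[i][0] - coords[j][0])
                dy = abs(coords[i][1] - coords[j][1])
                if dx + dy == 1:  # adjacent on the square lattice
                    energy -= 1
    return energy

# A 4-residue chain folded into a square: the two H ends become
# lattice neighbours, giving one H-H contact.
seq = "HPPH"
fold = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(hp_energy(seq, fold))  # -1
```

Real folding algorithms search over conformations to minimize this energy, which is what drives the hydrophobic residues into the core.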
Where does Terahertz fit in?
An excitation of biological systems at THz frequencies will couple to hydrogen bond networks. This has been exploited in diagnostic imaging and spectroscopy, which take advantage of the differences in water concentration, cellular structure, and molecular populations between diseased and healthy tissue [3]. However, more subtle processes can also occur:
Simulations of DNA in THz fields show that the creation and amplification of “bubbles” in the DNA strand can arise via nonlinear mechanisms [2]. Additionally, absorption experiments on DNA nucleobases and protein crystals show resonance-like behaviour (a large-amplitude response at certain frequencies) arising from these hydrogen bond networks [4], which can potentially affect the processes these structures are involved in. It is therefore reasonable to assume that THz excitations could affect gene transcription/translation, protein structure, and protein function.
Studies on the biological effects of THz radiation
Wilmink et al. have provided a review of THz-bio studies up to 2011 [5]; these studies show varied THz effects on biological systems (gene/protein expression, structural states, differentiation characteristics) at many levels of organization (organelle, cell, tissue, organ, organism). However, many strides have been made in recent years.
As one interesting example, excitation with intense pulses of THz radiation has been found to non-thermally alter gene expression at the transcript and protein levels in human skin tissue models [6]. Interestingly, the differential gene expression was distinct from that induced by a known genotoxic agent (UVA), and many of the proteins expressed belong to the DNA damage response pathway. These proteins are known to be expressed in the presence of severe damage caused by ionizing particles orders of magnitude more energetic than THz photons. The mechanism by which non-ionizing radiation could cause such potential DNA damage is currently unknown.
These effects raise the possibility of therapeutic applications of THz radiation, in which the primary goal is targeted damage to diseased tissue while sparing the surrounding healthy tissue.
Son, Joo-Hiuk. Terahertz Biomedical Science and Applications. Boca Raton, FL: CRC Press, (2014).
Alexandrov, B. et al. "DNA Breathing Dynamics in the Presence of a Terahertz Field". Physics Letters A, 374(10): 1214-1217, (2010).
Pickwell-MacPherson, E. et al. "Terahertz pulsed imaging: A potential medical imaging modality?". Photodiagnosis and Photodynamic Therapy, 6(2): 128-134.
Acbas, G. et al. "Optical measurements of long-range protein vibrations". Nature Communications, 5: 1-7, (2011).
Wilmink, G. et al. "Invited Review Article: Current State of Research on Biological Effects of Terahertz Radiation". Journal of Infrared, Millimeter, and Terahertz Waves, 32(10): 1074-1122, (2011).
Titova, L. et al. "Intense THz pulses cause H2AX phosphorylation and activate DNA damage response in human skin tissue". Biomedical Optics Express, 4(4): 559-568, (2014).
Which nations are most vulnerable to the negative impacts of climate change? According to research, not the nations that caused it. MHA@GW, the online master of health administration from the Milken Institute School of Public Health at the George Washington University, shared with us this data visualization to illustrate the comparison between the nations most susceptible to climate change and the nations that emit the highest levels of CO2. It compares data from the Notre Dame Global Adaptation Index (ND-GAIN) with data from the Carbon Dioxide Information Analysis Center (CDIAC).
Content and visual provided by George Washington University. | <urn:uuid:f86fa3d0-f00c-44d1-87e5-327e080b8fe1> | 2.71875 | 123 | Truncated | Science & Tech. | 24.393706 | 95,612,259 |
Teaching Microscopes to See - A Conversation with Professor Duane Loh
Thursday, April 27, 2017
Duane Loh is currently an Assistant Professor in Physics and Biology and a former LKY post-doctorate fellow at the National University of Singapore. He has a PhD in Computational Physics from Cornell University and completed his Bachelor of Science, majoring in Theoretical and Mathematical Physics, at Harvey Mudd College. He recently held a discussion, titled “Training Computation Lenses for Analysing Nanoscale Phenomena” at SGInnovate.
Professor Duane Loh
What are computational lenses?
Humans are able to see their surroundings because the eyes detect and measure light emitted from objects in the environment. The brain then interprets the patterns of detected light as visual perception. The same combination holds true in the high-resolution microscopy used to see the very small building blocks of the world: atoms, molecules, proteins, etc. Instead of the eyes and brain, we can use special lenses (for x-rays and electrons) and computers to examine these structures at the very small scale.
In the past, scientists' ability to view this microscopic world was largely dependent on how well these lenses were made. In recent years, as computing power has become cheaper, Professor Loh shares that it is now possible to replace some of the physical lenses in an imaging system with computational optics algorithms to reliably form images of specimens.
When machine learning tools are added to these computational lenses, the combination allows for the recognition and classification of objects, much as the human brain derives vision – almost like teaching microscopes to see.
A New Frontier
Single-particle imaging using x-rays or electrons is an up-and-coming field of research. The general idea is to use many random, incomplete, and noisy views of a system to statistically piece together a higher-quality and/or higher-dimensional "image" of the myriad processes within that system.
For example, researchers can recover the three-dimensional structure of a protein by capturing two-dimensional images of many copies of that protein in random orientations. This type of imaging is now routinely done using x-ray and electron-based instruments.
So far, this technique has been successful when used for sets of identical proteins carefully prepared in experiments. The ambitious goal now is to extend this technique to image a mixed or blended set of floppy biomolecules, some of which might be previously unknown, while others might be recognisable but have now adopted new structural states.
Professor Loh believes that if everything works out the way scientists predict, researchers will one day be able to place any biological cell into one of these imaging instruments and immediately catalogue all the proteins within it.
Taking a Closer Look
The process of analysing such minute elements is no simple feat. Professor Loh and his team begin the process by understanding the physics behind the contrast mechanism that created these images before studying the actual digital processing carried out on the images. He explains that microscopy images are not merely collections of pixels — but have the ability to capture how the world interacts with light and electrons!
The primary differences between using x-ray diffraction and electron microscopy data, versus looking at light or fluorescence microscopy images can be assigned to two factors – sample damage and the amount of data that can be gathered.
In x-ray diffraction patterns, the pixel values of the original images are all mixed into an incomplete, and fairly noisy, Fourier-transformed pattern, because x-ray photons scatter weakly from matter. While this weak interaction makes the mathematics of the interaction easy for researchers to exploit for imaging, it also sometimes gives frustratingly low signal levels, according to Professor Loh. On the other hand, electrons interact very strongly with matter, providing good imaging contrast in electron microscopy images, but at the expense of sample damage.
Knowing the contrast mechanism in both cases is then tremendously helpful for interpreting what is being measured.
Professor Loh is extremely inspired by the ability to put affordable computing and efficient software packages into the hands of the wider community. Motivated by this cause, he and his team are currently furthering their research into incorporating learning algorithms into this unique imaging technology.
They have combined elements of unsupervised clustering methods with their own outlier-detection routines, as well as modified Naive Bayes classifiers adapted to the specific topology of 3D rotations of the subject. The team is also looking into optimising the programmes for problems such as iterative phase retrieval.
Synchronous and Asynchronous Script Loading
By Stephen Bucaro
When a web page is loaded, the browser parses through the code, identifying each HTML tag and performing whatever rendering or loading function that tag defines. By default this action occurs synchronously. In other words, the browser downloads and executes any scripts it encounters during the parsing of the web page. If the script is external, that is, it has a src attribute, the browser will send a request for the resource and pause further parsing until the file has finished downloading and executing.
<script src="scriptOne.js"></script>
<script src="scriptTwo.js"></script>

In the synchronous loading example shown above, scriptTwo.js is loaded only when the loading and execution of scriptOne.js is complete.
Asynchronous script loading causes external scripts to be loaded asynchronously, in other words alongside the other downloading and rendering of the web page.
<script async src="scriptOne.js"></script>
<script async src="scriptTwo.js"></script>
Note the keyword async in the script tags shown above. This causes both scripts to be loaded concurrently: alongside each other, alongside any scripts being loaded synchronously, and alongside the rendering of the web page.
In theory this makes the web page render more quickly: the user is not forced to wait until script loading is complete and can interact with the web page while the scripts download in the background.
The Google Developers website recommends asynchronous script loading so that the web page can render more quickly. However, this can have unwanted consequences. First, if a web page element is rendered and the user attempts to interact with it before its related script has finished downloading, an error can occur. Second, if a script itself affects the appearance of the web page, the page will blink when switching from the originally rendered page to the final rendered page after the script finishes loading and executing. This is referred to here as FOOC (Flash Of Original Content), analogous to the better-known FOUC (Flash Of Unstyled Content).
<script src="demo_defer.js" defer ></script>
One common problem is that a function triggered by an html element, like a form, cannot
execute if that element has not yet completed rendering. In that case you might want to use the
defer attribute. When present the defer attribute specifies that the script is
to be executed only when the page has finished rendering.
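The three loading behaviours discussed above can be sketched with a small, browser-free simulation. This is an illustrative model only: the function name, script names, and download timings are invented for this example, and real browser scheduling is more complex (an async script can in reality finish and run while classic scripts are still being parsed).

```javascript
// Toy model of script scheduling -- a sketch, not real browser behaviour.
// Simplifying assumption: all classic scripts finish before any async one.
function executionOrder(scripts) {
  const order = [];

  // Classic (blocking) scripts: executed in document order as the
  // parser encounters them, pausing parsing each time.
  for (const s of scripts) {
    if (!s.async && !s.defer) order.push(s.name);
  }

  // async scripts: executed as soon as each finishes downloading,
  // so the order follows download time, not document order.
  const asyncScripts = scripts
    .filter(s => s.async)
    .sort((a, b) => a.downloadMs - b.downloadMs);
  for (const s of asyncScripts) order.push(s.name);

  // defer scripts: executed in document order after parsing completes.
  for (const s of scripts) {
    if (s.defer) order.push(s.name);
  }
  return order;
}

const scripts = [
  { name: "classic.js", downloadMs: 50 },
  { name: "analytics.js", async: true, downloadMs: 120 },
  { name: "widget.js", async: true, downloadMs: 10 },
  { name: "app.js", defer: true, downloadMs: 80 },
];
console.log(executionOrder(scripts).join(" -> "));
// classic.js -> widget.js -> analytics.js -> app.js
```

Note that widget.js runs before analytics.js purely because its download finished first; with defer, document order would have been preserved. This "order follows download time" property is exactly why async is risky for scripts that depend on one another.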
The fact is, whether to use synchronous or asynchronous script loading depends upon the design of the web page. Asynchronous script loading can make the page render more quickly, but if used without consideration for the design it can cause the page to flash or even produce errors.
At the present time the majority of unsolved problems in Fluid Dynamics are governed by non-linear partial differential equations and can only be treated by a numerical approach. As a consequence, specialists in Fluid Dynamics have recently devoted increasing attention to numerical, as opposed to analytical, techniques. Of course, there is no point in developing a novel numerical method unless it can be applied to actual problems of interest. In the early days of research on numerical analysis the capacity of computing machines was too restricted to permit many applications to be carried out. Today this situation has changed; the machines now available are sufficiently advanced to deal with an almost limitless range of problems; all that is needed is to discover effective numerical methods to attack them.
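As a concrete illustration of the kind of numerical method this passage describes, here is a minimal explicit finite-difference scheme for the one-dimensional heat conduction equation u_t = a*u_xx. This sketch is not taken from the text; the grid size, step count, and boundary conditions are chosen arbitrarily for the example, and the explicit scheme is stable only when r = a*dt/dx^2 <= 1/2.

```javascript
// One explicit finite-difference step for the 1-D heat equation
// u_t = a * u_xx, with r = a*dt/dx^2. Stability requires r <= 0.5.
// Boundary values are held fixed (Dirichlet conditions).
function heatStep(u, r) {
  const next = u.slice();
  for (let i = 1; i < u.length - 1; i++) {
    next[i] = u[i] + r * (u[i - 1] - 2 * u[i] + u[i + 1]);
  }
  return next;
}

// Initial condition: a unit "hot spot" in the middle of a cold rod.
let u = new Array(11).fill(0);
u[5] = 1;
const r = 0.25; // safely inside the stability limit
for (let step = 0; step < 50; step++) {
  u = heatStep(u, r);
}
// The spike diffuses outward, decays toward the zero boundaries,
// and the profile stays symmetric about the midpoint.
```

For nonlinear PDEs of the kind mentioned above, the update rule becomes state-dependent and stability analysis is harder, which is precisely why effective numerical methods are the subject of ongoing research.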
Keywords: Finite Difference, Finite Difference Method, Finite Difference Scheme, Heat Conduction Equation, Fourier Space
Identification and observation of desertification processes with the aid of measurements from space: Results from the European Field Experiment in Desertification-threatened Areas (EFEDA)
The ECHIVAL Field Experiment in Desertification-threatened Areas (EFEDA) addresses the question of desertification from the viewpoint of changing interactions between the land surface and the atmosphere under varying climatic conditions. The basic tools to improve our understanding of these processes are Soil-Vegetation-Atmosphere-Transfer (SVAT) and climate models. In testing techniques for deriving the needed input data from observations from space, EFEDA requires high-precision data sets that can be used to aggregate desertification-related land-surface characteristics up to the grid width of global climate models (10^4-10^5 km^2). In this context, schemes have been developed to infer surface fluxes from satellite measurements. To validate the information inferred from observations in space, ground measurements were performed over 2500 km^2 of Castilla-La Mancha, Spain, during the drying periods of the summers of 1991 and 1994. Ultimately EFEDA aims to determine cumulative fluxes over longer periods to allow discrimination between natural variability and trends over large areas, such as the land around the Mediterranean. To recalibrate and adjust the algorithms used to infer information about land surfaces from satellite measurements, "anchor stations" are proposed for critical areas to provide collateral information and continuous quality control of the inferred information.
Keywords: Land Surface, Natural Variability, Measurement Flux, Global Climate Model, Collateral Information
Impact of human trampling in different zones of a coral reef flat
The effects of trampling on the coral communities of the outer reef flat and reef crest were investigated at Heron Island at the southern end of the Great Barrier Reef. Eighteen months of trampling at various intensities increased the percentage cover of unoccupied substrate and the cover of mobile rubble. The morphology of the coral was the most important feature relating to trampling resistance. Branching corals were reduced on the outer reef flat, and most broken branches were recorded in the initial phases of the experiment. The reef crest was much more resistant.
A short-term trampling experiment showed that trampling detached a greater mass and larger fragments of coral on the outer reef flat than on the reef crest. Further trampling reduced the sizes of the detached fragments on the outer reef flat. A drift experiment indicated that greatest movement of fragments occurred on the reef crest and here the largest fragments moved greater distances.
We concluded that all habitats would be changed by reef walking and that by one measure the outer reef flat was 16 times more vulnerable than the reef crest. The routes taken by reef walkers need to be chosen in relation to the trampling resistance of the habitat.
Key words: Trampling, Coral, Reef zones, Tourist management, Morphology
Orbital motion and the natural laws that cause it can be illustrated by the behavior of our nearest astronomical neighbor, the Moon. The Moon revolves around the center of mass of the Earth- Moon system in an elliptical orbit with a period of one sidereal month. The proximity of the Moon to the Earth and the shortness of its orbital period make it possible to deduce the basic character of its orbit. Even more can be obtained by careful study, since the Moon’s orbit is further complicated mainly, though not exclusively, by the circumstance that it is in a three-body system. The Sun is the principal perturber of the lunar orbit; two of the other sources of perturbations are the nonspherical shape of the Earth and the gravitational attraction of the other planets.
Keywords: Orbital Plane, Photographic Image, Lunar Orbit, Angular Diameter, Moon System
Places Slipping Away
They offer us breathtaking vistas, ecological and archaeological treasures, as well as a window into history. But if present trends continue and the world keeps burning fossil fuels at its current pace, it's likely that few of them will be recognizable a century from now.
From places like Virginia's historic Jamestown, the first permanent English settlement in America, to Australia's Great Barrier Reef, warming temperatures and rising sea levels are slowly eating away at many of the world's most cherished sites and structures, both natural and man-made.
Nowhere on Earth is this happening more quickly than the first place on our list: the rapidly shrinking glaciers of Montana's Glacier National Park. | <urn:uuid:ae3c6183-cbdc-406c-91c8-0781fa0eb62b> | 2.6875 | 142 | Truncated | Science & Tech. | 31.155 | 95,612,362 |
Aquatic and Terrestrial Invasive Species Identification and Survey Training
Join us in welcoming Adirondack Park Invasive Plant Program’s Erin Vennie-Vollrath and Zack Simek for a free workshop during New York’s Invasive Species Awareness Week. This hands-on workshop will provide training on how to survey and identify many aquatic and terrestrial invasive species found in the Adirondacks, including those posing the biggest threat of invasion. You will also receive training on using the program iNaturalist to record and report your observations while out and about in the Park. This is a great opportunity to learn how to protect the Adirondacks from invasives. Hope to see you there!
For more information, please contact Kerry Crowningshield at firstname.lastname@example.org or 518-837-5177.
Click HERE to register. | <urn:uuid:fd1f5e49-94b8-498e-b477-5461383f1aac> | 2.90625 | 188 | News (Org.) | Science & Tech. | 37.941581 | 95,612,396 |
A Schlenk flask (or Schlenk tube) is a reaction vessel often used in air-sensitive chemistry or for storing sensitive chemicals, either hygroscopic or air-sensitive. These flasks are often connected to Schlenk lines, and are important in air-free synthesis and other procedures that require inert gases.
Schlenk flasks appear as ground-joint round bottom flasks, which have a side arm fitted with a ground glass or (rarer) PTFE stopcock, which allows the vessel to be evacuated or filled with gases (usually inert gases like nitrogen or argon).
Types of Schlenk flasks
Standard Schlenk flask
The standard Schlenk flask is a round-bottom, pear-shaped, or tubular flask with a ground glass joint and a side arm. The side arm contains a valve, most often a greased stopcock; PTFE stopcocks are rarer. The stopcock is used to control the flask's exposure to a manifold or the atmosphere. Addition of a reagent or stir bar to the flask is done through the ground glass joint. If the reagent added is air-sensitive, a septum is used to cap the flask, and the reagent is added as a solution with a syringe.
Schlenk bomb

A subclass of the standard Schlenk flask, the Schlenk bomb contains only one ground glass joint, present on the side of the flask and accessed by opening a Teflon plug valve. This design allows the Schlenk bomb to be sealed more completely than a standard Schlenk flask, even one whose septum or glass cap is wired on. Schlenk bombs are often used to conduct reactions at elevated pressures and temperatures as a closed system.
Straus flask

Commonly used for storing dried and degassed solvents, Straus flasks differ from normal Schlenk flasks in that the two flask necks are joined through a glass tube.
Schlenk flasks are used for performing simple air-sensitive reactions. Tubular Schlenk flasks tend to be used for storing air- and moisture-sensitive reagents, especially solid powders.
Since they are often used under vacuum, they should be inspected periodically for any cracks.
Schlenk flasks aren't cheap, so it's best to take good care of them.
When using them, always hold them in a vertical position. Cork rings are best suited for this task. | <urn:uuid:ad86488e-5f9d-4716-885b-9d4dd3ffd820> | 3.34375 | 538 | Knowledge Article | Science & Tech. | 52.703245 | 95,612,450 |
radiolarian ooze . A deep-sea pelagic sediment containing at least 30% opaline-silica tests of radiolarians. It is a siliceous ooze.
kinoite (kin'-o-ite). A blue monoclinic mineral: Ca2Cu2Si3O8(OH)4 .
Båth's law . A generalization in seismology that the largest aftershock occurring within a few days of a main shock has a magnitude of 1.2 units lower than that of the main shock (Richter, 1958, p.69).
conotheca (co-no-the'-ca). (a) Conoidal theca with a small circular aperture located at the terminus of a short neck developed irregularly in colonies of tuboid graptolithines (Whittington and Rickards, 1968). (b) The outer shell of ectocochleate cephalopods and of the phragmocone of endocochleate cephalopods.
fenitization (fen''-it-i-za'-tion). As generally used today, widespread alkali metasomatism of quartzo-feldspathic country rocks in the environs of carbonatite complexes.
Wolfcampian (Wolf-camp'-i-an). North American series: lowermost Permian (above Virgilian of Pennsylvanian, below Leonardian).
caltonite (cal'-to-nite). A dark-colored analcime-bearing basanite that contains microphenocrysts of olivine and clinopyroxene in a trachytic groundmass composed of feldspar laths, clinopyroxene, iron oxides, and analcime. It was named by Johannsen (1931) for Calton Hill, Derbyshire, England. Obsolete.
sault . A waterfall or rapids in a stream. Etymol: French, old spelling of "saut" (from Latin "saltus"), "leap".
chiastoclone (chi-as'-to-clone). A desma (of a sponge) in which several subequal, zygome-bearing arms radiate from a very short central shaft, giving the spicule an X-shaped profile.
deflection [geomorph] . A sharp change in the trend of a mountain chain. The term was introduced by Bucher (1933) as a translation of Staub's term Beugung. It differs from an orocline by not necessarily being a strain imposed on the completed orogen. See also: capped deflection; fractured deflection. Cf: linkage [geomorph]. | <urn:uuid:6c6febe4-a193-4fbe-8e21-9fe3b9f9f59e> | 2.765625 | 582 | Structured Data | Science & Tech. | 41.050041 | 95,612,471 |
Edited By: William J McShea, H. Brian Underwood and John H Rappole
402 pages, Figs, tabs
Assesses the management of deer populations from an ecosystem perspective, contending that, because deer are unevenly dispersed within protected areas, management options should be tested at the landscape level before being widely applied. The contributors reconsider the meaning of 'overabundance' from the perspectives of wildlife biology, international conservation, state game management, and animal welfare, and they discuss the harm - in the form of disease, parasites, and starvation - that can befall deer populations when their growth goes unchecked. Contributors also trace both the impact of deer on ecosystems and the effects of changing landscape features and vegetation biomass on deer populations.
Researchers with the National Snow and Ice Data Center (NSIDC) say the amount of Arctic sea ice has set a record low for the month of January.
"January was unusually warm across the Arctic Ocean thanks to a shift in the jet stream," said weather.com meteorologist Quincy Vagell. "A trough of low pressure in the North Pacific was associated with a ridge of high pressure in the highest latitudes, helping drive the mild conditions in the region."
According to NSIDC, the extent of the sea ice during January 2016 averaged 5.2 million square miles, which is 402,000 square miles below the 1981-to-2010 average. It is also the lowest January extent in the satellite record, falling 35,000 square miles below the previous low recorded in 2011.
NSIDC researchers say this dwindling amount of sea ice was driven by unusually low ice coverage in the Barents Sea, Kara Sea and the East Greenland Sea on the Atlantic side, as well as below average conditions in the Bering Sea and Sea of Okhotsk.
Sea ice is frozen ocean water that melts in the summer and refreezes in the winter. Usually it is at its smallest in September and reaches its largest expansion in March. A lack of sea ice affects arctic wildlife such as walruses and polar bears and could be playing a role in changing weather patterns in the U.S.
Despite the fact that Arctic sea ice expands and shrinks over the course of the seasons every year, its overall area has steadily shrunk for the past few decades due to global warming, according to NOAA.
Programming in C is considered tough because it is not as established and flexible as newer constructs, but it is commonly used in the electronics industry. Despite its imperfections, C remains one of the most recognized and frequently used programming languages.
We take our job seriously because we know how important assignments are to a student's final results. All the students who seek our help are satisfied with our work because:
C was developed at AT&T Bell Labs by Dennis Ritchie during the early 1970s. It was used to implement Unix on the PDP-11. Dennis Ritchie and Brian Kernighan wrote the definitive book on C, which is known as K&R C. There have been many modifications to C since it was first designed, with prototypes in headers being one of the more visible ones. C++ was an extension to the language created by Bjarne Stroustrup; it was originally a preprocessor for C called cfront that took in C++ code and output C code, which was compiled with the regular compiler.
We recruit only writers with PhD degrees and strong academic backgrounds, to guarantee high-quality homework.
C++ also includes library support, with more than 3,000 libraries available on the internet. In addition, C++ is built on the basic operators of C, which means it is fully compatible with C code.
• C++ is a powerful, efficient, and fast language that works for a wide range of applications – from graphical user interface programs to three-dimensional graphics for games. Even real-time mathematical solutions can be implemented using C++.
To keep up with C++, a person has to be a fast learner who can meet deadlines and satisfy project requirements.
Ever since, it has expanded into many roles: automating system administration, serving as glue between a variety of computer programs, and, of course, being among the most popular languages for CGI programming on the web.
Since C++ is an extension of C, every program written in the C language can easily be compiled with a C++ compiler.
The student can easily read it, understand it, and be able to solve the same assignment on his own next time. Our material can also be used for future reference and when preparing for exams.
Many programming languages such as C++, Java, and C# are essentially extensions of the C programming language. Many developers also describe C as a bridge between low-level and high-level languages.
Rest assured that math assignments completed by our experts will be error-free and done according to the instructions specified in the submitted order form.
We take pride in fulfilling all kinds of requirements of college and university students. Our staff can handle even the toughest cases, so that you can achieve higher grades in your academic course.
Most C++ homework assignments involve fast-paced learning that can be difficult to follow, keep up with and succeed in, because of the sheer volume of work expected.
The online C++ programming tutors at our organization are skilled and experienced developers in the C++ programming field. They also have excellent industrial experience in C/C++ and various other programming languages.
In this article I look at two possible ways of writing a function that safely reads in characters entered at the command prompt and also flushes any unneeded characters from the buffer.
String literals can extend over multiple lines, but the newlines do not appear in the resulting string.
C++ is an object-oriented programming language but, unlike Java, it is not a purely object-oriented one. Why not? Because the C++ language has friend functions, virtual functions and some other features that do not conform to all of the essential object-oriented principles.
Step 1: To buy the solution for this, please click 'submit your assignment', fill in all the details and make sure you mention the product code at the end of the case.
C++ is an object-oriented computer language. To become a successful programmer, one needs to master this language. However, students pursuing this discipline often face difficulties when dealing with assignments.
logical state. Telling what's what is easy if you think from the outside-in: if the collection-object's users have no
Here it comes: if the lookup method does not make any change to any of the collection-object's logical state, but it does
Should I punish my teenage sister, whom I have full custody of, for lying to me in order to secretly see her boyfriend?
It's also a great way to learn programming techniques and develop your very own style of coding.
Arrays can have multiple dimensions to let you store arrays inside arrays. Here I explain how you can visualize a two-dimensional array as being rather like a spreadsheet with intersecting rows and columns.
Several good code editors are available that provide functionality like R syntax highlighting, automatic code indenting and utilities to send code/functions to the R console.
If you need to chain together conditions when making tests, you should use C's 'logical operators'.
Practice assignments reinforce newly acquired skills. For example, students who have just learned a new method of solving a mathematical problem should be given sample problems to complete on their own. Preparation assignments help students get ready for activities that will take place in the classroom.
I will provide my code and you can see if you can fix it. I can also supply the original problem description and you can see if you can come up with a better solution that is simpler than fixing my code. I have a sense of which function might be faulty, so I can help you with that, and I am optimistic the entire debugging or coding on your end should be under fifty (or maybe twenty) lines.
For a student to become proficient in building libraries using the C programming language, they need to spend many hours practicing. Often, you may find yourself so burdened with pending assignments that you lack the time to revise your C programming class notes.
I'm not really into this kind of thing, but my laptop basically exploded with days' worth of work on it. The paper I got back was even better than what I had been working on, so big thanks to you guys. Oscar (CA)
These are the words that are reserved by the language compiler. So basically, keywords are words that have been given a special meaning by the language and cannot be used anywhere else; in short, they are reserved. Keywords:
You can also request your assignments to be exported in a number of file formats, including MS Word, PDF and TXT files, at no extra cost.
So when you're looking for inexpensive help to get your homework done, look our way and know that you're getting the distilled knowledge of thousands of people before you. You'll never have to worry about any milestone or assignment ever again!
C is a general-purpose, procedural, imperative computer programming language developed in 1972 by Dennis M. Ritchie at the Bell Telephone Laboratories to develop the UNIX operating system.
Many students often ask themselves: 'How can I write good C programs?', 'Can I get C project help?', or 'Can I get C homework help at an affordable price?'
We are confident and always want to be the first choice of every student. So, we offer these services –
You will always be provided with additional feedback, advice and resources, which can save you many hours;
Some of them simply neglect subjects they can't master, but that usually leads to unpleasant results, and it is hard to keep up with the curriculum later on. That is why lots of students are looking for someone who will 'do my C++ homework' and help them grasp the subject.
C is a computer language which supports structured programming, recursion and lexical variable scope. The C language maps efficiently to machine instructions and is therefore considered a very useful tool.
If you are struggling with C++ assignments, you are not alone. Completing correct C++ homework is just a matter of finding the best C++ help – experts in your field.
Others can't keep up with the class and need an experienced writer to help out with the missed topics. There are also those who already have a good background but find it hard to grasp a particular language.
If we mix these two kinds of parameters, then we must make sure that the unnamed parameters precede the named ones.
You can see that the optimizer alternates between picking the maximum upper-bounding point and the maximum point according to the quadratic model. As the optimization progresses, the upper bound becomes progressively more accurate, helping to locate the best peak to investigate, while the quadratic model quickly finds a high-precision maximizer on whatever peak it currently rests. These two elements together allow the optimizer to find the true global maximizer to high precision (within 1e-9 in this case) by the time the video concludes.
This is a great application for the learning student because the program's syntax is user-friendly. Thanks for uploading it!
The C language is fast and efficient – but it can be hard to learn. Unless you use this course. This course begins with a gentle introduction to C but quickly moves on to explain some of its most confusing features: everything from C's 'scoping' rules to the curious relationship between arrays and memory addresses. By the end of the course you will have a deep understanding both of the C language itself and of the underlying 'architecture' of your computer. What you will learn: the basics of programming – from the ground up.
Another example of a renames clause is where you are using some complex structure and you want, in effect, to use a synonym for it during some processing. In the example below we have a device-handler structure which contains some procedure types that we need to execute in turn.
The values of the variables are constrained by upper and lower bounds. The following paper, published in 2009 by Powell, describes the detailed workings of the BOBYQA algorithm: 'The BOBYQA algorithm for bound constrained optimization without derivatives' by M.J.D. Powell.
Learn C++ with this tutorial, designed for beginners and containing plenty of examples, tips and simple explanations.
Any kind of value, from the very large to the very small, and any fractional values are stored in the float and double types.
In addition to direct calls to entry points, clients may rendezvous with a task via three conditional forms of the select statement: timed entry call, conditional entry call and asynchronous select. 7.3 Protected types
Calculate the fraction of test items that equal the corresponding reference items. Given a list of reference values and a corresponding list of test values,
This shows how much safer the Ada version is: we know exactly what we are waiting for and can immediately process it. In the C++ case all we know is
The principal function in all C code is main(), which is the first function that runs when the program starts. The main() function is an int function, so it must return an integer value. All of the function's statements are enclosed in curly brackets, or braces.
You should not see any errors. If you do, you probably don't have the Objective-C part of gcc installed. Please make sure you have it installed and working before you proceed.
The default version performs a memberwise copy, where each member is copied by its own copy assignment operator (which may itself be programmer-declared or compiler-generated).
from C/C++ to Ada for simple structures. Note that the example below does not try to convert type for type; thus the C char*, used to hold a string, is converted to the
Function parameters are always passed by value. Pass-by-reference is simulated in C by explicitly passing pointer values. C program source text is free-format, using the semicolon as a statement terminator and curly braces for grouping blocks of statements.
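A small sketch of the difference (function names here are invented for illustration):

```cpp
// Pass-by-value: 'x' is a copy, so the increment is invisible to the caller.
void incrementByValue(int x)  { ++x; }

// Simulated pass-by-reference: the caller passes a pointer, and the
// function modifies the caller's object through it.
void incrementViaPtr(int* px) { ++(*px); }

int demo()
{
    int n = 0;
    incrementByValue(n);   // n is still 0
    incrementViaPtr(&n);   // n is now 1
    return n;
}
```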
is a uniquely Ada feature. Nested procedures: simply, you can define any number of procedures within the definition of
is an inspector-method. That creates a problem: when the compiler sees your const method changing the physical state
functions and friends. These external users also perceive the object as having state; for example, if the
You have already seen a range in use (for strings); it is expressed as low .. high and can be one of the most useful ways of expressing interfaces and parameter values, for example:
The persistent nature of static objects is useful for maintaining state information across function calls, automatic allocation is easy to use but stack space is typically more limited and transient than either static memory or heap space, and dynamic memory allocation allows convenient allocation of objects whose size is known only at run-time. Most C programs make extensive use of all three.
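A brief sketch of the three storage classes (names invented; shown in C++ for consistency with the other examples here):

```cpp
#include <memory>

int counter()
{
    static int calls = 0;   // static storage: initialised once, persists across calls
    int local = 1;          // automatic storage: created and destroyed on each call
    calls += local;
    return calls;
}

// Dynamic storage: the object's lifetime is decided at run-time.
int heapValue(int n)
{
    std::unique_ptr<int> p = std::make_unique<int>(n);
    return *p;              // the unique_ptr frees the heap object on return
}
```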
This runs the delay and the accept concurrently, and if the delay completes before the accept then the accept is aborted.
long and short are modifiers that let a data type use either kind of memory. The int keyword need not follow the short and long keywords, and this is most often the case. A short can be used where the values fall within a smaller range than that of an int, typically -32768 to 32767.
code tend to continue using the traditional style so their overall codebase can keep consistent coding standards.
Let us now consider an example: we will call a function which we know may raise a particular exception, but it may raise some we don't know about, so
Consequently we can read/write items of type Type_1_Data, and when we need to represent the data as Type_2_Data we can simply
Making a const int* point to an int doesn't const-ify the int. The int can't be changed via the
Another use for it is to access the attributes First and Last, so for an integer the range of possible values is Integer'First to Integer'Last. This can also be applied to arrays: if you are passed an array and don't know its size, you can use these attribute values to range over it in a loop (see section 1.
The protection of organic carbon stored in forests is considered an important method for mitigating climate change. Like terrestrial ecosystems, coastal ecosystems store large amounts of carbon, and there are initiatives to protect these 'blue carbon' stores. Organic carbon stocks in tidal salt marshes and mangroves have been estimated, but uncertainties in the stores of seagrass meadows – some of the most productive ecosystems on Earth – hinder the application of marine carbon conservation schemes. Here, we compile published and unpublished measurements of the organic carbon content of living seagrass biomass and underlying soils in 946 distinct seagrass meadows across the globe. Using only data from sites for which full inventories exist, we estimate that, globally, seagrass ecosystems could store as much as 19.9 Pg organic carbon; according to a more conservative approach, in which we incorporate more data from surface soils and depth-dependent declines in soil carbon stocks, we estimate that the seagrass carbon pool lies between 4.2 and 8.4 Pg carbon. We estimate that present rates of seagrass loss could result in the release of up to 299 Tg carbon per year, assuming that all of the organic carbon in seagrass biomass and the top metre of soils is remineralized.
Edited By: Martin Hovland
278 pages, illustrations, figures, tables
Deep-water coral reefs are found along large sections of the outer continental shelves and slopes of Europe, from North Cape to the Gulf of Cadiz, and because they also occur along the Atlantic seaboard of USA, the Gulf of Mexico, off Brazil, in the Mediterranean, and off New Zealand, they are currently being targeted by international groups of marine scientists. They have become popular and opportune deep-water research targets because they offer exciting frontier exploration, combined with a whole plethora of modern scientific methods, such as deep-sea drilling, sampling, remote control surveying and documentation. Furthermore they represent timely opportunities for further developments within the application of geochemistry, stable isotope research, bacterial sciences, including DNA-sequestering, and medical research (search for bioactive compounds).
The Integrated Ocean Drilling Program (IODP) has arranged a deep-sea scientific drilling campaign on giant carbonate banks off Ireland. Because the reefs currently defy traditional marine-ecological theories, they represent future research opportunities and will enjoy scientific scrutiny for many years to come.
This book, written by Hovland (Statoil, Norway), a marine geology expert, is of considerable interest due to its many colored photographs and drawings that illustrate the locations and organic diversity of the reefs and mounds. Even though most of the work is devoted to the Scandinavian reefs, those in the other oceans appear to be similar. The text is well written and the author draws attention to the need for conservation, primarily to protect the reefs from damage by deep-water trawling. Summing Up: Highly recommended. Academic collections, upper-level undergraduates, graduate students, researchers, and faculty. (J. C. Briggs, CHOICE, Dec 2008)
In this article we look at one of the issues inherent in C when building larger projects – the problem of function and object naming. We look at the C++ solution to this: namespaces.
A problem with big projects in C
When we move to multi-file projects the problem in C is having to create unique names for functions and externs in the global namespace. If we don’t have unique definitions this will lead to a link-time error.
We could, of course, make the functions static but then they are only available within the file they are defined in.
A common solution to the problem amongst C programmers is to add an extension to the function name to make it unique. Very commonly this is the module name.
This works for your own code but is often not an option for third-party code:
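As a sketch of that convention (the module and function names below are invented for illustration), each external name simply carries its module's prefix to stay unique in the global namespace:

```cpp
// Hypothetical 'motor' and 'servo' modules: every externally visible
// function is prefixed with its module name - the traditional C workaround.
int motor_init(void)     { return 0; }
int motor_maxSpeed(void) { return 100; }

// A second module must pick a different prefix to avoid link-time clashes.
int servo_init(void)     { return 0; }
int servo_maxAngle(void) { return 180; }
```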
The C++ answer: namespaces
A namespace is a named scope. A user-defined namespace is a mechanism for expressing logical grouping in your code.
By putting the classes Longitude and Latitude into the namespace Nav we have effectively extended their names by ‘prefixing’ them with the namespace name.
In the implementation file we must prefix the namespace name onto the class (using the scope resolution operator) when we define the member functions (or indeed any other member). An alternative notation is to enclose all the class definitions within a namespace declaration.
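A minimal sketch of the pattern (only Nav and the class name come from the text; the member function and its body are assumed purely for illustration):

```cpp
namespace Nav {
    class Longitude {
    public:
        explicit Longitude(double deg) : degrees(deg) {}
        double asDegrees() const;   // declared here, defined below
    private:
        double degrees;
    };
}

// Definition outside the namespace declaration: the name must be
// prefixed with Nav:: using the scope resolution operator.
double Nav::Longitude::asDegrees() const
{
    return degrees;
}
```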
Elements defined within a namespace can be accessed in any of three ways:
- by using the fully qualified name, in this case Nav::
- if the item is used a lot, it can be brought individually into scope with a using declaration
- the global statement using namespace Nav (a using directive) makes all names in the namespace available
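The three access methods can be sketched as follows (the trivial classes are stand-ins, invented so each access can be checked):

```cpp
namespace Nav {
    class Longitude { public: int value() const { return 1; } };
    class Latitude  { public: int value() const { return 2; } };
}

// 1. Fully qualified name:
Nav::Longitude lon1;

// 2. A using declaration brings in one specific name:
using Nav::Latitude;
Latitude lat1;

// 3. A using directive makes every name in Nav visible:
using namespace Nav;
Longitude lon2;
```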
Namespaces are an open scope – it is possible to keep adding definitions to the namespace, across different translation units. Although classes act as namespaces they are referred to as a closed scope – that is, once a class (namespace) has been defined it cannot be added to.
It is good practice to put all code into a namespace, rather than leaving it in the global namespace. The only thing that should (ideally) be in the global namespace is main(). (MISRA-C++ makes this demand of you.)
Namespaces may be nested arbitrarily deep. Nested namespaces are analogous to a hierarchical file system, rooted in the global namespace (which is identified by having nothing to the left of the scope resolution operator (::)).
If your coding standard demands that you explicitly qualify type names then having hierarchies of namespaces (each with a descriptive name) can quickly become onerous, and lead to less-than-readable code. To improve legibility C++ allows namespace aliasing. A namespace alias is a – usually shorter and more succinct – synonym for the declared namespace.
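A short sketch of aliasing, assuming a hypothetical nested hierarchy (all the names below are invented):

```cpp
namespace Project {
namespace Subsystem {
namespace Nav {
    class Waypoint { public: int id() const { return 7; } };
}
}
}

// A succinct synonym for the deeply nested namespace:
namespace NavNS = Project::Subsystem::Nav;

// Client code can now use the alias instead of the full path.
NavNS::Waypoint wp;
```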
Forward references and namespaces
Wherever possible we want to reduce the coupling between modules in our design. Including one header file within another builds dependencies (coupling) between the two interfaces.
In this case, including the class definition of class Sensor is unnecessary. You only need to include the class definition if you are going to allocate memory for an object (a Sensor object, in this case) or access any of its member variables or operations. Class Positioner does not instantiate a Sensor object; it merely has a pointer to a Sensor object. Since the compiler knows how much memory to allocate for a pointer we do not need to include the class definition of Sensor. However, we must still declare that class Sensor is a valid type to satisfy the compiler. In this case we do so with a forward reference – actually just a pure declaration that class Sensor exists (somewhere).
However, if we put our Sensor and Actuator classes in a namespace we have a problem. In the case of the Positioner class, above, since we are only declaring pointers to Sensor and Actuator objects it is good practice to use forward references to those classes.
The syntax, as shown below, looks reasonable but doesn’t work.
The compiler takes the forward reference as referring to a nested class; it cannot know IO is a namespace.
The solution is to tell the compiler that IO is a namespace with the namespace keyword. The forward references to Sensor and Actuator can then be declared within the namespace.
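Putting that together (Positioner's pointer members are assumed from the earlier discussion; the connected() helper is invented so the example can be exercised):

```cpp
// Forward references: declare that the classes exist inside namespace IO.
// No header for Sensor or Actuator is needed.
namespace IO {
    class Sensor;
    class Actuator;
}

class Positioner {
public:
    Positioner(IO::Sensor* s, IO::Actuator* a) : pSensor(s), pActuator(a) {}
    bool connected() const { return pSensor != nullptr && pActuator != nullptr; }
private:
    IO::Sensor*   pSensor;     // pointers only, so the full class
    IO::Actuator* pActuator;   // definitions are not required here
};
```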
Remember from previously, if we want to use a class or function from a namespace we have to explicitly fully-qualify the entity. If this is true (and it is) then the following code shouldn’t compile:
The reason it should fail is the overloaded operator<<. This is actually a function call which, like everything else in the Standard Library, is placed in the namespace std:
std::ostream& std::operator<< (std::ostream&, const char*);
This means that, in order to access this function, we should have to fully qualify its name:
This is not very readable; and, of course, we know the original code does compile perfectly fine.
The solution is a compiler mechanism called Argument-Dependent Lookup (ADL) or Koenig Lookup (after its inventor, Andrew Koenig). ADL states that if you supply a function argument of class type, then to find the function name the compiler considers matching names in the namespace containing the argument’s type.
In the example above we have defined a class, Digital and an overloaded function doStuff in the namespace Points.
When we make a call to doStuff() with a Points::Digital object the compiler is able to look into the owning namespace of Digital (Points::) and find a function with the correct signature.
However, this only works with arguments of class type so the call to doStuff() with an integer cannot be resolved automatically; the programmer would have to explicitly qualify the function
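A compilable sketch of that example (the bodies of doStuff and the wrapper functions are invented so the calls return checkable values):

```cpp
namespace Points {
    class Digital { public: int pin() const { return 3; } };

    int doStuff(const Digital& d) { return d.pin(); }  // found via ADL
    int doStuff(int i)            { return i * 2; }    // not found via ADL
}

int adlCall()
{
    Points::Digital d;
    return doStuff(d);           // OK: the argument's namespace is searched
}

int plainCall()
{
    return Points::doStuff(21);  // int argument: must be qualified explicitly
}
```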
Our earlier Standard Library example can now be explained: since one of the parameters of std::operator<< is of class type (in this case std::ostream) the compiler can search the std:: namespace for an appropriate function signature without the programmer having to explicitly qualify it. The simplification of library code like this is the primary reason for the inclusion of ADL.
Useful though this is, ADL has the potential to cause us problems in our code. Consider the example below:
Here, the call to doStuff() is ambiguous – it could be either Feabhas::doStuff(Points::Digital&) or Points::doStuff(Points::Digital&) (using ADL). There is no automatic resolution – the programmer must explicitly qualify the call with the appropriate namespace name to get the doStuff() they want.
Preserving the locality of code
In C, the keyword static has two sets of semantics, depending on where it used. The keyword static can be applied to functions and variables
Static functions are not exported, they are private to the module they are defined in. They have internal linkage; they do not appear in the module’s export table. This is useful for preventing your local helper functions from being called outside of your module.
Applying static to objects defined outside any block (confusingly, called ‘static objects’ in the standard!) gives the object internal linkage. The static object is visible anywhere in the translation unit, but not visible from any other translation unit.
When an automatic (local) variable is marked static in a function the compiler allocates permanent storage for it (at compile time). Practically, this means it retains its state between calls to the function but its scope is limited to the function in which it is defined.
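For instance, a minimal illustration (the function name is invented):

```cpp
// A static local retains its value between calls, but its scope is
// limited to the function in which it is defined.
int nextId()
{
    static int id = 0;   // initialised once; storage lasts for the whole program
    return ++id;
}
```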
C++ extends this behaviour to user-defined (class) types as well.
However, C++ prefers the use of a concept called an un-named namespace instead of static to give objects and functions internal linkage.
An un-named namespace is (as the name suggests!) an anonymous namespace – it does not have a name. The compiler allows entities (objects and functions) in this namespace to be accessed within the defining translation unit, but not outside. Un-named namespaces in different translation units are considered independent (and different). There is no way of naming a member of an unnamed namespace from another translation unit; hence the members of that namespace cannot be accessed (making them, effectively, static).
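A sketch of an unnamed namespace, with invented helpers:

```cpp
// Everything in the unnamed namespace has internal linkage: visible
// throughout this translation unit, invisible to every other one.
namespace {
    int helper(int x) { return x + 1; }   // effectively 'static'
}

// The public interface of this translation unit can use the helper freely.
int doWork(int x)
{
    return helper(x);
}
```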
This removes the need for C's static for this purpose, which C++03 deprecated (the deprecation was later rescinded in C++11, but the unnamed namespace remains the preferred idiom).
Namespaces are a powerful organisational tool in your design. They are a compile-time construct so have no run-time overhead. There’s no good reason not to use namespaces in your code. They will help you build more maintainable, more portable and more reusable code.
For more, even more exciting exploration of this topic have a look here:
Glennan Carnie
More precisely, the average size of Soay sheep on the island has declined about 5 percent in both body weight and stature since researchers began taking measurements of the herd in 1985.
The finding is the exact opposite of how researchers would have expected the sheep to respond to the consistent warming trend that global warming has brought to the island.
"Since the trend has been for milder winters, that should actually make things much happier for everybody, because they don't have to cope with that severe winter," said Shripad Tuljapurkar, a professor of biology at Stanford and one of the co-authors of a paper describing the research, published online July 2, by Science.
Hirta lies in the Scottish archipelago of St. Kilda, the westernmost of the remote Outer Hebrides Islands, which lie at the same latitude as Hudson's Bay in Canada and are renowned for harsh, stormy winters. But, with less punishing winters and a longer growing season, the sheep have more grass available for grazing and more time in which to shovel it in before winter hits, all of which would be expected to produce bigger sheep. Except the gentler climate has had an unexpected side effect.
"Survival rates have been lifted for everyone, including the smaller sheep, so sheep that might simply have not made it 20 or 30 years ago, are definitely making it now," said Tuljapurkar. With more runts surviving, the size of the average sheep in the herd has declined.
The size-decline trend is exacerbated because with more sheep surviving, there is more competition for the available food. And Hirta, being just under a square mile, only has so much grass to offer its resident sheep, even with a longer growing season.
"When you start looking at the population effect, it's very clear that what is driving the decline in growth rates is simply that there are more sheep jostling for the resources that are at hand. They're simply not putting on as much body mass as they used to between this first and second year of life," Tuljapurkar said. Even if the sheep pick up the pace of their growth after that, it isn't enough to make up for the lower birth weights and slower early growth, so the trend toward a decline in average size persists.
While there is ample evidence to indicate that the long-term effects of global warming will be negative, there has been some indication that in the short-term, in certain regions and for certain species, there might be a temporary positive effect, as longer growing seasons could enable some plants and animals to flourish over a broader area than they normally inhabit. But this sheep finding suggests it isn't necessarily that simple.
The researchers gathered annual data by making measurements each August on the female members of the flock. In teasing out the environmental effects on the sheep, the researchers also discovered another surprising effect at work.
Natural selection generally works to favor animals growing larger over time, as bigger animals are better equipped to survive hard times and are thus more likely to reproduce successfully.
"We also know that body weight is heritable," Tuljapurkar said. Thus, one might expect that mothers would give birth to daughters of about the same weight at birth, or slightly larger, as the mothers had been when they were born. So, the researchers analyzed the data to see if it was the case that the average birth weight of offspring is the same as the average birth weight of their mothers.
"It turns out, that is not true for young mothers," he said. Unlike more mature mothers, the young mothers have offspring that typically weigh less than the mothers did and also stay smaller as they grow. And every year, a significant percentage of the new lambs are born to young mothers.
Tuljapurkar said that this "young mother" effect, as the researchers have dubbed it, pushes down the distribution of birth weight in the population, counteracting the increase in birth weight that would be expected to result from natural selection favoring bigger animals through survival and reproduction.
"When you add those two effects, they pretty much cancel out," he said. "Even though there are evolutionary forces at work, they are sort of neutralized by this 'young mother' effect, which is quite a surprising thing in the context of what evolutionary biologists and ecologists generally believe."
Although Tuljapurkar and his colleagues don't know how many other species might exhibit the "young mother" effect, they have also documented it in a species of red deer that live in northern Scotland.
"Sometimes people will say to me, 'Why the heck do you guys wander around studying sheep in the middle of nowhere for 25 years?'" Tuljapurkar said, reflecting on the study.
"There really is a tremendous amount that you can learn from studying the same thing in the same place for long enough so that you can see what happens across changes of climate regime," he said. "This is one of those payoffs for long term research."
Louis Bergeron | EurekAlert!
hs2lazy -- Haskell to Lazy K compiler
What is this?
This is a translator from a subset of Haskell to Lazy K.
How to compile
You can build hs2lazy with GHC using the following command:
ghc -o hs2lazy --make Main.hs
How to use
hs2lazy [source files]
If multiple source files are given, they are concatenated in the order specified, and the resulting Lazy K code is written to standard output. The Prelude module is not loaded automatically, so you should specify examples/hs2lazy-prelude.hs first, like the following:
hs2lazy examples/hs2lazy-prelude.hs foo.hs >foo.lazy
How to write source code
In Lazy K, the input and output streams are represented as infinite lists of numbers (256 represents EOF). Hs2lazy programs handle I/O in a similar way.
In Haskell, the type of the main function is IO (). In hs2lazy, it is:
main :: Stream -> Stream
The Stream type is defined in hs2lazy-prelude.hs as
data Stream = Stream Char Stream
The input is an infinite stream of characters. For example:
Stream 'f' $ Stream 'o' $ Stream 'o' $ Stream eof $ Stream eof ...
eof is defined as the character with code 256 (the end-of-file marker mentioned above).
Streams can be converted to and from strings with fromStream and toStream, defined in hs2lazy-prelude.hs. For example, the following program reverses the input character-by-character:
main = toStream . reverse . fromStream
or, equivalently:

main = interact reverse
interact converts a String-to-String function to a Stream-to-Stream function.
Type.hs is based on Typing Haskell in Haskell. See Lisence.thih for its license.
Where did the Universe come from? A question that intrigued Stephen W. Hawking, the Cambridge University physicist and author of the bestselling book "A Brief History of Time," which sold more than 10 million copies. He was a leader of his generation in exploring gravity and studying the properties of black holes.
What is a black hole? A black hole is a region in space whose gravitational pull is so strong that even light cannot get out. Light travels along geodesics, and the paths of light rays are curved by gravitational forces. Hawking discovered that black holes weren't entirely black after all: radiation slowly leaks out of them, so over a very long period a black hole fizzles away until, at a certain point, it explodes and disappears.
Stephen Hawking visited India in January 2001 and described his 16-day tour as magnificent. He was the first recipient of the "Sarojini Damodaran Fellowship".
For his transportation during his stay, Mahindra and Mahindra designed a special vehicle that accommodated his wheelchair. He was invited to the Rashtrapati Bhawan, where he met the then President K. R. Narayanan, who described his 45-minute meeting with the physicist as "an unforgettable experience".
Stephen Hawking delivered several lectures and interacted with Indian astrophysicists and mathematicians, saying that he found Indians to be very good at math and physics. He also visited the Jantar Mantar and Kutub Minar while in Delhi.
Stephen Hawking passed away at the age of 76 on 14th March 2018.
By: Madhuchanda Saxena
Physics 2048, Spring 2008. Lecture #4, Chapter 4: Motion in 2D and 3D.
Outline: definitions, projectile motion, uniform circular motion, relative motion.
I. Definitions
Position vector: extends from the origin of a coordinate system to the particle.
Displacement vector: represents a particle's position change during a certain time interval.
II. Projectile motion
Motion of a particle launched with an initial velocity v0 and with free-fall acceleration g.
The horizontal and vertical motions are independent from each other.
- Horizontal motion: ax = 0; vx = v0x = constant.
Range (R): horizontal distance traveled by a projectile before returning to launch height.
- Vertical motion: ay = -g.
- Horizontal range: R = x - x0, with y - y0 = 0. (Maximum for a launch angle of 45º.)
Overall assumption: the air through which the projectile moves has no effect on its motion (friction neglected).
In Galileo’s Two New Sciences, the author states that “for elevations (angles of projection) which exceed or fall short of 45º by equal amounts, the ranges are equal…” Prove this statement.
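Galileo's claim can be checked numerically (an illustrative sketch; the launch speed of 20 m/s is arbitrary). With R = v0² sin(2θ)/g, angles equidistant from 45º give equal ranges because sin(90º + 2δ) = sin(90º - 2δ):

```python
import math

def horizontal_range(v0, theta_deg, g=9.8):
    """Range of a projectile returning to launch height: R = v0^2 sin(2*theta) / g."""
    return v0 ** 2 * math.sin(math.radians(2 * theta_deg)) / g

for delta in (5, 15, 30):
    # Elevations exceeding and falling short of 45 degrees by equal amounts...
    r_high = horizontal_range(20.0, 45 + delta)
    r_low = horizontal_range(20.0, 45 - delta)
    assert math.isclose(r_high, r_low)  # ...give equal ranges.

# And 45 degrees itself maximizes the range, since sin(90º) = 1.
assert horizontal_range(20.0, 45) > horizontal_range(20.0, 44.0)
```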
III. Uniform circular motion: motion around a circle at constant speed.
The magnitudes of the velocity and the acceleration are constant, but their directions vary continuously.
- Velocity: tangent to the circle in the direction of motion.
- Acceleration: centripetal, a = v²/r, directed toward the center of the circle.
- Period of revolution: T = 2πr/v.
1- A cat rides a merry-go-round turning with uniform circular motion. At time t1 = 2 s, the cat's velocity is v1 = (3 m/s)i + (4 m/s)j, measured on a horizontal xy coordinate system. At time t2 = 5 s its velocity is v2 = (-3 m/s)i + (-4 m/s)j. What are (a) the magnitude of the cat's centripetal acceleration and (b) the cat's average acceleration during the time interval t2 - t1?
In 3 s the velocity is reversed: the cat reaches the opposite side of the circle.
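The numbers in this solution can be verified directly. The speed is |v1| = 5 m/s; a reversal in 3 s means half a revolution, so T = 6 s; with r = vT/2π the centripetal acceleration v²/r equals 2πv/T; and the average acceleration is |v2 - v1|/Δt (a sketch of the arithmetic, not part of the original slides):

```python
import math

v1 = (3.0, 4.0)    # m/s at t1 = 2 s
v2 = (-3.0, -4.0)  # m/s at t2 = 5 s
dt = 5.0 - 2.0     # s

speed = math.hypot(*v1)                  # 5 m/s, constant in uniform circular motion
T = 2 * dt                               # reversed velocity -> half a period elapsed
a_centripetal = 2 * math.pi * speed / T  # v^2/r with r = v*T/(2*pi)
a_avg = math.hypot(v2[0] - v1[0], v2[1] - v1[1]) / dt

print(round(a_centripetal, 2))  # 5.24 (m/s^2)
print(round(a_avg, 2))          # 3.33 (m/s^2)
```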
IV. Relative motion (frames moving at constant relative velocity): a particle's velocity depends on the reference frame.
Observers in different frames of reference measure the same acceleration for a moving particle if their relative velocity is constant.
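That last statement can be illustrated with a toy trajectory: take x(t) = 1 + 2t + 3t² in frame A, view it from a frame B moving at a constant 2 m/s, and differentiate numerically. The velocities differ, but the accelerations coincide (a minimal sketch; the numbers are arbitrary):

```python
def x_a(t):
    """Position in frame A: x = 1 + 2t + 3t^2 (so a = 6 m/s^2)."""
    return 1.0 + 2.0 * t + 3.0 * t ** 2

def x_b(t, u=2.0):
    """Same motion seen from frame B, which moves at constant velocity u."""
    return x_a(t) - u * t

def accel(x, t, h=1e-3):
    """Central second-difference approximation of the acceleration."""
    return (x(t + h) - 2.0 * x(t) + x(t - h)) / h ** 2

for t in (0.0, 0.5, 1.0):
    assert abs(accel(x_a, t) - accel(x_b, t)) < 1e-6  # same acceleration...
    assert abs(accel(x_a, t) - 6.0) < 1e-6            # ...equal to 6 m/s^2
```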
Frontiers Commentary ARTICLE
In Chaotropy Lies Opportunity
- Department of Environmental Systems Sciences, Institute of Biogeochemistry and Pollutant Dynamics, Swiss Federal Institute of Technology Zurich, Zurich, Switzerland
by Zajc J., Džeroski S., Kocev D., Oren A., Sonjak S., Tkavc R., et al. (2014). Front. Microbiol. 5:708. doi: 10.3389/fmicb.2014.00708
The known distribution of microbial life on Earth is expanding thanks to improvements in technologies that enable contamination-free sampling of previously inaccessible environments, and the detection and cultivation of microbes from these locations (Lever et al., 2013; Priscu et al., 2013; Šantl-Temkiv et al., 2013; Inagaki et al., 2015). As novel Bacteria, Archaea, and microbial eukaryotes are discovered, and even familiar organisms reveal unknown adaptations to extreme temperature, pressure, radiation, pH, chemical toxicity, desiccation, or osmotic stress, notions of habitability have to be revised. In many cases the “biotic fringe,” i.e., the boundary separating environments that sustain life from environments believed to exclusively host abiotic processes (Shock, 2000), has to be adjusted to include new places that were previously considered devoid of life.
Parallel to knowledge on the distribution of life, the understanding of mechanisms determining where life can exist is transforming. Sustenance of life in any given locale requires that organisms conserve sufficient power to repair the inevitable biomolecule damage that is induced by their environment over time. This basal power requirement (BPR; Hoehler and Jørgensen, 2013) likely varies under different physicochemical conditions. For instance, rates of biomolecule damage on the monomeric (e.g., racemization, depurination) to macromolecular level (e.g., hydrolysis, changes to secondary structure) are a function of temperature. Despite adaptations that result in increased heat stability of their cellular building blocks, organisms adapted to hot environments have to spend orders of magnitude more power on the repair of biomolecules from temperature-related damage than their counterparts living at moderate temperatures (Lever et al., 2015). Comparable increases in BPR likely also result under other physiological extremes, e.g., high pressure, radiation, desiccation, pH, osmotic stress, chemical toxicity, and combinations thereof.
In recent decades, the understanding of how chemical toxicity affects organisms has become more nuanced. While certain toxins alter biomolecules on an intramolecular level, e.g., through oxidation or covalent bonding, others modify the macromolecular structure. In the latter category is a group called “chaotropes” (Greek chaos = disorder, tropy = behavior). Chaotropic compounds destabilize the two-dimensional structure and folding patterns of biomolecules by replacing water from the hydration shell and eliminating hydrogen bonds, or by penetrating into hydrophobic parts, thereby causing swelling or solubilization of these parts (reviewed in Ball and Hallsworth, 2015). These properties make concentrated solutions with chaotropic compounds, e.g., guanidine chloride or urea, suitable for biochemical and molecular biological applications requiring microbial cell lysis, protein or nucleic acid extraction, and/or denaturation. At sublethal concentrations, the damaging effects of chaotropes on biomolecules result in higher rates of biomolecule repair and increased synthesis and accumulation of compounds that stabilize biomolecules and thereby offset the negative effects of chaotropes—so-called “kosmotropes.” In nature, the maximum concentration of chaotropes, e.g., MgCl2 or CaCl2, that is tolerated by living organisms is thus to some extent linked to the concentration of compatible kosmotropes, e.g., NaCl (Oren, 1983).
Besides inducing stress, chaotropic compounds also have important beneficial effects on microorganisms. Psychrophilic fungi and fungi in NaCl-rich environments produce the chaotropic compound glycerol to increase biomolecule flexibility at low temperature and to shield enzymes from the damaging kosmotrope Na+, respectively, (Albertyn et al., 1994; Chin et al., 2010). Fungi inhabiting brines in sea ice and dry soils may also produce chaotropic sugars and sugar alcohols as compatible solutes to concentrated kosmotropes (Gunde-Cimerman et al., 2003; Rummel et al., 2014). Research on the Dead Sea, pioneered by Benjamin Elazari Volcani 80 years ago (Wilkansky, 1936), has shown that blooms of the dominant phytoplankton (Dunaliella spp.) occur only after temporary salinity decreases caused by winter floods (Oren, 1993). Dunaliella blooms typically overlap with blooms of halophilic Archaea, in which the chaotrope glycerol, produced by Dunaliella as an osmo-protectant, serves as a key energy substrate for Archaea (Oren, 1995). Environments with high salinities, low moisture content, or temperatures below the freezing point of water cover vast areas on Earth—and are widely inhabited by microorganisms. This suggests an important role for adaptations involving the tolerance to, or targeted synthesis of, chaotropic compounds. Yet, the role of chaotropic compounds in limiting or expanding the limits of life on Earth is poorly understood.
Zajc et al. (2014) contribute significantly to knowledge on the chaotropy limits of life by demonstrating for the first time that microbial fungi can tolerate high concentrations of the widespread chaotropes MgCl2 and CaCl2 in the absence of high concentrations of protective kosmotropes. The authors grew a total of 135 fungal strains with known halo- or xerotolerance, isolated from bitterns (MgCl2-rich brines) or the Dead Sea, or obtained from culture collections, at different concentrations of MgCl2 and CaCl2 in the laboratory. Several fungal strains grew at MgCl2 and CaCl2 concentrations of 2M in the absence of kosmotropes—a drastic extension of the previously known limit of 1.26M, which was for prokaryotes (Hallsworth et al., 2007). The fact that certain strains tolerated high concentrations of MgCl2 but not CaCl2, whereas others tolerated high concentrations of both salts, provokes the question of whether organisms have specifically evolved in response to chaotrope concentrations in the environment. So-called “chaophiles,” whose existence was first speculated upon by Hallsworth et al. (2007), or “chaotolerant” (this study) organisms might represent dominant organisms in environments where chaotropic stress is a frequent or permanent phenomenon, such as bitterns, salt deposits, surface, subterranean and submarine lakes, and brines. Future studies on the chaotropy limits in nature, combined with investigations on the effects of sublethal concentrations of chaotropes on microbial power requirements, will bring to light to what extent chaophily or chaotolerance influence and explain the distribution of life on Earth and elsewhere in the Universe.
Conflict of Interest Statement
The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Albertyn, J., Hohmann, S., Thevelein, J. M., and Prior, B. A. (1994). GPD1, which encodes glycerol-3-phosphate dehydrogenase, is essential for growth under osmotic stress in Saccharomyces cerevisiae, and its expression is regulated by the high-osmolarity glycerol response pathway. Mol. Cell. Biol. 14, 4135–4144. doi: 10.1128/MCB.14.6.4135
Chin, J. P., Megaw, J., Magill, C. L., Nowotarski, K., Williams, J. P., Bhaganna, P., et al. (2010). Solutes determine the temperature windows for microbial survival and growth. Proc. Natl. Acad. Sci. U.S.A. 107, 7835–7840. doi: 10.1073/pnas.1000557107
Gunde-Cimerman, N., Sonjak, S., Zalar, P., Frisvad, J. C., Diderichsen, B., and Plemenitaš, A. (2003). Extremophilic fungi in arctic ice: a relationship between adaptation to low temperature and water activity. Phys. Chem. Earth 28, 1273–1278. doi: 10.1016/j.pce.2003.08.056
Hallsworth, J. E., Yakimov, M. M., Golyshin, P. N., Gillion, J. L. M., D'Auria, G., De Lima Alves, F., et al. (2007). Limits of life in MgCl2-containing environments: chaotropicity defines the window. Environ. Microbiol. 9, 801–813. doi: 10.1111/j.1462-2920.2006.01212.x
Inagaki, F., Hinrichs, K.-U., Kubo, Y., Bowles, M. W., Heuer, V. B., Hong, W.-L., et al. (2015). Exploring deep microbial life in coal-bearing sediment down to ~2.5 km below the ocean floor. Science 349, 420–424. doi: 10.1126/science.aaa6882
Lever, M. A., Rogers, K., Lloyd, K. G., Overmann, J. O., Schink, B., Thauer, R. K., et al. (2015). Microbial life under extreme energy limitation: a synthesis of laboratory- and field-based investigations. FEMS Microbiol. Rev. 39, 688–728. doi: 10.1093/femsre/fuv020
Lever, M. A., Rouxel, O. J., Alt, J., Shimizu, N., Ono, S., Coggon, R. M., et al. (2013). Evidence for microbial carbon and sulfur cycling in deeply buried ridge flank basalt. Science 339, 1305–1308. doi: 10.1126/science.1229240
Priscu, J. C., Achberger, A. M., Cahoon, J. E., Christner, B. C., Edwards, R. L., Jones, W. L., et al. (2013). A microbiologically clean strategy for access to the whillans ice stream subglacial environment. Antarct. Sci. 25, 637–647. doi: 10.1017/S0954102013000035
Rummel, J. D., Beaty, D. W., Jones, M. A., Bakermans, C., Barlow, N. G., Boston, P. J., et al. (2014). A new analysis of mars “special regions”: findings of the seconde MEPAG special regions science analysis group (SR-SAG2). Astrobiology 14, 887–968. doi: 10.1089/ast.2014.1227
Šantl-Temkiv, T., Finster, K., Dittmar, T., Munk Hansen, B., Thyrhaug, R., Woetmann Nielsen, N., et al. (2013). Hailstones, a window into the microbial and chemical inventory of a storm cloud. PLoS ONE 8:e53550. doi: 10.1371/journal.pone.0053550
Keywords: chaotropic, kosmotropic, chaophilic, chaotolerant, extremophile, fungi, biotic fringe, astrobiology
Citation: Lever MA (2016) In Chaotropy Lies Opportunity. Front. Microbiol. 6:1505. doi: 10.3389/fmicb.2015.01505
Received: 22 October 2015; Accepted: 14 December 2015;
Published: 05 January 2016.
Edited by: Andreas Teske, University of North Carolina at Chapel Hill, USA
Reviewed by: Purificacion Lopez-Garcia, Centre National de la Recherche Scientifique, France
Paul Alan Hoskisson, University of Strathclyde, UK
Copyright © 2016 Lever. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Mark A. Lever, email@example.com | <urn:uuid:9a69e6c1-1b49-4630-934f-41c5535bc94a> | 2.53125 | 2,790 | Academic Writing | Science & Tech. | 49.423838 | 95,612,658 |
Yale University scientists have answered a 40-year-old question about Arctic ice thickness by treating the ice floes of the frozen seas like colliding molecules in a fluid or gas.
Although today's highly precise satellites do a fine job of measuring the area of sea ice, measuring the volume has always been a tricky business.
The volume is reflected through the distribution of sea ice thickness -- which is subject to a number of complex processes, such as growth, melting, ridging, rafting, and the formation of open water.
For decades, scientists have been guided by a 1975 theory (by Thorndike et al.) that could not be completely tested, due to the unwieldy nature of sea ice thickness distribution.
The theory relied upon an intransigent term -- one that could not be related to the others -- to represent the mechanical redistribution of ice thickness. As a result, the complete theory could not be mathematically tested.
Enter Yale professor John Wettlaufer, inspired by the staff and students at the Geophysical Fluid Dynamics Summer Study Program at the Woods Hole Oceanographic Institution, in Massachusetts.
Over the course of the summer, Wettlaufer and Yale graduate student Srikanth Toppaladoddi developed and articulated a new way of thinking about the space-time evolution of sea ice thickness.
The resulting paper appears in the Sept. 17 edition of the journal Physical Review Letters.
"The Arctic is a bellwether of the global climate, which is our focus. What we have done in our paper is to translate concepts used in the microscopic world into terms appropriate to this problem essential to climate," said Wettlaufer, who is the A.M. Bateman Professor of Geophysics, Mathematics and Physics at Yale.
Wettlaufer and co-author Toppaladoddi recast the old theory into an equation similar to a Fokker-Planck equation, a partial differential equation used in statistical mechanics to predict the probability of finding microscopic particles in a given position under the influence of random forces. By doing this, the equation could capture the dynamic and thermodynamic forces at work within polar sea ice.
"We transformed the intransigent term into something tractable and -- poof -- solved it," Wettlaufer said.
The researchers said their equation opens up the study of this aspect of climate science to a variety of methods normally used in nonequilibrium statistical mechanics.
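The flavor of such an equation can be conveyed with a generic drift-plus-noise (Langevin) simulation. This is purely schematic, not the actual Toppaladoddi–Wettlaufer equation, whose drift and diffusion terms are derived from ice physics: an ensemble of "floes" relaxing toward a mean thickness while being randomly kicked settles into a stationary distribution whose mean and variance are set by the balance of drift and noise.

```python
import random

random.seed(0)
k, h0, sigma = 1.0, 2.0, 0.5   # relaxation rate, mean thickness, noise strength (arbitrary)
dt, steps, n = 0.01, 2000, 1000

h = [0.0] * n                  # ensemble of "floes", all starting at zero thickness
for _ in range(steps):
    # Euler-Maruyama step for dh = -k (h - h0) dt + sigma dW
    h = [hi - k * (hi - h0) * dt + sigma * dt ** 0.5 * random.gauss(0.0, 1.0)
         for hi in h]

mean = sum(h) / n
var = sum((hi - mean) ** 2 for hi in h) / n
assert abs(mean - h0) < 0.1                    # drift pins the mean thickness
assert abs(var - sigma ** 2 / (2 * k)) < 0.03  # stationary variance sigma^2/(2k)
```

Replacing the linear drift and constant noise with thickness-dependent growth, melt, and ridging terms is, loosely speaking, the step that the Fokker-Planck treatment makes tractable.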
Jim Shelton | EurekAlert!
co-hosted episode with Kevin from New Zealand and Wolfgang from Germany
This episode we want to dedicate to the animals and the burden inflicted on the other species by us, Homo sapiens. In times of abrupt climate change and ongoing anthropogenic climate disruption, there is no doubt that we are in the sixth mass extinction event. Billions of animals are dying and suffering, and 200 species (of animals and plants) are going extinct every day.
So this may be a grief session about the suffering around the world, but I hope we'll find a way not to forget the gift and the beauty we receive from sharing this planet with all these creatures.
Warning of ‘ecological Armageddon’ after dramatic plunge in insect numbers
"Insects make up about two-thirds of all life on Earth [but] there has been some kind of horrific decline," said Prof Dave Goulson of Sussex University, UK, and part of the team behind the new study. "We appear to be making vast tracts of land inhospitable to most forms of life, and are currently on course for ecological Armageddon. If we lose the insects then everything is going to collapse."
Although scientists have long understood that animals – through ingestion, digestion, breathing and decomposition – are part of the carbon cycle, the work, published Oct. 9 in Nature Ecology and Evolution is the first to suggest the importance of animal biodiversity rather than just animal numbers in the carbon cycle.
If we want to increase carbon sequestration, we have to preserve not only high numbers of animals but also many different species, the authors said.
The researchers found that soil had the highest carbon concentrations where they saw the most vertebrate species. When they looked for a mechanism that could explain this relationship, it turned out that the areas with highest animal diversity had the highest frequency of feeding interactions, such as animals preying on other animals or eating fruit, which results in organic material on and in the ground.
The Guardian 27-oct-2016
The number of wild animals living on Earth is set to fall by two-thirds by 2020, according to a new report, part of a mass extinction that is destroying the natural world upon which humanity depends.
The analysis, the most comprehensive to date, indicates that animal populations plummeted by 58% between 1970 and 2012, with losses on track to reach 67% by 2020.
- here Paul Beckwith discusses the insect collapse
- The Counter Punch article from Robert Hunziker
- Natural News Coverage of the insect collapse
- Song of The Blue Dolphin by Andy Blackwood – cc-by (personal use)
- Kevin Hester‘s site
Top and Bottom Margin for Frame. When the object is serialized out as xml, its qualified name is w:marH.
Assembly: DocumentFormat.OpenXml (in DocumentFormat.OpenXml.dll)
'Declaration
Public Class MarginHeight _
    Inherits PixelsMeasureType
'Usage
Dim instance As MarginHeight
public class MarginHeight : PixelsMeasureType
[ISO/IEC 29500-1 1st Edition]
marH (Top and Bottom Margin for Frame)
This element specifies the top and bottom margin height for a single frame in a frameset document, as follows:
This height is expressed in pixels.
If this element is omitted, then no top or bottom margin shall be used for this frame.
[Example: Consider a document that has a frame, where the margin height has been specified and is represented as the following WordprocessingML:
<w:frame> <w:marH w:val="594"/> </w:frame>
The marH element has a val attribute value of 594, which specifies that this frame has a top and bottom margin value of 594 pixels, resulting in 594 pixels of space between the content and the top and bottom margins of the frame. end example]
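The fragment above can be produced with any XML library. For illustration only (the class documented here belongs to the .NET Open XML SDK), here is the same element built in Python with the standard library, using the WordprocessingML main namespace:

```python
import xml.etree.ElementTree as ET

# Standard WordprocessingML main namespace, bound to the "w" prefix.
W = "http://schemas.openxmlformats.org/wordprocessingml/2006/main"
ET.register_namespace("w", W)

frame = ET.Element(f"{{{W}}}frame")
mar_h = ET.SubElement(frame, f"{{{W}}}marH")
mar_h.set(f"{{{W}}}val", "594")  # 594 px top and bottom frame margin

xml = ET.tostring(frame, encoding="unicode")
assert "w:marH" in xml and 'w:val="594"' in xml
```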
val (Measurement in Pixels)
Specifies a value whose contents shall contain a positive whole number, whose contents consist of a positive measurement in pixels.
The contents of this measurement shall be interpreted based on the context of the parent XML element.
[Example: Consider an attribute value of 960 whose simple type is ST_PixelsMeasure. This attribute value specifies a value of 960 pixels. end example]
The possible values for this attribute are defined by the ST_PixelsMeasure simple type (§17.18.67).
[Note: The W3C XML Schema definition of this element’s content model (CT_PixelsMeasure) is located in §A.1. end note]
© ISO/IEC29500: 2008.
Any public static (Shared in Visual Basic) members of this type are thread safe. Any instance members are not guaranteed to be thread safe. | <urn:uuid:aca429d7-ea80-4fe0-92ec-d1753e57cc2c> | 2.671875 | 461 | Documentation | Software Dev. | 51.080791 | 95,612,700 |
Nanochemistry is the combination of chemistry and nanoscience. It is associated with the synthesis of building blocks whose behavior depends on size, surface, shape, and defect properties. Nanochemistry is used in chemical, materials, and physical science, as well as in engineering and in biological and medical applications. Nanochemistry and the other nanoscience fields share the same core concepts, but they put those concepts to different uses.
The nano prefix was attached to nanochemistry when scientists observed unusual changes in materials at nanometer scales. Several chemical modifications of nanometer-scale structures confirm that their effects are size dependent.
Nanochemistry is characterized by the concepts of size, shape, self-assembly, defects, and bio-nano, so the synthesis of any new nano-construct involves all of these concepts. Nano-construct synthesis depends on how surface, size, and shape lead to self-assembly of the building blocks into functional structures; these structures likely have functional defects and may be useful for electronic, photonic, medical, or bioanalytical problems.
Silica, gold, polydimethylsiloxane, cadmium selenide, iron oxide, and carbon are materials that show the transformative power of nanochemistry. Nanochemistry can turn iron oxide (rust) into the most effective MRI contrast agent, able to detect cancers and even kill them at their initial stages. Silica (glass) can be used to bend or stop light in its tracks. Developing countries also use silicone to make microfluidic circuits that bring the developed world's pathogen-detection abilities within reach. Carbon has been used in different shapes and forms, and it may become a better choice for electronic materials.
Overall, nanochemistry is not about the atomic structure of compounds. Rather, it is about different ways to transform materials into solutions to problems. Chemistry mainly deals with the degrees of freedom of atoms in the periodic table; nanochemistry, however, brought additional degrees of freedom that control a material's behavior.
Nanochemical methods can be used to create carbon nanomaterials such as carbon nanotubes (CNT), graphene and fullerenes which have gained attention in recent years due to their remarkable mechanical and electrical properties.
Nanotopography refers to surface features which appear on the nanoscale. In industry, applications of nanotopography typically encompass electronics and artificially produced surface features. However, natural surface features are also included in this definition, such as molecular-level cell interactions and the textured organs of animals and plants. These nanotopographical features in nature serve distinctive purposes that aid in the regulation and function of the organism, as cells are extremely sensitive to nanotopographical features.
Nanolithography is the process by which nanotopographical etchings are artificially produced on a surface. Many practical applications make use of nanolithography, including semiconductor chips in computers. There are many types of nanolithography, which include:
- Electron beam lithography (EBL)
- X-ray lithography
- Extreme ultraviolet lithography (EUVL)
- Light coupling nanolithography (LCM)
- Scanning probe microscope lithography (SPM)
- Nanoimprint lithography
- Dip-Pen nanolithography
- Soft lithography
Each nanolithography technique has varying factors of resolution, time consumption, and cost. There are three basic methods used by nanolithography. One involves using a resist material which acts as a "mask" to cover and protect the areas of the surface that are intended to be smooth. The uncovered portions can now be etched away, with the protective material acting as a stencil. The second method involves directly carving the desired pattern. Etching may involve using a beam of quantum particles, such as electrons or light, or chemical methods such as oxidation or SAM's (self-assembled monolayers). The third method places the desired pattern directly on the surface, producing a final product that is ultimately a few nanometers thicker than the original surface. In order to visualize the surface to be fabricated, the surface must be visualized by a nano-resolution microscope, which include the scanning probe microscope (SPM) and the atomic force microscope (AFM). Both microscopes can also be engaged in processing the final product.
One of the methods of nanolithography is use of self-assembled monolayers (SAM) which develops soft methodology. SAMs are long chain alkanethiolates that are self-assembled on gold surfaces making a well-ordered monolayer films. The advantage of this method is to create a high quality structure with lateral dimensions of 5 nm to 500 nm. In this methodology a patterned elastomer made of polydimethylsiloxane (PDMS) as a mask is usually used. In order to make a PDMS stamp, the first step is to coat a thin layer of photoresist onto a silicon wafer. The next step is to expose the layer with UV light, and the exposed photoresist is washed away with developer. In order to reduce the thickness of the prepolymer, the patterned master is treated with perfluoroalkyltrichlorosilane. These PDMS elastomers are used to print micron and submicron design chemical inks on both planar and curved surfaces for different purposes.
One highly researched application of nanochemistry is medicine. A simple skin-care product using the technology of nanochemistry is sunscreen. Sunscreen contains nanoparticles of zinc oxide and titanium dioxide. These nanochemicals protect the skin against harmful UV light by absorbing or reflecting the light and prevent the skin from retaining full damage by photoexcitation of electrons in the nanoparticle. Effectively, the excitation of the particle blocks skin cells from DNA damage.
Emerging methods of drug delivery involving nanotechnological methods can be advantageous by improving increased bodily response, specific targeting, and efficient, non-toxic metabolism. Many nanotechnological methods and materials can be functionalized for drug delivery. Ideal materials employ a controlled-activation nanomaterial to carry a drug cargo into the body. Mesoporous silica nanoparticles (MSN) have been increasing in research popularity due to its large surface area and flexibility for various individual modifications while demonstrating high resolution performance under imaging techniques. Activation methods greatly vary across nanoscale drug delivery molecules, but the most commonly used activation method uses specific wavelengths of light to release the cargo. Nanovalve-controlled cargo release uses low intensity light and plasmonic heating to release the cargo in a variation of MSN containing gold molecules. The two-photon activated photo-transducer (2-NPT) uses near IR wavelengths of light to induce breaking of a disulfide bond to release the cargo. Recently, nanodiamonds have demonstrated potential in drug delivery due to non-toxicity, spontaneous absorption through the skin, and ability to enter the blood-brain barrier.
Because cells are very sensitive to nanotopographical features, optimization of surfaces in tissue engineering has pushed the frontiers towards implantation. Under the appropriate conditions, a carefully crafted 3-dimensional scaffold is used to direct cell seeds towards artificial organ growth. The 3-D scaffold incorporates various nanoscale factors that control the environment for optimal and appropriate functionality. The scaffold is an analog of the in vivo extracellular matrix in vitro, allowing for successful artificial organ growth by providing the necessary, complex biological factors in vitro. Additional advantages include the possibility of cell expression manipulation, adhesion, and drug delivery.
For abrasions and wounds, nanochemistry has demonstrated applications in improving the healing process. Electrospinning is a polymerization method used biologically in tissue engineering, but can be functionalized for wound dressing as well as drug delivery. This produces nanofibers which encourage cell proliferation, antibacterial properties, and controlled environment. These properties have been created in macroscale; however, nanoscale versions may show improved efficiency due to nanotopographical features. Targeted interfaces between nanofibers and wounds have higher surface area interactions and are advantageously in vivo.
New developments in nanochemistry provide a variety of nanostructure materials with significant properties that are highly controlable. Some of the application of these nanostructure materials include SAMs and lithography, use of nanowires in sensors, and nanoenzymes.
Scientist also devised a large number of nanowire compositions with controlled length, diameter, doping, and surface structure by using vapor and solution phase strategies. These oriented single crystals are being used in semiconductor nanowire devices such as diodes, transistors, logic circuits, lasers and sensors. Since nanowires have one dimensional structure meaning large surface to volume ratio, the diffusion resistance decreases. In addition, their efficiency in electron transport which is due to the quantum confinement effect, make their electrical properties be influenced by minor perturbation. Therefore, use of these nanowires in nanosensor elements increases the sensitivity in electrode response. As mentioned above, one dimensionality and chemical flexibility of the semiconductor nanowires make them applicable in nanolasers. Peidong Yang and his co-workers have done some research on room-temperature ultraviolet nanowire nanolasers in which the significant properties of these nanolasers have been mentioned. They have concluded that using short wavelength nanolasers have applications in different fields such as optical computing, information storage, and microanalysis.
Nanoenzymes (or Nanozymes)
Nanostructure materials mainly used in nanoparticle-based enzymes have drawn attraction due to the specific properties they show. Very small size of these nanoenzymes (or nanozymes) (1–100 nm) have provided them unique optical, magnetic, electronic, and catalytic properties. Moreover, the control of surface functionality of nano particles and predictable nanostructure of these small sized enzymes have made them to create a complex structure on their surface which in turn meet the needs of specific applications
Fluorescent nanoparticles have broad applications, but their use into macroscopic arrays allows them to be used efficiently in applications of plasmonics, photonics and quantum communications that makes them highly sought after. While there are many methods in assembling nanoparticles array, especially gold nanoparticles, they tend to be weakly bonded to their substrate so it can’t be used for wet chemistry processing steps or lithography. Nanodiamonds allow for a greater variability in access that can subsequently be used to couple plasmonic waveguides to realize quantum plasmonics circuitry.
Nanodiamonds can be synthesized by employing nanoscale carbonaceous seeds that are fabricated by a single step using a mask-free electron beam induced position technique to add amine groups to self-assemble nanodiamonds into arrays. The presence of dangling bonds at the nanodiamond surface allows them to be functionalized with a variety of ligands. The surfaces of these nanodiamonds are terminated with carboxylic acid groups, enabling their attachment to amine-terminated surfaces through carbodiimide coupling chemistry. This process gives a high yield do that this method relies on covalent bonding between the amine and carboxyl functional groups on amorphous carbon and nanodiamond surfaces in the presence of EDC. Thus unlike gold nanoparticle they can withstand processing and treatment, for many device applications.
Fluorescent (nitrogen vacancy)
Fluorescent properties in nanodiamonds arise from the presence of nitrogen vacancy (NV) centers, nitrogen atom next to a vacancy. NV centres can be created by irradiating nanodiamond with high-energy particles (electrons, protons, helium ions), followed by vacuum-annealing at 600–800 °C. Irradiation forms vaccines in the diamond structure while vacuum-annealing migrates these vacancies, which will get trapped by nitrogen atoms within the nanodiamond. This process produces two types of NV centers. Two types of NV centers are formed—neutral (NV0) and negatively charged (NV–)—and these have different emission spectra. The NV– centre is of particular interest because it has an S = 1 spin ground state that can be spin-polarized by optical pumping and manipulated using electron paramagnetic resonance. Fluorescent nanodiamonds combine the advantages of semiconductor quantum dots (small size, high photostability, bright multicolor fluorescence) with biocompatibility, non-toxicity and rich surface chemistry, which means that they have the potential to revolutionize in vivo imaging application.
Drug-delivery and biological compatibility
Nanodiamonds have the ability to self-assemble and a wide range of small molecules, proteins antibodies, therapeutics and nucleic acids can bind to its surface allow for drug delivery, protein-mimicking and surgical implants. Other potential biomedical applications are the use of nanodiamonds as a support for solid-phase peptide synthesis and as sorbents for detoxification and separation and fluorescent nanodiamonds for biomedical imaging. Nanodiamonds are capable of biocompatibility, the ability to carry a broad range of therapeutics, dispersibility in water and scalability and thee potential for targeted therapy all properties needed for a drug delivery platform. The small size, stable core, rich surface chemistry, ability to self-assemble and low cytotoxicity of nanodiamonds have led to suggestions that they could be used to mimic globular proteins. Nanodiamonds have been mostly studied as potential injectable therapeutic agents for generalized drug delivery, but it has also been shown that films of parylene nanodiamond composites can be used for localized sustained release of drugs over periods ranging from two days to one month.
Monodispurse, nanometer-size clusters (also known as nanoclusters) are synthetically grown crystals whose size and structure influence their properties through the effects of quantum confinement. One method of growing these crystals is through inverse micellar cages in non aqueous solvents. Research conducted on the optical properties of MoS2 nanoclusters compared them to their bulk crystal counterparts and analyzed their absorbance spectra. The analysis reveals that size dependence of the absorbance spectrum by bulk crystals is continuous, whereas the absorbance spectrum of nanoclusters takes on discrete energy levels. This indicates a shift from solid-like to molecular-like behavior which occurs at a reported cluster size of 4.5 – 3.0 nm.
Interest in the magnetic properties of nanoclusters exists due to their potential use in magnetic recording, magnetic fluids, permanent magnets, and catalysis. Analysis of Fe clusters shows behavior consistent with ferromagnetic or superparamagnetic behavior due to strong magnetic interactions within clusters.
This section does not cite any sources. (December 2016) (Learn how and when to remove this template message)
There are several researchers in nanochemistry that have been credited with development of the field. Geoffrey A. Ozin, from the University of Toronto, is known as one of the "founding fathers of Nanochemistry" due to his four and a half decades of research on this subject. This research includes the study of Matrix isolation laser Raman spectroscopy, naked metal clusters chemistry and photochemistry, nanoporous materials, hybrid nanomaterials, mesoscopic materials, and ultrathin inorganic nanowires.
Another chemist who is also viewed as one of nanochemistry's pioneers is Charles M. Lieber at Harvard University. He is known for his contributions in the development of nano-scale technologies, particularly in the field of biology and medicine. The technologies include nanowires, a new class of quasi-one dimensional materials that have demonstrated superior electrical, optical, mechanical, and thermal properties and can be used potentially as biological sensors. Research under Lieber has delved into the use of nanowires for the purpose of mapping brain activity.
Shimon Weiss, a professor at the University of California, Los Angeles, is known for his research of fluorescent semiconductior nanocrystals, a subclass of quantum dots, for the purpose of biological labeling. Paul Alivisatos, from the University of California Berkley, is also notable for his research on the fabrication and use of nanocrystals. This research has the potential to develop insight into the mechanisms of small scale particles such as the process of nucleation, cation exchange, and branching. A notable application of these crystals is the development of quantum dots.
Peidong Yang, another researcher from the University of California, Berkley, is also notable for his contributions to the development of 1-dimensional nanostructures. Currently, the Yang group has active research projects in the areas of nanowire photonics, nanowire-based solar cells, nanowires for solar to fuel conversion, nanowire thermoelectrics, nanowire-cell interface, nanocrystal catalysis, nanotube nanofluidics, and plasmonics.
- Cademartiri, Ludovico; Ozin, Geoffrey (2009). Concepts of Nanochemistry. Germany: Wiley VCH. pp. 4–7. ISBN 978-3527325979.
- "Nanolithography Overview - Definition and Various Nanolithography Techniques". AZO Nano.
- "What is Nanolithography? - How Nanolithography Works?". Wifi Notes.
- Ozin,, Geoffery A (2009). Nanochemistry: A Chemical Approach to Nanochemistry. pp. 59–62. ISBN 9781847558954.
- "Uses of nanoparticles of titanium(IV) oxide (titanium dioxide, TiO2)". Doc Brown's Chemistry Revision Notes NANOCHEMISTRY.
- Bharti, Charu. "Mesoporous silica nanoparticles in target drug delivery system: A review". Int J Pharm Investig. 5: 124–33. doi:10.4103/2230-973X.160844. PMC . PMID 26258053.
- "Nanovalve-Controlled Cargo Release Activated by Plasmonic Heating". American Chemical Society. Journal of the American Chemical Society.
- Zink, Jeffrey. "Photo-redox activated drug delivery systems operating under two photon excitation in the near-IR" (PDF). Royal Society of Chemistry. Royal Society of Chemistry.
- Langer, Robert. "Nanotechnology in Drug Delivery and Tissue Engineering: From Discovery to Applications". Nano Lett. 10: 3223–30. Bibcode:2010NanoL..10.3223S. doi:10.1021/nl102184c. PMC . PMID 20726522.
- Kingshott, Peter. "Electrospun nanofibers as dressings for chronic wound care" (PDF). Materials Views. Macromolecular Bioscience.
- Xiang, Dong-xi; Qian Chen; Lin Pang; Cong-long Zheng (17 September 2011). "Inhibitory effects of silver nanoparticles on H1N1 influenza A virus in vitro". Journal of Virological Methods. 178: 137–142. doi:10.1016/j.jviromet.2011.09.003. ISSN 0166-0934.
- Liu, Junqiu (2012). Selenoprotein and Mimics. pp. 289–302. ISBN 978-3-642-22236-8.
- Huang, Michael (2001). "Room Temperature Ultraviolet Nanowire Nanolasers". Science.
- Aravamudhan, Shyam. "Development of Micro/Nanosensor elements and packaging techniques for oceanography".
- Kianinia, Mehran; Shimoni, Olga; Bendavid, Avi; Schell, Andreas W.; Randolph, Steven J.; Toth, Milos; Aharonovich, Igor; Lobo, Charlene J. (2016-01-01). "Robust, directed assembly of fluorescent nanodiamonds". Nanoscale. 8 (42): 18032–18037. arXiv: . doi:10.1039/C6NR05419F.
- Hinman, Jordan (October 28, 2014). "Fluorescent Diamonds" (PDF). University of Illinois at Urbana–Champaign. University of Illinois at Urbana–Champaign.
- Mochalin, Vadym N.; Shenderova, Olga; Ho, Dean; Gogotsi, Yury (2012-01-01). "The properties and applications of nanodiamonds". Nature Nanotechnology. 7 (1): 11–23. Bibcode:2012NatNa...7...11M. doi:10.1038/nnano.2011.209. ISSN 1748-3387.
- Wilcoxon, J.P. (October 1995). "Fundamental Science of Nanometer-Size Clusters" (PDF). Sandia National Laboratories.
- Ozin, Geoffrey (2014). Nanochemistry Views. Toronto. p. 3.
- Lin Wang, Zhong (2003). Nanowires and Nanobelts: Materials, Properties, and Devices: Volume 2: Nanowires and Nanobelts of Functional Materials. Spring Street, New York, NY 10013, USA: Springer. pp. ix.
- J.W. Steed, D.R. Turner, K. Wallace Core Concepts in Supramolecular Chemistry and Nanochemistry (Wiley, 2007) 315p. ISBN 978-0-470-85867-7
- Brechignac C., Houdy P., Lahmani M. (Eds.) Nanomaterials and Nanochemistry (Springer, 2007) 748p. ISBN 978-3-540-72993-8
- H. Watarai, N. Teramae, T. Sawada Interfacial Nanochemistry: Molecular Science and Engineering at Liquid-Liquid Interfaces (Nanostructure Science and Technology) 2005. 321p. ISBN 978-0-387-27541-3
- Ozin G., Arsenault A.C., Cademartiri L. Nanochemistry: A Chemical Approach to Nanomaterials 2nd Eds. (Royal Society of Chemistry, 2008) 820p. ISBN 978-1847558954
- Kenneth J. Klabunde; Ryan M. Richards, eds. (2009). Nanoscale Materials in Chemistry (2nd ed.). Wiley. ISBN 978-0-470-22270-6. | <urn:uuid:e50d2d01-da95-4bd1-a251-de6ede32503f> | 3.515625 | 4,711 | Knowledge Article | Science & Tech. | 23.420533 | 95,612,701 |
Frozen surficial sediments that are otherwise unconsolidated contain structures and characteristics that are different from those of the same sediments when in an unfrozen state. These differences are usually related to either the nature of the ice contained within the frozen sediment or to weathering processes and chemical precipitates that are associated with freezing and thawing. This paper summarizes (a) the manner in which ground freezes when a landscape experiences the onset of cold-climate conditions and (b) what happens when newly transported sediments freeze following deposition in that environment. In the absence of obvious morphological evidence, the recognition of previously-frozen sediments is problematic. Less well-understood evidence includes secondary precipitates, neoformed clay minerals, seasonal frost cracks and fragipans. | <urn:uuid:095139ca-dd6d-4d18-89d6-c85579e25558> | 3.0625 | 155 | Academic Writing | Science & Tech. | -0.12 | 95,612,705 |
Mari Kimura is a New York-based solo violinist who lectures at the renowned Juilliard School of Music. She is one of very few people who can produce controlled subharmonic tones on the violin, and she has developed this ability into a signature feature of her compositions and improvisations. The sounds she plays on the violin are normally only heard from a cello.
"I have done this for ten years, and researchers in the US and Japan have tried to figure it out for just as long. I don't really know what it is I do, because I have an empirical approach to it. It all happens by trial and error", says Kimura.
Solving the mystery
Scientists from Stanford, Columbia and Tokyo University are among those who found the phenomenon interesting. However, they did not have the necessary combination of competence in physics and interest in music to work exhaustively on figuring out Kimura's subharmonic violin pitch. In Tromsø, however, Kimura found the right kind of scientists, able to measure and explain the phenomenon.
"We have definitely what it takes to solve this mystery. We have worked with strange and exotic sound systems earlier, and we have the ability to make good measurements, correct theoretical modelling and of course the necessary musical insight and interest", says the physics professor Alfred Hanssen.
The precise measurements of Kimura's low-pitched sounds were made in the anechoic (echo-free) chamber at the University Hospital. By applying even pressure on the string with fine, steady movements of the bow, Kimura can conjure many different tones from a single place on the string. Measurements of these fascinating sounds will be used in research for years to come.
"Kimura makes a violin string vibrate in a totally new way. In physics we call this a driven and damped non-linear system, which we are particularly preoccupied with in our research. By understanding the way she plays the violin, we are contributing to the understanding of similar processes in nature", says Hanssen.
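To give a flavour of what a "driven and damped non-linear system" means here: subharmonic response (oscillation at a fraction of the driving frequency) is a generic feature of such systems. The sketch below is purely illustrative and is not the researchers' model of a bowed string; it integrates a Duffing oscillator, a textbook driven, damped non-linear oscillator, with arbitrary demonstration parameters.

```python
# Illustrative sketch only: a driven, damped non-linear oscillator (Duffing),
#   x'' + d*x' + a*x + b*x^3 = F*cos(w*t)
# This is NOT the bowed-string model from the Tromsø research; all parameter
# values are arbitrary choices for demonstration.
import math

def duffing_step(x, v, t, dt, d=0.3, a=-1.0, b=1.0, F=0.37, w=1.2):
    """Advance (x, v) by one RK4 step of the Duffing equation."""
    def acc(x, v, t):
        # acceleration from damping, stiffness, cubic non-linearity and drive
        return -d * v - a * x - b * x**3 + F * math.cos(w * t)
    k1x, k1v = v, acc(x, v, t)
    k2x, k2v = v + 0.5*dt*k1v, acc(x + 0.5*dt*k1x, v + 0.5*dt*k1v, t + 0.5*dt)
    k3x, k3v = v + 0.5*dt*k2v, acc(x + 0.5*dt*k2x, v + 0.5*dt*k2v, t + 0.5*dt)
    k4x, k4v = v + dt*k3v, acc(x + dt*k3x, v + dt*k3v, t + dt)
    x += dt/6.0 * (k1x + 2*k2x + 2*k3x + k4x)
    v += dt/6.0 * (k1v + 2*k2v + 2*k3v + k4v)
    return x, v

def simulate(n=20000, dt=0.01):
    """Return the displacement trajectory over n steps."""
    x, v, t = 1.0, 0.0, 0.0
    xs = []
    for _ in range(n):
        x, v = duffing_step(x, v, t, dt)
        t += dt
        xs.append(x)
    return xs

xs = simulate()
# The motion stays bounded, and at suitable drive amplitudes the response
# period doubles relative to the drive, i.e. subharmonic content appears.
print(max(abs(x) for x in xs))
```

On a bowed string the non-linearity comes from stick–slip friction between bow and string rather than a cubic stiffness term, but the qualitative mechanism for subharmonics is the same class of behaviour.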
Mari Kimura too hopes to take advantage of the results that Professor Hanssen and his assistants, PhD candidate Heidi Hindberg and postdoc Tor Arne Øigård, achieve with their scientific approach.
"My ambition is to find out if there is more that I can do, if there is something to reach for. As an artist you are always searching for ways to expand the sound, to expand the use of violin as an instrument".
By: Maja Sojtaric
Blog text by Petri Nummi, Eeva-Maria Suontakanen, Sari Holopainen and Veli-Matti Väänänen “Beavers facilitate Teals at different scales” is now available on the Ibis website.
In a remote country lived a rich mire species community. But that was once upon a time, when Finland was a land of mires. Nowadays, only fragmented pieces are left in the southern region, while large natural mires can still be found in Lapland. Nevertheless, only one third of the historical area remains. Most mires were drained for farming and forestry: ditches were dug to drain water from ca. 6 million hectares of mires. This affected the hydrology, and further the ecology, of these wet ecosystems. Several plant and animal species are adapted to mires and have thus suffered from habitat loss and fragmentation. For example, forest grouse and bean geese (Anser fabalis) utilize mires during their breeding period. Due to ditching, mires stop producing their ecosystem services: berry production and game bird populations (cultural and provisioning ecosystem services) decrease, and the recreational values of the areas lessen.
Finland has about 10 million hectares of drained mires, more than half of which have been utilized by forestry. However, about a fifth of this area does not produce wood well enough to be profitable. After several centuries of mire destruction, a change is now in the air: Finnish mires are being restored with increasing effort. For example, in 2017 Metsähallitus (the Park and Forest Service) began an EU-funded project called Hydrology LIFE, which restores and protects not just mires, but also small water bodies and important bird lakes in 103 Natura 2000 areas.
Hydrology is the most important issue to consider when restoring a mire. Blocking ditches leads to changes in water balance, and eventually to active peat formation, which is basically the definition of a mire. After the ditches are blocked, water levels normally rise rapidly to correspond with the natural situation. However, actual peatland processes return at a much slower speed. Forest vegetation is slowly replaced by mire vegetation, starting from the ditches. The processes take a long time, so whether or not the original mire ecosystem returns is yet to be seen. It is also possible that we are actually just creating new mire types.
Helping forest grouse
Peatland–forest ecotones are key environments for forest grouse, but unfortunately these areas are becoming very rare. The willow ptarmigan (Lagopus lagopus) has suffered from mire fragmentation in Finland. Ptarmigan habitats are fragmented especially in Southern Finland, leaving small populations living far from each other. Luckily, local people are usually interested in peatland restoration that helps species such as the willow ptarmigan, and several good examples exist of ptarmigans accepting restored peatlands. The Finnish Association for Nature Conservation's project "SuoMaa", which began in 2016, targets protecting and restoring taiga nature. One of its aims is to restore peatlands to support and enlarge a ptarmigan breeding peatland network and to create connections between strong and threatened populations.
A few decades ago the whooper swan (Cygnus cygnus) was an endangered and rare species in Finland. It bred only in remote lakes, and people rarely saw it. The population increase of whooper swans after protection is one of the success stories of Finnish nature conservation; nowadays the swans can be heard calling all around Finland. The whooper swan is a large bird and thus consumes a lot of vegetation. Water horsetail (Equisetum fluviatile) is one of its favourites.
Certain other species also prefer water horsetails. For example, wigeon (Mareca penelope) broods forage within the horsetail growths searching for emerging invertebrates. A study published earlier this year showed that the water horsetail is disappearing from Finnish and Swedish lakes. The reasons for this pattern are unknown, but one possible explanation could be increased grazing pressure. Whooper swans effectively utilize horsetails, and swan grazing was therefore suspected to be influencing the disappearance of the horsetail. Wigeon populations have concurrently shown a worrying decrease.
A recently published study of 60 Finnish and Swedish lakes utilized vegetation and waterbird data gathered in the early 1990s and in 2016. The study area covers the boreal zone widely, reaching from southern Sweden to Finnish Lapland. The whooper swan population increased strongly during the study period. The researchers studied whether whooper swans' grazing on water horsetail is causing the negative trend in the wigeon population. Pair counts were used to describe waterbird communities, so any changes occurring during the brood time were not captured.
The study showed that whooper swans strongly preferred lakes with horsetails during the 1990s, but this connection is no longer seen. While the number of swan-occupied lakes has increased, the number of horsetail lakes has decreased dramatically. However, it appears that swans and disappearing horsetails are not associated, because the horsetail has also disappeared from lakes where swans do not occur, and has even increased in some swan-occupied lakes.
The number of lakes used by wigeon has decreased, but swans are apparently not causing this. Wigeon loss has not been stronger on lakes occupied by swans; quite the opposite, as wigeons and swans appear to correlate positively. Even though wigeons prefer horsetail lakes, their disappearance is not associated with the horsetail loss occurring in the study lakes, which suggests that wigeons can also utilize other lake types. On the other hand, the researchers note that this study did not consider the critical brood time, when foraging opportunities among the horsetail growths are especially important. Thus it may still be possible that wigeons are affected by horsetail loss, but that this effect only appears during the brood time.
While scientists struggle with short-term funding periods, the curiosity for nature shown by the general public can unearth mechanisms that can only be found in long-term datasets. The persistent and systematic observations made by nature enthusiasts enable research on climate change or life-history traits over several generations. Both are issues that require long-term research, and a lot of time and effort. Below are some examples of remarkable work done by citizen scientists curious about nature.
16 000 ringed goldeneyes have passed through the hands of a Finnish fireman
Finnish fireman Pentti Runko has collected systematic data on goldeneyes for several scientific studies. Since starting his work in 1984, Runko had by 2017 ringed an amazing 16 000 goldeneyes, checking several hundred nest boxes every year.
In a recently published study, the authors utilized data on 14 000 of the goldeneyes ringed by Runko between 1984 and 2014. Among these goldeneyes were 141 females that were ringed as ducklings and recaptured later in the area. Based on these data it was possible to follow the recruited females' lives from hatching to breeding. The early-life circumstances of these females are therefore known and can be used to study their effects later in life. In some cases early-life circumstances have lasting consequences for subsequent life, for example for breeding performance.
The study was able to show differences among individuals during the first breeding years and how early-life circumstances affected the breeding statistics of these females. Most females began breeding at the age of 2, but 44% delayed the start of breeding. Winter severity during the first two years affected the timing of breeding, but did not affect the year in which females began breeding. It thus appears that certain traits buffer the effects of severe conditions during the first weeks of life, so the females' breeding parameters are not affected. The research also showed that first-time breeders tend to begin breeding later than the year-specific averages.
The authors of another study used a set of 405 females and their offspring ringed by Runko, and found that female condition matters for breeding success. Older, early-nesting females in good body condition and with larger broods produced more female recruits for the local population. The later a female bred, the fewer recruits she produced. The study also showed that females tend to adjust their breeding to the ice-out dates of lakes, although differences in flexibility were observed among females. Because early-breeding goldeneyes succeed better, the authors conclude that selection favours early-breeding individuals.
Climate change effects can also be observed in goldeneye phenology. Runko's data show that during the last 30 years goldeneyes have advanced their egg-laying dates by 12 days.
45 years of starling surveys in a farmer’s backyard reveal climate warming
The Danish Ornithological Society's journal recently published a study that utilized data gathered by a Danish farmer who ringed starlings for 45 years. Dairy farmer Peder V. Thellesen ringed ca. 12 000 starlings nesting in 27 nest boxes and measured their phenology systematically. The data showed that during the study period starlings advanced their egg-laying dates by more than 9 days, an advance observed in both first and second clutches. The result reflects the increase in April temperatures. Another important observation was that while no change was observed in clutch size or hatching rate, nest box occupancy has fallen dramatically in recent years. Starlings used to be common in Europe, but they have since declined widely, including in Denmark. Changes in agricultural land use, especially decreased cattle grazing, are suspected as one factor affecting starling populations: the loss of cattle-grazed land means less insect-rich foraging ground for the birds.
Pöysä, H., Clark, R. G., Paasivaara, A. and Runko, P. 2017. Do environmental conditions experienced in early life affect recruitment age and performance at first breeding in common goldeneye females? Journal of Avian Biology.
Clark, R. G., Pöysä, H., Runko, P. and Paasivaara, A. 2014. Spring phenology and timing of breeding in short-distance migrant birds: phenotypic responses and offspring recruitment patterns in common goldeneyes. Journal of Avian Biology.
Beaver activity enhances the occurrence and diversity of pin lichens (Caliciales). Both the number of species and the number of individuals are much higher in beaver-created wetlands than in other types of boreal forest landscapes. There are four reasons behind this:
1. High amounts of deadwood. Pin lichens grow on both living trees and deadwood. Decorticated deadwood in particular is preferred by pin lichens. Beaver-induced flooding kills trees in the riparian zone and produces high amounts of decorticated snags.
2. Diversity of deadwood types. Beaver activity produces snags, logs and stumps. Snags are created by the flood, whereas logs and stumps are also produced by beaver gnawing. The diversity of deadwood tree species is also wide, containing both deciduous and coniferous tree species. The diversity of deadwood types maintains a high diversity of pin lichen species.
3. High humidity conditions. High humidity conditions are favorable for many pin lichen species. Old-growth forests are usually the only places in the boreal forest belt that contain high humidity conditions. There the shading of trees creates a beneficial microclimate for pin lichens. Lighting, on the other hand, becomes a limiting factor for pin lichens in old-growth forests. Most snags in beaver wetlands stand in water, where steady and continuously humid conditions are maintained on the deadwood surface.
4. Sufficient lighting conditions. Because most of the deadwood in beaver wetlands stands in water, it is concurrently in a very open and sunny environment. Many boreal pin lichens are believed to be cheimophotophytic (from the Greek cheimon, "winter"), meaning that they are able to maintain photosynthesis during winter at very low temperatures. The algal partner of pin lichens requires enough light for photosynthesis. Open beaver wetlands make photosynthesis possible for pin lichens during both summer and winter. Snow also enhances light availability during winter.
More information: Vehkaoja, M., Nummi, P., Rikkinen, J. 2016: Beavers promote calicioid diversity in boreal forest landscapes. Biodiversity and Conservation. 26 (3): 579-591.
Over 20 years ago Finnish and Swedish duck researchers began the “Northern Project” and conducted vegetation measurements on 60 Finnish and Swedish lakes while also counting their duck populations. The study lakes were located from southern Sweden and Finland to Lapland in both countries. Researchers found that the water horsetail (Equisetum fluviatile) grew abundantly on many of the study lakes. Breeding Eurasian wigeons (Anas penelope) were also abundant according to the study.
The water horsetail prefers eutrophic lakes and wetlands. Horsetails are an ancient plant group that has existed for over 100 million years. They are thus living fossils.
Wigeons also utilize eutrophic lakes during the breeding season. Adults are vegetarians, but wigeon ducklings also consume invertebrates, a common trait in young birds.
The vegetation mappings and duck surveys connected to the Northern Project were repeated in 2013–2014. The researchers wished to find reasons for the deep decline in breeding wigeon numbers. They observed that wigeons had disappeared from several lakes where they had been found 20 years earlier. When the habitat use of wigeon pairs was studied, the pairs were observed to particularly prefer lakes with water horsetails. In Evo, southern Finland, the feeding habitats of wigeon broods were followed over a period of 20 years. Broods were found to forage significantly more often within water horsetails than in other vegetation.
Wigeons therefore prefer lakes with water horsetail present throughout their breeding season. However, the long-term research of the Northern Project has shown that water horsetail has declined and even disappeared from many lakes in Sweden and Finland: this is a large-scale phenomenon. The wigeon is suspected to suffer due to vanishing water horsetail populations. Finnish pair surveys and reproduction monitoring also show negative trends for the wigeon.
The reasons behind diminishing water horsetail numbers are not known. Impact from alien species can be suspected locally. Glyceria maxima, an alien species in Finland, appears to be growing in areas where water horsetail has traditionally grown. Grazing by the muskrat (Ondatra zibethicus) could also be a reason, but the species does not occur in southern Sweden. The whooper swan (Cygnus cygnus) could be another potential grazer, and the species' populations have rapidly increased during the last decades. But these species can only have local effects, which do not apply to the whole study area. Researchers cannot exclude other possible explanations, for example diseases or changes in water ecosystems. Although water horsetail is common in boreal lakes, its influence on the water ecosystem is poorly understood. This study suggests that the water horsetail has an important role, and its disappearance will be reflected in the food web.
Read more: Pöysä, H., Elmberg, J., Gunnarsson, G., Holopainen, S., Nummi, P. & Sjöberg, K. Habitat associations and habitat change: seeking explanation for population decline in breeding wigeon Anas penelope. Hydrobiologia.
Beavers (Castor canadensis and Castor fiber) have recovered from near extinction and come to the rescue of wetland biodiversity. Two major processes drive boreal wetland loss: the near extinction of beavers, and extensive draining (if we exclude the effects of the ever-expanding human population). Beaver dams have produced over 500 square kilometers of wetlands in Europe during the past 70 years.
The wetland creation of beavers begins with the flood. As floodwaters rise into the surrounding forest, soil and vegetation are washed into the water system. The amount of organic carbon increases in the wetland during the first three impoundment years, after which it gradually reverts to initial levels. The increase in organic carbon facilitates the entire wetland food web in stages, beginning with plankton and invertebrates, and ending in frogs, birds and mammals.
Beaver-created wetlands truly become frog paradises. The wide shallow water area creates suitable spawning and rearing places. The shallow water warms up rapidly, and accelerates hatching and tadpole development. Beaver-created wetlands also ensure ample nutrition. The organic carbon increase raises the amounts of tadpole nutrition (plankton and protozoans) in the wetland, along with the nutriment of adult frogs (invertebrates). Furthermore, the abundant vegetation creates hiding places against predators for both tadpoles and adult frogs.
The flood and beaver foraging kill trees in the riparian zone. Deadwood is currently considered a vanishing resource. Finnish forests have an average of 10 cubic meters of deadwood per hectare, whereas beavers produce over seven times that amount of the substrate in a landscape. Beaver-produced deadwood is additionally very versatile. Wind, fire and other natural disturbances mainly create two types of deadwood: coarse snags and downed logs. Beavers, on the other hand, produce both snags and downed logs of varying width, along with moderately rare deciduous deadwood. The more diverse the deadwood assortment is, the richer the deadwood-dependent species composition that develops in the landscape.
Deadwood-dependent species are one of the most endangered species groups in the world. The group includes e.g. lichens, beetles and fungi. Currently there are 400 000 to a million deadwood-dependent species in the world. Over 7000 of these inhabit Finland. Pin lichens are lichens that often prefer snags as their living environment. Beaver activity produces large amounts of snags, which leads to diverse pin lichen communities. Snags standing in water provide suitable living conditions for pin lichens: a constant supply of water is available from the moist wood, and the supply of light is additionally limitless in the open and sunny beaver wetlands.
The return of beavers has helped the survival of many wetland and deadwood-associated species in Finland, Europe and North America. Only 1000 beavers inhabited Europe at the beginning of the 20th century. Now over a million beavers live in Europe. I argue that this increase has been a crucial factor benefitting the survival and recovery of wetland biodiversity. Finland and the other EU member states still have plenty of work to do to achieve the goals of the EU Water Framework Directive. Both the chemical conditions and the biodiversity of wetlands and inland waters affect their biological condition and quality.
The researchers also determined the fate of most of those gas and oil compounds using atmospheric chemistry data collected from aircraft last June. They say their new methods could be applied to future oil spills, whether in shallow or deep water.
The new analysis has been accepted for publication in Geophysical Research Letters, a journal of the American Geophysical Union.
"We present a new method for understanding the fate of most of the spilled gases and oil," says Tom Ryerson, lead author of the report, from NOAA's Earth System Research Laboratory in Boulder, Colo.
Knowing where the spilled gas and oil mixture ended up could also help resource managers and others trying to understand environmental exposure levels.
Using the atmospheric measurements and information about the chemical makeup of the leaking reservoir fluid, Ryerson and his colleagues calculate that at least 32,600 to 47,700 barrels of liquid gases and oil poured out of the breached reservoir on June 10, 2010.
This range, determined independently of previous estimates, presents a lower limit.
"Although we accounted for gases that dissolved before reaching the surface, our atmospheric data are essentially blind to gases and oil that remain trapped deep underwater," Ryerson says. Comparison of the new result with official estimates is not possible because this airborne study could not measure that trapped material.
Not including that trapped material, atmospheric measurements combined with reservoir composition information show that about one-third (by mass) of the oil and gas dissolved into the water column on its way to the surface. The team found another 14 percent by mass, or about 258 metric tons per day (570,000 lbs. per day), was lost quickly to the atmosphere within a few hours after surfacing, and an additional 10 percent was lost to the atmosphere over the course of the next 24 to
Among the study's other key findings:
* Some compounds evaporated essentially completely
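The partitioning figures quoted above can be cross-checked with simple arithmetic: the 258 metric tons/day lost quickly to the atmosphere is stated to be 14 percent of the surfaced mass, so the implied total and the other fractions follow directly. The derived totals below are my own back-of-the-envelope arithmetic, not numbers from the study:

```python
# Back-of-the-envelope mass balance from the figures quoted in the text.
evaporated_fast_t = 258.0     # metric tons/day lost to air within hours (14%)
evaporated_fast_frac = 0.14

total_t = evaporated_fast_t / evaporated_fast_frac   # implied total surfaced mass
dissolved_t = total_t / 3.0                          # ~one-third dissolved en route
evaporated_slow_t = 0.10 * total_t                   # a further 10% lost over ~a day

print(round(total_t), round(dissolved_t), round(evaporated_slow_t))  # → 1843 614 184
```

So the atmospheric measurements imply a total flux of roughly 1 800 metric tons per day reaching the surface, of which about 600 tons dissolved on the way up.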
Ryerson and his colleagues conclude that the technique they developed could be applied to future oil spills, in shallow or deep water. The research flights, conducted at a minimum of 60 meters (200 feet) above the Gulf surface, were possible because a NOAA WP-3D research aircraft had already been outfitted with sensitive chemistry equipment for deployment to California for an air quality and climate study and was redeployed to the Gulf.
Schwarz, J. Ryan Spackman, Harald Stark, Carsten Warneke, and Laurel A. Watts: Chemical Sciences Division, NOAA Earth System Research Laboratory, Boulder, Colorado, USA, and Cooperative Institute for Research in Environmental Sciences, University of Colorado, Boulder, Colorado, USA;
Elliot L. Atlas: Rosenstiel School of Marine and Atmospheric Science, University of Miami, Miami, Florida, USA;
Donald R. Blake and Simone Meinardi: Department of Chemistry, University of California, Irvine, California, USA;
Richard A. Lueb: Atmospheric Chemistry Division, National Center for Atmospheric Research, Boulder, Colorado, USA.
Peter Weiss | American Geophysical Union
Species Detail - Physella acuta - Species information displayed is based on all datasets.
Terrestrial Map - 10km: Distribution of the number of records recorded within each 10km grid square (ITM).
Marine Map - 50km: Distribution of the number of records recorded within each 50km grid square (WGS84).
Physa acuta, Physa heterostropha, Physella heterostropha
Invasive Species: Invasive Species || Invasive Species: Invasive Species >> Medium Impact Invasive Species
15 January (recorded in 2000)
2 October (recorded in 2008)
National Biodiversity Data Centre, Ireland, Physella acuta, accessed 23 July 2018, <https://maps.biodiversityireland.ie/Species/123531> | <urn:uuid:cc318019-c204-4143-b3a0-9da8ea285575> | 2.859375 | 172 | Structured Data | Science & Tech. | 14.139517 | 95,612,761 |
Five of the most high-risk freshwater invaders from the Ponto-Caspian region around Turkey and Ukraine are now in Britain - including the quagga mussel, confirmed just two weeks ago on 1 October in the Wraysbury River near Heathrow airport.
Researchers say that, with at least ten more of these high-risk species established just across the channel in Dutch ports, Britain could be on the brink of what they describe as an 'invasional meltdown': as positive interactions between invading species cause booming populations that colonise ecosystems - with devastating consequences for native species.
The authors of a new study of 23 high-risk invasive species, published today in the Journal of Applied Ecology, describe Britain's need to confront the Ponto-Caspian problem - named for the invaders' homelands of the Black, Azov and Caspian seas - as a "vital element for national biosecurity".
They say monitoring efforts should be focused on areas at most risk of multiple invasions: the lower reaches of the Rivers Great Ouse, Thames and Severn and the Broadlands, where shipping ballast water and ornamental plant trading is most likely to inadvertently deposit the cross-channel invaders.
All of these areas are projected to see an influx of up to twenty Ponto-Caspian invading species in the near future.
"Pretty much everything in our rivers and lakes is directly or indirectly vulnerable," said Dr David Aldridge, co-author from the University of Cambridge's Department of Zoology, who confirmed the quagga find.
"The invader we are most concerned about is the quagga mussel, which alarmingly was first discovered in the UK just two weeks ago. This pest will smother and kill our native mussels, block water pipes and foul boat hulls. We are also really worried about Ponto-Caspian shrimps, which will eat our native shrimps."
The most aggressive invasive shrimp have ominous monikers: the demon shrimp, bloody red shrimp and the notorious killer shrimp - dubbed the 'pink peril'.
These organisms have already been recorded in Britain, and experts warn they will act as a gateway for further species due to favourable inter-species interactions that facilitate invasion, such as food provision and 'commensalism' - in which one species obtains benefits from another's place in an ecosystem.
The researchers point to the example of the zebra mussel, a Ponto-Caspian outrider and relation of the quagga first seen in the UK in 1824 and now widespread. Zebra and quagga mussels smother Britain's native mussels, preventing them from feeding and moving.
The invading mussels also provide an ideal home for Ponto-Caspian amphipods such as killer and demon shrimps, which have striped patterns to blend in with the mussels' shells.
These amphipods, in turn, provide food for larger invaders such as goby fish. Ponto-Caspian gobies have now made their way down the Rhine, one of the main "corridors" to Britain, with populations exploding in the waterways of western France over the last few years. The invading gobies eat native invertebrates and displace native fish such as the already threatened bullhead.
Once the Ponto-Caspian species reach coastal areas of The Netherlands, they are transported across the channel in ballast water taken on by cargo ships, or hidden in exported ornamental plants and aquatic equipment such as fishing gear.
"If we look at The Netherlands nowadays it is sometimes hard to find a non-Ponto-Caspian species in their waterways," said Aldridge.
"In some parts of Britain the freshwater community already looks more like the Caspian Sea. The Norfolk Broads, for example, typically viewed as a wildlife haven, is actually dominated by Ponto-Caspian zebra mussels and killer shrimps in many places."
"Invasive species – such as the quagga mussel – cost the UK economy in excess of £1.8 billion every year," said Sarah Chare, deputy director of fisheries and biodiversity at the UK Environment Agency.
"The quagga mussel is a highly invasive non-native species, affecting water quality and clogging up pipes. If you spot one then please report it to us through the online recording form."
Through an in-depth analysis of all reported field and experimental interactions between the 23 most high-risk invasive Ponto-Caspian species, the researchers were able to identify 157 different effects - the majority of which enabled positive reinforcement between species (71) or made no difference (64).
Dates and locations of the first British reports of 48 other freshwater invaders from around the world show that 33% emerged in the Thames river basin, making it the UK hot spot for invaders, followed by Anglian water networks (19%) and the Humber (15%).
The time between a Ponto-Caspian species being reported in The Netherlands and Britain has shrunk considerably - from an average of 30 years at the beginning of the 20th century to just 5 in the last decade.
"Due to globalisation and increased travel and freight transport, the rate of colonisation of invasive species into Britain from The Netherlands keeps accelerating - posing a serious threat to the conservation of British aquatic ecosystems," said co-author Dr Belinda Gallardo, now based at the Doñana Biological Station in Spain.
"Cross-country sharing of information on the status and impacts of invasive species is fundamental to early detection, so that risks can be rapidly assessed. A continuing process for evaluating invasive species and detecting new introductions needs to be established, as this problem is increasing dramatically."
Fred Lewsey | Eurek Alert!
In August 2017, scientists identified for the first time gravitational waves produced by the merger of two super-dense stellar corpses known as neutron stars. As the researchers noted, this benchmark discovery was a massive step toward a better understanding of the cosmos.
When the detection was made, scientists designated it GW170817 and suspected that such a dramatic event might have created a black hole. In a new study, researchers analyzed data accumulated by NASA's Chandra X-ray Observatory after the gravitational waves, first predicted by Albert Einstein a hundred years ago, were detected by the Laser Interferometer Gravitational-Wave Observatory (LIGO) project.
This LIGO data | <urn:uuid:d853a48e-b884-4aff-81f6-d16e5f8c6274> | 3.375 | 148 | Truncated | Science & Tech. | 23.012242 | 95,612,818 |
America's Solar Energy Potential: Every hour, the sun radiates more energy onto the Earth than the entire human population uses in one whole year.
The technology required to harness the power of the sun is available now. Solar power alone could provide all of the energy Americans consume — there is no shortage of solar energy. The following paragraphs will give you the information you need to prove this to yourself and others. On average, and particularly in the Sunbelt regions of the Southwestern United States, every square meter area exposed to direct sunlight will receive about 1 kilowatt-hour per hour of solar energy. Scientists like to measure things using the metric system. However, most Americans are unfamiliar with the metric system.
It is easier for Americans to think in square feet and square yards because feet and yards are common units in the United States. So, for the sake of clarity and because this is written for an American audience, all measurements will be converted from meters to yards. There are about 10.8 square feet in a square meter, so a square meter is roughly 1.2 square yards. A simple calculation can accomplish the conversion from square meters to square yards.
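The conversion described above is simple enough to sketch. The factors below are standard unit definitions (1 m² ≈ 10.764 ft², 9 ft² = 1 yd²), not figures taken from the article:

```python
# Standard unit-conversion factors (not from the article).
M2_TO_FT2 = 10.7639            # square feet per square meter
M2_TO_YD2 = M2_TO_FT2 / 9.0    # 9 ft^2 per yd^2 -> ~1.196 yd^2 per m^2

def insolation_per_yd2(kwh_per_m2_per_hour=1.0):
    """Convert the ~1 kWh per m^2 per hour direct-sun figure to per-square-yard terms."""
    return kwh_per_m2_per_hour / M2_TO_YD2  # kWh per yd^2 per hour

print(round(M2_TO_YD2, 3))             # → 1.196 square yards per square meter
print(round(insolation_per_yd2(), 3))  # → 0.836 kWh per square yard per hour
```

In other words, a square yard in direct sun receives roughly 0.84 kilowatt-hours per hour, a bit less than the metric figure because a yard is shorter than a meter.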
Actinides are present in the environment as a result of nuclear weapons production and nuclear fuel disposition, and they pose a long-term environmental concern because of their toxicity and long half-lives. Understanding and predicting their mobility is therefore important for risk management. By using a combination of wet chemistry, instrumental, and modeling techniques, members of the Hixon group are able to understand actinide aqueous speciation, the properties of mineral surfaces, and how the two interact. When this knowledge is combined with microbial and hydrogeologic influences, we are able to predict actinide behavior in both natural and engineered systems.
Copyright © Amy E. Hixon. All rights reserved.
The interactions between uranyl peroxide cage clusters (shown to the left) and solid phases are important for developing nanoscale control of actinides in an advanced fuel cycle. In order to determine the factors controlling cluster-mineral interactions, we conduct experiments in which the solid phase and cluster concentration, pH, ionic strength, time, and temperature are varied. Current work in the Hixon group is focused on examining U60 interactions with high-purity quartz.
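The systematic variation described above amounts to a factorial design over the experimental parameters. The sketch below enumerates such a design; the specific levels are hypothetical examples for illustration, not the values used by the Hixon group.

```python
# Sketch of enumerating batch-experiment conditions for cluster-mineral
# sorption studies. All levels below are hypothetical placeholders.
from itertools import product

pH_values       = (4.0, 6.0, 8.0)   # solution pH
ionic_strengths = (0.01, 0.1)       # mol/L background electrolyte
cluster_concs   = (1e-6, 1e-5)      # mol/L uranyl peroxide cluster
temperatures    = (25, 60)          # degrees Celsius
contact_times   = (1, 7, 30)        # days

# Full-factorial grid: every combination becomes one batch reactor.
conditions = list(product(pH_values, ionic_strengths,
                          cluster_concs, temperatures, contact_times))
print(len(conditions))  # 3 * 2 * 2 * 2 * 3 = 72 batch reactors
```

In practice a full factorial grid grows quickly, so subsets (e.g., varying one parameter while holding the others at a baseline) are often run instead.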
This work is supported as part of the Materials Science of Actinides, an Energy Frontier Research Center funded by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences.
Actinide Science at Notre Dame
There are several mechanisms by which actinides may interact with a mineral surface: inner-sphere sorption, outer-sphere sorption, ion exchange, surface (co)precipitation, and structural incorporation. The aqueous actinide concentration, the specific mineral phase, and the solution matrix all influence which process dominates. Many studies rely on empirical approaches, such as Kd measurements, to describe these interactions. Such approaches are insufficient because they lack mechanistic specificity and are valid only for the particular conditions of the experiment; for example, the Kd approach cannot predict actinide sorption under changing solution concentration, ionic strength, or pH. In contrast, mechanistic approaches can differentiate between interaction mechanisms and predict actinide sorption as a function of solution chemistry, providing a structured and consistent framework for interpreting experimental data. When they are coupled with spectroscopic data describing the bonding environment of actinides associated with a solid phase, the resulting surface-complexation model is grounded in both micro- and macro-scale observations.
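The distinction drawn above can be made concrete with a toy calculation. The sketch below contrasts an empirical Kd with a deliberately simplified, one-site surface-complexation estimate; the concentrations and the mass-action constant are hypothetical placeholders, not measured values or a model used by the Hixon group.

```python
# Contrast between an empirical Kd and a mechanistic (toy) estimate.
# All numbers are hypothetical placeholders, not experimental data.

def kd(c_sorbed_per_g, c_aqueous):
    """Empirical distribution coefficient, Kd = Cs / Caq (mL/g).

    Valid only for the exact conditions under which Cs and Caq were
    measured; it carries no pH or ionic-strength dependence.
    """
    return c_sorbed_per_g / c_aqueous

def sorbed_fraction_sc(pH, log_k=-6.0):
    """Toy one-site surface-complexation estimate.

    Sorption to a deprotonated surface site whose availability rises
    with pH (simplified mass-action form). Because pH enters the
    expression explicitly, the prediction changes with solution
    chemistry -- the key advantage over a single Kd value.
    """
    k = 10 ** (log_k + pH)   # effective binding term grows with pH
    return k / (1.0 + k)     # fraction of actinide sorbed

# A Kd measured at one condition is a single number (mL/g):
print(kd(c_sorbed_per_g=50.0, c_aqueous=0.5))  # -> 100.0

# The mechanistic form instead yields a sorption edge across pH:
for pH in (4, 6, 8):
    print(pH, round(sorbed_fraction_sc(pH), 3))
```

Real surface-complexation models add site densities, electrostatic correction terms, and competing aqueous species, but the structural point is the same: solution chemistry appears in the model explicitly rather than being frozen into a single measured ratio.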
Current research in the Hixon group is focused on examining actinide sorption to several aluminum oxide and hydroxide minerals in an effort to connect surface acidity to sorption behavior.
Accurate nuclear materials identification is critical to the assessment of potential proliferation activities at declared and potentially undeclared nuclear facilities. The heightened concern about nuclear terrorism following the 9/11 attacks in the USA has given rise to a rapid expansion of techniques and applications of nuclear forensics. In recent years, elemental analysis and characterization of surface morphologies and microstructures of nuclear materials have been applied to complement isotopic analysis for both nuclear proliferation and nuclear smuggling studies. However, none of the collected samples are in pristine condition: they will have aged due to interactions with the surrounding environment, self-irradiation, and the interplay between these two effects. Deciphering the aging processes of complex nuclear materials in various environments is crucial for understanding the changes these materials have experienced from the time of their production and use to the time of sample collection.
Research in the Hixon group is targeted at examining the oxidation and oxidative dissolution of U and Pu oxides and metals, as well as of more complex nuclear materials, with the objective of determining molecular-scale reaction mechanisms and relating those mechanisms to aging scenarios.
This work is supported by the Department of Homeland Security through the Advanced Research Initiative (ARI).
Following a nuclear detonation, radiochemists and post-detonation diagnosticians are tasked with performing high-quality analysis of fallout debris samples and accurate device interpretation. Debris formed following a nuclear detonation commonly consists of a heterogeneous glassy matrix containing uranium, plutonium, neptunium, americium, and fission/activation products. Homogeneous and well-characterized reference materials are critical for accurate post-detonation debris analysis. Such reference materials can include aged nuclear explosion debris (e.g., trinitite from the Trinity test site) as well as "fresh" doped-glass material. Recent work has highlighted the often heterogeneous nature of historical debris, which presents a problem for safeguards accountancy and verification as well as for micro-scale characterization (i.e., microbeam analysis).
In this regard, research in the Hixon group is focused on the creation of homogeneous, "fresh" doped-glassy standards containing uranium, plutonium, surrogate fission products (e.g., Sr, Cs, Pm, Sm, Eu), and urban materials (e.g., Fe and Ca from construction materials, stainless steel, aluminosilicate phases as a surrogate for dirt).
This work is supported through the Nuclear Forensics Junior Faculty Award Program sponsored by the U.S. Department of Homeland Security, Domestic Nuclear Detection Office.