In Finland, for example, the Northern lapwing and Eurasian curlew have usually built their ground nests on barley fields after farmers have sown their crops in the spring. But as temperatures have risen, the birds are now increasingly laying their eggs before the farmers get to the fields, which means their well-concealed nests are more likely to get destroyed by tractors and other machinery.
Looking at 38 years of data, researchers found that farmers in Finland are now sowing their fields a week earlier in response to warmer temperatures, but the birds are laying their eggs two to three weeks earlier. “This has created a phenological mismatch,” said Andrea Santangeli, a postdoctoral researcher at the Finnish Museum of Natural History and lead author of the study. “The response we’ll see is declines of these birds.”
Caribou show up late for lunch
Caribou in western Greenland follow a strict seasonal diet. In the winter, they eat lichen along the coast. In the spring and summer, they venture inland to give birth to their calves and eat the Arctic plants that grow there.
As Greenland has warmed up and sea ice has declined, however, those inland Arctic plants have been emerging earlier — with some plant species now greening 26 days earlier than they did a decade ago. But the caribou have not shifted their migration as quickly. And scientists have documented a troubling trend in the region: More caribou calves appear to be dying early in years when the spring plant growth preceded the caribou’s calving season.
While that study only found a correlation between warmer temperatures and caribou calf deaths, “it’s consistent with the idea that mismatch is disadvantageous,” said Eric Post, an ecology professor at the University of California, Davis. When Arctic plants green up earlier, they may become tougher and less nutritious by the time the caribou get there and start eating them.
Why don’t the caribou speed up their migration? One possibility is that their reproductive cycles respond most strongly to seasonal signals like the length of the day, whereas plants respond more strongly to local temperatures, which are rising.
In theory, if given enough time, the caribou might eventually adjust as natural selection takes its course and favors individuals that calve earlier. But with the Arctic warming faster than the rest of the globe, Dr. Post said, “the question is whether things are changing too fast for evolution to matter.”
ScienceDaily (Oct. 25, 2012) — In the future, warmer waters could significantly change the ocean distribution of phytoplankton, tiny organisms that could have a major effect on climate change.
Reporting in this week's online journal Science Express, researchers show that by the end of the 21st century, warmer oceans will cause populations of these marine microorganisms to thrive near the poles and shrink in equatorial waters.
"In the tropical oceans, we are predicting a 40 percent drop in potential diversity, the number of strains of phytoplankton," says Mridul Thomas, a biologist at Michigan State University (MSU) and co-author of the journal paper.
"If the oceans continue to warm as predicted," says Thomas, "there will be a sharp decline in the diversity of phytoplankton in tropical waters and a poleward shift in species' thermal niches--if they don't adapt."
Thomas co-authored the paper with scientists Colin Kremer, Elena Litchman and Christopher Klausmeier, all of MSU.
"The research is an important contribution to predicting plankton productivity and community structure in the oceans of the future," says David Garrison, program director in the National Science Foundation's (NSF) Division of Ocean Sciences, which funded the research along with NSF's Division of Environmental Biology.
"The work addresses how phytoplankton species are affected by a changing environment," says Garrison, "and the really difficult question of whether adaptation to these changes is possible."
The MSU scientists say that since phytoplankton play a key role in regulating atmospheric carbon dioxide levels, and therefore global climate, the shift could in turn cause further climate change.
Phytoplankton and Earth's climate are inextricably intertwined.
"These results will allow scientists to make predictions about how global warming will shift phytoplankton species distribution and diversity in the oceans," says Alan Tessier, program director in NSF's Division of Environmental Biology.
"They illustrate the value of combining ecology and evolution in predicting species' responses."
The microorganisms use light, carbon dioxide and nutrients to grow. Although phytoplankton are small, they flourish in every ocean, consuming about half of the carbon dioxide emitted into the atmosphere.
When they die, some sink to the ocean bottom, depositing their carbon in the sediment, where it can be trapped for long periods of time.
Water temperatures strongly influence their growth rates.
Phytoplankton in warmer equatorial waters grow much faster than their cold-water cousins.
With worldwide temperatures predicted to increase over the next century, it's important to gauge the reactions of phytoplankton species, say the scientists.
They were able to show that phytoplankton have adapted to local temperatures.
Based on projections of ocean temperatures in the future, however, many phytoplankton may not adapt quickly enough.
Since they can't regulate their temperatures or migrate, if they don't adapt, they could be hard hit, Kremer says.
"We've shown that a critical group of the world's organisms has evolved to do well under the temperatures to which they're accustomed," he says.
But warming oceans may significantly limit their growth and diversity, with far-reaching implications for the global carbon cycle.
"Future models that incorporate genetic variability within species will allow us to determine whether particular species can adapt," says Klausmeier, "or whether they will face extinction."
Portable Device to Sniff Out Trapped People
News Apr 19, 2018 | Original Story from the American Chemical Society.
The first step after buildings collapse from an earthquake, bombing or other disaster is to rescue people who could be trapped in the rubble. But finding entrapped humans among the ruins can be challenging. Scientists now report in the ACS journal Analytical Chemistry the development of an inexpensive, selective sensor that is light and portable enough for first responders to hold in their hands or for drones to carry on a search for survivors.
In the hours following a destruction-causing event, the survival rate of people stuck in the rubble rapidly drops, so it’s critical to get in there fast. Current approaches include the use of human-sniffing dogs and acoustic probes that can detect cries for help. But these methods have drawbacks, such as the limited availability of canines and the silence of unconscious victims. Devices that detect a human chemical signature, which includes molecules that are exhaled or that waft off the skin, are promising. But so far, these devices are too bulky and expensive for wide implementation, and they can miss signals that are present at low concentrations. So, Sotiris E. Pratsinis and colleagues wanted to develop an affordable, compact sensor array to detect even the most faint signs of life.
The researchers built their palm-sized sensor array from three existing gas sensors, each tailored to detect a specific chemical emitted by breath or skin: acetone, ammonia or isoprene. They also included two commercially available sensors for detecting humidity and CO2. In a human entrapment simulation, the sensors rapidly detected tiny amounts of these chemicals, at levels unprecedented for portable detectors--down to three parts per billion. The next step is to test the sensor array in the field under conditions similar to those expected in the aftermath of a calamity.
This article has been republished from materials provided by the American Chemical Society. Note: material may have been edited for length and content. For further information, please contact the cited source.
Sniffing Entrapped Humans with Sensor Arrays. Andreas T. Güntner, Nicolay J. Pineau, Paweł Mochalski, Helmut Wiesenhofer, Agapios Agapiou, Christopher A. Mayhew, and Sotiris E. Pratsinis. Anal. Chem., 2018, 90 (8), pp 4940–4945, DOI: 10.1021/acs.analchem.8b00237.
6.4 Integration with Tables and Computer Algebra Systems; 6.5 Approximate Integration. Tables of Integrals: a table of 120 integrals, categorized by form, is provided on the Reference Pages at the back of the book. References to more extensive tables are given in the textbook.
The elementary functions are the polynomials, rational functions, exponential functions, logarithmic functions, trigonometric and inverse trigonometric functions, and all functions that can be obtained from these by the five operations of addition, subtraction, multiplication, division, and composition.
If f is an elementary function, then f′ is an elementary function, but its antiderivative need not be an elementary function. In fact, the majority of elementary functions don't have elementary antiderivatives.
How do we find definite integrals of such functions? Approximate!
Recall that the definite integral is defined as a limit of Riemann sums.
A Riemann sum for the integral of a function f over the interval [a,b] is obtained by first dividing the interval [a,b] into subintervals and then placing a rectangle, as shown below, over each subinterval. The corresponding Riemann sum is the combined area of the green rectangles. The height of the rectangle over some given subinterval is the value of the function f at some point of the subinterval. This point can be chosen freely.
Taking more division points or subintervals in the Riemann sums, the approximation of the area of the domain under the graph of f becomes better.
A Riemann sum has the form f(x1*)Δx + f(x2*)Δx + ... + f(xn*)Δx, where Δx = (b - a)/n and xi* is any point in the ith subinterval [xi-1, xi].
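As an aside that is not on the original slides, here is a short Python sketch of these Riemann-sum approximations. The integrand x² on [0, 4] and the choice of 10 subintervals are illustrative assumptions only.

```python
# Riemann-sum approximations of a definite integral (illustrative sketch).
# The integrand and interval in the demo below are example choices, not taken from the slides.

def riemann_sum(f, a, b, n, rule="left"):
    """Approximate the integral of f over [a, b] using n subintervals.

    rule: "left", "right" or "midpoint" -- where the sample point xi* is taken
    within each subinterval [x_{i-1}, x_i].
    """
    dx = (b - a) / n
    total = 0.0
    for i in range(n):
        x_left = a + i * dx
        if rule == "left":
            x_star = x_left
        elif rule == "right":
            x_star = x_left + dx
        elif rule == "midpoint":
            x_star = x_left + dx / 2
        else:
            raise ValueError("rule must be 'left', 'right' or 'midpoint'")
        total += f(x_star) * dx
    return total

if __name__ == "__main__":
    f = lambda x: x ** 2          # example integrand; exact value on [0, 4] is 64/3
    for rule in ("left", "right", "midpoint"):
        print(rule, riemann_sum(f, 0.0, 4.0, 10, rule))
```

Averaging the left- and right-endpoint sums reproduces the trapezoidal estimate discussed below.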
In the worked example, the exact value of the area under the curve is first found using a definite integral. It is then compared with the left endpoint approximation, the right endpoint approximation, and the average of the two, which is closer to the actual value.
Averaging the areas of the left and right rectangles over a subinterval is the same as taking the area of the trapezoid above that subinterval. This gives us a better approximation than either left or right rectangles alone, and is the basis of the Trapezoidal Rule.
We can also choose the midpoint of each subinterval as the sample point; this gives the Midpoint Rule. In the example, the midpoint rule gives a closer approximation than the trapezoidal rule, but errs in the opposite direction: one estimate falls above the actual value and the other below it. Notice that the trapezoidal rule gives an answer with roughly twice as much error as the midpoint rule, but in the opposite direction.
If we use a weighted average of the two estimates, (2 × Midpoint + Trapezoidal)/3, we get the exact answer in this example. This weighted approximation gives a closer result than either the midpoint or trapezoidal rule alone; it is Simpson's Rule.
Simpson’s rule can also be interpreted as fitting parabolas to sections of the curve.
Simpson’s rule will usually give a very good approximation with relatively few subintervals.
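To make the comparison concrete, here is a hedged Python sketch of the Trapezoidal, Midpoint and Simpson's Rules side by side. The integrand e^x on [0, 1] and n = 10 subintervals are example choices, not the example worked on the slides.

```python
import math

def trapezoidal(f, a, b, n):
    """Trapezoidal rule with n subintervals."""
    dx = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * dx)
    return total * dx

def midpoint(f, a, b, n):
    """Midpoint rule with n subintervals."""
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

def simpson(f, a, b, n):
    """Simpson's rule with n subintervals (n must be even)."""
    if n % 2:
        raise ValueError("n must be even for Simpson's rule")
    dx = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * dx)
    return total * dx / 3

if __name__ == "__main__":
    f = math.exp                  # example integrand
    a, b, n = 0.0, 1.0, 10
    exact = math.e - 1.0          # exact value of the integral of e^x on [0, 1]
    for name, rule in (("trapezoidal", trapezoidal), ("midpoint", midpoint), ("simpson", simpson)):
        approx = rule(f, a, b, n)
        print(f"{name:12s} {approx:.8f}  error {approx - exact:+.2e}")
```

For a concave-up integrand such as this, the trapezoidal estimate lands above the exact value and the midpoint estimate below it, with the trapezoidal error roughly twice the midpoint error, while Simpson's rule is accurate to several more digits, matching the behaviour described above.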
Examples of error estimations on the board.
Why does the photon exist without rest mass, and why can it never have a charge? Please reply.
Well, physics tends not to answer all "why" questions; we can only move them to a different level.
i) The photon moves at the speed of light, and only massless particles (i.e. zero rest mass) can do so.
ii) The gauge symmetry group of electromagnetism is U(1), which is abelian, thus the photon has no charge.
Then you can ask why it moves at the speed of light and why EM has U(1) gauge symmetry group and so on.
There are however "photons" with charge and mass, the W- and W+ bosons. (Roughly speaking of course)
Please explain it simply; I can't understand the above reply.
And I don't understand what part of it you did not grasp, or what level you are currently at. What is "simple" is relative...
A photon is a wave; it travels. In vacuum it travels fast (v = c); in a transparent medium (glass, water) it travels slower, with v < c, so there is an accompanying reference frame where it looks like a standing wave. Despite that it still has energy ħω. You can assign some mass to it, if you like.
A photon carries energy-momentum. It can transfer part of them to another particle or system. Photons interact strongly with charged particles: roughly speaking, a photon is an electric wave. A photon practically does not interact with another photon in vacuum. This is due to the principle of superposition of electric fields, if you like. Electric fields only make sense when they appear in the particle equation of motion as external forces.
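As a rough numerical aside (not part of the original thread), the energy ħω of a visible photon, and the formal "mass equivalent" E/c² one could assign to it, can be computed in a few lines of Python; the 500 nm wavelength below is just an example choice.

```python
# Back-of-the-envelope numbers for a single visible photon (illustrative only).
h = 6.62607015e-34      # Planck constant, J*s
c = 2.99792458e8        # speed of light in vacuum, m/s

wavelength = 500e-9     # example: green light, 500 nm
energy = h * c / wavelength          # photon energy E = h*f = h*c/lambda, in joules
effective_mass = energy / c ** 2     # formal "mass equivalent" E/c^2, in kg
                                     # (the photon's rest mass is still exactly zero)

print(f"E     = {energy:.3e} J  = {energy / 1.602176634e-19:.2f} eV")
print(f"E/c^2 = {effective_mass:.3e} kg")
```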
I was wondering the same thing.
Praise the Lord!
Keeping with the theme of particle physics, the question should really be why other particles have mass (and why they have the value of the mass that they have), rather than why doesn't the photon have mass. 0 is easy to understand.
Mass is related to the spontaneous symmetry breaking of a scalar Higgs field. The photon gains no mass because it is the gauge field corresponding to a direction in field space where the symmetry is not broken by the electroweak phase transition. For more information there is plenty of material on the web about the Higgs mechanism.
The photon could still be massive if it couples to another scalar and gains mass that way:
If the photon has a small mass that is generated by such a Higgs mechanism, then it turns out that the very sharp upper bounds on the photon mass that depend on the vector potential of the galactic magnetic field, are not valid.
So I assume this other scalar would develop a vacuum expectation value that will be a different value than usual one, i.e., the one whose ground state is U(1) charge invariant and gives mass to the Standard Model. But if this happens, then don't you lose conservation of charge, since your theory will lose U(1) charge invariance?
Actually, the article assumes that the ordinary Higgs mechanism would generate the photon mass. Anyway, you still have a conserved current, even in Proca theory, as the article explains.
The photon is an electromagnetic wave and thus should be viewed as pure energy to understand it better; this gives insight into the no-mass question. Different particles are, simply put, a combination of a set mass ratio to their energy at ground state. The absorption of a photon by a particle raises its energy state, and the particle then releases the exact energy as a duplicate photon. This happens almost instantaneously, but a fraction of time is used; thus the speed of light through air, liquids and solids becomes increasingly slower.
ohh Define pure energy please
Are photons made of quarks?
no they are made of air
Quarks have mass. Photons don't have mass.
Quarks have electric charge. Photons don't have electric charge.
Quarks interact strongly. Photons don't interact strongly.
Quarks interact weakly. Photons don't interact weakly.
So no, photons are not made of quarks.
So photons aren't really particles in the traditional sense are they?. Their particle nature is just a reflection of the quantization of their energy.
define "traditional particle" ...
I guess I can't really give an exact definition. I would say that traditional meant nonzero rest mass, but that's hardly a traditional definition.
Then why use home-made definitions of things when writing in a physics forum where we are trying to discuss currently accepted physics (see forum rules)?
I apologise. I was just curious and I'm new to this.
it's ok, see also the term "pure energy" invented by darkwood in this thread.
STEPHEN Hawking fears Donald Trump’s decision to pull out of the Paris climate change agreement could be the “tipping point” which wipes out humanity and turns our planet into a living hell.
The British professor is worried that The Donald’s rejection of this plan to combat global warming could cause irreversible changes which doom our planet to a grim fate.
In recent years, Hawking has become something of a doom-monger, who is determined to sketch out a variety of grim fates which await the human race.
In his latest prophecy, the professor suggested our planet could be destined to become totally uninhabitable.
“We are close to the tipping point where global warming becomes irreversible. Trump’s action could push the Earth over the brink, to become like Venus, with a temperature of 250°C, and raining sulphuric acid,” he told the BBC.
“Climate change is one of the great dangers we face, and it’s one we can prevent if we act now.
“By denying the evidence for climate change, and pulling out of the Paris Climate Agreement, Donald Trump will cause avoidable environmental damage to our beautiful planet, endangering the natural world, for us and our children.”
The famed physicist also fears that “evolution has inbuilt greed and aggression to the human genome”, meaning that we might wipe ourselves out before Earth turns into Venus.
He added: “There is no sign of conflict lessening, and the development of militarised technology and weapons of mass destruction could make that disastrous. The best hope for the survival of the human race might be independent colonies in space.”
Professor Hawking fears technology will wipe out humanity and has called for global government to defeat killer robots.
He has also said humanity should probably leave Earth within 1,000 years, although even this great escape might not save us because an alien civilisation could come along and wipe us out.
No future for you….
The plant on the left is a normal laboratory test plant Arabidopsis. The plant on the right doesn’t have the gene BIK1, which helps fight off Botrytis cinerea, a pathogen that causes the gray mold disease on flowers, fruits and vegetables. Tesfaye Mengiste, a Purdue plant molecular biologist, discovered the gene and that mutant plants without it have curly leaves and shorter primary roots but more root hairs, as shown in the bottom photo. (Photos courtesy of Tesfaye Mengiste laboratory)
A single gene apparently thwarts a disease-causing invader that creates a fuzzy gray coating on flowers, fruits and vegetables. But the same gene provides access to a different type of pathogen.
A Purdue University plant molecular biologist and his collaborators in Austria and North Carolina identified the gene that helps plants recognize pathogens and also triggers a defense against disease. The gene and its defense mechanisms are similar to an immunity pathway found in people and in the laboratory research insect, the fruit fly.
As Botrytis cinerea, a pathogen that makes strawberries gray and fuzzy, tries to invade a plant, the gene BIK1 recognizes the pathogen and sets off a defensive reaction. Botrytis is a type of pathogen that can infect and obtain nutrients from dead cells on a plant and actually secretes toxic substances into plant tissue in order to gain entry. Another type of pathogen, called a biotroph, must feed on live plant cells. As a strategy to contain a pathogen, plants actually kill their own cells at the site where a biotrophic pathogen is attempting to invade.
Susan A. Steeves | EurekAlert!
A team of scientists, led by Boy Lankhaar at Chalmers University of Technology, has solved an important puzzle in astrochemistry: how to measure magnetic fields in space using methanol, the simplest form of alcohol. Their results, published in the journal Nature Astronomy, give astronomers a new way of investigating how massive stars are born.
Over the last half-century, many molecules have been discovered in space. Using radio telescopes, astronomers have with the help of these molecules been able to investigate just what happens in the dark and dense clouds where new stars and planets are born.
Magnetic fields play an important role in the places where most massive stars are born. This illustration shows the surroundings of a forming massive star, and the bright regions where radio signals from methanol can be found. The bright spots represent methanol masers -- natural lasers that are common in the dense environments where massive stars form -- and the curved lines represent the magnetic field. Thanks to new calculations by astrochemists, astronomers can now start to investigate magnetic fields in space by measuring the radio signals from methanol molecules in these bright sources.
Credit: Wolfgang Steffen/Boy Lankhaar et al. (molecules: Wikimedia Commons/Ben Mills)
Scientists can measure temperature, pressure and gas motions when they study the signature of molecules in the signals they detect. But especially where the most massive stars are born, there's another major player that's more difficult to measure: magnetic fields.
Boy Lankhaar at Chalmers University of Technology, who led the project, takes up the story.
"When the biggest and heaviest stars are born, we know that magnetic fields play an important role. But just how magnetic fields affect the process is a subject of debate among researchers. So we need ways of measuring magnetic fields, and that's a real challenge. Now, thanks to our new calculations, we finally know how to do it with methanol", he says.
Using measurements of methanol (CH3OH) in space to investigate magnetic fields was suggested many decades ago. In the dense gas surrounding many newborn stars, methanol molecules shine brightly as natural microwave lasers, or masers. The signals we can measure from methanol masers are both strong and emitted at very specific frequencies.
"The maser signals also come from the regions where magnetic fields have the most to tell us about how stars form. With our new understanding of how methanol is affected by magnetic fields, we can finally start to interpret what we see", says team member Wouter Vlemmings, Chalmers.
Earlier attempts to measure the magnetic properties of methanol in laboratory conditions have met with problems. Instead, the scientists decided to build a theoretical model, making sure it was consistent both with previous theory and with the laboratory measurements.
"We developed a model of how methanol behaves in magnetic fields, starting from the principles of quantum mechanics. Soon, we found good agreement between the theoretical calculations and the experimental data that was available. That gave us the confidence to extrapolate to conditions we expect in space", explains Boy Lankhaar.
Still, the task turned out to be surprisingly challenging. Theoretical chemists Ad van der Avoird and Gerrit Groenenboom, both at Radboud University in the Netherlands, needed to make new calculations and correct previous work.
"Since methanol is a relatively simple molecule, we thought at first that the project would be easy. Instead, it turned out to be very complicated because we had to compute the properties of methanol in great detail", says Ad van der Avoird.
The new results open up new possibilities for understanding magnetic fields in the universe. They also show how problems can be solved in astrochemistry - where the disciplines of astronomy and chemistry meet. Huib Jan van Langevelde, team member and astronomer at the Joint Institute for VLBI ERIC and Leiden University, explains.
"It's amazing that such detailed calculations are required to reveal the molecular complexity which we need to interpret the very accurate measurements we make with today's best radio telescopes. It takes experts from both the chemistry and astrophysics disciplines to enable new discoveries in the future about molecules, magnetic fields and star formation", he says.
Robert Cumming | EurekAlert!
Work Done By Friction In Stopping A Moving Object
Strings (SiPjAjk) = S7P3A31 Base Sequence = 12735 String Sequence = 12735 - 3 - 31
A piece of luggage weighing 490 lbs is being moved on a horizontal conveyor belt at 6 miles per hour. The conveyor is then brought to an abrupt stop. The luggage then slides for 1.37 secs, during which time it covers 6 feet before coming to a complete stop.
(a) Calculate the work done by friction in bringing the luggage to a complete stop. The coefficient of friction between conveyor belt and luggage is 0.2.
(b) Confirm that the distance covered during the slide and the duration of the slide are 6 feet and 1.37 secs respectively.
(a) S7P3A31 (Force - Pull).
Pj Problem of interest is of type force. Work problems are generally of type force because force is the doer of work. In the case of white-collar work, intellectual force actuates materiality.
Consider the above diagram (FW.1):
Let W be the work done by friction to stop the moving object.
When the conveyor belt is stopped abruptly, the only force acting to bring the motion of the luggage to a stop is the friction force between the luggage and the belt. This friction force is equal to the coefficient of friction times the weight of the luggage (0.2 x 490 = 98 lbs). The work done by friction is equal to the kinetic energy of the luggage just before the conveyor belt was stopped.
Speed of conveyor belt = 6 miles/hr = 8.8 ft/sec
Mass of luggage = 490/32 = 15.31 slugs
So, Kinetic Energy = (mv2)/2 = (15.31 x 8.8 x 8.8)/2 = 592.8 ft-lbs.
So, W = kinetic energy = 592.8 ft-lbs.
(b) Strings are: S7P4A41 (Linear motion) and S7P5A51 (Physical Change - Duration) respectively.
To calculate distance assuming it was not given, we equate kinetic energy with work done by friction
So, 592.8 = (friction force) x distance
So, 592.8 = μ490 x distance = 0.2 x 490 x distance
So, the distance the luggage slid = 592.8/98 ≈ 6 feet.
To calculate duration of slide assuming it was not given, we use the impulse momentum formula:
R x t = m(vf - vo).
R is the resultant force, t is time the resultant force is acting on the luggage, m is mass of the luggage, vf is the luggage's final velocity and vo is the luggage's original velocity.
So, R = friction force = -98 lbs; m = 490/32; vo = 8.8 ft/sec; vf = 0.
So, -98t = (490/32)(0 - 8.8)
So, t = 1.37 secs.
In SI units: mass is in kilograms, distance is in meters, velocity is in meters/sec, acceleration is in meters/sec2, force is in Newton and work is in Joules.
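As a check on the arithmetic above, here is a short Python sketch that re-traces the solution's steps in US customary units, with g taken as 32 ft/s² as in the text.

```python
# Verify the luggage/friction numbers from the worked solution above.
g = 32.0                      # ft/s^2, as used in the text
weight = 490.0                # lbs
mu = 0.2                      # coefficient of friction
v0 = 6 * 5280 / 3600.0        # 6 miles per hour converted to ft/s -> 8.8 ft/s

mass = weight / g                            # slugs
kinetic_energy = 0.5 * mass * v0 ** 2        # ft-lbs; equals the work done by friction
friction_force = mu * weight                 # lbs

distance = kinetic_energy / friction_force   # ft, from W = F * d
duration = mass * v0 / friction_force        # s, from impulse-momentum: F * t = m * (v0 - 0)

print(f"speed            = {v0:.2f} ft/s")
print(f"work by friction = {kinetic_energy:.1f} ft-lbs")
print(f"slide distance   = {distance:.2f} ft")
print(f"slide duration   = {duration:.2f} s")
```

Running the sketch reproduces the values worked out above: roughly 593 ft-lbs of work, a slide of about 6 feet and a duration of about 1.37 seconds.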
Is there a Black Hole at the Centre of our Galaxy?
Scientists believe that at the centre of the Milky Way lies an unknown compact entity, thought to be a supermassive black hole.
At the centre of the Milky Way lies an unknown compact entity. Located in the constellation Sagittarius and coinciding with an intense radio source called Sagittarius A* (pronounced A-star), it boasts a mass over 2.5 million times that of our Sun, squeezed into a region no greater than the distance between the Earth and the Sun. Unfortunately, that's where our knowledge ends. Some believe it is a supermassive black hole while others have hypothesised more exotic objects. Scientists still know little about its nature or how it formed, and solving the mystery of this supermassive object is one of the greatest challenges in cosmology today. Our best evidence for a supermassive object comes from Doppler studies of the orbits of stars around the Galactic Centre. They are rotating with a period of 5.6 days, so fast that the only explanation is the existence of a single object.
Another star in particular - S2 - appears to orbit in just 15 years, allowing scientists to deduce both the mass and volume of the central dark object. The astoundingly high density calculated has led researchers to reject simpler explanations such as dense clusters of dark objects - neutron stars, planets, star-sized black holes and so on - as these would become unstable within such a reduced region and collapse. Instead, the search is on for more exotic candidates.
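To illustrate how such an orbit constrains the central mass (this calculation is not from the article), Kepler's third law, M = 4π²a³/(GP²), can be applied to S2. The 15-year period is the one quoted above; the semi-major axis of roughly 1000 AU used below is an assumed round figure, not a value given in the article.

```python
import math

# Rough enclosed-mass estimate from an orbit, via Kepler's third law:
#   M = 4 * pi^2 * a^3 / (G * P^2)
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30         # solar mass, kg
AU = 1.496e11            # astronomical unit, m
year = 3.156e7           # seconds in a year

P = 15.0 * year          # orbital period quoted in the text
a = 1000.0 * AU          # assumed semi-major axis (illustrative figure)

M = 4 * math.pi ** 2 * a ** 3 / (G * P ** 2)
print(f"Enclosed mass ~ {M / M_sun:.2e} solar masses")
```

With these inputs the enclosed mass comes out at a few million solar masses, consistent with the figure quoted earlier.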
A similar dynamical argument underpins the stellar-mass black hole candidates found in X-ray binaries: using the orbital period plus spectral measurements of the visible companion's orbital speed leads to a calculated system mass of about 35 solar masses, of which the dark object accounts for 8-10 solar masses; much too massive to be a neutron star, which has a limit of about 3 solar masses - hence a black hole.
Most in the astronomical community believe Sagittarius A* is a supermassive black hole, especially as some theories of galaxy formation indicate these reside at galactic centres, varying in size from millions to billions of solar masses. Further evidence that strengthens the case for the unseen object being a black hole is the emission of X-rays from its location, an indication of temperatures in the millions of Kelvins. This X-ray source at Sagittarius A* exhibits rapid variations, with time scales on the order of a millisecond. This suggests a source not larger than a light-millisecond or 300 km, so it is very compact. The only possibilities that we know that would place that much matter in such a small volume are black holes and neutron stars, and the consensus is that neutron stars can't be more massive than about 3 solar masses.
The formation of supermassive black holes is a subject that is still under investigation. It is still not completely clear whether they were the condensing seeds for galaxies or whether they are a result of galaxy formation. Others have speculated that Sagittarius A* may be a so-called boson star - a theoretical entity composed of exotic elementary particles that may also be candidates for dark matter. These strange stars have no surface and interact with normal matter only through gravity.
For now, there is no definitive evidence either way but researchers hope to have more answers soon. Upcoming projects such as ALMA and the European Southern Observatory's VLT Interferometer will image the complex dynamics of our Galactic Centre in unprecedented detail, revealing its turbulent processes and perhaps even observing stars that orbit the supermassive black hole in as little as a year. Moreover, the next generation of infrared interferometers should allow astronomers to see the 'shadow' cast by the gravitational deflection of light rays near the black hole and the effects of the black hole horizon. As boson stars have no horizon, this would provide an unambiguous signature of what lies at the centre of the Milky Way.
Last updated on: Tuesday 20th June 2017
Waterfall – Software Development Life Cycle Model
Software Development Life Cycle
The waterfall model is a sequential software development process, in which progress is seen as flowing steadily downwards like a waterfall through the phases of requirements definition, design, implementation (coding), testing and maintenance.
The waterfall development model originates in the manufacturing and construction industries; highly structured physical environments in which after-the-fact changes are prohibitively costly, if not impossible. Since no formal software development methodologies existed at the time, this hardware-oriented model was simply adapted for software development.
Requirement Gathering & Definition: This phase focuses on capturing the possible requirements of the system to be developed. Requirements are gathered in consultation with the end users.
Software Design: Prior to beginning the actual coding, it is essential to understand what actions are to be taken and what they should look like. The requirement specifications are studied in detail in this phase and the design of the system is prepared. The design specifications are the basis for the implementation and unit testing phase.
Development / Coding: After receiving the system design documents, the work is divided into various modules and the real coding is commenced. The system is first developed as small coding units, which are integrated in the subsequent phase. Every unit is tested for its functionality.
Testing: The units are integrated into a complete system and tested to verify that the modules coordinate properly and that the system behaves as per the specifications. Once the testing is completed, the software product is delivered to the customer.
Maintenance: This is a never-ending phase. Once the system is running in the production environment, problems come up. Issues related to the system are solved only after deployment, and they arise from time to time and need to be solved; hence this phase is referred to as maintenance.
Irreversible Processes: The Onsager and Boltzmann Pictures
The primary objective of this book is to develop a mathematical picture of measurable quantities that can be used to understand macroscopic observations of matter. As we have discussed in Chapter 1, that picture is necessarily stochastic and involves ensembles of systems that are prepared in similar ways. In Chapter 1 we outlined some of the techniques of the theory of stochastic processes that are necessary for understanding physical ensembles. Although we used Brownian motion to illustrate the physical relevance of stochastic processes, the stochastic point of view is essential for understanding all kinds of macroscopic observations. Fluctuations are inherent in all matter because of its molecular constitution. Indeed, one of the lessons of Brownian motion is that these fluctuations are observable and that they are closely related to the irreversible processes caused by molecular motion.
Keywords: Boltzmann Equation; Irreversible Process; Extensive Variable; Dissipation Function; Pressure Tensor
Linear Statistical Theory of Nonequilibrium Thermodynamics
Separation and location of microseism sources
Title: Separation and location of microseism sources
Authors: Moni, Aishwarya; Bean, Christopher J.
Permanent link: http://hdl.handle.net/10197/4681
Date: 20-Jun-2013
Abstract: Microseisms are ground vibrations caused largely by ocean gravity waves. Multiple spatially separate noise sources may be coincidentally active. A method for source separation and individual wavefield retrieval of microseisms using a single pair of seismic stations is introduced, and a method of back azimuth estimation assuming Rayleigh-wave arrivals of microseisms is described. These methods are combined to separate and locate sources of microseisms in a synthetic model and then applied to field microseismic recordings from Ireland in the Northeast Atlantic. It is shown that source separation is an important step prior to location for both accurate microseism locations and microseisms wavefield studies.
Type of material: Journal Article
Publisher: Wiley
Copyright (published version): 2013 American Geophysical Union
Keywords: Microseisms; Source separation
DOI: 10.1002/grl.50566
Language: en
Status of Item: Peer reviewed
Appears in Collections: Earth Sciences Research Collection
This item is available under the Attribution-NonCommercial-NoDerivs 3.0 Ireland. No item may be reproduced for commercial purposes. For other possible restrictions on use please refer to the publisher's URL where this is made available, or to notes contained in the item itself. Other terms may apply.
These particles may act as organic building blocks for even more complicated molecules and their discovery was completely unexpected because of the chemical composition of the atmosphere (which lacks oxygen and mainly consists of nitrogen and methane). The observation has now been verified on 16 different encounters and findings will be published in Geophysical Research Letters on November 28.
Professor Andrew Coates, researcher at UCL’s Mullard Space Science Laboratory and lead author of the paper, says: “Cassini’s electron spectrometer has enabled us to detect negative ions which have 10,000 times the mass of hydrogen. Additional rings of carbon can build up on these ions, forming molecules called polycyclic aromatic hydrocarbons, which may act as a basis for the earliest forms of life.
“Their existence poses questions about the processes involved in atmospheric chemistry and aerosol formation and we now think it most likely that these negative ions form in the upper atmosphere before moving closer to the surface, where they probably form the mist which shrouds the planet and which has hidden its secrets from us in the past. It was this mist which stopped the Voyager mission from examining Titan more closely in 1980 and was one of the reasons that Cassini was launched.”
The new paper builds on work published in Science (May 11) where the team found smaller tholins, up to 8,000 times the mass of hydrogen, forming away from the surface of Titan.
Dr Hunter Waite of the South West Research Institute in Texas and author of the earlier study, said: “Tholins are very large, complex, organic molecules thought to include chemical precursors to life. Understanding how they form could provide valuable insight into the origin of life in the solar system."
The Cassini-Huygens mission is a cooperative project of NASA, the European Space Agency and the Italian Space Agency. NASA's Jet Propulsion Laboratory, a division of the California Institute of Technology, manages the Cassini-Huygens mission for NASA's Science Mission Directorate, Washington, D.C. The Cassini orbiter was designed, developed and assembled at JPL.
A View from Emerging Technology from the arXiv
First Observation of Hawking Radiation
Hawking predicted it in 1974. Now physicists say they’ve seen it for the first time
For some time now, astronomers have been scanning the heavens looking for signs of Hawking radiation. So far, they’ve come up with zilch.
Today, it looks as if they’ve been beaten to the punch by a group of physicists who say they’ve created Hawking radiation in their lab. These guys reckon they can produce Hawking radiation in a repeatable unambiguous way, finally confirming Hawking’s prediction. Here’s how they did it.
Physicists have long realised that on the smallest scale, space is filled with a bubbling melee of particles leaping in and out of existence. These particles form as particle-antiparticle pairs and rapidly annihilate, returning their energy to the vacuum.
Hawking’s prediction came from thinking about what might happen to particle pairs that form at the edge of a black hole. He realised that if one of the pair were to cross the event horizon, it could never return. But its partner on the other side would be free to go.
To an observer it would look as if the black hole were producing a constant stream of quantum particles, which became known as Hawking radiation.
Since then, other physicists have pointed out that black holes aren’t the only place where event horizons can form. Any medium in which waves travel can support an event horizon and in theory, it should be possible to see Hawking radiation in these media too.
Today, Franco Belgiorno at the University of Milan and a few buddies say they’ve produced Hawking radiation by firing an intense laser pulse through a so-called nonlinear material, that is one in which the light itself changes the refractive index of the medium.
As the pulse moves through the material, so too does the change in refractive index, creating a kind of bow wave in which the refractive index is much higher than the surrounding material.
This increase in refractive index causes any light heading into it to slow down. "By choosing appropriate conditions, it is possible to bring the light waves to a standstill," say Belgiorno and co. This creates a horizon beyond which light cannot penetrate, what physicists call a white hole event horizon, the inverse of a black hole.
White holes aren’t so different to black holes (in fact Hawking argues that they are formally equivalent). And it’s not hard to imagine what happens to particle pairs that form at this type of horizon. If one of the pair crosses the horizon, it can make no headway and so becomes trapped. The other is free to go. So the horizon ought to look as if it is generating quantum particles.
It is this radiation that Belgiorno and co say they've seen by watching from the side as a high power infrared laser pulse ploughs through a lump of fused silica. Their pulse has a wavelength of 1055nm but the light they see emitted at right angles has a wavelength of around 850nm.
Of course, the big question is whether the emitted light is generated by some other mechanism such as Cerenkov radiation, scattering or, in particular, fluorescence, which is the hardest to rule out.
However, Belgiorno and pals say they can rule out all these sources of light for the radiation they see. In particular, they argue that the fluorescent light is well characterised and that it differs in various significant ways from the emissions they see. Therefore, they must be seeing Hawking radiation, they conclude.
That's an astounding claim and one that many physicists will want to pore over before popping any champagne corks.
Why is it important? One reason is that Hawking radiation is the only known way in which black holes can evaporate, and so a proof of its existence will have profound effects for cosmology and the way the universe will end.
And now that it's been observed once, expect a rash of other announcements as researchers race to repeat the result.
Ref: arxiv.org/abs/1009.4634: Hawking Radiation From Ultrashort Laser Pulse Filaments
Characteristics of the asymmetric circulation associated with tropical cyclone motion
This paper examines the characteristics of the asymmetric flow associated with tropical cyclone (TC) motion using the Final Analysis dataset produced after the Tropical Cyclone Motion Experiment (TCM-90). The wind data vertically-integrated between 850 and 300 hPa around a TC are first separated into an environment flow and a vortex circulation using the filtering algorithm of Kurihara et al. (1995). The latter is then Fourier-decomposed azimuthally to obtain the symmetric and asymmetric components. Nine TCs that occurred during the TCM-90 Experiment are examined.
For generally westward-moving TCs, the wavenumber-1 (WN-1) component is found to dominate the asymmetric flow. However, its pattern does not always exhibit a pair of counterrotating gyres as would be expected from previous modelling results (Fiorino and Elsberry, 1989). Further, the ventilation flow associated with WN-1 does not necessarily point towards the northwest. For a TC undergoing recurvature, the WN-2 flow becomes significant, and even has a larger magnitude than the WN-1 component, starting from about one day before recurvature. Consistent with the modelling results of Williams and Chan (1994), the WN-2 component also rotates counter-clockwise with time.
The growth and decay of the asymmetric components result from the interaction between the environmental flow and the symmetric flow of the TC through an energy exchange, in addition to such exchanges between the asymmetric components. Energy generally flows from the environment and the symmetric circulation of the TC to the WN-1 component during intensification but vice versa when the TC is weakening. The growth of the WN-2 component in recurving TCs is due to a transfer of energy from the environment, the symmetric circulation and the WN-1 flow. It is for this reason that the WN-1 flow becomes weaker than the WN-2 flow in such cases. The WN-1 component of fast-moving TCs is found to extract energy from the WN-2 component, in addition to those from the environment and the symmetric flow.
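To illustrate the azimuthal Fourier decomposition described in the abstract (a sketch only, not the authors' code), wind values sampled on a ring of equally spaced azimuths about the TC centre can be projected onto wavenumber components. The 36-point synthetic wind profile below is invented for the example.

```python
import numpy as np

def azimuthal_decomposition(v, n_waves=2):
    """Decompose values v sampled at equally spaced azimuths around the TC centre.

    Returns the symmetric (azimuthal-mean, WN-0) part and, for each wavenumber k,
    the amplitude and phase of the fit a_k*cos(k*theta) + b_k*sin(k*theta).
    """
    v = np.asarray(v, dtype=float)
    theta = np.linspace(0.0, 2.0 * np.pi, v.size, endpoint=False)
    symmetric = v.mean()
    components = {}
    for k in range(1, n_waves + 1):
        a_k = 2.0 / v.size * np.sum(v * np.cos(k * theta))
        b_k = 2.0 / v.size * np.sum(v * np.sin(k * theta))
        components[k] = (np.hypot(a_k, b_k), np.arctan2(b_k, a_k))
    return symmetric, components

if __name__ == "__main__":
    theta = np.linspace(0.0, 2.0 * np.pi, 36, endpoint=False)
    # invented example: 20 m/s symmetric swirl plus 5 m/s WN-1 and 2 m/s WN-2 signals
    v = 20 + 5 * np.cos(theta - 0.3) + 2 * np.cos(2 * theta + 1.0)
    mean, comps = azimuthal_decomposition(v, n_waves=2)
    print("WN-0 (symmetric):", round(mean, 2), "m/s")
    for k, (amp, phase) in comps.items():
        print(f"WN-{k}: amplitude {amp:.2f} m/s, phase {phase:.2f} rad")
```

The same projection applied ring by ring at each radius would give the WN-1 and WN-2 asymmetric flow fields discussed in the paper.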
Keywords: Vortex, Cyclone, Tropical Cyclone, Large Magnitude, Wind Data
- Chan, J. C. L., 1982: On the physical processes responsible for tropical cyclone motion. Ph.D. Thesis, Colorado State University, Ft. Collins, Colorado, 199 pp.
- Chan, J. C. L., Gray, W. M., 1982: Tropical cyclone movement and surrounding flow relationships. Mon. Wea. Rev., 110, 1354–1374.
- Chan, J. C. L., Williams, R. T., 1987: Analytical and numerical studies of the beta-effect in tropical cyclone motion. J. Atmos. Sci., 44, 1257–1265.
- Cheung, K. K. W., Chan, J. C. L., 1995: Fourier components of the circulation associated with tropical cyclone motion. Res. Rept. AP-95-03, Dept. of Phys. and Materials Sci., City University of Hong Kong, 71 pp.
- Elsberry, R. L., 1990: International experiments to study tropical cyclones in the western North Pacific. Bull. Amer. Meteor. Soc., 71, 1305–1316.
- Fiorino, M., Elsberry, R. L., 1989: Some aspects of vortex structure in tropical cyclone motion. J. Atmos. Sci., 46, 979–990.
- Flatau, M., Schubert, W. H., Stevens, D. E., 1994: The role of baroclinic processes in tropical cyclone motion: The influence of vertical tilt. J. Atmos. Sci., 51, 2589–2601.
- Holland, G. J., 1983: Tropical cyclone motion: Environmental interaction plus a beta effect. J. Atmos. Sci., 40, 328–342.
- Holland, G. J., Leslie, L. M., Diehl, B. C., 1992: Comments on "The detection of flow asymmetries in the tropical environment". Mon. Wea. Rev., 120, 2394–2397.
- Holland, G. J., Wang, Y., 1995: Baroclinic dynamics of simulated tropical cyclone recurvature. J. Atmos. Sci., 52, 410–426.
- Kurihara, Y., Bender, M. A., Ross, R. J., 1993: An initialization scheme of hurricane models by vortex specification. Mon. Wea. Rev., 121, 2030–2045.
- Ngan, K. W., Chan, J. C. L., 1995: Tropical cyclone motion - propagation vs steering. Preprints, 21st Conf. Hurricanes and Trop. Meteor., Amer. Meteor. Soc., Miami, April 24–28, 23–25.
- Reeder, M. J., Smith, R. K., 1991: The detection of flow asymmetries in the tropical cyclone environment. Mon. Wea. Rev., 119, 848–855.
- Rogers, E., Stephen, S. L., Deaven, D. G., DiMego, G. J., 1993: Data assimilation and forecasting for the Tropical Cyclone Motion Experiment at the National Meteorological Center. Preprints, 20th Conf. Hurr. and Trop. Meteor., San Antonio, Amer. Meteor. Soc., 329–330.
- Wang, Y., Li, X., 1995: Propagation of a tropical cyclone in a meridionally-varying zonal flow: An energetics analysis. J. Atmos. Sci., 52, 1421–1433.
- Wang, Y., Holland, G. J., 1995: On the interaction of tropical cyclone-scale vortices. Part IV: Baroclinic vortices. Quart. J. Roy. Meteor. Soc., 122, 95–126.
- Williams, R. T., Chan, J. C. L., 1994: Numerical studies of the beta effect in tropical cyclone motion. Part II: Zonal mean flow effects. J. Atmos. Sci., 51, 1065–1076.
- Wu, C. C., Emanuel, K. A., 1994: On hurricane outflow structure. J. Atmos. Sci., 51, 1995–2003.
- Wu, C. C., Emanuel, K. A., 1995a: Potential vorticity diagnostics of hurricane movement. Part I: A case study of Hurricane Bob (1991). Mon. Wea. Rev., 123, 69–92.
- Wu, C. C., Emanuel, K. A., 1995b: Potential vorticity diagnostics of hurricane movement. Part II: Tropical Storm Ana (1991) and Hurricane Andrew (1992). Mon. Wea. Rev., 123, 93–109.
Compact jets that shoot matter into space in a continuous stream at near the speed of light have long been assumed to be a unique feature of black holes. But these odd features of the universe may be more common than once thought.
Artist concept shows jets of material shooting out from the neutron star. Credit: NASA/JPL-Caltech/R. Hurt (SSC)
Astrophysicists using NASA's Spitzer Space Telescope recently spotted one of these jets around a super-dense dead star, confirming for the first time that neutron stars as well as black holes can produce these fire-hose-like jets of matter. A paper detailing their surprising discovery appears in this week's issue of the Astrophysical Journal Letters.
"For years, scientists suspected that something unique to black holes must be fueling the continuous compact jets because we only saw them coming from black hole systems,” said Simone Migliari, an astrophysicist at the University of California, San Diego’s Center for Astrophysics and Space Sciences and the lead author of the paper. “Now that Spitzer has revealed a steady jet coming from a neutron star in an X-ray binary system, we know that the jets must be fueled by something that both systems share.”
Kim McDonald | EurekAlert!
From Greenhouse Gas to 3-D Surface-Microporous Graphene
Tiny dents in the surface of graphene greatly enhance its potential as a supercapacitor. Even better, it can be made from carbon dioxide.
A material scientist at Michigan Technological University invented a novel approach to take carbon dioxide and turn it into 3-D graphene with micropores across its surface. The process is the focus of a new study published in the American Chemical Society's Applied Materials & Interfaces (DOI: 10.1021/acsami.7b07381).
The conversion of carbon dioxide to useful materials usually requires high energy input due to its ultrahigh stability. However, materials science professor Yun Hang Hu and his research team created a heat-releasing reaction between carbon dioxide and sodium to synthesize 3-D surface-microporous graphene.
“3-D surface-microporous graphene is a brand-new material,” Hu says, explaining the material's surface is pockmarked with micropores and folds into larger mesopores, which both increase the surface area available for adsorption of electrolyte ions. “It would be an excellent electrode material for energy storage devices.”
Basically, a supercapacitor material needs to store—and release—a charge. The limiting factor is how quickly ions can move through the material.
The supercapacitive properties of the unique structure of 3-D surface-microporous graphene make it suitable for elevators, buses, cranes and any application that requires a rapid charge/discharge cycle. Supercapacitors are an important type of energy storage device and have been widely used for regenerative braking systems in hybrid vehicles.
Current commercialized supercapacitors employ activated carbon, whose swaths of micropores provide efficient charge accumulation. However, electrolyte ions have difficulty diffusing into or through activated carbon's deep micropores, increasing the charging time.
"The new 3-D surface-microporous graphene solves this," Hu says. "The interconnected mesopores are channels that can act as an electrolyte reservoir and the surface-micropores adsorb electrolyte ions without needing to pull the ions deep inside the micropore."
The mesopore is like a harbor and the electrolyte ions are ships that can dock in the micropores. The ions don't have to travel a great distance between sailing and docking, which greatly speeds up the charge/discharge cycle. As a result, the material exhibited an ultrahigh areal capacitance of 1.28 F/cm2, along with excellent rate capability and superb cycling stability for a supercapacitor.
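For orientation, an areal capacitance such as the 1.28 F/cm2 quoted above is typically extracted from a constant-current (galvanostatic) discharge curve. The sketch below uses the standard relation C = I * dt / (dV * A) with invented numbers; it is not data from the study.

```python
# Illustrative only: standard galvanostatic estimate of areal capacitance.
# The current, discharge time, voltage window and electrode area are invented numbers.
current_a = 2.0e-3        # discharge current, A
discharge_time_s = 640.0  # time to sweep the voltage window, s
voltage_window_v = 1.0    # usable voltage window, V
electrode_area_cm2 = 1.0  # geometric electrode area, cm^2

areal_capacitance = current_a * discharge_time_s / (voltage_window_v * electrode_area_cm2)
print(f"Areal capacitance: {areal_capacitance:.2f} F/cm^2")   # -> 1.28 F/cm^2
```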
From Thin Air
To synthesize the material from carbon dioxide, Hu’s team added carbon dioxide to sodium, followed by increasing temperature to 520 degrees Celsius. The reaction can release energy, as heat, instead of requiring an energy input.
During the process, carbon dioxide not only forms 3-D graphene sheets, but also digs the micropores. The tiny dents are only 0.54 nanometers deep in the surface layers of graphene.
Source: Michigan Technological University – 02.07.2017.
Scientists have discovered a new, persistent structure in one of two radiation belts surrounding Earth
NASA's twin Van Allen Probes spacecraft have shown that high-energy electrons in the inner radiation belt display a persistent pattern that resembles slanted zebra stripes. Surprisingly, this structure is produced by the slow rotation of Earth, previously considered incapable of affecting the motion of radiation belt particles, which have velocities approaching the speed of light.
Scientists had previously believed that increased solar wind activity was the primary force behind any structures in our planet's radiation belts. However, these zebra stripes were shown to be visible even during low solar wind activity, which prompted a new search for how they were generated. That quest led to the unexpected discovery that the stripes are caused by the rotation of Earth. The findings are reported in the March 20, 2014, issue of Nature.
"It is because of the unprecedented resolution of our energetic particle experiment, RBSPICE, that we now understand that the inner belt electrons are, in fact, always organized in zebra patterns," said Aleksandr Ukhorskiy, lead author of the paper at The Johns Hopkins Applied Physics Laboratory, or APL, in Laurel, Md. "Furthermore, our modeling clearly identifies Earth's rotation as the mechanism creating these patterns. It is truly humbling, as a theoretician, to see how quickly new data can change our understanding of physical properties."
Because of the tilt in Earth's magnetic field axis, the planet's rotation generates an oscillating, weak electric field that permeates through the entire inner radiation belt. To understand how that field affects the electrons, Ukhorskiy suggested imagining that the electrons are like a viscous fluid. The global oscillations slowly stretch and fold the fluid, much like taffy is stretched and folded in a candy store machine. The stretching and folding process results in the striped pattern observed across the entire inner belt, which extends from just above Earth's atmosphere, about 500 miles above the planet's surface, up to roughly 8,000 miles.
The radiation belts are dynamic doughnut-shaped regions around our planet, extending high above the atmosphere, made up of high-energy particles, both electrons and charged particles called ions, which are trapped by Earth's magnetic field. Radiation levels across the belts are affected by solar activity that causes energy and particles to flow into near-Earth space. During active times, radiation levels can dramatically increase, which can create hazardous space weather conditions that harm orbiting spacecraft and endanger humans in space. It is the goal of the Van Allen Probes mission to understand how and why radiation levels in the belts change with time.
"The RBSPICE instrument has remarkably fine resolution and so it was able to bring into focus a phenomena that we previously didn't even know existed," said David Sibeck, the mission scientist for the Van Allen Probes at NASA's Goddard Space Flight Center in Greenbelt, Md. "Better yet, we have a great team of scientists to take advantage of these unprecedented observations: We couldn't have interpreted this data without analysis from strong theoreticians."
NASA launched the Van Allen Probes in the summer of 2012. APL built and operates the probes for NASA's Science Mission Directorate. This is the second mission in NASA's Living With a Star program, which Goddard manages. The program explores aspects of the connected sun-Earth system that directly affect life and society.
Susan Hendrix | EurekAlert!
A calibrated 120 kHz single-beam echo-sounder was integrated into an ocean glider and deployed in the Weddell Sea, Southern Ocean. The glider was deployed for two short periods in January 2012, in separate survey boxes on the continental shelf to the east of the Antarctic Peninsula, to assess the distribution of Antarctic krill (Euphausia superba). During the glider missions, a research vessel undertook acoustic transects using a calibrated, hull-mounted, multi-frequency echo-sounder. Net hauls were taken to validate acoustic targets and parameterize acoustic models. Krill targets were identified using a thresholded schools analysis technique (SHAPES), and acoustic data were converted to krill density using the stochastic distorted-wave Born approximation (SDWBA) target strength model. A sensitivity analysis of glider pitch and roll indicated that, if not taken into account, glider orientation can impact density estimates by up to 8-fold. Glider-based, echo-sounder—derived krill density profiles for the two survey boxes showed features coherent with ship-borne measurements, with peak densities in both boxes around a depth of 60 m. Monte Carlo simulation of glider subsampling of ship-borne data showed no significant difference from observed profiles. Simulated glider dives required at least an order of magnitude more time than the ship to similarly estimate the abundance of krill within the sample regions. These analyses highlight the need for suitable sampling strategies for glider-based observations and are our first steps toward using autonomous underwater vehicles for ecosystem assessment and long-term monitoring. With appropriate survey design, gliders can be used for estimating krill distribution and abundance.
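The Monte Carlo subsampling mentioned in the abstract can be pictured with a toy sketch: repeatedly sample a ship-borne density record at glider-like effort and compare the resulting means with the full-record mean. Everything below (the synthetic densities, the number of glider profiles, the number of trials) is invented for illustration and is not the study's data or code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ship-borne krill density record along a transect (arbitrary units)
ship_density = rng.gamma(shape=2.0, scale=15.0, size=2000)
full_mean = ship_density.mean()

# A glider samples far fewer profiles along the same track
n_glider_profiles = 60
trial_means = np.array([
    rng.choice(ship_density, size=n_glider_profiles, replace=False).mean()
    for _ in range(10_000)
])

low, high = np.percentile(trial_means, [2.5, 97.5])
print(f"Ship mean: {full_mean:.1f}; glider-subsample 95% range: {low:.1f} to {high:.1f}")
```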
Interface defining methods for algorithms which search for the convex hull of a specified set of points. Namespace: AForge.Math.Geometry
Assembly: AForge.Math (in AForge.Math.dll)
public interface IConvexHullAlgorithm
The interface defines a method which should be implemented by different classes performing a convex hull search for a specified set of points.
Note: All algorithms implementing this interface should follow two rules for the found convex hull (a sketch illustrating both rules follows the list):
- the first point in the returned list is the point with lowest X coordinate (and with lowest Y if there are several points with the same X value);
- points in the returned list are given in counter clockwise order (Cartesian coordinate system). | <urn:uuid:1f86b724-9c26-4491-91e5-e99c2fb25957> | 2.53125 | 172 | Documentation | Software Dev. | 55.480172 | 95,479,102 |
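A minimal sketch of both conventions, written in Python rather than C# purely for illustration (the function names are not part of the AForge.NET API): Andrew's monotone-chain algorithm naturally starts the hull at the point with the lowest X (lowest Y as a tie-break) and returns the vertices in counter-clockwise order in a Cartesian coordinate system.

```python
def cross(o, a, b):
    """Z-component of the cross product of vectors OA and OB (positive for a left turn)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Monotone-chain convex hull: starts at the lowest-X point, counter-clockwise order."""
    pts = sorted(set(points))              # lexicographic sort puts the lowest-X point first
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:                          # build the lower hull, left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):                # build the upper hull, right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]         # concatenation is counter-clockwise

print(convex_hull([(1, 1), (0, 0), (2, 2), (2, 0), (0, 2), (1, 3)]))
# -> [(0, 0), (2, 0), (2, 2), (1, 3), (0, 2)]  (interior point (1, 1) is excluded)
```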
News Release 13-062
Not Slippery When Wet: Geckos Adhere to Surfaces Submerged Underwater
University of Akron study may help inform future bio-inspired gecko-like adhesives
April 4, 2013
Geckos are known for their sticky adhesive toes that allow them to stick to, climb on, and run along surfaces in any orientation--even upside down! But until recently, it was not well understood how geckos kept their sticking ability even on wet surfaces, as are common in the tropical regions in which most geckos live. A 2012 study in which geckos slipped on wet glass perplexed scientists trying to unlock the key to gecko adhesion in climates with plentiful rain and moisture.
A study supported by the National Science Foundation and published in the Proceedings of the National Academy of Sciences this week solves the mystery, showing that wet, water-repellant surfaces, like those of leaves and tree trunks, actually secure a gecko's grip in a manner similar to dry surfaces.
Researchers from the University of Akron, led by integrated bioscience doctoral candidate Alyssa Stark, tested geckos on four different surfaces. The surfaces ranged from hydrophilic--those that liquids spread across when wet, like glass--to hydrophobic--water-repellent surfaces on which liquids bead, like the natural leaves geckos walk on--and intermediate ones, like acrylic sheets. Geckos were tested on these surfaces both when the surfaces were dry and when they were submerged underwater, and water completely covered the gecko's feet.
Fitting a small harness around the pelvis, the researchers gently pulled each gecko along the substrate until its feet began to slip; at this point the maximum force with which the gecko could stick was measured. On wet glass, geckos slipped and could not maintain adhesion. However, when tested on more hydrophobic surfaces, geckos stuck just as well to the wet surface as they did to the dry ones, and they stuck even better to wet Teflon than to dry.
To understand these findings, researchers developed a model that explains the results from the gecko study and may also help inform future bio-inspired gecko-like adhesives that can maintain adhesion underwater.
For more details, see: Geckos keep firm grip in wet natural habitat.
A tokay gecko (Gekko gecko) clings to leaf stem wet with water droplets.
Andrew J. Lovinger, NSF, (703) 292-4933, email: firstname.lastname@example.org
Graphene, a strong, lightweight carbon honeycombed structure that’s only one atom thick, holds great promise for energy research and development. Recently scientists with the Fluid Interface Reactions, Structures, and Transport (FIRST) Energy Frontier Research Center (EFRC), led by the US Department of Energy’s Oak Ridge National Laboratory, revealed graphene can serve as a proton-selective permeable membrane, providing a new basis for streamlined and more efficient energy technologies such as improved fuel cells.
The work, published in the March 17 issue of Nature Communications, pinpoints unprecedented proton movement through inherent atomic-scale defects, or gaps, in graphene.
“Now you’re able to take a barrier that you can make very thin, like graphene, and change it so you build gates on a molecular scale,” says principal investigator Franz Geiger of Northwestern University, the senior author and a FIRST researcher.
The foundation for the study was laid six years ago at ORNL as part of DOE’s EFRC initiative to accelerate the scientific breakthroughs needed to build a new 21st century energy economy. The goal of FIRST is to use interdisciplinary research to develop both a fundamental understanding and validated, predictive models of the unique nanoscale environment at fluid–solid interfaces, which will enable transformative advances in electrical energy storage and catalysis, according to FIRST Director David Wesolowski.
Of the paper’s 15 authors, all are FIRST researchers with diverse science backgrounds ranging from chemistry to computer modeling. Pooling their expertise, the scientists investigated the mechanisms and structure of graphene using a multifaceted theoretical, experimental, materials synthesis, and computational approach.
Science from the ground up
With a tight lattice of carbon reminiscent of chicken wire, pristine graphene was believed to be impenetrable. Current studies, however, have shown that in aqueous solutions, graphene allows surprising numbers of protons to pass through its atomic structure.
The researchers’ first step was to create an atomically thin layer of graphene on fused silica, an effort led by ORNL’s Ivan Vlassiouk, an expert in the synthesis of two-dimensional materials including graphene using chemical vapor deposition techniques.
Then Raymond Unocic at ORNL’s Center for Nanophase Materials Sciences, a DOE Office of Science User Facility, analyzed the graphene using an aberration-corrected scanning transmission electron microscope (STEM). The high-powered microscope, a state-of-the-art technology, allowed direct imaging of individual carbon atoms of the adjoining hexagons that compose graphene.
Unocic and associates were able to focus on rare, naturally occurring atomic-scale defects in graphene that allowed aqueous protons to “hop” through holes in the thin, strong single layer.
Regions of missing atoms are so small that they cannot be detected by standard microscopic techniques, so access to ORNL’s STEM facility was critical. “To be able to see these images—the individual positions of the carbon atoms in the graphene—is just spectacular,” says Geiger.
The scientists later isolated the paths of movement the protons followed. By creating a single-layer sliver of graphene on silica glass, separated from the glass by mere molecules of water, the scientists designed a trap for the hopping protons. Changes in the acidity of the aqueous solution on either side of the graphene layer revealed the covert gating mechanism in the material’s structure, which they were able to detect using a laser technique called second harmonic generation.
“The major advantage of second harmonic generation,” says Northwestern’s Jennifer Achtyl, lead author of the Nature Communication article, “is that it is highly sensitive to chemistry at the interface or, in this case, the nanometer-thick environment between the aqueous solution and the surface of the silica. This acute sensitivity and the fact that these experiments can be run nondestructively were critical to our ability to capture experimental evidence of the transfer of protons through graphene.”
Using computational methods to analyze the configurations of defects in the graphene, the FIRST researchers isolated proton-transfer occurrences at defect areas. In addition, the team demonstrated that even the smallest of molecules, hydrogen and helium, are unable to pass through the proton gates under normal conditions.
“Finally, when we were able to put all the pieces together, we made a conclusive statement that—even though there’s a high energetic barrier for proton transport through graphene—if you lower that energetic barrier, you can allow protons to pass right through,” says Unocic. “This opens a new pathway for the atomic-scale engineering of graphene.”
Key to energy’s future?
Although the scientists focused on the fundamental mechanics of graphene surfaces, the results of this study open the doors for further graphene development across the energy economy and beyond.
With fuel cells, to name but one area of promise, issues range from cumbersome size to fleeting efficiency. Isolating single ion-transfer mechanisms and structural gaps in graphene could facilitate improvements in the production, transportation and use of energy.
“We’ve looked at this problem from really as many sides as you can possibly look at it with today’s technology,” Geiger says. “It makes a very strong case for taking the effect that we’ve observed and the mechanism that we’ve found and doing something technologically relevant with it. There are so many people working with graphene that to show how aqueous protons actually transfer across graphene will make a big difference.”
Coauthors of “Aqueous Proton Transfer Across Single Layer Graphene” are ORNL’s Unocic, Robert Sacci, Vlassiouk, Pasquale Fulvio, Panchapakesan Ganesh, Wesolowski and Sheng Dai; Northwestern University’s Achtyl and Geiger; University of Virginia’s Lijun Xu, Yu Cai and Matthew Neurock (all three now at the University of Minnesota); and Pennsylvania State University’s Muralikrishna Raju, Weiwei Zhang and Adri van Duin.
This work was supported by the FIRST Center, an EFRC funded by the US Department of Energy’s Office of Science. Microscopy was conducted as part of a user proposal at ORNL’s Center for Nanophase Materials Sciences.
ORNL is managed by UT-Battelle for the Department of Energy’s Office of Science. DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time.–by Ashanti B. Washington
Dawn Levy | newswise
Machine-learning predicted a superhard and high-energy-density tungsten nitride
18.07.2018 | Science China Press
In borophene, boundaries are no barrier
17.07.2018 | Rice University
For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth.
To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength...
For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications.
Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar...
Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction.
A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in in the synthesis of new chemical...
Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy.
"Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy....
Ultra-short, high-intensity X-ray flashes open the door to the foundations of chemical reactions. Free-electron lasers generate these kinds of pulses, but there is a catch: the pulses vary in duration and energy. An international research team has now presented a solution: Using a ring of 16 detectors and a circularly polarized laser beam, they can determine both factors with attosecond accuracy.
Free-electron lasers (FELs) generate extremely short and intense X-ray flashes. Researchers can use these flashes to resolve structures with diameters on the...
13.07.2018 | Event News
12.07.2018 | Event News
03.07.2018 | Event News
18.07.2018 | Materials Sciences
18.07.2018 | Life Sciences
18.07.2018 | Health and Medicine | <urn:uuid:8d8744a5-0182-4cdb-82a5-b1729d9cdab3> | 3.453125 | 2,053 | Content Listing | Science & Tech. | 29.670409 | 95,479,126 |
Switching between mobile and sessile life styles is typical of many species of the bacteria of the Roseobacter group. Microbiologists at the Leibniz Institute DSMZ have now demonstrated that the genes responsible for this usually are located outside the bacterial chromosome, on a single plasmid. The study shows that even characteristics as complex as the ability to form biofilms can be passed on via horizontal gene transfer. Within the Roseobacter group, this has actually happened multiple times.
Their diverse metabolic characteristics make bacteria of the Roseobacter group some of the most abundant microorganisms in nutrient-rich coastal waters. Roseobacters together with other bacteria form highly complex biofilms that have been dubbed “cities of microbes”.
However, they can also be found swimming freely in the oceans. Switching between mobile and sessile life styles is typical of many species of these marine bacteria. This flexibility is based on their ability to actively move with the help of flagella, while they are also capable of reversibly attaching to surfaces.
Jörn Petersen, a microbiologist at the Leibniz Institute DSMZ – German Collection of Microorganisms and Cell Cultures, in Braunschweig, Germany, and his colleagues have now demonstrated that the genes responsible for biofilm formation usually are located outside the bacterial chromosome on a single plasmid.
They reached this conclusion based on a physiological and genetic study of 33 Roseobacter strains. First, Petersen and colleagues showed that all bacteria that are efficient biofilm formers also are mobile. By removing the plasmids responsible for biofilm formation, they were able to demonstrate that the bacteria not only lost their ability of adhering to surfaces, but also their swimming capability.
“Thus, it is obvious that the responsible genes are located on these extrachromosomal elements,” said Petersen. Genes that are passed on via plasmids are able to cross the species boundary relatively easily. “Our studies show that even characteristics as complex as the ability of biofilm formation can be passed on via horizontal gene transfer,” said Petersen. Within the Roseobacter group, this type of horizontal gene transfer has actually happened multiple times, reflecting the great importance of plasmids for quick adaptation to new ecological niches.
The majority of the 33 Roseobacter strains studied were type strains, i.e., reference strains representative of the world’s bacterial diversity, which are archived in the DSMZ collection. Unlike previous studies, the DSMZ researchers genetically analyzed the group across its full evolutionary range. “Our present studies by far exceed anecdotal findings previously reported for individual model organisms. Moreover, they demonstrate the close link between basic research and collection activities here at DSMZ. As such, they also reflect DSMZ’s importance as one of the world’s leading resource centers for biological materials,” said Petersen.
The studies were supported by the Deutsche Forschungsgemeinschaft (DFG) and are part of the DFG Collaborative Research Center TRR51, “Roseobacter.” Results were recently published in Nature Publishing Group’s distinguished ISME Journal.
Michael V, Frank O, Bartling P, Scheuner C, Göker M, Brinkmann H, Petersen J. (2016). Biofilm plasmids with a rhamnose operon are widely distributed determinants of the ʻswim-or-stickʼ lifestyle in roseobacters. ISME J [Epub ahead of print]. http://doi.org/10.1038/ismej.2016.30
Petersen J, Frank O, Göker M, Pradella S. (2013). Extrachromosomal, extraordinary and essential - the plasmids of the Roseobacter clade. Applied Microbiology and Biotechnology 97: 2805-2815.
Frank O, Michael V, Päuker O, Boedeker C, Jogler C, Rohde M, Petersen J. (2015). Plasmid curing and the loss of grip--the 65-kb replicon of Phaeobacter inhibens DSM 17395 is required for biofilm formation, motility and the colonization of marine algae. Systematic and Applied Microbiology 38:120-127.
PD Dr. Jörn Petersen
Department Protists and Cyanobacteria (PuC)
Head of Project A5 "Plasmids", Collaborative Research Center Transregio (TRR51)
Phone.: 0531 2616-209
Christian Engel | idw - Informationsdienst Wissenschaft
We'll start with climate change for two reasons. First, of all of the specific issues in this lesson, this is the one that potentially has the most devastating impact because of the scale of the problem. If the climate continues to change, the impacts will likely be catastrophic and on a global scale. Second, climate change will likely impact all of the other sectors of sustainability and society, including all of those listed in this section. It is absolutely essential to understand climate change if you want to address sustainability. The following is a short list of facts that indicate why we should be concerned about the human influence on the climate.
First, a few important terms:
- Greenhouse effect: the term used to describe the phenomenon whereby infrared heat warms the lower atmosphere of the earth or another planet due to the gaseous content of the atmosphere.
- Enhanced greenhouse effect: This occurs when the magnitude of the greenhouse effect is enhanced by human activity, due to the emission of greenhouse gasses at an unnaturally high level.
- Greenhouse gas: a gas that absorbs infrared radiation and contributes to the greenhouse effect.
- Anthropogenic: caused by humans.
- Anthropogenic climate change: the component of climate change that is believed to be caused by humans.
Fact 1: The Greenhouse Effect is Settled Science
The greenhouse effect is a universally accepted natural phenomenon, and carbon dioxide (CO2) is one of the primary greenhouse gases. Without it, life on earth would not be possible. The video below from NASA does the best job of succinctly explaining the greenhouse effect of any video I've found. It is not the most high-tech video out there, but don't let that distract you from the content. (For those of you who have been around long enough to remember a teacher popping a tape into a VCR player connected to one of those big CRT televisions, this may spark some memories.)
Please also note that methane is considered approximately 30 times as powerful as carbon dioxide in terms of causing increased warming (over a 100 year period). Methane is the primary component of natural gas and is what gives natural gas its energy. If natural gas is burned, it releases about half as much CO2 as if you burn an equivalent amount of coal. But if natural gas leaks or is otherwise emitted, it is about 30 times more potent than carbon dioxide. Despite this, carbon dioxide reduction is the focus of the greenhouse gas emissions reductions because it is far and away the biggest contributor to anthropogenic greenhouse gas emissions.
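To make the burn-versus-leak comparison concrete, here is a back-of-the-envelope sketch using the roughly 30x figure quoted above. The molar masses are standard values; the rest is simple arithmetic, not a rigorous life-cycle analysis.

```python
# Back-of-the-envelope comparison using the ~30x (100-year) figure quoted in the text.
GWP_100_METHANE = 30.0   # kg CO2-equivalent per kg of methane released unburned

def co2_from_burning(kg_ch4: float) -> float:
    """CO2 released by completely burning methane: CH4 + 2 O2 -> CO2 + 2 H2O (44/16 by mass)."""
    return kg_ch4 * 44.0 / 16.0

def co2e_from_leaking(kg_ch4: float) -> float:
    """CO2-equivalent warming effect of leaking the same mass of methane."""
    return kg_ch4 * GWP_100_METHANE

print(f"Burn 1 kg CH4: {co2_from_burning(1.0):.2f} kg CO2")
print(f"Leak 1 kg CH4: {co2e_from_leaking(1.0):.1f} kg CO2-equivalent")
# Leaking a kilogram of methane is roughly ten times worse than burning it,
# which is why leakage rates matter so much for natural gas.
```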
Fact 2: Carbon Dioxide Levels are Increasing Due to Human Activity
There are a few fundamental things to know regarding the carbon dioxide content of the atmosphere. First, the amount of CO2 in the atmosphere is measured in parts per million (ppm) by volume: a concentration of 1 ppm means that for every million molecules of air, one is a molecule of CO2. The current concentration of carbon dioxide is a little more than 400 ppm. Because a CO2 molecule (molar mass about 44 g/mol) is heavier than the average air molecule (about 29 g/mol), 400 ppm by volume works out to roughly 0.6 g of CO2 in every kilogram of air. Second, the atmosphere is considered well mixed: localized variations occur, but the current CO2 concentration is effectively the same no matter where you are on earth.
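The conversion from ppm by volume to grams per kilogram of air can be checked in a couple of lines (the molar masses are standard values; the 400 ppm figure is the one quoted above):

```python
# Convert an atmospheric CO2 mole fraction (ppm by volume) into grams of CO2 per kg of air.
PPM_CO2 = 400.0     # concentration quoted in the text
M_CO2 = 44.01       # g/mol
M_AIR = 28.97       # g/mol, average molar mass of dry air

mass_fraction = (PPM_CO2 * 1e-6) * (M_CO2 / M_AIR)          # kg of CO2 per kg of air
print(f"{mass_fraction * 1000:.2f} g of CO2 per kg of air")  # about 0.61 g
```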
There is no dispute that the concentration of carbon dioxide in the atmosphere has been rising for about the past 150 years. We have been directly measuring the atmospheric concentration since 1958 at the Mauna Loa Observatory in Hawaii. We also know, with a very high level of certainty, the concentration of the ancient atmosphere through time, thanks to proxy measures such as ice core samples from ancient ice. The current levels of CO2 are almost certainly unprecedented in the past 800,000 years (source: National Academy of Sciences). The chart below depicts the carbon dioxide levels in the atmosphere for the past 400,000 years.
It is an established fact that the burning of fossil fuels releases carbon dioxide and that the concentration of carbon dioxide has been increasing rapidly since around the beginning of the Industrial Revolution in the late 1700s. The Industrial Revolution is characterized by the increased use of fossil fuels - first coal, then oil, then natural gas. All of these non-renewable energy sources release CO2 when burned, and aside from minor natural occurrences like volcanic eruptions, are what has primarily caused the increased carbon dioxide concentration over the past 200+ years.
In short, energy is the primary culprit in anthropogenic greenhouse gas emissions. In fact, according to the International Energy Agency, two-thirds of global anthropogenic greenhouse gas emissions are due to energy use and production (source: IEA, "Energy and Climate Change," World Energy Outlook 2015). This boils down to the fact that we are emitting carbon dioxide and other greenhouse gases at rates faster than can naturally be absorbed. This causes an imbalance, and thus the concentration increases.
Mythbusting: The Global Carbon Cycle
It is not unusual to hear something like the following as a reason to be skeptical of anthropogenic climate change: "The earth naturally emits WAY more CO2 than humans do. The emissions are so relatively small that they cannot have an impact on CO2 concentrations, never mind climate change."
The earth does, in fact, emit significantly more CO2 than humans do! The image below is from the Intergovernmental Panel on Climate Change's (IPCC) most recent report, called the Fifth Assessment Report or simply AR5. This is an illustration of the global carbon cycle. Carbon, like most other elements, is constantly moving around the earth, e.g. being emitted and absorbed by oceans, being taken up by plants, being released by decaying plants, being released by volcanoes, etc. The carbon cycle illustrates this process.
This is a pretty busy image, so I'll summarize it for you: Humans directly cause about 9 billion tonnes (Gt) of carbon to enter the atmosphere each year. Natural emissions are on the order of 170 Gt per year. Hmm, okay, so there are way more natural than anthropogenic emissions. So why care so much about the measly 9 billion anthropogenic tonnes? As it turns out, if there were no anthropogenic emissions, the carbon cycle would likely even out, or perhaps even cause a reduction in carbon in the atmosphere. There are many natural processes that absorb carbon, mostly oceans, and vegetation. According to the IPCC, the total increase in carbon in the atmosphere is only about 4 Gt per year (including anthropogenic emissions). If you do a little math it becomes apparent: if those 9 Gt of emissions caused by humans were not there, then there would likely be no increase in overall concentration. Even though the relative contribution is small, anthropogenic emissions throw the global carbon cycle out of whack.
One good analogy of this process is weight gain. Let's say you average around 2,000 calories of food intake each day, and on average you burn off the same amount each day. If this continues over time, you will not gain weight. But if you add one extra 100 calorie snack each day, it will throw this balance (think of it as a calorie or energy cycle if you want to) out of whack. Even though you are only increasing your calorie intake by a measly 5%, over time this will cause weight gain. Well, it appears that the earth has put on some serious carbon weight in the past ~200 years, and it is almost entirely due to the extra human emissions!
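The bookkeeping behind both the carbon-cycle numbers and the calorie analogy is the same simple accumulation, made explicit in the sketch below. The rounded fluxes are the ones quoted above; the natural uptake is inferred so that the net increase matches the roughly 4 Gt per year figure.

```python
# Toy carbon budget using the rounded figures quoted above (Gt of carbon per year).
natural_emissions = 170.0   # natural sources (oceans, soils, vegetation, etc.)
human_emissions = 9.0       # fossil fuels and other anthropogenic sources
natural_uptake = 175.0      # inferred so the net increase matches the ~4 Gt quoted above

net_with_humans = natural_emissions + human_emissions - natural_uptake
net_without_humans = natural_emissions - natural_uptake

print(f"Net change with human emissions:    {net_with_humans:+.0f} Gt C per year")
print(f"Net change without human emissions: {net_without_humans:+.0f} Gt C per year")
# A small extra flux, added year after year, is what drives the rising concentration,
# exactly like the extra 100-calorie snack in the weight-gain analogy.
```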
Fact 3: The Climate Is Warming
Humans have been taking direct temperature measurements since about 1880. There has been an upward trend in global temperature since around 1900, and the increase has become very sharp since about 1980.
Fact 4: If Climate Change Occurs as Many Scientists Believe, the Results Will Almost Certainly Be Catastrophic
There is wide consensus that if the climate continues to change and CO2 levels continue to rise the results will not be good (okay, "not good" is a pretty big understatement). As the Intergovernmental Panel on Climate Change (IPCC) stated in their 2007 report: "Taken as a whole, the range of published evidence indicates that the net damage costs of climate change are likely to be significant and to increase over time" (source: IPCC, quoted by NASA). This is a stuffy way of saying that "things will probably be really bad and continue to get worse." The short report below outlines some of the possible impacts, some of which have already begun to occur. Note that I am not saying that all of these things will happen, even if climate change continues, but it is meant as a survey of some of the most commonly cited negative impacts of climate change. Also note that some of the likely consequences may be positive in some areas, including extended growing seasons in cool climate zones and some increased growth of plants due to extra carbon being available, but the overall impact will very likely be overwhelmingly negative.
To Read Now
- "Global Warming Impacts: The consequences of climate change are already here." Union of Concerned Scientists
It is also very important to note that the most vulnerable to these impacts will be low-income and otherwise marginalized people all over the world. As the IPCC states in their 2014 assessment: "(Climate change) risks are unevenly distributed and are generally greater for disadvantaged people and communities in countries at all levels of development" (IPCC, Climate Change 2014 Syntheses Report, p. 13). Translation: the people with little power and/or resources will be disproportionately affected by climate change, regardless of whether they live in a low- or high-income country.
Fact 5: There is Broad Scientific Consensus that Humans are Most Likely the Primary Driver of Observed Climate Change
Multiple reports in peer-reviewed journals have found that at least 97% of scientists actively publishing in the climate field agree that the climate change observed in the past century is likely due to human influence, i.e. it is anthropogenic. See these links to some studies. In 2015, 24 of Britain's top "Learned Societies" - groups of scientific experts, basically - wrote a letter urging that we need to establish a "zero-carbon world" early in the second half of the 21st century. In the past 15 years, 18 U.S. scientific societies have confirmed that climate change is likely being caused by humans. Big players in the private sector are concerned as well. For example, CEOs from 43 companies in various sectors (with over $1.2 trillion of revenue in 2014) signed an open letter urging action in April of 2015. Even Exxon Mobil states as their official position on climate change (as of the summer of 2017) that:
The risk of climate change is clear and the risk warrants action. Increasing carbon emissions in the atmosphere are having a warming effect. There is a broad scientific and policy consensus that action must be taken to further quantify and assess the risks.
Exxon Mobil, the world's largest publicly traded oil and gas company, is not known to be a friend of carbon reduction advocates. In fact, a study published in August of 2017 found that they systematically misled the public for nearly 40 years about the dangers of climate change, even though they acknowledged the risks internally.
Putting it All Together
Let's consider these facts together:
- We know that the greenhouse effect warms the planet and that carbon dioxide is a greenhouse gas.
- We know that humans are emitting greenhouse gasses at a rate that is increasing their concentration in the atmosphere.
- We know that the climate is warming.
These three facts alone indicate that there is likely a problem. But, on top of this, you add that:
- The vast majority of active climate scientists agree that climate change is a problem and that climate change is at least very likely being caused by humans. So, the people that we trust to understand the climate widely agree that it is a problem.
- Finally, if climate change is happening, then the results will likely be devastating and on a global scale.
Even if we are not certain that humans are impacting the climate (we can never be 100% certain because we only have one planet to run this global "experiment" on), it is probably worth taking the precaution to prevent it if it is true. Yes, it is possible that so many climate experts are wrong - it is a rare occurrence that so many experts are wrong, but there is a possibility, however slim. And yes, we do not know for a fact that humans impact the climate, though basically all signs point to it being the case. And yes, there will be costs associated with making the change to a low-carbon society. But why do people buy life insurance? What about fire insurance? As silly as it sounds, what about buying an extended warranty on a new piece of electronics, or extra insurance for a rental car? The point is that even though the likelihood of using those insurances is minimal - probably less than the likelihood that climate change is caused by humans - people are willing to pay the cost in order to avoid catastrophe. The same could be said of climate change. Taking steps to avoid the worst-case scenario, or perhaps something near the worst-case scenario, is known as the precautionary principle. This may cost money or other resources in the short term, but is seen as worth it because of the situation it may prevent.
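The insurance logic above is an expected-value argument, which a tiny sketch makes explicit. Every number below is invented purely to illustrate the structure of the reasoning; none of them is a real estimate of climate risk or mitigation cost.

```python
# Entirely hypothetical numbers, chosen only to illustrate the precautionary-principle logic:
# pay a modest certain cost now to avoid a small chance of a very large loss later.
p_catastrophe = 0.10       # assumed probability that unmitigated change is catastrophic
cost_catastrophe = 100.0   # assumed loss if it is (arbitrary units)
cost_mitigation = 3.0      # assumed certain cost of acting now (same arbitrary units)

expected_loss_no_action = p_catastrophe * cost_catastrophe
print(f"Expected loss with no action: {expected_loss_no_action:.1f}")
print(f"Certain cost of acting now:   {cost_mitigation:.1f}")
# Even with a modest probability of catastrophe, the expected loss of inaction
# can exceed the 'insurance premium' of acting, which is the point of the analogy.
```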
One quick addendum to this: If steps are successfully taken to reduce climate emissions to a sustainable level, it is very likely that there will also be cleaner air, less environmental damage, more energy security (not being dependent on another country for energy), and probably more active/healthy citizens. Something to think about.
Check Your Understanding
Carbon dioxide is a more potent greenhouse gas than methane, which is why it is such a big concern.
Further Reading - OPTIONAL
If you are interested in reading more about this topic, here are some suggested readings.
- "Climate Change Evidence & Causes: An overview from the Royal Society and the US National Academy of Sciences." The Royal Society and the National Academy of Sciences.
- "Climate Change 2014 Synthesis Report: Summary for Policymakers." Intergovernmental Panel on Climate Change.
- Link to a number of IPCC documents.
- "Energy & Climate Change." World Energy Outlook Special Report. International Energy Agency, 2015.
- U.S. Environmental Protection Agency's Climate Change Website | <urn:uuid:e9870ad7-9d6a-4c66-aa99-48fa1d2575c2> | 3.765625 | 3,002 | Knowledge Article | Science & Tech. | 42.714298 | 95,479,165 |
In earlier days, organic compounds were known by their common names; for example, methane was known as marsh gas because it was found in marshy places. With the discovery of so many organic compounds and the continuous addition of new ones, dealing with trivial names became difficult. Therefore a systematic method was introduced for naming organic compounds. This uniform system for naming compounds is called the IUPAC system, after the International Union of Pure and Applied Chemistry.
Organic Chemistry Nomenclature- Features of the Trivial system:
A trivial name is a vernacular or non-systematic name of an organic compound. There is no particular set of rules for the trivial name of a compound.
In this system, names are simple, like acetic acid, toluene, phenol, etc. For example, a carboxylic acid generally found in tamarind is named tartaric acid, but in the IUPAC system it is named 2,3-dihydroxybutanedioic acid.
- Many trivial names are present for a single compound. For example, Phenol has different names like hydroxybenzene, carbolic acid, and phenol.
- This system is limited to a few compounds in each group. For example, the first two members of the carboxylic acid family have the trivial names formic acid and acetic acid respectively, but most carboxylic acids with more carbon atoms do not have commonly used trivial names.
- There are no particular guidelines for naming complex compounds.
Chemical Nomenclature – IUPAC Rules:
According to the IUPAC system, the nomenclature of organic compounds consists of the following parts:
- Longest Chain Rule: Identify the parent hydrocarbon and name it. The parent chain of the compound is considered as the longest chain of carbon atoms. This chain can either be straight or of any other shape.
- Lowest Set of Locants: The numbering of the carbon atoms in the longest chain starts from the end which gives the lowest number to the carbon atoms carrying the substituents.
- Presence of same substituent more than once: Prefixes such as di, tri, etc. are given to the substituents which are present twice, thrice respectively in the parent chain.
- Naming different substituents: If more than one substituent is present then the substituents are arranged in alphabetical order of their names.
- Naming different substituents at equivalent positions: If two different substituents are present on the same position from the two ends then the substituents are named such that the substituent which comes first in the alphabetical order gets the lowest number.
- Naming The Complex Substituents: Naming of a complex substituent is done when the substituent on the parent chain has a branched (i.e. complex) structure. These substituents are named as substituted alkyl groups, and the carbon atom of the substituent attached to the parent chain is numbered as 1. The name of this type of substituent is written in brackets.
The final name is written in the format: Locant + Prefix + Root + Locant + Suffix (a small name-assembly sketch follows the prefix notes below)
Word root: It indicates the number of carbon atoms in the longest carbon chain that is selected. For example, C1 is Meth and C5 is Pent.
Suffix – primary (1°) and secondary (2°): A suffix is generally a functional group in the molecule which follows the word root. It is further divided into two types.
- Primary suffix: It is written immediately after the word root. For example, in alkanes the suffix is ane.
- Secondary suffix: It is written after the primary suffix. For instance, if a compound has alkane and alcohol group attached to it, the naming will be alkanol, -ol being the suffix for alcohol.
Prefix – primary (1°) and secondary (2°): It is added before the word root while naming the compound. It indicates the presence of substituent groups or side chains in the organic molecule. It reveals the cyclic and acyclic nature of the compound.
- Primary prefix: Indicates whether the molecule is cyclic or not. For example, for cyclic compounds the prefix used is cyclo.
- Secondary prefix: Indicates the presence of substituent groups or any side chain. For example -CH3 is known as Methyl and -Br is Bromo.
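To make the Locant + Prefix + Root + Locant + Suffix scheme above concrete, here is a minimal Python sketch. It covers only the first few word roots and a couple of illustrative suffixes and prefixes; the dictionary entries, default values, and the helper function name are illustration choices, not part of any standard library, and real IUPAC naming involves many more rules (lowest locants, multiplying prefixes such as di and tri, alphabetization that ignores those prefixes).

WORD_ROOTS = {1: "meth", 2: "eth", 3: "prop", 4: "but",
              5: "pent", 6: "hex", 7: "hept", 8: "oct", 9: "non"}

def assemble_name(n_carbons, substituents=None, func_suffix=None, cyclic=False):
    # substituents: list of (locant, name) pairs, e.g. [(2, "methyl")]
    # func_suffix: (locant, suffix) pair for a secondary suffix, e.g. (1, "ol")
    root = WORD_ROOTS[n_carbons]           # word root from the longest chain
    prefix = "cyclo" if cyclic else ""     # primary prefix
    sub_part = ""
    if substituents:
        # secondary prefixes, listed alphabetically by substituent name
        parts = [f"{loc}-{name}" for loc, name in sorted(substituents, key=lambda s: s[1])]
        sub_part = "-".join(parts)
    if func_suffix:
        loc, suffix = func_suffix          # e.g. an alcohol: "...an-1-ol"
        return f"{sub_part}{prefix}{root}an-{loc}-{suffix}"
    return f"{sub_part}{prefix}{root}ane"  # a saturated parent chain is assumed here

print(assemble_name(4))                                   # butane
print(assemble_name(5, substituents=[(2, "methyl")]))     # 2-methylpentane
print(assemble_name(4, func_suffix=(1, "ol")))            # butan-1-ol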
Chemical Nomenclature- Types
- The term compositional nomenclature is used to denote name constructions based on the composition of the species or substances being named, as opposed to systems that convey structural information.
- One among them is the generalized stoichiometric name. Substances or the elements are named with multiple prefixes in order to give the overall stoichiometry of an element or a compound.
- When there are more components, then they are divided into 2 classes namely electropositive and electronegative components.
- Such names resemble salt names, although this does not imply anything about the chemical nature or behaviour of the species being named.
- Rules are needed for the use of multiplying prefixes, the ordering of components, and the proper endings of the names of electronegative components.
Examples– Sodium Chloride – NaCl, Trioxygen – O3, Phosphorus trichloride – PCl3
- It is based on the approach where parent hydride is changed by replacement of hydrogen atoms with atoms or a group of atoms.
- It is a system where organic compounds are named using functional groups as the suffix or prefix to the name of the parent compound.
- This system is also used in naming compounds derived from hydrides of specific group elements in the periodic table.
- Similar to that of carbon, these elements may form rings and chains that will have many derivatives.
- Rules come in handy in naming the parent or main compounds and their substituents.
- Hydrides belonging to groups 13-17 of the periodic table are named with the suffix –ane, for example borane, phosphane, and oxidane.
Examples – 1,1-difluorotrisilane (SiH3SiH2SiHF2), trichlorophosphine (PCl3)
- This method is mainly formulated for the coordination compounds even though it has wide applications. An example for its application is pentaamminechlorocobalt(III) chloride – [CoCl(NH3)5]Cl2.
- Chloride will have the prefix chloro while ligand will have chlorido.
Examples– PCl3 – trichloridophosphorus, [CoCl3 (NH3)3] – tri-ammine-trichloridocobalt.
Names of some important aliphatic compounds (see the sketch after this list)
- Alkanes: General formula = CnH2n+2
Suffix – ane
Examples: CH4 – methane and C4H10 – Butane
- Alkenes: General formula = CnH2n
Suffix – ene
Examples: C2H4 – ethene and C3H6 – Propene
- Alkynes: General formula = CnH2n-2
Suffix – yne
Examples: C2H2 – Ethyne and C4H6 – But-1-yne
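As a small illustration of the general formulas above, the following sketch classifies a simple acyclic hydrocarbon from its molecular formula and attaches the matching suffix. It is purely illustrative: it ignores rings, multiple unsaturation, and substituents, and the function name is an arbitrary choice.

WORD_ROOTS = {1: "meth", 2: "eth", 3: "prop", 4: "but", 5: "pent",
              6: "hex", 7: "hept", 8: "oct", 9: "non", 10: "dec"}

def name_simple_hydrocarbon(c, h):
    # Apply the general formulas: CnH2n+2 -> alkane, CnH2n -> alkene, CnH2n-2 -> alkyne.
    if h == 2 * c + 2:
        suffix = "ane"
    elif h == 2 * c:
        suffix = "ene"
    elif h == 2 * c - 2:
        suffix = "yne"
    else:
        raise ValueError("not a simple acyclic hydrocarbon with at most one multiple bond")
    return WORD_ROOTS[c] + suffix

print(name_simple_hydrocarbon(4, 10))  # butane  (C4H10)
print(name_simple_hydrocarbon(3, 6))   # propene (C3H6)
print(name_simple_hydrocarbon(2, 2))   # ethyne  (C2H2)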
Let us understand it with the help of an example:
In this case, we have 9 carbon atoms in the straight chain. The fifth carbon atom from either end of the straight chain carries a substituent with a three-carbon chain, and each of the first two carbon atoms of that substituent has one additional carbon atom attached.
For naming, we first number the parent chain. Here the nine-carbon straight chain is the parent chain, and the substituent is found on the fifth carbon atom.
Within the substituent we have a three-carbon chain, and two of these three carbons have additional carbon atoms attached to them. It may look as though the longest chain within the substituent is a four-carbon chain, but that is wrong because the last carbon of that chain is not the one attached to the parent chain.
So we consider only the three-carbon chain as the main chain of the substituent. It is therefore based on propane, with methyl groups on the first and second positions. On its own this would be written 1,2-dimethylpropane, but since it is a substituent group it is written as (1,2-dimethylpropyl).
Putting the substituent together with the parent chain gives 5-(1,2-dimethylpropyl), and since the parent chain has 9 carbon atoms it is named nonane. Thus the final name of the compound is 5-(1,2-dimethylpropyl)nonane.
This article covers the basics of naming organic compounds. For any further queries on organic chemistry, install Byju's, the learning app.
| <urn:uuid:89a286a4-71c8-40c2-bfbf-ef970395c4c6> | 3.921875 | 1,974 | Knowledge Article | Science & Tech. | 43.460388 | 95,479,208 |
The removal efficiency of conventional drinking water treatment for picophytoplankton, and the contribution of picophytoplankton to AOC, were investigated in this research. The removal ratio during the coagulation–sedimentation step was determined by jar tests using PAC (poly-aluminium chloride). Lower coagulation pH gave better picophytoplankton removal in coagulation–sedimentation. The optimum coagulant dosage for picophytoplankton was twice or more that for turbidity. The removal efficiency of picophytoplankton was 44–60% at the lowest pH in the water quality standard (5.8) and at the optimum coagulant dosage for turbidity. The removal ratio of picophytoplankton in rapid sand filtration was determined by pilot-scale column experiments with sand and anthracite. The average removal percentage was 16.3% without PAC addition and chlorination before sand filtration; on the other hand, it was 51.5% with PAC and chlorination. The AOC produced by chlorination of picoplankton, including 6,800 cells/L of picophytoplankton, was 21 μg-acetateC/L at 0.1 mg/L of residual free chlorine. The AOC increased with increasing residual chlorine concentration and leveled off at 0.3 mg-Cl/L. From this result, the AOC originating from picoplankton (maximum AOC from picophytoplankton) could increase up to 155 μg-acetateC/L in this reservoir. This indicates that the removal of picoplankton (picophytoplankton) in the drinking water treatment process is important from the viewpoint of AOC control.
Assimilable organic carbon (AOC) originating from picophytoplankton in drinking water
T. Okuda, W. Nishijima, M. Okada; Assimilable organic carbon (AOC) originating from picophytoplankton in drinking water. Water Science and Technology: Water Supply 1 March 2006; 6 (2): 169–176. doi: https://doi.org/10.2166/ws.2006.066
| <urn:uuid:12948a89-033a-439b-95ce-714209144204> | 2.8125 | 462 | Academic Writing | Science & Tech. | 36.4675 | 95,479,211 |
Thirty-five years after biologist Garrett Hardin issued his prophetic essay, "The Tragedy of the Commons," which warned that human beings would ultimately destroy commonly shared resources, a re-examination of the state of common pool resources by three researchers, including Indiana University Bloomington political scientist Elinor Ostrom, offers an urgent yet hopeful message.
The authors of a new report, "The Struggle to Govern the Commons," which will appear in a special Dec. 12 issue of Science, say they are "guardedly optimistic" about mankind's ability to govern such critical commons as the oceans and the climate. They point to systematic multidisciplinary research showing that widely diverse adaptive governance systems have been effective stewards of many resources.
"In many areas of commons governance, we have witnessed significant improvement," said Ostrom, the Arthur F. Bentley Professor of Political Science and co-director of the Workshop in Political Theory and Policy Analysis and the Center for the Study of Institutions, Population and Environmental Change. "Certainly the world is not uniform, but there are signs of some resources improving even though others are deteriorating. People have devised ingenious ways to manage and govern the commons."
Ryan Piurek | EurekAlert!
Innovative genetic tests for children with developmental disorders and epilepsy
11.07.2018 | Christian-Albrechts-Universität zu Kiel
Oxygen loss in the coastal Baltic Sea is “unprecedentedly severe”
05.07.2018 | European Geosciences Union
For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth.
To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength...
For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications.
Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar...
Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction.
A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in in the synthesis of new chemical...
Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy.
"Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy....
Ultra-short, high-intensity X-ray flashes open the door to the foundations of chemical reactions. Free-electron lasers generate these kinds of pulses, but there is a catch: the pulses vary in duration and energy. An international research team has now presented a solution: Using a ring of 16 detectors and a circularly polarized laser beam, they can determine both factors with attosecond accuracy.
Free-electron lasers (FELs) generate extremely short and intense X-ray flashes. Researchers can use these flashes to resolve structures with diameters on the...
| <urn:uuid:a84993c0-39c1-49ed-a4b7-a1aebec9e7e2> | 2.8125 | 893 | Content Listing | Science & Tech. | 33.731131 | 95,479,218 |
Legend has it that, many centuries ago, Archimedes jumped out of his bathtub and ran across town naked screaming "Eureka!" after he solved an especially difficult problem. Though you may not have thought of things this way before, when you drink a glass of water, the water that you are drinking contains some water molecules that were in Archimedes' bathwater that day, because water doesn't get created or destroyed on a large scale. It follows the water cycle, which includes rain, evaporation, flowing of rivers into the ocean, and so on. In the more than two thousand years since his discovery, the water molecules from Archimedes' bathwater have been through this cycle enough times that they are probably about evenly distributed throughout all the water on the earth. When you buy a can of soda, about how many molecules from that famous bathtub of Archimedes are there in that can?
Round the answer to the nearest power of 10 and then express your answer as the order of magnitude. For instance, if your estimated answer is 3 times 10^5, enter 5. If your estimated answer is 8.7 times 10^5, you should enter 6 (rounding up to the next power of 10).
Suppose that the water that was in Archimedes' bathwater is indeed evenly distributed throughout all the water on the earth. If you denote by N the total number of water molecules on earth, and by N_arch the number of water molecules that were in Archimedes' bathwater, then the fraction of water molecules in some arbitrary sample of water that had been in Archimedes' bath is:
f = N_arch/N
If you multiply the numerator and the denominator by the mass of a single ... | <urn:uuid:853fdfd2-69eb-41c3-b063-e91e23799ec1> | 3.703125 | 388 | Tutorial | Science & Tech. | 54.217202 | 95,479,229 |
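To show where an estimate like this ends up, here is a rough numerical sketch of the method set up above. The bathtub volume, the can volume, and the total mass of water on Earth are assumed round numbers chosen only for illustration, so the output is an order-of-magnitude figure, not an exact answer.

import math

N_A         = 6.022e23   # molecules per mole
molar_mass  = 0.018      # kg per mole of water
earth_water = 1.4e21     # kg, approximate total mass of water on Earth (assumed)
bath_water  = 150.0      # kg, an assumed ~150 L bathtub
can_water   = 0.355      # kg, an assumed ~355 mL can of soda

f = bath_water / earth_water                      # fraction of all water that was in the bath
molecules_in_can = can_water / molar_mass * N_A   # water molecules in the can
from_archimedes = f * molecules_in_can

print(f"molecules from the bath in the can ≈ {from_archimedes:.2e}")
print(f"order of magnitude ≈ 10^{round(math.log10(from_archimedes))}")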
Although the effects of soil erosion can be highly destructive, the notion that only destruction comes from this sometimes unnatural event is highly illogical. The erosion of earthen soil is actually a quite common natural occurrence which has shaped the face of our planet for almost 65 million years.
Although humans have only been around for a few thousand years, water has been the primary source of this soil erosion all over the planet since time began. As seen in this short video clip, the positive effects of water erosion can be seen along the Colorado river running through Utah. Other evidence of water erosion is seen with the formation of the Grand Canyon in Arizona. Soil erosion is quite a common positive occurrence, more so than it is negative.
However, man made soil erosion is erosion of a destructive nature. Steps to avoid man made soil erosion must be taken in order to ensure the survivability of healthy farmlands and fertile soil. | <urn:uuid:dfa6005a-778b-4629-bb5c-56145e0efebe> | 3.796875 | 184 | Personal Blog | Science & Tech. | 35.120019 | 95,479,254 |
Before Canyonlands and Arches national parks were established, decades of heavy livestock grazing in sensitive areas led to a decline in native perennial grasses and biological soil crust. Plant roots and soil crusts act as anchors, holding the sandy soil in place. As these plants and crusts disappeared, wind blew the top layer of soil away. In many large areas, that topsoil is now gone. The parks have begun a project to restore those native grasslands.
The hard-packed soil left behind has higher clay content, which makes a hostile environment for native bunchgrasses. These conditions make it easy for undesirable non-native species, such as tumbleweed and cheatgrass, to move in.
When grasslands reach this “degraded state” where soil loss and weed invasion are common, they may not recover on their own because the large continuous areas of bare ground are very susceptible to the forces of erosion. High winds in these areas will whip away any beneficial sand, seed, or organic matter that might deposit there. Fixing this problem requires constructing something to interrupt those big, open spaces.
The solution park staff developed might look strange, but it works. They fastened x-shaped screens to the ground in degraded grasslands in The Needles district of Canyonlands National Park and in Salt Valley in Arches National Park. The screens are called “Connectivity Modifiers,” or ConMods because they modify—or break up—the continuous nature of the bare earth. Their shape helps trap windblown soil and prevent erosion, creating a protected environment for plants to take root.
In the fall of 2016, staff seeded the ConMods with a mixture of native perennial grasses that will germinate over the next several years when conditions are right. Perennial grasses will help to hold the soil in place, and put the degraded sites on the road to recovery. Stabilized soil will help the recovery of other native grasses, shrubs, flowering plants, and biological soil crusts, allowing these areas to return to a healthy plant community and habitat for wildlife.
Native plants are vital for the healthy functioning of our grassland ecosystems, and restoring native vegetation will help barren areas resist more degradation. Scientists predict higher temperatures and more frequent and extreme drought events for this region as the climate changes. These changes to our climate will make restoring grasslands more difficult, so the time for action is now. | <urn:uuid:cebb93ef-6d72-4227-a585-ca4cfa18968a> | 4.09375 | 494 | Knowledge Article | Science & Tech. | 38.792294 | 95,479,263 |
The chirplets in Fig. 2 and Fig. 3 were derived from a single Gaussian window by applying simple mathematical operations to that window. The window may be thought of as the primitive that generates a family of chirplets, much like the mother wavelet of wavelet theory. We will, therefore refer to this primitive (whether Gaussian or otherwise) as the ``mother chirplet'', and will denote it by the letter g.
A Gaussian wave packet (also known to physicists as simply a wave packet) is a wave with a Gaussian envelope. Mathematically, a wave packet, g, may be represented as

g_{t_c, f_c, σ}(t) = (1/(√(2π) σ)) · e^(−(1/2)((t − t_c)/σ)²) · e^(j 2π f_c (t − t_c) + jφ)     (1)
where t_c is the center of the energy concentration in time, f_c is the center frequency, σ is the spread of the pulse, and φ is the phase shift of the wave, which we will not consider as one of the parameters. The subscripts of g represent the degrees of freedom, which comprise the parameter list.
We would like the wave packet to have unit energy. Hence we reformulate the definition of the Gaussian envelope (taking advantage of the fact that a Gaussian function raised to any exponent, in our case 1/2, is still a Gaussian function if multiplied by the appropriate normalization constant):

g_{t_c, f_c, σ}(t) = (1/√(√(2π) σ)) · e^(−(1/4)((t − t_c)/σ)²) · e^(j 2π f_c (t − t_c) + jφ)     (2)
Theoretically bandlimited signals have infinite duration, but it is customary, in electrical engineering, to use the 3 dB bandwidth, which is defined as the difference in frequencies, on either side of the peak, where the energy or power falls to half the peak value. This definition, however, is not theoretically motivated, nor particularly useful in our context. Therefore, in the case of the wave packet, we simply define the duration to be equal to σ in (2). By the reciprocal nature of duration and bandwidth, we are also implicitly specifying the bandwidth.
In (2), we can identify the Gaussian part as an envelope, which is modulated by a harmonic oscillation. The family of Gaussian chirplets is given by replacing the harmonic oscillation (wave) with a linear FM chirp:

g_{t_c, f_c, log σ, c}(t) = (1/√(√(2π) σ)) · e^(−(1/4)((t − t_c)/σ)²) · e^(j 2π [f_c (t − t_c) + (c/2)(t − t_c)²])     (3)
where we have used a logarithmic scale for the duration so that the unit width (default) is represented by a parameter of zero. Whenever a parameter is missing from the parameter list, we will assume it to be zero. For example, if only three parameters are present, we assume zero chirprate; if only two are present, we also assume that the log-duration is zero (log σ = 0). Summarizing, the Gaussian chirplet (3) has four parameters: time-center, t_c; frequency-center, f_c; log-duration, log σ; and chirprate, c. | <urn:uuid:5fdda5be-164a-41e4-a244-6c7d3b0d0cf6> | 3.59375 | 583 | Academic Writing | Science & Tech. | 37.541956 | 95,479,265 |
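For readers who want to experiment, here is a short Python/NumPy sketch that samples a Gaussian chirplet of the kind discussed above. The unit-energy normalization follows the envelope described in the text; the chirprate convention (instantaneous frequency f_c + c·(t − t_c)) and the default parameter values are assumptions made for illustration.

import numpy as np

def gaussian_chirplet(t, t_c=0.0, f_c=10.0, log_sigma=0.0, c=0.0, phi=0.0):
    # Unit-energy Gaussian envelope times a linear-FM complex exponential.
    sigma = np.exp(log_sigma)
    envelope = (2.0 * np.pi * sigma**2) ** -0.25 * np.exp(-0.25 * ((t - t_c) / sigma) ** 2)
    phase = 2.0 * np.pi * (f_c * (t - t_c) + 0.5 * c * (t - t_c) ** 2) + phi
    return envelope * np.exp(1j * phase)

fs = 1000.0                              # sampling rate in Hz (illustrative)
t = np.arange(-2.0, 2.0, 1.0 / fs)
g = gaussian_chirplet(t, t_c=0.0, f_c=50.0, log_sigma=np.log(0.2), c=40.0)

energy = np.sum(np.abs(g) ** 2) / fs     # discrete approximation of the signal energy
print(f"energy ≈ {energy:.4f} (unit energy by construction)")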
Overview of Linear Optical Effects
If an electromagnetic wave, or a photon, hits atoms or molecules of a material, these particles can react in two distinct ways. If the photons are absorbed, the particles are excited to a higher energy level. In this case the band gap of the material is smaller than or equal to the energy of the photons (ħω). After the absorption, the excitation of the atom can decay in many different ways. The most important, in the context of optical telecommunications, is spontaneous emission; in this case the material in turn emits photons with a different energy. In some materials this spontaneous emission can be converted into stimulated emission, which is the basis of lasers and of the erbium-doped fiber amplifier (EDFA).
Keywords: Refractive Index, Group Velocity, Silica Glass, Dispersion Parameter, Amplified Spontaneous Emission
| <urn:uuid:4701d380-45e7-4cac-ae90-ebe5fd5ab5e3> | 3.484375 | 192 | Truncated | Science & Tech. | 29.88375 | 95,479,287 |
HYDROGEN FUEL CELLS & ENERGY EFFICIENCY. By: Claudio Bolzoni, David Carlos Echeverria, Andres Segura, Dari Seo. FUEL CELLS. Definition: an electrochemical cell that converts a source fuel into an electrical current.
Reduction in burning of fossil fuels
Improvement of air quality especially in urban areas
Reduction of greenhouse gases (global warming)
Low noise and high power
Reduction in energy consumption (saving energy) | <urn:uuid:7f969a8a-ad9f-46bf-ad46-3e44de25ff41> | 2.796875 | 180 | Truncated | Science & Tech. | 23.458395 | 95,479,323 |
Today we started our astronomy projects. First we have to get used to working with the SDSS interface. Everyone picked one of the Favorite Places to get a location in the sky. They then started looking for galaxies that had spectra and recording data.
- Jamming with the 'Spiders' from Mars
This image from NASA's Mars Reconnaissance Orbiter, acquired May 13, 2018 during winter at the South Pole of Mars, shows a carbon dioxide ice cap covering the region and as the sun returns in the spring, "spiders" begin to emerge from the landscape. | <urn:uuid:67c6b886-067e-492e-a9c8-923bb7dcb8cf> | 2.96875 | 118 | Personal Blog | Science & Tech. | 51.853 | 95,479,329 |
The resistance between two adjacent nodes on an infinite square grid of equal resistors can easily be found by superposition. This paper addresses the corresponding problem for two arbitrary nodes. A solution is found by exploiting the symmetry of the grid and using the method of superposition. The mathematical problem involves the solution of an infinite set of linear, inhomogeneous difference equations which are solved by the method of separation of variables.
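A quick way to see the adjacent-node result numerically is to solve a large but finite grid and check that the effective resistance approaches half the single-resistor value. The sketch below assumes unit (1-ohm) resistors and uses the graph-Laplacian pseudoinverse; it is a numerical check, not the paper's separation-of-variables method.

import numpy as np

def grid_laplacian(n):
    # Graph Laplacian of an n x n grid of nodes joined by 1-ohm resistors.
    idx = lambda i, j: i * n + j
    L = np.zeros((n * n, n * n))
    for i in range(n):
        for j in range(n):
            for di, dj in ((1, 0), (0, 1)):
                a, b = i + di, j + dj
                if a < n and b < n:
                    u, v = idx(i, j), idx(a, b)
                    L[u, u] += 1; L[v, v] += 1
                    L[u, v] -= 1; L[v, u] -= 1
    return L

n = 31
Lp = np.linalg.pinv(grid_laplacian(n))      # Moore-Penrose pseudoinverse
c = (n // 2) * n + n // 2                   # a node near the centre
d = c + 1                                   # its right-hand neighbour
R_eff = Lp[c, c] + Lp[d, d] - 2 * Lp[c, d]  # effective resistance between c and d
print(f"adjacent-node resistance on a {n}x{n} grid: {R_eff:.4f} ohm")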
| <urn:uuid:58270fd5-5ae3-4b24-ab65-889bf10cde67> | 2.625 | 105 | Academic Writing | Science & Tech. | 16.266364 | 95,479,366 |
A piston moves along a tube containing air at an initial sound speed of 330 m/s. When the piston velocity is 250 m/s, it drives a shock wave which propagates at a velocity of 500 m/s. When the piston velocity is 100 m/s, it drives a shock at 392 m/s.
Use the hypersonic equivalence principle to calculate the shock angles (in degrees) on a flat plate (one possible similarity relation is sketched after this list):
At an incidence of 6 degrees and a Mach number of 7.2
At an incidence of 2 degrees and a Mach number of 21.7
The incidence degrees required to produce a shock angle of 9.5 degrees at a Mach number of 7.2
The incidence (θ) required to produce a shock angle (β) of 3.2 degrees at a Mach number of 21.7
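For reference, one common form of the hypersonic small-disturbance (similarity) result for the shock angle produced by a thin wedge or flat plate at incidence is sketched below. Whether this is exactly the relation intended by the exercises above is an assumption, as is the ratio of specific heats γ = 1.4.

import math

def shock_angle_deg(mach, incidence_deg, gamma=1.4):
    # beta/theta = (gamma+1)/4 + sqrt(((gamma+1)/4)^2 + 1/K^2), with K = M*theta.
    theta = math.radians(incidence_deg)
    K = mach * theta                      # hypersonic similarity parameter
    ratio = (gamma + 1) / 4 + math.sqrt(((gamma + 1) / 4) ** 2 + 1 / K ** 2)
    return math.degrees(ratio * theta)    # shock angle beta = ratio * theta

for M, inc in [(7.2, 6.0), (21.7, 2.0)]:
    print(f"M = {M}, incidence = {inc} deg -> shock angle ≈ {shock_angle_deg(M, inc):.2f} deg")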
Here is a system which is travelling left to right at the speed of 0.5c (half of the speed of light). The system is made of two parts. Each part consists of a light source and a light receiver (detector), and the two parts face opposite directions (which means the receivers are at the two ends of the system).
Then both sources emit light at the same time and the receivers receive the pulses. According to Maxwell's equations, the speed of a pulse of light is not affected by the speed of its source (or of the system). We know that the speed of light is the same for all observers. So the receivers will receive the light at the same time. But here arises a problem: how will relativity explain this phenomenon (that the speed of light is constant here too)? Here, we should remember that the system is travelling at 0.5c and we can't use length contraction and time delay. Why? It is because:
1. If the system is affected by length contraction and time delay, both parts will be affected, not a single part. So, if we try to explain how the speed of light becomes constant in the right part using length contraction and time delay, we shall be unable to explain how the speed of light becomes constant in the left part with the same length contraction and time delay!
2. If we suppose that the receivers received those pulses at different times, it will become clear that the speed of light is not the same and constant for all observers, and the theory of relativity will break down!
Am I right? Please reply.
Stephen Hawking, the world-renowned physicist, the inspiration to millions, who outlived doctors' predictions after suffering from progressive motor neuron disease and rose to the most coveted chair in physics… The author of "A Brief History of Time" and many such inspiring books…. finally passed away at the age of 76.
AskPhysics family expresses its deepest condolences
You could barely move your limbs and body, but you moved our hearts and minds and compelled us to think deep and wide pondering the mysteries in connection with the evolution of universe and the future.
Adieu !! Hawking… Your discoveries, theories and books will continue to inspire us!
Is anything truly random? Is random a human made idea to describe something that is hard to predict? Or is it possible for something to actually be random?
If nothing is truly random, then if the universe was created again, would everything turn out the same? I don't see why it wouldn't, but I'm not finding much about this online and I want to know what you think.
I have a few exercises (two) on Bravais lattices and I can't figure out the best way to approach them. A few tips on the steps, or theories that I should base my solution on, would be helpful.
I also have another 2 exercises that deal with particle collision times and the kinetic theory of gases.
By the way, the exercises I mention are attached.
If there is any helpful suggestion that allows me to solve these problems, I would be very happy.
When asked, students had mixed reactions to this year's CBSE Physics exam for class 12. What is your opinion?
How was the CBSE Physics exam?
Acceleration due to gravity is found to decrease with an increase in depth, and vice versa. But at the poles its value is greater than at the equator, even though moving toward the poles is like moving to a greater depth (the polar radius is smaller). Why is this so? I know about the relation that g is inversely proportional to R squared, but without this relation I can't seem to be able to answer it with the depth relation.
Asked Thakrei Ruivah
The force of gravity is inversely proportional to the square of the distance from the centre of the earth, and hence it is evident that at the poles the acceleration due to gravity should be more, since the polar radius is less than the equatorial radius.
Putting it simply, the entire mass of the earth is attracting the object kept at the pole towards its centre, and the distance from the centre is less. Therefore the force, and hence the acceleration due to gravity, will be maximum.
The decrease in g with depth is due to the fact that:
At any depth, only the mass of the earth contained within the sphere whose radius equals the distance from the centre of the earth to the object under consideration is responsible for the force of gravity, and hence the value of the acceleration due to gravity decreases.
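A short numerical sketch of this enclosed-mass argument is given below. It assumes, only for illustration, a uniform-density Earth; the real Earth's denser core changes the interior profile, but the qualitative point, that only the enclosed mass matters at depth, still holds.

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24    # mass of the earth, kg
R = 6.371e6     # mean radius of the earth, m

def g_at_radius(r):
    if r >= R:
        return G * M / r**2               # outside: the whole mass attracts
    M_enclosed = M * (r / R) ** 3         # uniform-density assumption
    return G * M_enclosed / r**2          # inside: only the enclosed mass matters

for depth_km in (0, 100, 1000, 3000):
    r = R - depth_km * 1e3
    print(f"depth {depth_km:>5} km -> g ≈ {g_at_radius(r):.3f} m/s^2")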
Get more details from http://www.askphysics.com/variation-in-acceleration-due-to-gravity-g-with-depth/ | <urn:uuid:85bfe532-b1c0-4464-8083-78137ff85bf7> | 3.84375 | 1,134 | Comment Section | Science & Tech. | 62.718683 | 95,479,385 |
A dwarf star is a star of relatively small size and low luminosity. Most main sequence stars are dwarf stars. The term was originally coined in 1906 when the Danish astronomer Ejnar Hertzsprung noticed that the reddest stars (classified as K and M in the Harvard scheme) could be divided into two distinct groups. They are either much brighter than the Sun, or much fainter. To distinguish these groups, he called them "giant" and "dwarf" stars, the dwarf stars being fainter and the giants being brighter than the Sun. Most stars are currently classified under the Morgan Keenan System using the letters O, B, A, F, G, K, and M, a sequence from the hottest (O type) to the coolest (M type). The scope of the term "dwarf" was later expanded to include the following:
- Dwarf star alone generally refers to any main-sequence star, a star of luminosity class V: main-sequence stars (dwarfs). Example: Achernar (B6Vep)
- A blue dwarf is a hypothesized class of very-low-mass stars that increase in temperature as they near the end of their main-sequence lifetime.
- A white dwarf is a star composed of electron-degenerate matter, thought to be the final stage in the evolution of stars not massive enough to collapse into a neutron star or black hole—stars less massive than roughly 9 solar masses.
- A black dwarf is a white dwarf that has cooled sufficiently such that it no longer emits any visible light.
- A brown dwarf is a substellar object not massive enough to ever fuse hydrogen into helium, but still massive enough to fuse deuterium—less than about 0.08 solar masses and more than about 13 Jupiter masses.
- Brown, Laurie M.; Pais, Abraham; Pippard, A. B., eds. (1995). Twentieth Century Physics. Bristol; New York: Institute of Physics, American Institute of Physics. p. 1696. ISBN 0-7503-0310-7. OCLC 33102501.
- Nazé, Y. (November 2009). "Hot stars observed by XMM-Newton. I. The catalog and the properties of OB stars". Astronomy and Astrophysics. 506 (2): 1055–1064. arXiv: . Bibcode:2009A&A...506.1055N. doi:10.1051/0004-6361/200912659.
| <urn:uuid:15bbe625-2ba0-4c5c-bf4b-6cc2eb8b76b7> | 3.8125 | 574 | Knowledge Article | Science & Tech. | 70.121559 | 95,479,389 |
A UUID is a universally unique identifier, which means if you generate a UUID right now using UUID() it's guaranteed to be unique across all devices in the world. This means it's a great way to generate a unique identifier for users, for files, or anything else you need to reference individually – guaranteed.
Here's how to create a UUID as a string:
let uuid = UUID().uuidString
Available from iOS 6.0
About the Swift Knowledge Base
This is part of the Swift Knowledge Base, a free, searchable collection of solutions for common iOS questions.
Take Swift further!
Your Swift skills let you make apps for macOS, watchOS, tvOS, and more, and for one low price you can learn it all with my Swift Platform Pack! | <urn:uuid:bf5b4cc6-7d9b-441b-b501-816b25e2a70f> | 2.546875 | 185 | Tutorial | Software Dev. | 62.133232 | 95,479,393 |
The Mathematics Of Waves
We start our discussion of waves by taking the equation for a very simple wave and describing its characteristics. The basic equation for such a wave is

h(x, t) = A sin(2πx/λ − 2πft + φ)
where h(x, t) is the height of the wave at position x and time t. This equation describes a fairly simple wave, but most complex waves are just sums of simpler ones. If we freeze this equation in time at t = 0, we get

h(x, 0) = A sin(2πx/λ + φ)
which looks like this: [TODO - Add a Graph]
From the graph we can see that each of the three parameters has a meaning. A is the amplitude of the wave, how high it is. λ is the wavelength, the distance from a part of the wave in one cycle to the same part of the wave in the next cycle. φ is the phase of the wave, which shifts the wave to the left or right. The wavelength is a distance, and is usually measured in meters, millimeters or even nanometers depending on the wave. Phase is an angle, measured in radians.
Now that we have mapped out the wave in space, let's instead set x = 0 and see how the wave changes over time

h(0, t) = A sin(φ − 2πft)
Amplitude and phase remain, but the wavelength is gone and a new quantity has appeared: f, which is the frequency, or how rapidly the wave moves up and down. Frequency is measured in units of inverse time: in a fixed period of time, how many times does the wave move up and down? The unit usually used for this is the hertz, or inverse second.
Now let's combine these two pictures and see how the wave moves. Figure 3 is a diagram of how the wave looks when you plot it in both space and time. The straight lines are the places where the simple wave reaches a maximum, minimum, or zero (where it crosses the x axis).
We can look at the zeros to determine the phase velocity of the wave. The phase velocity is how fast a part of the wave moves. We can think of it as the speed of the wave, but for more complicated waves it is only one type of speed - more on that in later sections.
We can get an equation for the zeros by setting our equation to zero.

2πx/λ − 2πft + φ = 0, which gives x = λft − λφ/(2π)
You see here that we have the equation for a straight line, describing a point that is moving at velocity λf. This gives us the equation for the phase velocity of the wave, which is c = λf. | <urn:uuid:16443d93-f755-491e-b2ca-2e1455d0d489> | 4.34375 | 516 | Knowledge Article | Science & Tech. | 55.095137 | 95,479,421 |
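To tie the sine-wave discussion above together numerically, the following sketch evaluates the waveform and checks that a tracked zero crossing moves at the phase velocity λf. The specific values of A, λ, f and φ are arbitrary illustration choices, and the sign convention (wave moving toward +x) is the one assumed in the equations above.

import numpy as np

A, lam, f, phi = 1.0, 2.0, 3.0, 0.5      # amplitude, wavelength [m], frequency [Hz], phase [rad]

def h(x, t):
    return A * np.sin(2 * np.pi * x / lam - 2 * np.pi * f * t + phi)

def zero_position(t):
    # Solve 2*pi*x/lam - 2*pi*f*t + phi = 0 for the zero crossing nearest the origin.
    return lam * f * t - lam * phi / (2 * np.pi)

t0, t1 = 0.0, 0.1
speed = (zero_position(t1) - zero_position(t0)) / (t1 - t0)
print(f"tracked zero moves at {speed:.3f} m/s; lam*f = {lam * f:.3f} m/s")
print(f"h at the tracked zero: {h(zero_position(t1), t1):.2e} (should be ~0)")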
Electronvolt
In physics, the electronvolt (symbol eV, also written electron-volt and electron volt) is a unit of energy equal to approximately 1.6×10−19 joules (symbol J) in SI units. By definition, it is the amount of energy gained (or lost) by the charge of a single electron moving across an electric potential difference of one volt. Hence, it has a value of one volt, 1 J/C, multiplied by the electron's elementary charge e, 1.6021766208(98)×10−19 C. Therefore, one electronvolt is equal to 1.6021766208(98)×10−19 J. The electronvolt is not an SI unit, and its definition is empirical (unlike the litre, the light-year and other such non-SI units), where its value in SI units must be obtained experimentally.
Historically, the electronvolt was devised as a standard unit of measure through its usefulness in electrostatic particle accelerator sciences, because a particle with charge q has an energy E = qV after passing through the potential V; if q is quoted in integer units of the elementary charge and the terminal bias in volts, one gets an energy in eV.
Like the elementary charge on which it is based, it is not an independent quantity but is equal to 1 J/C · √(2hα/μ0c0). It is a common unit of energy within physics, widely used in solid state, atomic, nuclear, and particle physics. It is commonly used with the metric prefixes milli-, kilo-, mega-, giga-, tera-, peta- or exa- (meV, keV, MeV, GeV, TeV, PeV and EeV respectively), where meV stands for millielectronvolt. In some older documents, and in the name Bevatron, the symbol BeV is used, which stands for billion (10^9) electronvolts; it is equivalent to the GeV.
By mass–energy equivalence, the electronvolt is also a unit of mass. It is common in particle physics, where units of mass and energy are often interchanged, to express mass in units of eV/c2, where c is the speed of light in vacuum (from E = mc2). It is common to simply express mass in terms of "eV" as a unit of mass, effectively using a system of natural units with c set to 1. The mass equivalent of 1 eV/c2 is approximately 1.783×10−36 kg.
For example, an electron and a positron, each with a mass of 0.511 MeV/c2, can annihilate to yield 1.022 MeV of energy. The proton has a mass of 0.938 GeV/c2. In general, the masses of all hadrons are of the order of 1 GeV/c2, which makes the GeV (gigaelectronvolt) a convenient unit of mass for particle physics:
- 1 GeV/c2 = 1.783×10−27 kg.
- 1 u = 931.4941 MeV/c2 = 0.9314941 GeV/c2.
In high-energy physics, the electronvolt is often used as a unit of momentum. A potential difference of 1 volt causes an electron to gain an amount of energy of 1 eV. This gives rise to usage of eV (and keV, MeV, GeV or TeV) as units of momentum, for the energy supplied results in acceleration of the particle.
The dimensions of momentum units are LMT−1. The dimensions of energy units are L2MT−2. Then, dividing the units of energy (such as eV) by a fundamental constant that has units of velocity (LT−1), facilitates the required conversion of using energy units to describe momentum. In the field of high-energy particle physics, the fundamental velocity unit is the speed of light in vacuum c.
The fundamental velocity constant c is often dropped from the units of momentum by way of defining units of length such that the value of c is unity. For example, if the momentum p of an electron is said to be 1 GeV, then the conversion to MKS can be achieved by:

p = 1 GeV/c = (1×10^9 × 1.602×10−19 C × 1 V) / (2.998×10^8 m/s) ≈ 5.34×10−19 kg·m/s
In particle physics, a system of "natural units" in which the speed of light in vacuum c and the reduced Planck constant ħ are dimensionless and equal to unity is widely used: c = ħ = 1. In these units, both distances and times are expressed in inverse energy units (while energy and mass are expressed in the same units, see mass–energy equivalence). In particular, particle scattering lengths are often presented in units of inverse particle masses.
Outside this system of units, the conversion factors between electronvolt, second, and nanometer are the following: ħ ≈ 6.582×10−16 eV·s and ħc ≈ 197.3 eV·nm.
The above relations also allow expressing the mean lifetime τ of an unstable particle (in seconds) in terms of its decay width Γ (in eV) via Γ = ħ/τ. For example, the B0 meson has a lifetime of 1.530(9) picoseconds, mean decay length cτ = 459.7 μm, or a decay width of (4.302±0.025)×10−4 eV.
Conversely, the tiny meson mass differences responsible for meson oscillations are often expressed in the more convenient inverse picoseconds.
Energy in electronvolts is sometimes expressed through the wavenumber of light with photons of the same energy: 1 eV corresponds to 8065.544005(49) cm−1.
For example, a typical magnetic confinement fusion plasma is 15 keV, or 170 MK.
- As an approximation: kBT is about 0.025 eV (≈ 290 K/11604 K per eV) at a temperature of 20 °C.
The energy E, frequency ν, and wavelength λ of a photon are related by E = hν = hc/λ.
A photon with a wavelength of 532 nm (green light) would have an energy of approximately 2.33 eV. Similarly, 1 eV would correspond to an infrared photon of wavelength 1240 nm or frequency 241.8 THz.
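The conversions in the last few paragraphs are easy to collect in a few lines of code. The sketch below uses rounded constants and reproduces the figures quoted above (the mass equivalent of 1 eV/c2, the ~290 K thermal energy of 25 meV, and the ~532 nm wavelength of a 2.33 eV photon); it is a convenience script written for this article, not part of any cited reference.

e   = 1.602176634e-19    # J per eV
c   = 2.99792458e8       # speed of light, m/s
h   = 6.62607015e-34     # Planck constant, J s
k_B = 1.380649e-23       # Boltzmann constant, J/K

def ev_to_joule(E_ev):
    return E_ev * e

def ev_to_kg(E_ev):
    return E_ev * e / c**2               # mass equivalent via E = m c^2

def ev_to_kelvin(E_ev):
    return E_ev * e / k_B                # temperature via E = k_B T

def photon_wavelength_nm(E_ev):
    return h * c / (E_ev * e) * 1e9      # lambda = h c / E

print(f"1 eV     = {ev_to_joule(1):.4e} J")
print(f"1 eV/c^2 = {ev_to_kg(1):.4e} kg")
print(f"25 meV  -> {ev_to_kelvin(0.025):.0f} K (about room temperature)")
print(f"2.33 eV photon -> {photon_wavelength_nm(2.33):.0f} nm (green light)")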
In a low-energy nuclear scattering experiment, it is conventional to refer to the nuclear recoil energy in units of eVr, keVr, etc. This distinguishes the nuclear recoil energy from the "electron equivalent" recoil energy (eVee, keVee, etc.) measured by scintillation light. For example, the yield of a phototube is measured in phe/keVee (photoelectrons per keV electron-equivalent energy). The relationship between eV, eVr, and eVee depends on the medium the scattering takes place in, and must be established empirically for each material.
- 5.25×1032 eV: total energy released from a 20 kt nuclear fission device
- 1.22×1028 eV: the Planck energy
- 10 YeV (1×1025 eV): the approximate grand unification energy
- ~624 EeV (6.24×1020 eV): energy consumed by a single 100-watt light bulb in one second (100 W = 100 J/s ≈ 6.24×1020 eV/s)
- 300 EeV (3×1020 eV = ~50 J): the so-called Oh-My-God particle (the most energetic cosmic ray particle ever observed)
- 2 PeV: two petaelectronvolts, the most high-energetic neutrino detected by the IceCube neutrino telescope in Antarctica
- 14 TeV: the designed proton collision energy at the Large Hadron Collider (operated at about half of this energy since 30 March 2010, reached 13 TeV in May 2015)
- 1 TeV: a trillion electronvolts, or 1.602×10−7 J, about the kinetic energy of a flying mosquito
- 125.1±0.2 GeV: the energy corresponding to the mass of the Higgs boson, as measured by two separate detectors at the LHC to a certainty better than 5 sigma
- 210 MeV: the average energy released in fission of one Pu-239 atom
- 200 MeV: the average energy released in nuclear fission of one U-235 atom
- 17.6 MeV: the average energy released in the fusion of deuterium and tritium to form He-4; this is 0.41 PJ per kilogram of product produced
- 1 MeV (1.602×10−13 J): about twice the rest energy of an electron
- 13.6 eV: the energy required to ionize atomic hydrogen; molecular bond energies are on the order of 1 eV to 10 eV per bond
- 1.6 eV to 3.4 eV: the photon energy of visible light
- 1.1 eV: the energy EG required to break a covalent bond in silicon
- 720 meV: the energy EG required to break a covalent bond in germanium
- 25 meV: the thermal energy kBT at room temperature; one air molecule has an average kinetic energy of 38 meV
- 230 µeV: the thermal energy kBT of the cosmic microwave background
One mole of particles given 1 eV of energy has approximately 96.5 kJ of energy – this corresponds to the Faraday constant (F ≈ 96 485 C mol−1), where the energy in joules of N moles of particles each with energy X eV is X·F·N.
Notes and referencesEdit
- IUPAC Gold Book Archived 2009-01-03 at the Wayback Machine., p. 75
- SI brochure, Sec. 4.1 Table 7 Archived July 16, 2012, at the Wayback Machine.
- "CODATA Value: elementary charge". The NIST Reference on Constants, Units, and Uncertainty. US National Institute of Standards and Technology. June 2015. Retrieved 2015-09-22.
2014 CODATA recommended values
- "CODATA Value: electron volt". The NIST Reference on Constants, Units, and Uncertainty. US National Institute of Standards and Technology. June 2015. Retrieved 2015-09-22.
2014 CODATA recommended values
- "Definitions of the SI units: Non-SI units". The NIST Reference on Constants, Units, and Uncertainty. National Institute of Standards and Technology. Retrieved 2018-07-01.
- Barrow, J. D. "Natural Units Before Planck." Quarterly Journal of the Royal Astronomical Society 24 (1983): 24.
- "Units in particle physics". Associate Teacher Institute Toolkit. Fermilab. 22 March 2002. Archived from the original on 14 May 2011. Retrieved 13 February 2011.
- "Special Relativity". Virtual Visitor Center. SLAC. 15 June 2009. Retrieved 13 February 2011.
- "CODATA Value: Planck constant in eV s". Archived from the original on 22 January 2015. Retrieved 30 March 2015.
- What is Light? Archived December 5, 2013, at the Wayback Machine. – UC Davis lecture slides
- Elert, Glenn. "Electromagnetic Spectrum, The Physics Hypertextbook". hypertextbook.com. Archived from the original on 2016-07-29. Retrieved 2016-07-30.
- "Definition of frequency bands on". Vlf.it. Archived from the original on 2010-04-30. Retrieved 2010-10-16.
- Open Questions in Physics. Archived 2014-08-08 at the Wayback Machine. German Electron-Synchrotron. A Research Centre of the Helmholtz Association. Updated March 2006 by JCB. Original by John Baez.
- "A growing astrophysical neutrino signal in IceCube now features a 2-PeV neutrino". Archived from the original on 2015-03-19.
- Glossary Archived 2014-09-15 at the Wayback Machine. - CMS Collaboration, CERN
- ATLAS; CMS (26 March 2015). "Combined Measurement of the Higgs Boson Mass in pp Collisions at √s=7 and 8 TeV with the ATLAS and CMS Experiments". Physical Review Letters. 114 (19): 191803. arXiv: . Bibcode:2015PhRvL.114s1803A. doi: . PMID 26024162. | <urn:uuid:59d3ebde-2143-4369-bd95-b9d7bb117db7> | 3.65625 | 2,749 | Knowledge Article | Science & Tech. | 62.375845 | 95,479,422 |
Will drones and droids take our jobs? Perhaps, but that isn’t the full picture. We explore how robots might work alongside us, rather than replacing us
Ask most people what they think about artificial intelligence, and the first words on their lips are scare stories. The robots that are coming to take our jobs, the algorithms leaving us nowhere to hide, the killer drones poised to swoop from the skies. The risks are real, but there is potential in this ‘fourth Industrial Revolution’ to transform our lives for good, too. Here are five examples for starters.
1. Small is beautiful
Green optimists dream of a future powered by solar, wind and other renewables, backed up by an energy storage network – from vast farms of batteries right down to batteries in individual homes and electric cars.
AI promises to make this dream come true, balancing power supply and demand with rigorous precision, ensuring that there’s just the right amount of electricity when and where it is needed. It can turn homes into energy utilities, too, negotiating the buying and selling of micro amounts of power, with the millions of financial transactions involved being handled seamlessly via secure blockchain connections. This could make viable a whole range of energy innovations, such as ‘solar sprays’, turning every wall and roof into a micro-power plant.
2. Policing pollution
We may fret over the ‘surveillance society’, but there’s another side of the ‘lidless eye’ of AI. Thanks to a mix of sensors, drones and satellites, all communicating and comparing data, it can keep a close watch on the health of the environment, virtually down to every field and street.
This means that polluters will have nowhere to hide. What’s more, businesses will be able to swiftly spot where every single one of its source materials comes from and monitor their manufacture – and the welfare of their outsourced workforce. By making supply chains visible to all, there will be no carpet under which to sweep dodgy, dirty business practice.
3. Farms with a future
AI isn’t just for city slickers. Agriculture, too, will benefit. Crop-picking robots could mean an end to labour shortages that see fruit rotting on the ground. Field sensors can check soil moisture and fertility and crop conditions, triggering interventions – from irrigation to inputs – as soon as required.
Droughts, storms and heatwaves can be forecast with unprecedented accuracy, and farmers kept up to speed with the latest movements in prices and market demand. And best of all, plummeting costs are set to bring all this within range of developing world farmers, putting power into the hands of some of the world’s poorest.
4. Smarter, safer streets
We may be living on the cusp of the fourth Industrial Revolution but outside, the streets look as though they’re barely out of the second. And barely out of second gear, either. So how can AI help? By kickstarting the shift to electric vehicles, and increasingly – albeit controversially – to driverless ones, too.
This promises to turn snarling jams into smooth flows of quiet, clean traffic; accident rates could plummet – and you won’t even need to own a car to benefit. Having personal transport on demand will save on resources and space, and give blind and disabled people more freedom to travel independently, too.
5. The algorithm will see you now…
Specialist radiographers have years of experience in spotting cancer – but sometimes algorithms can do it better. Faced with rare, hard-to-spot cancers in tests, computers outperformed the human eye. In other cases, the people did it better. But – hearteningly – they work best in tandem. One study showed that doctors make mistakes in 3.5 percent of cases, while state-of-the-art AI has an error rate of 7.5 per cent. But when humans and AI results are combined, the error rate can drop to 0.5 per cent.
Conclusion: cue the cobots?
Such examples show the potential of the ‘cobots’ – of AI working alongside, rather than replacing, humans. Robots taking on the grunt work, in other words – whether it’s crunching numbers or crop picking, freeing up their human partners for the more interesting stuff – not least, of course, managing the robots. There will be whole new classes of jobs, too, some of them as yet unimaginable. If that sounds unlikely, think how someone in the 1980s would have reacted to you telling them that, when you grow up, you’re going to be a web designer.
And here’s the flipside to the ‘robot ate my job’ fears. Sure, some jobs will go – but many of them are hardly ones we’d be itching to do, offered the choice. One survey showed that 87 per cent of workers either loathe their work, or are uninspired by it.
If AI can munch those jobs, while providing the economic boost to finance the creation of new ones that draw on the ingenuity and creativity that’s unique to humans, it might yet be a blessing, not a curse.
None of these sunny-side-up scenarios is inevitable; they are all technically feasible, though there is much work involved in making them happen. But the power to do so is in our hands – and not in the mechanical arms of robots.
Illustrations: Give Up Art | <urn:uuid:bad04317-156e-4692-8318-da6e28f9a226> | 2.78125 | 1,155 | Listicle | Science & Tech. | 54.975667 | 95,479,424 |
Rate of Ocean Acidification Unprecedented in 65 Million Years: Study
Rate of ocean acidification the fastest in 65 million years
Physorg.com, Feb. 16, 2010
(PhysOrg.com) -- A new model, capable of assessing the rate at which the oceans are acidifying, suggests that changes in the carbonate chemistry of the deep ocean may exceed anything seen in the past 65 million years.
The model also predicts much higher rates of environmental change at the ocean’s surface in the future than have occurred in the past, potentially exceeding the rate at which plankton can adapt.
The research, from the University of Bristol, is reported in this week's issue of Nature Geoscience.
The team applied a model that compared current rates of ocean acidification with the greenhouse event at the Paleocene-Eocene boundary, about 55 million years ago when surface ocean temperatures rose by around 5-6°C over a few thousand years. During this event, no catastrophe is seen in surface ecosystems, such as plankton, yet bottom-dwelling organisms in the deep ocean experienced a major extinction.
Dr Andy Ridgwell, lead author on the paper, said: “Unlike surface plankton dwelling in a variable habitat, organisms living deep down on the ocean floor are adapted to much more stable conditions. A rapid and severe geochemical change in their environment would make their survival precarious.
“The widespread extinction of these ocean floor organisms during the Paleocene-Eocene greenhouse warming and acidification event tells us that similar extinctions in the future are possible.”
The oceans are currently absorbing about a quarter of the CO2 released into the atmosphere, forcing the pH of the surface ocean lower in a process called ‘ocean acidification’.
Laboratory experiments suggest that if the pH continues to fall, we may start to see impacts such as the dissolution of carbonate shells of marine organisms, slower growth, muscle wastage, dwarfism or reduced activity, with knock-on effects throughout the ecosystem.
Dr Daniela Schmidt, also an author on the paper, explained: “Laboratory experiments can tell us about how marine organisms react, but experiments cannot tell us whether marine organisms will be able to adapt to ocean acidification via migration or evolution.
“Therefore, a lot of attention has recently focussed on looking at known ocean acidification and biotic reactions in the geological record. Various types of geological evidence - the spread of warm water organisms towards the poles and the dissolution of carbonate sediments on the sea-floor tell us there was simultaneously both extreme warming and acidification at this time - the hallmark of a massive greenhouse gas release.”
On the basis of their approach of comparing model simulations of past and future marine geochemical changes, the authors infer a future rate of surface-ocean acidification and environmental pressure on marine calcifiers, such as corals, unprecedented in the past 65 million years, and one that challenges the potential for plankton to adapt.
They also argue that for organisms which live on the sea floor, rapid and extreme acidification of the deep ocean would make their situation uncertain. The occurrence of widespread extinction of these organisms during the Paleocene-Eocene greenhouse warming and acidification event raises the possibility of a similar extinction in the future. | <urn:uuid:0034f6ec-9f53-4a46-8714-06b0ef0bafbe> | 3.25 | 682 | News Article | Science & Tech. | 16.455212 | 95,479,461 |
The hunt for the Higgs particle has involved the biggest, most expensive experiment ever. So exactly what is this particle? Why does it matter so much? What does it tell us about the Universe? Has the discovery announced on 4 July 2012 finished the search? And was finding it really worth all the effort? The short answer is yes. The Higgs field is proposed as the way in which particles gain mass - a fundamental property of matter. It's the strongest indicator yet that the Standard Model of physics really does reflect the basic building blocks of our Universe. Little wonder the hunt and discovery of this new particle has produced such intense media interest. Here, Jim Baggott explains the science behind the discovery, looking at how the concept of a Higgs field was invented, how the vast experiment was carried out, and its implications on our understanding of all mass in the Universe. The book was written over the eighteen months of the CERN Large Hadron Collider experiment, with its final chapter rounded off on the day of the announcement 'that a particle consistent with the standard model Higgs boson has been discovered.'
| <urn:uuid:36ed820b-0ced-4afb-932d-02643b0e6311> | 3.5 | 242 | Content Listing | Science & Tech. | 50.8925 | 95,479,472 |
When a microstructure is placed on a droplet, the structure is positioned and remains at a particular position due to capillary forces. By using this phenomenon, two-dimensional arrangements on droplets have been widely developed. In this study, we fabricate a three-dimensional structure (prism shape) by placing two circular plates on a droplet. The paper explains the dynamics of the positioning both experimentally and numerically. Through experiment and simulation, we found that the rotational positioning is determined by surface tension and the two plates are supported by internal pressure and surface tension. The apex angle of the prism can be easily tuned by changing its volume. As described at the end of the paper, a prism composed of a droplet of silicone oil and two transparent circular SU-8 plates is encapsulated by depositing an organic Parylene membrane. The shape of the liquid is fixed, and a non-evaporative and non-deformable prism is formed with a volume of less than 1 mm³.
| <urn:uuid:8ae85ac6-9722-4767-8fcc-11dc18517edb> | 3.125 | 227 | Academic Writing | Science & Tech. | 25.420901 | 95,479,473 |
A regional and global analysis of carbon dioxide physiological forcing and its impact on climate
An increase in atmospheric carbon dioxide concentration has both a radiative (greenhouse) effect and a physiological effect on climate. The physiological effect forces climate as plant stomata do not open as wide under enhanced CO2 levels and this alters the surface energy balance by reducing the evapotranspiration flux to the atmosphere, a process referred to as ‘carbon dioxide physiological forcing’. Here the climate impact of the carbon dioxide physiological forcing is isolated using an ensemble of twelve 5-year experiments with the Met Office Hadley Centre HadCM3LC fully coupled atmosphere–ocean model where atmospheric carbon dioxide levels are instantaneously quadrupled and thereafter held constant. Fast responses (within a few months) to carbon dioxide physiological forcing are analyzed at a global and regional scale. Results show a strong influence of the physiological forcing on the land surface energy budget, hydrological cycle and near surface climate. For example, global precipitation rate reduces by ~3% with significant decreases over most land-regions, mainly from reductions to convective rainfall. This fast hydrological response is still evident after 5 years of model integration. Decreased evapotranspiration over land also leads to land surface warming and a drying of near surface air, both of which lead to significant reductions in near surface relative humidity (~6%) and cloud fraction (~3%). Patterns of fast responses consistently show that results are largest in the Amazon and central African forest, and to a lesser extent in the boreal and temperate forest. Carbon dioxide physiological forcing could be a source of uncertainty in many model predicted quantities, such as climate sensitivity, transient climate response and the hydrological sensitivity. These results highlight the importance of including biological components of the Earth system in climate change studies.
Keywords: Carbon dioxide physiological forcing; Climate response; Hydrological cycle; Surface energy balance; Fast responses
TA was funded by a NERC open CASE award with the Met Office. MDB and OB were supported by the Joint DECC and Defra Integrated Climate Programme, DECC/Defra (GA01101). We thank two anonymous reviewers for their constructive comments.
| <urn:uuid:345ca769-b95b-4cff-b25e-c8923e7ba921> | 3.046875 | 628 | Academic Writing | Science & Tech. | 18.84125 | 95,479,488 |
A Gene Divided Reveals Details of Natural Selection
News Oct 12, 2007
In a molecular tour de force, researchers at the University of Wisconsin-Madison have provided an exquisitely detailed picture of natural selection as it occurs at the genetic level.
Writing in the Oct. 11 issue of the journal Nature, Howard Hughes Medical Institute investigator Sean B. Carroll and former UW-Madison graduate student Chris Todd Hittinger document how, over many generations, a single yeast gene divides in two and parses its responsibilities to be a more efficient denizen of its environment. The work illustrates, at the most basic level, the driving force of evolution.
"This is how new capabilities arise and new functions evolve," says Carroll, one of the world's leading evolutionary biologists. "This is what goes on in butterflies and elephants and humans. It is evolution in action."
The work is important because it provides the most fundamental view of how organisms change to better adapt to their environments. It documents the workings of natural selection, the critical idea first posited by Charles Darwin where organisms accumulate random variations, and changes that enhance survival are "selected" by being genetically transmitted to future generations.
The new study replayed a set of genetic changes that occurred in a yeast 100 million or so years ago when a critical gene was duplicated and then divided its nutrient processing responsibilities to better utilize the sugars it depends on for food.
"One source of newness is gene duplication," says Carroll.
"When you have two copies of a gene, useful mutations can arise that allow one or both genes to explore new functions while preserving the old function. This phenomenon is going on all the time in every living thing. Many of us are walking around with duplicate genes we're not aware of. They come and go."
In short, says Carroll, two genes can be better than one because redundancy promotes a division of labor. Genes may do more than one thing, and duplication adds a new genetic resource that can share the workload or add new functions. For example, in humans the ability to see color requires different molecular receptors to discriminate between red and green, but both arose from the same vision gene.
The difficulty, he says, in seeing the steps of evolution is that in nature genetic change typically occurs at a snail's pace, with very small increments of change among the chemical base pairs that make up genes accumulating over thousands to millions of years.
To measure such small change requires a model organism like simple brewer's yeast that produces a lot of offspring in a relatively short period of time.
Yeast, Carroll argues, are perfect because their reproductive qualities enable study of genetic change at the deepest level and greatest resolution because researchers can produce and quickly count a large number of organisms. The same work in fruit flies, one of biology's most powerful models, would require "a football stadium full of flies" and years of additional work, Carroll explains.
"The process of becoming better occurs in very small steps. When compounded over time, these very small changes make one group of organisms successful and they out-compete others," according to Carroll.
The new study involved swapping out different regions of the yeast genome to assess their effects on the performance of the twin genes, as well as engineering in the gene from another species of yeast that had retained only a single copy.
"We retraced the steps of evolution," the Wisconsin biologist explains.
The work shows in great detail how the ancestral gene gained efficiency through duplication and division of labor.
"They became optimally connected in that job. They're working in cahoots, but together they are better at the job the ancestral gene held," Carroll says. "Natural selection has taken one gene with two functions and sculpted an assembly line with two specialized genes."
| <urn:uuid:c7fffc1d-25d5-4f90-98c5-6e1981362db4> | 3.640625 | 1,007 | News Article | Science & Tech. | 28.477409 | 95,479,510 |
Scientists solve structure of sought-after bacterial protein
Bacterial cells have an added layer of protection, called the cell wall, that animal cells don't. Assembling this tough armor entails multiple steps, some of which are targeted by antibiotics like penicillin and vancomycin.
Researchers at Duke University solved the structure of an enzyme that is crucial for helping bacteria build their cell walls. The molecule, called MurJ (shown in green), must flip cell wall precursors (purple) across the bacteria's cell membrane before these molecules can be linked together to form the cell wall. This new structure could be important to help develop new broad-spectrum antibiotics.
Credit: Alvin Kuk, Duke University
Yet one step in the process has remained a mystery because the molecular structures of the proteins involved were not known.
Duke University researchers have now provided the first close-up glimpse of a protein, called MurJ, which is crucial for building the bacterial cell wall and protecting it from outside attack. They published MurJ's molecular structure on Dec. 26 in Nature Structural and Molecular Biology.
Antibiotic researchers feel an urgent need to gain a deeper understanding of cell wall construction to develop new antibiotics in the face of mounting antibacterial resistance. In the U.S. alone, an antibiotic-resistant infection called MRSA causes nearly 12,000 deaths per year.
"Until now, MurJ's mechanisms have been somewhat of a 'black box' in the bacterial cell wall synthesis because of technical difficulties studying the protein," said senior author Seok-Yong Lee, Ph.D., associate professor of biochemistry at Duke University School of Medicine. "Our study could provide insight into the development of broad spectrum antibiotics, because nearly every type of bacteria needs this protein's action."
A bacterium's cell wall is composed of a rigid mesh-like material called peptidoglycan. Molecules to make peptidoglycan are manufactured inside the cell and then need to be transported across the cell membrane to build the outer wall.
In 2014, another group of scientists had discovered that MurJ is the transporter protein located in the cell membrane that is responsible for flipping these wall building blocks across the membrane. Without MurJ, peptidoglycan precursors build up inside the cell and the bacterium falls apart.
Many groups have attempted to solve MurJ's structure without success, partly because membrane proteins are notoriously difficult to work with.
In the new study, Lee's team was able to crystallize MurJ and determine its molecular structure to 2-angstrom resolution by an established method called X-ray crystallography --which is difficult to achieve in a membrane protein.
The structure, combined with follow-up experiments in which the scientists mutated specific residues of MurJ, allowed them to propose a model for how it flips peptidoglycan precursors across the membrane.
After determining the first structure of MurJ, Lee's team is now working to capture MurJ in action, possibly by crystallizing the protein while it is bound to a peptidoglycan precursor.
"Getting the structure of MurJ linked to its substrate will be key. It will really help us understand how this transporter works and how to develop an inhibitor targeting this transporter," Lee said.
Lee's group is continuing structure and function studies of other key players in bacterial cell wall biosynthesis as well. Last year, they published the structure of another important enzyme, MraY, bound to the antibacterial muraymycin.
The research was supported by Duke University startup funds.
CITATION: "Crystal structure of the MOP flippase MurJ in an inward-facing conformation," Alvin C. Y. Kuk, Ellene H. Mashalidis, Seok-Yong Lee. Nature Structural & Molecular Biology, December 26, 2016. DOI: 10.1038/nsmb.3346
Karl Bates | EurekAlert!
| <urn:uuid:dc07d9bb-ff4d-4b9f-ba2c-3d8fceda4519> | 3.65625 | 1,457 | Content Listing | Science & Tech. | 40.059914 | 95,479,566 |
This alternative approach to creating artificial organic molecules, called bioretrosynthesis, was first proposed four years ago by Brian Bachmann, associate professor of chemistry at Vanderbilt University. Now Bachmann and a team of collaborators report that they have succeeded in using the method to produce the HIV drug didanosine.
The proof of concept experiment is described in a paper published online March 23 by the journal Nature Chemical Biology.
"These days synthetic chemists can make almost any molecule imaginable in an academic laboratory setting," said Bachmann. "But they can't always make them cheaply or in large quantities. Using bioretrosynthesis, it is theoretically possible to make almost any organic molecule out of simple sugars."
Putting natural selection to use in this novel fashion has another potential advantage. "We really need a green alternative to the traditional approach to making chemicals. Bioretrosynthesis offers a method to develop environmentally friendly manufacturing processes because it relies on enzymes – the biological catalysts that make life possible – instead of the high temperatures and pressures, toxic metals, strong acids and bases frequently required by synthetic chemistry," he said.
Normally, both evolution and synthetic chemistry proceed from the simple to the complex. Small molecules are combined and modified to make larger and more complex molecules that perform specific functions. Bioretrosynthesis works in the opposite direction. It starts with the final, desired product and then uses natural selection to produce a series of specialized enzymes that can make the final product out of a chain of chemical reactions that begin with simple, commonly available compounds.
Bachmann got the idea of applying natural selection in reverse from the retro-evolution hypothesis proposed in 1945 by the late Caltech geneticist Norman Horowitz. Horowitz envisioned an early stage in the development of life where early organisms were swimming in a primordial soup rich in organic material. In this environment, imagine that one of the species finds a use for the complex chemical compound A that gives it a competitive advantage. As a result, its population expands, consuming more and more compound A.
Everything goes well until compound A becomes scarce. When that happens, individuals who develop an enzyme that allows them to substitute the still plentiful compound B for the scarce compound A gain a reproductive advantage and continue to grow while those who remain dependent on compound A die out. And so it goes until many generations later the survivors have developed multi-step chemical pathways to produce the molecules that they need to survive from the molecules available in their environment.
To test Bachmann's retro approach, the Vanderbilt chemists first identified the drug that they wanted to produce – in this case didanosine, an anti-HIV drug sold under the trade names of Videx and Videx EC that is very costly to manufacture. Then they identified a similar "precursor" molecule that can be converted into didanosine when it is subject to a specific chemical transformation along with an enzyme capable of producing the type of transformation required.
Once they identified the enzyme, the researchers made use of the power of natural selection by making thousands of copies of the gene that makes the enzyme using a special copying technique that introduces random mutations.
The mutant genes were transferred into the gut bacteria E. coli in order to produce the mutant enzymes and placed into different "wells." After the cells were broken open and the contents mixed with the precursor compound, the amount of didanosine in each well was measured. The researchers selected the enzyme that produced the greatest amount of the desired drug and then made enough of this optimized enzyme for the next step.
Next the researchers identified a second precursor – an even simpler molecule that could be chemically converted into the first precursor – and an associated transformative enzyme. Again they made thousands of mutated versions of the transformative enzyme's gene, inserted them in E. coli, put them in wells, broke open the cells and mixed the contents with the optimized enzyme and second precursor. Once again, they tested all the wells for the anti-HIV drug. The well with the highest level of didanosine was the one in which the mutant enzyme was most effective in making the first precursor, which the optimized enzyme then converted into didanosine. This gave them a second optimized enzyme. The researchers carried out this reverse selection process three times, until they could make didanosine out of a simple and inexpensive sugar named dideoxyribose.
One of the key technical challenges was rapidly determining the three-dimensional structures of the enzymes that were generated during the evolutionary process. Associate Professor of Pharmacology Tina Iverson provided this capability. Her team analyzed the laboratory-evolved enzymes after each round of mutagenesis and identified how the structural changes caused by the mutations improved the enzyme's ability to produce the desired transformation.
This information helped the collaborators figure out why some mutant enzymes did a better job at producing the desired compounds than others, which guided their choices about the areas of the precursor proteins to target.
The proof-of-concept experiment was performed in vitro instead of in living cells to keep things simple. However, the ultimate goal is to use the approach to produce artificial compounds by fermentation.
Graduate students William Birmingham, Chrystal Starbird, Timothy Panosian and David Nannemann contributed to the study.
The research was supported by National Science Foundation graduate fellowship DGE 0909667, the D. Stanley and Ann T. Tarbell Endowment fund, National Institutes of Health grant GM079419 and Department of Energy Argonne National Laboratory contract DE-AC02-06CH11357
David Salisbury | Vanderbilt University
| <urn:uuid:308d3b27-199d-4f48-8d52-49a62d58cc7f> | 3.859375 | 1,831 | Content Listing | Science & Tech. | 28.936709 | 95,479,567 |
Jupiter's total moon count jumps to 79 following the discovery of 12 new moons around the planet. Of the new dozen, astronomers pinpoint an 'oddball' moon with a dangerous orbit that's unlike any other Jovian moon.
NASA asked a number of scientists to study the chance of building a Europa lander.
NASA's Stratospheric Observatory for Infrared Astronomy (SOFIA) will soon begin its 2017 observing campaign.
U.S. President-elect Donald Trump will focus NASA’s efforts in exploring the farthest reaches of the solar system and visiting Europa, Jupiter’s icy moon.
Is there life beyond Earth in the solar system? NASA director James Green explains which planets and moons in our solar system could have alien life, and why NASA believes it is on the right track.
Scientists have observed an atmospheric collapse in Jupiter’s moon Io when the giant planet casts its shadow on the moon’s surface during an eclipse.
Jupiter's moon Europa has in recent years given scientists hope that it harbors conditions suitable for life, so in a bid to explore this possibility further, on Tuesday, NASA chose nine high-tech instruments for a mission to search for life on this mysterious icy world. | <urn:uuid:268d1f44-0411-482a-af97-196b8e819213> | 2.90625 | 256 | Content Listing | Science & Tech. | 50.743717 | 95,479,616 |
ENVIRONMENTAL SCIENCE. CH. 1, “our changing environment”. The big picture: human population; Earth's natural resources; pollution in air, water, or soil that harms humans or other living organisms. How can humans impact the environment less?
“our changing environment”
Env. Sci. is the study of the relationship between humans and the environment (both biotic and abiotic factors) | <urn:uuid:357b2959-1b61-4287-94d7-259d77e998c0> | 2.84375 | 91 | Knowledge Article | Science & Tech. | 47.532982 | 95,479,617 |
The island of Isla Santa María in south-central Chile documents a complete seismic cycle.
Charles Darwin and his captain Robert Fitzroy witnessed the great earthquake of 1835 in south central Chile. Historical nautical charts from the Beagle's captain show an uplift of the island of Isla Santa María of 2 to 3 meters after the earthquake.
What Darwin and Fitzroy could not know was that, 175 years later, an equally strong earthquake would recur at nearly the same position.
Along the west coast of South America, the Pacific Ocean floor moves under the South American continent. As tension builds up and is released, the earth's crust along the whole continent, from Tierra del Fuego to Peru, has broken along the entire distance in a series of earthquakes within one and a half centuries. The earthquake of 1835 was the beginning of such a seismic cycle in this area.
After examining the results of the Maule earthquake in 2010, a team of geologists from Germany, Chile and the US was able, for the first time, to measure and simulate the vertical movement of the earth's crust at this location over a complete seismic cycle.
In the current online edition of Nature Geoscience they report on the earthquakes: after the earthquake of 1835, with a magnitude of about 8.5, Isla Santa María was uplifted by up to 3 m, subsided again by about 1.5 m over the following 175 years, and was uplifted anew by 1.5-2 m by the Maule earthquake, which had a moment magnitude of 8.8.
The Maule earthquake is among the great earthquakes that have been fully recorded, and therefore well documented, by a modern network of space-geodetic and geophysical measuring systems on the ground.
But nautical charts from 1804 (before the earthquake), from 1835 and from 1886, together with the precise documentation of Captain Fitzroy, allow, in combination with present-day methods, a sufficiently accurate determination of the vertical movement of the earth's crust along a complete seismic cycle.
At the beginning of such a cycle energy is stored by elastic deformation of the earth's crust, then released at the time of the earthquake. "But interestingly, our observations hint at a variable subsidence rate during the seismic cycle," explains Marcos Moreno from GFZ German Research Centre for Geosciences, one of the co-authors.
“Between great earthquakes the plates beneath Isla Santa María are largely locked, dragging the edge of the South American plate, and the island upon it, downward and eastward. During the earthquakes, motion is suddenly reversed and the edge of the South American plate and the island are thrust upward and to the west.” This complex movement pattern was fully confirmed by a numerical model. In total, a permanent vertical uplift of 10 to 20% of the total uplift accumulates over time.
Earthquake records show that there are no periodic repetition times or consistently repeating magnitudes. An important instrument for better estimating the risks posed by earthquakes is the compilation and measurement of the earth's crust deformation through an entire seismic cycle.
Wesson, R. L., Melnick, D., Cisternas, M., Moreno, M. & Ely, L.: “Vertical deformation through a complete seismic cycle at Isla Santa María, Chile”, Nature Geoscience, Advance Online Publication, 22.06.2015, http://dx.doi.org/10.1038/ngeo2468 (2015). DOI: 10.1038/NGEO2468
Franz Ossing | Helmholtz Centre Potsdam GFZ German Research Centre for Geosciences
| <urn:uuid:1e26e836-a774-4368-b2bd-d0948c174b51> | 3.75 | 1,432 | Content Listing | Science & Tech. | 46.082168 | 95,479,634 |
Science continues to get bigger. The titles tell the story - supercolliders, supercomputers, superconductivity. Unfortunately these superprojects are supercostly. President Bush would be rash, in the budget plan he'll offer this week, to approve all the science extravaganzas his predecessor endorsed.
He has no task more urgent than to restore America's flagging pace of innovation. The traditional patterns of science funding have failed to help. Instead of letting each agency push its own agenda, Mr. Bush can redirect scientific resources toward a national goal: improving productivity.
NASA would be a good place to start. The space agency used to lead on two frontiers - space and technology. In the 1960's, its need for new materials and computers energized civilian research. Not any more. Its goal of building a grandiose $23 billion space station is largely make-work.
That gives Mr. Bush a splendid opportunity -assign NASA a mission in space designed to maximize the technology spin-off for civilian markets. The agenda almost writes itself: phase out the antique shuttle and space station; develop a new generation of rockets designed to put payloads in space at minimum cost; and fund research on robotics, computers and new materials to advance the unmanned exploration of space.
The superconducting supercollider - a $4 billion machine for exploring the ultimate constituents of matter - is another science spectacular. The purpose is worthy but the cost prohibitive. Other physicists correctly fear that the supercollider will drain funds from their research. High-energy physicists insist that Washington must build it, or stifle their research. There are far cheaper options.
One is to collaborate in the highly successful European laboratory at Geneva. Another is to build a different kind of machine, known as a linear accelerator. Backers of the supercollider claim its technology is well tested, but in fact there are still major design problems with its superconducting magnets.
Moreover, these are not the most advanced kind of superconductors. New superconducting materials will spawn whole new industries in the 1990's, and Japan may already lead in the race to exploit them. If the United States has to drop $5 billion into a supercollider, let it at least be one with a chance of helping the new superconducting materials toward practical application.
A third venture in rococo research is the $3 billion human genome project. Deciphering the full chemical sequence of human genetic instructions will be of great medical importance. But the Reagan Administration planned to let university biologists run the project, spreading the pork around the states to build the usual constituency. Shouldn't companies that might directly profit from it have some clear voice in its direction? That would speed commercial application and help American companies challenge foreign competitors.
The designs of the space station, the supercollider and the human genome project have one thing in common: a near total disregard of how such ventures might further American competitiveness. Mr. Bush's best science policy would be to rethink all three from scratch. | <urn:uuid:a9006648-425e-4c5f-be5b-77b90173a819> | 2.734375 | 630 | Truncated | Science & Tech. | 36.114249 | 95,479,635 |
Greenhouse-gas induced warming and megapolitan expansion are both significant drivers of our warming planet. Researchers are now assessing adaptation technologies that could help us acclimate to these changing realities.
The deployment of cool roofs, roofs typically painted white, help mitigate summertime temperatures but in Florida and some Southwestern cities like Phoenix (pictured) the roofs also have a negative effect on rainfall.
Credit: Ken Fagan, Arizona State University
But how well these adaptation technologies – such as cool roofs, green roofs and hybrids of the two – perform year round and how this performance varies with place remains uncertain.
Now a team of researchers, led by Matei Georgescu, an Arizona State University assistant professor in the School of Geographical Sciences and Urban Planning and a senior sustainability scientist in the Global Institute of Sustainability, have begun exploring the relative effectiveness of some of the most common adaptation technologies aimed at reducing warming from urban expansion.
The work showed that end-of-century urban expansion within the U.S. alone, and separate from greenhouse-gas-induced climate change, can raise near-surface temperatures by up to 3 C (nearly 6 F) for some megapolitan areas. Results of the new study indicate the performance of urban adaptation technologies can counteract this increase in temperature, but also varies seasonally and is geographically dependent. In the paper, "Urban adaptation can roll back warming of emerging megapolitan regions," published in the online Early Edition of the Proceedings of the National Academy of Sciences, Georgescu and Philip Morefield, Britta Bierwagen and Christopher Weaver, all of the U.S. Environmental Protection Agency, examined how these technologies fare across different geographies and climates of the U.S.
Specifically, what works in California's Central Valley, like cool roofs, does not necessarily provide the same benefits to other regions of the U.S., like Florida, Georgescu said. Assessing consequences that extend beyond near surface temperatures, like rainfall and energy demand, reveals important tradeoffs that are oftentimes unaccounted for.
Cool roofs are a good example. In an effort to reflect incoming solar radiation, and thereby cool buildings and lessen energy demand during summer, painting one's roof white has been proposed as an effective strategy. Cool roofs have been found to be particularly effective for certain areas during summertime.
However, during winter these same urban adaptation strategies when deployed in northerly locations, further cool the environment and consequently require additional heating to maintain comfort levels. This is an important seasonal contrast between cool roofs (i.e. highly reflective) and green roofs (i.e. highly transpiring). While green roofs do not cool the environment as much during summer, they also do not compromise summertime energy savings with additional energy demand during winter.
"The energy savings gained during the summer season, for some regions, is nearly entirely lost during the winter season," Georgescu said.
In Florida, and to a lesser extent Southwestern states of the U.S., there is a very different effect caused by cool roofs.
"In Florida, our simulations indicate a significant reduction in precipitation. The deployment of cool roofs results in a 2 to 4 millimeter per day reduction in rainfall, a considerable amount (nearly 50 percent) that will have implications for water availability, reduced stream flow and negative consequences for ecosystems," he said. "For Florida, cool roofs may not be the optimal way to battle the urban heat island because of these unintended consequences."
Georgescu said the researchers did not intend to rate urban adaptation technologies as much as to shed light on each technology's advantages and disadvantages.
"We simply wanted to get all of the technologies on a level playing field and draw out the issues associated with each one, across place and across time."
Overall, the researchers suggest that judicious planning and design choices should be considered in trying to counteract rising temperatures caused by urban sprawl and greenhouse gases. They add that "urban-induced climate change depends on specific geographic factors that must be assessed when choosing optimal approaches, as opposed to one size fits all solutions."
Skip Derra | EurekAlert!
| <urn:uuid:d7a2e095-2dac-41a1-b7f5-be883cc43358> | 3.671875 | 1,441 | Content Listing | Science & Tech. | 30.651351 | 95,479,642 |
Re-examination of the COBE DIRBE data reveals the thermal emission of several comet dust trails. The dust trails of 1P/Halley, 169P/NEAT, and 3200 Phaethon have not been previously reported. The known trails of 2P/Encke and 73P/Schwassmann-Wachmann 3 are also seen. The dust trails have 12 and 25 micron surface brightnesses of <0.1 and <0.15 MJy/sr, respectively, which is <1% of the zodiacal light intensity. The trails are very difficult to see in any single daily image of the sky, but are evident as rapidly moving linear features in movies of the DIRBE data. Some trails are clearest when crossing through the orbital plane of the parent comet, but others are best seen at high ecliptic latitudes as the Earth passes over or under the dust trail. All these comets have known associations with meteor showers. This re-examination also reveals one additional comet and 13 additional asteroids that had not previously been recognized in the DIRBE data. | <urn:uuid:edd430a9-b5a2-43d6-b77f-c001df7126f7> | 3.09375 | 229 | Truncated | Science & Tech. | 55.95106 | 95,479,650 |
Artist's conception of the blazar in Orion that emitted the ghost particle (aka neutrino) detected at the Ice Cube facility in Antarctica.
As usual, the media, in stories about this "revolutionary" astrophysical find, chooses terms like "ghost particle" aiming to elicit plenty of clicks. In fact, the term can apply to any neutrino precisely because one of the properties of these near massless particles is the ability to pass through an enormous amount of matter without causing any reaction. Like the cartoon character "Casper the Friendly Ghost", they can literally pass through walls. In fact, only about 1 in ten billion neutrinos traversing a matter barrier equal to Earth's diameter reacts with even a proton or neutron.
Hence, by virtue of being unaffected by normal matter, radiation or gravity we have "ghostly neutrinos".
In the case of the recent discovery - announced Thursday at the National Science Foundation - we learned one and only one neutrino "made the cut" in being detected by Ice Cube. This neutrino arrived from a "blazar" - a hyperactive galaxy in the constellation Orion, 3.7 billion light years distant - which hurls neutrinos like particles from a cosmic ray gun. Indeed, the international team assembled in D.C. for the announcement believes the neutrino-spewing blazar to be the first known source of higher energy cosmic rays to reach Earth. See also:
This particular neutrino intercepted the Ice Cube detector in September, 2017, with an energy of 290 tera-electron volts (TeV). (Recall here that 1 eV = 1.6 x 10^-19 J.) That energy is some 40 times greater than that of similar particles produced in the Large Hadron Collider. But how do we know, what signature do we have, that it was a neutrino? Also what type of neutrino? Recall I had discussed three different types in an earlier post, the electron neutrino, tau neutrino and muon neutrino, e.g.
In the current single-neutrino case from the blazar, it appears we have a muon neutrino - based also on archival data. We suspect it is an actual neutrino because the little bugger's interaction produced muons that generated Cherenkov radiation, and hence were able to move faster than the speed of light c in ice. Some accounts have the muons moving faster than light, period, which is incorrect. They are faster than light in the ice, i.e. the ice forming the Ice Cube detector. Moving faster than light through a medium like ice, the muons and the electrons they spin off glow, or flare. It is the sensors in the ice of the Ice Cube that "spotted" the flares, and from that - and some useful statistical analysis (see bottom of post) - the neutrino interaction was deduced. The initial discovery was confirmed by astronomers at 20 different observatories. The find itself caps almost 20 years of work by a collaboration comprising 300 astrophysicists and astronomers (you can more easily see their names appended to the pdf version of one of the papers accessible at the end).
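(A rough aside on the numbers - this is just the textbook Cherenkov condition, nothing specific to Ice Cube: light in a medium of refractive index n travels at c/n, and a charged particle radiates Cherenkov light when its speed exceeds that. For ice, n is about 1.3, so the threshold is v > c/1.3 ≈ 0.77 c. A muon kicked out by a neutrino carrying hundreds of TeV is ultra-relativistic, with v essentially equal to c, so it clears this threshold easily.)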
OK, so how did the above reference "confirmation" take place? Well, lo and behold, at nearly the same time, the Fermi Gamma-ray Space Telescope detected an increase in energetic activity from the same direction as the blazar. Coincidence? Nope, the contributing researchers don't buy it, mainly because the rigorous statistics indicates a physical connection. Readers can read the end conclusion in the paper for themselves. (Of course, the uncertainty is measured in so many standard deviations and these measurements aren't entirely free of them, so the participating team members can't be 100 percent sure, but they're maybe at least 70-80 percent sure. Which is a pretty good marker in astrophysical research!)
Some 8 years ago, I wrote at the end of a post ('Neutrinos Then And Now', Oct. 26, 2010) about the planned construction of the "Ice Cube" neutrino detector in Antarctica. I wrote at the time:
"Many other physicists and astronomers are interested in detecting neutrinos from much more distant objects, such as supernovas, and colliding galaxies. To that end, an enormous neutrino telescope detector called "Ice Cube" is being constructed inside an ice field in Antarctica. (See attached image). Its sensors will be aimed not only at the sky but toward the ground to detect neutrinos from the Sun and outer space that are coming through the planet"
The image shown at the time was extremely crude with virtually no details, but the one I show below basically indicates the most critical aspects.
The Ice Cube detector:
The Observatory seen in macro-frame:
Because of the oscillations and quantum interference we need to reckon with a "misalignment" between flavor and the basic neutrino masses. This is done by reference to three independent "mixing angles": Θ_12, Θ_23 and Θ_13. To a good approximation, oscillation in any one regime is characterized by just one Θ_ij and a corresponding mass-squared difference, defined:
Δm²_ij = m_j² - m_i²
As an example, the probability that a muon neutrino of energy E acquires a different flavor after traversing distance L is:
P = sin²(2Θ_23) sin²(L / l_23)
where l_23 is the energy-dependent oscillation length, given by:
l_23 = 4ħEc / Δm²_32
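To get a feel for the numbers (a back-of-the-envelope example, not a figure from the discovery paper): with ħc ≈ 1.97 x 10^-7 eV·m, a typical 1 GeV atmospheric muon neutrino and Δm²_32 = 0.0024 eV² give
l_23 ≈ (4 x 1.97 x 10^-7 eV·m x 10^9 eV) / 0.0024 eV² ≈ 3.3 x 10^5 m ≈ 330 km
So the sin²(L/l_23) factor only becomes large after a few hundred kilometers of travel, which is why upward-going atmospheric neutrinos that cross a big chunk of the Earth show strong flavor conversion, while those arriving from directly overhead - a few tens of kilometers of travel - barely oscillate at all.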
How well do we know the parameters? Atmospheric neutrino observations yield:
Θ_23 ~ 45 degrees, while Δm²_32 = 0.0024 eV². Meanwhile, solar neutrino data yield roughly 33 degrees for Θ_12 and Δm²_21 = 0.00008 eV². (Note: ħ is the Planck constant of action divided by 2π.) If Δm²_31 = Δm²_32 + Δm²_21, then:
Δm²_31 = 0.0024 eV² + 0.00008 eV² = 0.00248 eV²,
which is close to Δm²_32. | <urn:uuid:a6acdae8-64e2-494f-b046-13d4b52d94e1> | 3.34375 | 1,263 | Personal Blog | Science & Tech. | 48.452731 | 95,479,662 |
RIO GRANDE VALLEY, Texas - “2016, 2017 these years will both be remembered as being among the warmest on record for the Rio Grande Valley,” said Barry Goldsmith, meteorologist for the National Weather Service.
Harlingen and Port Mansfield had their hottest years on record, with average temperatures of 77.1 and 75.6 degrees, respectively, in 2017.
McAllen and Rio Grande City, meanwhile, tied 2016's heat record.
The recent cold the Valley has experienced, including the rare occasion of snow, evened things out.
“Those cold snaps came, and with those cold snaps we've actually dropped that value back a bit,” Goldsmith said.
The National Weather Service has seen a number of all-time records or top five records of heat this century, and climate change could be one explanation.
“There’s still more data that we need to collect to make an observation and say how much is related to climate change and how much is related to these other atmospheric puzzle pieces fitting together,” he said.
It will be years before the National Weather Service can determine how much of a contribution global climate change is making towards the record breaking heat.
“We are seeing a trend, so we can't deny that's happening,” Goldsmith said. “But exactly the contribution, we just need more data and that would be a few decades.” | <urn:uuid:a448b5f0-6247-4c0d-b8cc-af3495d105ab> | 2.671875 | 300 | News Article | Science & Tech. | 54.081114 | 95,479,674 |
Introduction to the Common Language Runtime (CLR)
By Vance Morrison (@vancem) - 2007
What is the Common Language Runtime (CLR)? To put it succinctly:
The Common Language Runtime (CLR) is a complete, high level virtual machine designed to support a broad variety of programming languages and interoperation among them.
Phew, that was a mouthful. It also in and of itself is not very illuminating. The statement above is useful however, because it is the first step in taking the large and complicated piece of software known as the CLR and grouping its features in an understandable way. It gives us a "10,000 foot" view of the runtime from which we can understand the broad goals and purpose of the runtime. After understanding the CLR at this high level, it is easier to look more deeply into sub-components without as much chance of getting lost in the details.
The CLR: A (very rare) Complete Programming Platform
Every program has a surprising number of dependencies on its runtime environment. Most obviously, the program is written in a particular programming language, but that is only the first of many assumptions a programmer weaves into the program. All interesting programs need some runtime library that allows them to interact with the other resources of the machine (such as user input, disk files, network communications, etc). The program also needs to be converted in some way (either by interpretation or compilation) to a form that the native hardware can execute directly. These dependencies of a program are so numerous, interdependent and diverse that implementers of programming languages almost always defer to other standards to specify them. For example, the C++ language does not specify the format of a C++ executable. Instead, each C++ compiler is bound to a particular hardware architecture (e.g., X86) and to an operating system environment (e.g., Windows, Linux, or Mac OS), which describes the format of the executable file format and specifies how it will be loaded. Thus, programmers don't make a "C++ executable," but rather a "Windows X86 executable" or a "Power PC Mac OS executable."
While leveraging existing hardware and operating system standards is usually a good thing, it has the disadvantage of tying the specification to the level of abstraction of the existing standards. For example, no common operating system today has the concept of a garbage-collected heap. Thus, there is no way to use existing standards to describe an interface that takes advantage of garbage collection (e.g., passing strings back and forth, without worrying about who is responsible for deleting them). Similarly, a typical executable file format provides just enough information to run a program but not enough information for a compiler to bind other binaries to the executable. For example, C++ programs typically use a standard library (on Windows, called msvcrt.dll) which contains most of the common functionality (e.g., printf), but the existence of that library alone is not enough. Without the matching header files that go along with it (e.g., stdio.h), programmers can't use the library. Thus, existing executable file format standards cannot be used both to describe a file format that can be run and to specify other information or binaries necessary to make the program complete.
The CLR fixes problems like these by defining a very complete specification (standardized by ECMA) containing the details you need for the COMPLETE lifecycle of a program, from construction and binding through deployment and execution. Thus, among other things, the CLR specifies:
- A GC-aware virtual machine with its own instruction set (called the Common Intermediate Language (CIL)) used to specify the primitive operations that programs perform. This means the CLR is not dependent on a particular type of CPU.
- A rich meta data representation for program declarations (e.g., types, fields, methods, etc), so that compilers generating other executables have the information they need to call functionality from 'outside'.
- A file format that specifies exactly how to lay the bits down in a file, so that you can properly speak of a CLR EXE that is not tied to a particular operating system or computer hardware.
- The lifetime semantics of a loaded program, the mechanism by which one CLR EXE file can refer to another CLR EXE and the rules on how the runtime finds the referenced files at execution time.
- A class library that leverages the features that the CLR provides (e.g., garbage collection, exceptions, or generic types) to give access both to basic functionality (e.g., integers, strings, arrays, lists, or dictionaries) as well as to operating system services (e.g., files, network, or user interaction).
Defining, specifying and implementing all of these details is a huge undertaking, which is why complete abstractions like the CLR are very rare. In fact, the vast majority of such reasonably complete abstractions were built for single languages. For example, the Java runtime, the Perl interpreter or the early version of the Visual Basic runtime offer similarly complete abstraction boundaries. What distinguishes the CLR from these earlier efforts is its multi-language nature. Within these single-language runtimes the programming experience is often very good, but interoperating with programs written in other languages is difficult at best (Visual Basic is a partial exception because it leverages the COM object model). Interoperation is difficult because these languages can only communicate with "foreign" languages by using the primitives provided by the operating system. Because the OS abstraction level is so low (e.g., the operating system has no concept of a garbage-collected heap), needlessly complicated techniques are necessary. By providing a COMMON LANGUAGE RUNTIME, the CLR allows languages to communicate with each other with high-level constructs (e.g., GC-collected structures), easing the interoperation burden dramatically.
Because the runtime is shared among many languages, it means that more resources can be put into supporting it well. Building good debuggers and profilers for a language is a lot of work, and thus they exist in a full-featured form only for the most important programming languages. Nevertheless, because languages that are implemented on the CLR can reuse this infrastructure, the burden on any particular language is reduced substantially. Perhaps even more important, any language built on the CLR immediately has access to all the class libraries built on top of the CLR. This large (and growing) body of (debugged and supported) functionality is a huge reason why the CLR has been so successful.
In short, the runtime is a complete specification of the exact bits one has to put in a file to create and run a program. The virtual machine that runs these files is at a high level appropriate for implementing a broad class of programming languages. This virtual machine, along with an ever growing body of class libraries that run on that virtual machine, is what we call the common language runtime (CLR).
The Primary Goal of the CLR
Now that we have a basic idea of what the CLR is, it is useful to back up just a bit and understand the problem the runtime was meant to solve. At a very high level, the runtime has only one goal:
The goal of the CLR is to make programming easy.
This statement is useful for two reasons. First, it is a very useful guiding principle as the runtime evolves. For example, fundamentally only simple things can be easy, so adding user visible complexity to the runtime should always be viewed with suspicion. More important than the cost/benefit ratio of a feature is its added exposed complexity/weighted benefit over all scenarios ratio. Ideally, this ratio is negative (that is, the new feature reduces complexity by removing restrictions or by generalizing existing special cases); however, more typically it is kept low by minimizing the exposed complexity and maximizing the number of scenarios to which the feature adds value.
The second reason this goal is so important is that ease of use is the fundamental reason for the CLR's success. The CLR is not successful because it is faster or smaller than writing native code (in fact, well-written native code often wins). The CLR is not successful because of any particular feature it supports (like garbage collection, platform independence, object-oriented programming or versioning support). The CLR is successful because all of those features, as well as numerous others, combine to make programming significantly easier than it would be otherwise. Some important but often overlooked ease of use features include:
- Simplified languages (e.g., C# and Visual Basic are significantly simpler than C++)
- A dedication to simplicity in the class library (e.g., we only have one string type, and it is immutable; this greatly simplifies any API that uses strings)
- Strong consistency in the naming in the class library (e.g., requiring APIs to use whole words and consistent naming conventions)
- Great support in the tool chain needed to create an application (e.g., Visual Studio makes building CLR applications very simple, and Intellisense makes finding the right types and methods to create the application very easy).
It is this dedication to ease of use (which goes hand in hand with simplicity of the user model) that stands out as the reason for the success of the CLR. Oddly, some of the most important ease-of-use features are also the most "boring." For example, any programming environment could apply consistent naming conventions, yet actually doing so across a large class library is quite a lot of work. Often such efforts conflict with other goals (such as retaining compatibility with existing interfaces), or they run into significant logistical concerns (such as the cost of renaming a method across a very large code base). It is at times like these that we have to remind ourselves about our number-one overarching goal of the runtime and ensure that we have our priorities straight to reach that goal.
Fundamental Features of the CLR
The runtime has many features, so it is useful to categorize them as follows:
- Fundamental features – Features that have broad impact on the design of other features. These include:
- Garbage Collection
- Memory Safety and Type Safety
- High level support for programming languages.
- Secondary features – Features enabled by the fundamental features that may not be required by many useful programs:
- Program isolation with AppDomains
- Program Security and sandboxing
- Other Features – Features that all runtime environments need but that do not leverage the fundamental features of the CLR. Instead, they are the result of the desire to create a complete programming environment. Among them are:
- Interoperation with unmanaged code
- Ahead-of-time compilation
- Threading support
The CLR Garbage Collector (GC)
Of all the features that the CLR provides, the garbage collector deserves special notice. Garbage collection (GC) is the common term for automatic memory reclamation. In a garbage-collected system, user programs no longer need to invoke a special operator to delete memory. Instead the runtime automatically keeps track of all references to memory in the garbage-collected heap, and from time-to-time, it will traverse these references to find out which memory is still reachable by the program. All other memory is garbage and can be reused for new allocations.
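To make this concrete, here is a minimal C# sketch (the class name is invented, and the explicit GC.Collect call exists only to make the collection observable in such a tiny program; real code rarely calls it):

```csharp
using System;

class GcDemo
{
    static void Main()
    {
        // Allocate a large number of short-lived objects; no delete/free appears anywhere.
        for (int i = 0; i < 1000000; i++)
        {
            byte[] buffer = new byte[1024];   // becomes unreachable at the end of each iteration
        }

        // The runtime reclaims unreachable memory on its own; forcing a collection here
        // simply makes the effect visible before the program exits.
        GC.Collect();
        Console.WriteLine("Bytes still reachable: " + GC.GetTotalMemory(true));
    }
}
```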
Garbage collection is a wonderful user feature because it simplifies programming. The most obvious simplification is that most explicit delete operations are no longer necessary. While removing the delete operations is important, the real value to the programmer is a bit more subtle:
- Garbage collection simplifies interface design because you no longer have to carefully specify which side of the interface is responsible for deleting objects passed across the interface. For example, CLR interfaces simply return strings; they don't take string buffers and lengths. This means they don't have to deal with the complexity of what happens when the buffers are too small. Thus, garbage collection allows ALL interfaces in the runtime to be simpler than they otherwise would be.
- Garbage collection eliminates a whole class of common user mistakes. It is frightfully easy to make mistakes concerning the lifetime of a particular object, either deleting it too soon (leading to memory corruption), or too late (unreachable memory leaks). Since a typical program uses literally MILLIONS of objects, the probability for error is quite high. In addition, tracking down lifetime bugs is very difficult, especially if the object is referenced by many other objects. Making this class of mistakes impossible avoids a lot of grief.
Still, it is not the usefulness of garbage collection that makes it worthy of special note here. More important is the simple requirement it places on the runtime itself:
Garbage collection requires ALL references to the GC heap to be tracked.
While this is a very simple requirement, it in fact has profound ramifications for the runtime. As you can imagine, knowing where every pointer to an object is at every moment of program execution can be quite difficult. We have one mitigating factor, though. Technically, this requirement only applies to when a GC actually needs to happen (thus, in theory we don't need to know where all GC references are all the time, but only at the time of a GC). In practice, however, this mitigation doesn't completely apply because of another feature of the CLR:
The CLR supports multiple concurrent threads of execution within a single process.
At any time some other thread of execution might perform an allocation that requires a garbage collection. The exact sequence of operations across concurrently executing threads is non-deterministic. We can't tell exactly what one thread will be doing when another thread requests an allocation that will trigger a GC. Thus, GCs can really happen any time. Now the CLR does NOT need to respond immediately to another thread's desire to do a GC, so the CLR has a little "wiggle room" and doesn't need to track GC references at all points of execution, but it does need to do so at enough places that it can guarantee "timely" response to the need to do a GC caused by an allocation on another thread.
What this means is that the CLR needs to track all references to the GC heap almost all the time. Since GC references may reside in machine registers, in local variables, statics, or other fields, there is quite a bit to track. The most problematic of these locations are machine registers and local variables because they are so intimately related to the actual execution of user code. Effectively, what this means is that the machine code that manipulates GC references has another requirement: it must track all the GC references that it uses. This implies some extra work for the compiler to emit the instructions to track the references.
To learn more, check out the Garbage Collector design document.
The Concept of "Managed Code"
Code that does the extra bookkeeping so that it can report all of its live GC references "almost all the time" is called managed code (because it is "managed" by the CLR). Code that does not do this is called unmanaged code. Thus all code that existed before the CLR is unmanaged code, and in particular, all operating system code is unmanaged.
The stack unwinding problem
Clearly, because managed code needs the services of the operating system, there will be times when managed code calls unmanaged code. Similarly, because the operating system originally started the managed code, there are also times when unmanaged code calls into managed code. Thus, in general, if you stop a managed program at an arbitrary location, the call stack will have a mixture of frames created by managed code and frames created by unmanaged code.
The stack frames for unmanaged code have no requirements on them over and above running the program. In particular, there is no requirement that they can be unwound at runtime to find their caller. What this means is that if you stop a program at an arbitrary place, and it happens to be in an unmanaged method, there is no way in general to find who the caller was. You can only do this in the debugger because of extra information stored in the symbolic information (PDB file). This information is not guaranteed to be available (which is why you sometimes don't get good stack traces in a debugger). This is quite problematic for managed code, because any stack that can't be unwound might in fact contain managed code frames (which contain GC references that need to be reported).
Managed code has additional requirements on it: not only must it track all the GC references it uses during its execution, but it must also be able to unwind to its caller. Additionally, whenever there is a transition from managed code to unmanaged code (or the reverse), managed code must also do additional bookkeeping to make up for the fact that unmanaged code does not know how to unwind its stack frames. Effectively, managed code links together the parts of the stack that contain managed frames. Thus, while it still may be impossible to unwind the unmanaged stack frames without additional information, it will always be possible to find the chunks of the stack that correspond to managed code and to enumerate the managed frames in those chunks.
More recent platform ABIs (application binary interfaces) define conventions for encoding this information; however, there is typically no strict requirement that all code follow them.
The "World" of Managed Code
The result is that special bookkeeping is needed at every transition to and from managed code. Managed code effectively lives in its own "world" where execution can't enter or leave unless the CLR knows about it. The two worlds are in a very real sense distinct from one another (at any point in time the code is in the managed world or the unmanaged world). Moreover, because the execution of managed code is specified in a CLR format (with its Common Intermediate Language (CIL)), and it is the CLR that converts it to run on the native hardware, the CLR has much more control over exactly what that execution does. For example, the CLR could change the meaning of what it means to fetch a field from an object or call a function. In fact, the CLR does exactly this to support the ability to create MarshalByReference objects. These appear to be ordinary local objects, but in fact may exist on another machine. In short, the managed world of the CLR has a large number of execution hooks that it can use to support powerful features which will be explained in more detail in the coming sections.
In addition, there is another important ramification of managed code that may not be so obvious. In the unmanaged world, GC pointers are not allowed (since they can't be tracked), and there is a bookkeeping cost associated with transitioning from managed to unmanaged code. What this means is that while you can call arbitrary unmanaged functions from managed code, it is often not pleasant to do so. Unmanaged methods don't use GC objects in their arguments and return types, which means that any "objects" or "object handles" that those unmanaged functions create and use need to be explicitly deallocated. This is quite unfortunate. Because these APIs can't take advantage of CLR functionality such as exceptions or inheritance, they tend to have a "mismatched" user experience compared to how the interfaces would have been designed in managed code.
The result of this is that unmanaged interfaces are almost always wrapped before being exposed to managed code developers. For example, when accessing files, you don't use the Win32 CreateFile functions provided by the operating system, but rather the managed System.IO.File class that wraps this functionality. It is in fact extremely rare that unmanaged functionality is exposed to users directly.
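As a small illustration of the "wrapped" experience (the file name below is just a placeholder), reading a file through the managed facade is a single call that returns a garbage-collected string and reports failure with an exception; the unmanaged CreateFile/ReadFile machinery is hidden underneath:

```csharp
using System;
using System.IO;

class FacadeDemo
{
    static void Main()
    {
        // One call, a GC-managed string result, and errors reported as exceptions.
        // Under the covers this eventually reaches unmanaged operating system calls.
        string text = File.ReadAllText("notes.txt");   // "notes.txt" is a placeholder path
        Console.WriteLine(text.Length);
    }
}
```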
While this wrapping may seem to be "bad" in some way (more code that does not seem to do much), it is in fact good because it actually adds quite a bit of value. Remember, it was always possible to expose the unmanaged interfaces directly; we chose to wrap the functionality. Why? Because the overarching goal of the runtime is to make programming easy, and typically the unmanaged functions are not easy enough. Most often, unmanaged interfaces are not designed with ease of use in mind, but rather are tuned for completeness. Anyone looking at the arguments to CreateFile or CreateProcess would be hard pressed to characterize them as "easy." Luckily, the functionality gets a "facelift" when it enters the managed world, and while this makeover is often very "low tech" (requiring nothing more complex than renaming, simplification, and organizing the functionality), it is also profoundly useful. One of the very important documents created for the CLR is the Framework Design Guidelines. This 800+ page document details best practices in making new managed class libraries.
Thus, we have now seen that managed code (which is intimately involved with the CLR) differs from unmanaged code in two important ways:
- High Tech: The code lives in a distinct world, where the CLR controls most aspects of program execution at a very fine level (potentially to individual instructions), and the CLR detects when execution enters and exits managed code. This enables a wide variety of useful features.
- Low Tech: The fact that there is a transition cost when going from managed to unmanaged code, as well as the fact that unmanaged code cannot use GC objects encourages the practice of wrapping most unmanaged code in a managed façade. This means interfaces can get a "facelift" to simplify them and to conform to a uniform set of naming and design guidelines that produce a level of consistency and discoverability that could have existed in the unmanaged world, but does not.
Both of these characteristics are very important to the success of managed code.
Memory and Type Safety
One of the less obvious but quite far-reaching features that a garbage collector enables is that of memory safety. The invariant of memory safety is very simple: a program is memory safe if it accesses only memory that has been allocated (and not freed). This simply means that you don't have "wild" (dangling) pointers that are pointing at random locations (more precisely, at memory that was freed prematurely). Clearly, memory safety is a property we want all programs to have. Dangling pointers are always bugs, and tracking them down is often quite difficult.
A GC is necessary to provide memory safety guarantees
One can quickly see how a garbage collector helps in ensuring memory safety because it removes the possibility that users will prematurely free memory (and thus access memory that was not properly allocated). What may not be so obvious is that if you want to guarantee memory safety (that is, make it impossible for programmers to create memory-unsafe programs), practically speaking you can't avoid having a garbage collector. The reason for this is that non-trivial programs need heap style (dynamic) memory allocations, where the lifetime of the objects is essentially under arbitrary program control (unlike stack-allocated, or statically-allocated memory, which has a highly constrained allocation protocol). In such an unconstrained environment, the problem of determining whether a particular explicit delete statement is correct becomes impossible to decide by program analysis. Effectively, the only way you have to determine if a delete is correct is to check it at runtime. This is exactly what a GC does (checks to see if memory is still live). Thus, for any programs that need heap-style memory allocations, if you want to guarantee memory safety, you need a GC.
While a GC is necessary to ensure memory safety, it is not sufficient. The GC will not prevent the program from indexing off the end of an array or accessing a field off the end of an object (possible if you compute the field's address using a base and offset computation). However, if we do prevent these cases, then we can indeed make it impossible for a programmer to create memory-unsafe programs.
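A small sketch of what those checks buy in practice: an out-of-range array access in managed code fails with an exception instead of silently overwriting neighboring memory (the class name is invented for illustration):

```csharp
using System;

class BoundsDemo
{
    static void Main()
    {
        int[] numbers = new int[4];
        try
        {
            // Every element access is bounds-checked by the runtime. In unchecked native
            // code this write could corrupt whatever happens to live after the array;
            // here it raises an exception instead.
            numbers[10] = 42;
        }
        catch (IndexOutOfRangeException e)
        {
            Console.WriteLine(e.Message);
        }
    }
}
```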
While the common intermediate language (CIL) does have operators that can fetch and set arbitrary memory (and thus violate memory safety), it also has the following memory-safe operators and the CLR strongly encourages their use in most programming:
- Field-fetch operators (LDFLD, STFLD, LDFLDA) that fetch (read), set and take the address of a field by name.
- Array-fetch operators (LDELEM, STELEM, LDELEMA) that fetch, set and take the address of an array element by index. All arrays include a tag specifying their length. This facilitates an automatic bounds check before each access.
By using these operators instead of the lower-level (and unsafe) memory-fetch operators in user code, as well as avoiding other unsafe CIL operators (e.g., those that allow you to jump to arbitrary, and thus possibly bad locations) one could imagine building a system that is memory-safe but nothing more. The CLR does not do this, however. Instead the CLR enforces a stronger invariant: type safety.
For type safety, conceptually each memory allocation is associated with a type. All operators that act on memory locations are also conceptually tagged with the type for which they are valid. Type safety then requires that memory tagged with a particular type can only undergo operations allowed for that type. Not only does this ensure memory safety (no dangling pointers), it also allows additional guarantees for each individual type.
One of the most important of these type-specific guarantees is that the visibility attributes associated with a type (and in particular with fields) are enforced. Thus, if a field is declared to be private (accessible only by the methods of the type), then that privacy will indeed be respected by all other type-safe code. For example, a particular type might declare a count field that represents the count of items in a table. Assuming the fields for the count and the table are private, and assuming that the only code that updates them updates them together, there is now a strong guarantee (across all type-safe code) that the count and the number of items in the table are indeed in sync. When reasoning about programs, programmers use the concept of type safety all the time, whether they know it or not. The CLR elevates type-safety from being simply a programming language/compiler convention, to something that can be strictly enforced at run time.
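A minimal sketch of the count-and-table example above (the type and member names are invented for illustration):

```csharp
using System;

public class ItemTable
{
    // Both fields are private, and the only code that updates them updates them together,
    // so every type-safe caller can rely on the invariant:
    // count always equals the number of used slots in items.
    private object[] items = new object[16];
    private int count;

    public void Add(object item)
    {
        if (count == items.Length)
            Array.Resize(ref items, items.Length * 2);
        items[count] = item;
        count++;
    }

    public int Count
    {
        get { return count; }
    }
}
```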
Verifiable Code - Enforcing Memory and Type Safety
Conceptually, to enforce type safety, every operation that the program performs has to be checked to ensure that it is operating on memory that was typed in a way that is compatible with the operation. While the system could do this all at runtime, it would be very slow. Instead, the CLR has the concept of CIL verification, where a static analysis is done on the CIL (before the code is run) to confirm that most operations are indeed type-safe. Only when this static analysis can't do a complete job are runtime checks necessary. In practice, the number of run-time checks needed is actually very small. They include the following operations:
- Casting a pointer to a base type to be a pointer to a derived type (the opposite direction can be checked statically)
- Array bounds checks (just as we saw for memory safety)
- Assigning an element in an array of pointers to a new (pointer) value. This particular check is only required because CLR arrays have liberal casting rules (more on that later; both this check and the downcast check are illustrated in the sketch below)
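The sketch below shows the two checks most programmers actually encounter, the downcast check and the array store check (the variable names are invented for illustration):

```csharp
using System;

class RuntimeCheckDemo
{
    static void Main()
    {
        object o = "a string";

        // Casting from a base type (object) down to a derived type must be checked at run time.
        string s = (string)o;              // succeeds: o really refers to a string
        Console.WriteLine(s.Length);
        // int[] numbers = (int[])o;       // would throw InvalidCastException at run time

        // CLR arrays have liberal casting rules, so stores into arrays of references are checked.
        object[] objects = new string[2];  // legal: a string[] may be used where an object[] is expected
        objects[0] = "fine";               // ok: a string is stored into what is really a string[]
        try
        {
            objects[1] = new object();     // not a string: the store is rejected at run time
        }
        catch (ArrayTypeMismatchException e)
        {
            Console.WriteLine(e.GetType().Name);
        }
    }
}
```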
Note that the need to do these checks places requirements on the runtime. In particular:
- All memory in the GC heap must be tagged with its type (so the casting operator can be implemented). This type information must be available at runtime, and it must be rich enough to determine if casts are valid (e.g., the runtime needs to know the inheritance hierarchy). In fact, the first field in every object on the GC heap points to a runtime data structure that represents its type.
- All arrays must also have their size (for bounds checking).
- Arrays must have complete type information about their element type.
Luckily, the most expensive requirement (tagging each heap item) was something that was already necessary to support garbage collection (the GC needs to know what fields in every object contain references that need to be scanned), so the additional cost to provide type safety is low.
Thus, by verifying the CIL of the code and by doing a few run-time checks, the CLR can ensure type safety (and memory safety). Nevertheless, this extra safety exacts a price in programming flexibility. While the CLR does have general memory fetch operators, these operators can only be used in very constrained ways for the code to be verifiable. In particular, all pointer arithmetic will fail verification today. Thus many classic C or C++ conventions cannot be used in verifiable code; you must use arrays instead. While this constrains programming a bit, it really is not bad (arrays are quite powerful), and the benefits (far fewer "nasty" bugs), are quite real.
The CLR strongly encourages the use of verifiable, type-safe code. Even so, there are times (mostly when dealing with unmanaged code) that unverifiable programming is needed. The CLR allows this, but the best practice here is to try to confine this unsafe code as much as possible. Typical programs have only a very small fraction of their code that needs to be unsafe, and the rest can be type-safe.
High Level Features
Supporting garbage collection had a profound effect on the runtime because it requires that all code must support extra bookkeeping. The desire for type-safety also had a profound effect, requiring that the description of the program (the CIL) be at a high level, where fields and methods have detailed type information. The desire for type safety also forces the CIL to support other high-level programming constructs that are type-safe. Expressing these constructs in a type-safe manner also requires runtime support. The two most important of these high-level features are used to support two essential elements of object oriented programming: inheritance and virtual call dispatch.
Object Oriented Programming
Inheritance is relatively simple in a mechanical sense. The basic idea is that if the fields of type derived are a superset of the fields of type base, and derived lays out its fields so that the fields of base come first, then any code that expects a pointer to an instance of base can be given a pointer to an instance of derived and the code will "just work". Thus, type derived is said to inherit from base, meaning that it can be used anywhere base can be used. Code becomes polymorphic because the same code can be used on many distinct types. Because the runtime needs to know what type coercions are possible, the runtime must formalize the way inheritance is specified so it can validate type safety.
Virtual call dispatch generalizes inheritance polymorphism. It allows base types to declare methods that will be overridden by derived types. Code that uses variables of type base can expect that calls to virtual methods will be dispatched to the correct overridden method based on the actual type of the object at run time (a minimal sketch of this appears after the list below). While such run-time dispatch logic could have been implemented using primitive CIL instructions without direct support in the runtime, it would have suffered from two important disadvantages:
- It would not be type safe (mistakes in the dispatch table are catastrophic errors)
- Each object-oriented language would likely implement a slightly different way of implementing its virtual dispatch logic. As result, interoperability among languages would suffer (one language could not inherit from a base type implemented in another language).
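The minimal sketch referenced above (with invented type names) shows both ideas: a derived instance can be used wherever the base type is expected, and a virtual call is dispatched on the object's actual run-time type:

```csharp
using System;

class Animal
{
    public virtual string Speak()
    {
        return "...";
    }
}

class Dog : Animal
{
    // Any fields of Animal would come first in Dog's layout, so a Dog can be
    // used anywhere an Animal is expected.
    public override string Speak()
    {
        return "woof";
    }
}

class Program
{
    // Polymorphic code: it works for Animal and for every type derived from it.
    static void Describe(Animal a)
    {
        // Virtual dispatch: the override is chosen by the run-time type of 'a'.
        Console.WriteLine(a.Speak());
    }

    static void Main()
    {
        Describe(new Animal());   // prints "..."
        Describe(new Dog());      // prints "woof"
    }
}
```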
For this reason, the CLR has direct support for basic object-oriented features. To the degree possible, the CLR tried to make its model of inheritance "language neutral," in the sense that different languages might still share the same inheritance hierarchy. Unfortunately, that was not always possible. In particular, multiple inheritance can be implemented in many different ways. The CLR chose not to support multiple inheritance on types with fields, but does support multiple inheritance from special types (called interfaces) that are constrained not to have fields.
It is important to keep in mind that while the runtime supports these object-oriented concepts, it does not require their use. Languages without the concept of inheritance (e.g., functional languages) simply don't use these facilities.
Value Types (and Boxing)
A profound, yet subtle aspect of object oriented programming is the concept of object identity: the notion that objects (allocated by separate allocation calls) can be distinguished, even if all their field values are identical. Object identity is strongly related to the fact that objects are accessed by reference (pointer) rather than by value. If two variables hold the same object (their pointers address the same memory), then updates to one of the variables will affect the other variable.
Unfortunately, the concept of object identity is not a good semantic match for all types. In particular, programmers don't generally think of integers as objects. If the number '1' was allocated at two different places, programmers generally want to consider those two items equal, and certainly don't want updates to one of those instances affecting the other. In fact, a broad class of programming languages known as functional languages avoids object identity and reference semantics altogether.
To support data that does not need object identity, the CLR provides value types alongside ordinary reference types. The key characteristics of value types are:
- Each local variable, field, or array element of a value type has a distinct copy of the data in the value.
- When one variable, field or array element is assigned to another, the value is copied.
- Equality is always defined only in terms of the data in the variable (not its location).
- Each value type also has a corresponding reference type which has only one implicit, unnamed field. This is called its boxed value. Boxed value types can participate in inheritance and have object identity (although using the object identity of a boxed value type is strongly discouraged).
Value types very closely model the C (and C++) notion of a struct (or C++ class). Like C you can have pointers to value types, but the pointers are a type distinct from the type of the struct.
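A short sketch of the copy and boxing behavior described above (the Point type is invented for illustration):

```csharp
using System;

struct Point                       // a value type
{
    public int X;
    public int Y;
}

class ValueTypeDemo
{
    static void Main()
    {
        Point a = new Point { X = 1, Y = 2 };
        Point b = a;                   // assignment copies the whole value
        b.X = 100;
        Console.WriteLine(a.X);        // still 1: a and b are distinct copies

        object boxed = a;              // boxing: a reference-typed copy is made on the GC heap
        Point unboxed = (Point)boxed;  // unboxing copies the value back out
        Console.WriteLine(unboxed.X);  // 1
    }
}
```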
Exceptions
Another high-level programming construct that the CLR directly supports is exceptions. Exceptions are a language feature that allows programmers to throw an arbitrary object at the point that a failure occurs. When an object is thrown, the runtime searches the call stack for a method that declares that it can catch the exception. If such a catch declaration is found, execution continues from that point. The usefulness of exceptions is that they avoid the very common mistake of not checking if a called method fails. Given that exceptions help avoid programmer mistakes (thus making programming easier), it is not surprising that the CLR supports them.
As an aside, while exceptions avoid one common error (not checking for failure), they do not prevent another (restoring data structures to a consistent state in the event of a failure). This means that after an exception is caught, it is difficult in general to know if continuing execution will cause additional errors (caused by the first failure). This is an area where the CLR is likely to add value in the future. Even as currently implemented, however, exceptions are a great step forward (we just need to go further).
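A minimal sketch: int.Parse reports failure by throwing rather than returning an error code a careless caller could ignore, and the runtime unwinds the stack to the nearest matching handler (the method and class names are invented):

```csharp
using System;

class ExceptionDemo
{
    static int ParsePort(string text)
    {
        return int.Parse(text);    // throws FormatException on bad input
    }

    static void Main()
    {
        try
        {
            Console.WriteLine(ParsePort("8080"));          // prints 8080
            Console.WriteLine(ParsePort("not-a-number"));  // throws
        }
        catch (FormatException e)
        {
            // Execution resumes here after the runtime unwinds the stack from int.Parse.
            Console.WriteLine("Bad input: " + e.Message);
        }
    }
}
```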
Parameterized Types (Generics)
Prior to version 2.0 of the CLR, the only parameterized types were arrays. All other containers (such as hash tables, lists, and queues) operated on the generic Object type. The inability to create a List<ElemT> or Dictionary<KeyT, ValueT> certainly had a negative performance effect because value types needed to be boxed on entry to a collection, and explicit casting was needed on element fetch. Nevertheless, that is not the overriding reason for adding parameterized types to the CLR. The main reason is that parameterized types make programming easier.
The benefits of strong typing do not disappear just because the type gets put into a List or a Dictionary, so clearly parameterized types have value. The only real question is whether parameterized types are best thought of as a language-specific feature which is "compiled out" by the time CIL is generated, or whether this feature should have first-class support in the runtime. Either implementation is certainly possible. The CLR team chose first-class support because without it, parameterized types would be implemented in different ways by different languages. This would imply that interoperability would be cumbersome at best. In addition, expressing programmer intent for parameterized types is most valuable at the interface of a class library. If the CLR did not officially support parameterized types, then class libraries could not use them, and an important usability feature would be lost.
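A small sketch of the boxing and casting that parameterized types remove:

```csharp
using System;
using System.Collections;
using System.Collections.Generic;

class GenericsDemo
{
    static void Main()
    {
        // Pre-generics collection: elements are plain Object, so the int is boxed going in
        // and an explicit cast (plus unbox) is needed coming out.
        ArrayList untyped = new ArrayList();
        untyped.Add(42);
        int a = (int)untyped[0];

        // Parameterized type: no boxing, no casts, and adding a string to 'numbers'
        // would be a compile-time error rather than a run-time surprise.
        List<int> numbers = new List<int>();
        numbers.Add(42);
        int b = numbers[0];

        Console.WriteLine(a + b);   // 84
    }
}
```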
Programs as Data (Reflection APIs)
The fundamentals of the CLR are garbage collection, type safety, and high-level language features. These basic characteristics forced the specification of the program (the CIL) to be fairly high level. Once this data existed at run time (something not true for C or C++ programs), it became obvious that it would also be valuable to expose this rich data to end programmers. This idea resulted in the creation of the System.Reflection interfaces (so-called because they allow the program to look at (reflect upon) itself). This interface allows you to explore almost all aspects of a program (what types it has, the inheritance relationship, and what methods and fields are present). In fact, so little information is lost that very good "decompilers" for managed code are possible (e.g., .NET Reflector). While those concerned with intellectual property protection are aghast at this capability (which can be fixed by purposefully destroying information through an operation called obfuscating the program), the fact that it is possible is a testament to the richness of the information available at run time in managed code.
In addition to simply inspecting programs at run time, it is also possible to perform operations on them (e.g., invoke methods, set fields, etc.), and perhaps most powerfully, to generate code from scratch at run time (System.Reflection.Emit). In fact, the runtime libraries use this capability to create specialized code for matching strings (System.Text.RegularExpressions), and to generate code for "serializing" objects to store in a file or send across the network. Capabilities like this were simply infeasible before (you would have to write a compiler!) but thanks to the runtime, are well within reach of many more programming problems.
While reflection capabilities are indeed powerful, that power should be used with care. Reflection is usually significantly slower than its statically compiled counterparts. More importantly, self-referential systems are inherently harder to understand. This means that powerful features such as Reflection or Reflection.Emit should only be used when the value is clear and substantial.
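A brief sketch of run-time inspection and late-bound invocation through the System.Reflection APIs:

```csharp
using System;
using System.Reflection;

class ReflectionDemo
{
    static void Main()
    {
        // Inspect a type at run time: its name, its base type, its public instance methods.
        Type t = typeof(string);
        Console.WriteLine(t.FullName + " derives from " + t.BaseType);
        foreach (MethodInfo m in t.GetMethods(BindingFlags.Public | BindingFlags.Instance))
        {
            Console.WriteLine(m.Name);
        }

        // Late-bound invocation: pick a method by name and call it on an instance.
        MethodInfo toUpper = t.GetMethod("ToUpper", Type.EmptyTypes);
        object result = toUpper.Invoke("hello", null);
        Console.WriteLine(result);   // HELLO
    }
}
```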
Other Features
The last grouping of runtime features is those that are not related to the fundamental architecture of the CLR (GC, type safety, high-level specification), but that nevertheless fill important needs of any complete runtime system.
Interoperation with Unmanaged Code
Managed code needs to be able to use functionality implemented in unmanaged code. There are two main "flavors" of interoperation. First is the ability simply to call unmanaged functions (this is called Platform Invoke or PINVOKE). Unmanaged code also has an object-oriented model of interoperation called COM (component object model) which has more structure than ad hoc method calls. Since both COM and the CLR have models for objects and other conventions (how errors are handled, lifetime of objects, etc.), the CLR can do a better job interoperating with COM code if it has special support.
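A hedged sketch of Platform Invoke; GetTickCount64 is an exported function of kernel32.dll, chosen only because its signature is trivial to marshal, so this snippet is Windows-specific:

```csharp
using System;
using System.Runtime.InteropServices;

static class NativeMethods
{
    // Declare the unmanaged entry point; the CLR loads kernel32.dll and marshals the call.
    [DllImport("kernel32.dll")]
    internal static extern ulong GetTickCount64();
}

class PInvokeDemo
{
    static void Main()
    {
        // Managed code calls straight into unmanaged code through the declaration above.
        Console.WriteLine("Milliseconds since boot: " + NativeMethods.GetTickCount64());
    }
}
```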
Ahead of time Compilation
In the CLR model, managed code is distributed as CIL, not native code. Translation to native code occurs at run time. As an optimization, the native code that is generated from the CIL can be saved in a file using a tool called crossgen (similar to .NET Framework NGEN tool). This avoids large amounts of compilation time at run time and is very important because the class library is so large.
Threading
The CLR fully anticipated the need to support multi-threaded programs in managed code. From the start, the CLR libraries contained the System.Threading.Thread class, which is a 1-to-1 wrapper over the operating system notion of a thread of execution. However, because it is just a wrapper over the operating system thread, creating a System.Threading.Thread is relatively expensive (it takes milliseconds to start). While this is fine for many operations, one style of programming creates very small work items (taking only tens of milliseconds). This is very common in server code (e.g., each task is serving just one web page) or in code that tries to take advantage of multi-processors (e.g., a multi-core sort algorithm). To support this, the CLR has the notion of a ThreadPool which allows WorkItems to be queued. In this scheme, the CLR is responsible for creating the necessary threads to do the work. While the CLR does expose the ThreadPool directly as the System.Threading.ThreadPool class, the preferred mechanism is to use the Task Parallel Library, which adds additional support for very common forms of concurrency control.
From an implementation perspective, the important innovation of the ThreadPool is that it is responsible for ensuring that the optimal number of threads are used to dispatch the work. The CLR does this using a feedback system where it monitors the throughput rate and the number of threads and adjusts the number of threads to maximize the throughput. This is very nice because now programmers can think mostly in terms of "exposing parallelism" (that is, creating work items), rather than the more subtle question of determining the right amount of parallelism (which depends on the workload and the hardware on which the program is run).
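A small sketch using the Task Parallel Library: the program only exposes the parallelism as many small work items and lets the runtime's thread pool decide how many threads to use:

```csharp
using System;
using System.Threading.Tasks;

class ThreadPoolDemo
{
    static void Main()
    {
        // Queue one small work item per index; the thread pool chooses the thread count.
        Parallel.For(0, 100, i =>
        {
            // ... a small unit of work per index ...
        });

        // Task.Run queues work to the same pool and hands back a handle to the result.
        Task<int> work = Task.Run(() => 21 * 2);
        Console.WriteLine(work.Result);   // 42
    }
}
```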
Summary and Resources
Phew! The runtime does a lot! It has taken many pages just to describe some of the features of the runtime, without even starting to talk about internal details. The hope is, however, that this introduction will provide a useful framework for a deeper understanding of those internal details. The basic outline of this framework is:
- The Runtime is a complete framework for supporting programming languages
- The Runtime's goal is to make programming easy.
- The Fundamental features of the runtime are:
- Garbage Collection
- Memory and Type Safety
- Support for High-Level Language Features | <urn:uuid:57037dcb-0757-4ab2-ba6f-3f69a304adb3> | 3.5625 | 8,658 | Documentation | Software Dev. | 35.565049 | 95,479,692 |
The study, led by University of Melbourne researcher Dr Michelle Hall, is the first to show that the larger the male fairy wren, the lower the pitch of his song.
Male purple-crowned fairy-wrens sing trill songs in response to predator calls. They seem to take advantage of the attention attracted by predator calls, and sing their advertising songs when females are paying most attention.
Credit: Michelle L Hall
"This is the first time we have been able to show that song pitch indicates body size in song birds," said Dr Hall from the University's Department of Zoology.
The study, which began when Dr Hall was at the Max Planck Institute for Ornithology in Germany, has been published today in the journal PLOS ONE.
Reliable communication about body size is particularly important when animals are signalling to mates or rivals. For example, the bigger the rival is, the more likely it is to win a fight, so a song pitch indicating a large size may deter rivals.
"Surprisingly, there is very little evidence that the pitch of calls indicates body size differences within species, except in frogs," she said.
"In birds in particular, there has been no evidence that the pitch of songs indicated the size of the singer until now."
The study involved measuring the leg length (a good indicator of overall body size) of 45 adult male purple-crowned fairy-wrens. It found there was a correlation between the lowest song pitches and male size.
"We found the bigger males sang certain song types at a lower pitch than smaller males," she said.
Purple-crowned fairy-wrens are creek-dwelling birds from northern Australia and, like their close relatives the blue wrens, males sing trill songs after the calls of certain predators, a context that seems to attract the attention of females.
Males have a repertoire of trill song variants, and it is the low-pitched variants that indicate the size of the singer.
Dr Hall showed that it may be the complexity of birdsong that has obscured the relationship between body size and song frequency in the past.
"Birds can have large repertoires of song types spanning a wide frequency range, and some birds even shift the pitch of their songs down in aggressive contexts," she said.
"Focusing on the lowest pitches that males were able to sing was the key to finding the correlation with body size."
The study was conducted at Mornington Wildlife Sanctuary in collaboration with Dr Anne Peters (Monash University) and Dr Sjouke Kingma (University of East Anglia, UK), and funded by the German Max Planck Institute for Ornithology.
Rebecca Scott | EurekAlert!
17.07.2018 | Power and Electrical Engineering | <urn:uuid:74c84890-312a-4e61-bb35-297c66781e5e> | 3.46875 | 1,228 | Content Listing | Science & Tech. | 43.947496 | 95,479,694 |
Microbial electrochemical cells or MXCs are able to use bacterial respiration as a means of liberating electrons, which can be used to generate current and make clean electricity. With minor reconfiguring such devices can also carry out electrolysis, providing a green path to hydrogen production, reducing reliance on natural gas and other fossil fuels, now used for most hydrogen manufacture.
Dr. Prathap Parameswaran showing the electrode used in the microbial electrochemical cell (MEC). MXCs resemble a battery, with a Mason jar-sized chamber setup for each terminal. The bacteria are grown in the “positive” chamber (called the anode). The research team, led by Bruce Rittmann, director of Biodesign’s Center for Environmental Biotechnology, had previously shown that the bacteria are able to live and thrive on the anode electrode, and can use waste materials as food, (the bacteria’s dietary staples include pig manure or other farm waste) to grow while transferring electrons onto the electrode to make electricity.
In a microbial electrolysis cell (MEC), like that used in the current study, the electrons produced at the anode join positively charged protons in the negative (cathode) chamber to form hydrogen gas. “The reactions that happen at the MEC anode are the same as for a microbial fuel cell which is used to generate electricity,” Parameswaran says. “The final output is different depending on how we operate it.”
When the bacteria are grown in an oxygen-free, or anaerobic environment, they attach to the MXC’s anode, forming a sticky matrix of sugar and protein. In such environments, when fed with organic compounds, an efficient partnership of bacteria gets established in the biofilm anode, consisting of fermenters, hydrogen scavengers, and anode respiring bacteria (ARB). This living matrix, known as the biofilm anode, is a strong conductor, able to efficiently transfer electrons to the anode where they follow a current gradient across to the cathode side.
The present study demonstrates that the level of electron flow from the anode to the cathode can be improved by selecting for additional bacteria known as homo-acetogens, in the anode chamber. Homo-acetogens capture the electrons from hydrogen in waste material, producing acetate, which is a very favorable electron donor for the anode bacteria.
The study shows that under favorable conditions, the anode bacteria could convert hydrogen to current more efficiently after forming a mutual relationship or syntrophy with homo-acetogens. The team was also able to reduce the negative impact of other hydrogen-consuming microbes, such as methane-producing methanogens, which otherwise steal some of the available electrons in the system, thereby reducing current. The selective inhibition of methanogens was accomplished by adding a chemical called 2-bromoethane sulfonic acid to the anode’s microbial stew.
The group used both chemical and genomic methods to confirm the identity of homo-acetogens. In addition to detection of acetate, formate, an intermediary product, was also discovered. With the aid of quantitative PCR analysis, the team was also able to pick up the genomic signature of acetogens in the form of FTHFS, a gene specifically associated with acetogenesis.
“We were able to establish that these homo-acetogens can prevail and form relationships,” Parameswaran says. Future research will explore ways to sustain syntrophic relations between homo-acetogens and anode bacteria, in the absence of the chemical inhibitors.
Further progress could pave the way for eventual large-scale commercialization of systems to simultaneously treat wastewater and generate clean energy. “One of the biggest limitations right now is our lack of knowledge,” says Cesar Torres, one of the current study’s co-authors, who stresses that there remains much to understand about the interactions of bacterial communities within MXCs.
The field is still very young, Torres points out, noting that work on MXCs only began about 8 years ago. “I think over the next 5-10 years the community will bring a lot of information that will be really helpful and that will lead us to good applications.”
The team’s results appear in the advanced online issue of the journal Bioresource Technology. Written by Richard Harth
Joe Caspermeyer | EurekAlert!
17.07.2018 | Power and Electrical Engineering | <urn:uuid:e6766fc1-9713-4204-b1c0-63ced0b28ec4> | 3.625 | 1,588 | Content Listing | Science & Tech. | 31.764292 | 95,479,695 |
The connection between mathematics and art goes back thousands of years. Mathematics has been used in the design of Gothic cathedrals, Rose windows, oriental rugs, mosaics and tilings. Geometric forms were fundamental to the cubists and many abstract expressionists, and award-winning sculptors have used topology as the basis for their pieces. Dutch artist M.C. Escher represented infinity, Möbius bands, tessellations, deformations, reflections, Platonic solids, spirals, symmetry, and the hyperbolic plane in his works.
Mathematicians and artists continue to create stunning works in all media and to explore the visualization of mathematics--origami, computer-generated landscapes, tesselations, fractals, anamorphic art, and more.
Hilbert's Square-Filling Curve
"Hilbert's Square-Filling Curve" by The
In 1890 David Hilbert published a construction of a continuous curve whose image completely fills a square, which was a significant contribution to the understanding of continuity. Although it might be considered to be a pathological example, today, Hilbert's curve has become well-known for a very different reason---every computer science student learns about it because the algorithm has proved useful in image compression. See more fractal curves on the 3D-XplorMath Gallery.
--- adapted from "About Hilbert's Square Filling Curve" by Hermann Karcher
Mandelbrot Set
A striking aspect of this image is its self-similarity: Parts of the set look very similar to larger parts of the set, or to the entire set itself. (The set consists of the complex numbers c for which the iteration that repeatedly replaces z by z² + c, starting from z = 0, stays bounded.) The boundary of the Mandelbrot Set is an example of a fractal, a name derived from the fact that the dimensions of such sets need not be integers like two or three, but can be fractions like 4/3. See more at the 3D-XplorMath Fractal Gallery.
--- Richard Palais (Univ. of California at Irvine, Irvine, CA) | <urn:uuid:a1945071-a2c0-47e5-ac12-faad9159a4b7> | 3.515625 | 414 | Knowledge Article | Science & Tech. | 32.74527 | 95,479,715 |
Species Detail - Six-striped Rustic (Xestia sexstrigata) - Species information displayed is based on all datasets.
Terrestrial Map - 10km: Distribution of the number of records recorded within each 10km grid square (ITM).
Marine Map - 50km: Distribution of the number of records recorded within each 50km grid square (WGS84).
insect - moth
24 May (recorded in 2014)
24 September (recorded in 2004)
National Biodiversity Data Centre, Ireland, Six-striped Rustic (Xestia sexstrigata), accessed 19 July 2018, <https://maps.biodiversityireland.ie/Species/78987> | <urn:uuid:9b84a78d-63aa-4c92-9892-0cafc4eab9df> | 2.671875 | 150 | Structured Data | Science & Tech. | 40.681922 | 95,479,726 |
It is a capital mistake to theorize before one has data. Insensibly one begins to twist facts to suit theories, instead of theories to suit facts.
The story of a large chasm opening suddenly in Kenya's Narok and Suswa County, damaging a major road and several houses, quickly went viral. It was first reported around March 22, four days after the first fissure occurred, and was picked up by international media around March 29, spreading rapidly on the internet thereafter.
The proposed explanation that the fissure is evidence of Africa splitting apart, driven by mysterious tectonic forces deep within, is certainly appealing. However, there is quite some controversy surrounding the nature of the chasm. In the first reports it was suggested that it is a fault. Some news outlets reported that the opening of the crack was preceded by seismic activity. However, this area is poorly covered by seismograph stations, and no international station picked up an especially strong earthquake in the region, so this claim seems to be unverified at the moment. It could be a fossil fault, no longer active and covered by sediments that were washed away by the recent strong rainfalls. Mount Longonot and Mount Suswa, two dormant volcanoes, are located approximately ten miles away, and the area is part of the Great Rift, a very large tectonic structure stretching almost 1,900 miles from the Gulf of Aden in the north towards Zimbabwe in the south. Here, slow tectonic movements of less than one inch per year deform the rocks. Rising magma, associated with the volcanoes, can also deform rocks and cause faults. Along such faults the rocks break, and the continuous movements grind the rock into pieces. Faults are often filled with broken rock, and running water, following the path of least resistance, tends to excavate and erode this material.
Google Earth images show some linear structures visible on the ground, suggesting that indeed more erodible material was located there, quickly removed by the recent rainfalls and forming a gully into the unconsolidated material (mostly volcanic ash deposited by the nearby volcanoes).
However, seismologists commenting on the news on Twitter with the hashtag #KenyaCrack noted that the chasm doesn't really look like a typical fault rupture. The idea that a large fissure opens up during an earthquake is more of an urban legend. In most cases, as earthquakes are associated with compressive forces, a block will move along a fault plane, forming steps or escarpments where the fault intersects the surface, like this example of the Wall of Waiau, formed by the 14 November 2016 Kaikoura earthquake in New Zealand.
Powered by NEW DAY DIGEST | <urn:uuid:e10c7d66-6afc-49c2-a3f9-a14512c981b2> | 3.53125 | 562 | News Article | Science & Tech. | 37.597618 | 95,479,770 |
With funding from the Polar Studies Program of the Robert A. Pritzker Center for Meteoritics and Polar Studies, Genome Analyst Felix Grewe journeys to Antarctica to collect Usnea lichens. A unique symbiosis of algae and fungus, lichens are early predictors of climate change. Step into this extreme and beautiful environment. Thanks to Thorsten Lumbsch, Leopold G. Sancho, Miki Ojeda, Robert A.
Philipp Heck (Robert A. Pritzker Associate Curator for Meteoritics and Polar Studies) and collaborators from the University of Maryland and the University of California at Davis report on the rapid effects of terrestrial alteration of a pristine meteorite in a new article in Meteoritics & Planetary Science: The meteorite was seen falling in April 2012 over California’s Sierra foothills and landed near Sutter’s Mill (where the 1848–49 Gold Rush started).
The Field’s team consisting of University of Chicago graduate student Jennika Greer and Robert A. Pritzker Associate Curator Philipp Heck used scanning electron microscopy with X-ray spectroscopy and Raman spectroscopy to classify the rock as a very weakly shocked H4 chondrite. Out of about 60,000 confirmed meteorites, only about 0.1 percent are H4 chondrite falls. The meteorite is now officially named after Hamburg, Michigan.
A bright fireball streaked across the sky on January 16, 2018 near Detroit. The shockwave through the air caused a magnitude 2.0 earthquake in the area. Meteorite hunter Robert Ward used a strewn field map generated through Doppler radar data from NASA collaborator Marc Fries to search for meteorites. He found several pieces, and donated one to the Field. The connection to Robert Ward is thanks to long-time Field Museum supporter and private meteorite collector Terry Boudreaux. Robert A. Pritzker Associate Curator Philipp Heck and Resident graduate student Jennika Greer (Univ.
No, it’s not a sci-fi movie, but rather the unique Swedish fossil meteorite Oesterplana 065 which is the subject of a paper published last week in Meteoritics & Planetary Science by Postdoctoral Scholar Surya Rout (now at the University of Bern, Switzerland), Robert A. Pritzker Associate Curator for Meteoritics and Polar Studies Philipp Heck, and Field Museum Research Associate and Professor of Geology Birger Schmitz (Lund University of Sweden). | <urn:uuid:c667235d-4d83-4cd2-bf5e-2dba7194fedf> | 3.21875 | 515 | News (Org.) | Science & Tech. | 35.981412 | 95,479,775 |
A Gentle Introduction to Python. Iraklis Akritas, Pipeline TD/Plugin-Tools Programmer Candidate. Presenter information: please feel free to email me with any questions or connect with me online. Where does the name Python come from?
The Burmese Python (Python molurus bivittatus) is the largest subspecies of the Indian Python and one of the six largest snakes in the world. [Wikipedia]
“In December 1989, I was looking for a "hobby" programming project that would keep me occupied during the week around Christmas. My office would be closed, but I had a home computer, and not much else on my hands. I decided to write an interpreter for the new scripting language...”
The "else" always applies to the nearest "if", unless you use braces. This is a long-standing problem in C and C++. Of course, you could resort to always using braces, no matter what, but that's tiresome and bloats the source code, and it doesn't prevent you from accidentally obfuscating the code by still having the wrong indentation. (And that's just a very simple example. In practice, C code can be much more complex.)
In Python, the above problems can never occur, because indentation levels and logical block structure are always consistent. The program always does what you expect when you look at the indentation.
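As a small illustration (a sketch added here, not from the original slides), the two functions below differ only in how far the else is indented, and that alone determines which if it pairs with.

```python
def classify_inner(x, y):
    if x > 0:
        if y > 0:
            return "both positive"
        else:                      # indented under the inner if, so it pairs with it
            return "x positive, y not"
    return "x not positive"

def classify_outer(x, y):
    if x > 0:
        if y > 0:
            return "both positive"
    else:                          # dedented one level, so it now pairs with the outer if
        return "x not positive"
    return "x positive, y not"
```

Both versions are unambiguous; the meaning you see in the layout is the meaning the interpreter uses.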
From simple all element printing | <urn:uuid:2f861fc2-2781-44da-967a-0afd635ea25e> | 2.625 | 317 | Truncated | Software Dev. | 50.479326 | 95,479,791 |
The views are spectacular at the wall break, 25 m deep, San Salvador, Bahamas.
Starfish are found in the animal phylum Echinodermata. Echino comes from the Greek word for spiny, and derma means skin. Six classes have been placed under this phylum: Class Concentricycloidea, Class Holothuroidea, Class Crinoidea, Class Echinoidea, Class Asteroidea, and Class Ophiuroidea. The two classes in which you will find starfish are Asteroidea (sea stars) and Ophiuroidea (brittle stars). The brittle star has a distinct boundary between the arm and the central disk, whereas the sea star's arms are connected to one another. Sea stars are the better known of the two, and the ones we are more likely to encounter.
“The Echinoderms possess three unique and distinctive features: a body plan with five-part symmetry, an internal calcite skeleton, and a water-vascular system of fluid-filled vessels that are manifested externally as structures called tube feet,” (Hendler, Kier, Miller, and Pawson, 18).
The process of becoming a starfish starts, as with humans, with reproduction. In starfish, fertilization is external: the female sheds several million eggs, which are released into the water. After the eggs meet sperm, a hollow ball (called the blastula) develops. After fertilization, the embryo develops from blastula to larva to miniature starfish. The process takes about two months. The starfish remains a juvenile until six months of age. They start with three or four arms and develop more later on. After the age of six months, the starfish is mature.
The body of the starfish consists of many parts. The starfish has no ears, nose, or eyes. There is a basic central disk and symmetrical arms (rays). The rays typically number five, although some starfish have up to fifty (Hendler, Kier, Miller, and Pawson, 60). The major anatomical features located on the upper portion of the starfish are the arms, anus, madreporite, papulae, and ocular plates. Located underneath the starfish are the mouth, jaw, arm furrows, and tube feet. The starfish has a water-vascular system, and the tube feet are the primary site of oxygen uptake (Lawrence, 131). Canals connect the tube feet to the disk.
The diet of the starfish starts with encrusting algae as a juvenile and changes as they grow. They can eat shelled animals such as oysters, clams, sea urchins, barnacles, and mussels, as well as other small animals. Some starfish even eat other starfish. The eating process is interesting: the starfish sneaks up on its food and surrounds the shell, uses suction to pull the shell apart, and pushes its stomach into the shell to eat the inside. The stomach moves back into the starfish when it is done.
Within the better-known class of starfish, Asteroidea, there are approximately 1,800 living species (Hendler, Kier, Miller, and Pawson, 18). Because there are too many to discuss, including many we will not see, I have narrowed the list down to a few species from different families that we may see in the Bahamas as well as the Florida Keys. These species can be found in numerous other places, but I will mention only the places we are going.
Luidia clathrata, family Luidiidae, can be found in the Florida Keys. Its habitat is in "protected inshore areas such as lagoons and bays on sand or mud sediments; offshore on sand and shell hash" (Hendler, Kier, Miller, and Pawson, 68). To feed, avoid light, and protect itself, it burrows into the sand. After a storm hits, thousands of their dead bodies can wash up on shore. These sea stars have a small disk and five long, flat, strap-like arms. The larger Luidia clathrata are 8-11 inches in diameter. A black or gray stripe runs along each arm onto the disk. There are no suckers on the tube feet; they are pointed. They can be found down to a depth of around 131 feet.
Astropecten articulatus, family Astropectinidae, can be found in the Bahamas as well
as the Florida Keys. Its habitat is in "soft sediment composed of sand or shell hash" (Hendler, Kier, Miller, and Pawson, 73). Moderate-sized adults can reach 8 inches. They have five flat arms, and their length is two to three times the disk diameter. Their color is a drab gray or light brown. Some have light brown on the central portion of the arms and disk. The marginal plates are mottled with dark reddish brown to a light pink. They can be found at a depth of around 24-42 feet.
Asterina folium, family Asterinidae, can be found in the Bahama Islands and Florida
Keys. Its habitat is in "coral reefs, particularly on the reef flat, and spur-and-groove zones, usually under coral rubble or rock, or in the reef framework" (Hendler, Kier, Miller, and Pawson, 74). This sea star is a small species, about 1 inch in total diameter. It has short, bluntly rounded arms and is pentagonal in shape. The dorsal surface is covered in scales. They are mostly white; intermediate-sized ones are yellow or yellowish red, and the largest are blue to blue-green. They can be found from the low-tide mark down to 49 feet.
Linckia guildingii, family Ophidiasteridae, can be found in the Bahamas Islands and
Florida. Its habitat is "usually on coral reef hard bottom; also reported from sandy beds between reefs" (Hendler, Kier, Miller, and Pawson, 78). This species has a small disk and slender arms, mostly of unequal length. They can have anywhere from 4 to 7 arms. The longest arms are 9-10 times the diameter of the disk. Small swollen plates cover the smooth granules of the upper surface of the arms. Juveniles are shades of red, brown, violet, or purple. Adults are reddish brown, yellowish brown, tan, or violet. They are usually found at depths of less than 3 to 6 feet.
Poraniella echinulata, family Asteropseidae, can be found in the Bahamas Islands and
in Florida. Its habitat is "on hard substrates, beneath coral rubble and rock in shallow reef habitats" (Hendler, Kier, Miller, and Pawson, 82). This species of sea star is one of the smallest in the tropical western Atlantic, usually less than 1 inch in diameter. It has a broad disk and five short arms, which are wide, flat, and thin, and are covered with thick, fleshy tissue. The upper surface is bright orange-red to blood red, variegated with white. Some have white pigment that forms a pentagon in the center of the disk, and there is also a distinctive stripe along each arm's midline. The tips of the arms are mottled black and white. The fleshy papulae are pale orange and the madreporite is a light tan color. The tube feet are transparent and colorless. The undersurface is orange-red but with white on the tips of the jaws. They have been found down to a depth of around 492 feet.
Oreaster reticulatus, family Oreasteridae, can be found in Florida and the Bahamas
Islands. Its habitat is in "shallow, quiet waters of reef flats, lagoons, and mangrove channels" (Hendler, Kier, Miller and Pawson, 83). It is one of the most widely known sea stars in the Caribbean. It can reach a diameter of 20 inches. It has a massive, inflated disk with five short tapering arms. The top surface has thick, heavy plates. The bottom surface is flat with a shallow concavity near the mouth. The top surface of a juvenile is usually a mottled green, tan, brown, and gray. The top surface of adults is yellow, orange, or brown. The erect tubercles are lighter or darker than the disk and arms. Both adults and juveniles have a bottom surface of cream or beige. On a calm, clear day, there is a chance to see them in grass beds or sand patches. They can be found at a depth of 3 to 120 feet.
The last sea star is Echinaster echinophorus, family Echinasteridae. It can be found in the Florida Keys, usually in shallow water. Its habitat is "usually associated with hard substrates" (Hendler, Kier, Miller and Pawson, 84). This small species has a diameter of 2.8 inches. It has a small disk and five arms. The arms taper slightly and are flattened ventrally. They have widely spaced plates and are covered with thin skin. The top surface has thorn-like, erect spines, with 6 to 9 spines in each row measuring 0.08 to 0.12 inches in length. The sides of the arms have a straight row of spines. The bottom surface has grooves outlined by three series of spines. Their color is red or crimson. They can be found down to a depth of 180 feet.
These are just seven of the many sea stars. Starfish play an important role in marine life; a great decline in starfish would affect other systems in the ocean. They are beautiful animals, and I hope to learn more about them as we explore the Florida Keys and the Bahamas.
Bavendam, Fred. Beneath Cold Waters: The Marine Life of New England. Down East Books. Camden, ME. 1980.
Hendler, Gordon, Kier, Porter M., Miller, John E., and Pawson, David L. Sea Stars, Sea Urchins, and Allies. Smithsonian Institution. 1995.
Lawrence, John. The Functional Biology of Echinoderms. The Johns Hopkins University Press: Baltimore, MD. 1987.
Prayer of the Starfish
The depths have closed over me.
Am I not
fallen from heaven
to the torments of the waves?
I look like a blood star.
I try to remember
my distant royalty
but in vain.
Crawling on the sand,
I open my arms
and I dream, I dream, I dream…
could not an angel
pull me up
from the bottom of the sea
to place me again
in Your heaven?
Ah! One day,
So be it!
Translated by John Lawrence
It is 7:16:03 PM on Tuesday, July 17, 2018. Last Update: Wednesday, May 7, 2014 | <urn:uuid:41c0c13e-d7ff-41a1-a05d-e9d96a39c307> | 4.09375 | 2,579 | Academic Writing | Science & Tech. | 65.421309 | 95,479,809 |
Glaciers might seem rather inhospitable environments. However, they are home to a diverse and vibrant microbial community. It’s becoming increasingly clear that they play a bigger role in the carbon cycle than previously thought.
A new study, now published in the journal Nature Geoscience, shows how microbial communities in melting glaciers contribute to the Earth’s carbon cycle, a finding that has global implications as the bulk of Earth’s glaciers shrink in response to a warming climate.
The study was conducted by Heidi Smith and Christine Foreman of the Center for Biofilm Engineering in Montana State University's College of Engineering, USA; Marcel Kuypers and Sten Littmann of the Max Planck Institute for Marine Microbiology in Bremen, Germany; and researchers at the University of Colorado at Boulder, the U.S. Geological Survey, and Stockholm University in Sweden.
The paper challenges the prevailing theory that microorganisms found in glacial meltwater primarily consume ancient organic carbon that was once deposited on glacial surfaces and incorporated into ice as glaciers formed.
“We felt that there was another side to the story,” said Smith. “What we showed for the first time is that a large proportion of the organic carbon is instead coming from photosynthetic bacteria” that are also found in the ice and that become active as the ice melts, Smith said. Like plants, those bacteria absorb carbon dioxide and in turn provide a source of organic matter.
The research team made the discovery after sampling meltwater from a large stream flowing over the surface of a glacier in the McMurdo Dry Valleys region of Antarctica in 2012. Afterward, Smith spent two months at the Max Planck Institute for Marine Microbiology in Bremen, where she worked with colleagues to track how different carbon isotopes moved through the meltwater’s ecosystem, allowing the team to determine the carbon’s origin and activity.
The researchers ultimately found that the glacial microbes utilized the carbon produced by the photosynthetic bacteria at a greater rate than the older, more complex carbon molecules deposited in the ice, because the bacterial carbon is more “labile,” or easily broken down. The labile carbon “is kind of like a Snickers bar,” meaning that it’s a quick, energizing food source that’s most available to the microbes, Smith said.
Moreover, the researchers found that the photosynthetic bacteria produced roughly four times more carbon than was taken up by the microbes, resulting in an excess of organic carbon being flushed downstream. “The ecological impact of this biologically produced organic carbon on downstream ecosystems will be amplified due to its highly labile nature,” Foreman said.
Although individual glacial streams export relatively small amounts of organic carbon, the large mass of glaciers, which cover more than 10 percent of the Earth’s surface, means that total glacial runoff is an important source of the material. Marine organic carbon underpins wide-ranging ecological processes such as the production of phytoplankton, the foundation of the oceans’ foodweb.
As glaciers increasingly melt and release the organically produced, labile carbon, “we think that marine microbial communities will be most impacted,” Smith said. “We hope this generates more discussion.”
In a “News and Views” commentary accompanying the article in Nature Geoscience, Elizabeth Kujawinski, a tenured scientist at Woods Hole Oceanographic Institution, called the team’s work “an elegant combination” of research methods.
Taken together with another study published in the same issue of Nature Geoscience, about microbial carbon cycling in Greenland, Smith’s paper “deflates the notion that glacier surfaces are poor hosts for microbial metabolism,” according to Kujawinski. The two studies “have established that microbial carbon cycling on glacier surfaces cannot be ignored,” she added.
Based on Montana State University’s press release:
H. J. Smith, R. A. Foster, D. M. McKnight, J. T. Lisle, S. Littmann, M. M. M. Kuypers and C. M. Foreman: Microbial formation of labile organic carbon in Antarctic glacial environments. Nature Geoscience.
Accompanying News & Views
E. Kujawinski: The power of glacial microbes. Nature Geoscience.
Montana State University, Bozeman, Montana 59717, USA
Stockholm University, Stockholm 10691, Sweden
Max Planck Institute for Marine Microbiology, Bremen 28359, Germany
University of Colorado, Boulder, Colorado 80309, USA
US Geological Survey, St Petersburg, Florida 33701, USA
Please direct your queries to
Dr. Fanni Aspetsberger
Dr. Manfred Schlösser
Phone: +49 421 2028 947 or 704
http://www.mpi-bremen.de/en/Hotspots-for-biological-activity-and-carbon-cycling-... (related press release: Hotspots for biological activity and carbon cycling on glaciers)
Dr. Fanni Aspetsberger | Max-Planck-Institut für marine Mikrobiologie
Global study of world's beaches shows threat to protected areas
19.07.2018 | NASA/Goddard Space Flight Center
NSF-supported researchers to present new results on hurricanes and other extreme events
19.07.2018 | National Science Foundation
A new manufacturing technique uses a process similar to newspaper printing to form smoother and more flexible metals for making ultrafast electronic devices.
The low-cost process, developed by Purdue University researchers, combines tools already used in industry for manufacturing metals on a large scale, but uses...
For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth.
To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength...
For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications.
Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar...
Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction.
A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in the synthesis of new chemical...
Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy.
"Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy....
20.07.2018 | Materials Sciences | <urn:uuid:e3720288-808b-44e5-8f63-5ad900c569b4> | 4.3125 | 1,683 | Content Listing | Science & Tech. | 35.717623 | 95,479,869 |
Vertically grown cadmium sulfide (CdS) nanowire (NW) arrays were prepared using two different processes: hydrothermal and physical vapor deposition (PVD). The NWs obtained from the hydrothermal process were composed of alternating hexagonal wurtzite (WZ) and cubic zinc blende (ZB) phases with growth direction along WZ <0001> and ZB . The NWs produced by the PVD process are single-crystalline WZ phase with growth direction along <0001>. These vertically grown CdS NW arrays have been used to convert mechanical energy into electricity following a developed procedure [Z. L. Wang and J. Song, Science 312, 242 (2006)]. The basic principle of the CdS NW nanogenerator relies on the coupled piezoelectric and semiconducting properties of CdS, and the data fully support the mechanism previously proposed for ZnO NW nanogenerators and nanopiezotronics. (c) 2008 American Institute of Physics.
Choose a citation style from the tabs below | <urn:uuid:814362f8-167c-452e-9e63-7f97d576b6b1> | 2.75 | 234 | Academic Writing | Science & Tech. | 35.806051 | 95,479,870 |
Estimation of ballistic block landing energy during 2014 Mount Ontake eruption
The 2014 Mount Ontake eruption started just before noon on September 27, 2014. It killed 58 people, and five are still missing (as of January 1, 2016). The casualties were mainly caused by the impact of ballistic blocks around the summit area. It is necessary to know the magnitude of the block velocity and energy to construct a hazard map of ballistic projectiles and design effective shelters and mountain huts. The ejection velocities of the ballistic projectiles were estimated by comparing the observed distribution of the ballistic impact craters on the ground with simulated distributions of landing positions under various sets of conditions. A three-dimensional numerical multiparticle ballistic model adapted to account for the topographic effect was used to estimate the ejection angles. From these simulations, we have obtained an ejection axis inclined γ = 20° from the vertical axis, with a direction angle of α = 20° from north to east. With these ejection angle conditions, the ejection speed was estimated to be between 145 and 185 m/s for a previously obtained range of drag coefficients of 0.62–1.01. The order of magnitude of the mean landing energy obtained using our numerical simulation was 10^4 J.
Keywords: Ballistics; Mount Ontake; 3D multiparticle numerical model; Drag; Topographic effect
Ballistic projectiles are ejected during explosive eruptions, follow a parabolic trajectory in the air that is minimally affected by wind, and ultimately land on the ground (Wilson 1972). These blocks can cause significant damage, such as penetrating roofs (Blong 1981, 1984; Ui et al. 2002), demolishing mountain huts, injuring humans (Blong 1984; Baxter and Gresham 1997), and causing fires if they are still hot when they land (Pistolesi et al. 2011). Our work on deducing the ejection conditions of ballistic projectiles and estimating their landing velocity and energy is useful for reducing the risk of damage caused by ballistic blocks ejected during volcanic eruptions.
To avoid such a high number of ballistic block-related casualties, it is useful to make hazard maps (Crandell et al. 1984; Alatorre-Ibargüengoitia et al. 2012; MIAVITA Team 2012; Fitzgerald et al. 2014), construct shelters around the crater of the volcano, and reinforce the roofs of mountain huts (Pomonis et al. 1999). To make a hazard map for volcanic ballistic projectiles, it is necessary to estimate the travel distance, landing velocity, and landing energy of the ballistic projectiles. Therefore, the objective of our project is to estimate the impact velocity and energy of ballistic projectiles when they land on the ground or impact the mountain huts. These estimations would also be useful in the design of shelters and the reinforcement of mountain hut roofs.
It is difficult to directly measure the impact speed and energy from monitoring data. Furthermore, no video equipment was set up in the summit area at the time of eruption. Although some hikers shot videos with their cameras or mobile phones, the location and time stamps of these videos were not clear or required calibration (Oikawa et al. 2016). For this reason, we implemented numerical simulations and compared the results with the distribution of impact craters to judge which initial conditions are most plausible for reproducing the existing distribution of impact craters. Some input parameter values were defined based on field observations or measurements of rocks sampled during the field observation.
Impact craters are often produced when ballistic blocks hit the ground. Several studies on impact craters have been conducted regarding the distribution of the ballistic blocks after landing (Fitzgerald et al. 2014; Pistolesi et al. 2008; Maeno et al. 2013). For the 2014 Mount Ontake eruption, Kaneko et al. (2016) studied the distribution of impact craters of ballistics from photographs they took days after the start of the eruption from a journalist’s helicopter. They defined A, B, C, and D zones around the vent depending on the number of impact craters per 5 m × 5 m. These impact craters were visible because they formed on the fine ash layer during the eruption. Ballistic blocks were ejected several times on 27 September from 11:52 a.m. when the eruption started to 12:40 p.m. when the fall of pyroclasts ended (Oikawa et al. 2016). For this reason, the distributions obtained by Kaneko et al. (2016) likely exclude some blocks ejected before the ash deposition. To estimate the ejection speed of the ballistics, we compared the ground distribution of ballistic blocks simulated using our numerical model with the distribution of the impact craters photographed by Kaneko et al. (2016).
The characteristics of the ballistic impact crater distribution featured in Kaneko et al. (2016) were, first, that it is elongated in the north-northeast direction and, second, that the farthest impact crater is approximately 1 km from the vent, around Ninoike pond. This is consistent with our field observation that the ballistic blocks that landed farthest from the vent were those that fell on the Ninoike Honkan hut, which is located north of Ninoike pond.
The elongation of the distribution of impact craters may have been caused by a combination of an inclined ejection and topographic control. Topographic control is taken into consideration because the stretched direction of the Jigokudani valley is similar to the direction of the elongation of impact crater distribution and the eruptive vents are in the Jigokudani valley. Thus, the wall of the valley might have prevented the ballistic projectiles from flying out from the valley in some directions. An inclination is also considered in our study because no impact crater was found in the lower and southwest part of the vent (Kaneko et al. 2016). Ballistic blocks hardly drop on the south-southwestern slope if the ejection angle has an inclination. Although wind is another possible cause of this elongation, the wind at the height of the summit was weak (approximately 2–3 m/s) according to the weather monitoring data of the Japan Meteorological Agency. Therefore, it is unlikely that wind materially affected the transport of ballistic projectiles.
To clarify the reason for the elongated distribution of impact craters and deduce the ejection angles, we used our multiparticle three-dimensional numerical model, which is able to calculate the trajectories and landing positions of ballistic projectiles based on the local topography. The ejection speed was then estimated based on these ejection angles, and once these ejection conditions were obtained, the ballistic landing velocity and energy were calculated.
Numerical models of ballistic projectiles have been developed since the 1940s. Minakami (1942) used an analytical ballistic equation to estimate the ejection speed of the 1937 Asama eruption. Wilson (1972) formulated a discretized numerical model for the trajectories of pyroclasts for the first time. Using a phreatomagmatic eruption as an example, Self et al. (1980) estimated an ejection velocity of 100–150 m/s for the 1977 Ukinrek Maars eruption. After these early studies, single-particle models mostly considered the drag effect of a vulcanian explosion (Fagents and Wilson 1993) or that of an eruption with a volcanic jet (Bower and Woods 1996). In the 2000s, some studies were dedicated to multiparticle numerical models (Saunderson 2008; de'Michieli Vitturi et al. 2010). Recently, Tsunematsu et al. (2014) proposed a numerical model with interparticle collisions but without drag. This model was the first to describe the two-dimensional (2D) deposited particle distribution on the ground, making the output data much more suitable for application to hazard maps. This study aims to calculate the particle trajectory and three-dimensional (3D) distribution on the ground, considering drag and topographic effects. Therefore, this model is adapted to evaluate the topographic effect, which is not accounted for by other numerical models. Using this improved numerical model, we estimated the ejection conditions and the landing velocity and energy of ballistic projectiles released during the 2014 Mount Ontake eruption.
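To make the governing physics concrete, the Python sketch below integrates a single block under gravity and quadratic air drag over flat ground. It is only an illustrative simplification of the model described here: it omits the topography, the multiparticle statistics, and the near-vent gas flow term, and the air density is a rough assumed value rather than a number from the paper.

```python
import math

RHO_AIR = 1.0   # kg/m^3, a rough assumed value for air near the ~3000 m summit
G = 9.81        # m/s^2

def simulate_block(speed, gamma_deg, alpha_deg, diameter=0.2,
                   density=2300.0, cd=0.8, dt=0.01):
    """Integrate one spherical block with quadratic air drag over flat ground.
    Returns (horizontal travel distance in m, landing speed in m/s)."""
    radius = diameter / 2.0
    area = math.pi * radius ** 2
    mass = density * (4.0 / 3.0) * math.pi * radius ** 3
    g_r, a_r = math.radians(gamma_deg), math.radians(alpha_deg)
    # Ejection direction: gamma measured from the vertical axis, alpha from north toward east
    vx = speed * math.sin(g_r) * math.sin(a_r)   # east component
    vy = speed * math.sin(g_r) * math.cos(a_r)   # north component
    vz = speed * math.cos(g_r)                   # vertical component
    x = y = z = 0.0
    while z >= 0.0:
        v = math.sqrt(vx * vx + vy * vy + vz * vz)
        k = 0.5 * RHO_AIR * cd * area * v / mass  # drag deceleration per unit velocity
        vx -= k * vx * dt
        vy -= k * vy * dt
        vz -= (G + k * vz) * dt
        x, y, z = x + vx * dt, y + vy * dt, z + vz * dt
    return math.hypot(x, y), math.sqrt(vx * vx + vy * vy + vz * vz)

# Example call with the best-fit angles and the mean measured drag coefficient
print(simulate_block(165.0, 20.0, 20.0))
```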
Input parameters and the values used in the simulations:
- Particle diameter (m): 0.2 (only 20-cm blocks were simulated)
- Particle density (kg/m3): normal distribution, Mean = 2300, Std = 300
- Particle ejection speed (m/s): varied between 100 and 200
- Polar angle (°): normal distribution, Mean = 0, Std = 15
- Azimuth angle (°): Uniform distribution 0–360
- Rotation angle from the vertical axis (°): varied between 20 and 80
- Direction angle from north to east (°): varied between 10 and 30
- Gas flow effect range (m): 100
The drag coefficient C_D, which is included in Eq. (1), is one of the most important parameters. The drag coefficient for a volcanic particle is strongly dependent on its shape (Wilson and Huang 1979) and Reynolds number (Mastin 2001; Alatorre-Ibargüengoitia and Delgado-Granados 2006; de'Michieli Vitturi et al. 2010). It is also dependent on its Mach number M_a if the flow is compressible and if M_a > 0.7 (Mastin 2001; Alatorre-Ibargüengoitia and Delgado-Granados 2006). Alatorre-Ibargüengoitia and Delgado-Granados (2006) measured the drag coefficient for ballistic blocks by conducting wind tunnel experiments. They set the flow velocity to approximately 20 m/s, and the particles were blown by this flow in the wind tunnel. From these experiments, they obtained an average drag coefficient of 0.8, with the results of individual experiments ranging from 0.62 to 1.01. This measurement was conducted in a horizontal wind tunnel, and it was possible to mediate the effects of gravity, which usually create some noise in the measurement. Furthermore, the Reynolds number was kept within the turbulent range at all times, and drag separation was assumed not to occur during the experiment. Therefore, this range can be assumed to accurately represent the range of possible drag coefficients for the ballistic blocks. Recently, many other models have been proposed for calculating the drag coefficient considering different particle shapes (Dellino et al. 2005; Bagheri et al. 2013). Because these models focus only on volcanic ash particles and do not consider the effects of the Mach number, they are not applicable to the calculation of our ballistic blocks.
Therefore, to determine the best fit for the ejection conditions, such as the rotation angle (γ), direction angle (α), and ejection speed, the average drag coefficient C_D = 0.8 obtained by Alatorre-Ibargüengoitia and Delgado-Granados (2006) was mainly used. Possible ranges for the ejection speed, landing velocity, and landing energy were then discussed by varying the drag coefficient within the range of 0.62–1.01 obtained in the individual experiments by Alatorre-Ibargüengoitia and Delgado-Granados (2006).
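As a rough illustration of the flow regime these blocks fly in, the short sketch below estimates the Reynolds and Mach numbers; the air properties are assumed round numbers, and the paper itself relies on the measured drag coefficients rather than such estimates.

```python
def flow_regime(speed, diameter, rho_air=1.0, mu_air=1.8e-5, sound_speed=340.0):
    """Rough Reynolds and Mach numbers for a block moving through air.
    The air density, viscosity, and sound speed are assumed values."""
    reynolds = rho_air * speed * diameter / mu_air
    mach = speed / sound_speed
    compressible = mach > 0.7      # threshold quoted above for compressible-flow effects
    return reynolds, mach, compressible

# A 0.2 m block at 150 m/s: Re is of order 10**6 (fully turbulent), Ma is about 0.44
print(flow_regime(150.0, 0.2))
```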
The gas flow may affect the particle transport when the explosion occurs, especially around the vent. To assess this effect, the gas flow velocity around the vent was included in the simulation. The range in which the gas flow may affect the particle transport (Fig. 2d) is the distance from the ejection position in which the flow velocity is included in the particle velocity calculation. This is a type of drag effect, as the flow velocity is implemented in the drag term of Eq. (1).
The gas flow effect range was set to 100 m for all simulations because the pyroclastic cone that formed around the center vent was approximately 200 m in diameter (vent "J4" of Fig. 2 in Kaneko et al. 2016). This suggests that the explosive gas flow affected the particles within this area. However, the gas flow effect range is outside the scope of this study because the objective of this study is to ascertain which parameters significantly affect the ejection and landing conditions. In fact, doubling the gas flow effect range changes the maximum travel distance by less than 10 % in our trial simulation.
For the initial conditions of our model, the particle density was measured in a laboratory. Five blocks were sampled by the joint observation group of the 2014 Mount Ontake eruption. Six rock samples, which were obtained by the members of the Joint Survey Team of the Japanese Coordinating Committee for Prediction of Volcanic Eruptions, were weighed in ambient air and in water. Then, the density was calculated based on the method by Shea et al. (2010) but without wrapping film because our samples were not overly vesiculated. Furthermore, the blocks were not cut into pieces, as the purpose of obtaining the particle density was to apply the values of the complete block to the numerical simulation.
The densities of the five samples ranged from 2020 to 2700 kg/m3, and the average density was 2283 kg/m3. Therefore, the mean density and its standard deviation were set to 2300 and 300 kg/m3, respectively, in our simulator.
The lengths along the three axes of the rocks sampled around the summit area were measured, and the arithmetic and geometric means (Biass and Bonadonna 2011) of these three dimensions were calculated for each rock. The average of the arithmetic means was 17.4 cm with a standard deviation of 8.1 cm, and the average of the geometric means was 16.8 cm with a standard deviation of 8.1 cm.
During our field observation, the diameter of the largest block we found was 70 cm, and particles of 10 cm in diameter were found on the wall of the mountain hut. Blocks that penetrated the roofs of mountain huts were approximately 20 cm in diameter.
Based on our direct measurement of the sampled blocks and the field observation, the most damaging particles in the 2014 Mount Ontake eruption were those of 20 cm in diameter. Thus, we used only 20-cm particles to investigate the ejection conditions and estimate the landing velocity and energy.
Vent position and ejection points
Several vents opened on September 27, 2014, according to reports by the Geospatial Information Authority of Japan (GSI) and Kaneko et al. (2016). Based on photographs of the summit area (Asahi Shimbun Newspaper, September 28, 2014; Kaneko et al. 2016), only one vent located in the center of these vents emitted an ash-laden plume, whereas the others emitted only steam-dominated plumes. The location of this center vent is shown in Fig. 1. We infer that the center vent was the only vent associated with the ejection of solid material by an ash-laden plume. Moreover, there is a pyroclastic cone in the south slope of the vent labeled “J4” by Kaneko et al. (2016), which corresponds to our center vent. This pyroclastic cone suggests that the center vent emitted ballistic blocks. Therefore, the center vent was set as the only vent that ejected ballistic blocks in our simulation. The location of the center vent was taken from the polygon file provided on the GSI Web site (Geospatial information authority of Japan 2014), and its center was used as the ejection position of the ballistic blocks. Although our ballistic simulator can accept multiple ejection positions, only one position was used to focus on the effects of the topography and the ejection direction on the distribution of deposited particles. The center vent and two other vents were in the Jigokudani valley (Kaneko et al. 2016). The Jigokudani valley runs from northeast to southwest. The shape of this valley and the ejection position may have strongly affected the transport of the ballistic blocks.
Digital elevation model
To include the effect of topography, we used the digital elevation model (DEM), which was downloaded from the Web site for the National Land Numerical Information Download Service (National Land Information Division, MILT, Japan, 1974–2014). The DEM segments the land in a grid of 10 m in longitude and 10 m in latitude. Its coordinate system was the World Geodetic System 1984 (WGS84), and the coordinates were expressed as sets of latitude and longitude. The DEM based on WGS84 was converted into the Universal Transverse Mercator (UTM) coordinate system with the Geospatial Data Abstraction Library (GDAL) software using the bilinear method. This method is recommended for converting from WGS84 to UTM coordinates because it introduces only small artifacts in this type of conversion (Price 2013).
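A minimal sketch of this reprojection step using GDAL's Python bindings is shown below. The file names and the UTM zone (EPSG:32653, zone 53N, which covers the Ontake area) are assumptions for illustration; the paper only states that GDAL with bilinear resampling was used.

```python
from osgeo import gdal

gdal.Warp(
    "dem_utm.tif",           # output DEM in UTM coordinates (hypothetical file name)
    "dem_wgs84.tif",         # input 10 m DEM in WGS84 lat/lon (hypothetical file name)
    dstSRS="EPSG:32653",     # target coordinate reference system (assumed UTM zone)
    resampleAlg="bilinear",  # bilinear resampling, per Price (2013)
)
```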
Results and discussion
According to Fig. 6 of Kaneko et al. (2016), the distribution of impact craters was elongated in the north-northeast direction. This elongation direction, roughly measured on the map, was approximately 17° from north. Presumably, the reasons for this elongation are the inclination of the ejection axis and the topographic control around the vent. Therefore, simulations with various direction angles α (Fig. 2c) were first conducted. Figure 4 shows the simulated distributions of deposited particles for direction angles of α = 10°, 20°, and 30°. The resulting particle distributions were confined along the eastern edge, especially in the case of a direction angle of α = 30°. The level of confinement decreased as the ejection direction shifted northward. Conversely, the distribution of deposited particles on the western side was dispersed.
During our field observation, we found several ballistic blocks that had penetrated the roof of the mountain hut Otaki Chojo Sanso (Cabinet Office, Japan 2015), and some hikers were hit by a shower of blocks at Otaki Chojo (Shinano Mainichi Shimbun 2015). Furthermore, many large blocks were dispersed along the Hacchodarumi trail (Oikawa et al. 2016). However, in the simulation with a direction angle of α = 30°, no particle reached the Otaki Chojo Sanso hut or the Hacchodarumi trail (Fig. 4c). Conversely, many particles reached the Otaki Chojo Sanso hut and the Hacchodarumi trail when the direction angle was set to α = 20° (Fig. 4b). When the direction angle was set to α = 10° (Fig. 4a), fewer particles were deposited in the area around the Hacchodarumi trail, and more particles were deposited in the northwest of the Ichinoike depression, where few impact craters were found in aerial observations (Kaneko et al. 2016). Given these results, the simulation with a direction angle of α = 20° best reproduced the actual distribution of deposited ballistic blocks.
The rotation angle γ (Fig. 2b) was then varied to assess the effect of the inclination of the ejection axis. Simulations were conducted with the rotation angle varying from γ = 20° to 80° (Fig. 5). The dispersion of the deposition decreased with increasing rotation angle, i.e., with increasing ejection axis inclination. The particle distribution in the Ichinoike depression was concentrated in the northeastern part when the rotation angle was between γ = 40° and 80° (Fig. 5b–d). The cause of this inhomogeneity may be the uphill slope on the western side of the Ichinoike depression. When the particles were ejected northeastward, some particles could not go beyond the southwestern ridge of the Ichinoike depression and were thus deposited on its southwestern slope. Other particles which were able to go beyond that slope were deposited in the northern region of the Ichinoike depression. Thus, only a small number of particles ejected with a rotation angle of γ = 80° were deposited in the Ichinoike depression (Fig. 5d).
Based on the photographs taken by Kaneko et al. (2016), impact craters were homogeneously dispersed in the Ichinoike depression. Among the results shown in Fig. 5, only those of the simulation with a rotation angle of γ = 20° show a homogeneous dispersion of the deposited particles in the Ichinoike depression. As stated in the previous section, many ballistic blocks fell along the Hacchodarumi trail and around the Otaki Chojo Sanso hut during the actual eruption. In the simulations with rotation angles of γ = 40°, 60°, and 80°, few particles were deposited around the Hacchodarumi trail and the Otaki Chojo Sanso hut. Thus, the rotation angle that yields the distribution most closely resembling the observation is γ = 20° from the vertical axis.
Because the air drag on the particles was difficult to estimate, as described in the Introduction, we performed simulations with various drag coefficients.
In Fig. 6, the 99th percentile of the travel distance is plotted for particle ejection speeds ranging from 100 to 200 m/s and drag coefficients ranging from 0.0 to 1.2. In theory, to consider the maximum travel distance, the 100th percentile of the travel distance should be calculated from the simulated deposition locations. However, in some cases, an outlier particle travels much farther than other particles. To reduce the influence of outlier particles, the 99th percentile was used to show the longest travel distance. To calculate the 99th percentile of the travel distance, the travel distances of the ejected particles were measured from their distances from the vent. When the number of counted particles reached 99 % of the total number of particles, this particle’s travel distance was defined as the 99th percentile of the travel distance.
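The counting rule just described amounts to a simple nearest-rank percentile; a small sketch (illustrative, not the authors' code) is given below.

```python
def percentile_distance(distances, q=99.0):
    """Travel distance at which q percent of the simulated blocks have been
    counted, mirroring the counting rule described in the text."""
    ordered = sorted(distances)
    threshold = q / 100.0 * len(ordered)
    count = 0
    for d in ordered:
        count += 1
        if count >= threshold:
            return d
    return ordered[-1] if ordered else None
```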
In Fig. 6, the different lines show the results of simulations with different drag coefficients C_D, and the black dashed line shows the largest observed distance between an impact crater and the center vent, which is approximately 1000 m (Kaneko et al. 2016). Thus, the speed read from the intersections between the black dashed line and the solid colored lines gives the estimated ejection speed for each considered drag coefficient.
Assuming the drag coefficient was in the range of the experimental results obtained by Alatorre-Ibargüengoitia and Delgado-Granados (2006), which is 0.62–1.01, the estimated ejection speed was between approximately 145 and 185 m/s. This estimation is based on the assumption that no particle was transported by the plume. However, there were many photographs showing plumes between 11:52 a.m. and 12:40 p.m. when pyroclasts were falling (Oikawa et al. 2016), and the ballistic blocks may have been blown upward by the plume. The travel distance may have increased if the ballistic blocks were affected by the rising plume. Therefore, the model could be overestimating the ejection velocity.
In contrast, Kaneko et al. (2016) estimated an ejection speed of 108 m/s using the program Eject! because they used a drag coefficient of approximately 0.1 based on the temperature at sea level (25 °C) and the thermal lapse rate (6.5 °C/km). To illustrate that our model is consistent with Eject!, the ejection speed was estimated to be approximately 110 m/s in the case of C_D = 0.1, as shown in Fig. 6. Given that the variation in the estimated ejection speed with a varying drag coefficient is large, defining the drag coefficient is very important for discussing the travel distances of ballistic projectiles. In addition, the effects of the plume or gas velocity should be seriously considered in numerical simulations in future studies.
Landing velocity and energy
Figures 7 and 8 show the mean landing velocity and energy, respectively. To estimate the landing velocities and energies, only drag coefficients in the range of 0.6–1.2 were used based on the range obtained by Alatorre-Ibargüengoitia and Delgado-Granados (2006). The mean landing velocity was calculated based on the simulation results for 10,000 particles with the estimated rotation angle γ = 20° and direction angle α = 20°. The mean landing velocities with all considered drag coefficients were lower than the ejection speeds (Fig. 7). The particle velocity was significantly reduced by the drag effect in our simulations.
Based on our estimation of the ejection speed in the previous section, the ejection speeds were approximately v_e = 145 and 185 m/s with drag coefficients of C_D = 0.62 and 1.01, respectively. In Fig. 7, the results for C_D = 0.6 and 1.0 at ejection speeds of 145 and 185 m/s give mean landing velocities of 83 and 85 m/s, which correspond to 299 and 306 km/h, respectively. These landing velocities are smaller than the ejection velocities estimated with the given drag coefficients because the drag reduces particle velocity.
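A quick order-of-magnitude check is consistent with the 10^4 J scale reported here; this is an illustrative back-of-envelope calculation, not the paper's full simulation, which averages over 10,000 particles with distributed densities.

```python
import math

# Assumes a spherical 20 cm block of density 2300 kg/m3 at the ~85 m/s mean landing velocity
diameter, density, v_land = 0.20, 2300.0, 85.0
mass = density * (4.0 / 3.0) * math.pi * (diameter / 2.0) ** 3   # about 9.6 kg
energy = 0.5 * mass * v_land ** 2                                # about 3.5e4 J
print(f"mass = {mass:.1f} kg, landing energy = {energy:.2e} J")
```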
Spence et al. (2005) did not discuss whether the energy is dependent on the particle size; however, it should be noted that the energy likely varies with the particle size, as the drag term in Eq. (1) depends on the particle mass and the cross-sectional area of the block. So far, criteria for roof penetration can be found only in studies by Blong (1984), Pomonis et al. (1999), and Spence et al. (2005), and similar values were found in all three studies. To discuss the possibility of roof penetration, more realistic critical values must be derived from observations or laboratory experiments (Williams 2016).
Furthermore, our estimation did not consider variation in the block size, and we ignored the possibility of particles being blown upward by volcanic plumes or the blasts, which may have affected the transport of blocks. One future objective may be to find the block size distribution on the ground and combine our ballistic model with a plume or blast model to obtain a more realistic estimation of the landing energy.
Using a 3D numerical multiparticle model of ballistic blocks, the ejection conditions of the 2014 Mount Ontake eruption, such as particle speed and rotation and direction angles, were estimated by comparing the simulated landing position distributions obtained using various sets of conditions with the distribution of ballistic impact craters obtained by Kaneko et al. (2016). The mean landing velocity and energy were then calculated for a range of possible drag coefficients using the estimated ejection conditions.
The topographic control is considered with our modified numerical model, and a rotation angle of γ = 20° from the vertical axis and a direction angle of α = 20° from the north were successfully derived by comparing our simulated particle distribution with the shape and axis of distribution of the impact craters observed by Kaneko et al. (2016).
The ejection speed was determined to be between 145 and 185 m/s for drag coefficients ranging from 0.62 to 1.01; this range was proposed by Alatorre-Ibargüengoitia and Delgado-Granados (2006) based on their laboratory experiments. The mean landing velocity is always lower than the ejection speed, and the estimated mean landing velocity of 10,000 particles with ejection speeds ranging from 145 to 185 m/s was found to be between 83 and 85 m/s. The simulated mean landing energies were larger than the critical energy of roof penetration for a plywood roof, and the value ranged from 3.8 × 10^4 to 4.5 × 10^4 J.
These values, estimated by comparing the simulated distribution of deposited particles with the observed distribution of impact craters, do not consider all the ballistic projectiles released from the vent of the 2014 Mount Ontake eruption because the impact craters were formed after fine tephra was deposited on the ground, which may have occurred after the initial ejection of ballistic blocks.
KT simulated ballistic trajectories and wrote the manuscript. YI checked the numerical code and simulated results. TK offered the photographs and the data of impact craters, and MY offered the ballistic block sample from the field. TF and KY discussed and helped to write the manuscript. KT, YI, and MY joined the ground truth of the summit area. All authors read and approved the final manuscript.
This study was supported by the Ministry of Education, Culture, Sports, Science and Technology (MEXT) and the Japan Society for the Promotion of Science (JSPS) KAKENHI Grant Number 15K01256 and 23241055. Kae Tsunematsu and Yasuhiro Ishimine studied in the restricted summit area of Mount Ontake with the support and special permission of the Cabinet Office of Japan. Mitsuhiro Yoshimoto was a member of the Joint Survey Team of the Japanese Coordinating Committee for Prediction of Volcanic Eruptions. We appreciate Japan Meteorological Agency’s support providing wind data around the summit of Mount Ontake. We thank the town of Kiso and the village of Otaki in Nagano Prefecture for permitting our field survey around the summit area. We thank Marcus Bursik for his useful advice. We are grateful for the detailed comments and thorough reviews from Ben Kennedy and another anonymous reviewer, and the great help from the editor, Nobuo Geshi.
The authors declare that they have no competing interests.
- Asahi Shimbun Digital (2014) 90 percent of victims died instantly. Were a half of them hit directly by the blocks? Dead body inspection. Asahi Shimbun Digital, 27 Oct 2014
- Asahi Shimbun Newspaper (2014) The vent of Mount Ontake blowing up a volcanic plume. Asahi Shimbun, 28 Sept 2014
- Asahi Shimbun Newspaper (2014) Mount Ontake eruption. At that time…. Asahi Shimbun, 27 Oct 2014
- Blong RJ (1981) Some effects of tephra falls on buildings. In: Self S, Sparks RSJ (eds) Tephra studies: proceedings of the NATO advanced study institute Tephra Studies as a Tool in Quaternary Research
- Blong RJ (1984) Volcanic hazards: a sourcebook on the effects of eruptions. Academic Press, London
- Cabinet Office, Japan (2016) Handbook for an enhancement of shelters on active volcanoes. Disaster Management section of Cabinet Office, Japan. Retrieved 1 January 2016, from http://www.bousai.go.jp/kazan/shiryo/ (in Japanese)
- Crandell DR, Booth B, Kusumadinata K, Shimozuru D, Walker GPL, Westercamp D (1984) Source-book for volcanic-hazards zonation. UNESCO, Paris
- de'Michieli Vitturi M, Neri A, Esposti Ongaro T, Lo Savio S, Boschi E (2010) Lagrangian modeling of large volcanic particles: application to vulcanian explosions. J Geophys Res 115(B8):B08206
- Fitzgerald RH, Tsunematsu K, Kennedy BM, Breard ECP, Lube G, Wilson TM, Jolly AD, Pawson J, Rosenberg MD, Cronin SJ (2014) The application of a calibrated 3D ballistic trajectory model to ballistic hazard assessments at Upper Te Maari, Tongariro. J Volcanol Geotherm Res 286:248–262. doi: 10.1016/j.jvolgeores.2014.04.006
- Geospatial Information Authority of Japan (2016) Acts of GSI, Japan for the Mount Ontake 2014 eruption. Retrieved 12 February 2016, from http://www.gsi.go.jp/BOUSAI/h26-ontake-index.html
- Mastin LG (2001) A simple calculator of ballistic trajectories for blocks ejected during volcanic eruptions. U.S. Geological Survey Open-File Report 01-45, 16 pp. Retrieved 1 January 2016, from http://pubs.usgs.gov/of/2001/0045/
- MIAVITA (2012) Handbook for volcanic risk management: Prevention, crisis management, resilience. MIAVITA Team, Orleans
- Minakami T (1942) On the distribution of volcanic ejecta (part I): the distributions of volcanic bombs ejected by the recent explosions of Asama. Bull Earthq Res Inst Univ Tokyo 20:65–91
- National Land Information Division (1974–2014) National spatial planning and regional policy Bureau, MILT, Japan. National Land Numerical Information download service. Retrieved 1 January 2016, from http://nlftp.mlit.go.jp/ksj-e/
- Pistolesi M, Rosi M, Pioli L, Renzulli A, Bertagnini A, Andronico D (2008) The paroxysmal event and its deposits. In: Calvari S, Inguaggiato S, Puglisi G, Ripepe M, Rosi M (eds) The Stromboli volcano: an integrated study of the 2002–2003 eruption. American Geophysical Union, Washington, DC. doi: 10.1029/182GM26
- Price M (2013) Looking good—properly reprojecting elevation rasters. ArcUser Esri, fall 2013. Retrieved 1 January 2016, from http://www.esri.com/esri-news/arcuser/fall-2013/looking-good
- Shinano Mainichi Shimbun (2015) Verification of Mount Ontake eruption—living with a volcano. What do we learn from 9.27? The Shinano Mainichi Shimbun Press, Nagano (in Japanese)
- Shinano Mainichi Shimbun Newspaper (2014) Hit of an Eruption (1)—difficulty in initial action of the rescue at high altitude. Shinano Mainichi Shimbun, 18 Oct 2014 (in Japanese)
- Shinano Mainichi Shimbun Newspaper (2015) Estimation of damaged area of lost 6 people. One year after the eruption. Shinano Mainichi Shimbun, 26 Mar 2015 (in Japanese)
- Ui T, Nakagawa M, Inaba C, Yoshimoto M (2002) Sequence of the 2000 eruption. Bulletin of the Volcanological Society of Japan, Usu Volcano (in Japanese)
- Williams G (2016) The vulnerability of Auckland city's buildings to tephra hazards. Master thesis, University of Canterbury, Christchurch, New Zealand
- Yamanashi Nichinichi Shimbun Newspaper (2015) One year after Mount Ontake eruption. Yamanashi Nichinichi Shimbun, 26 Sept 2015 (in Japanese)
Open Access: This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
#Climatic_models are mathematical representations of the atmosphere, ocean, ice and vegetation. In the case of human-induced climate change, climatic models are used to estimate how much warming has occurred, to attribute that warming to its causes, and to study how the climate will change in the future. Climate models are used to simulate past climate and to project future climate based on scenarios of man-made greenhouse gas emissions.
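To make the idea concrete, here is a minimal sketch of the simplest possible climate model, a zero-dimensional energy balance written in C++. It is an illustration only: the albedo, effective emissivity and ocean heat-capacity values are assumed round numbers, not parameters taken from any of the models or archives discussed below.

```cpp
#include <cstdio>

// Minimal zero-dimensional "climate model": absorbed sunlight is balanced
// against outgoing long-wave radiation, integrated forward in daily steps.
int main() {
    const double S0     = 1361.0;    // solar constant, W/m^2
    const double albedo = 0.30;      // planetary albedo (assumed)
    const double eps    = 0.612;     // effective emissivity, a crude greenhouse proxy (assumed)
    const double sigma  = 5.670e-8;  // Stefan-Boltzmann constant, W/m^2/K^4
    const double C      = 4.0e8;     // heat capacity of a mixed-layer ocean, J/m^2/K (assumed)
    const double dt     = 86400.0;   // time step: one day, in seconds

    double T = 273.0;                                 // initial surface temperature, K
    for (int step = 0; step < 365 * 200; ++step) {    // integrate for about 200 years
        const double absorbed = S0 / 4.0 * (1.0 - albedo);   // mean absorbed shortwave
        const double emitted  = eps * sigma * T * T * T * T; // outgoing longwave
        T += dt * (absorbed - emitted) / C;           // any imbalance warms or cools the surface
    }
    std::printf("Equilibrium surface temperature: %.1f K\n", T);
    return 0;
}
```

Full general circulation models such as those archived in CMIP rest on the same energy-balance idea, but solve it on a three-dimensional grid together with winds, ocean currents and many feedbacks.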
Researchers implement climatic models as computer software, and the outputs of coordinated simulations are collected in an archive known as CMIP. Atmosphere-only simulations, in which observed sea-surface conditions are prescribed rather than computed, are collected in a related archive known as AMIP.
· Coupled Model Intercomparison Project (CMIP)
The main objective of CMIP is to better understand past, present and future climate change arising from natural, unforced variability or in response to changes in radiative forcing.
· Atmospheric Model Intercomparison Project (AMIP)
It is an experimental protocol for global atmospheric general circulation models. This model configuration enables scientists to focus on the atmospheric model without the added complexity of ocean-atmosphere feedbacks in the climate system.
· GISS Climate Model
A major focus of GISS GCM simulations is to study the human impact on the climate as well as the effects of a changing climate on society and the environment.
Nowadays, various methods are being developed to identify the causes of #Climate_Change, since there is a lag in taking steps to control its #impacts. To discuss more and share your ideas on climate change, join us at the "World Congress on Climate Change" during October 15-16, 2018 in Rome, Italy.
Visit us @ https://goo.gl/B3b8AD | <urn:uuid:b64b396b-fb87-4b22-8b65-9ebebdf98009> | 3.703125 | 334 | News (Org.) | Science & Tech. | 23.869587 | 95,479,887 |
12 July 2018
Stork Trek: migratory costs and strategies
Published online 27 January 2016
Trade-offs may shape the migratory decisions of white storks.
White storks are opportunistic and adjust their behaviour to changing circumstances, says a new study revealing extensive variation in the migratory habits of the birds.
Researchers at Germany’s Max Planck Institute for Ornithology worked with colleagues in Spain, Russia, Armenia, Greece, Poland, Tunisia, Israel and South Africa to collect data from eight white stork populations in Europe and North Africa. Using GPS together with acceleration sensors, they tracked the position and movement of young storks to identify patterns.
While the sensors sent back low-resolution tracking data, the team could only collect the high-resolution data stored on the sensor by downloading it from nearby or by recovering the sensor. Many storks die en route, and since their migration carries them through countries that have become unsafe, data recovery proved a major challenge. “We have plenty of tags lying around which we would love to get data from, but we couldn’t get them ourselves or find anyone to go,” says Andrea Flack of Max Planck, who led the study.
The team, however, collected enough data to identify distinct patterns. The storks of each population flew a characteristic distance and followed a route that, Flack suspects, they use each year. In addition, the populations seem to follow different migration strategies.
The Armenian storks and some of the German storks migrated only a short distance, overwintering north of the Sahara. The remaining German storks and those from most of the other populations migrated over the Sahara to overwinter in the south, reaching as far as South Africa. Despite flying more than three times as far as the storks overwintering in the north, these birds used only 30% more energy during their migration, possibly because they could take advantage of thermal updrafts to ease their flight across the desert.
A third surprising finding came from the storks of the Uzbekistan population, which didn’t migrate at all. “We expected them to go east towards India or China based on their history, and I hope we can work on this and find out why they didn’t migrate,” says Flack.
While the storks which crossed the Sahara may have had an easier time during migration, the study identified a trade-off once they reached their destination. The short-distance migrants endured a demanding flight but overwintered in areas with a higher human population density, yielding easily available food in garbage dumps and landfills. The long-distance migrants, on the other hand, spent the winter in areas without many humans and so had to use more energy foraging.
“Logistically, this was a fantastic operation and I believe it’s the first time that we’ve gotten such high quality data that are truly comparable. It’s remarkable that there’s so much variation in the migration strategies,” says Arie van Noordwijk of the Netherlands Institute of Ecology, who was not involved in the study.
The different payoffs of these strategies may lead to changes in the storks’ migration, which would also have an impact at their destination. “The storks feed on insects that turn into pests, like locusts and armyworms,” Flack offers as an example.
As these flexible birds adjust to a changing world, so too must the ecosystems that have adapted to their annual visits, say the scientists.
- Flack, A. et al. Costs of migratory decisions: A comparison across eight white stork populations. Science Advances http://dx.doi.org/10.1126/sciadv.1500931 (2016) | <urn:uuid:b5764d09-bb46-4be8-b623-302384194a91> | 3.46875 | 785 | Truncated | Science & Tech. | 43.304562 | 95,479,918 |
Phyllomacromia overlaeti (Schouteden, 1934)
- scientific: P. subtropicalis (Fraser, 1954); P. paludosa (Pinhey, 1976); P. royi (Legrand, 1982)
Type locality: Lulua, Kapanga, DRC
Male is similar to P. congolica and P. sylvatica by (a) size, Hw 30-42 mm; (b) thorax with 2-3 pale stripes on each side: usually 1 antehumeral and 2 laterals, but one of these may be reduced; (c) dorsum of S10 flat, at most with rounded hump, without cones; (d) cerci brownish yellow to black, often darker than epiproct. However, differs by (1) labium uniformly brown, rather than with yellow and brown pattern; (2) border of hamule smoothly bent, rather than angled or incurved; (3) hind femur brown with contrasting black apex; (4) foliation on S8 broad and triangular, without notch. [Adapted from Dijkstra & Clausnitzer 2014; this diagnosis not yet verified by author]
Rivers, flowing channels in marshes and possibly streams shaded by gallery forest. Often with a sandy bottom and probably emergent vegetation and coarse detritus. From 0 to 1100 m above sea level.
Appendages (dorsal view)
Appendages (lateral view)
Abdominal segment 2 (lateral view)
Abdominal segments 8-10 (lateral view)
Map citation: Clausnitzer, V., K.-D.B. Dijkstra, R. Koch, J.-P. Boudot, W.R.T. Darwall, J. Kipping, B. Samraoui, M.J. Samways, J.P. Simaika & F. Suhling, 2012. Focus on African Freshwaters: hotspots of dragonfly diversity and conservation concern. Frontiers in Ecology and the Environment 10: 129-134.
- Schouteden, H. (1934). Annales Musee Congo belge Zoologie 3 Section 2, 3, 1-84.
- Pinhey, E.C.G. (1961). Dragonflies (Odonata) of Central Africa. Occasional Papers Rhodes-Livingstone Museum, 14, 1-97.
- Longfield, C. (1959). The Odonata of N. Angola. Part II. Publicacoes culturais Companhia Diamantes Angola, 45, 13-42.
- Fraser, F.C. (1954). New species of Macromia from tropical Africa. Revue Zoologie Botanique Africaines, 49, 41-76.
Citation: Dijkstra, K.-D.B (editor). African Dragonflies and Damselflies Online. http://addo.adu.org.za/ [2018-07-17]. | <urn:uuid:0aff23f4-0388-4536-a1a0-7e9d46b80113> | 2.8125 | 661 | Knowledge Article | Science & Tech. | 71.969992 | 95,479,919 |
Researchers at the Max Planck Institute for Biophysical Chemistry and the German Center for Neurodegenerative Diseases in Göttingen – in collaboration with Polish colleagues – have now “filmed” how a protein gradually unfolds for the first time.
“Snapshot” of the unfolding of the CylR2 protein from Enterococcus faecalis. If the protein is cooled from 25°C to -16°C, it successively breaks down into its two identical subunits. The latter are initially stable, but at -16°C they form an instable, dynamic protein form, which plays a key role in folding.
© Zweckstetter, Max Planck Institute for Biophysical Chemistry & German Center for Neurodegenerative Diseases
By combining low temperatures and NMR spectroscopy, the scientists visualized seven intermediate forms of the CylR2 protein while cooling it down from 25°C to -16°C. Their results show that the most unstable intermediate form plays a key role in protein folding. The scientists' findings may contribute to a better understanding of how proteins adopt their structure and misfold during illness. (Nature Chemical Biology, 10 February 2013)
Whether Alzheimer's, Parkinson's or Huntington's chorea – all three diseases have one thing in common: they are caused by misfolded proteins that form insoluble clumps in the brains of affected patients and, finally, destroy their nerve cells. One of the most important questions in the biological sciences and medicine is thus: how do proteins – the tools of living cells – achieve or lose their three-dimensional structure? Because only if their amino acid chains are correctly folded can proteins perform their tasks properly.
Stefan Becker's group undertook the first step: preparing a sufficient quantity of the protein in the laboratory. Subsequently, the two chemists cooled the protein successively from 25°C to -16°C and examined its intermediate forms with NMR spectroscopy. They achieved what they had hoped for: their "film clip" shows at atomic resolution how the protein gradually unfolds. The structural biologist Markus Zweckstetter describes exactly what happens in this process: "We clearly see how the CylR2 protein ultimately splits into its two subunits. The individual subunit is initially relatively stable. With further cooling, the protein continues to unfold and at -16°C it is extremely unstable and dynamic. This unstable protein form provides the seed for folding and can also be the trigger for misfolding." The scientists' findings may help to gain deeper insights into how proteins assume their spatial structure and why intermediate forms of certain proteins misfold in the event of illness.
Dr. Dirk Förger | Max-Planck-Institute
Facts Summary: The Asian Green Broadbill (Calyptomena viridis) is a species of concern belonging to the species group "birds" and found in the following area(s): Indonesia, Malaysia, Myanmar, Thailand.
Asian Green Broadbill Facts Last Updated: January 1, 2016
To Cite This Page:
Glenn, C. R. 2006. "Earth's Endangered Creatures - Asian Green Broadbill Facts" (Online).
Accessed 7/18/2018 at http://earthsendangered.com/profile.asp?sp=9970&ID=3.
It’s an interesting question. I think that the answer is probably no.
Underneath the Earth’s crust (the first layer) there is a liquid layer called the mantle. Because it’s liquid, the hole will seal up again.
At the moment, the deepest hole we've ever drilled is the Kola Superdeep Borehole in Russia, which is 12,262 m deep. To get to the center of the Earth, we would have to drill about 6,400,000 m down!
Here’s an awesome video that shows you some of the world’s deepest holes: https://www.youtube.com/watch?v=fiW4gnaqCv4
In the future I think maybe, but in our lifetime probably not! It's about 2,000 miles just to reach the core! The furthest we've gone is 12 km. Also we'd have problems when we got there, as it's around 11,000 degrees Fahrenheit (similar to the surface of the Sun) and the pressure at the core is the same as 48,000 elephants sitting on you!
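To put the numbers from both answers side by side, here is a tiny C++ sketch of the arithmetic, using the approximate figures quoted above:

```cpp
#include <cstdio>

int main() {
    const double deepest_hole_m  = 12262.0;    // Kola Superdeep Borehole
    const double depth_to_core_m = 6400000.0;  // approximate distance to the center of the Earth
    const double fraction = deepest_hole_m / depth_to_core_m;
    std::printf("Fraction of the way drilled so far: %.4f (about %.2f%%)\n",
                fraction, fraction * 100.0);
    return 0;
}
```

That works out to roughly 0.2 percent of the distance, which is why the answers above are so pessimistic.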
News & Views
What Is Quantum Physics and Who Invented It?
Jul 09 2016
A quintessential arena of science, quantum physics is a term that's recognised by most but understood by few. Also known as quantum mechanics or quantum theory, the name describes a fundamental branch of physics that explains the nature and behaviour of matter and energy at the atomic and subatomic level. Yet despite the fact that it's widely recognised as the theoretical basis of modern physics, it's not a new concept.
The early days
Its origins can be traced back to 1900, when physicist Max Planck put forward his controversial quantum theory to the German Physical Society. Determined to explain why the colour of radiation emitted from a heated body shifts from red through orange towards blue as its temperature rises, Planck assumed that, like matter, energy exists in individual units rather than as a continuous electromagnetic wave. Treating energy as discrete, and therefore countable, allowed him to answer his initial question. It was this recognition of individual units that became the first assumption of quantum theory.
He then went on to derive a mathematical equation describing the phenomenon, and dubbed the individual units of energy quanta. The equation specifies how, at a given temperature, the radiation emitted by a body is distributed across the colour spectrum. He won the Nobel Prize in Physics for his innovative theory in 1918, and it paved the way for thirty years of refined contributions from fellow scientists.
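In modern notation (not the exact form used in his 1900 paper), Planck's result is usually summarised by two relations: each quantum carries an energy proportional to its frequency, and the brightness of a hot body at each frequency follows from that assumption:

E = h\nu, \qquad B_\nu(T) = \frac{2 h \nu^{3}}{c^{2}} \cdot \frac{1}{e^{h\nu / k_B T} - 1}

Here h is Planck's constant, \nu the frequency of the radiation, c the speed of light, k_B Boltzmann's constant and T the temperature of the body.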
The building blocks of quantum physics
Albert Einstein was another major quantum physics influencer, theorising in 1905 that it wasn't just energy, but radiation itself, that was quantised. In 1924, French physicist Louis de Broglie put forward the idea that the makeup and behaviour of energy and matter aren't separated by any fundamental differences, with both able to behave as though they are made of particles or waves when observed at atomic and subatomic levels. Cue the emergence of the principle of wave-particle duality. German theoretical physicist Werner Heisenberg was another pioneer to make ground-breaking contributions, proposing that it's impossible to make precise, simultaneous measurements of two complementary quantities, such as a particle's position and its momentum. This became famous as the uncertainty principle, and helped provoke Albert Einstein's notorious comment, "God does not play dice."
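In its modern textbook form, Heisenberg's statement about position x and momentum p is usually written as:

\Delta x \, \Delta p \ \geq\ \frac{\hbar}{2}

where \Delta x and \Delta p are the uncertainties in position and momentum and \hbar is the reduced Planck constant.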
Branching out from quantum theory itself are two major interpretations, known as the Copenhagen interpretation and the many-worlds theory. The first was put forward by Niels Bohr, while the latter has been favoured by the likes of Stephen Hawking and the late Richard Feynman.
As well as quantum physics, quantum chemistry is also a key part of contemporary science. 'Molecular Rotational Resonance Spectroscopy - Chiral Analysis without Chromatography' asserts that in order to make progress, the rotational spectroscopy community needs to validate the use of quantum chemistry as a method of accurately calculating molecular parameters.
An artist's impression of a stellar-mass black hole. photo courtesy: NASA/
There are more stars in space than there are grains of sand on every beach on Earth. Telescopes Why do we see the same Stars every Night?
Star Wanders Too Close to a Black Hole
Black Holes Hide in Our Cosmic Backyard
NASA's Chandra Finds Supermassive Black Hole Burping Nearby
Photo: Black hole. NGC 3783 is a barred spiral galaxy located about 30 million light years away in the constellation Centaurus.
Astronomers Pursue Renegade Supermassive Black Hole
NGC 6240 is a galaxy that contains two supermassive black holes in the process of merging
Astronomers have watched the sudden brightening of a galaxy and realized it can mean only one
... of the galactic center combines images from the Hubble, Spitzer, and Chandra space telescopes. An artist's concept depicts the likely black hole within.
Artist's illustration of Cygnus X-1 black hole binary, explained in caption and text
Something new and very exciting has just moved into our neighbourhood
Whirlpool of the Black Hole / Le Torbillon du Trou Noir
Most distant star ever seen spotted by Hubble telescope 9 billion light years away | The Independent
Black Hole Eating A Planet | Thread: Most Dangerous Places In The Universe -- Identified
This diagram shows how a shifting feature, called a corona, can create a flare
An artist's impression of the Square Kilometre Array. Image Credit: SPDO/Swinburne Astronomy Productions
Artist concept of a black hole taking matter from a companion star.
Biggest Ever Black Hole 17 billion times the mass of the sun
New study challenges supermassive black hole theory
... all of which are galaxies visible with the naked eye (assuming you are in the appropriate hemisphere of the Earth). Moving out beyond our local ...
Supermassive black hole Sagittarius A* is located in the middle of the Milky Way galaxy
Artist's impression showing Planet Nine causing other planets in the solar system to be hurled into
Sagittarius A*: New Evidence For A Jet From Milky Way's Black Hole. X
Collapsing Star Gives Birth to a Black Hole
Newly discovered network of planets could harbor water and life, scientists say
This chart shows artist concepts of the seven planets of TRAPPIST-1 with their orbital
If event horizons are real, then a star falling into a central black hole would
Hubble shows Milky Way is destined for head-on collision with Andromeda galaxy
First, all planets and objects in our solar system go through a period of retrograde — it's not just Mercury
A blazar jet in the middle of a galaxy makes it appear especially bright
Science : Black Holes.. It's time to learn kiddos, I'm going
The large image shows the jet ...
Galaxy 8 billion light years away offers insight into supermassive black holes
Hubble Space Telescope Images of M87. At right, a large scale image taken with the Wide-Field/Planetary Camera-2 from 1998. The zoom-in images on the left ...
This artistic rendering shows the distant view from Planet Nine back towards the sun. The
Planetary Nebula NGC 5189 : The nebula is located
The R136 region of the Tarantula nebula
The Universe Cosmos Galaxies Space Black Holes Earth Planets Moon Stars Sun Solar System
What is a Supermassive Black Hole?
Scientist recently discovered that what was thought to be a bright supernova may actually instead be
This illustration steps through the events that scientists think likely resulted in Swift J1644+57. Credit: NASA/Goddard Space Flight Center/Swift
Collisions between supermassive black holes can result in recoils that send the resulting black hole remnant
Whirlpool Galaxy (M51) · Spiral Galaxy NGC 7714 · Cats Eye Nebula Dying Star · Galaxy Pair Arp 87 · Antennae Galaxies NGC 4038 NGC 4039
A supermassive black hole is depicted in this artist's concept
Tech. Oceanic Black Holes Found in Southern Atlantic
The red lobes are radio emissions from gas jetted out by the galaxy's central supermassive black hole (at the center of the image).
Black holes rapidly spinning and twisting spacetime
science, planets, space, outer space, possibly habitable planets
A jet airliner leaves a vapor trail as the planet Mercury is seen in the bottom
Desktop Project Part 22: A black hole belches out a hurricane
NGC 4342 and its supermassive black hole
The Butterfly Nebula : With a wingspan of over
Gravitational Pull of a Black Hole Depends on Mass and Distance
And not just any black hole, it's a supermassive black hole with more than 4.1 million times the mass of the Sun. Its name is Sagittarius A*.
Can we artificially create a black hole? Sophie Allan from the National Space Centre answers this question for us.
NGC 4414, a typical spiral galaxy in the constellation Coma Berenices, is about 55,000 light-years in diameter and approximately 60 million light-years away ...
Jupiter's moon Europa could support alien life, scientists announce after finding activity under its shell | The Independent
Pillars of Creation : Originally taken on April 1
Artist's conception of a tidal disruption event (TDE) that happens when a supermassive black hole tears apart a star and launches a relativistic jet. | <urn:uuid:0c6ebedb-719e-4f86-b44c-9465950d60a9> | 3.484375 | 1,166 | Content Listing | Science & Tech. | 46.337814 | 95,479,948 |
Ever since the 1950s, when physicists first dreamed up the idea of doing astronomy with neutrinos, the holy grail has been to observe the first object outside our solar system that emits these ghostly particles. A handful were collected from a nearby supernova in 1987, but that was a rare event and the instruments that made the detection were hardly telescopes; they could not discern much more than up from down or left from right.
Three papers released today (two in Science and one on the preprint server arXiv) announce the culmination of this 60-year quest. IceCube, a strange telescope made of deep glacial ice at the South Pole, has detected neutrinos from a distant, luminous galaxy.
The neutrino is nearly massless and flies through space at almost the speed of light. Its nickname, “ghost particle,” points to the fact it rarely interacts with any form of matter and is therefore devilishly difficult to detect. Like the photon (particle of light), the neutrino carries no electric charge, so it is not diverted by electromagnetic fields: its arrival direction will point directly back to its source. Unlike the photon, however, it can pass through planets, stars, galaxies, veils of interstellar dust as easily as a bullet passes through fog and can therefore bring us news from regions that are opaque to light, at the edge of the universe and from the earliest times.
The latest discovery represents only the second time—after the near-miraculous supernova—scientists have identified neutrinos and light coming from the same extragalactic object. It also provides a clue to the long-standing mystery of how the charged particles known as cosmic rays, which constantly bombard our planet from space, are accelerated to the highest energies that have ever been observed. “It’s incredibly exciting and what we were always hoping for from the neutrino detectors,” says Alan Watson, a cosmic ray physicist from the University of Leeds in England who was not involved in these studies.
Observatory in the Ice
IceCube can tell the direction of some neutrinos to better than a quarter of a degree. It consists of a billion tons of diamond-clear Antarctic ice about two kilometers deep, monitored by more than 5,000 light detectors. In 2013 it detected the first high-energy neutrinos coming from beyond our atmosphere. But that breakthrough was not entirely satisfying because those neutrinos had rained in uniformly across the sky: There was no indication of the specific objects that may have emitted them—no “point source.”
This past September IceCube detected a neutrino carrying about 20 times the energy of any particle that could possibly be created by the most powerful man-made accelerators. This meant it had probably come from outer space. The instrument broadcast an automated alert.
IceCube’s alerts generate a lot of interest among astronomers, because the neutrino represents the third arrow in the quiver of the newborn field of multimessenger astronomy. Astrophysicists have long dreamed of employing messengers besides light to reveal the inner workings of the many unfathomable wonders in the cosmos. And the dream had come true only one month earlier, when three gravitational wave observatories had detected the merger of two neutron stars and optical telescopes had tied that merger to a gamma-ray burst: a brief flash of the most energetic form of light. No neutrinos were seen, however.
A Blazar Seen in Texas
Several days after IceCube’s alert, astronomer Yasuyuki Tanaka, who works at Kanata (“faraway” in Japanese), an optical/near-infrared telescope operated by Hiroshima University, realized the neutrino was pointing within two tenths of a degree of a known blazar named TXS 0506+056, which had first been observed by a radio telescope in Texas four decades ago.
Blazars are among the most violent creatures in the astronomical zoo: giant elliptical galaxies with rapidly spinning, supermassive black holes at their cores that gobble up nearby stars and other material in a sort of continuous cosmic earthquake and send out laserlike jets of light and other particles from their north and south poles. What differentiates blazars from other galaxies with such so-called active nuclei is that one of the jets points in Earth’s direction, making these objects extremely bright. Blazars occasionally flare, brightening by factors of 10 or more for periods of minutes to years. Because they are so cataclysmic and give off very energetic gamma rays, they have long been suspected of emitting not only high-energy neutrinos but also mysterious ultrahigh-energy cosmic rays.
Tanaka also works on the Fermi Gamma-ray Space Telescope, which has been taking images of the entire gamma-ray sky every three hours for about 10 years. Searching its catalogues, he discovered TXS had been flaring since the previous April. He sent out a second alert encouraging “observations of this source” across the optical spectrum.
TXS had not distinguished itself among the 4,000 or so known blazars until that moment, so little was known about it—even how far away it was. In the excitement after Tanaka’s alert the astronomical community made up for lost time. One group determined TXS is about 4.5 billion light-years away. That makes it one of the most luminous objects in the cosmos.
Six days after Tanaka’s alert, the operators of MAGIC, the Major Atmospheric Gamma Imaging Cherenkov Telescope on the La Palma Canary Island, announced the observation of very high-energy gammas coming from TXS. Because MAGIC sees to higher energies and has finer angular resolution than Fermi, this finding strengthened the connection to the neutrino—but not quite enough. In the first of today’s papers IceCube and the 15 collaborations that followed up on its alert conclude there is about one chance in a thousand the coincidence in direction and time between the single neutrino and the flaring blazer was just that, a coincidence. In this business, you need one chance in three million to claim discovery.
But IceCube’s principal investigator, Francis Halzen, a physicist at the University of Wisconsin–Madison, points out there is more to the science of this matter than statistics. He quotes the great experimentalist Ernest Rutherford: “If your experiment needs a statistician, you need a better experiment,” and adds, “We did that.”
Looking Back in Time
IceCube’s point source group, led by astrophysicist Chad Finley of Stockholm University, looked through the experiment’s historical data and discovered IceCube had detected a spectacular “neutrino flare” from TXS—about 13 particles in all—during a four-month period starting in October 2014. Perplexingly, however, Fermi had observed no corresponding flare in gamma rays.
Another IceCubist, Elisa Resconi, an astrophysicist at the Technical University of Munich, gathered a small team to investigate more closely. Synthesizing all the observations that had ever been made of TXS, they discovered it actually had flared in gammas in 2014, but in a subtle way. Although it had not given off more gamma-ray energy altogether, its spectrum had shifted toward higher-energy gammas exactly when it had flared in neutrinos. And the shapes of the optical and neutrino spectra shifted in complementary ways during both flares. “It all holds together,” Watson says, “I believe the whole story, but it took all three papers to convince me. This is the first convincing direct evidence for the acceleration of a hadronic component [a particle made of quarks] in any source.”
Basic particle physics says these neutrinos can only have been produced by hadrons, which would primarily have been protons, emerging in the blazar jet and colliding with other particles, including photons, on their way out. Because the cosmic rays that bombard Earth are made up predominantly of protons and heavier nuclei, the simple fact a blazar has now been shown to produce high-energy neutrinos is the first solid clue to a possible source of ultrahigh-energy cosmic rays. The reason it is difficult to identify the sources of cosmic rays is that they carry electric charge, so their trajectories are bent by interstellar magnetic fields and their arrival directions do not point back to their origins. Because the neutrinos IceCube detected must have traveled in straight lines and must have been produced by hadrons, they indicate high-energy hadrons must have been emitted from the same blazar source.
The various models for neutrino emission from blazars, developed in blissful theoretical isolation, have now had their first encounter with real data, and none can explain the exact details seen. Theorist Eli Waxman of the Weizmann Institute of Science in Israel believes the models “will require a complete modification.”
This discovery also gives a shot in the arm to the nascent field of neutrino astronomy. Both Waxman and Watson now hunger for next-generation instruments. The IceCube collaboration has proposed an upgrade that stands to improve sensitivity by an order of magnitude, and similar instruments are planned for deployment in the Mediterranean Sea and Lake Baikal, Siberia.
Meanwhile, this remarkable telescope continues to watch the neutrino sky from its deep, icy abode. IceCube almost certainly has more surprises in store. | <urn:uuid:e0f3a513-572a-4b83-906f-56fe9924c910> | 3.921875 | 1,980 | News Article | Science & Tech. | 30.070072 | 95,479,962 |
In this Oct. 31, 2011 file photo, Thai residents carry their belongings along floods as they move to higher ground at Bangkok's Don Muang district, Thailand. (AP Photo/Aaron Favila, File)
WASHINGTON - Freakish weather disasters — from the sudden October snowstorm in the Northeast U.S. to the record floods in Thailand — are striking more often. And global warming is likely to spawn more similar weather extremes at a huge cost, says a draft summary of an international climate report obtained by The Associated Press.
The final draft of the report from a panel of the world's top climate scientists paints a wild future for a world already weary of weather catastrophes costing billions of dollars. The report says costs will rise and perhaps some locations will become "increasingly marginal as places to live."
The report from the Nobel Prize-winning Intergovernmental Panel on Climate Change will be issued in a few weeks, after a meeting in Uganda. It says there is at least a 2-in-3 probability that climate extremes have already worsened because of man-made greenhouse gases.
This marks a change in climate science from focusing on subtle changes in daily average temperatures to concentrating on the harder-to-analyze freak events that grab headlines, cause economic damage and kill people. The most recent bizarre weather extreme, the pre-Halloween snowstorm in the U.S., is typical of the damage climate scientists warn will occur — but it's not typical of the events they tie to global warming.
"The extremes are a really noticeable aspect of climate change," said Jerry Meehl, senior scientist at the National Center for Atmospheric Research. "I think people realize that the extremes are where we are going to see a lot of the impacts of climate change."
The snow-bearing Nor'easter cannot be blamed on climate change and probably isn't the type of storm that will increase with global warming, four meteorologists and climate scientists said. They agree more study is needed. But experts on extreme storms have focused more closely on the increasing numbers of super-heavy rainstorms, not snow, NASA climate scientist Gavin Schmidt said.
The opposite kind of disaster — the drought in Texas and the Southwest U.S. — is also the type of event scientists are saying will happen more often as the world warms, said Schmidt and Meehl, who reviewed part of the climate panel report. No studies have specifically tied global warming to the drought, but it is consistent with computer models that indicate current climate trends will worsen existing droughts, Meehl said.
Studies also have predicted more intense monsoons with climate change. Warmer air can hold more water and puts more energy into weather systems, changing the dynamics of storms and where and how they hit.
Thailand is now coping with massive flooding from monsoonal rains that illustrate how climate is also interconnected with other manmade issues such as population and urban development, river management and sinking lands, Schmidt said. In fact, the report says that "for some climate extremes in many regions, the main driver for future increases in losses will be socioeconomic in nature" rather than greenhouse gases.
There's an 80 percent chance that the killer Russian heat wave of 2010 wouldn't have happened without the added push of global warming, according to a study published last week in the Proceedings of the National Academy of Sciences.
So while in the past the climate change panel, formed by the United Nations and World Meteorological Organization, has discussed extreme events in snippets in its report, this time the scientists are putting them all together. The report, which needs approval by diplomats at the mid-November meeting, tries to measure the confidence scientists have in their assessment of climate extremes both future and past.
Chris Field, one of the leaders of the climate change panel, said he and other authors won't comment because the report still is subject to change. The summary chapter of the report didn't detail which regions of the world might suffer extremes so severe as to leave them marginally habitable.
The report does say scientists are "virtually certain" — 99 percent — that the world will have more extreme spells of heat and fewer of cold. Heat waves could peak as much as 5 degrees hotter by mid-century and even 9 degrees hotter by the end of the century.
Weather Underground meteorology director Jeff Masters, who wasn't involved in the study, said in the United States from June to August this year, blistering heat set 2,703 daily high temperature records, compared with only 300 cold records during that period, making it the hottest summer in the U.S. since the Dust Bowl of 1936.
By the end of the century, the intense, single-day, heavy rainstorms that now typically happen only once every 20 years are likely to happen about twice a decade, the report says.
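A back-of-the-envelope calculation shows what that change in frequency would mean for any given decade; treating years as independent is a simplification, and the return periods are the report's round numbers:

P(\text{at least one such storm in a decade}) = 1 - \left(1 - \frac{1}{T}\right)^{10}, \qquad T = 20\ \text{yr} \Rightarrow \approx 0.40, \qquad T = 5\ \text{yr} \Rightarrow \approx 0.89

In other words, the chance of experiencing at least one such downpour in a decade would jump from roughly 40 percent to roughly 90 percent.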
The report said hurricanes and other tropical cyclones — like 2005's Katrina — are likely to get stronger in wind speed, but won't increase in number and may actually decrease. Massachusetts Institute of Technology meteorology professor Kerry Emanuel, who studies climate's effects on hurricanes, disagrees and believes more of these intense storms will occur.
And global warming isn't the sole villain in future climate disasters, the climate report says. An even bigger problem will be the number of people — especially the poor — who live in harm's way.
University of Victoria climate scientist Andrew Weaver, who wasn't among the authors, said the report was written to be "so bland" that it may not matter to world leaders.
But Masters said the basics of the report seem to be proven true by what's happening every day. "In the U.S., this has been the weirdest weather year we've had for my 30 years, hands down. Certainly this October snowstorm fits in with it." | <urn:uuid:3bdc50ab-7abb-464e-ae0b-4e652440ba75> | 3.09375 | 1,181 | News Article | Science & Tech. | 45.156989 | 95,479,963 |
bottom-up vs. top-down impact; habitat complexity; intertidal wetlands; intraguild predation; multitrophic interactions; phytophagous insect
We employed a combination of factorial experiments in the field and laboratory to investigate the relative magnitude and degree of interaction of bottom-up factors (two levels each of host-plant nutrition and vegetation complexity) and top-down forces (two levels of wolf-spider predation) on the population growth of Prokelisia planthoppers (P. dolus and P. marginata), the dominant insect herbivores on Spartina cordgrass throughout the intertidal marshes of North America. Treatments were designed to mimic combinations of plant characteristics and predator densities that occur naturally across habitats in the field.
There were complex interactive effects between plant resources and spider predation on the population growth of planthoppers. The degree that spiders suppressed planthoppers depended on both plant nutrition and vegetation complexity, an interaction that was demonstrated both in the field and laboratory. Laboratory results showed that spiders checked planthopper populations most effectively on poor-quality Spartina with an associated matrix of thatch, all characteristics of high-marsh meadow habitats. It was also this combination of plant resources in concert with spiders that promoted the smallest populations of planthoppers in our field experiment. Planthopper populations were most likely to escape the suppressing effects of predation on nutritious plants without thatch, a combination of factors associated with observed planthopper outbreaks in low-marsh habitats in the field. Thus, there is important spatial variation in the relative strength of forces with bottom-up factors dominating under low-marsh conditions and top-down forces increasing in strength at higher elevations on the marsh.
Enhancing host-plant biomass and nutrition did not strengthen top-down effects on planthoppers, even though nitrogen-rich plants supported higher densities of wolf spiders and other invertebrate predators in the field. Rather, planthopper populations, particularly those of Prokelisia marginata, escaped predator restraint on high-quality plants, a result we attribute to its mobile life history, enhanced colonizing ability, and rapid growth rate. Thus, our results for Prokelisia planthoppers suggest that the life history strategy of a species is an important mediator of top-down and bottom-up impacts.
In laboratory mesocosms, enhancing plant biomass and nutrition resulted in increased spider reproduction, a cascading effect associated with planthopper increases on high-quality plants. Although the adverse effects of spider predation on planthoppers cascaded down and fostered increased plant biomass in laboratory mesocosms, this result did not occur in the field where top-down effects attenuated. We attributed this outcome in part to the intraguild predation of other planthopper predators by wolf spiders. Overall, the general paradigm in this system is for bottom-up forces to dominate, and when predators do exert a significant suppressing effect on planthoppers, their impact is generally legislated by vegetation characteristics.
Required Publisher's Statement
© 2002 by the Ecological Society of America
Denno, RF; Gratton, C; Peterson, MA; Langellotto, GA; Finke, DL; Huberty, AF. 2002. Bottom-Up Forces Mediate Natural-Enemy Impact in a Phytophagous Insect Community. Ecology 83:1443–1458. http://dx.doi.org/10.1890/0012-9658(2002)083[1443:BUFMNE]2.0.CO;2.
Gratton, Claudio; Denno, Robert F.; Peterson, Merrill A.; Langellotto, Gail A.; Finke, Deborah L.; and Huberty, Andrea F., "Bottom-Up Forces Mediate Natural-Enemy Impact in a Phytophagous Insect Community" (2002). Biology. 21. | <urn:uuid:5bd7c730-0052-4a8f-8a29-012588b74372> | 2.6875 | 821 | Academic Writing | Science & Tech. | 26.434965 | 95,479,966 |
0 or 1 is the question.
Arrays in C++ start at 0.
However, sometimes it is more convenient to start them at 1.
For example, if I am to model a chess board as an array, I feel it is more convenient to number rows and columns between 1 and 8, rather than between 0 and 7.
If I have 3 items of something, I'd rather count them 1,2,3, instead of 0,1,2.
Hence, the question: is this a matter of taste, or are there practical ways when you do need to use 0, or to use 1.
Just to note: there are some things that are a matter of taste, but then someone comes up with a good reason to use one of those things and not the others. That's why I'm asking.
If I get a really good reason or a really good reply, I will increase points before awarding points. | <urn:uuid:380a3019-c888-4e25-88c7-1c4da1631c5a> | 2.59375 | 195 | Q&A Forum | Software Dev. | 81.424088 | 95,479,975 |
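A small sketch of the two conventions for the chess-board example may make the trade-off concrete. The `Board` wrapper below is a hypothetical helper written only for illustration, not a standard library type:

```cpp
#include <array>
#include <cstdio>

// Convention 1: use the native 0-based indexing, so rows and columns run 0..7.
// Convention 2: wrap the array so callers can use the chess convention 1..8;
//               the offset is paid once, inside the wrapper.
class Board {
public:
    // row and col are expected in 1..8, as on a printed chess diagram
    char& at(int row, int col) { return squares_[(row - 1) * 8 + (col - 1)]; }
private:
    std::array<char, 64> squares_{};   // the storage itself stays 0-based
};

int main() {
    char raw[8][8] = {};        // 0-based: valid indices are 0..7
    raw[0][0] = 'R';            // a rook on what a chess player would call square (1,1)

    Board board;
    board.at(1, 1) = 'R';       // 1-based view of the same idea

    std::printf("%c %c\n", raw[0][0], board.at(1, 1));
    return 0;
}
```

In practice the choice is largely taste plus consistency: the language, its containers and its standard algorithms all assume 0-based indexing, so many programmers keep indices 0-based internally and convert to a 1-based view only at the human-facing boundary, as the wrapper does.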
Conventional methods of stock monitoring are unsuitable for certain fish species. For example, the infestation of an area with invasive Ponto-Caspian gobies cannot be identified in time by standard methods. Researchers at the University of Basel have developed a simple, effective and cost-efficient test for these introduced non-native fish, they report in the magazine PLOS ONE.
Gobies from the Black and Caspian Sea are spreading along the shipping routes in Central Europe and North America. They have been present in the Swiss part of the Rhine for about four years and already dominate the bottom of the stream in the region of Basel. So far, they have not advanced further than the water power plant in Rheinfelden, but a continuing expansion seems inevitable.
Current methods of fish monitoring are not suited to adequately measure the spreading of Ponto-Caspian gobies as they are labor-intensive and not sufficiently sensitive. Accordingly, infestations of an area with gobies are often only discovered when they have reached high densities and efforts of containment remain futile. Researchers of the Department of Environmental Sciences of the University of Basel have now developed a test that allows for the detection of Ponto-Caspian gobies in streaming and stagnant water.
Measuring the environmental DNA
With a commercially available, though slightly modified, water column sampler, water samples are taken from the bottom of the water body, where invasive gobies live. Via feces or scales, the fish release so-called environmental DNA into the stream. The water samples are then analyzed for traces of this so-called eDNA in the lab. The test developed at the University of Basel reacts exclusively to the genetic material of Ponto-Caspian gobies, but not to domestic fish species.
The procedure is less time and cost-intensive than angling, and the samples can even be drawn by untrained individuals. Unlike electrofishing, the method does not impact the fish fauna and can consequently be used in protected zones and breeding grounds.
First test for lotic water
Five species of invasive gobies populate wide areas of freshwater and brackish waters in Central Europe – the species that is most common to the region around Basel, Neogobius melanostomus, even figures among the 100 worst invaders in Europe.
“Our test is one of the first approaches that targets a specific fish species and detects it successfully in flowing freshwater” says the study’s lead author, Dr. Irene Adrian-Kalchhauser. “We hope that our work contributes to establishing eDNA as a standard method in European water resource management. Similar tests have been used for a few years to track the expansion of the Asian carp in the United States.”
Irene Adrian-Kalchhauser, Patricia Burkhardt-Holm
An eDNA assay to monitor a globally invasive fish species from flowing freshwater
PLOS ONE 11 (1) | doi: 10.1371/journal.pone.0147558
Dr. Irene Adrian-Kalchhauser, University of Basel, Department of Environmental Sciences, Tel. +41 61 26704 10, email: firstname.lastname@example.org.
Reto Caluori | Universität Basel
Researchers have developed an endoscope as thin as a human hair that can image the activity of neurons in the brains of living mice. Because it is so thin, the endoscope can reach deep into the brain, giving researchers access to areas that cannot be seen with microscopes or other types of endoscopes.
“In addition to being used in animal studies to help us understand how the brain works, this new endoscope might one day be useful for certain applications in people,” says Shay Ohayon, who developed the device as a postdoctoral researcher in James DiCarlo’s lab at the Massachusetts Institute of Technology (MIT). “It could offer a smaller, and thus more comfortable, instrument for imaging within the nasal cavity, for example.”
The new endoscope is based on an optical fiber just 125 microns thick. Because the device is five to ten times thinner than the smallest commercially available microendoscopes, it can be pushed deeper into the brain tissue without causing significant damage.
In The Optical Society (OSA) journal Biomedical Optics Express, the researchers report that the endoscope can capture micron-scale resolution images of neurons firing. This is the first time that imaging with such a thin endoscope has been demonstrated in a living animal.
“With further development, the new microendoscope could be used to image neuron activity in previously inaccessible parts of the brain such as the visual cortex of primate animal models,” says Ohayon. “It might also be used to study how neurons from different regions of the brain communicate with each other.”
Acquiring images from a fiber
The new microendoscope is based on a multimode optical fiber, which can carry multiple beams of light at the same time. When light enters the fiber, it can be manipulated to generate a tiny spot at the other end, and that spot can be moved to different positions on the tissue without moving the fiber. Scanning the tiny spot across the sample excites fluorescent molecules used to label neuron activity. As the fluorescence from each spot travels back through the fiber, an image of neuron activity is formed.
“To achieve scanning fast enough to image neurons firing, we used an optical component known as a digital mirror device (DMD) to quickly move the light spot,” says Ohayon. “We developed a technique that allowed us to use the DMD to scan light at speeds up to 20 kilohertz, which is fast enough to see fluorescence from active neurons.”
Because the multimode fibers used for the endoscope scramble light, the researchers applied a method called wavefront shaping to convert the scrambled light into images. For wavefront shaping, they sent various patterns of light through the fiber to a camera at the other end and recorded exactly how that specific fiber changed light that passed through. The camera was then removed, and the fiber placed into the brain for imaging. The previously obtained information about how the fiber changes the light is then used to generate and scan a small point across the field of view.
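The calibration described here can be pictured as measuring a "transmission matrix" for the fiber and then running it backwards. The sketch below is a deliberately simplified, hypothetical illustration of that idea in C++: a random complex matrix stands in for the measured fiber, and phase conjugation is used to focus light on one chosen output pixel. It is not the authors' actual reconstruction code, and the mode and pixel counts are arbitrary.

```cpp
#include <complex>
#include <cstdio>
#include <cstdlib>
#include <vector>

using cplx = std::complex<double>;

int main() {
    // Toy "fiber": a random complex transmission matrix T
    // (M output pixels x N input modes). In the real instrument T is
    // measured by sending known patterns through the fiber to a camera.
    const int N = 64, M = 64;
    std::vector<std::vector<cplx>> T(M, std::vector<cplx>(N));
    std::srand(42);
    for (int m = 0; m < M; ++m)
        for (int n = 0; n < N; ++n)
            T[m][n] = cplx(std::rand() / (double)RAND_MAX - 0.5,
                           std::rand() / (double)RAND_MAX - 0.5);

    // To focus on output pixel `target`, drive the inputs with the
    // phase-conjugate of that row of T, so all contributions add in phase.
    const int target = 10;
    std::vector<cplx> input(N);
    for (int n = 0; n < N; ++n)
        input[n] = std::conj(T[target][n]);

    // Propagate through the toy fiber and compare the intensity at the
    // target pixel with the average intensity everywhere else.
    double intensity_target = 0.0, intensity_elsewhere = 0.0;
    for (int m = 0; m < M; ++m) {
        cplx field(0.0, 0.0);
        for (int n = 0; n < N; ++n) field += T[m][n] * input[n];
        const double intensity = std::norm(field);   // |field|^2
        if (m == target) intensity_target = intensity;
        else             intensity_elsewhere += intensity / (M - 1);
    }
    std::printf("Intensity at target pixel: %.2f, average elsewhere: %.2f\n",
                intensity_target, intensity_elsewhere);
    return 0;
}
```

On these toy numbers the focused pixel comes out far brighter than the background, which illustrates the principle of using a measured transmission to place, and then scan, a focal spot.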
Imaging living neurons
After successfully imaging cultured cells, the researchers tested their microendoscope on anesthetized mice. They inserted the fiber through a tiny hole in the skull of a mouse and slowly lowered it into the brain. To image the neurons firing, the researchers used a technique called calcium imaging that creates fluorescence in response to the influx of calcium that occurs when a neuron fires.
“One of the advantages of using an endoscope so thin is that as you lower it into the brain, you can see all the blood vessels and navigate the fiber to avoid hitting them,” says Ohayon.
In addition to showing that their endoscope could capture detailed neuronal activity, the researchers also demonstrated that multiple colors of light could be used for imaging. This capability could be used, for example, to observe interactions between two groups of neurons, each labeled with a different color.
For standard imaging, the endoscope images the neurons at the very tip of the fiber. However, the researchers also showed that the microendoscope could image up to about 100 microns away from the tip. “This is very useful because when the fiber is inserted into the brain, it may affect the function of neurons very close to the fiber,” explains Ohayon. “Imaging an area slightly away from the fiber makes it easier to capture healthy neurons.”
Dealing with bends in the fiber
One limitation of the microendoscope is that any bends in the fiber cause it to lose the ability to produce images. Although this did not affect the experiments described in the paper because the fiber was kept straight as it was pushed into the brain, solving the bending problem could greatly expand the applications for the device. Various research groups are working on new types of fibers that are less susceptible to bending and computational methods that might compensate for bending in real-time.
“If this bending problem can be solved, it will likely change the way endoscopy in people is performed by allowing much thinner probes to be used,” says Ohayon. “This would allow more comfortable imaging than today’s large endoscopes and may enable imaging in parts of the body that aren’t currently feasible.”
The paper “Minimally invasive multimode optical fiber microendoscope for deep brain fluorescence imaging is published in the OSA journal Biomedical Optics Express. | <urn:uuid:a6b91bdb-1b8d-47c6-bbf5-0a08a7cd584d> | 3.765625 | 1,126 | News Article | Science & Tech. | 31.490403 | 95,480,001 |
2028 ESA Space Mission Focuses on Nature of Exoplanets
April 13, 2018 / Written by: Miki Huynh
The ARIEL (Atmospheric Remote-sensing Exoplanet Large-survey) space mission has been selected by the European Space Agency (ESA) as the next medium-class science mission. Image source: ARIEL Space Mission / ESA.
In March of 2018, ARIEL (Atmospheric Remote-sensing Exoplanet Large-survey), developed by a consortium of more than 50 institutes from 12 European countries, was selected as the European Space Agency’s next medium-class mission, the first dedicated to exoplanet atmospheres. The four-year mission, planned for launch in 2028, will observe 1000 planets orbiting distant stars and make the first large-scale survey of the chemistry of their atmospheres.
“ARIEL will study a statistically large sample of exoplanets to give us a truly representative picture of what these planets are like. This will enable us to answer questions about how the chemistry of a planet links to the environment in which it forms, and how its birth and evolution are affected by its parent star,” said Giovanna Tinetti, PI for the ARIEL mission and Professor of Astrophysics at University College London. She is also a former NASA Astrobiology Postdoctoral Fellow at JPL and a past member of the NASA Astrobiology Institute.
More information on ARIEL, including facts, figures, and a press release, is available at the ARIEL Space Mission website.
- Life Underground - Available to Play
- Electron Acceptors and Carbon Sources for a Thermoacidophilic Archaea
- Yosemite Granite Tells New Story About Earth's Geologic History
- Supporting SHERLOC in the Detection of Kerogen as a Biosignature
- New Estimates of Earth's Ancient Climate and Ocean pH
- How Microbes From Spacecrafts Survive Clean Rooms
- Radical Factors in the Evolution of Animal Life
- Understanding Oxygen as an Exoplanet Biosignature
- Recap of the 2018 Astrobiology Graduate Conference (AbGradCon)
- Astrobiologist Rebecca Rapf Receives Inaugural Maggie C. Turnbull Early Career Award | <urn:uuid:bfb631b4-3087-47a8-bc9a-ac9547943fec> | 3.015625 | 469 | News (Org.) | Science & Tech. | 7.87141 | 95,480,009 |
Date of publication: 2017-09-02 18:51
The salt of a weak acid and strong base will form an alkaline [ alkali : A base which is soluble in water. ] solution when dissolved in water.
To measure the hydrogen gas released in the above reaction we use the apparatus as shown. As the bubbles of gas are given off, the plunger in the syringe moves out as hydrogen gas fills it. Every 75 seconds, say, we read the volume of gas in the syringe. The reaction is complete when the plunger no longer moves.
A loose plug of cotton wool is placed in the neck of the flask; it lets the carbon dioxide gas escape while stopping any acid spray from leaving the flask. As the gas escapes, the mass of the flask decreases. Take readings of the mass loss at regular time intervals, e.g. every 85 seconds.
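In either set-up the average rate over an interval is simply the change in the measured quantity divided by the elapsed time. A minimal sketch, using made-up syringe readings rather than data from this page:

```python
# Hypothetical readings: gas volume (cm3) in the syringe at regular times (s).
times = [0, 75, 150, 225, 300, 375]
volumes = [0.0, 22.0, 36.0, 44.0, 48.0, 48.0]  # volume stops rising when the reaction is complete

# Average rate over each interval = change in volume / change in time.
rates = [(volumes[i + 1] - volumes[i]) / (times[i + 1] - times[i])
         for i in range(len(times) - 1)]

print("interval rates (cm3/s):", [round(r, 3) for r in rates])
print("overall average rate (cm3/s):", round(volumes[-1] / times[-1], 3))
```

The same arithmetic applies to the mass-loss experiment, with mass readings in place of gas volumes.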
Concentration affects the rate of reaction: as the concentration of HCl was increased, the rate of the reaction also increased.
What we observe over time is that the zinc gradually disappears and bubbles of gas appear. After a few minutes the bubbles form less and less quickly until finally no bubbles appear at all, because all the acid has been used up; some zinc remains.
The rate of reaction was determined by measuring the time required for a given amount of magnesium metal to be consumed by Hydrochloric acid (HCl) solution of varying concentrations.
The equal proportions of hydrogen ions and hydroxide ions in the solution are not disturbed by additional equilibria, and so the pH of the solution is equal to 7.
Magnesium ribbon, ruler, scissors, analytical balance, sandpaper, hydrochloric acid, measuring cylinder, graduated cylinder, distilled water, glass stirring rod.
This additional equilibrium removes hydrogen ions, but not hydroxide ions. The water equilibrium shifts to the right, to replace these lost hydrogen ions and, in so doing, also produces more hydroxide ions. There are now more hydroxide ions than hydrogen ions present, leading to a pH of more than 7.
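The pH statements above follow from the ion product of water. As a brief worked reminder (standard textbook values at 25 °C, not taken from this page):

```latex
K_w = [\mathrm{H^+}][\mathrm{OH^-}] = 1.0\times10^{-14}, \qquad \mathrm{pH} = -\log_{10}[\mathrm{H^+}]
```

In pure water [H+] = [OH-] = 10^-7 mol/dm3, so pH = 7. If the salt's extra equilibrium removes hydrogen ions until, say, [H+] = 10^-9 mol/dm3, the pH becomes 9, i.e. greater than 7 as described.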
The rate of a chemical reaction is the time required for a given quantity of reactant(s) to be changed to product(s). The unit of time may be seconds, minutes, hours, days or years.
Soaps are also salts of weak acids (such as stearic and oleic acid) and strong bases (such as sodium and potassium hydroxide). As a result, soaps are usually alkaline in nature. | <urn:uuid:a2cca721-d8ef-4bc1-8fe4-c48be51f8c49> | 3.8125 | 577 | Tutorial | Science & Tech. | 50.259562 | 95,480,020 |
Language Reference |
See Also Applies To
Sets the numeric date of the Date object according to local time.
The numDate argument is a numeric value equal to the numeric date.
To set the date value according to Universal Coordinated Time (UTC), use the setUTCDate method.
If the value of numDate is greater than the number of days in the month stored in the Date object or is a negative number, the date is set to a date equal to numDate minus the number of days in the stored month. For example, if the stored date is January 5, 1996, and setDate(32) is called, the date changes to February 1, 1996. Negative numbers have a similar behavior.
© 1997 by Microsoft Corporation. All rights reserved.
Welcome to techref.massmind.org! | <urn:uuid:0ec0ce79-b70d-4858-a3ac-af488897f830> | 2.703125 | 343 | Documentation | Software Dev. | 62.653036 | 95,480,023 |
You may think you're just an average Joe, but according to your metabolomics data your body is percolating some expressive information about your daily life.
"Metabolomics measures small molecules called metabolites that reflect the physiology of the body, and can reveal specific details about you. Researchers can see specific metabolites -- such as caffeine -- in your blood, and form hypotheses about your diet, lifestyle or environment," said Stanford University School of Medicine Postdoctoral Fellow Tejaswini Mishra, Ph.D.
"For example, if we detected caffeine in your blood, it is likely that you had coffee before giving blood. With more data, we could also track your coffee-drinking habits, and perhaps even learn something about what type of coffee you drink! We might also see pesticides or derivatives of medications in the data, from which one could hypothesize whether a person gardens or farms, or lives in proximity to one, and which medications they might be on."
Mishra is integrating multi-omics data for NASA's Twins Study and comparing all the metabolites in retired twin astronauts Scott and Mark Kelly. She saw the levels of a number of Scott's metabolites increase when he went to space, and some of those stayed elevated after he returned to Earth. By integrating data from other Twins Study investigations, she hopes they can determine the cause of this elevation.
"It is incredible and powerful to have such rich data but it also is a little scary," Mishra said. "It really underscores the importance of securing your personal data, who you share it with, how you store it and protect it."
Twins Study researchers are investigating and securing an unprecedented amount of information. Most studies focus on two or three types of data but this is one of the few studies integrating many different types of data. By comparing identical genomes from twins, researchers can focus more attention to other specific molecular changes, such as metabolomics changes involving the end products of various biological pathways and processes.
Mishra is helping to integrate data from metabolites, DNA, RNA, proteins, microbes, physiological and neurobehavioral systems, as well as food and exercise logs, to help researchers create a timeline and identify patterns and correlations. Together, they hope to help identify health-associated molecular effects of spaceflight to protect astronauts on future missions.
NASA's Human Research Program (HRP) is dedicated to discovering the best methods and technologies to support safe, productive human space travel. HRP enables space exploration by reducing the risks to astronaut health and performance using ground research facilities, the International Space Station and analog environments. This leads to the development and delivery of a program focused on: human health, performance and habitability standards; countermeasures and risk-mitigation solutions; and advanced habitability and medical-support technologies. HRP supports innovative, scientific human research by funding more than 300 research grants to respected universities, hospitals and NASA centers to over 200 researchers in more than 30 states.
NASA Human Research Strategic Communications
Amy Blanchett | EurekAlert!
13.07.2018 | Life Sciences | <urn:uuid:190f6347-f68e-4119-ba49-2a63d30becec> | 3.3125 | 1,240 | Content Listing | Science & Tech. | 32.110105 | 95,480,029 |
Coevolution in Ecological Systems: Results from “Loop Analysis” for Purely Density-Dependent Coevolution
Much of the theory of population ecology is concerned with predicting equilibrium population size on the basis of assumptions about the interactions between populations. Familiar population interactions are interspecific competition, predation, symbiosis including parasitism and mutualism, and others. Recent years have witnessed the proliferation of equations which model these interactions in ways especially suited for certain species. Most population dynamic models predict that the interacting populations will attain stable equilibrium abundance provided the parameters in the model satisfy certain requirements which are special to each model. Many models also allow for other possibilities including cycling of various forms. Nonetheless almost all models contain stable coexistence at an equilibrium point as one of the possibilities.
Keywords: Ecological System; Stable Equilibrium; Stable Equilibrium Point; Evolutionary Control; Population Dynamic Model
Unable to display preview. Download preview PDF. | <urn:uuid:9399bb94-22c7-4112-bf20-d2d9378d50ed> | 2.59375 | 190 | Truncated | Science & Tech. | -11.020257 | 95,480,038 |
During the development of an organism, individual cells are directed to perform specific tasks within the body of the adult organism. Researchers at the Cells-in-Motion Cluster of Excellence now show that a certain protein is responsible for the development of sperm and egg cells.
When an embryo develops, single cells acquire specific fates that allow them to perform specific tasks in the adult organism. The primordial germ cells are formed very early in embryonic development and migrate within the embryo to the developing testis or the ovary, where they give rise to sperm and egg cells. During their migration, the germ cells pass through tissues and interact with cells that acquire other specific fates such as muscle, bone or nerve cells in response to different cues.
Primordial germ cells, however, ignore those signals and maintain their fate. What are the mechanisms behind this process? Researchers at the Cells-in-Motion (CiM) Cluster of Excellence at the University of Münster have now discovered that a certain protein expressed within the progenitor germ cells is responsible for their fate maintenance: the Dead End protein.
“For the first time, we have been able to demonstrate that germ cells lacking the protein undergo differentiation into other cell types during their migration,” says Theresa Gross-Thebing, lead author of the study and a PhD student at the Cluster of Excellence’s Graduate School.
As a consequence, in embryos lacking the Dead End protein the progenitor germ cells do not give rise to cells critical for reproduction and the adult organism becomes infertile. These findings are also relevant for the understanding of the development of certain germ cell tumours. The study appears in the latest issue of the journal “Developmental Cell”.
The detailed story:
The new results were obtained by studying primordial germ cells in zebrafish embryos. These embryos are transparent and develop rapidly outside the body of the female, allowing the visualization of such processes within the live organism. The researchers in the CiM group headed by Prof. Erez Raz observed progenitor germ cells in which the level of the Dead end protein was reduced. In previous studies, researchers had already discovered that germ cells lacking the Dead End protein had disappeared after one day. It was therefore presumed that these germ cells died during their migration such that they did not arrive at their destination. Following this assumption the protein was named “Dead end”.
To be able to observe the primordial germ cells over a longer period of time than in earlier studies, the researchers from Münster labelled germ cells with fluorescent proteins and employed different microscopy techniques. A type of microscopy that was especially useful in this study is “light-sheet fluorescence microscopy”. This type of microscopy scans the tissue very rapidly layer by layer, while a camera records the fluorescence signal. Next, a composite image of individual layers is generated allowing observation of the three-dimensional tissue structure and the position of cells within it.
By genetically deactivating certain signalling cues that normally guide primordial germ cells to the correct place, the researchers made these cells migrate into foreign tissues. After one day they were able to see that primordial germ cells lacking the Dead End protein had clearly changed their shape: the characteristic round shape and the migratory behaviour could not be observed anymore. Instead, the cells displayed shapes typical of the neighbouring somatic cells – for example, the elongated form of a muscle cell or the long processes characteristic of nerve cells. In each case, the shape, behaviour and molecular features matched the tissue in which the cells resided.
After one day only 20 percent of all primordial germ cells lacking Dead End still showed their original shape. After two days, the researchers could no longer observe any cells displaying the shape of a progenitor germ cell. While some of the germ cells were dying, as had been observed in earlier studies, most of them were transformed into other types of cells based on their shape.
Importantly, in addition to the morphological changes, germ cells lacking Dead End function developed into other types of cells, as judged by the expression of specific molecules. In contrast with wild-type cells, germ cells lacking Dead End started expressing proteins characteristic of muscle or nerve cells. “These results enable us to show for the first time that Dead End as a protein is responsible for maintaining the fate of primordial germ cells,” says Prof. Erez Raz.
These findings are relevant for research concerning certain germ cell tumours in humans, named teratomas. These tumours occur in large part in ovaries or testicles and contain, for example, tissues like teeth or hair. “Previous studies in mice suggest that germ cells lacking Dead End can initiate tumours and that within these tumours somatic differentiation occurs,” says Theresa Gross-Thebing. In future, the researchers want to investigate how Dead end functions in maintenance of germ cell fate and in inhibition of transformation of the cells into cancer cells that can develop into different types of somatic cells. Further studies will determine if the results of this basic research study can find their way into any possible medical applications.
The study received funding from the Cells-in-Motion Cluster of Excellence, the European Research Council (ERC) and the German Research Foundation.
Gross-Thebing T, Yigit S, Pfeiffer J, Reichman-Fried M, Bandemer J, Ruckert C, Rathmer C, Goudarzi M, Stehling M, Tarbashevich K, Seggewiss J, Raz E. The vertebrate protein Dead end maintains primordial germ cell fate by inhibiting somatic differentiation. Dev Cell 2017, DOI: 10.1016/j.devcel.2017.11.019
Cells-in-Motion Cluster of Excellence
Media Relations Manager
Tel: +49 251 83-49310
Svenja Ronge | idw - Informationsdienst Wissenschaft
06.07.2018 | Earth Sciences | <urn:uuid:1289d90d-fba5-4bcf-9aae-a3f0b63ea8b6> | 3.640625 | 1,911 | Content Listing | Science & Tech. | 38.471144 | 95,480,049 |
Remote sensing has long been a useful tool in global applications, since it provides physically-based, worldwide, and consistent spatial information. This paper discusses the potential of using these techniques in the research field of water management, particularly for 'Water Footprint' (WF) studies. The WF of a crop is defined as the volume of water consumed for its production, where green and blue WF stand for rain and irrigation water usage, respectively. In this paper evapotranspiration, precipitation, water storage, runoff and land use are identified as key variables to potentially be estimated by remote sensing and used for WF assessment. A mass water balance is proposed to calculate the volume of irrigation applied, and green and blue WF are obtained from the green and blue evapotranspiration components. The source of remote sensing data is described and a simplified example is included, which uses evapotranspiration estimates from the geostationary satellite Meteosat 9 and precipitation estimates obtained with the Climatic Prediction Center Morphing Technique (CMORPH). The combination of data in this approach brings several limitations with respect to discrepancies in spatial and temporal resolution and data availability, which are discussed in detail. This work provides new tools for global WF assessment and represents an innovative approach to global irrigation mapping, enabling the estimation of green and blue water use. © 2010 by the authors; licensee Molecular Diversity Preservation International, Basel, Switzerland.
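As an illustration only (a simplified reading of the mass balance described in the abstract, not the authors' exact formulation), the green/blue split can be approximated per pixel and per time step by attributing evapotranspiration first to effective precipitation and any remainder to irrigation:

```python
def split_green_blue_et(et, precip_eff):
    """Very simplified per-pixel split of evapotranspiration (mm) into
    green (rain-fed) and blue (irrigation-fed) components."""
    et_green = min(et, precip_eff)       # ET that effective precipitation can supply
    et_blue = max(0.0, et - precip_eff)  # remainder assumed to come from irrigation
    return et_green, et_blue

# Example: 10-day totals for one pixel (hypothetical numbers, mm).
et_total, p_eff = 45.0, 28.0
green, blue = split_green_blue_et(et_total, p_eff)
print(f"green WF component: {green} mm, blue WF component: {blue} mm")
```

In the paper's framework the remaining terms of the balance listed above (water storage change and runoff) would refine this estimate.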
Choose a citation style from the tabs below | <urn:uuid:5ec43e4a-a850-40e3-8be4-697994d01f7e> | 2.875 | 312 | Academic Writing | Science & Tech. | 7.932608 | 95,480,050 |
Finding, and Possibly Fixing, Toxicity in Nanomaterials
UO and Oregon State University scientists were baffled. New testing of mixtures of nanoparticles had led to an 88 percent mortality rate in zebrafish embryos, after earlier testing had found the materials to be free of toxins.
Looking more extensively, they found that a new automated delivery system, meant to speed the mixing of products for testing in the fish, created a synergistic, or multiplying, effect that triggered the toxicity.
The method used to analyze what was happening, it turns out, could provide a solution that could keep the nanotechnology field moving forward, said study co-author Jim Hutchison of the UO Department of Chemistry and Biochemistry.
While it isn’t clear that the new-found toxicity affecting zebrafish poses a threat to human health, the four-member research team said caution is necessary.
“Years after showing that these materials were the most benign and among the least toxic materials that we’ve ever seen, we did these experiments with the surfactants and found that, in this case, they were toxic,” Hutchison said. “Our new study gives us a wake-up call. This isn’t the first time that people have seen mixture toxicity, but it does remind us that two safe things mixed together doesn’t mean that the mixture is safe.”
The new research looked at biocompatible gold nanoparticles combined with surfactants in the formulation of nanomaterials. The findings are in a paper published in the June 26 issue of the journal ACS Nano.
In nanotechnology’s infancy, toxicologists hand delivered nanoparticles using pipettes for exposure to zebrafish. Based on that approach, Hutchison and OSU co-author Robert Tanguay had found that the widely used mix of inorganic nanoparticles and surfactants, individually, were not toxic to the fish.
However, they found, the automation — using inkjet-printer-like devices to rapidly inject materials employing small amounts of surfactant to control the size of the delivered droplets — brought an unforeseen change.
The mortality among the embryos first emerged as the researchers used the surfactant polysorbate 20. Results were similar using polysorbate 80 and sodium dodecyl sulfate.
Surfactants are compounds that reduce surface tension in liquids and other substances to improve mixing. Polysorbates are surfactants and emulsifiers commonly used in laundry detergent, suntan lotions, cosmetics and ice cream.
The discovery was made using a technique known as diffusion-ordered NMR spectroscopy, an adaptation of nuclear magnetic resonance that reveals how particles diffuse in solution. Movement slowed as increasing amounts of the surfactants assembled on the outside of the gold nanoparticles.
The result was increased uptake and toxicity in the zebrafish.
The National Science Foundation and National Institutes of Health funded the new research. The earlier work was done under the Safer Nanomaterials and Nanomanufacturing Initiative funded by the Air Force Research Laboratory through the Oregon Nanoscience and Microtechnologies Institute.
The NMR technique could be used as an early rapid-screening approach that would allow industry to assure safety by reformulating their products before large investments have been made, Hutchison said.
Hutchison and Tanguay are internationally known for pioneering the use of green chemistry, also known as sustainable chemistry, in designing nanoparticles. The technique uses molecular design principles to produce safer chemicals, reduce toxicity and minimize waste.
This article has been republished from materials provided by the University of Oregon. Note: material may have been edited for length and content. For further information, please contact the cited source.
Aurora L. Ginzburg, Lisa Truong, Robert L. Tanguay, James E. Hutchison. Synergistic Toxicity Produced by Mixtures of Biocompatible Gold Nanoparticles and Widely Used Surfactants. ACS Nano, 2018; 12 (6): 5312 DOI: 10.1021/acsnano.8b00036.
By combining mass spectrometry and thermal desorption, researchers honed a new method to measure excitation and relaxation rates of uracil, the building block of RNA.READ MORE | <urn:uuid:d601895a-53a9-41fe-8416-48927ed8615f> | 2.75 | 977 | News Article | Science & Tech. | 20.943346 | 95,480,060 |
An experimental diff library for generating operation deltas that represent the difference between two sequences of comparable items.
An open-licensed (MIT) library for generating deltas (A.K.A sequences of operations) representing the difference between two sequences of comparable tokens.
- Installation: pip install deltas
- Repo: http://github.com/halfak/Deltas
- Documentation: http://pythonhosted.org/deltas
- Note this library requires Python 3.3 or newer
This library is intended to be used to make experimental difference detection strategies more easily available. There are currently two strategies available:
- deltas.sequence_matcher.diff(a, b):
  A shameless wrapper around difflib.SequenceMatcher to get it to work within the structure of deltas.
- deltas.segment_matcher.diff(a, b, segmenter=None):
  A generalized difference detector that is designed to detect block moves and copies based on the use of a Segmenter.
>>> from deltas import segment_matcher, text_split >>> >>> a = text_split.tokenize("This is some text. This is some other text.") >>> b = text_split.tokenize("This is some other text. This is some text.") >>> operations = segment_matcher.diff(a, b) >>> >>> for op in operations: ... print(op.name, repr(''.join(a[op.a1:op.a2])), ... repr(''.join(b[op.b1:op.b2]))) ... equal 'This is some other text.' 'This is some other text.' insert ' ' ' ' equal 'This is some text.' 'This is some text.' delete ' ' ''
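For comparison, the difflib-backed strategy can be called in the same way. The snippet below assumes, without having checked the package documentation, that deltas.sequence_matcher.diff returns the same Operation objects with name, a1/a2 and b1/b2 attributes:

```python
>>> from deltas import sequence_matcher, text_split
>>>
>>> a = text_split.tokenize("This is some text.")
>>> b = text_split.tokenize("This is some new text.")
>>> for op in sequence_matcher.diff(a, b):
...     print(op.name, repr(''.join(a[op.a1:op.a2])),
...           repr(''.join(b[op.b1:op.b2])))
...
```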
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
|Filename, size & hash SHA256 hash help||File type||Python version||Upload date|
|deltas-0.3.7-py2.py3-none-any.whl (27.9 kB) Copy SHA256 hash SHA256||Wheel||3.4||Dec 13, 2015|
|deltas-0.3.7.tar.gz (17.9 kB) Copy SHA256 hash SHA256||Source||None||Dec 13, 2015| | <urn:uuid:419115de-2f9d-4915-b414-9e5aca1d455b> | 2.671875 | 521 | Product Page | Software Dev. | 57.876341 | 95,480,070 |
Vol. 32 No. 4
Teaching about the Role of Green Chemistry
by Fulvio Zecchini, Aurelia Pascariu, Patrizia Vazquez, Ligia Maria Moretto, Anthony Patti, Panayotis Siskos, Pietro Tundo
Originally published in 2004 in Italian, Il Cambiamento Globale del Clima (Global Climate Change) was written by Fulvio Zecchini and Pietro Tundo of the Consorzio Interuniversitario Nazionale “La Chimica per l’Ambiente” (INCA). Meant as an integrative educational tool, the booklet was designed to introduce green chemistry to seniors at secondary schools and to university freshmen. About 4000 copies were printed and distributed for free to schools and universities all over Italy. The monograph was so appreciated by Italian teachers, students, and academic experts, that a wider distribution is planned.
So far, the Subcommittee on Green Chemistry has completed three projects to translate the monograph into the following five languages: English, Spanish, and Portuguese (project 2005-015-1-300); Romanian (project 2007-035-1-300), and Greek (project 2008-018-2-300). All these editions are being, or already have been, distributed to target groups in several countries and have received positive feedback from end-users.
The monograph is structured into five chapters. The first two are focused on the composition and structure of the atmosphere and on air pollutants. The third chapter is dedicated to the interaction between matter and radiation, namely between greenhouse gases (GHGs) and infrared rays (IR).
Cover image from the book Il Cambiamento Globale del Clima
(by Francesco Tundo).
The fourth chapter is focused on the depletion of the ozone layer. Since several GHGs also act as ozone depleting substances (ODSs), this topic is somewhat related to global warming and is presented in a comparative way, underlining that, even if often the involved pollutants are the same, the phenomena are different and take place in different levels of the atmosphere.
The final chapter of the monograph is dedicated to the consequences of global warming and possible countermeasures. Being a global problem, the “political” solution must be global as well. New and effective international protocols must be agreed on in order to replace the ineffective Kyoto protocol. Nonetheless, the message of this chapter is that actual solutions can only come from scientists and namely from chemists, who must develop solutions to abate CO2 emissions caused by human activity.
Besides the five versions sponsored by IUPAC projects, two further adaptations/translations were realized. The Arabic edition was produced in 2006–2007 using European Commission funds derived from the Tempus Joint European Project “Sustainable Environmental Development, A Curriculum Development Project,” in which INCA participated.
The success of the different editions of the monograph has helped improve the public perception of chemistry as a fundamental scientific tool to solve global environmental problems, such as climate change. Each clear demonstration to younger generations of how much chemistry occurs in everyday life and of its usefulness for environmental protection contributes to raising awareness of, and engendering an interest in, this fascinating and useful discipline.
All of the above-mentioned versions of the monograph may be downloaded for free at <www.incaweb.org/publications>.
last modified 29 June 2010.
Copyright © 2003-2010 International Union of Pure and Applied Chemistry.
Questions regarding the website, please contact email@example.com | <urn:uuid:84810dcf-9e0f-4440-86eb-413ec1b649ae> | 2.890625 | 739 | Knowledge Article | Science & Tech. | 23.360553 | 95,480,103 |
Y: waving, not drowning
© BioMed Central Ltd 2003
Published: 18 June 2003
The human Y chromosome contains 60 million base pairs (Mb) of DNA; it is haploid, and 95% of it is nonrecombining. Helen Skaletsky from the Whitehead Institute for Biomedical Research and colleagues report in the first of two papers in the June 19 Nature that the 23Mb euchromatic region in the Y chromosome comprises eight massive palindromic sequences and that these regions are rich in genes that are functional and testis-specific (Nature 2003, 423:825-837).
In the second paper, they describe both comparative sequencing of the great ape Y chromosome, and the mechanism of gene conversion by which the Y chromosome repairs mutations that occur within these genes (Nature 2003, 423:873-876). The results raise important questions about the molecular clock dating of segmental duplications in the human genome and the rate of human-chimpanzee divergence in these regions.
Skaletsky et al. sequenced 97% of the male-specific region of the Y chromosome (MSY) from one man, and observed that it contained at least 156 transcription units, all located within euchromatic sequences, and identified 24 MSY-specific families to account for 125 of these. Half of the transcription units encode 27 distinct proteins or protein families, 12 of which are expressed ubiquitously, and 11 of which are testis-specific, confirming a previous model proposing two distinct functional classes of MSY genes.
They also showed that three different classes of sequences comprised the euchromatin: the X-transposed class, the X-degenerate class, and the ampliconic segments, the latter being composed of sequences that demonstrated intrachromosomal identities of at least 99.9%, and which contained the eight palindromic segments. The palindrome arms range from 9 kb to 1.4 Mb, are symmetrical and identical within a palindrome, and six of them contain the testis-specific genes as gene pairs in the palindromic arms, as well as inverted repeats and long tandem arrays. In the light of these findings, Skaletsky et al. propose a model for the evolution of the MSY.
"The occurrence of MSY gene pairs that are subject to frequent gene conversion might provide a mechanism for conserving gene functions across evolutionary time in the absence of crossing over," conclude the authors, debunking the previously held theory of the Y as a genetic wasteland of dead and dying genes that will rot away over the next few million years. "We have a new way of understanding how the rotting tendencies of the Y are counteracted," commented lead researcher David Page.
"Although the sex chromosomes provide the strongest case for a special relationship between genome organization and the unique biology of chromosomes, the other chromosomes shouldn't feel left out. […] Piecing together these [evolutionary] events remains a worthwhile challenge, for among the flotsam and jetsam of each chromosome lie clues to our history," writes Huntington F. Willard of Duke University in an accompanying News and Views article. | <urn:uuid:e0f73bf8-3926-49b8-bba7-868af8f7748d> | 3.140625 | 692 | Truncated | Science & Tech. | 35.115897 | 95,480,106 |
Methane is the second most important anthropogenic greenhouse gas after CO2. As a short-lived climate forcing agent (lifetime ~10 years), it provides a lever for slowing near-term climate change. Major anthropogenic sources of methane include oil/gas exploration and use, livestock, landfills, coal mining, and rice cultivation. Wetlands are the dominant natural source. The magnitude and spatial distribution of methane sources is highly uncertain and difficult to constrain.
Fig. Simulated methane concentrations using emissions constrained by satellite observations.
The hydroxyl radical (OH) is the primary oxidant for a number of non-CO2 greenhouse gases and CFCs. It also regulates the production of tropospheric ozone, a leading pollutant. As such, changes in tropospheric OH could have large implications for both future climate and air quality. However we currently lack a predictive understanding of OH on decadal-to-centennial timescales, evidenced by the disagreement between global models in their simulation of OH.
Fig. OH concentrations in a 6000 year equilibrium simulation with a coupled chemistry-climate model.
Carbon dioxide (CO2) is an atmospheric trace gas and the largest anthropogenic radiative forcer. CO2 levels have increased from 280 ppm in pre-industrial times to greater than 400 ppm in the present, largely due to changes in fossil fuel emissions, and can be measured via ground stations, aircraft, and satellites. The paradigm in ground-based trace gas measurements has been to employ a sparse network of high-precision instruments that can be used to measure atmospheric concentrations. These concentrations are then used to estimate emission fluxes, validate numerical models, and quantify changes in physical processes. However, the BEACO2N project (http://beacon.berkeley.edu/Overview.aspx) aims to provide a better understanding of the emissions and physical processes governing CO2 by deploying a high density of moderate-precision instruments.
Fig. We constructed a custom, hourly, 1-km CO2 emission inventory for the Bay Area.
Inverse models quantify the state variables driving the evolution of a physical system by using observations of that system. This requires a physical model that relates a set of input variables (state vector) to a set of output variables (observation vector). A critical step in solving the inverse problem is determining the amount of information contained in the observations and choosing the state vector accordingly. This is a non-trivial problem when using a large ensemble of observations with large errors.
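For a linear forward model the textbook Bayesian least-squares solution shows how observations and prior information are combined. The sketch below is a generic illustration with invented dimensions and covariances, not the specific model or state vector used in this work:

```python
import numpy as np

rng = np.random.default_rng(1)
n_state, n_obs = 10, 50

K   = rng.normal(size=(n_obs, n_state))       # Jacobian of the forward model
x_t = rng.normal(size=n_state)                # "true" state, used only to test the inversion
y   = K @ x_t + 0.1 * rng.normal(size=n_obs)  # synthetic observations with noise

x_a = np.zeros(n_state)                       # prior (a priori) state
S_a = np.eye(n_state)                         # prior error covariance
S_o = 0.01 * np.eye(n_obs)                    # observation error covariance

# Gain matrix and posterior estimate: x_hat = x_a + G (y - K x_a).
G     = S_a @ K.T @ np.linalg.inv(K @ S_a @ K.T + S_o)
x_hat = x_a + G @ (y - K @ x_a)
S_hat = S_a - G @ K @ S_a                     # posterior error covariance

print("posterior RMSE:", np.sqrt(np.mean((x_hat - x_t) ** 2)))
```

When the observations carry limited information for a finely resolved state vector, the inversion becomes ill-conditioned; aggregating the state with basis functions, as in the figure below, is one way to keep the problem well posed.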
Fig. Illustration of using a Gaussian mixture model and radial basis functions for defining the state vector. | <urn:uuid:f9c30f02-289e-4240-8ff5-deff81af69bb> | 3.453125 | 541 | Academic Writing | Science & Tech. | 28.077457 | 95,480,236 |
A fruit salad consists of blueberries, raspberries, grapes, and cherries. The fruit salad has a total of pieces of fruit. There are twice as many raspberries as blueberries, three times as many grapes as cherries, and four times as many cherries as raspberries. How many cherries are there in the fruit salad?
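A solution sketch (ours, not part of the original problem page): writing everything in terms of the number of blueberries b, the stated ratios give

```latex
r = 2b, \qquad c = 4r = 8b, \qquad g = 3c = 24b,
\qquad b + r + g + c = b + 2b + 8b + 24b = 35b,
```

so the stated total must be a multiple of 35, and the cherries make up 8/35 of it.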
This problem is copyrighted by the American Mathematics Competitions.
Instructions for entering answers:
For questions or comments, please email firstname.lastname@example.org. | <urn:uuid:7592a77e-7774-46c9-8fb6-ab1574fddc48> | 3.28125 | 110 | Tutorial | Science & Tech. | 43.911375 | 95,480,246 |
Rectangle ABCD has sides AB of length 4 and CB of length 3. Divide AB into 168 congruent segments with points A = P_0, P_1, ..., P_168 = B, and divide CB into 168 congruent segments with points C = Q_0, Q_1, ..., Q_168 = B. For 1 ≤ k ≤ 167, draw the segments P_kQ_k. Repeat this construction on the sides AD and CD, and then draw the diagonal AC. Find the sum of the lengths of the 335 parallel segments drawn.
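A solution sketch (ours, not part of the original problem page): every drawn segment is parallel to the diagonal, which has length 5 by the 3-4-5 right triangle, and the k-th segment on either half of the rectangle is scaled by (168 - k)/168, so

```latex
\text{sum} \;=\; 2\sum_{k=1}^{167} 5\cdot\frac{168-k}{168} \;+\; 5
\;=\; \frac{10}{168}\cdot\frac{167\cdot 168}{2} \;+\; 5 \;=\; 835 + 5 \;=\; 840 .
```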
This problem is copyrighted by the American Mathematics Competitions.
Instructions for entering answers:
For questions or comments, please email email@example.com. | <urn:uuid:bba9a2d5-bbd2-4e61-a768-d1f443bc9bce> | 2.96875 | 103 | Tutorial | Science & Tech. | 64.215375 | 95,480,247 |
Temperature Measurements in a Negatively Buoyant Round Vertical Jet Issued in a Horizontal Crossflow
The problem considered here is the vertical discharge of round negatively buoyant jets through a horizontal crossflow. Laboratory experiments are performed in a flume: fresh water is emitted vertically through warm water. Temperature measurements are undertaken along verticals in the jet axis. It is found that the mean temperature values can be plotted as similarity diagrams. Some experimental correlations are deduced for the maximum height of jet rise. Turbulent temperature quantities do not appear to obey similarity in the ascending part of the jet. Asymmetry is observed in the various profiles. The intermittency zone was also investigated in some cases.
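For reference, the densimetric Froude number listed among the keywords is commonly defined for such discharges as (general definition, not quoted from this paper):

```latex
F_0 = \frac{U_0}{\sqrt{g' D}}, \qquad g' = g\,\frac{\rho_0 - \rho_a}{\rho_a},
```

where U_0 is the exit velocity, D the port diameter, ρ_0 the effluent density and ρ_a the ambient density.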
Keywords: Graphical Recording; Intermittency Factor; Densimetric Froude Number; Intermittency Region; Axial Trajectory
- 1.Badr, A.: Contribution à l’étude des jets radioactifs et des jets lourds émis en présence d’un courant traversier. Thèse de Docteur-Ingénieur. Institut National Polytechnique de Grenoble (1981).Google Scholar
- 2.Townsend, A.A.: The structure of Turbulent Shear Flow. Cambridge University Press (1956).Google Scholar
- 3.Wright, S.J.: Effects of Ambient Crossflow and Density Stratification on the Characteristic Behavior of Round Turbulent Buoyant Jets. W.M. Keck Laboratory of Hydraulics and Water Resources. California Institue of Technology. Pasadena, Report KH-R-36 (1977).Google Scholar
- 4.List, E.J. and Imberger, J.: Turbulent Entrainment in Buoyant Jets and Plumes. Journal of the Hydraulics Division, Proc. ASCE, HY9, pp. 1461–1474 (1973).Google Scholar
- 5.Chu, V.H.: Turbulent Dense Plumes in Laminar Crossflow. Journal of the Hydraulic Research, 13, n°3, pp. 263–279 (1975).Google Scholar
- 6.Stolzenbach, K.D, Adams, E.E.: Submerged Discharges of Dense Effluent. Proceedings of the 2nd International IAHR Symposium on Stratified Flows. Trondheim. Norway, vol. 2, pp. 832–844 (1980).Google Scholar
- 7.Antonia, R.A., Prabhu, A; Stephenson, S.E.: Conditionally Sampled Measurements in a Heated Turbulent Jet. Journal of Fluid Mechanics, Vol. 72, part 3, pp. 455–480 (1975).Google Scholar
- 8.Kotsovinos, N.E.: A study of the Entrainment and Turbulence in a Plane Buoyant Jet“. Ph.D.Thesis. California Institute of Technology, Pasadena. 1975.Google Scholar | <urn:uuid:8c2169bb-9709-4a59-9b0f-60bc3eecfddb> | 2.640625 | 641 | Academic Writing | Science & Tech. | 52.255551 | 95,480,282 |
- Laurentide Ice Sheet,
- marine isotope stage 3,
- Gulf of Mexico
A leading hypothesis to explain abrupt climate change during the last glacial cycle calls on fluctuations in the margin of the North American Laurentide Ice Sheet ( LIS), which may have routed fresh water between the Gulf of Mexico (GOM) and the North Atlantic, affecting North Atlantic Deep Water variability and regional climate. Paired measurements of delta O-18 and Mg/Ca of foraminiferal calcite from GOM sediments reveal five episodes of LIS meltwater input from 28 to 45 thousand years ago (ka) that do not match the millennial-scale Dansgaard-Oeschger warmings recorded in Greenland ice. We suggest that summer melting of the LIS may occur during Antarctic warming and likely contributed to sea level variability during marine isotope stage 3.
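For context, paired Mg/Ca and delta O-18 measurements are usually interpreted with an exponential Mg/Ca-temperature calibration of the general form (illustrative only; the constants are species-specific, and this is not necessarily the calibration used here):

```latex
\mathrm{Mg/Ca} = B\,e^{A\,T} \quad\Longrightarrow\quad T = \frac{1}{A}\,\ln\!\frac{\mathrm{Mg/Ca}}{B},
```

the calcification temperature T is then combined with the measured delta O-18 of the calcite to isolate the seawater delta O-18 signal, which tracks meltwater input and ice volume.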
Paleoceanography, v. 21, no. 1, article PA1006.
Available at: http://works.bepress.com/benjamin_flower/4/ | <urn:uuid:949f61fb-7c68-40a7-a335-a076aa40f5e4> | 3.0625 | 218 | Truncated | Science & Tech. | 35.335908 | 95,480,283 |
These data were obtained by specialists at the Institute of Animal Taxonomy and Ecology, Siberian Branch, Russian Academy of Sciences, who have been studying the behavior of different rodent species for years. In this case, they worked with water voles (Arvicola terrestris) and several lines of house mice.
Aggressiveness in males is inevitable, since they compete for females; the most peaceful male gets nothing. But is the fiercest male always the winner? To answer this question, the Novosibirsk researchers sorted the males by aggressiveness. The males were placed together, and the researchers counted the average number of aggressive acts relative to the total number of social interactions. Based on these observations, the males were divided into four groups: low-aggressive, moderately aggressive, aggressive, and highly aggressive. The researchers then checked how attractive the males from each group were to the opposite sex. When choosing mates, rodents are guided by smell: it is by smell that they assess the physical state of the candidates and their genetic peculiarities. In the experiment, a female was offered samples of bedding from different males; the selected male was taken to be the one whose bedding the female had investigated the longest. Aggressive (but not highly aggressive!) males turned out to be attractive to females. Representatives of the other classes were less popular: low-aggressive partners were of almost no interest to anyone, and the fiercest males came out at the bottom of the preference list.
From an evolutionary perspective, the females' choice is fully justified. Both low-aggressive and very aggressive males sire few offspring, because they do not know how to behave with a "lady". A water vole female, for example, needs to be prepared for mating: a male of normal aggressiveness and his female partner usually reach agreement within two weeks of being kept together, whereas males whose aggressiveness deviates from the optimal level in either direction succeed much more rarely.
The actual prolificacy of mice undoubtedly depends on the male's aggressiveness. With very aggressive fathers, pups die both before and after birth, and some are devoured by their fierce fathers. In addition, a male's aggressiveness towards a female harms her own maternal qualities and reduces actual prolificacy: if the male often bites the female, she is less attentive to her young, and the more the father bites, the less attentive the mother becomes. Interestingly, other aggressive actions do not affect maternal behaviour. By attacking or striking with the tail, males demonstrate their high social status, showing that they are "cool", and the females even like that; biting is another matter.
The researchers concluded that the degree of a male's aggressiveness is the main criterion by which female mice are guided when selecting a mating partner. The most attractive males are those with an optimal level of aggressiveness; it is they that possess the parental qualities so important to the survival of the young. Sluggish and especially fierce males make bad husbands and fathers. That is why females rarely choose them, they leave little offspring, and their genes gradually disappear from the population, which in this way maintains a constant level of aggressiveness among its members.
Nadezda Markina | alfa
18.07.2018 | Materials Sciences | <urn:uuid:8d30bee5-2b5d-4660-afcb-6a1c209208c2> | 2.640625 | 1,343 | Content Listing | Science & Tech. | 36.465116 | 95,480,285 |