The Particle at the End of the Universe
How the Hunt for the Higgs Boson Leads Us to the Edge of a New World (eBook, 2013)
"The Higgs boson ... is the key to understanding why mass exists and how atoms are possible. After billions of dollars and decades of effort by more than six thousand researchers at the Large Hadron Collider in Switzerland--a doorway is opening into the mind-boggling world of dark matter and beyond. Caltech physicist and acclaimed writer Sean Carroll explains both the importance of the Higgs boson and the ultimately human story behind the greatest scientific achievement of our time"--Publisher.
Publisher: New York, New York, USA : Plume, 2013.
Characteristics: 1 online resource (x, 353 pages, 16 unnumbered pages of plates) : illustrations (some color).
Statistics Definitions > Moment
If you do a casual Google search for “What is a Moment?”, you’ll probably come across something that states the first moment is the mean or that the second measures how wide a distribution is (the variance). Loosely, these definitions are right. Technically, a moment is defined by a mathematical formula that just so happens to equal formulas for some measures in statistics.
The sth moment = (x1^s + x2^s + x3^s + . . . + xn^s)/n.
This type of calculation is a power sum (a sum of sth powers), not a geometric series. You may have met sums like this in a college algebra class; if you didn't (or don't remember how to work one), don't fret too much. In most cases you won't have to perform the calculations yourself; you just need a general grasp of the meaning.
The 1st moment around zero for discrete distributions = (x1^1 + x2^1 + x3^1 + . . . + xn^1)/n
= (x1 + x2 + x3 + . . . + xn)/n.
This formula is identical to the formula to find the sample mean. You just add up all of the values and divide by the number of items in your data set. For continuous distributions, the formula is similar but involves an integral (from calculus):
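The integral that the sentence above points to can be reconstructed from the discrete version (standard notation, with f(x) the probability density function):

```latex
\mu'_s = \int_{-\infty}^{\infty} x^s \, f(x)\, dx ,
\qquad\text{so the 1st raw moment is}\qquad
\mu'_1 = \int_{-\infty}^{\infty} x \, f(x)\, dx = \mu .
```

The average of the sth powers simply becomes an integral of x^s weighted by the density.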
The 2nd moment around the mean = Σ(xi – μ)^2 / n.
The second moment around the mean is the (population) variance.
In practice, the first two moments are the ones used most often in statistics. However, more moments exist; the third and fourth also have statistical interpretations, and the concept of a "moment" itself is borrowed from physics:
The 3rd moment = (x1^3 + x2^3 + x3^3 + . . . + xn^3)/n
The third moment, taken about the mean and standardized, measures skewness.
The 4th moment = (x1^4 + x2^4 + x3^4 + . . . + xn^4)/n
The fourth moment, taken about the mean and standardized, measures kurtosis.
Higher-order moments (above the 4th) are difficult to estimate and equally difficult to describe in layman's terms. You're unlikely to come across any of them in elementary stats. The 5th moment, for example, is a measure of the relative importance of the tails versus the center (mode, shoulders) in causing skew: a high 5th moment means there is a heavy tail with little mode movement, while a low 5th moment means there is more change in the shoulders.
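The discrete formulas above translate directly into a few lines of Python. This is a sketch for illustration (the sample data and function names are ours, not from the article); it uses the population forms, dividing by n as in the text:

```python
def raw_moment(data, s):
    """sth moment about zero: the average of the sth powers."""
    return sum(x ** s for x in data) / len(data)

def central_moment(data, s):
    """sth moment about the mean."""
    m = raw_moment(data, 1)  # the 1st raw moment is the mean
    return sum((x - m) ** s for x in data) / len(data)

data = [2, 4, 4, 4, 5, 5, 7, 9]

mean = raw_moment(data, 1)                              # 1st moment -> 5.0
variance = central_moment(data, 2)                      # 2nd central moment -> 4.0
skewness = central_moment(data, 3) / variance ** 1.5    # standardized 3rd
kurtosis = central_moment(data, 4) / variance ** 2      # standardized 4th

print(mean, variance, skewness, kurtosis)
```

Note that skewness and kurtosis are conventionally reported as standardized central moments, which is why the 3rd and 4th central moments are divided by powers of the standard deviation here.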
For more than 100 years, scientists have speculated about the origin of high-energy cosmic rays, a rain of charged particles that bombard Earth from space at close to light speed.
Now a global team of hundreds of researchers says it thinks it has tracked down a source of some of the highest-energy cosmic rays: an unusual galaxy called a blazar, about four billion light-years from Earth in the constellation Orion. The findings were unveiled at a news conference streamed live online starting at 11 a.m. ET.
The researchers found the culprit by using telescopes around the world to follow the trail of a “ghost particle” called a high-energy neutrino that hit a massive underground detector in Antarctica last September.
Such particles “are really
This story was originally published on CBC News. To read the rest of this newsworthy story, please visit https://www.cbc.ca/news/technology/neutrinos-icecube-cosmic-rays-1.4742995?cmp=rss.
November 12, 2014 goes down in history. On this Wednesday, an unmanned probe landed on a comet nucleus for the first time ever. Philae is to remain there as a permanent research station to collect data and take measurements for at least 60 hours. On its way to 67P/Churyumov-Gerasimenko, Philae followed a precisely defined choreography.
Already on 8 November, the ground crew sent the computer sequence that controlled the landing to Philae via the orbiter. On Monday, the lander was switched on and heated. On the morning of 12 November, the mother spacecraft Rosetta hovered just over 22 kilometres above the powdery surface of the comet 67P / Churyumov-Gerasimenko.
Between 7.35 and 8.35 CET, the final 'go' was given by the ground control centre. Experts of the European Space Agency checked whether Rosetta's orbit was correct. Even though there were problems with the lander cold gas system, which was supposed to push Philae gently onto the comet surface after landing, the decision was made to separate as scheduled. Three preloaded springs manoeuvred Philae into space with a gentle nudge at 9.35 CET. The refrigerator-sized box drifted away from the mother ship.
At 10.03 CET the control centre ESOC in Darmstadt confirmed the successful separation. Approximately two hours after the separation, data transfer began. The lander sent signals to the orbiter, from which they travelled to Earth at the speed of light. Because the comet is flying through space at a distance of 500 million kilometres from our planet, the signals took 28 minutes and 20 seconds to reach us. They reached ESOC at 12.07 CET.
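The quoted delay can be sanity-checked from the distance alone. This quick check (ours, not from the article) uses the rounded 500-million-kilometre figure, so it lands a little under the reported value:

```python
# Light travel time for the rounded distance quoted in the article.
distance_km = 500e6          # Earth-comet distance (rounded in the article)
c_km_s = 299_792.458         # speed of light in km/s

delay_s = distance_km / c_km_s
minutes, seconds = divmod(delay_s, 60)
print(int(minutes), round(seconds))   # -> 27 48, a little under the quoted 28 min 20 s
```

The exact 28 minutes 20 seconds corresponds to a distance of about 510 million kilometres, consistent with the article's rounded figure.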
With this delay, the researchers and technicians gathered information on the status of Philae during the descent. Around 12.25 CET it was confirmed that the three landing legs and a sensor had been deployed. Moreover, the first pictures from the on-board camera CIVA were expected to arrive soon, along with measurements from instruments such as the radio-sounding experiment CONSERT.
During the descent, Philae itself could not be controlled from Earth. Rather, the 100-kilogram spacecraft drifted in free fall towards the comet's nucleus at roughly walking pace, on the order of one metre per second. The landing area (recently christened Agilkia after an ESA naming contest) could only be targeted imprecisely; the landing ellipse had an area of more than half a square kilometre. Thanks to the precise separation manoeuvre, Philae was exactly on course, and the diameter of the landing ellipse was reduced to 400 metres.
Roughly two hours after separation, Rosetta's onboard OSIRIS camera system was looking at the departing lander. "Philae is on a good way," Hermann Böhnhardt from the MPI for Solar System Research, lead scientist of the landing mission, commented. As evidenced by the most recent image, the lander had also successfully unfolded its legs. "To be able to watch this historic landing through the eyes of our camera is a feeling that cannot be described in words," says Max Planck Researcher Holger Sierks, leader of the OSIRIS team. "Since Rosetta's launch more than ten years ago, we have not seen any images of the lander. Now it is floating freely in space on its way to the comet's surface".
16.34 h CET: Touchdown, Philae has landed! When the probe touched down at roughly three and a half kilometres per hour, two explosive harpoons were fired into the surface. In addition, screw-like tips under the footpads of the lander's three legs drilled into the comet's crust. A shock-absorbing mechanism in the central cavity of the landing gear absorbed the energy and ensured that Philae did not bounce back into space.
At 17.03 CET, signals were received by ground control, and it was clear soon afterwards: the landing was successful! The touchdown signal was triggered at the moment the central element of the landing gear was pushed upwards by the contact pressure. Data about the release of the harpoons also reached Earth quite rapidly.
Next, the first pictures from the panoramic camera appeared on the monitors. Was the horizon of the comet visible? The greatest danger for Philae came from the terrain itself. While the landscape appears very flat, boulders on the surface or a slope could have considerably affected the landing.
If one of the legs of the probe had landed on a rock or a slope, and if the slope had been inclined by more than 30 degrees, Philae could have rolled off - which would probably have meant the end of the mission.
Now the first science sequence will begin. First, experts will analyse the illumination conditions to gain crucial evidence about the energy supply, because the lander's batteries are charged via solar power. Afterwards, all of the on-board instruments will be put into operation.
The "hot phase", in which Philae is to measure and collect data, should last at least 60 hours. Secretly, engineers and scientists hope that the lander will survive significantly longer. Actually it is supposed to work up to a distance of about 300 million kilometres from the sun.
This point will be reached at the end of March 2015. After that, things should come to an end: if it is not hit and hurled into space by a dust fountain, the lander will overheat or fail from lack of energy as its solar panels become covered in dust. Until then, Philae could have revolutionized our picture of comets.
Dr. Birgit Krummheuer
Press and Public Relations
Max Planck Institute for Solar System Research, Göttingen
Phone: +49 551 384979-462
Helmut Hornung | Max-Planck-Institut
Optogenetics Without the Genetics?
Researchers built this thin layer of silicon lace to modulate neural signals when activated by light. Credit: Yuanwen Jiang and Bozhi Tian
Over the past five years, University of Chicago chemist Bozhi Tian has been figuring out how to control biology with light.
A long-term science goal is to build devices that serve as the interface between researcher and body, both as a way to understand how cells talk among each other and within themselves and, eventually, as a treatment for brain or nervous system disorders by stimulating nerves to fire or limbs to move. Silicon—a versatile, biocompatible material used in both solar panels and surgical implants—is a natural choice.
In a paper published April 30 in Nature Biomedical Engineering, Tian’s team laid out a system of design principles for working with silicon to control biology at three levels—from individual organelles inside cells to tissues to entire limbs. The group has demonstrated each in cells or mice models, including the first time anyone has used light to control behavior without genetic modification.
“We want this to serve as a map, where you can decide which problem you would like to study and immediately find the right material and method to address it,” said Tian, an assistant professor in the Department of Chemistry.
The scientists’ map lays out best methods to craft silicon devices depending on both the intended task and the scale—ranging from inside a cell to a whole animal.
For example, to affect individual brain cells, silicon can be crafted to respond to light by emitting a tiny ionic current, which encourages neurons to fire. But in order to stimulate limbs, scientists need a system whose signals can travel farther and are stronger—such as a gold-coated silicon material in which light triggers a chemical reaction.
The mechanical properties of the implant are important, too. Say researchers would like to work with a larger piece of the brain, like the cortex, to control motor movement. The brain is a soft, squishy substance, so they’ll need a material that’s similarly soft and flexible, but can bind tightly against the surface. They’d want thin and lacy silicon, say the design principles.
The team favors this method because it doesn’t require genetic modification or a power supply wired in, since the silicon can be fashioned into what are essentially tiny solar panels. (Many other forms of monitoring or interacting with the brain need to have a power supply, and keeping a wire running into a patient is an infection risk.)
They tested the concept in mice and found they could stimulate limb movements by shining light on brain implants. Previous research tested the concept in neurons.
“We don’t have answers to a number of intrinsic questions about biology, such as whether individual mitochondria communicate remotely through bioelectric signals,” said Yuanwen Jiang, the first author on the paper, then a graduate student at UChicago and now a postdoctoral researcher at Stanford. “This set of tools could address such questions as well as pointing the way to potential solutions for nervous system disorders.”
This article has been republished from materials provided by the University of Chicago. Note: material may have been edited for length and content. For further information, please contact the cited source.
Reference: Jiang, Y., Li, X., Liu, B., Yi, J., Fang, Y., Shi, F., … Tian, B. (2018). Rational design of silicon structures for optically controlled multiscale biointerfaces. Nature Biomedical Engineering, 1. https://doi.org/10.1038/s41551-018-0230-1
Authors: Sjaak Uitterdijk
Monthly records of the CO2 concentration in the atmosphere and of the global temperature show a surprisingly periodic behaviour and a strong mutual correlation. This article describes why this phenomenon is the most convincing argument for the validity of the greenhouse model, which underlies the worldwide climate problem. Even the most critical climate sceptics can no longer deny the greenhouse effect.
Comments: 7 Pages.
[v1] 2017-11-03 07:42:14
Unique-IP document downloads: 61 times
The measurements reveal in detail where the seals go on their winter feeding trips, where they find food and where they don’t, and help explain why some populations have remained stable since 1950 while others have declined.
The results are the subject of a paper published today in the prestigious Proceedings of the National Academy of Sciences in the USA.
The work was carried out by an international team including scientists from France, the United States and the United Kingdom and included Australian researchers Professor Mark Hindell from the University of Tasmania’s Animal Wildlife Research Unit, Dr Steve Rintoul, from the Antarctic Climate and Ecosystem Cooperative Research Centre and CSIRO (through the Wealth from Oceans Flagship), and Professor Nathan Bindoff from the University of Tasmania and the Flagship. Lead author of the paper is Dr Martin Biuw, from the Sea Mammal Research Unit at the University of St Andrews in the UK.
Until recently, the response of large marine predators to environmental variability had been almost impossible to observe directly.
Sensors were deployed on 85 elephant seals from key colonies in January and February 2003 and lasted throughout most of the Antarctic winter season. The longest track was 326 days and up to 30,000 profiles of temperature and salinity were obtained.
“Most of what we know about these seals has been based on observations made when the seals haul out on sub Antarctic islands to breed, such as the number and physical condition of the animals,” Professor Hindell said. “In particular, we have had no way to study how the seals interact with their environment and the prey within it.”
By monitoring changes in the rate at which seals drift up or down during passive “drift dives,” the scientists could determine where the seals were gaining fat (and becoming more buoyant) and where food was harder to find and the animals lost fat.
“These measurements have allowed us for the first time to make circumpolar maps of the areas that provide good foraging for seals, and areas where conditions are less favourable for them,” Professor Hindell said.
The oceanographic measurements collected by the seals provided a detailed view of the feeding behaviour of the seals in relation to oceanographic features. “The measurements of temperature and salinity collected by the seals show that the seals target very specific water bodies,” Dr Rintoul said.
“By simultaneously recording movements, dive behaviour and oceanographic conditions, the new sensors allow researchers to examine in detail how elephant seals respond to changes in ocean conditions.”

“An intriguing surprise was that the feeding preferences of the Atlantic seals were very different from seals tagged on Kerguelen and Macquarie Islands, in the Indian and Pacific sectors of the Southern Ocean,” Dr Rintoul said.
The Atlantic seals preferred the open ocean waters of the Antarctic Circumpolar Current. The Kerguelen and Macquarie seals spent the winter feeding season in the sea ice pack, near the Antarctic continent.
“We think the fact that these seal populations have different foraging strategies may explain why seal numbers in the Indian and Pacific declined between the 1950s and 1970s, while Atlantic populations remained stable,” said Professor Hindell.
“The Indian and Pacific seals have to travel more than 1000 km further during their winter migration than Atlantic seals. The extra energy expended would mean less energy for breeding in years of low food abundance,” he said.
“Studies suggest that the amount of sea ice declined during the 1950s to 1970s off east Antarctica; since the Indian and Pacific seals prefer to feed in the sea ice zone, the decline in sea ice may have contributed to the decline in those seal populations,” added Dr Rintoul.
Support for the Australian component of the research was provided by the Australian Research Council, the Australian Antarctic Science Grants Scheme, CSIRO (through the Wealth from Oceans Flagship) and the Antarctic Climate and Ecosystems Cooperative Research Centre.
Background: Secret life of Elephant Seals
By providing detailed information on how animals at the top of the Southern Ocean food chain respond to variability in the ocean, this study will guide development of effective strategies for management of living resources in the Southern Ocean and predictions of how animals will respond to climate change.
Oceanographic sensors with satellite transmitters were deployed on southern elephant seals (Mirounga leonina) in three locations – Macquarie Island south of Tasmania, the French sub-Antarctic island of Kerguelen and South Georgia in the Atlantic.
The tags are glued to the fur of the elephant seals before they leave on long foraging journeys. The tags are retrieved when the animals return to the same beach to moult, which is up to 10 months later.
The tags record the position of the animal, monitor its diving cycle with a pressure sensor, and record water temperature and salinity – data which they upload to satellite while they are at the surface.
Southern elephant seals can dive to 1,500 metres but more commonly frequent depths of 200-500 metres.
Craig Macaulay | EurekAlert!
Supermassive stars may have been formed from old star clusters
A team of international astrophysicists from institutes in seven countries, including Dr Martin Krause at the University of Hertfordshire, propose a solution to a problem that has perplexed scientists for more than 50 years: why are the stars in globular clusters made of material different from that of other stars found in the Milky Way?
In a study published by Monthly Notices of the Royal Astronomical Society the team introduce a new actor to the equation that could solve the problem – a supermassive star.
The Milky Way galaxy
The Milky Way galaxy hosts over 150 old globular clusters, each containing hundreds of thousands of stars densely packed together and held by gravity. These stars are almost as old as the Universe. Since the 1960s, it has been known that most stars in these clusters contain chemical elements in different proportions from all other stars in the Milky Way. These elements could not have been produced in the stars themselves, because the required temperatures are about 10 times higher than the temperatures of the stars in the globular clusters.
The scientists argue that a supermassive star, with a mass that is tens of thousands times the mass of the Sun, formed at the same time as the globular clusters. At that time, globular clusters were filled with dense gas out of which the stars were forming. As the stars collect more and more gas, they get so close to each other that they could physically collide and form a supermassive star in a runaway collision process. The supermassive star was hot enough to produce all the observed elements and “pollute” the other stars in the cluster with the peculiar elements we observe today.
'This makes the new model extremely viable'
Co-author, Dr Martin Krause from the University of Hertfordshire, said: “Many models have been suggested to solve this problem. They have more or less all been ruled out by observations. A supermassive star gets the proportions of the elements right relatively easily, which is the crucial observation that needs to be explained. It is also straightforward to understand why the effect occurs in massive star clusters, as the smaller ones just don’t have enough density to produce a supermassive star. In addition, small star clusters don’t have the mass to bind gas, so ejecta from massive stars are easily lost from the cluster. This makes the new model extremely viable."
The team proposes various ways to test this new model of globular clusters and supermassive star formation with existing and upcoming telescopes, which can peer deep into the regions where the globular clusters formed, when the Universe was very young.
Image credit: NASA, ESA. Hubble Space Telescope image of the young massive star cluster R136 in the 30 Doradus star forming region in the Large Magellanic Cloud. The core of this cluster contains several very massive stars with masses of several 100 times the mass of the Sun, which could have formed by stellar collisions.
posted by Mathslover Please help
Consider a glass full of water, with mass density ρ = 1,000 kg/m^3 and water height h = 20 cm. There's a circular hole in the bottom of the glass of radius r. The maximum pressure that pushes the water back into the hole is roughly (on the order of) p = σ/r, where σ = 0.072 N/m is the water's surface tension. This extra pressure comes from the curvature of the water surface, and it tends to flatten out the surface.
Estimate the largest possible radius of the hole in μm such that water doesn't drip out of the glass.
Details and assumptions
The gravitational acceleration is g = −9.8 m/s^2 and the glass is placed vertically.
Neglect any other effects that can influence the pressure from other external sources.
net pressure on hole=weight-pushback
when net pressure=zero, then
= .46/9.6 millimeters=46.9 micrometers.
Mathslover Please help
Sir but 46/9.6 is 4.79 tell the right answer
1000*9.8*0.2 = 0.072/r
1960*r = 0.072
r = 3,673 x 10^(-5) m
so r = 36,73 μm
sorry i mean
r = 3.673 x 10^(-5) m
so r = 36.73 μm
haha still confused with . and , :)
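The corrected numbers in this thread can be reproduced with a short script (a sketch of the pressure balance ρgh = σ/r used above):

```python
# Balance the hydrostatic pressure at the hole against the maximum
# capillary back-pressure p = sigma / r, as set up in the thread:
#   rho * g * h = sigma / r   =>   r = sigma / (rho * g * h)
rho = 1000.0      # water density, kg/m^3
g = 9.8           # gravitational acceleration magnitude, m/s^2
h = 0.20          # water column height, m
sigma = 0.072     # surface tension of water, N/m

r = sigma / (rho * g * h)      # largest hole radius, in metres
print(r * 1e6)                 # -> about 36.73 micrometres
```

This confirms the final answer in the thread: r ≈ 3.673 x 10^(-5) m, i.e. about 36.73 μm, not 46.9 μm as first suggested.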
Climate Change Biology
Climate Change Biology, 2e examines the evolving discipline of human-induced climate change and the resulting shifts in the distributions of species and the timing of biological events. The text focuses on understanding the impacts of human-induced climate change by drawing on multiple lines of evidence, including paleoecology, modeling, and current observation. This revised and updated second edition emphasizes the impacts of human adaptation to climate change on nature and places greater emphasis on natural processes, cycles, and specific elements. With four new chapters, an increased emphasis on tools for critical thinking, and a new glossary and acronym appendix, Climate Change Biology, 2e is the ideal overview of this field.
- Expanded treatment of processes and cycles
- Additional exercises and elements to encourage independent and critical thinking
- Increased online supplements, including mapping activities and suggested labs and classroom activities
- Electronic book text | 464 pages
- 191 x 235mm
- 21 Nov 2014
- Elsevier Science Publishing Co Inc
- Academic Press Inc
- San Diego, United States
- 2nd edition
- Approx. 198 illustrations (198 in full color)
About Lee Hannah
Lee Hannah is Senior Researcher in Climate Change Biology at the Betty and Gordon Moore Center for Science and Oceans at Conservation International (CI). In keeping with his interest in the role of climate change in conservation planning and methods of corridor design, he heads CI's efforts to develop conservation responses to climate change. He works collaboratively with the Bren School at UC Santa Barbara to model climate impacts on species in California, and with the National Botanical Institute in Cape Town, South Africa to model biotic change resulting from global warming in biodiversity hot spots in that region. He has written on the global extent of wilderness and the role of communities in the management of protected areas.
Table of contents
1: Climate Change Biology, 2: Climate Change and the Climate System, 3: Changes in species ranges, 4: Changes in Timing and Process: Phenology, 5: Ecosystem Impacts, 6: Past Terrestrial Response, 7: Marine Ecosystem Changes, 8: Past Freshwater Changes, 9: Extinctions, 10: Insights from Experimentation, 11: Modeling Species and Ecosystem Response, 12: Estimating Extinction Risk from Climate Change, 13: Ecosystem Service and Human Linkages, 14: International Policy and Action, 15: Conservation Strategies: Protected Areas, Working Landscapes and Species, 16: Reducing Greenhouse Gas Emissions, 17: Carbon Sinks and Sources, 18: Assessing Risks, Designing Solutions | <urn:uuid:bc172365-4935-46c2-b6ff-e5c76b3aa0b7> | 3 | 525 | Product Page | Science & Tech. | 3.69 | 95,623,970 |
Water on Earth is in a continuous state of change. The water cycle includes processes...
Natural gas and petroleum are among the most important sources of energy and raw materials today.
A glacier is a large body of ice that forms from snow, and is in constant, slow motion.
Lateral compressive forces cause rocks to form folds. This is how fold mountains are formed.
An earthquake is one of the most devastating natural phenomena.
In the past, a number of geologists tried to explain why the outlines of continents seem to fit...
Above a certain altitude snow does not melt, not even in summer.
The dissolution of limestone results in the development of karst formations. | <urn:uuid:2c0f9e4c-ee17-4815-afaa-29e67b371d91> | 3.234375 | 145 | Content Listing | Science & Tech. | 56.324068 | 95,623,991 |
From Science Daily , Published 28 January 2015
Using satellite images to study changing patterns of surface water is a powerful tool for identifying conservationally important "stepping stone" water bodies that could help aquatic species survive in a drying climate, a UNSW Australia-led study shows.
The approach has been applied to the Swan Coastal Plain near Perth in Western Australia, which has more than 1500 water bodies and is one of 25 designated biodiversity hotspots on the globe.
Scientists led by UNSW's Dr. Mirela Tulbure analyzed 13 years of Landsat images of the region, taken from 1999 to 2011, to gain a picture of the changing interconnectivity of the many water bodies on the plain over time.
"Aquatic systems are some of the most threatened ecosystems in the world, because they are affected by climate change and human factors such as urban expansion and use of ground water," says Dr. Tulbure.
"These factors not only increase stress on the habitat of many water species, they reduce the opportunities for water-dependent organisms to disperse to neighboring water bodies to breed and maintain resilient populations."
The Swan Coastal Plain was chosen as a study site because it has lost more than 70 per cent of its surface water bodies since European settlement, and the remaining ones are strongly affected by rapid urban development and climate change.
"Our results showed a highly variable pattern of connectivity between water bodies that changed with the seasons. Overall, there was a decline in connectivity during the 13-year period, with potentially negative consequences for the species that have a limited capacity to move between water bodies," says Dr. Tulbure.
"But we also identified stepping stone water bodies that are vital for connecting distant habitats because of their position in the landscape. This approach is a cost-effective way to prioritize which water bodies require better management and conservation to assist creatures with different travel capabilities, such as turtles and water birds."
The researchers recommend that the stepping stones on the Swan Coastal Plain, which are near the Peel-Harvey Estuary and in some national parks, be targeted for conservation.
The team is now applying their approach to the Murray-Darling Basin, and Dr. Tulbure says it could be applied to other habitat networks as well.
Precipitation MCQs Quiz Worksheet PDF Download
Practice precipitation MCQs in this earth science test for online course learning and test prep. The weather and climate quiz has multiple-choice questions (MCQs), including a precipitation test to learn from.
The earth science practice test MCQ "Extremely cold winter weather is brought to the US by" offers the options polar, continental polar, maritime and tropical, building problem-solving skills for competitive exams, viva prep and interview questions, with an answer key. Use the free earth science revision notes to learn the precipitation quiz with MCQs and to find question answers through online learning tests.
MCQs on Precipitation Quiz PDF Download
MCQ. Extremely cold winter weather is brought to the US by
- continental polar
MCQ. Water falling on the earth's surface in any form is called
MCQ. The area which determines the temperature and moisture of an air mass is called
- source region
- complex zone
- air region
- mass region
MCQ. Hail is the falling of
MCQ. Maritime polar over north pacific is | <urn:uuid:c493d6bd-1797-4275-92f7-8ab0254e6699> | 3.140625 | 213 | Product Page | Science & Tech. | 45.257211 | 95,624,081 |
SQLTeX is a preprocessor to enable the use of SQL statements in LaTeX.
It is a perl script that reads an input file containing the SQL commands,
and writes a LaTeX file that can be processed with your LaTeX package.
The SQL commands will be replaced by their values. It is possible to select
a single field for substitution in your LaTeX document, or to use it
as input in another SQL command.
When an SQL command returns multiple fields and/or rows, the values can only
be used for substitution in the document.
Before installing SQLTeX, you need to have it. The latest version can always
be found at http://software.oveas.net/sqltex.
The download consists of this readme, documentation in LaTeX and HTML format,
an installation script for Unix (install), the Perl script SQLTeX, the
default replace- and configuration files, and the Windows executable.
On a Unix system, make sure the file install is executable by issuing
bash$ chmod +x install
then execute it with:
bash$ ./install
The script will ask in which directory SQLTeX should be installed. If you are
logged in as `root', the default will be /usr/local/bin; otherwise a different
default is suggested.
Make sure the directory where SQLTeX is installed is in your path.
For other operating systems, there is no install script; you will have to install
the files manually.
On OpenVMS it would be something like:
$ COPY SQLTEX.PL SYS$SYSTEM:
$ COPY SQLTEX.CFG SYS$SYSTEM:
$ COPY SQLTEX_R.DAT SYS$SYSTEM:
$ SET FILE/PROTECTION=(W:R) SYS$SYSTEM:SQLTEX*.*
However, on OpenVMS you also need to define the command SQLTEX by setting a symbol,
either in the LOGIN.COM for all users who need to execute this script, or in some
group-- or system wide login procedure, with the command:
$ SQLTEX :== "PERL SYS$SYSTEM:SQLTEX.PL"
For more information, please refer to the LaTeX documentation.
* Perl (http://perl.org/)
* Perl-DBI (http://dbi.perl.org/)
* The DBI driver for your database (see: http://search.cpan.org/search?query=DBD%3A%3A&mode=module)
Note for MAC users:
If DBI and the database driver are not yet installed, Xtools needs to be
installed in advance, since gcc is not available in a standard install
of Mac OS X.
Note for Windows users:
This distribution contains an .EXE file that was generated using PAR::Packer with
The files SQLTeX.EXE, SQLTeX.cfg and SQLTeX_r.dat must all be placed manually in
the same directory of your choice.
Ingo Reich for the comment on Mac OS
Johan W. Klüwer for verifying the SyBase support
Paolo Cavallini for adding PostgreSQL support
The SQLTeX project is available from GitHub:
The latest stable release is always available at
For bugs, questions and comments, please use the issue tracker available at
This software is subject to the terms of the LaTeX Project Public License;
Copyright (c) 2001-2016 - Oscar van Eijk, Oveas Functionality Provider | <urn:uuid:54b1961a-635b-484e-a116-b4fd0411562f> | 2.546875 | 881 | Documentation | Software Dev. | 72.521312 | 95,624,092 |
In astronomy, extinction is the absorption and scattering of electromagnetic radiation by dust and gas between an emitting astronomical object and the observer. Interstellar extinction was first documented as such in 1930 by Robert Julius Trumpler. However, its effects had been noted in 1847 by Friedrich Georg Wilhelm von Struve, and its effect on the colors of stars had been observed by a number of individuals who did not connect it with the general presence of galactic dust. For stars that lie near the plane of the Milky Way and are within a few thousand parsecs of the Earth, extinction in the visual band of frequencies (photometric system) is on the order of 1.8 magnitudes per kiloparsec.
For Earth-bound observers, extinction arises both from the interstellar medium (ISM) and the Earth's atmosphere; it may also arise from circumstellar dust around an observed object. Strong extinction in earth's atmosphere of some wavelength regions (such as X-ray, ultraviolet, and infrared) is overcome by the use of space-based observatories. Since blue light is much more strongly attenuated than red light, extinction causes objects to appear redder than expected, a phenomenon referred to as interstellar reddening.
In astronomy, interstellar reddening is a phenomenon associated with interstellar extinction where the spectrum of electromagnetic radiation from a radiation source changes characteristics from that which the object originally emitted. Reddening occurs due to the light scattering off dust and other matter in the interstellar medium. Interstellar reddening is a different phenomenon from redshift, which is the proportional frequency shifts of spectra without distortion. Reddening preferentially removes shorter wavelength photons from a radiated spectrum while leaving behind the longer wavelength photons (in the optical, light that is redder), leaving the spectroscopic lines unchanged.
In most photometric systems, filters (passbands) are used; magnitude readings taken through them may be corrected for terrestrial factors such as latitude and humidity. Interstellar reddening is quantified by the "color excess", defined as the difference between an object's observed color index and its intrinsic color index (sometimes referred to as its normal color index). The latter is the theoretical value it would have if unaffected by extinction. In the first such system, the UBV photometric system devised in the 1950s and its most closely related successors, the object's color excess E(B−V) is related to its B−V color (calibrated blue minus calibrated visual) by:

E(B−V) = (B−V)observed − (B−V)intrinsic
For an A0-type main-sequence star (these sit near the middle of the main sequence in peak wavelength and temperature) the color indices are calibrated at 0, based on an intrinsic reading of such a star (±0.02 exactly, depending on which spectral point, i.e. which precise passband within the abbreviated color name, is in question; see color index). At least two and up to five measured passbands in magnitude are then compared by subtraction: U, B, V, I or R, during which the color excess from extinction is calculated and deducted. The names of the four sub-indices (R minus I, etc.) and the order of subtraction of the recalibrated magnitudes run from right to immediate left within this sequence.
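As a minimal illustration of the color-excess definition above (the numbers here are hypothetical, not measurements):

```python
# Color excess: observed color index minus intrinsic color index.
b_minus_v_observed = 0.45    # observed B-V of a star (hypothetical)
b_minus_v_intrinsic = 0.00   # intrinsic B-V, e.g. an A0 V star

e_b_v = b_minus_v_observed - b_minus_v_intrinsic
print(e_b_v)   # color excess E(B-V), in magnitudes
```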
Interstellar reddening occurs because interstellar dust absorbs and scatters blue light waves more than red light waves, making stars appear redder than they are. This is similar to the effect seen when dust particles in the atmosphere of Earth contribute to red sunsets.
Broadly speaking, interstellar extinction is strongest at short wavelengths, generally observed by using techniques from spectroscopy. Extinction results in a change in the shape of an observed spectrum. Superimposed on this general shape are absorption features (wavelength bands where the intensity is lowered) that have a variety of origins and can give clues as to the chemical composition of the interstellar material, e.g. dust grains. Known absorption features include the 2175 Å bump, the diffuse interstellar bands, the 3.1 μm water ice feature, and the 10 and 18 μm silicate features.
In the solar neighborhood, the rate of interstellar extinction in the Johnson–Cousins V-band (visual filter), averaged at a wavelength of 540 nm, is usually taken to be 0.7–1.0 mag/kpc, simply an average due to the clumpiness of interstellar dust. In general, this means that a star's brightness in the V-band, viewed from a good night-sky vantage point on earth, is reduced by about a factor of 2 for every kiloparsec (3,260 light-years) of distance from us.
The amount of extinction can be significantly higher than this in specific directions. For example, some regions of the Galactic Center are awash with obvious intervening dark dust from our spiral arm (and perhaps others) and are themselves embedded in a bulge of dense matter, causing as much as 30 magnitudes of extinction in the optical or more, meaning that less than 1 optical photon in 10¹² passes through. This results in the so-called zone of avoidance, where our view of the extra-galactic sky is severely hampered, and background galaxies, such as Dwingeloo 1, were only discovered recently through observations in radio and infrared.
The general shape of the ultraviolet through near-infrared (0.125 to 3.5 μm) extinction curve (plotting extinction in magnitude against wavelength, often inverted), as seen from our vantage point toward other objects in the Milky Way, is fairly well characterized by the stand-alone parameter of relative visibility (of such visible light) R(V) (which differs along different lines of sight), but there are known deviations from this characterization. Extending the extinction law into the mid-infrared wavelength range is difficult due to the lack of suitable targets and various contributions by absorption features.
R(V) compares aggregate and particular extinctions. It is A(V)/E(B-V). Restated, it is the total extinction, A(V) divided by the selective total extinction (A(B)-A(V)) of those two wavelengths (bands). A(B) and A(V) are the total extinction at the B and V filter bands. Another measure used in the literature is the absolute extinction A(λ)/A(V) at wavelength λ, comparing the total extinction at that wavelength to that at the V band.
R(V) is known to be correlated with the average size of the dust grains causing the extinction. For our own galaxy, the Milky Way, the typical value for R(V) is 3.1, but it is found to vary considerably across different lines of sight. As a result, when computing cosmic distances it can be advantageous to move to star data from the near-infrared (of which the filter or passband Ks is quite standard), where the variations and amount of extinction are significantly smaller; similar ratios for R(Ks), 0.49±0.02 and 0.528±0.015, were found by independent groups. Those two more modern findings differ substantially from the commonly referenced historical value ≈0.7.
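A sketch of using R(V) to recover the total extinction from a measured color excess; the Milky Way value R(V) = 3.1 is taken from the text, while the color excess is a hypothetical input:

```python
# Total visual extinction from the color excess via R(V) = A(V)/E(B-V).
R_V = 3.1       # typical Milky Way value quoted in the text
e_b_v = 0.45    # hypothetical color excess E(B-V), magnitudes

a_v = R_V * e_b_v
print(round(a_v, 3))   # total V-band extinction A(V), ≈ 1.395 mag
```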
The relationship between the total extinction, A(V) (measured in magnitudes), and the column density of neutral hydrogen atoms, NH (usually measured in cm⁻²), shows how the gas and dust in the interstellar medium are related. From studies using ultraviolet spectroscopy of reddened stars and X-ray scattering halos in the Milky Way, Predehl and Schmitt found the relationship between NH and A(V) to be approximately:

NH / A(V) ≈ 1.8×10²¹ atoms cm⁻² mag⁻¹
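A rough sketch of the gas-to-dust conversion; the constant used is the commonly quoted order-of-magnitude value of about 1.8×10²¹ atoms cm⁻² mag⁻¹, and both it and the input extinction should be treated as assumptions for illustration:

```python
# Gas-to-dust relation: N_H ≈ C * A(V), with C an assumed constant of
# roughly 1.8e21 atoms cm^-2 per magnitude of visual extinction.
C = 1.8e21    # atoms cm^-2 mag^-1 (assumed, order-of-magnitude value)
a_v = 1.0     # hypothetical total V-band extinction, magnitudes

n_h = C * a_v
print(f"{n_h:.2e}")   # hydrogen column density, atoms cm^-2
```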
Astronomers have determined the three-dimensional distribution of extinction in the "solar circle" (our region of our galaxy), using visible and near-infrared stellar observations and a model of distribution of stars. The dust causing extinction mainly lies along the spiral arms, as observed in other spiral galaxies.
Measuring extinction towards an object
To measure the extinction curve for a star, the star's spectrum is compared to the observed spectrum of a similar star known not to be affected by extinction (unreddened). It is also possible to use a theoretical spectrum instead of the observed spectrum for the comparison, but this is less common. In the case of emission nebulae, it is common to look at the ratio of two emission lines which should not be affected by the temperature and density in the nebula. For example, the ratio of hydrogen alpha to hydrogen beta emission is always around 2.85 under a wide range of conditions prevailing in nebulae. A ratio other than 2.85 must therefore be due to extinction, and the amount of extinction can thus be calculated.
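The Balmer-decrement method described above can be sketched as follows. The k coefficients are extinction-law values at Hα and Hβ, assumed here to be roughly those of an R(V) = 3.1 law, and the observed line ratio is hypothetical:

```python
import math

# Reddening from the Balmer decrement. The intrinsic Halpha/Hbeta ratio
# is ~2.85 under a wide range of nebular conditions; the k values are
# extinction-law coefficients (assumed, approximate for R_V = 3.1).
intrinsic_ratio = 2.85
k_halpha = 2.53   # assumed
k_hbeta = 3.61    # assumed

observed_ratio = 3.5   # hypothetical measurement

e_b_v = 2.5 / (k_hbeta - k_halpha) * math.log10(observed_ratio / intrinsic_ratio)
print(round(e_b_v, 3))   # implied color excess E(B-V), magnitudes
```

A ratio above 2.85 yields a positive color excess, i.e. some extinction along the line of sight.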
The 2175-angstrom feature
One prominent feature in measured extinction curves of many objects within the Milky Way is a broad 'bump' at about 2175 Å, well into the ultraviolet region of the electromagnetic spectrum. This feature was first observed in the 1960s, but its origin is still not well understood. Several models have been presented to account for this bump which include graphitic grains with a mixture of PAH molecules. Investigations of interstellar grains embedded in interplanetary dust particles (IDP) observed this feature and identified the carrier with organic carbon and amorphous silicates present in the grains.
Extinction curves of other galaxies
The form of the standard extinction curve depends on the composition of the ISM, which varies from galaxy to galaxy. In the Local Group, the best-determined extinction curves are those of the Milky Way, the Small Magellanic Cloud (SMC) and the Large Magellanic Cloud (LMC).
In the LMC, there is significant variation in the characteristics of the ultraviolet extinction, with a weaker 2175 Å bump and stronger far-UV extinction in the region associated with the LMC2 supershell (near the 30 Doradus starbursting region) than seen elsewhere in the LMC and in the Milky Way. In the SMC, more extreme variation is seen, with no 2175 Å bump and very strong far-UV extinction in the star-forming Bar, and fairly normal ultraviolet extinction in the more quiescent Wing.
This gives clues as to the composition of the ISM in the various galaxies. Previously, the different average extinction curves in the Milky Way, LMC, and SMC were thought to be the result of the different metallicities of the three galaxies: the LMC's metallicity is about 40% of that of the Milky Way, while the SMC's is about 10%. Finding extinction curves in both the LMC and SMC which are similar to those found in the Milky Way and finding extinction curves in the Milky Way that look more like those found in the LMC2 supershell of the LMC and in the SMC Bar has given rise to a new interpretation. The variations in the curves seen in the Magellanic Clouds and Milky Way may instead be caused by processing of the dust grains by nearby star formation. This interpretation is supported by work in starburst galaxies (which are undergoing intense star formation episodes) that their dust lacks the 2175 Å bump.
Atmospheric extinction gives the rising or setting Sun an orange hue and varies with location and altitude. Astronomical observatories generally are able to characterise the local extinction curve very accurately, to allow observations to be corrected for the effect. Nevertheless, the atmosphere is completely opaque to many wavelengths requiring the use of satellites to make observations.
This extinction has three main components: Rayleigh scattering by air molecules, light scattering by particulates, and molecular absorption. Molecular absorption is often referred to as telluric absorption, as it is caused by the Earth (telluric is a synonym for terrestrial). The most important sources of telluric absorption are molecular oxygen and ozone, which absorb strongly in the near-ultraviolet, and water, which absorbs strongly in the infrared.
The amount of such extinction is lowest at the sky's zenith and greatest near the horizon. A star is therefore best observed near its greatest celestial altitude; reaching it requires the right hour of the day (when the star crosses the observer's local meridian), a favorable declination (i.e. similar to the observer's latitude), and the right point in the seasons, set by the earth's annual cycle of axial tilt. Extinction is approximated by multiplying the standard atmospheric extinction curve (plotted against each wavelength) by the mean airmass calculated over the duration of the observation. A dry atmosphere reduces infrared extinction significantly.
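A minimal sketch of the airmass correction, assuming the simple plane-parallel approximation X ≈ sec(z) and an illustrative V-band extinction coefficient of 0.2 mag per airmass (the coefficient is site-dependent and assumed here):

```python
import math

# Atmospheric extinction grows with airmass X ~ sec(zenith angle).
k = 0.2                    # mag per airmass (assumed, plausible V-band value)
zenith_angle_deg = 60.0    # star 60 degrees from the zenith

X = 1.0 / math.cos(math.radians(zenith_angle_deg))   # plane-parallel approx.
dimming = k * X
print(round(X, 2), round(dimming, 2))   # airmass 2.0 -> 0.4 mag of extinction
```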
- Trumpler, R. J. (1930). "Preliminary results on the distances, dimensions and space distribution of open star clusters". Lick Observatory Bulletin. 14 (420): 154–188. Bibcode:1930LicOB..14..154T. doi:10.5479/ADS/bib/1930LicOB.14.154T.
- Karttunen, Hannu (2003). Fundamental astronomy. Physics and Astronomy Online Library. Springer. p. 289. ISBN 978-3-540-00179-9.
- Struve, F. G. W. 1847, St. Petersburg: Tip. Acad. Imper., 1847; IV, 165 p.; in 8.; DCCC.4.211
- Whittet, Doug C. B. (2003). Dust in the Galactic Environment. Series in Astronomy and Astrophysics (2nd ed.). CRC Press. p. 10. ISBN 0750306246.
- See Binney and Merrifeld, Section 3.7 (1998, ISBN 978-0-691-02565-0), Carroll and Ostlie, Section 12.1 (2007, ISBN 978-0-8053-0402-2), and Kutner (2003, ISBN 978-0-521-52927-3) for applications in astronomy.
- "Interstellar Reddening, Extinction, and Red Sunsets". Astro.virginia.edu. 2002-04-22. Retrieved 2017-07-14.
- Gottlieb, D. M.; Upson, W.L. (1969). "Local Interstellar Reddening". Astrophysical Journal. 157: 611. Bibcode:1969ApJ...157..611G. doi:10.1086/150101.
- Milne, D. K.; Aller, L.H. (1980). "An average model for the galactic absorption". Astrophysical Journal. 85: 17–21. Bibcode:1980AJ.....85...17M. doi:10.1086/112628.
- Lynga, G. (1982). "Open clusters in our Galaxy". Astronomy & Astrophysics. 109: 213–222. Bibcode:1982A&A...109..213L.
- Schlegel, David J.; Finkbeiner, Douglas P; Davis, Marc (1998). "Maps of Dust Infrared Emission for Use in Estimation of Reddening and Cosmic Microwave Background Radiation Foregrounds". Astrophysical Journal. 500 (2): 525–553. arXiv: . Bibcode:1998ApJ...500..525S. doi:10.1086/305772.
- Cardelli, Jason A.; Clayton, Geoffrey C.; Mathis, John S. (1989). "The relationship between infrared, optical, and ultraviolet extinction". Astrophysical Journal. 345: 245–256. Bibcode:1989ApJ...345..245C. doi:10.1086/167900.
- Valencic, Lynne A.; Clayton, Geoffrey C.; Gordon, Karl D. (2004). "Ultraviolet Extinction Properties in the Milky Way". Astrophysical Journal. 616 (2): 912–924. arXiv: . Bibcode:2004ApJ...616..912V. doi:10.1086/424922.
- Mathis, John S.; Cardelli, Jason A. (1992). "Deviations of interstellar extinctions from the mean R-dependent extinction law". Astrophysical Journal. 398: 610–620. Bibcode:1992ApJ...398..610M. doi:10.1086/171886.
- T. K. Fritz; S. Gillessen; K. Dodds-Eden; D. Lutz; R. Genzel; W. Raab; T. Ott; O. Pfuhl; F. Eisenhauer and F. Yusuf-Zadeh (2011). "Line Derived Infrared Extinction toward the Galactic Center". The Astrophysical Journal. 737: 73. arXiv: . Bibcode:2011ApJ...737...73F. doi:10.1088/0004-637X/737/2/73.
- Schultz, G. V.; Wiemer, W. (1975). "Interstellar reddening and IR-excess of O and B stars". Astronomy and Astrophysics. 43: 133–139. Bibcode:1975A&A....43..133S.
- Majaess, Daniel; David Turner; Istvan Dekany; Dante Minniti; Wolfgang Gieren (2016). "Constraining dust extinction properties via the VVV survey". Astronomy and Astrophysics. 593. arXiv: . Bibcode:2016A&A...593A.124M. doi:10.1051/0004-6361/201628763.
- R(Ks) is, mathematically likewise, A(Ks)/E(J-Ks)
- Nishyiama, Shogo; Motohide Tamura; Hirofumi Hatano; Daisuke Kato; Toshihiko Tanabe; Koji Sugitani; Tetsuya Nagata (2009). "Interstellar Extinction Law Toward the Galactic Center III: J, H, KS Bands in the 2MASS and the MKO Systems, and 3.6, 4.5, 5.8, 8.0 μm in the Spitzer/IRAC System". The Astrophysical Journal. 696. arXiv: . Bibcode:2009ApJ...696.1407N. doi:10.1088/0004-637X/696/2/1407.
- Predehl, P.; Schmitt, J. H. M. M. (1995). "X-raying the interstellar medium: ROSAT observations of dust scattering halos". Astronomy and Astrophysics. 293: 889–905. Bibcode:1995A&A...293..889P.
- Bohlin, Ralph C.; Blair D. Savage; J. F. Drake (1978). "A survey of interstellar H I from L-alpha absorption measurements. II". Astrophysical Journal. 224: 132–142. Bibcode:1978ApJ...224..132B. doi:10.1086/156357.
- Diplas, Athanassios; Blair D. Savage (1994). "An IUE survey of interstellar H I LY alpha absorption. 2: Interpretations". Astrophysical Journal. 427: 274–287. Bibcode:1994ApJ...427..274D. doi:10.1086/174139.
- Güver, Tolga; Özel, Feryal (2009). "The relation between optical extinction and hydrogen column density in the Galaxy". Monthly Notices of the Royal Astronomical Society. 400: 2050–2053. arXiv: . Bibcode:2009MNRAS.400.2050G. doi:10.1111/j.1365-2966.2009.15598.x.
- Marshall, Douglas J.; Robin, A.C.; Reylé, C.; Schultheis, M.; Picaud, S. (Jul 2006). "Modelling the Galactic interstellar extinction distribution in three dimensions". Astronomy and Astrophysics. 453 (2): 635–651. arXiv: . Bibcode:2006A&A...453..635M. doi:10.1051/0004-6361:20053842.
- Robin, Annie C.; Reylé, C.; Derrière, S.; Picaud, S. (Oct 2003). "A synthetic view on structure and evolution of the Milky Way". Astronomy and Astrophysics. 409 (2): 523–540. arXiv: . Bibcode:2003A&A...409..523R. doi:10.1051/0004-6361:20031117.
- Cardelli, Jason A.; Sembach, Kenneth R.; Mathis, John S. (1992). "The quantitative assessment of UV extinction derived from IUE data of giants and supergiants". Astronomical Journal. 104 (5): 1916–1929. Bibcode:1992AJ....104.1916C. doi:10.1086/116367. ISSN 0004-6256.
- Stecher, Theodore P. (1965). "Interstellar Extinction in the Ultraviolet". Astrophysical Journal. 142: 1683. Bibcode:1965ApJ...142.1683S. doi:10.1086/148462.
- Stecher, Theodore P. (1969). "Interstellar Extinction in the Ultraviolet. II". Astrophysical Journal. 157: L125. Bibcode:1969ApJ...157L.125S. doi:10.1086/180400.
- Bradley, John; Dai, ZR; et al. (2005). "An Astronomical 2175 Å Feature in Interplanetary Dust Particles". Science. 307 (5707): 244–247. Bibcode:2005Sci...307..244B. doi:10.1126/science.1106717. PMID 15653501.
- Gordon, Karl D.; Geoffrey C. Clayton; Karl A. Misselt; Arlo U. Landolt; Michael J. Wolff (2003). "A Quantitative Comparison of the Small Magellanic Cloud, Large Magellanic Cloud, and Milky Way Ultraviolet to Near-Infrared Extinction Curves". Astrophysical Journal. 594 (1): 279–293. arXiv: . Bibcode:2003ApJ...594..279G. doi:10.1086/376774.
- Fitzpatrick, Edward L. (1986). "An average interstellar extinction curve for the Large Magellanic Cloud". Astronomical Journal. 92: 1068–1073. Bibcode:1986AJ.....92.1068F. doi:10.1086/114237.
- Misselt, Karl A.; Geoffrey C. Clayton; Karl D. Gordon (1999). "A Reanalysis of the Ultraviolet Extinction from Interstellar Dust in the Large Magellanic Cloud". Astrophysical Journal. 515 (1): 128–139. arXiv: . Bibcode:1999ApJ...515..128M. doi:10.1086/307010.
- Lequeux, J.; Maurice, E.; Prevot-Burnichon, M. L.; Prevot, L.; Rocca-Volmerange, B. (1982). "SK 143 - an SMC star with a galactic-type ultraviolet interstellar extinction". Astronomy and Astrophysics. 113: L15–L17. Bibcode:1982A&A...113L..15L.
- Prevot, M. L.; Lequeux, J.; Prevot, L.; Maurice, E.; Rocca-Volmerange, B. (1984). "The typical interstellar extinction in the Small Magellanic Cloud". Astronomy and Astrophysics. 132: 389–392. Bibcode:1984A&A...132..389P.
- Gordon, Karl D.; Geoffrey C. Clayton (1998). "Starburst-like Dust Extinction in the Small Magellanic Cloud". Astrophysical Journal. 500 (2): 816–824. arXiv: . Bibcode:1998ApJ...500..816G. doi:10.1086/305774.
- Clayton, Geoffrey C.; Karl D. Gordon; Michael J. Wolff (2000). "Magellanic Cloud-Type Interstellar Dust along Low-Density Sight Lines in the Galaxy". Astrophysical Journal Supplement Series. 129 (1): 147–157. arXiv: . Bibcode:2000ApJS..129..147C. doi:10.1086/313419.
- Valencic, Lynne A.; Geoffrey C. Clayton; Karl D. Gordon; Tracy L. Smith (2003). "Small Magellanic Cloud-Type Interstellar Dust in the Milky Way". Astrophysical Journal. 598 (1): 369–374. arXiv: . Bibcode:2003ApJ...598..369V. doi:10.1086/378802.
- Calzetti, Daniela; Anne L. Kinney; Thaisa Storchi-Bergmann (1994). "Dust extinction of the stellar continua in starburst galaxies: The ultraviolet and optical extinction law". Astrophysical Journal. 429: 582–601. Bibcode:1994ApJ...429..582C. doi:10.1086/174346.
- Gordon, Karl D.; Daniela Calzetti; Adolf N. Witt (1997). "Dust in Starburst Galaxies". Astrophysical Journal. 487 (2): 625–635. arXiv: . Bibcode:1997ApJ...487..625G. doi:10.1086/304654.
- Binney, J. & Merrifield, M. (1998). Galactic Astronomy. Princeton: Princeton University Press. ISBN 0-691-00402-1.
- Howarth, I. D. (1983). "LMC and galactic extinction". Monthly Notices of the Royal Astronomical Society. 203: 301–304. Bibcode:1983MNRAS.203..301H. doi:10.1093/mnras/203.2.301.
- King, D. L. (1985). "Atmospheric Extinction at the Roque de los Muchachos Observatory, La Palma". RGO/La Palma technical note. 31.
- Rouleau, F.; Henning, T.; Stognienko, R. (1997). "Constraints on the properties of the 2175Å interstellar feature carrier". Astronomy and Astrophysics. 322: 633–645. arXiv: . Bibcode:1997A&A...322..633R. | <urn:uuid:fb1ddce1-28ea-4d38-aa3a-5130e894fddb> | 3.90625 | 5,573 | Knowledge Article | Science & Tech. | 69.869138 | 95,624,101 |
Probability - examples
Probability is a measure of the likelihood that an event will occur. A probability (chance) is a value from the interval <0;1>, or equivalently a percentage from 0% to 100%, expressing how likely some event is. A probability of 0 corresponds to an impossible event, and a probability of 1 (100%) to a certain event.
- A book
A book contains 524 pages. If it is known that a person will select any one page between the pages numbered 125 and 384, find the probability of choosing the page numbered 252 or 253.
There are 20 peaches in a pocket, and 3 of them are rotten. What is the probability that exactly one of two randomly picked peaches is rotten?
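One way to check the answer is the hypergeometric count: "exactly one rotten" means choosing 1 of the 3 rotten peaches and 1 of the 17 good ones, out of all ways to choose 2 from 20. A sketch (the `choose` helper is our own illustration, not from any library):

```javascript
// Probability that exactly one of two randomly picked peaches is rotten,
// out of 20 peaches of which 3 are rotten (hypergeometric distribution).

// Binomial coefficient "n choose k", computed iteratively.
function choose(n, k) {
  let result = 1;
  for (let i = 1; i <= k; i++) {
    result = result * (n - k + i) / i;
  }
  return result;
}

// Favorable outcomes: 1 of the 3 rotten AND 1 of the 17 good peaches.
const favorable = choose(3, 1) * choose(17, 1); // 51
// All ways to pick 2 peaches out of 20.
const total = choose(20, 2);                    // 190
const p = favorable / total;                    // 51/190, about 0.268
```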
- The big clock
The hands of a big clock stopped at a random moment. What is the probability that: a) the small hand shows a time between 1:00 and 3:00? b) the big hand is in the same area as the small hand in part a)? c) the clock showed the time between 21:00.
- Utopia Island
The probability of disease A on the island of Utopia is 40%. The probability of its occurrence among the men of the island, who make up 60% of the population (the rest are women), is 50%. What is the probability of occurrence of disease A among the women of Utopia Island?
In a rectangle with sides 3 and 10, mark the diagonal. What is the probability that a randomly selected point within the rectangle is closer to the diagonal than to any side of the rectangle?
From a complete set of playing cards (32 cards), we pull out one card. What is the probability of pulling an ace?
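With 4 aces in a 32-card deck, the answer is simply 4/32 = 1/8. A small Monte Carlo simulation (our illustration; the trial count is arbitrary) can confirm the exact value:

```javascript
// Exact probability of drawing an ace from a 32-card deck with 4 aces.
const pExact = 4 / 32; // 0.125

// Monte Carlo check: draw one random card many times and count aces.
// Cards 0..3 represent the four aces.
let aces = 0;
const trials = 100000;
for (let i = 0; i < trials; i++) {
  const card = Math.floor(Math.random() * 32); // uniform over 0..31
  if (card < 4) aces++;
}
const pSimulated = aces / trials; // close to 0.125
```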
A pediatrician takes 3 days of holiday in a month with 21 working days. What is the probability that he will be at work on Monday?
- Bureau of Labor
The Bureau of Labor is a state institution that provides benefits and rest for its so-called clients. The mission of the Bureau of Labor is to spend taxpayer money to provide relaxation and benefits to those who do not want to work.
What is the probability that a random word composed of the letters H, T, M, A will be MATH?
- Two doctors
Doctor A will determine the correct diagnosis with a probability of 86%, and doctor B with a probability of 87%. Calculate the probability of a correct diagnosis if the patient is diagnosed by both doctors.
How many times must we throw a die so that the probability of throwing at least one six is greater than 90%?
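By the complement rule, the probability of seeing no six in n throws is (5/6)^n, so we look for the smallest n with 1 - (5/6)^n > 0.9. A sketch:

```javascript
// Smallest number of die throws so that
// P(at least one six) = 1 - (5/6)^n exceeds 90%.
let n = 0;
let pNoSix = 1; // probability of no six in n throws so far
while (1 - pNoSix <= 0.9) {
  pNoSix *= 5 / 6;
  n++;
}
// n is 13: 1 - (5/6)^12 is about 0.888, while 1 - (5/6)^13 is about 0.907
```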
According to sales statistics, 51% of people buy item A and 59% of people buy item B. What is the probability that, out of 10 people, 2 buy item A and 8 buy item B?
Of 6 products, 3 are defective. What is the probability that 2 randomly picked products include no defective product?
We throw two dice. What is the probability that the ratio between the numbers on the first and second die is 1:2?
- Event probability
The probability that event P occurs at least once in 8 independent experiments is 0.33. What is the probability that event P occurs in a single experiment (the probability is the same in each)?
The probability that a good shooter hits the center circle I of the target is 0.1. The probability that he hits the inner circle II is 0.58. What is the probability that he hits circle I or circle II?
Event N has a probability of 0.24. What is the probability that event N occurs in 8, 5, and 4 tries?
From an urn containing 7 white balls and 17 red balls, we draw 3 times without replacement. What is the probability that the drawn balls come out in the order red, red, red?
The owner of a house is insured against natural disasters and annually pays 0.04% of the value of the house, namely 77 Eur. Calculate the value of the house. Calculate the probability of a disaster if you know that 48% of the insurance goes to pay damages.
A school survey found that 10 out of 11 students like pizza. If 6 students are chosen at random, what is the probability that all 6 students like pizza?
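If each of the 6 students is treated as an independent draw with success probability 10/11 (a common simplification for this kind of survey problem), the answer is (10/11)^6:

```javascript
// Probability that all 6 randomly chosen students like pizza,
// assuming each student independently likes pizza with probability 10/11.
const pOne = 10 / 11;
const pAllSix = Math.pow(pOne, 6); // about 0.5645
```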
Milkweed plants protect monarch butterflies, the iconic insects, from various diseases. The leaves of the plant contain bitter toxins that help monarchs ward off predators and parasites, and the plant is the sole food of monarch caterpillars.
Researchers at the University of Michigan grew four milkweed species with varying levels of those protective compounds, which are called cardenolides.
Half the plants were grown under normal carbon dioxide levels, and half of them were bathed, from dawn to dusk, in nearly twice that amount. Then the plants were fed to hundreds of monarch caterpillars.
The study showed that the most protective of the four milkweed species lost its medicinal properties when grown under elevated CO2, resulting in a steep decline in the monarch's ability to tolerate a common parasite, as well as a lifespan reduction of one week.
The researchers looked solely at how elevated carbon dioxide levels alter plant chemistry and how those changes, in turn, affect interactions between monarchs and their parasites.
"Our results emphasize that global environmental change may influence parasite-host interactions through changes in the medicinal properties of plants," said Leslie Decker, first author of the study.
In recent years, monarch populations have been declining rapidly. Most discussions of the monarch butterfly's plight focus on habitat loss, logging of trees in the Mexican forest where monarchs spend the winter, as well as the loss of wild milkweed plants that sustain them during their annual migration.
The findings appeared in the journal Ecology Letters. (ANI)
(Image: Dictyostelium fruiting bodies)
Dictyostelium is a genus of single- and multi-celled eukaryotic, phagotrophic bacterivores. Though they are Protista and in no way fungal, they traditionally are known as "slime molds". They are present in most terrestrial ecosystems as a normal and often abundant component of the soil microflora, and play an important role in the maintenance of balanced bacterial populations in soils.
The genus Dictyostelium is in the order Dictyosteliida, the so-called cellular slime molds or social amoebae. In turn the order is in the infraphylum Mycetozoa. Members of the order are Protista of great theoretical interest in biology because they have aspects of both unicellularity and multicellularity. The individual cells in their independent phase are common on organic detritus or in damp soils and caves. In this phase they are amoebae. Typically, the amoebal cells grow separately and wander independently, feeding mainly on bacteria. However, they interact to form multi-cellular structures following starvation. Groups of up to about 100,000 cells signal each other by releasing chemoattractants such as cyclic AMP (cAMP) or glorin. They then coalesce by chemotaxis to form an aggregate that becomes surrounded by an extracellular matrix. The aggregate forms a fruiting body, with cells differentiating individually into different components of the final structure. In some species, the whole aggregate may move collectively – forming a structure known as a grex or "slug" – before finally forming a fruiting body. Basic processes of development such as differential cell sorting, pattern formation, stimulus-induced gene expression, and cell-type regulation are common to Dictyostelium and metazoans. For further detail see family Dictyostelid.
The cellular slime molds were formerly considered to be fungi following their discovery in 1869 by Brefeld. Although they resemble fungi in some respects, they have been included in the kingdom Protista. Individual cells resemble small amoebae in their movement and feeding, and so are referred to as myxamoebae. D. discoideum is the most studied of the genus.
Most of its life, this haploid social amoeba undergoes a vegetative cycle, preying upon bacteria in the soil, and periodically dividing mitotically. When food is scarce, either the sexual cycle or the social cycle begins. Under the social cycle, amoebae aggregate in response to cAMP by the thousands, and form a motile slug, which moves towards light. Ultimately the slug forms a fruiting body in which about 20% of the cells die to lift the remaining cells up to a better place for sporulation and dispersal.
When starved for their bacterial food supply and exposed to dark, moist conditions, heterothallic or homothallic strains can undergo sexual development that results in the formation of a diploid zygote. Heterothallic mating has been best studied in Dictyostelium discoideum and homothallic mating has been best studied in Dictyostelium mucoroides (strain DM7). In the heterothallic sexual cycle, amoebae aggregate in response to cAMP and sex pheromones, and two cells of opposite mating types fuse, and then begin consuming the other attracted cells. Before they are consumed, some of the prey cells form a cellulose wall around the entire group. When cannibalism is complete, the giant diploid cell is a hardy macrocyst which eventually undergoes recombination and meiosis, and hatches hundreds of recombinants. In D. mucoroides (DM7) homothallic mating, cells are directed towards sexual development by ethylene.
Professor John Tyler Bonner has spent a lifetime researching slime molds, mostly D. discoideum, and in the 1940s created a number of fascinating videos showing their life cycle. In the videos, seemingly intelligent behavior can be observed as the separated single cells regroup into a cellular mass. The time-lapse film has captivated audiences; indeed, Bonner has stated when giving conferences that the film "always stole the show". The video is available on YouTube.
The taxonomy of Dictyostelium is complicated. It has been confused by the different forms of the life-cycle stages and by the similar Polysphondylium spp. Below are some reported examples.
- Dictyostelium caveatum (Wadell 1982)
- Dictyostelium discoideum Raper 1935
- Dictyostelium irregularis (Olive, Nelson and Stoianovitch 1967)
- Dictyostelium lacteum
- Dictyostelium minutum
- Dictyostelium mucoroides
- Dictyostelium polycephalum
- Dictyostelium purpureum
- Dictyostelium rosarium
- Landolt, C. (2006). "Dictyostelid Cellular Slime Molds from Caves". Journal of Cave and Karst Studies. 68 (1): 22–26.
- "About Dictyostelium". dictybase.org.
- Kessin, R. (2001). Dictyostelium: Evolution, Cell Biology, and the Development of Multicellularity. Cambridge University Press. ISBN 0-521-58364-0.
- O'Day DH, Keszei A (May 2012). "Signalling and sex in the social amoebozoans". Biol Rev Camb Philos Soc. 87 (2): 313–29. doi:10.1111/j.1469-185X.2011.00200.x. PMID 21929567.
- "Dictyostelium". www.ruf.rice.edu.
- "dictyBase Home". dictybase.org.
- Princeton University (22 January 2010). "John Bonner's slime mold movies" – via YouTube.
- Raper, K.B. (1935). "Dictyostelium discoideum, a new species of slime mold from decaying forest leaves". Journal of Agricultural Research. 50: 135–147.
- Nelson, Nancy; Olive, L. S.; Stoianovitch, Carmen (20 February 1967). "A New Species of Dictyostelium from Hawaii". American Journal of Botany. 54 (3): 354–358. doi:10.2307/2440763. JSTOR 2440763. | <urn:uuid:368101e0-7d44-4853-a2b7-81c4403670ea> | 3.4375 | 1,397 | Knowledge Article | Science & Tech. | 42.812705 | 95,624,119 |
What is a Liana?
Lianas are woody vines that have roots in the soil but reach for light by growing on, over, and around a tree, snag, or other "trellis." Lianas may climb using many different adaptations including twining stems (to grab around large branches or trunks), tendrils (non-stem features that grow to grab smaller features like twigs), adhesive roots (to grab rough surfaces like bark), hooks, and thorns.
Lianas affect the forest around them in many ways. They compete with other plants for light, water, and nutrients. They also increase the chance and extent of canopy damage from forest disturbances such as hurricanes and logging operations. Lianas weigh heavily on their host trees, increase the drag from high winds, and pull on their host tree if connected to another falling tree. Lianas also affect forest recovery after such a disturbance. They recover quickly because they are flexible, can grow sideways, and often grow "clonally," whereby a plant "clone" roots and re-sprouts from a broken leaf or stem. This quick recovery shades forest floor, affecting competition between recovering tree species that are more or less shade tolerant.
Scientists are still working to understand the ecological roles lianas play in forests all over the world. Recent research at Congaree National Park is helping to address many of these questions.
Lianas are an important and diverse part of the floodplain forest ecosystem at Congaree National Park. Three common examples are poison ivy, trumpet creeper, and wild grape (sometimes known as bullace, scuppernong, or muscadine). Scientists have counted at least 28 liana species at the park. These species represent approximately four percent of the park’s vascular plant biodiversity. In most temperate forests, by comparison, lianas only make up about two percent of the vascular plant biodiversity. Regionally, Congaree’s 28 species represent over 62 percent of the 45 liana species found across the Carolinas.
Some lianas at Congaree National Park may even be older than the park's champion trees! Scientists have found wild grape vines almost 9.5 inches across. Although these vines' heartwood is rotted out, growth rate projections suggest that they could be over 240 years old! Many lianas may also, technically speaking, be clones that have re-sprouted from older plants through many generations. The actual, original seed may have sprouted centuries ago.
Lianas are Changing Around the World
Over the last several decades, scientists around the world have noticed that lianas are expanding their ranges and generally increasing in abundance, density, and size. Scientists are still sorting out the details, but the following factors are all related, both directly and indirectly, to human activity:
Carbon dioxide (CO2) - Lianas (especially poison ivy!) grow faster under increased CO2 levels because they have a high leaf area relative to stem size. Humans have increased CO2 levels by burning fossil fuels.
Global warming - Lianas freeze easily, but warming with climate change allows them to expand their range.
Drought stress - Lianas often grow deep roots that can tolerate drought stress associated with climate change.
Habitat fragmentation - Lianas thrive in forest margins artificially fragmented by roads, fields, and clear cuts.
Invasive species - People have introduced liana species that locally out-compete native vegetation for limited resources. Some examples include Kudzu, Chinese Wisteria, Japanese Honeysuckle, and English Ivy.
The implications of increased liana growth are not well understood. Over time, however, they will change the way forests develop, store carbon, and provide resources for countless species - including people.
Liana Research at Congaree
Scientists from the University of Georgia and the Ohio State University have included lianas in long-term forest monitoring studies at Congaree National Park. Studies have focused on areas disturbed by Hurricane Hugo (1989) as well as areas disturbed by historical logging activity. For years, scientists have systematically identified, sampled, and monitored thousands of lianas and other plants in plots around the park. The data have allowed scientists to study changes in liana populations, growth rates, and host tree relationships. These results represent some of the most detailed and significant temperate forest liana data in the world. Highlights from this research include:
Lianas increased in density, basal area (stem area), and growth rate across the park during the late 20th century. Increases were greatest in areas with significant disturbance, but not limited to these areas.
Sweetgum trees were more likely to be liana host trees than other trees - especially for poison ivy.
1970’s logging disturbance affected liana distribution. Rattanvine was generally more common in logged areas. Clear-cuts had more Virginia creeper and wild grape. Select-cut areas had more wild grape. Salvage logged areas had more poison ivy.
Hurricane Hugo changed liana populations. The storm initially killed many lianas by damaging host trees, but lianas recovered very quickly. Virginia creeper and wild grape populations nearly doubled their basal area within 16 years.
Trumpet Creeper Versus Poison Ivy
In 2005-2006, scientists examined the growth histories of trumpet creeper and poison ivy by analyzing stem cores that revealed annual growth rings (basically like tree rings). The largest trumpet creeper sampled was 5.75 inches across. The largest poison ivy sampled was 5.31 inches across. Highlights from this research include:
The oldest trumpet creeper was 38 years old, while the oldest poison ivy was 58 years old. Older (larger) trumpet creeper stems were found, but these were prone to "heart rot" that removes the inner rings and makes ring counting impossible.
These two species have different growth histories. Young trumpet creeper vines initially grew very fast, but then slowed down over time. Poison ivy had a more constant growth rate for the first 30 years and then began to grow faster.
These two species prefer different host trees. Trumpet creeper prefers to grow on smaller host trees and small branches in the upper canopy. Poison ivy prefers to grow on large branches and large tree trunks (especially sweetgum). These differences make trumpet creeper generally more susceptible to host-tree wind damage than poison ivy, which has a more stable trellis.
Both species experienced "releases," or periods of increased growth, following disturbance. In areas of high hurricane damage, poison ivy growth rates were especially high for up to 8 years. Trumpet creeper growth rates increased slightly but, over the long term, were generally slower in areas of high damage than in undamaged areas. This was because trumpet creeper vines suffered more initial host-tree-related damage and tended to grow slower with age. Conversely, in areas of low hurricane disturbance the trumpet creeper vines were generally larger than poison ivy vines.
- Check out a 2011 National Science Foundation news article about a study that included Congaree liana data with other data from around the world.
A giant gas cloud is on a collision course with the black hole at the centre of our galaxy in 2013. This is a unique opportunity to observe how a supermassive black hole sucks in material, in real time.
Comets and asteroids preserve the building blocks of our Solar System and should help explain its origin. But there are unsolved puzzles. For example, how did icy comets obtain particles that formed at high temperatures, and how did these refractory particles acquire rims with different compositions? Carnegie's theoretical astrophysicist Alan Boss and cosmochemist Conel Alexander are the first to model the trajectories of such particles in the unstable disk of gas and dust that formed the Solar System.
For several days this month, Greenland's surface ice cover melted over a larger area than at any time in more than 30 years of satellite observations. Nearly the entire ice cover of Greenland, from its thin, low-lying coastal edges to its 2-mile-thick center, experienced some degree of melting at its surface, according to measurements from three independent satellites analyzed by NASA and university scientists.
In order to understand Earth's earliest history--its formation from Solar System material into the present-day layering of metal core and mantle, and crust--scientists look to meteorites. New research from a team including Carnegie's Doug Rumble and Liping Qin focuses on one particularly old type of meteorite called diogenites.
A telescope launched July 11 aboard a NASA sounding rocket has captured the highest-resolution images ever taken of the sun's million-degree atmosphere called the corona. The clarity of the images can help scientists better understand the behavior of the solar atmosphere and its impacts on Earth's space environment.
Heliophysics nuggets are a collection of early science results, new research techniques, and instrument updates that further our attempt to understand the sun and the dynamic space weather system that surrounds Earth. | <urn:uuid:ee042abd-48ed-4e30-8cd8-fc3bcd17c187> | 3.625 | 386 | Content Listing | Science & Tech. | 29.702841 | 95,624,136 |
Fungus report stirs debate
© BioMed Central Ltd 2005
Published: 17 January 2005
An ancient group of fungi might possess more than one genome, Swiss researchers report in the January 13 issue of Nature.
Arbuscular mycorrhizal fungi colonize the roots of most land plants. Their cells contain hundreds of nuclei, leading Mohamed Hijri and Ian R. Sanders of the University of Lausanne in Switzerland to suggest multiple genomes could evolve within single individuals (Nature 2004, 433:160-3).
"These fungi have no sexual reproduction, which means if deleterious mutations accumulated, they might lead to extinction. But these fungi are extremely old, dated at 450 million years," Hijri said. "We think they developed this strategy of multiple genomes against deleterious mutations, such that some genes can get knocked out, but other genomes might have functional versions."
"This means Mendelian genetics and classical evolution cannot apply to these organisms. They are completely unique," Hijri told us.
The report has reignited a longstanding debate. "I think this data is not particularly believable. I'm a little worried they might have presented an experimental artifact or misinterpretation of results," said Teresa Pawlowska of Cornell University in Ithaca, NY, who did not participate in this study.
In 2001, Hijri and colleagues reported polymorphism of ribosomal DNA in spores of fungi as evidence for their idea. But in 2004, Pawlowska and John Taylor at the University of California at Berkeley suggested the variation was due to polyploidy. They focused on a POL1-like sequence (PLS1) in the arbuscular mycorrhizal fungus Glomus etunicatum, whose spores contain 13 variants of PLS1. Pawlowska and Taylor's mathematical model, based on the random inheritance of nuclei to each clonally produced offspring, predicted that if each variant existed in genetically different nuclei, then the loss of some variants would almost certainly occur after one generation. Instead, they found clonally produced spores possessed all 13 variants, suggesting polyploidy was behind the variation.
In the latest paper, Hijri and colleagues investigated the polyploidy question. Using flow cytometry, they measured the nuclear DNA content of G. etunicatum as 37.45 megabases (Mb). If G. etunicatum nuclei were 13 N, the genome size of this fungus would be 2.88 Mb, much smaller than any other eukaryote and smaller than that of Escherichia coli and most other bacteria, the researchers argue.
Using real-time polymerase chain reaction, the investigators then estimated the number of copies of PLS1 was 1.88 per nucleus. Given a maximum of two copies per nucleus, Sanders and colleagues conclude the 13 variants must be spread out in different nuclei, meaning there is heterokaryosis, or genetic differences among the nuclei.
Pawlowska told us she disagrees with the new paper, citing her prior work, whose supplementary section noted that two isolates of the fungus from different locations, one from Minnesota and the other from California, both had a complement of 13 variants of a genetic marker. "According to their argument, if you expect all the nuclei to be different, you would expect to find different numbers of variants in different isolates due to loss and acquisition of different types of nuclei."
Soren Rosendahl of the University of Copenhagen, who did not participate in either study, said both opposing groups did excellent work. "It's clear they are using different methods," he told us. In his team's investigation of arbuscular mycorrhizal fungi G. mosseae, G. geosporum, and G. caledonium, which analyzed genetic variations based on expressed DNA instead of genomic DNA as Hijri and colleagues did, "we never saw this heterokaryosis. But then again, we used a different method and different organisms, so that doesn't resolve the problem."
Pawlowska plans on repeating Hijri and colleagues' experiment in her own lab soon. "The fact this paper is published makes the whole question more visible and hopefully will draw more researchers to apply different methods and tools to address this problem," she said.
- Nature, [http://www.nature.com]
- Mohamed Hijri, [http://www.unil.ch/dee/page7251_fr.html]
- Teresa Pawlowska, [http://ppathw3.cals.cornell.edu/People/labs/Pawlowska/Index.html]
- "Evidence for the evolution of multiple genomes in arbuscular mycorrhizal fungi"
- "Organization of genetic variation in individuals of arbuscular mycorrhizal fungi"
- Soren Rosendahl, [http://www.bi.ku.dk/staff/staff-vip-details.asp?ID=33]
- "Development and amplification of multiple co-dominant genetic markers from single spores of arbuscular mycorrhizal fungi by nested multiplex PCR"
News Release 09-212
VERITAS Discovers Very High Energy Gamma Rays from the Starburst Galaxy M82
Gamma-ray source identified for the first time, leading to better understanding of the early universe
November 2, 2009
This material is available primarily for archival purposes. Telephone numbers or other contact information may be out of date; please see current contact information at media contacts.
The VERITAS (Very Energetic Radiation Imaging Telescope Array System) collaboration, an international team of astronomers from the United States, Canada, United Kingdom and Ireland, has discovered very high energy (VHE) gamma rays emitted by the starburst galaxy M82 (the Cigar Galaxy). The observed gamma rays have energies more than a trillion times higher than the energy of visible light, and are the highest energy photons ever detected from a galaxy undergoing large amounts of star formation. The discovery was made from data taken over a two-year long observing campaign.
"This is the first example of a very-high-energy gamma-ray source associated with a starburst galaxy, and its discovery provides fundamental insight into the origin of cosmic rays," said Rene Ong, a professor of physics at the University of California, Los Angeles, and the spokesperson for the VERITAS collaboration.
"Our program has been supporting studies of VHE gamma-rays and cosmic rays in separate and very different experiments for over a decade," said Jim Whitmore, program director for particle and nuclear astrophysics in NSF's Division of Physics. "This significant VERITAS discovery provides an immediate connection between the sources of these two types of very energetic particles and enhances our understanding of the early universe."
Cosmic rays are particles striking the Earth's atmosphere and are produced in violent processes in our own Milky Way galaxy and beyond. Although the Earth is constantly bombarded by cosmic rays, their origin remains a mystery nearly 100 years after their discovery. The VERITAS result provides critical evidence to help scientists understand the origin of cosmic rays by clearly linking the processes related to the life-cycle of stars with the acceleration of cosmic rays.
The VERITAS observations strongly support the long-held theory that supernovae and massive star winds are the dominant accelerators of cosmic-ray particles. Galaxies with high levels of star formation such as M82 have high numbers of supernovae and massive stars. These "starburst" galaxies would then be expected to have a higher number of cosmic rays per unit volume.
The VERITAS discovery indicates that the cosmic-ray density in M82 is approximately 500 times the average density in our Galaxy, the Milky Way, thus providing key evidence to unlocking the mystery of the origin of cosmic rays.
Wystan Benbow, an astrophysicist at the Smithsonian Astrophysical Observatory (SAO), working with the VERITAS, coordinated this project for the VERITAS collaboration. The results of this study, titled "A Connection Between Star Formation Activity and Cosmic Rays in the Starburst Galaxy M82" appeared on Nov. 1, 2009, in the advance online publication of the journal Nature. The article will appear in the published version of Nature on Nov. 4. The SAO VERITAS group, which Benbow leads, manages the operation of VERITAS and plays a major role in the scientific activities of the collaboration.
"We knew that the detection of M82 would have important scientific implications. As a result, we scheduled an exceptionally deep exposure immediately after the experiment became fully operational," says Benbow. "The data took almost two years to acquire, and needed to be meticulously analyzed to extract the gamma-ray signal which is over 1 million times smaller than the background noise. Although the signal is only a tiny fraction of the data, we made many checks for possible bias and we are confident that the signal is genuine."
M82 is a bright galaxy located approximately 12 million light years from Earth, in the direction of the Ursa Major constellation. In the active starburst region at its center, stars are being formed at a rate approximately ten times more rapidly than in entire ‘normal' galaxies like our own Milky Way. The cosmic rays produced in the formation, life and death of the massive stars in this region eventually produce diffuse gamma-ray emission via their interactions with interstellar gas and radiation. Due to its unusually high cosmic-ray and gas densities and its relative proximity, M82 is expected to be the brightest starburst galaxy in VHE gamma rays.
VHE gamma rays, those with energies ranging from 100 GeV (one-hundred billion electron Volts) to 50 TeV (50 trillion electron Volts), are observed with ground-based Cherenkov telescopes. These gamma rays are absorbed in the Earth's atmosphere, where they create a short-lived shower of particles. The Cherenkov telescopes detect the faint, extremely short flashes of blue light, which these particles emit (named Cherenkov light) using extremely sensitive cameras.
The images can be used to infer the arrival direction and initial energy of the primary gamma rays. This technique is used by VHE observatories throughout the world, and was pioneered under the direction of SAO's Trevor Weekes, using the 10-meter Cherenkov telescope at the Fred Lawrence Whipple Observatory (FLWO), just south of Tucson, Arizona. The Whipple 10-m telescope was used to detect the first Galactic and extragalactic sources of VHE gamma rays.
VERITAS continues the tradition of the 10-m telescope and is also located at FLWO. It is comprised of an array of four 12-meter (39 feet) diameter Cherenkov telescopes. VERITAS began full-scale observations in September 2007. The telescopes are used to study the remnants of exploded stars, distant galaxies, powerful gamma ray bursts, and to search for evidence of mysterious dark matter particles.
VERITAS is operated by a collaboration of more than 100 scientists from 22 different institutions in the United States, Ireland, England and Canada. VERITAS is funded by the U.S. National Science Foundation, U.S. Department of Energy, Smithsonian Institution, Natural Sciences and Engineering Research Council of Canada, Science Foundation Ireland, and Science and Technology Facilities Council of the UK.
Very Energetic Radiation Imaging Telescope Array System (VERITAS) detects sources of gamma rays.
The very high energy gamma-ray emission observed by VERITAS; the black star marks the active starburst region.
VERITAS, operated by a collaboration of more than 100 scientists from 22 different institutions.
Lisa-Joy Zgorski, NSF, (703) 292-8311, email: firstname.lastname@example.org
James Whitmore, NSF, (703) 292-8908, email: email@example.com
Vernon L. Pankonin, NSF, (703) 292-4902, email: firstname.lastname@example.org
Rene A. Ong, VERITAS Spokesperson, UCLA, (310) 825-3622, email: email@example.com
Wystan Benbow, Harvard-Smithsonian Center for Astrophysics, (520) 975-5795, email: firstname.lastname@example.org
The National Science Foundation (NSF) is an independent federal agency that supports fundamental research and education across all fields of science and engineering. In fiscal year (FY) 2018, its budget is $7.8 billion. NSF funds reach all 50 states through grants to nearly 2,000 colleges, universities and other institutions. Each year, NSF receives more than 50,000 competitive proposals for funding and makes about 12,000 new funding awards.
Useful NSF Web Sites:
NSF Home Page: https://www.nsf.gov
NSF News: https://www.nsf.gov/news/
For the News Media: https://www.nsf.gov/news/newsroom.jsp
Science and Engineering Statistics: https://www.nsf.gov/statistics/
Awards Searches: https://www.nsf.gov/awardsearch/ | <urn:uuid:32f35cf1-df29-4a18-a8e7-bdf6d1b5974c> | 3.296875 | 1,711 | News (Org.) | Science & Tech. | 37.923693 | 95,624,185 |
The '=' is the assignment operator. It assigns the value of its right operand to its left operand, which must be a variable. That is, x = y assigns the value of y to x.
The = operator behaves like other operators, so expressions that contain it have a value. This means that you can chain assignment operators as follows: x = y = z = 0 . In this case x, y, and z equal zero.
The "===" is the identity (strict equality) operator. It returns true only if the operands are equal without any type conversion; it returns false when the values are equal but the operands are of different data types.
For example, 999 and '999' have the same value, but they are not of the same data type, so === returns false.
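A short runnable sketch of both points above — assignment chaining and strict vs. loose equality (variable names are illustrative):

```javascript
// Chained assignment: '=' is right-associative, so x, y, and z all become 0.
let x, y, z;
x = y = z = 0;

// Loose equality (==) coerces types; strict equality (===) does not.
const looseResult = (999 == '999');   // true: the string is coerced to a number
const strictResult = (999 === '999'); // false: number vs. string, no coercion
```

Run under Node.js, `looseResult` is `true` while `strictResult` is `false`, matching the 999 vs. '999' example above.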
UAH Global Temperature Report: June 2018
Global climate trend since Dec. 1 1978: +0.13 C per decade
June Temperatures (preliminary)
Global composite temp.: +0.21 C (+0.38 °F) above seasonal average
Northern Hemisphere.: +0.38 C (+0.68 °F) above seasonal average
Southern Hemisphere.: +0.04 C (+0.07 °F) above seasonal average
Tropics.: +0.12 C (+0.22 °F) above seasonal average
May Temperatures (revised)
Global composite temp.: +0.18 C (+0.32 °F) above seasonal average
Northern Hemisphere.: +0.40 C (+0.72 °F) above seasonal average
Southern Hemisphere.: -0.05 C (-0.09 °F) below seasonal average
Tropics.: +0.03 C (+0.05 °F) above seasonal average
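As a consistency check on the figures above (a sketch, not part of the report: it assumes the global composite is roughly the unweighted mean of the two hemispheric anomalies, and that the °F figures are the °C figures times 9/5):

```python
def c_to_f_delta(delta_c):
    """Convert a temperature anomaly (a difference, not an absolute T) from C to F."""
    return delta_c * 9.0 / 5.0

# Anomalies quoted above, in degrees C.
june = {"global": 0.21, "nh": 0.38, "sh": 0.04}
may = {"global": 0.18, "nh": 0.40, "sh": -0.05}

for month in (june, may):
    hemispheric_mean = (month["nh"] + month["sh"]) / 2.0
    # Agreement to within the 0.01 C rounding of the published values.
    assert abs(hemispheric_mean - month["global"]) <= 0.01
```

The June global composite of +0.21 °C converts to +0.378 °F, which rounds to the +0.38 °F quoted above.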
Notes on data released July 2, 2018
The global temperature anomaly for June 2018 changed only slightly from May. Indeed the first six months of 2018 have been steady, varying in a narrow range between +0.26 and +0.18 °C. As noted last month, NOAA’s indication that an El Niño is coming this winter appears on track as we see tropical temperatures continue to inch upward.
The seasonally-adjusted chilliest spot on the Earth was -3.5 °C (-6.3 °F) below average in the Ross Sea off West Antarctica, while the relative warmest spot was +5.1 °C (+9.2 °F) above average southwest of Saskylakh in northern Russia. In addition to northern Russia, other warmer-than-average regions included northern Europe, most of North America and portions of Antarctica. It was cooler than average in Kazakhstan, eastern Canada, Australia and Argentina.
As part of an ongoing joint project between UAH, NOAA and NASA, Christy and Dr. Roy Spencer, an ESSC principal scientist, use data gathered by advanced microwave sounding units on NOAA, NASA and European satellites to get accurate temperature readings for almost all regions of the Earth. This includes remote desert, ocean and rain forest areas where reliable climate data are not otherwise available.
The satellite-based instruments measure the temperature of the atmosphere from the surface up to an altitude of about eight kilometers above sea level. Once the monthly temperature data are collected and processed, they are placed in a “public” computer file for immediate access by atmospheric scientists, and anyone interested, in the U.S. and abroad.
The complete version 6 lower troposphere dataset is available here:
Archived color maps of local temperature anomalies are available on-line at:
Neither Christy nor Spencer receives any research support or funding from oil, coal or industrial companies or organizations, or from any private or special interest groups. All of their climate research funding comes from federal and state grants or contracts.
— 30 —
via Watts Up With That? https://ift.tt/1Viafi3 | <urn:uuid:ae3b37a4-c96b-46e0-82bb-9328ee426626> | 2.734375 | 630 | News (Org.) | Science & Tech. | 54.554498 | 95,624,201 |
Explosion of a hydrogen–air mixture.
Hydrogen gas (dihydrogen or molecular hydrogen, also called diprotium when consisting specifically of a pair of protium atoms) is highly flammable and will burn in air at a very wide range of concentrations between 4% and 75% by volume. The enthalpy of combustion is −286 kJ/mol:
- 2 H2(g) + O2(g) → 2 H2O(l) + 572 kJ (286 kJ/mol)[note 2]
Hydrogen gas forms explosive mixtures with air in concentrations from 4–74% and with chlorine at 5–95%. The explosive reactions may be triggered by spark, heat, or sunlight. The hydrogen autoignition temperature, the temperature of spontaneous ignition in air, is 500 °C (932 °F). Pure hydrogen-oxygen flames emit ultraviolet light and with high oxygen mix are nearly invisible to the naked eye, as illustrated by the faint plume of the Space Shuttle Main Engine, compared to the highly visible plume of a Space Shuttle Solid Rocket Booster, which uses an ammonium perchlorate composite. The detection of a burning hydrogen leak may require a flame detector; such leaks can be very dangerous. Hydrogen flames in other conditions are blue, resembling blue natural gas flames.
The destruction of the Hindenburg airship was a notorious example of hydrogen combustion and the cause is still debated. The visible orange flames in that incident were the result of a rich mixture of hydrogen to oxygen combined with carbon compounds from the airship skin.
H2 reacts with every oxidizing element. Hydrogen can react spontaneously and violently at room temperature with chlorine and fluorine to form the corresponding hydrogen halides, hydrogen chloride and hydrogen fluoride, which are also potentially dangerous acids.
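A back-of-the-envelope consequence of the enthalpy figure quoted above (a sketch; the molar mass of H2 is a standard value assumed here, not taken from the text):

```python
ENTHALPY_KJ_PER_MOL = 286.0      # kJ released per mol of H2 burned (from the equation above)
MOLAR_MASS_H2_G_PER_MOL = 2.016  # standard value for H2 (assumption)

def specific_energy_mj_per_kg():
    """Energy released per kilogram of H2 burned, in MJ/kg."""
    kj_per_g = ENTHALPY_KJ_PER_MOL / MOLAR_MASS_H2_G_PER_MOL
    return kj_per_g  # kJ/g is numerically equal to MJ/kg
```

This works out to roughly 142 MJ per kilogram of hydrogen.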
Electron energy levels
Depiction of a hydrogen atom with the size of the central proton shown, and the atomic diameter shown as about twice the Bohr model radius (image not to scale).
The ground state energy level of the electron in a hydrogen atom is −13.6 eV, which is equivalent to an ultraviolet photon of roughly 91 nm wavelength.
The energy levels of hydrogen can be calculated fairly accurately using the Bohr model of the atom, which conceptualizes the electron as "orbiting" the proton in analogy to the Earth's orbit of the Sun. However, the atomic electron and proton are held together by electromagnetic force, while planets and celestial objects are held by gravity. Because of the discretization of angular momentum postulated in early quantum mechanics by Bohr, the electron in the Bohr model can only occupy certain allowed distances from the proton, and therefore only certain allowed energies.
A more accurate description of the hydrogen atom comes from a purely quantum mechanical treatment that uses the Schrödinger equation, Dirac equation or even the Feynman path integral formulation to calculate the probability density of the electron around the proton. The most complicated treatments allow for the small effects of special relativity and vacuum polarization. In the quantum mechanical treatment, the electron in a ground state hydrogen atom has no angular momentum at all—illustrating how the "planetary orbit" differs from electron motion.
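The −13.6 eV / ~91 nm correspondence quoted above can be checked directly from the Bohr-model energy formula, E_n = −13.6 eV / n² (a sketch; the h·c constant below is a standard rounded value, not from the text):

```python
HC_EV_NM = 1239.84  # Planck constant times speed of light, in eV*nm (rounded standard value)

def bohr_energy_ev(n):
    """Energy of level n in the Bohr model of hydrogen, in eV."""
    return -13.6 / n ** 2

# Wavelength of a photon carrying the full ground-state binding energy:
# about 91 nm, in the ultraviolet, as stated above.
ionization_wavelength_nm = HC_EV_NM / abs(bohr_energy_ev(1))
```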
Elemental molecular forms
There exist two different spin isomers of hydrogen diatomic molecules that differ by the relative spin of their nuclei. In the orthohydrogen form, the spins of the two protons are parallel and form a triplet state with a molecular spin quantum number of 1 (1⁄2+1⁄2); in the parahydrogen form the spins are antiparallel and form a singlet with a molecular spin quantum number of 0 (1⁄2–1⁄2). At standard temperature and pressure, hydrogen gas contains about 25% of the para form and 75% of the ortho form, also known as the "normal form". The equilibrium ratio of orthohydrogen to parahydrogen depends on temperature, but because the ortho form is an excited state and has a higher energy than the para form, it is unstable and cannot be purified. At very low temperatures, the equilibrium state is composed almost exclusively of the para form. The liquid and gas phase thermal properties of pure parahydrogen differ significantly from those of the normal form because of differences in rotational heat capacities, as discussed more fully in spin isomers of hydrogen. The ortho/para distinction also occurs in other hydrogen-containing molecules or functional groups, such as water and methylene, but is of little significance for their thermal properties.
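The roughly 75%/25% "normal form" split quoted above is simply nuclear spin statistics in the high-temperature limit: the ortho (triplet) manifold has three spin states and the para (singlet) has one. A minimal sketch:

```python
# High-temperature (spin-degeneracy) limit of the ortho:para ratio in H2.
ortho_states = 3  # triplet, total nuclear spin 1
para_states = 1   # singlet, total nuclear spin 0

ortho_fraction = ortho_states / (ortho_states + para_states)  # 0.75
para_fraction = para_states / (ortho_states + para_states)    # 0.25
```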
The uncatalyzed interconversion between para and ortho H2 increases with increasing temperature; thus rapidly condensed H2 contains large quantities of the high-energy ortho form that converts to the para form very slowly. The ortho/para ratio in condensed H2 is an important consideration in the preparation and storage of liquid hydrogen: the conversion from ortho to para is exothermic and produces enough heat to evaporate some of the hydrogen liquid, leading to loss of liquefied material. Catalysts for the ortho-para interconversion, such as ferric oxide, activated carbon, platinized asbestos, rare earth metals, uranium compounds, chromic oxide, or some nickel compounds, are used during hydrogen cooling.
Covalent and organic compounds
While H2 is not very reactive under standard conditions, it does form compounds with most elements. Hydrogen can form compounds with elements that are more electronegative, such as halogens (e.g., F, Cl, Br, I), or oxygen; in these compounds hydrogen takes on a partial positive charge. When bonded to fluorine, oxygen, or nitrogen, hydrogen can participate in a form of medium-strength noncovalent bonding with the hydrogen of other similar molecules, a phenomenon called hydrogen bonding that is critical to the stability of many biological molecules. Hydrogen also forms compounds with less electronegative elements, such as metals and metalloids, where it takes on a partial negative charge. These compounds are often known as hydrides.
Hydrogen forms a vast array of compounds with carbon called the hydrocarbons, and an even vaster array with heteroatoms that, because of their general association with living things, are called organic compounds. The study of their properties is known as organic chemistry and their study in the context of living organisms is known as biochemistry. By some definitions, "organic" compounds are only required to contain carbon. However, most of them also contain hydrogen, and because it is the carbon-hydrogen bond which gives this class of compounds most of its particular chemical characteristics, carbon-hydrogen bonds are required in some definitions of the word "organic" in chemistry. Millions of hydrocarbons are known, and they are usually formed by complicated synthetic pathways that seldom involve elementary hydrogen.
Compounds of hydrogen are often called hydrides, a term that is used fairly loosely. The term "hydride" suggests that the H atom has acquired a negative or anionic character, denoted H−, and is used when hydrogen forms a compound with a more electropositive element. The existence of the hydride anion, suggested by Gilbert N. Lewis in 1916 for group 1 and 2 salt-like hydrides, was demonstrated by Moers in 1920 by the electrolysis of molten lithium hydride (LiH), producing a stoichiometric quantity of hydrogen at the anode. For hydrides other than group 1 and 2 metals, the term is quite misleading, considering the low electronegativity of hydrogen. An exception in group 2 hydrides is BeH2, which is polymeric. In lithium aluminium hydride, the AlH4− anion carries hydridic centers firmly attached to the Al(III).
Although hydrides can be formed with almost all main-group elements, the number and combination of possible compounds varies widely; for example, more than 100 binary borane hydrides are known, but only one binary aluminium hydride. Binary indium hydride has not yet been identified, although larger complexes exist.
In inorganic chemistry, hydrides can also serve as bridging ligands that link two metal centers in a coordination complex. This function is particularly common in group 13 elements, especially in boranes (boron hydrides) and aluminium complexes, as well as in clustered carboranes.
Protons and acids
Oxidation of hydrogen removes its electron and gives H+, which contains no electrons and a nucleus which is usually composed of one proton. That is why H+ is often called a proton. This species is central to the discussion of acids. Under the Brønsted–Lowry acid–base theory, acids are proton donors, while bases are proton acceptors.
A bare proton, H+, cannot exist in solution or in ionic crystals because of its unstoppable attraction to other atoms or molecules with electrons. Except at the high temperatures associated with plasmas, such protons cannot be removed from the electron clouds of atoms and molecules, and will remain attached to them. However, the term 'proton' is sometimes used loosely and metaphorically to refer to positively charged or cationic hydrogen attached to other species in this fashion, and as such is denoted "H+" without any implication that any single protons exist freely as a species.
To avoid the implication of the naked "solvated proton" in solution, acidic aqueous solutions are sometimes considered to contain a less unlikely fictitious species, termed the "hydronium ion" (H3O+). However, even in this case, such solvated hydrogen cations are more realistically conceived as being organized into clusters that form species closer to H9O4+. Other oxonium ions are found when water is in acidic solution with other solvents.
Although exotic on Earth, one of the most common ions in the universe is the H3+ ion, known as protonated molecular hydrogen or the trihydrogen cation.
NASA has investigated the use of atomic hydrogen as a rocket propellant. It could be stored in liquid helium to prevent it from recombining into molecular hydrogen. When the helium is vaporized, the atomic hydrogen would be released and combine back into molecular hydrogen. The result would be an intensely hot stream of hydrogen and helium gas. The liftoff weight of rockets could be reduced by 50% by this method.
Most interstellar hydrogen is in the form of atomic hydrogen because the atoms can seldom collide and combine. These atoms are the source of the important 21 cm hydrogen line in astronomy, at 1420 MHz.
Hydrogen discharge (spectrum) tube
Deuterium discharge (spectrum) tube
Protium, the most common isotope of hydrogen, has one proton and one electron. Unique among all stable isotopes, it has no neutrons (see diproton for a discussion of why others do not exist).
Hydrogen has three naturally occurring isotopes, denoted 1H, 2H and 3H. Other, highly unstable nuclei (4H to 7H) have been synthesized in the laboratory but not observed in nature.
1H is the most common hydrogen isotope, with an abundance of more than 99.98%. Because the nucleus of this isotope consists of only a single proton, it is given the descriptive but rarely used formal name protium.
2H, the other stable hydrogen isotope, is known as deuterium and contains one proton and one neutron in the nucleus. All deuterium in the universe is thought to have been produced at the time of the Big Bang, and has endured since that time. Deuterium is not radioactive, and does not represent a significant toxicity hazard. Water enriched in molecules that include deuterium instead of normal hydrogen is called heavy water. Deuterium and its compounds are used as a non-radioactive label in chemical experiments and in solvents for 1H-NMR spectroscopy. Heavy water is used as a neutron moderator and coolant for nuclear reactors. Deuterium is also a potential fuel for commercial nuclear fusion.
3H is known as tritium and contains one proton and two neutrons in its nucleus. It is radioactive, decaying into helium-3 through beta decay with a half-life of 12.32 years. It is radioactive enough to be used in luminous paint, making it useful in such things as watches; the glass prevents the small amount of radiation from getting out. Small amounts of tritium are produced naturally by the interaction of cosmic rays with atmospheric gases; tritium has also been released during nuclear weapons tests. It is used in nuclear fusion reactions, as a tracer in isotope geochemistry, and in specialized self-powered lighting devices. Tritium has also been used in chemical and biological labeling experiments as a radiolabel.
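Given the 12.32-year half-life quoted above, the surviving fraction of a tritium sample follows the usual exponential decay law (illustrative sketch):

```python
def tritium_fraction_remaining(t_years, half_life_years=12.32):
    """Fraction of an initial tritium sample remaining after t_years."""
    return 0.5 ** (t_years / half_life_years)

# After one half-life (12.32 years), half remains; after two, a quarter.
```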
Hydrogen is the only element that has different names for its isotopes in common use today. During the early study of radioactivity, various heavy radioactive isotopes were given their own names, but such names are no longer used, except for deuterium and tritium. The symbols D and T (instead of 2H and 3H) are sometimes used for deuterium and tritium, but the corresponding symbol for protium, P, is already in use for phosphorus and thus is not available for protium. In its nomenclatural guidelines, the International Union of Pure and Applied Chemistry (IUPAC) allows any of D, T, 2H, and 3H to be used, although 2H and 3H are preferred.
The exotic atom muonium (symbol Mu), composed of an antimuon and an electron, is also sometimes considered as a light radioisotope of hydrogen, due to the mass difference between the antimuon and the electron. Muonium was discovered in 1960. During the muon's 2.2 µs lifetime, muonium can enter into compounds such as muonium chloride (MuCl) or sodium muonide (NaMu), analogous to hydrogen chloride and sodium hydride respectively.
Java is another general-purpose programming language and supports object-oriented programming concepts like polymorphism and inheritance.
Java was created by Sun Microsystems and since then many programmers use Java programming language to write network based applications and business applications. You can also write games using Java programming language.
A list of Java programs is given below for the following purposes:
- 1. Learn to program in Java by practicing these programs.
- 2. Learn the Java programming concepts using these examples.
- 3. Write similar programs on your own.
Prerequisites for Java Examples
Programming is all about doing, not just learning concepts and syntax of the programming languages. To learn Java you must use the practical approach – practice as many example programs as you can. This will lay a strong foundation for you to learn longer and more complex programs. To practice Java you need the following.
- Java Compiler – there are two ways to set up a Java compiler. First, you can install NetBeans or Eclipse with the latest Java Development Kit (JDK). Second, install the latest JDK version and work with the command-line utilities.
- Java Tutorial – before you practice example programs, learn the basic Java concepts to get started.
- Pen and Paper – if you try new inputs for the example Java programs and want to verify the results, use pen and paper to solve the problem manually and check the outputs. This is especially true of the math programs.
Books serve as an alternative resource to learn by example. We recommend two good Java programming books that contain several examples. You will have more examples to practice and learn from. Do read the reviews for each book before you decide to get them.
Note: We earn a small commission when you make a purchase.
Java Math Programs
This section consists of all programs that solve mathematical problems from simple algebra to advanced calculus.
- Java Program to Compute Average Mark of Students.
- Program in Java for Addition of Two Matrices.
- Program to Add Two Numbers in Java.
- Program to Compute Area of Circle in Java.
- Program to Find Area of Triangle in Java.
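To give a flavor of the exercises listed above, here is a minimal sketch of the "area of a circle" program; the class and method names are illustrative, not the site's actual solution.

```java
// Computes the area of a circle; a minimal version of one listed exercise.
public class CircleArea {
    static double area(double radius) {
        return Math.PI * radius * radius;
    }

    public static void main(String[] args) {
        // Example: radius 2.0 gives an area of about 12.566.
        System.out.println(area(2.0));
    }
}
```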
Java Operator Programs
In this section you will find all programs that use Java operators like assignment, arithmetic, logical, bitwise shift, and so on.
Java String Programs
This section is for all types of string-related programs in Java. It consists of demos for different string classes.
- Java Program to Demo Built-in String Functions.
- Program to Read Text using Bufferedreader in Java.
- Java Program demonstrating usage of String-buffer Class and String-builder Class.
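To illustrate the StringBuilder demos referenced above, a small sketch (class and method names are illustrative):

```java
// StringBuilder avoids creating a new String object on every concatenation.
public class BuilderDemo {
    static String repeatJoin(String word, int times) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < times; i++) {
            if (i > 0) sb.append('-');  // separator between repetitions
            sb.append(word);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(repeatJoin("java", 3)); // java-java-java
    }
}
```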
Java Array Programs
This section consists of all programs and demos involving arrays in Java. There are two types of arrays in Java – single-dimensional and two-dimensional.
Java Class & Object Related Programs
In this section we list all examples demonstrating object-oriented programming with Java and programs that are built using these principles. At the end of this section, we include some mini-projects in Java.
The 700 km long Talas-Fergana fault in Kyrgyzstan is similar to the San-Andreas fault in the USA and geologists believe the area is highly vulnerable to seismic activity.
The fault cuts across the largest hydroelectric power and irrigation scheme in Central Asia. The Toktogul scheme has a generating capacity of 1,200 megawatts and incorporates a reservoir containing 20 cubic kilometres of water behind a 230-metre-high dam. It provides power and irrigation water to Kyrgyzstan, Uzbekistan, Tajikistan, Kazakhstan and Russia, so it is vital for the region's economic, social and agricultural stability.
These countries’ competing demands for power and water mean Toktogul is already the focus of cross-border tensions. Disruption could be catastrophic, putting their fragile economies at risk, provoking civil unrest and providing opportunities for the region’s extremist groups to exploit the resulting disorder.
Radioactive and toxic waste dumps in the area, left by Soviet-era uranium mining, means there is a further threat of contamination to irrigated land in the Fergana Valley that provides food and livelihoods for 10 million people.
Dr Derek Rust, a geologist at the University of Portsmouth, is the Director of the three-year NATO ‘Science for Peace’ project. The research team also includes the University of Milan-Bicocca and the national seismological institutes of Kyrgyzstan and Uzbekistan. They will use the grant to analyse potential geo-environmental risks and produce hazard scenarios for the governments of the countries at risk.
He said: “Faults are created by movements in the Earth’s crust linked to plate tectonics, a theory which was dismissed by Soviet geologists when Toktogul was designed and built in the late 60s and early 1970s; consequently the significance of the fault was not appreciated.
“We now know that the Talas-Fergana fault has a long history of activity with the last faulting event occurring recently in geological terms, approximately 400 – 500 years ago. Another event is inevitable; it’s just a question of when.
“Understanding the real threats to the environmental security of this region and finding ways to mitigate against these threats is crucial to avoiding conflicts over water and power supplies and avoiding extensive pollution of vital lands.”
The Talas-Fergana fault results from the Indian tectonic plate ploughing northwards into Eurasia at a rate of around 50 mm per year, the same active plate tectonics that continues to create the Himalayas and the Tibetan plateau.
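As simple illustrative arithmetic (not a slip-rate estimate for the fault itself — only part of the total plate convergence is accommodated on any one structure), the convergence accumulated since the last faulting event can be computed from the figures quoted above:

```python
def accumulated_convergence_m(rate_mm_per_yr, years):
    """Total convergence, in metres, at a steady rate over a given interval."""
    return rate_mm_per_yr * years / 1000.0

# ~50 mm/yr over the 400-500 years since the last event:
low = accumulated_convergence_m(50, 400)   # 20.0 m
high = accumulated_convergence_m(50, 500)  # 25.0 m
```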
Rust predicts that seismic activity in the area of the Talas-Fergana fault could lead to the breaching of landslide-dammed lakes, causing flooding and contamination downstream by uranium waste.
The Sichuan earthquake in May this year, which measured 7.8 on the Richter scale, created around 30 landslide-dammed lakes. Rust says this earthquake can serve as a model for what may happen during a similar earthquake on the Talas-Fergana fault.
“An earthquake is like a spring being steadily wound until it breaks, releasing the stored energy,” he said. “A major earthquake in mountainous terrain is very likely to produce large landslides.”
Rust and his team will spend three years examining existing seismic data and gathering new information from satellite remote sensing imagery, aerial photography, radiocarbon dating of geological features and using several portable seismometers. He said that establishing a pattern of how previous tectonic activity has affected the region is the best guide to what may happen in the future.
But he is clear that the research is not about predicting earthquakes but understanding them to minimise their effects.
“For example we can estimate long term ‘slip rates’ on big faults and their patterns of behaviour – but exact earthquake prediction is the elusive holy grail of earthquake geology.”
The findings will be presented to the governments of the countries at risk when the project is completed in 2011.
Lisa Egan | alfa
The Use and Limitations of Dendrochronology in Studying Effects of Air Pollution on Forests
The annual ringwidths of trees can be used to search for hypothesized air pollution effects on forests. This search is extremely complicated by the inherent statistical properties of ringwidth data and the high level of uncertainty regarding the sources of variance observed in the ringwidths. A linear aggregate model for ringwidths, which highlights the general classes of variance that may be found in a tree-ring series, is described. Dendrochronological principles and techniques that can be used to create a tree-ring chronology that is suitable for rigorous statistical analysis and hypothesis testing are described. An analysis of a red spruce tree-ring chronology indicates that a decline in ringwidths since 1968 cannot be explained by a linear temperature response model.
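The linear aggregate model referred to here is conventionally written as follows (this is the standard dendrochronological decomposition associated with Cook's 1985 work, reproduced as an assumption — the paper's own notation may differ):

```latex
% Ring width R in year t as an aggregate of variance sources:
%   A_t: age/size-related growth trend
%   C_t: climate signal
%   D1_t, D2_t: endogenous and exogenous disturbance pulses
%   E_t: unexplained (error) variance; \delta = 0/1 presence indicators
R_t = A_t + C_t + \delta D1_t + \delta D2_t + E_t
```

A hypothesized pollution effect would appear within the exogenous disturbance term, which is why the other classes of variance must be removed or modeled before testing for a pollution signal.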
Keywords: Tree Ring · Radial Growth · Pollution Effect · Exogenous Disturbance · Pollution Signal
Humans rely on ecosystems to supply food and other necessities for a healthy human life. Certain human activities have had a devastating impact on ecosystems, however. From pollution to overharvesting, the damage and exploitation of wildlife and natural vegetation by humans have left some ecosystems in bad shape.
Many byproducts of industrialization have harmed ecosystems. For example, burning coal to produce energy releases chemicals like sulfur dioxide. Such chemicals in the air lead to acid rain and acid deposition, which can harm plant and animal life, especially as it acidifies aquatic ecosystems. In addition, liquid chemical runoff from human activities can negatively impact ecosystems. Such runoff is not just produced by big industrial factories. Zinc and lead runoff from lawns, driveways and sidewalks in residential areas can damage ecosystems.
Urban sprawl is the ever-increasing spread of cities into formerly rural areas. Clear-cutting and deforestation have occurred to accommodate the push of urbanization into rural regions. Besides resulting in the loss of forests and other vegetation, such activities lead to habitat fragmentation. When roads, homes or even vehicles cut through the original ecosystem, animals can be cut off from a large part of their habitat and, by extension, their population.
Introduction of Invasive Species
The transfer of species can be unwitting, such as a plant spore hitching a ride on a shoe, or it can be deliberate, as was the case with the Asian carp in the United States. According to the National Wildlife Federation, 42 percent of endangered animals are threatened by non-native species. These species pose a problem because they compete for food and may not serve as good food for the native species. In addition, invasive species can decrease biodiversity and physically alter the ecosystem. For example, an invasive species can change the soil's chemical composition.
Overharvesting, sometimes called overexploitation, happens when too many individuals of a species are taken from their natural habitat. This can happen as a result of habitat destruction, but more often it is a result of hunting or fishing. Such unsustainable practices can especially be seen in the fishing industry, where species like cod, haddock and flounder have had their populations drastically reduced. Overharvesting can lead to an imbalance in ecosystems, upsetting the food chain and harming other nonharvested species.
- Research article
- Open Access
Student construction of phylogenetic trees in an introductory biology course
© Dees and Momsen 2016
Received: 25 November 2015
Accepted: 14 April 2016
Published: 21 April 2016
Phylogenetic trees have become increasingly essential across biology disciplines. Consequently, learning about phylogenetic trees has become an important component of biology education and an area of interest for biology education research. Construction tasks, in which students generate phylogenetic trees from some type of data, are often used for instruction. However, the impact of these exercises on student learning is uncertain, in part due to our fragmented knowledge of what students construct during the tasks. The goal of this project was to develop a more robust method for describing student-generated phylogenetic trees, which will support future investigations that attempt to link construction tasks with student learning.
Through iterative examination of data from an introductory biology course, we developed a method for describing student-generated phylogenetic trees in terms of style, conventionality, and accuracy. Students used the diagonal style more often than the bracket style for construction tasks. The majority of phylogenetic trees were constructed conventionally, and variable orientation of branches was the most common unconventional feature. In addition, the majority of phylogenetic trees were generated correctly (no errors) or adequately (minor errors only) in terms of accuracy. Suggesting extant taxa are descended from other extant taxa was the most common major error, while empty branches and extra nodes were very common minor errors.
The method we developed to describe student-constructed phylogenetic trees uncovered several trends that warrant further investigation. For example, while diagonal and bracket phylogenetic trees contain equivalent information, student preference for using the diagonal style could impact comprehension. In addition, despite a lack of explicit instruction, students generated phylogenetic trees that were largely conventional and accurate. Surprisingly, accuracy and conventionality were also dependent on each other. Our method for describing phylogenetic trees constructed by students is based on data from one introductory biology course at one institution, and the results are likely limited. We encourage researchers to use our method as a baseline for developing a more generalizable tool, which will support future investigations that attempt to link construction tasks with student learning.
Phylogenetic trees are visual representations that depict hypothesized evolutionary relationships among nested groups of taxa (Novick and Catley 2007; Baum and Offner 2008). These tools are used primarily by evolutionary biologists to evaluate evidence for evolution (Baum et al. 2005), but phylogenetic trees have also become increasingly essential in nearly all disciplines of biology (Omland et al. 2008). Consequently, learning about phylogenetic trees has become an important component of biology education and an area of interest for biology education research.
Undergraduates in the sciences should develop competence with visual representations in general (National Research Council 2012). However, “tree-thinking” skills are particularly important for students due to the subject matter of phylogenetic trees. Evolution is a unifying theory in biology (Dobzhansky 1973) and a fundamental concept for biological literacy (American Association for the Advancement of Science 2011). As conceptual models, phylogenetic trees offer insights into patterns and processes of evolution and provide powerful scaffolding for learning about biology (Novick and Catley 2007). However, the utility of phylogenetic trees is tempered by widespread misinterpretations among biology students (Meir et al. 2007; Halverson et al. 2011; Novick and Catley 2013; Dees et al. 2014) that potentially create obstacles to understanding evolution (Meir et al. 2007; Gregory 2008). The importance of phylogenetic trees for biologists and lack of basic interpretation skills among students necessitate continued research to address this discrepancy.
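Relatedness questions of the kind that trip students up are answered by locating the most recent common ancestor of two taxa on the branch pattern. The sketch below illustrates this with a hypothetical five-taxon cladogram stored as a parent-to-children map; the taxa, topology, and helper names are illustrative only, not taken from the course materials.

```python
# Hypothetical cladogram: internal nodes n1-n3 plus a root, five extant taxa.
TREE = {
    "root": ["n1", "lizard"],
    "n1": ["n2", "frog"],
    "n2": ["mouse", "n3"],
    "n3": ["chimp", "human"],
}

def path_to_root(tree, taxon):
    """Return the list of ancestors from a taxon up to (and including) the root."""
    parent_of = {c: p for p, children in tree.items() for c in children}
    path, node = [], taxon
    while node in parent_of:
        node = parent_of[node]
        path.append(node)
    return path

def most_recent_common_ancestor(tree, a, b):
    """The first shared node on the two root-ward paths is the MRCA."""
    ancestors_a = path_to_root(tree, a)
    for node in path_to_root(tree, b):
        if node in ancestors_a:
            return node
    return None
```

On this topology, humans share a more recent common ancestor with frogs than with lizards, which is exactly the kind of relatedness judgment cladograms support.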
Some of the most common instructional activities concerning phylogenetic trees are construction exercises, in which students build phylogenetic trees from provided or self-generated data. Such tasks assume that constructing phylogenetic trees will improve interpretation skills, but research exploring this relationship is limited and conflicting. Eddy et al. (2013) observed that scaffolded construction tasks significantly improved student interpretations of phylogenetic trees. However, Halverson (2011) concluded that students must develop interpretation skills before construction abilities. Thus, the effects of construction exercises on student learning remain uncertain.
One reason that such effects are uncertain could be that what students construct during the tasks is largely unknown. Halverson (2011) only characterized representations from students as valid phylogenetic trees or one of several alternatives (e.g., dichotomous keys, flow charts, food webs, pictures, and lists), while the conflicting investigation by Eddy et al. (2013) did not describe the representations created by students. A third study, Young et al. (2013), was limited to measuring the prevalence of basic phylogenetic tree characteristics (e.g., single common ancestor, branches, and hierarchy) in representations generated by students before and after instructional activities.
Three research questions guided this investigation:

1. Which style of phylogenetic tree (diagonal or bracket) do introductory biology students prefer to construct?
2. How conventionally do introductory biology students construct phylogenetic trees, and what are the common deviations?
3. How accurately do introductory biology students construct phylogenetic trees, and what are the common errors?
This investigation was conducted in the context of an introductory biology course for science and related majors at a large, public university in the midwestern United States. The large-enrollment course (n = 88) served students at various stages in their academic programs (24 % freshmen, 33 % sophomores, 18 % juniors, and 25 % seniors) and comprised three units: evolution (first 6 weeks), form and function of plants and animals (next 5 weeks), and ecology (last 5 weeks). Students often collaborated in permanent, self-selected groups of three or four individuals during instructional activities and assessments (Johnson et al. 1998; Smith 2000), including exams with individual and group sections (Cortright et al. 2003). All classes were observed, and instructional materials and assessments were collected to document instruction throughout the course.
Phylogenetic tree instruction
Phylogenetic trees were introduced during the evolution unit through reading assignments in the textbook (Freeman 2011), individual and group reading quizzes, and a series of multiple-choice questions presented by the instructor and answered by students using letter cards (Freeman et al. 2007). These tasks familiarized students with basic characteristics of phylogenetic trees, such as nodes and monophyletic groups, and introduced the critical concept of taxa relatedness (Novick and Catley 2013; Dees et al. 2014). Responses to letter card questions were ungraded but public, which allowed students to view answers from neighbors in preparation for collaborative learning activities. Correct answers using appropriate reasoning were established through group and class discussions, and by students iteratively responding to the same or similar letter card questions if necessary. All phylogenetic trees used during the course were cladograms, in which only branch patterns contain reliable information (Gregory 2008). The instructor briefly presented examples of phylograms (branches scaled for degree of divergence) and chronograms (branches scaled for time), but students were never asked to reason from them during the course.
Following the phylogenetic tree introduction, students completed a group homework featuring a diagonal phylogenetic tree of chordates accompanied by a series of interpretation questions. The prompts specifically concerned trait possession, synapomorphies, most recent common ancestry, monophyletic groups, taxa relatedness, and convergent evolution. Student interpretations of taxa relatedness and convergent evolution submitted by groups were exclusively incorrect (i.e., failed to include both the correct answer and correct reasoning). Responses also exhibited a wide array of inappropriate reasoning strategies (Morabito et al. 2010; Dees et al. 2014), which compelled the instructor to respond with feedback and remedial activities. Phylogenetic trees were revisited during class through additional letter card questions with subsequent discussions. It is important to note that students were not asked to construct phylogenetic trees prior to data collection.
The second phylogenetic tree construction exercise (Additional file 1: Figures S1–S2) was placed on the individual section of the comprehensive final exam. The two versions of the task involve different taxa and traits but result in the same branch pattern, with no unresolved nodes or convergent evolution. In preparation for the subsequent group component of the final exam, two students from each group of four received version A, while the other two students received version B. For groups of three, at least one student received each task version. The third phylogenetic tree construction exercise (Additional file 1: Figure S3) was created by merging both versions of the construction prompt from the individual component of the final exam into a larger and more challenging task for the group component of the final exam. The resulting phylogenetic tree does not contain unresolved nodes, but unlike the earlier construction exercises, convergent evolution is present. All phylogenetic trees constructed for the group section of the evolution unit exam (n = 23), individual component of the final exam (n = 77), and group section of the final exam (n = 22) constitute the data for this investigation.
Rubric development and coding
Phylogenetic trees constructed during the individual component of the comprehensive final exam (only data obtained from individuals) were analyzed for associations between task version, style, conventionality, and accuracy using Fisher’s exact tests (Fisher 1934). The null hypothesis is that one variable of phylogenetic tree construction, such as style, is independent of a second variable, such as conventionality. An exact test for goodness-of-fit was used to analyze the distribution of diagonal and bracket phylogenetic trees from the individual component of the final exam, where the null hypothesis is an equal distribution (McDonald 2014). Phylogenetic trees from the group component of the evolution unit exam and group section of the final exam were not analyzed for variable associations or style distribution due to small sample sizes and low statistical power.
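The two exact tests named here can be sketched with only the standard library. The counts used in the test cases are hypothetical, not the study's data, and a real analysis would more likely call a statistics package such as scipy.

```python
from math import comb

def exact_goodness_of_fit(k, n, p=0.5):
    """Two-sided exact binomial test: total probability of all outcomes
    no more likely than observing k successes in n trials under p."""
    def pmf(x):
        return comb(n, x) * p**x * (1 - p)**(n - x)
    p_obs = pmf(k)
    return sum(pmf(x) for x in range(n + 1) if pmf(x) <= p_obs + 1e-12)

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher's exact test for a 2x2 table [[a, b], [c, d]],
    summing hypergeometric probabilities of tables at least as extreme."""
    n = a + b + c + d
    row1, col1 = a + b, a + c
    def hyper(x):
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)
    p_obs = hyper(a)
    lo, hi = max(0, row1 - (n - col1)), min(row1, col1)
    return sum(hyper(x) for x in range(lo, hi + 1) if hyper(x) <= p_obs + 1e-12)
```

For example, an even 5-of-10 split gives a goodness-of-fit p-value of 1.0 (no evidence against the null of an equal distribution), while a perfectly diagonal 2x2 table such as [[3, 0], [0, 3]] gives p = 0.1.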
Phylogenetic trees generated by introductory biology students during the group component of the evolution unit exam (n = 23), individual section of the final exam (n = 77), and group component of the final exam (n = 22) were evaluated in terms of style, conventionality, and accuracy.
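A minimal sketch of how such an evaluation could be recorded in code is shown below. The style categories (diagonal, bracket) and the accuracy categories (correct: no errors; adequate: minor errors only; incorrect: major errors) follow the text, while the `TreeCode` and `tally` names and the rubric's exact fields are assumptions for illustration.

```python
from dataclasses import dataclass, field

STYLES = {"diagonal", "bracket"}
ACCURACY = {"correct", "adequate", "incorrect"}

@dataclass
class TreeCode:
    """One student-constructed tree, coded on the three rubric dimensions."""
    style: str
    conventional: bool
    unconventional_features: list = field(default_factory=list)
    accuracy: str = "correct"

def tally(codes, attr):
    """Count coded trees by one attribute, e.g. style or accuracy."""
    counts = {}
    for code in codes:
        key = getattr(code, attr)
        counts[key] = counts.get(key, 0) + 1
    return counts
```

Tallying a list of such records by `style`, `conventional`, or `accuracy` reproduces the kinds of distributions reported in the tables below.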
Unconventional features observed in phylogenetic trees constructed by students

Feature                  Group unit exam (n = 23)   Individual final exam (n = 77)   Group final exam (n = 22)
Variable orientation     7 (30 %)                   15 (19 %)                        4 (18 %)
Taxa on branches         3 (13 %)                   8 (10 %)                         2 (9 %)
(label not recovered)    1 (4 %)                    6 (8 %)                          0 (0 %)
(label not recovered)    2 (9 %)                    5 (6 %)                          1 (5 %)
Major and minor errors observed in phylogenetic trees constructed by students

Error                    Group unit exam (n = 23)   Individual final exam (n = 77)   Group final exam (n = 22)
(label not recovered)    0 (0 %)                    10 (13 %)                        3 (14 %)
(label not recovered)    1 (4 %)                    10 (13 %)                        3 (14 %)
(label not recovered)    5 (22 %)                   12 (16 %)                        5 (23 %)
(label not recovered)    6 (26 %)                   31 (40 %)                        5 (23 %)
(label not recovered)    10 (43 %)                  30 (39 %)                        8 (36 %)
(label not recovered)    0 (0 %)                    7 (9 %)                          1 (5 %)
Construction tasks are some of the most common instructional activities concerning phylogenetic trees, but the impact of these exercises on student learning is uncertain (Halverson 2011; Eddy et al. 2013). One factor contributing to this uncertainty could be our fragmented knowledge of what students construct during the tasks (Halverson 2011; Young et al. 2013). The goal of this project was to develop a more robust method for describing student-generated phylogenetic trees, which will support future research that attempts to link construction tasks with learning. By examining responses to construction tasks from an introductory biology course, we developed a method for describing student-generated phylogenetic trees in terms of style, conventionality, and accuracy.
Students showed a strong preference for constructing diagonal phylogenetic trees across all three assessments (Fig. 3). While diagonal and bracket phylogenetic trees are equivalent in terms of information, the choice of style could influence comprehension. For example, Novick and Catley (2013) concluded that students performed significantly better with bracket phylogenetic trees on a variety of interpretation tasks, regardless of background in biology. Thus, our students favored the style that may hinder their interpretation abilities. However, we caution that the present study did not explicitly investigate how students interpret self-constructed phylogenetic trees, which is another important research topic for understanding the effects of construction tasks on learning.
The majority of students generated conventional phylogenetic trees during all three assessments (Fig. 4), despite receiving no explicit instruction on how to construct phylogenetic trees from data. Therefore, many students adopted conventions on their own, presumably through repeated exposure to phylogenetic trees. Surprisingly, accuracy was dependent on conventionality, in that unconventional phylogenetic trees were more likely to be incorrect. The cause of this outcome is unknown, but we speculate that students who constructed unconventional phylogenetic trees may have had less experience with the diagrams, and thus were also more likely to generate incorrect phylogenetic trees. Lack of experience could be due to many factors, such as class absences (rare during phylogenetic tree instruction), non-participation in group instructional activities, or poor study habits. Unfortunately, we have no way of systematically investigating this result due to the group nature of instruction and unknown study habits of our students. However, the relationship between conventionality and accuracy is an important topic for future research.
The majority of phylogenetic trees were correct or adequate in terms of accuracy across all three assessments (Fig. 5), including the group section of the final exam when convergent evolution was present. Thus, students were relatively proficient at constructing phylogenetic trees, which is notable considering the lack of explicit instruction. However, we caution that minor construction errors (Table 3), which were common during all three assessments (Table 5), are not necessarily without consequences. Major errors, such as incorrect relative placement of taxa, directly impact interpretations of trait possession and taxa relatedness, which are skills that were assessed during the course. Minor errors could influence student thinking in other ways that are more difficult to measure. For example, empty branches on phylogenetic trees could reflect a common belief that trait evolution occurs only at nodes (Baum et al. 2005). Establishing relationships between each construction error and specific misinterpretations is an important goal for future research.
Although students constructed diagonal phylogenetic trees more often than bracket phylogenetic trees, this outcome could have been impacted by the curriculum (Additional file 1: Table S1). The course textbook (Freeman 2011) contained only bracket phylogenetic trees, and instructional materials were also biased toward the bracket style. However, assessments (homework, reading quizzes, and exams) were skewed toward diagonal phylogenetic trees. Because assessment strongly impacts learning behaviors [e.g., (Cohen-Schotanus 1999; Wormald et al. 2009)], students could have been tacitly steered toward using the diagonal style. Future classroom studies involving style should control the curriculum such that both styles are equally represented in all aspects of the course.
Students were only required to build one phylogenetic tree, in the style of their choice, during the individual section of the final exam (only data obtained from individuals). Thus, the study design for style was between-student rather than a stronger within-student approach. It is particularly an issue in this case due to the strong preference for constructing diagonal phylogenetic trees, which resulted in a smaller number of bracket phylogenetic trees for comparison. Due to this limitation, no conclusions should be drawn from this study about the effects of style on conventionality and accuracy. Future investigations should use a stronger within-student design that requires students to generate both diagonal and bracket phylogenetic trees during construction tasks.
Two major construction errors, incorrect relatedness and incorrect traits, were somewhat rare in phylogenetic trees constructed by students (Table 5). However, some of these errors could have been provoked by the assessment prompts, which did not state the polarity of traits. We assumed that introductory biology students would treat the provided traits as derived rather than ancestral characters (i.e., traits were gained over time). Although we did not find any evidence to suggest that students assumed the traits were ancestral, it is possible that the lack of polarity information in our prompts affected student reasoning. Future studies could protect against this possibility by explicitly providing polarity information to students before construction tasks or within prompts.
The impact of phylogenetic tree construction exercises on student learning is uncertain based on the literature, and one factor contributing to this uncertainty could be our fragmented knowledge of what students construct during the tasks. We developed a method for describing phylogenetic trees generated by students, which will support future research that attempts to link construction tasks with student learning. However, our method is based on data from one introductory biology course at one institution, and the results likely do not reflect undergraduate biology students as a whole. Other researchers and instructors may find additional errors and unconventional features that were not present or not recognized in our data. We encourage researchers to use our method of style, conventionality, and accuracy as a baseline for developing a more generalizable tool. In addition, we urge others to use our method for research that advances the broader goal of linking construction tasks with student learning.
JD designed the assessment items, completed all data analyses, prepared figures, and contributed to data collection and manuscript preparation. JLM contributed to data collection and manuscript preparation. Both authors read and approved the final manuscript.
This investigation was conducted in compliance with the Institutional Review Board regulations (protocol #SM12217) and was funded by the National Science Foundation (DRL-1420321) and a STEM Education Fellowship from North Dakota State University. We are grateful to Rob Zastre, Julia Bowsher, and Lisa Montplaisir for research support and Elena Bray Speth for comments on earlier versions of the manuscript.
The authors declare that they have no competing interests.
Open Access: This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
- American Association for the Advancement of Science. Vision and change in undergraduate biology education: a call to action. Washington, DC; 2011.
- Baum DA, Offner S. Phylogenies & tree-thinking. Am Biol Teach. 2008;70(4):222–9.
- Baum DA, Smith SD, Donovan SS. The tree-thinking challenge. Science. 2005;310:979–80.
- Catley KM, Novick LR. Seeing the wood for the trees: an analysis of evolutionary diagrams in biology textbooks. Bioscience. 2008;58(10):976–87.
- Cohen J. A coefficient of agreement for nominal scales. Educ Psychol Measur. 1960;20(1):37–46.
- Cohen-Schotanus J. Student assessment and examination rules. Med Teach. 1999;21(3):318–21.
- Cortright RN, Collins HL, Rodenbaugh DW, DiCarlo SE. Student retention of course content is improved by collaborative-group testing. Adv Physiol Educ. 2003;27(3):102–8.
- Dees J, Momsen JL, Niemi J, Montplaisir L. Student interpretations of phylogenetic trees in an introductory biology course. CBE-Life Sci Educ. 2014;13:666–76.
- Dobzhansky T. Nothing in biology makes sense except in the light of evolution. Am Biol Teach. 1973;35(3):125–9.
- Eddy SL, Crowe AJ, Wenderoth MP, Freeman S. How should we teach tree-thinking? An experimental test of two hypotheses. Evol Educ Outreach. 2013;6:13.
- Fisher RA. Statistical methods for research workers. 5th ed. Edinburgh: Oliver and Boyd; 1934.
- Freeman S. Biological science. 4th ed. San Francisco: Benjamin Cummings; 2011.
- Freeman S, O'Connor E, Parks JW, Cunningham M, Hurley D, Haak D, Dirks C, Wenderoth MP. Prescribed active learning increases performance in introductory biology. CBE-Life Sci Educ. 2007;6:132–9.
- Gregory TR. Understanding evolutionary trees. Evol Educ Outreach. 2008;1:121–37.
- Halverson KL. Improving tree-thinking one learnable skill at a time. Evol Educ Outreach. 2011;4:95–106.
- Halverson KL, Pires CJ, Abell SK. Exploring the complexity of tree thinking expertise in an undergraduate systematics course. Sci Educ. 2011;95:794–823.
- Johnson DW, Johnson RT, Smith KA. Cooperative learning returns to college: what evidence is there that it works? Change. 1998;30(4):26–35.
- McDonald JH. Handbook of biological statistics. 3rd ed. Baltimore: Sparky House Publishing; 2014.
- Meir E, Perry J, Herron JC, Kingsolver J. College students' misconceptions about evolutionary trees. Am Biol Teach. 2007;69(7):71–6.
- Morabito NP, Catley KM, Novick LR. Reasoning about evolutionary history: post-secondary students' knowledge of most recent common ancestry and homoplasy. J Biol Educ. 2010;44(4):166–74.
- National Research Council. Discipline-based education research: understanding and improving learning in undergraduate science and engineering. Washington: The National Academies Press; 2012.
- Novick LR, Catley KM. Understanding phylogenies in biology: the influence of a Gestalt perceptual principle. J Exp Psychol Appl. 2007;13(4):197–223.
- Novick LR, Catley KM. Reasoning about evolution's grand patterns: college students' understanding of the tree of life. Am Educ Res J. 2013;50(1):138–77.
- Novick LR, Stull AT, Catley KM. Reading phylogenetic trees: the effects of tree orientation and text processing on comprehension. Bioscience. 2012;62(8):757–64.
- Omland KE, Cook LG, Crisp MD. Tree thinking for all biology: the problem with reading phylogenies as ladders of progress. BioEssays. 2008;30(9):854–67.
- Smith KA. Going deeper: formal small-group learning in large classes. New Dir Teach Learn. 2000;81:25–46.
- Thomas DR. A general inductive approach for analyzing qualitative evaluation data. Am J Eval. 2006;27(2):237–46.
- Wormald BW, Schoeman S, Somasunderam A, Penn M. Assessment drives learning: an unavoidable truth? Anat Sci Educ. 2009;2(5):199–204.
- Young AK, White BT, Skurtu T. Teaching undergraduate students to draw phylogenetic trees: performance measures and partial successes. Evol Educ Outreach. 2013;6:16.
It has been speculated that lead poisoning may have played a role in the fall of the Roman Empire; the poisoning is thought to have been caused by the practice of concentrating grape juice in lead containers.
Though the introduction of lead-free gasoline has reduced damage to the environment, the annual production of lead continues to increase worldwide because lead is still used in batteries, glass, and electronic components. So far, however, there has been little research into what causes lead's toxic effects at the molecular level. French researchers have now applied quantum chemistry to very simple enzyme models and gained new insights. As they report in Angewandte Chemie, lead's "electron shield" appears to be the main culprit.
Lead does the most damage to the nervous system, kidneys, liver, brain, and blood. This damage is especially severe in children because it can be irreversible. Complexation agents that grab onto the metal cations are used as antidotes. However, these agents are not lead-specific, meaning that they also remove other important metal cations from the body.
C. Gourlaouen and O. Parisel (Laboratoire de Chimie Théorique, Université Paris 6) took a closer look at two proteins to which lead likes to bind. Calmodulin, a calcium-binding protein, plays an important role in regulating and transporting calcium ions in the human body. A calcium ion binds to seven ligands at the active centers of the enzyme. If one of the four possible calcium ions of calmodulin is replaced by lead, the lead ion remains roughly heptacoordinated, but this active center becomes distorted and inefficient, and the three remaining sites operate with reduced efficiency.
δ-Aminolevulinic acid dehydratase is essential for the biosynthesis of hemoglobin. Inhibition of this enzyme disrupts blood formation to the point of anemia. At the active center, a zinc ion binds to four ligands, three of which involve a sulfur atom. When lead replaces zinc, it binds to only the three sulfur atoms. The reason is the emerging free electron pair of the lead cation: it acts as an electronic shield on one side, pushing away the fourth ligand. Such a dramatic geometrical distortion at the active center could explain why lead inhibits this enzyme.
The different behavior of lead in these two enzymes demonstrates that it can form complexes in which the metal–ligand bonds either point in all directions or into only one hemisphere, while the other hemisphere is filled by the free electron pair. This observation may help in the design of future lead-specific antidotes.
Author: Olivier Parisel, Université Pierre et Marie Curie, Paris VI (France), http://www.lct.jussieu.fr/rubrique13.html
Title: Is an Electronic Shield at the Molecular Origin of Lead Poisoning? A Computational Modeling Experiment
Angewandte Chemie International Edition 2007, 46, No. 4, 553–556, doi: 10.1002/anie.200603037
Scientists uncover the role of a protein in production & survival of myelin-forming cells
19.07.2018 | Advanced Science Research Center, GC/CUNY
NYSCF researchers develop novel bioengineering technique for personalized bone grafts
18.07.2018 | New York Stem Cell Foundation
For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth.
To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength...
For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications.
Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar...
Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction.
A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in in the synthesis of new chemical...
Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy.
"Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy....
Ultra-short, high-intensity X-ray flashes open the door to the foundations of chemical reactions. Free-electron lasers generate these kinds of pulses, but there is a catch: the pulses vary in duration and energy. An international research team has now presented a solution: Using a ring of 16 detectors and a circularly polarized laser beam, they can determine both factors with attosecond accuracy.
Free-electron lasers (FELs) generate extremely short and intense X-ray flashes. Researchers can use these flashes to resolve structures with diameters on the...
This tutorial is part 2 of 2 in the series.
When you get started with Git, it can be quite overwhelming. First, the idea of a distributed version control system and the benefits of it are not clear for everyone. And second, there are plenty of commands with additional options in order to master Git on the command line. It can be intimidating.
However, you will never need all the combinations of commands and options in Git. For me they break down to only a few essential commands that I use for web development. Everything else can be looked up whenever complex problems arise.
In this article, I want to give you a brief introduction to Git and GitHub, how to get started and how to use it. Afterward, I want to show you my essential commands for Git that enabled me to do web development in the recent years. It’s no magic and doesn’t need to be overwhelming.
Why Git and GitHub?
Git is a version control system for tracking file/folder snapshots and their changes across multiple machines. Most of the time, the files are related to software, for instance the source code of an application, but they don’t need to be only of this kind. I already met writers and content marketers using Git to organize their files and to collaborate with others.
These files and folders are grouped into a repository. Multiple people can collaborate on a repository. Basically a repository is your project’s container for Git and GitHub. People can create a local copy of the repository, modify the folders/files, and sync all the changes back to the remote repository. All collaborators can pull the recent changes from the remote repository to their local repository.
While Git happens on the command line by executing commands to pull, modify and push repositories, GitHub is the web-based Git platform. You can create repositories on the GitHub website and synchronize them with a project on your local machine. Afterward, you can use Git on the command line to execute commands.
Remote vs. Local Repository?
In GitHub, a person or organization (e.g. Facebook, Airbnb) can have repositories. These repositories can have files or whole folder structures for source code, markdown or other content. Unless a repository is private, everyone has reading access to it. It is a remote repository, because it is decentralized from your local machine.
Yet everyone is able to make a copy of the remote repository to his or her local machine. It becomes a local repository. You can make changes in your local repository that are not immediately reflected in the remote repository. You decide when or whether you want to merge the changes back to the remote repository.
The local repository can be used to experiment with source code, to add improvements or to fix issues. Eventually, these adjustments in the local repository get merged back to the remote repository. However, the collaborator has to have writing permission for the remote repository.
The distribution of repositories makes it possible to collaborate as a group on one remote repository when everyone has reading and writing access. A local repository is used to perform changes while the remote repository is the single source of truth.
GitHub offers the possibility to make repositories private. But you would have to upgrade to a paid GitHub account. Once your GitHub profile is upgraded, you can make any repository private thus only visible for yourself.
Getting started with Git and GitHub Setup
Now that you know about Git and GitHub, you might wonder how to get started. That's fairly straightforward, covered by multiple guides, but also by the GitHub website itself.
First, visit the official GitHub website to create an account. Second, you have to install Git on your command line. Every operating system should come with a default command line, but you can check this developer setup guide to get to know my setup. Third, I highly recommend to setup SSH for your GitHub account. It is optional but secures your access to GitHub. In addition, it leaves out the tedious task where you always have to enter your credentials when you push changes of your local repository to a remote repository on GitHub.
Last but not least, explore and socialize on GitHub. You can explore different repositories by visiting profiles of people and organizations. You can watch and star the repositories to get updates and to show your admiration. You can even start to contribute to a repository as an open source contributor.
In order to socialize, you can follow people who start interesting projects or discussions on GitHub. Try it out by following my account to have your first social connection. I would be keen to see you using it.
Initialize a repository with Git and GitHub
In the beginning, you somehow have to initialize a Git repository. You can initialize a local repository by using the
git init command in a project’s folder on your local machine.
A local repository has a .git folder where all the information about the repository, for instance the commit history, is saved. Another file, a .gitignore file, can be added to ignore certain files which shouldn't be added to the remote repository. Ignored files are only in your local repository.
git init
touch .gitignore
For instance, you may want to ignore the .env file where you store sensible environment variables of your project or the node_modules/ folder for not uploading all your project dependencies to your remote GitHub repository.
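As a sketch of how this plays out, the following shell snippet initializes a throwaway repository and verifies that an ignored .env file never gets staged. The temporary directory, the demo identity and the SECRET variable are made up for illustration:

```shell
# Hypothetical demo: initialize a repository and ignore a secrets file.
set -e
repo=$(mktemp -d)                 # throwaway directory for the demo
cd "$repo"
git init -q
git config user.name "Demo"       # local identity so commits would work
git config user.email "demo@example.com"

printf '.env\nnode_modules/\n' > .gitignore   # files Git should not track
echo "SECRET=123" > .env                      # stands in for sensitive config

git add .                          # .env is skipped automatically
tracked=$(git ls-files)            # lists what is actually staged/tracked
echo "$tracked"                    # only .gitignore shows up
```

Running `git status` at this point would likewise show `.env` neither staged nor untracked, because `.gitignore` filters it out.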
After you have used the
git init command in your local project, you can create a repository on GitHub. There you can give it a name, an optional description and license (e.g. MIT). Don’t use the checkbox for adding a README.md. Instead, leave the checkbox unchecked. Then you get the instructions to link your local repository to your remote repository in the next step.
In addition, you may want to add a README.md file in your project which is then displayed in your repository on GitHub. Basically that's everything you need to know for initializing a git project, adding the .gitignore file to it, connecting it to your remote repository on GitHub, and adding changes to it with the add, commit, and push sequence. You will learn more about this sequence in the next section.
Otherwise, if you check the checkbox, you will have a ready to go remote repository which you can clone then to your local machine for having it as local repository. If you want to have a copy of a remote repository, you can clone it by using
git clone <repository_url> to your local machine.
After you have linked your local repository and added, committed and pushed your initial project to the remote repository (not when you have cloned it), you can start to adjust your project (local repository). Afterward, you always follow the add, commit and push sequence. More about this in the next section.
Push your Changes
Over the past years, I have noticed that the GitHub commands I use break down to only a few essential ones that I use in recurring scenarios. These essential commands were quite sufficient for me to come along in web development.
Once you have a local repository, you want to “commit” changes to the code base. Each commit is saved as an atomic step that changes your repository. It is saved in the Git history that is accessible on the command line and GitHub.
Commits come with a commit message. You will see later on how to write a commit message. In addition, a hash is automatically generated to identify your commit. You don’t have to care about the hash in the beginning, but later it can be used to jump to specific points in history or to compare commits with each other.
The commits happen in your local repository before you eventually “push” them to the remote repository where they are accessible and visible for everyone. You can accumulate multiple commits locally before you sync them to the remote repository with a push.
How would you get your changes from a local repository to the remote repository? There are three essential commands: add, commit, push.
First, you can either add all or only selected changed files for the next commit.
git add .
git add <path/to/file>
These files will change their status from unstaged to staged files. You can always verify it with
git status. When files are staged, they can be committed. There is also a way back from a staged to an unstaged file.
git reset HEAD <path/to/file>
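The unstaged → staged → unstaged round trip can be watched with `git status --short`. A minimal sketch follows; the file name and commit message are made up, and the initial commit exists only so that HEAD is defined for the reset:

```shell
# Hypothetical demo of staging and unstaging a change.
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.name "Demo"; git config user.email "demo@example.com"
echo one > notes.txt
git add notes.txt
git commit -qm "initial"            # HEAD must exist for 'git reset HEAD'

echo two >> notes.txt
git add notes.txt                   # unstaged -> staged
staged=$(git status --short)        # "M  notes.txt": M in the staged column
git reset -q HEAD notes.txt         # staged -> unstaged again
unstaged=$(git status --short)      # " M notes.txt": M in the unstaged column
```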
Second, you can commit the staged files with a commit that comes with a commit message. The message describes your change. There are two ways to commit. You can use the shortcut commit command to add the commit message inline:
git commit -m "<message>"
Also you can use the default commit command to make a more elaborated commit message with multi-lines afterward.
The latter command will open up your default command line editor. Usually, the default command line editor is vim. In vim you would type your commit message. Afterward, you can save and exit vim by using
:wq which stands for write and quit. Most of the time, you will use the shortcut commit though. It is fast and often an inlined commit message is sufficient.
Now, before you get to the third step, multiple commits can accumulate in your local repository. Eventually, in the third step, you would push all the commits in one command to the remote repository.
git push origin master
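The whole add–commit–push sequence can be exercised without touching GitHub by pushing to a local bare repository, which stands in for the remote. Everything here (paths, identity, commit message) is illustrative:

```shell
# Hypothetical demo: push commits to a local bare "remote".
set -e
work=$(mktemp -d); remote=$(mktemp -d)
git init -q --bare "$remote"            # stands in for the GitHub repository

cd "$work"
git init -q
git config user.name "Demo"; git config user.email "demo@example.com"
git remote add origin "$remote"

echo hello > README.md
git add README.md
git commit -qm "docs: add README"
branch=$(git symbolic-ref --short HEAD) # master or main, depending on Git
git push -q origin "$branch"

pushed=$(git --git-dir="$remote" rev-list --count "$branch")  # commits on the remote
```

`git symbolic-ref --short HEAD` is used instead of hard-coding `master`, because newer Git versions may default to a different initial branch name.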
These are the three necessary steps to get your changes from your local repository to the remote repository. But when you collaborate with others, there can be an intermediate step before you push your changes. It can happen that someone else already pushed changes in the remote repository while you made your changes in your local repository. Thus, you would have to pull all the changes from the remote repository before you are allowed to push your own changes. It can be as simple as that:
git pull origin master
However, I never pull directly. Instead, I pull rebase:
git pull --rebase origin master
What’s the difference between pull and pull rebase? A basic
git pull would simply put all the changes from the remote repository on top of your changes. With a pull rebase, it is the other way around. The changes from the remote repository come first, then your changes will be added on top. Essentially a pull rebase has two benefits:
- it keeps an ordered git history, because your changes are always added last
- it helps you to resolve conflicts, if you run into them, because you can adjust your own changes more easily
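Both points can be seen in a small simulation with two clones of the same remote. Repository names and commit messages ("base", "theirs", "yours") are made up; the claim being checked is only the ordering — after `git pull --rebase`, your commit sits on top:

```shell
# Hypothetical demo: pull --rebase replays your commit on top.
set -e
remote=$(mktemp -d); a=$(mktemp -d); b=$(mktemp -d)
git init -q --bare "$remote"

git clone -q "$remote" "$a" 2>/dev/null; cd "$a"   # first collaborator
git config user.name "A"; git config user.email "a@example.com"
echo base > file; git add file; git commit -qm "base"
branch=$(git symbolic-ref --short HEAD)
git push -q origin "$branch"

git clone -q "$remote" "$b"; cd "$b"               # second collaborator
git config user.name "B"; git config user.email "b@example.com"
echo yours > mine; git add mine; git commit -qm "yours"     # your local commit

cd "$a"
echo theirs > other; git add other; git commit -qm "theirs" # pushed meanwhile
git push -q origin "$branch"

cd "$b"
git pull -q --rebase origin "$branch"
top=$(git log --format=%s -1)       # "yours" ends up as the newest commit
```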
If you have changed but uncommitted files when you pull from the remote repository, you are asked to stash your changed files first. After you have pulled all the changes, you can apply the stash again. Stashing will be explained later in the article.
Git Status, Log and History
There are three essential Git commands that give you a status of your project about current and recent changes. They don't alter anything in your local repository but only show you information. For instance, whenever you want to check the local staged and unstaged changes, type:
git status
Whenever you want to see your local unstaged changes compared to the recent commit, type:
git diff
And whenever you want to see the git history of commits, type:
git log
git log is not helpful for most people. Each commit takes too much space and it is hard to scan the history. You can use the following configuration to set up a more concise alias:
git config --global alias.lg "log --color --graph --pretty=format:'%Cred%h%Creset -%C(yellow)%d%Creset %s %Cgreen(%cr) %C(bold blue)<%an>%Creset' --abbrev-commit"
Now you can use it with
git lg instead of
git log. Try it out to see the difference.
Git Branches are used for multiple use cases. Imagine you are working on a new feature for your project. You want to open a new branch for it to track the changes independently from the whole project, to be more specific: independently from the master branch. Before you merge the branch into your master branch, you (or others) can review the changes.
Another use case is when you work in a team of developers. You want to give everyone the freedom to work independently on improvements, bug fixes and features. Thus, it makes sense to branch out from the master branch for these use cases. What are the essential commands for Git branching? You can either create a new branch on your own:
git checkout -b <branch>
Or checkout a branch that is already there.
git checkout <branch>
When the branch is newly created by another collaborator and not yet known to your local repository, you can fetch all the branch information from the remote repository. Branches after all are tracked remotely as well. Afterward, you can checkout the branch in your local repository.
git fetch
git checkout <branch>
Once you are on the branch, you can pull all the recent changes for it from the remote repository.
git pull --rebase origin <branch>
Now you can start to adjust the code,
git add . and
git commit them, and push your changes eventually. But rather than pushing them to the master branch, you would push them to the branch.
git push origin <branch>
That’s how you can work on so called feature branches for your project. Other developers can collaborate on these branches and eventually the branches are merged in a Pull Request to the master branch.
Merge a Pull Request
At some point, you want to merge a branch to the master branch. You would use the GitHub interface to open a Pull Request (PR) before merging it. Pull Requests help to inspire discussions and peer reviews for an improved code quality and to share knowledge across collaborators.
Before opening a PR, I usually follow these steps to checkout the branch, get all the updates to merge them with my own, get all the recent changes from the master branch too, and force push all the changes to the branch itself.
First, when being on the master branch, update the master branch to the recent changes:
git pull --rebase origin master
Second, checkout the branch:
git checkout <branch>
If you have not the branch yet, fetch all the branches from the remote repository before and then checkout the branch:
git fetch
git checkout <branch>
Third, pull rebase all recent changes from the branch:
git pull --rebase origin <branch>
Fourth, rebase all the changes locally from the recent master branch on top:
git rebase master
Last but not least, force push all the changes to the remote branch:
git push -f origin <branch>
The branch is synced with changes from all collaborators, your changes and changes from the master branch. Finally, when the branch is updated in the remote repository, you can hit the “Merge Pull Request” button on GitHub.
Sometimes, when you pull the recent changes from a remote repository or when you rebase the master on a branch, you run into conflicts. Conflicts happen when Git cannot resolve multiple changes on the same file. That can happen more often than expected when collaborating with multiple people.
For instance, imagine it happens for a
git rebase master on your branch. The command line would indicate that it stopped the rebase and shows you the conflicting files. That’s no reason to panic. You can open the indicated files and resolve the conflicts. In the file you should see the changes well separated: the changes from master (HEAD) and from your branch (usually the commit message). You have to decide which of both versions you want to take in order to resolve the conflict. After you have resolved all conflicts in all files (usually all files are shown on the command line), you can continue the rebase:
git add .
git rebase --continue
If you run again into conflicts, you can resolve them and run the commands again.
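The conflict workflow can be simulated end to end: a branch and master edit the same line, the rebase stops, the file is resolved by hand, and the rebase continues. The `GIT_EDITOR=true` setting only keeps the demo non-interactive; file names and messages are made up:

```shell
# Hypothetical demo: provoke and resolve a rebase conflict.
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.name "Demo"; git config user.email "demo@example.com"
echo base > greeting; git add greeting; git commit -qm "base"
master=$(git symbolic-ref --short HEAD)

git checkout -q -b topic
echo topic > greeting; git commit -qam "topic change"

git checkout -q "$master"
echo master > greeting; git commit -qam "master change"

git checkout -q topic
git rebase "$master" >/dev/null 2>&1 || true   # stops on the conflict
echo merged > greeting                         # resolve the file by hand
git add greeting
GIT_EDITOR=true git rebase --continue >/dev/null 2>&1

tip=$(git log --format=%s -1)    # "topic change", now rebased onto master
```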
A git stash happens usually when you want to throw away changes permanently or temporarily.
The latter, when you want to stash only temporarily, can be used when you want to do something else in between. For instance, fixing a bug or creating a PR on someone's behalf.
The stash is a heap. You can pick up the latest stash to apply it again to your local repository.
git stash apply
If you don’t want to “throw away” all changes by stashing, but only selected files, you can use the checkout command instead:
git checkout -- <path/to/file>
The file goes from unstaged to not changed at all. But remember, whereas stashing allows you to get the stash back from the heap, the checkout reverts all changes in the files. So you are not able to retrieve these changes.
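A quick sketch of the temporary-stash case: the working tree goes clean after `git stash` and dirty again after `git stash apply`. File names here are illustrative:

```shell
# Hypothetical demo: stash a change and bring it back.
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.name "Demo"; git config user.email "demo@example.com"
echo one > notes; git add notes; git commit -qm "initial"

echo two >> notes                  # an uncommitted change
git stash -q                       # working tree is clean again
clean=$(git status --porcelain)    # empty output: nothing changed
git stash apply -q                 # pick the change back off the heap
dirty=$(git status --porcelain)    # " M notes" is back
```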
Once you merged a Pull Request, you usually want to delete the remote and local branch.
git branch -d <branch>
git push origin :<branch>
While the first command deletes the branch on your local machine, the second command deletes the remote branch on GitHub. It is always good to clean up after yourself, so you should make this a habit.
I must admit, it is not an essential command for Git, but I use it often to organize my commits on a branch. I like to have a tidy branch before I open it as a PR for others. Tidying a branch means to bring commits into an order that makes sense, rewriting commit messages or "squashing" commits. To squash commits means to merge multiple commits into one.
When using an interactive rebase, you can decide how many commits you want to interactively adjust.
git rebase -i HEAD~<number>
Afterward, since you adjusted the Git history, you need to force push your changes. A force push will overwrite the Git commits in your remote repository.
git push -f origin master
In general, you should be careful with force pushes. A good rule of thumb is that you can do them on a branch, but never on the master branch. In larger projects a force push is often programmatically not allowed on the master branch.
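An interactive rebase normally opens an editor; for a reproducible sketch the todo list can be rewritten programmatically via `GIT_SEQUENCE_EDITOR` (this assumes GNU sed for the in-place edit). Three made-up commits are squashed down to two:

```shell
# Hypothetical demo: squash the last two commits non-interactively.
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.name "Demo"; git config user.email "demo@example.com"
for n in 1 2 3; do echo "$n" > f; git add f; git commit -qm "step $n"; done

# Turn line 2 of the rebase todo list from "pick" into "squash",
# merging "step 3" into "step 2". GIT_EDITOR=true accepts the message.
GIT_SEQUENCE_EDITOR='sed -i -e "2s/^pick/squash/"' GIT_EDITOR=true \
  git rebase -i HEAD~2 >/dev/null 2>&1

count=$(git rev-list --count HEAD)   # 3 commits squashed down to 2
```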
Commit Message Conventions
When you collaborate with others or want to have tidy commit messages on your own, you can follow Git commit message conventions. There are a handful of conventions. I am used to following the ones that were brought up in the Angular community:
- feat: A new feature
- fix: A bug fix
- docs: A documentation change
- style: A code style change, doesn’t change implementation details
- refactor: A code change that neither fixes a bug nor adds a feature
- perf: A code change that improves performance
- test: When testing your code
- chore: Changes to the build process or auxiliary tools and libraries
They follow this syntax:

<type>(<scope>): <message>
An example taken from the command line could be:
git commit -m "feat(todo-list): add filter feature"
That’s how you can keep a tidy commit history for yourself but also for your team.
Git aliases are used to make up your own Git commands from the built-in ones. Aliases allow you to make Git commands more concise or to group them. For instance, you can group two Git commands in order to execute them in one command. That would, for example, make sense if you wanted to delete a branch: the local and remote deletion would happen in one command, something like git nuke. In another scenario, you could abbreviate git pull --rebase with a shorter alias.
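Aliases live in the Git configuration. The alias names below (`pur`, `nuke`) are made up for illustration, and they are set per repository so your global configuration is left alone; a `!`-prefixed alias runs a shell command, which is how two Git commands can be grouped:

```shell
# Hypothetical demo: define a shorthand and a grouped alias.
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git config alias.pur "pull --rebase"   # shorthand: git pur
# grouped command: delete a branch locally and on the remote in one go
git config alias.nuke '!f() { git branch -d "$1" && git push origin --delete "$1"; }; f'
expansion=$(git config alias.pur)      # reads back "pull --rebase"
```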
Pull Requests vs. Issues
Pull Requests (PR) and Issues are used in collaboration with multiple people.
When someone in your team created a new branch to work independently on a feature, the branch will lead to a PR eventually. A PR can be reviewed by other collaborators on GitHub. You can have discussions, reviews and have the option to merge or close the PR.
An issue is mostly opened before a branch and PR is created. The issue states a problem in the project and stimulates a discussion. The conversation can lead to a specification that can be used as blueprint to implement a solution. Therefore, you would create a PR based on the Issue. Issues can be labeled to keep track of different categories of issues.
Finally, it is also possible to use PRs and Issues in a private, single-person repository. Even when you work on your own, you can use these features of GitHub to keep better track of problems and changes.
These GitHub and Git essentials should be everything you need to get started in this area. You shouldn’t feel intimidated by the setup nor by the commands. After all, the commands break down to several atomic ones that can be used in only a few essential scenarios.
The essential Git commands break down to:
- git init
- git clone
- git add
- git commit
- git push
- git pull --rebase
- git fetch
- git status
- git log (git lg)
- git diff
Obviously, there are more Git commands (git bisect, git reflog, …) that you could master. However, I don’t find myself using them very often. You can look these up, once you need them, before you have to memorize them. After all, in most cases you will more likely lookup the issue you want to solve in Git rather than a specific command. Most of these issues in Git are well explained when you search for them.
-height
    The size of the image. These might be create-only with new() taking a size which is then fixed. If the image can be resized then set() of -width and/or -height does a resize.

-file (string)
    Set by new() reading a file, or load() or save() if passed a filename, or just by set() ready for a future load() or save().

-file_format (string)
    The name of the file format loaded or to save as. This is generally an abbreviation like XPM, set by load() or set() and then used by save().

-hotx, -hoty (integers, or maybe -1 or maybe undef)
    The coordinates of the hotspot position. Images which can be a mouse cursor or similar have a position within the image which is the active pixel for clicking etc. For example XPM and CUR (cursor form of ICO) formats have hotspot positions.

-zlib_compression (integer -1 to 9, or undef)
    The compression level for images which use Zlib, such as PNG. 0 is no compression, 9 is maximum compression. -1 is the Zlib compiled-in default (usually 6). undef means no setting, to use an image library default if it has one, or the Zlib default.

    For reference, PNG format doesn't record the compression level used in the file, so for it -zlib_compression can be set() to control a save(), but generally won't read back from a load().

-quality_percent (integer 0 to 100, or undef)
    The quality level for saving lossy image formats such as JPEG. 0 is the worst quality, 100 is the best. Lower quality should mean a smaller file, but fuzzier. undef means no setting, which gives some image library default.
Sloping lines are drawn by a basic Bresenham line drawing algorithm with integer-only calculations. It ends up drawing the same set of pixels no matter which way around the two endpoints are passed.
Would there be merit in rounding odd numbers of pixels according to which way around line ends are given? Eg. a line 0,0 to 4,1 might do 2 pixels on y=0 and 3 on y=1, but 4,1 to 0,0 the other way around. Or better to have consistency either way around? For reference, in the X11 drawing model the order of the ends doesn't matter for wide lines, but for implementation-dependent thin lines it's only encouraged, not required.
Ellipses are drawn with the midpoint ellipse algorithm. This algorithm chooses between points x,y or x,y-1 according to whether the position x,y-0.5 is inside or outside the ellipse (and similarly x+0.5,y on the vertical parts).
The current ellipse code ends up with 0.5s in the values, which means floating point, but is still exact since binary fractions like 0.5 are exactly representable. Some rearrangement and factors of 2 could make it all-integer. The discriminator in the calculation may exceed 53-bits of float mantissa at around 160,000 pixels wide or high. That might affect the accuracy of the pixels chosen, but should be no worse than that.
The current code draws a diamond with the Bresenham line algorithm along each side. Just one line is calculated and is then replicated to the four sides, which ensures the result is symmetric. Rounding in the line (when the width is not a multiple of the height, or vice versa) is biased towards making the pointier vertices narrower. That tends to look better, especially when the diamond is small.
The subclasses like GD or PNGwriter which are front-ends to other drawing libraries don't necessarily use these base algorithms, but can be expected to do something sensible within the given line endpoints or ellipse bounding box. (Among the image libraries it's surprising how variable the quality of the ellipse drawing is.)
Image::Xpm, Image::Xbm, Image::Pbm, Image::Base::GD, Image::Base::Imager, Image::Base::Imlib2, Image::Base::Magick, Image::Base::PNGwriter, Image::Base::SVG, Image::Base::SVGout, Image::Base::Text, Image::Base::Multiplex
Mark Summerfield. I can be contacted as <email@example.com> - please include the word imagebase in the subject line.
Copyright (c) Mark Summerfield 2000. All Rights Reserved.
Copyright (c) Kevin Ryde 2010, 2011, 2012.
This module may be used/distributed/modified under the LGPL.
|perl v5.20.3||IMAGE::BASE (3)||2012-08-01| | <urn:uuid:66da5385-8e16-41fa-9130-1c082095d589> | 2.734375 | 1,045 | Documentation | Software Dev. | 59.586062 | 95,624,382 |
If you are looking for an answer to the question What is Artificial Intelligence? and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
With advances in machine learning and the deployments of neural networks, logistic regression-powered models are expanding their uses throughout PayPal. PayPal's deep learning system is able to filter out deceptive merchants and crack down on sales of illegal products. Kutsyy explained the machines can identify "why transactions fail, monitoring businesses more efficiently," avoiding the need to buy more hardware for problem solving. The AI Podcast is available through iTunes, DoggCatcher, Google Play Music, Overcast, PlayerFM, Podbay, Pocket Casts, PodCruncher, PodKicker, Stitcher and Soundcloud.
For example, for personalized recommendations, we have been working with learning to rank methods that learn individual rankings over item sets. Figure 1: Typical data science workflow, starting with raw data that is turned into features and fed into learning algorithms, resulting in a model that is applied on future data. This means that this pipeline is iterated and improved many times, trying out different features, different forms of preprocessing, different learning methods, or maybe even going back to the source and trying to add more data sources. Probably the main difference between production systems and data science systems is that production systems are real-time systems that are continuously running.
By using memory-optimized tables, resume features are stored in main memory and disk IO could be significantly reduced. If the database engine server detects more than 8 physical cores per NUMA node or socket, it will automatically create soft-NUMA nodes that ideally contain 8 cores. We then further created 4 SQL resource pools and 4 external resource pools to specify the CPU affinity of using the same set of CPUs in each node. We can create resource governance for R services on SQL Server by routing those scoring batches into different workload groups (Figure. | <urn:uuid:7042fca2-38d7-4a6b-8d90-4e075c881067> | 2.765625 | 460 | Content Listing | Science & Tech. | 21.073853 | 95,624,384 |
TORONTO: A mysterious and massive hole, with an area of 80,000 square kilometres, has been spotted in the winter sea ice cover around Antarctica, scientists say.
This opening, known as a polynya, is the largest observed in the Weddell Sea since the 1970s, according to Professor Kent Moore of the University of Toronto in Canada.
At its largest extent, this winter's polynya had an area of open water close to 80,000 square kilometres (km2), he said.
Researchers said the event marks the second year in a row in which the polynya has formed, although it was not as large last year.
Without the insulating effect of sea ice cover, a polynya allows the atmosphere and ocean to exchange heat, momentum and moisture leading to significant impacts on climate.
Ocean convection occurs within the polynya bringing warmer water to the surface that melts the sea ice and prevents new ice from forming.
Moore collaborated with members of the Southern Ocean Carbon and Climate Observations and Modelling (SOCCOM) project to investigate these polynyas and their climate impacts.
Due to the harshness of the Antarctic winter and the difficulties of operating within its pack ice, there exist few direct observations of these polynyas and their impacts on the atmospheric and oceanic circulation, researchers said.
As part of the SOCCOM project, robotic profiling floats capable of operating under sea ice have deployed in the region for the past number of years.
Last month, one of these floats surfaced inside the Weddell Sea polynya, providing unique data on its existence, researchers said.
With these new ocean measurements, along with space-based observations and climate models, comes the possibility that these polynyas' secrets and their impacts on the climate may finally be revealed, they said. | <urn:uuid:bb8e5f3b-f658-424e-9312-9fe767fae96c> | 3.421875 | 377 | News Article | Science & Tech. | 29.523555 | 95,624,400 |
Cleaning up municipal and industrial wastewater can be dirty business, but engineers at the University of Colorado Boulder have developed an innovative wastewater treatment process that not only mitigates carbon dioxide (CO2) emissions, but actively captures greenhouse gases as well.
The treatment method, known as Microbial Electrolytic Carbon Capture (MECC), purifies wastewater in an environmentally-friendly fashion by using an electrochemical reaction that absorbs more CO2 than it releases while creating renewable energy in the process.
"This energy-positive, carbon-negative method could potentially contain huge benefits for a number of emission-heavy industries," said Zhiyong Jason Ren, an associate professor of Civil, Environmental, and Architectural Engineering at CU-Boulder and senior author of the new study, which was recently published in the journal Environmental Science and Technology.
Wastewater treatment typically produces CO2 emissions in two ways: the fossil fuels burned to power the machinery, and the decomposition of organic material within the wastewater itself. Plus, existing wastewater treatment technologies consume high amounts of energy. Public utilities in the United States treat an estimated 12 trillion gallons of municipal wastewater each year and consume approximately 3 percent of the nation's grid energy.
Existing carbon capture technologies are energy-intensive and often entail costly transportation and storage procedures. MECC uses the natural conductivity of saline wastewater to facilitate an electrochemical reaction that is designed to absorb CO2 from both the water and the air. The process transforms CO2 into stable mineral carbonates and bicarbonates that can be used as raw materials by the construction industry, used as a chemical buffer in the wastewater treatment cycle itself or used to counter acidity downstream from the process such as in the ocean.
The reaction also yields excess hydrogen gas, which can be stored and harnessed as energy in a fuel cell.
The findings offer the possibility that wastewater could be treated effectively on-site without the risks or costs typically associated with disposal. Further research is needed to determine the optimal MECC system design and assess the potential for scalability.
"The results should be viewed as a proof-of-concept with promising implications for a wide range of industries," said Ren.
Power companies have many reasons to perk up at the possibility of a carbon-negative wastewater treatment solution. The Environmental Protection Agency's Clean Power Plan, expected to take full effect in the year 2020, will require power plants to comply with reduced CO2 emission levels.
The study may also have positive long-term implications for the world's oceans. Approximately 25 percent of CO2 emissions are subsequently absorbed by the sea, which lowers pH, alters ocean chemistry and hence threatens marine organisms, especially coral reefs and shellfish. Dissolved carbonates and bicarbonates produced via MECC, however, could act to chemically counter these effects if added to the ocean.
"This treatment system generates alkalinity through electrochemical means and we could potentially use that to help offset the effects of ocean acidification," said Greg Rau, a senior researcher at the Institute of Marine Sciences at the University of California Santa Cruz and a co-author of the study. "This is one of several environmentally-friendly things this technology does."
Many wastewater treatment plants are located on coastlines, raising the possibility that future MECC implementation in these facilities could couple both CO2 and ocean acidity mitigation.
Auster, P. J., Gjerde, K., Heupel, E., Watling, L., Grehan, A., and Rogers, A. D. 2011. Definition and detection of vulnerable marine ecosystems on the high seas: problems with the "move-on" rule. ICES Journal of Marine Science, 68: 254-264.

Fishing in the deep sea in areas beyond national jurisdiction has produced multiple problems related to management for conservation and sustainable use. Based on a growing concern, the United Nations has called on States to prevent significant adverse impacts to vulnerable marine ecosystems (VMEs) in the deep sea. Although Food and Agriculture Organization (FAO) guidelines for management were produced through an international consultative process, implementing criteria for designation of VMEs and recognition of such areas when encountered by fishing gear have been problematic. Here we discuss assumptions used to identify VMEs and current requirements related to unforeseen encounters with fishing gear that do not meet technological or ecological realities. A more precautionary approach is needed, given the uncertainties about the location of VMEs and their resilience, such as greatly reducing the threshold for an encounter, implementation of large-scale permanent closed areas, and prohibition of bottom-contact fishing.
Prim's algorithm computes a minimum spanning tree of a graph whose edges have weights.

Prim's algorithm works by doing a sort of breadth-first search, starting from some arbitrary node, and keeping the edges in a priority queue. It keeps extracting the lowest-weight edge and either discarding it without expanding the search (if the edge leads to a node already in the tree) or adding it to the result, marking its opposite node as in the tree, and expanding the search.

Done in the naive way I just explained, Prim's algorithm is dominated by the cost of enqueueing and dequeueing |E| edges into/out of the priority queue. Each enqueue/dequeue takes O(log |E|) time, so the total asymptotic cost is O(|E| log |E|) time or, more precisely, O(|E| log |E| + |V|).

If we had some way to do those enqueues and dequeues in constant time, the total running time would be O(|E| + |V|)...
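A minimal Python sketch of the naive approach described above, using the standard-library `heapq` binary heap. The adjacency-list representation `{node: [(weight, neighbor), ...]}` and the function name are assumptions for illustration, not from the original:

```python
import heapq

def prim_mst(graph, start):
    """Prim's algorithm via a binary-heap priority queue.

    `graph` maps each node to a list of (weight, neighbor) pairs.
    Every edge is pushed and popped at most once, giving the
    O(|E| log |E|) bound discussed above.
    """
    in_tree = {start}
    mst = []  # edges (weight, u, v) accepted into the tree
    heap = [(w, start, v) for w, v in graph[start]]
    heapq.heapify(heap)
    while heap and len(in_tree) < len(graph):
        w, u, v = heapq.heappop(heap)   # lowest-weight candidate edge
        if v in in_tree:
            continue                    # discard: leads into the tree
        in_tree.add(v)                  # otherwise accept the edge...
        mst.append((w, u, v))
        for w2, v2 in graph[v]:         # ...and expand the search
            if v2 not in in_tree:
                heapq.heappush(heap, (w2, v, v2))
    return mst
```

For example, on the triangle `{'a': [(1,'b'),(4,'c')], 'b': [(1,'a'),(2,'c')], 'c': [(4,'a'),(2,'b')]}`, starting from `'a'`, the tree consists of the two cheapest edges, with total weight 3.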
Understanding the exposure of the nation’s living marine resources such as shellfish and corals to changing ocean chemistry is a primary goal for the NOAA OAP. Repeat hydrographic surveys, ship-based surface observations, and time series stations (mooring and ship-based) in the Atlantic, Pacific, and Indian Oceans have allowed us to begin to understand the long-term changes in carbonate chemistry in response to ocean acidification.
There are currently 19 OAP-supported buoys in coastal, open-ocean and coral reef waters which contribute to NOAA's Ocean Acidification Monitoring Program, with other deployments planned.
Currently, there are two types of floating devices to which instruments can be added in order to measure various ocean characteristics: buoys and wave gliders. Buoys are moored, allowing them to remain stationary and for scientists to get measurements from the same place over time. The time series created from these measurements are key to understanding how ocean chemistry is changing over time. There are also buoys moored in the open ocean and near coral reef ecosystems to monitor the changes in the carbonate chemistry of these ecosystems. The MAP CO2 sensors on these buoys measure pCO2 every three hours.
Access our buoy data
Research cruises are a way to collect information about a certain ecosystem or area of interest.
For decades, scientists have learned about physical, chemical and biological properties of the ocean and coasts by observations made at sea. Measurements taken during research cruises can be used to validate data taken by autonomous instruments. One instrument often used on research cruises is a conductivity, temperature, and depth sensor (CTD), which measures the physical state of the water (temperature, salinity, and depth). The sensor often goes in the water on a rosette, which also carries Niskin bottles used to collect water samples from various depths in the water column. Numerous chemical and biological properties can be measured from water collected in Niskin bottles.
Ships of Opportunity (SOPs) or Volunteer Observing Ships (VOSs) are vessels at sea for other reasons than ocean acidification studies, such as commercial cargo ships or ferries.
The owners of these vessels allow scientific instrumentation that measures ocean acidification (OA) parameters to be installed and collect data while the ship is underway. This allows data on ocean chemistry to be collected in many remote areas of the world's ocean, such as high latitude waters, long distances from land (e.g. mid-basin waters), and places not easily accessible by research cruises. These partnerships have greatly increased the spatial coverage of OA monitoring world-wide. To learn more, check out the Ships of Opportunity programs established by the NOAA Pacific Marine Environmental Laboratory (PMEL) and the NOAA Atlantic Oceanographic Marine Laboratory (AOML).
Scientists at the NOAA Pacific Marine Environmental Laboratory (PMEL) are working with engineers at Liquid Robotics, Inc. to optimize a Carbon Wave Glider.
This instrument (pictured above) can be driven via satellite from land. Carbon Wave Gliders can be outfitted with pCO2, pH, oxygen, temperature and salinity sensors, and the glider’s equipment takes measurements as it moves through the water. The glider’s motion is driven by wave energy, and its sensors are powered through solar cells and batteries, when needed.
NOAA’s Coral Reef Conservation Program (CRCP) in partnership with OAP is engaged in a coordinated and targeted series of field observations, moorings and ecological monitoring efforts in coral reef ecosystems.
These efforts are designed to document the dynamics of ocean acidification (OA) in coral reef systems and track the status and trends in ecosystem response. This effort serves as a subset of a broader CRCP initiative referred to as the National Coral Reef Monitoring Plan, which was established to support conservation of the Nation’s coral reef ecosystems. The OAP contributes to this plan through overseeing and coordinating carbonate chemistry monitoring. This monitoring includes a broadly distributed spatial water sampling campaign complemented by a more limited set of moored instruments deployed at a small subset of representative sites in both the Atlantic/Caribbean and Pacific regions. Coral reef carbonate chemistry monitoring is implemented by researchers at the NOAA Atlantic Oceanographic & Meteorological Laboratory (AOML) and NOAA's PIFSC Coral Reef Ecosystems Division.
For Bill Mook, coastal acidification is one thing his oyster hatchery cannot afford to ignore.
Mook Sea Farm depends on seawater from the Gulf of Maine pumped into a Quonset hut-style building where tiny oysters are grown in tanks. Mook sells these tiny oysters to other oyster farmers or transfers them to his oyster farm on the Damariscotta River where they grow large enough to sell to restaurants and markets on the East Coast.
The Baumann lab is looking for a motivated post-doc to complement my lab in 2017, helping to study the combined effects of warming, acidification and deoxygenation on marine fish, in particular on early life stages. We look for a candidate to conduct novel multistressor experiments within and across generations of key forage fish such as Atlantic silversides (Menidia menidia) or Northern sand lance (Ammodytes dubius). This NSF-funded work will elucidate short-term as well as whole-life-cycle consequences of multistressor environments using a mixture of field, experimental and modeling approaches. The ideal candidate will try to broaden the scope of the research of our lab by bringing novel aspects, approaches, or techniques to our existing expertise.
Observed Resistance to Pyrimidine Analogs and Sensitivity to Uracil in Drosophila is attributed to Deregulation of Pyrimidine Metabolism
Pyrimidine nucleotides play a central role in cellular metabolism and regulation. In most organisms two pathways provide pyrimidines: the de novo biosynthetic pathway and the salvage pathway. Drosophila melanogaster, the fruit fly, is an ideal organism for study of the genetic basis and regulatory mechanisms of various metabolic pathways. De novo pyrimidine biosynthesis in the fruit fly is a six-step pathway, which is catalyzed by enzymes encoded by three separate genes (Freund and Jarry, 1987; Rawls et al., 1993; Eisenberg et al., 1993). The gene rudimentary (r) is Drosophila’s equivalent of the mammalian gene for CAD (Freund and Jarry, 1987). De novo pyrimidine biosynthesis is important for the proper development of flies. However, the salvage pathway can suffice when the external supply of pyrimidines is very high (Falk and Nash, 1974).
Keywords: Mutant Allele, Wild Type Allele, Catabolic Pathway, Salvage Pathway, Pyrimidine Nucleotide
Spider silk is the strongest natural substance on Earth, while graphene is the strongest material known to science. So, what happens when you spray a spider with graphene? You get a mega-bionic web, strong enough to catch falling aircraft, says a team of scientists from the University of Trento, Italy.
Graphene is the strongest material on Earth known to scientists – it is 200 times stronger than steel. It is also the world’s lightest and thinnest material, consisting of a single layer of carbon atoms laid out in a hexagonal lattice pattern.
Researchers describe graphene as a 2-dimensional object because it is so thin.
Graphene is a form of carbon made up of planar sheets which are one atom thick, with the atoms arranged in a honeycomb-shaped lattice.
Professor of Solid and Structural Mechanics at the University of Trento, Nicola Pugno, wondered what could happen if you combined the world’s strongest natural material with the strongest material of any kind, i.e. spider silk with graphene.
A team of researchers at the University of Trento, led by Prof. Pugno, sprayed five spiders from the Pholcidae family with a mixture of water and graphene particles. They also sprayed ten other spiders with a carbon nanotube and water mixture.
Some of the spiders were clearly harmed by the application of spray and produced poor-quality silk. For others, however, the effect was incredible. One of them produced a web 3.5 times as strong as a normal spider web.
How did the graphene get into the spider silk?
The researchers do not know how the graphene and carbon nanotubes got into the spider silk. They suggested that perhaps the carbon coated the outside of the strands, but that would not explain why the web got so strong and tough, Prof. Pugno said.
The scientists believe that spiders mop up materials in their environment and incorporate them into the silk they weave – the same probably occurred with the graphene and nanotubes.
Some of the spiders died soon after being sprayed with the mixture.
Could a graphene-laced spider web catch a falling airplane?
Prof. Pugno and colleagues wonder what this new super-silk could be used for. One of the suggestions put forward was to make a giant net that could catch falling airplanes.
The team now plans to see how else graphene might be used to toughen natural materials, such as the silk made by silkworms.
Prof. Pugno said in an interview with the New Scientist: “This concept could become a way to obtain materials with superior characteristics.”
In April, an international scientific team informed that graphene can convert light to electricity at ultra-fast speeds. They believe their discovery may have a major impact on a number of technologies, including cameras, solar cells and data communications applications.
Scientists from the University of California, San Diego, announced they had devised a method to increase how much electric charge graphene can store. They say their research may help improve the energy storage ability of capacitors for potential applications in cars, wind turbines and solar power.
Scottish scientists say they can now produce graphene one hundred times cheaper.
Video – What is graphene? | <urn:uuid:f0378faa-84d6-4ccb-ac05-674a17555b35> | 3.671875 | 666 | News Article | Science & Tech. | 49.351345 | 95,624,460 |
Efforts to reduce greenhouse gas emissions in the energy sector could lead to greater pressure on water resources, increasing water use and thermal water pollution. Dedicated adaptation measures will be needed in order to avoid potential trade-offs between the water and climate change impacts of the energy system.
Climate mitigation efforts in the energy system could lead to increasing pressure on water resources, according to a new study published in the journal Environmental Research Letters. Yet increased energy efficiency and a focus on wind and solar power, which require less water, or the switch to more water efficient cooling technologies could help avoid this problem, the study shows.
The new study aimed to systematically pinpoint the drivers of water demand in the energy system, examining 41 scenarios for the future energy system that are compatible with limiting future climate change to below the 2°C target, which were identified by the IIASA-led 2012 Global Energy Assessment.
“While there are alternative possible energy transition pathways which would allow us to limit global warming to 2°C, many of these could lead to unsustainable long-term water use,” says IIASA researcher Oliver Fricko, who led the study, “Depending on the energy pathway chosen, the resulting water use by the energy sector could lead to water allocation conflicts with other sectors such as agriculture or domestic use, resulting in local shortages.”
The energy sector already accounts for approximately 15% of global water use. According to the study, global water use of energy could, however, increase by more than 600% by 2100 relative to the base year (2000). Most of this water usage comes from thermoelectric power plants—concentrating solar power plants as well as nuclear, fossil fuel or biomass-powered plants—that rely on water for cooling.
Water use is however not the only problem. When river or sea-water is used for power plant cooling, it gets released back into the environment at a higher temperature, a problem known as thermal pollution, which can affect aquatic organisms. The study finds that thermal pollution will increase in the future unless measures are taken to reduce such pollution through mitigation technologies.
The study highlights the importance of energy efficiency. IIASA researcher Simon Parkinson, who also worked on the study, says, “The simplest way to reduce the pressure that the energy sector puts on water resources is to reduce the amount of energy that we use by increasing energy efficiency. This is especially true for developing countries where electricity demand is set to increase rapidly.”
The study shows the importance of an integrated analysis for understanding interlinked global challenges related to water, climate, and energy. It follows a recent IIASA study showing that climate change impacts on water resources could also affect energy production capacity [http://www.iiasa.ac.at/web/home/about/160104-water-energy.html].
“Our findings have major implications for the way how climate change mitigation strategies should be designed. Energy planners need to put more emphasis on the local water impacts, since they may limit policy choices. Ultimately we need integrated strategies, which maximize synergies and avoid trade-offs between the water and climate change and other energy-related objectives,” says Keywan Riahi, Director of the Energy Program at IIASA.
The new study builds on research conducted for the IIASA-coordinated Global Energy Assessment, and provides an analysis linking water, energy, and climate change mitigation, a focus of several new IIASA research projects.
Fricko O, Parkinson SC, Johnson N, Strubegger M, Van Vliet MTH, Riahi K, (2016). Energy sector water use implications of a 2-degree C climate policy. Environmental Research Letters 11 034011 doi:10.1088/1748-9326/11/3/034011 http://iopscience.iop.org/article/10.1088/1748-9326/11/3/034011
Environmental Research Letters covers all of environmental science, providing a coherent and integrated approach including research articles, perspectives and editorials.
The International Institute for Applied Systems Analysis (IIASA) is an international scientific institute that conducts research into the critical issues of global environmental, economic, technological, and social change that we face in the twenty-first century. Our findings provide valuable options to policy makers to shape the future of our changing world. IIASA is independent and funded by scientific institutions in Africa, the Americas, Asia, Oceania, and Europe. www.iiasa.ac.at
MSc Katherine Leitzell | idw - Informationsdienst Wissenschaft
NASA's Terra and Aqua satellites flew over Typhoon Hagupit from Dec. 6 through Dec. 8 and the MODIS instrument that flies aboard both satellites provided images of the storm as it moved through the country.
The Moderate Resolution Imaging Spectroradiometer or MODIS instrument aboard NASA's Aqua satellite caught a picture of Hagupit on Dec. 6 before it made landfall. On Dec. 7, the MODIS instrument aboard NASA's Terra satellite took an image of the storm as it was making landfall in the eastern Philippines.
NASA's Aqua satellite captured this image on Dec. 8 at 04:50 UTC of Tropical Storm Hagupit (22W) over the Philippines.
Image Credit: NASA's Goddard MODIS Rapid Response Team
On Dec. 8 at 04:50 UTC (Dec. 7 at 11:50 p.m. EST) when NASA's Aqua satellite passed over the tropical cyclone again, it had weakened to a tropical storm and was located over Luzon in the northern Philippines. The image showed that Hagupit's cloud extent had grown and it covered the northern and central Philippines, extending south into Mindanao. Although the center was difficult to find in the image, it appeared that it was centered in the Sulu Sea, which lies in the middle of the Philippine islands.
On Dec. 6 at 1500 UTC (10 a.m. EST/11 p.m. local time, Manila), Tropical Storm Hagupit, known in the Philippines as Tropical Storm Ruby, had maximum sustained winds near 45 knots (51.7 mph/83.3 kph). It was centered near 13.8 degrees north latitude and 121.3 degrees east longitude, just 51 nautical miles (58.6 miles/94.4 km) south-southeast of Manila. It was moving to the west-northwest at 6 knots (6.9 mph/11.1 kph).
Warnings that remain in effect in the Philippines on Dec. 8 include: Public storm warning signal #2 in the following provinces: In Luzon: Metro Manila, Batangas, Cavite, Bataan, Laguna, Southern Quezon, Marinduque, Northern Oriental Mindoro including Lubang Island.
Public storm warning signal #1 remains in effect in the following provinces: Luzon: Zambales, Pampanga, Tarlac, Bulacan, Rizal, Rest of Quezon, Rest of Mindoro Provinces, Romblon. For the updated forecast from PAGASA, visit: http://pagasa.dost.gov.ph/index.php/tropical-cyclone/weather-bulletin-update
Forecasters at the Joint Typhoon Warning Center project that Hagupit's current weakening trend will continue as the storm passes into the South China Sea. Once there, unfavorable atmospheric conditions of cooler, drier air will weaken the storm further. It is expected to reach Ho Chi Minh City in southern Vietnam by Dec. 11 as a depression.
NASA's Goddard Space Flight Center
Rob Gutro | EurekAlert!
Cooperation is a no-brainer for symbiotic bacteria
Humans may learn cooperation in kindergarten, but what about bacteria, whose behavior is preprogrammed by their DNA?
Some legume plants, which rely on beneficial soil bacteria called rhizobia that infect their roots and provide nitrogen, seem to promote cooperation by exacting a toll on those bacterial strains that don't hold up their end of the symbiotic bargain, according to a team of researchers at the University of California, Davis.
"In the case of soybeans, it appears that the plant applies sanctions against rhizobia that don't provide nitrogen. The plant does this by decreasing the oxygen supply to the rhizobia," said R. Ford Denison, a crop ecologist in the UC Davis Department of Agronomy and Range Science. "In this way, the host plant can control the environment of the symbiotic bacteria to favor the evolution of cooperation by ensuring that bacterial cheaters reproduce less."
Findings from this study, to be reported in a letter in the Sept. 4 issue of the journal Nature, may one day lead to crops that selectively favor the most productive, beneficial strains of rhizobia, thus making optimal use of naturally available nitrogen.
Scientists have long been intrigued by the cooperative relationships between certain legumes -- peas, soybeans and alfalfa -- and the soil bacteria that "fix," or convert, nitrogen from the air into a form that can be used by the plant. While the rhizobia produce nitrogen for the plant, the plant returns the favor by providing nutrients necessary for the growth and reproduction of the bacteria.
Such mutually beneficial relationships are common in nature and would be easier to understand if there were only one bacterial strain associated with the plant. But there are often several competing strains interacting with the plant, and not all of those strains fix nitrogen at the same rate.
Why wouldn't the bacteria that don't expend energy and resources on fixing nitrogen for the plant be fitter because they have more resources available for their own growth and reproduction? Wouldn't the bacterial species that dutifully provide the plant with nitrogen eventually lose out to their goldbricking cousins that aren't doing so?
Denison and colleagues suspected that the plants were somehow penalizing rhizobial species that "cheat" on the symbiotic relationship by fixing little or no nitrogen for the plant. To test that hypothesis, they altered the atmospheric conditions surrounding soybean root nodules containing the rhizobia. By replacing the air with a nitrogen-free argon-and-oxygen mixture, they reduced the rhizobia's ability to fix nitrogen to just 1 percent of normal -- forcing the bacteria to shirk their nitrogen-fixing duties.
The researchers observed the impact of this simulated rhizobial cheating on whole soybean plants, on root systems split in half and grown in different atmospheres, and on individual root nodules.
They discovered that the plants appeared to retaliate by decreasing the supply of oxygen to the root nodules inhabited by the rhizobial species that failed to fix nitrogen. They also found that nitrogen-fixing populations consistently grew to larger numbers over time, perhaps because they had access to more oxygen. The root nodules inhabited by nitrogen-fixing rhizobia grew more, so they cost the plant more, though not relative to the benefits they provided to the plant.
"The data illustrate that the soybean plants selectively reward or punish their symbiotic bacteria, based on the amount of nitrogen they provide to the plant hosts," Denison said. "This mechanism helps explain why this ancient cooperation between the plant and various rhizobial strains hasnt already broken down."
He noted that such breakdown in cooperation between species can have serious consequences, as in the case of coral bleaching that results when algae leave or are expelled from the coral.
Collaborating with Denison on this study were E. Toby Kiers and Robert A. Rousseau of UC Davis Department of Agronomy and Range Science, and Stuart A. West of the Institute of Cell, Animal & Population Biology at the University of Edinburgh.
Funding for the study was provided by the National Science Foundation, the California Agricultural Experiment Station, the Land Institute, the Royal Society, the Biotechnology and Biological Sciences Research Council, the Natural Environment Research Council and the UC Davis Department of Agronomy and Range Science.
* Pat Bailey, UC Davis News Service, (530) 752-9843,
* Ford Denison, Agronomy and Range Science, (530) 752-9688 , firstname.lastname@example.org
* Toby Kiers, Agronomy and Range Science, (207)963-7016
Pat Bailey | EurekAlert! | <urn:uuid:b7f392f5-f14e-42f7-9d36-f88feec98596> | 3.53125 | 971 | News Article | Science & Tech. | 30.275343 | 95,624,463 |
Since much of the US population is unwisely bathing itself 24/7/365 in data-carrying, pulsed, Radio-Frequency Microwave Radiation (RF/MW radiation), it is important to understand what we are actually doing to ourselves, our loved ones and our environment. Duration of exposure to these micro-second pulses of electrical power, not intensity, is the most important factor.
Therefore, always-on wireless infrastructure antennas are hazardous to one’s health — that is why wireless telecommunications base station antennas belong 200 feet off the ground and at least 2,500 feet away from homes, schools, hospitals, public buildings, parks and wilderness areas. That is why installation of so-called “Small Cell” Distributed Antenna Systems (DAS) anywhere near second-story bedroom windows is a disaster — no matter what government guideline is quoted to justify this assault.
The common thread that ties Wi-Fi, 4G and 5G together is the reliance on OFDM/OFDMA modulation — sophisticated mathematical transformations that pack huge amounts of digital data onto carrier microwaves in order to transmit the data through the atmosphere. The pulsed microwaves either penetrate or reflect off of anything in their path: any flora, fauna and manmade structures you can imagine, including human adults and children, pets, and already threatened pollinator species such as birds, bees and butterflies.
From 1G to 4G & Towards 5G – Evolution Of Communication
Fundamentals of 4G/LTE: OFDM/ OFDMA
What is Wi-Fi and why should you turn off your Wireless Router/Access Point?
Pulsed Microwave Radiation is the Foundation of Wireless Mobile Data. Microwave Radiation, when used to transfer data from Point-A to Point-B, consists of micro-second pulses of electrical power sprayed through the atmosphere. Microwaves either penetrate or reflect off of anything or anyone in their path. The data transferred are electrical pulses or "bullets" traveling at the speed of light, about 670 million miles per hour. These pulses of data can cause biological harm to many living organisms, including humans.
How Do Microwaves Radiate Over Long Distances?
Electromagnetic waves are produced whenever charged particles are accelerated. In the near-field region (within 3-4 wavelengths from the source antenna charges), waves are incoherent, erratic and choppy with high micro-second peaks of Electric and Magnetic fields. This creates a toxic "hell-stew" of powerful zaps, crackles and pops that are difficult to characterize with any degree of accuracy. Unfortunately, this is the range where people typically hold their wireless devices. Whenever one sends/receives digital data wirelessly from their device, a toxic, spherical cloud 36" to 48" in diameter forms around the device, exposing everyone nearby to peaks of RF/MW radiation. Specific Absorption Rate (SAR) is neither an accurate nor a scientific measure of the hazards created by this mixture of Electric and Magnetic fields in the near-field region. SAR is a misleading 'average of an average', designed to hide the peaks of Electric and Magnetic power that surround one's device. These peaks of power interrupt the body's sensitive electrical signals, cause DNA and neurological damage, suppress the immune system, and disrupt hormone production and regulation.
In the far-field, the Electric Field (E, in blue) and Magnetic Field (B, in red) orient themselves at a 90 degree angle from each other
In the far-field region (beyond 3-4 wavelengths from the source antenna charges), electromagnetic waves become coherent and radiate as self-propagating, transverse, oscillating waves of Electric Fields and Magnetic Fields, as depicted in the diagram, above. The diagram shows an EMR wave propagating in the far-field region from left to right along the X axis. The Electric Field (E) in blue is in a vertical plane (along the Z axis) and the Magnetic Field (B) in red is in a horizontal plane (along the Y axis). Radiating Electric and Magnetic Fields in the far-field region are always in phase and oriented at 90 degrees to each other. Directly below is a 2D animation of the near-field region of an antenna, showing the activity in the X and Z axes. The blue Electric Fields eventually begin radiating at some distance from the antenna.
Slow motion animation: near-field chaos and far-field radiation from a dipole antenna
How Do Microwaves Send and Receive Digital Data?
A wavelength carries massive numbers of erratic pulses of digital data that wirelessly transmit text, image, audio and video data to and from computers, tablets, phones and the (predicted) billions of IoT machines, appliances, "things," sensors and devices. Unfortunately, the microwaves used for this purpose are not the smooth sine waves you may have learned about in textbooks that describe the transmission of visible light (430 to 770 THz) or Alternating Current electrical power (60 Hz). Natural Electromagnetic Fields (EMF) come from two main sources: the sun and thunderstorm activity. Man-made, pulsed RF/MW radiation differs from natural EMF. Manmade Radio-Frequency Electromagnetic Fields (RF/EMF) and the resulting RF/MW radiation are defined by the equation c = f λ, where c = the speed of light, f = the frequency, and λ = the wavelength. This means that since c is a constant, as frequency increases, wavelength decreases. Frequency is measured in a unit called Hertz, which represents the number of cycles or oscillations of a wave in one second. The unit Hertz is named after Heinrich Rudolf Hertz, a German scientist who first demonstrated that electromagnetic waves radiate at a constant speed. In order to transmit digital data, an antenna's microchips distort the waves' shape or pace to modulate (encode) the data stream onto the carrier waves at the source before the antenna transmits them. At the destination, other microchips demodulate (decode) the data stream so the destination device can display the text/image or play the audio/video. A modem is a device that literally modulates and demodulates data streams; engineers shortened the name to modem.
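The relation c = f λ can be sanity-checked in a few lines of code (an illustrative sketch added here, not part of the original text):

```python
# Illustrative sketch of c = f * lambda: wavelength from frequency.
C = 299_792_458.0  # speed of light in vacuum, m/s (the constant c)

def wavelength_m(frequency_hz: float) -> float:
    """Return the wavelength in metres for a frequency in hertz."""
    return C / frequency_hz

# As frequency rises, wavelength falls: 2450 MHz Wi-Fi gives about
# 0.122 m, i.e. roughly the 5 inches quoted in the list below.
print(round(wavelength_m(2450e6), 3))  # → 0.122
```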
Each antenna in this scheme is a two-way microwave transmitter/receiver. There are an infinite number of combinations of wavelength, frequency, intensity and modulation, the mathematical transformations that encode data onto a carrier wave. Each combination is a new digital fingerprint that uniquely identifies a new manmade toxic agent that, when transmitted into the air, instantly fills our homes, schools, workplaces or public spaces. Below is a list of the Microwave frequencies that – with the advent of 5G – will be added to our already RF/MW radiation-saturated airwaves.
The Panoply of Microwave Frequencies/Wavelengths in a 4G/5G World
- 5G: 600 MHz = waves 20 inches long
- 4G: 700 MHz = waves 17 inches long
- 3G/4G: 800 MHz = waves 15 inches long
- 3G/4G: 900 MHz = waves 13 inches long
- 3G/4G: 1800 MHz = waves 7 inches long
- 3G/4G: 2100 MHz = waves 6 inches long
- Wi-Fi: 2450 MHz = waves 5 inches long (unlicensed)
- 5G: 3100 MHz to 3550 MHz = waves 3.8 to 3.3 inches long
- 5G: 3550 MHz to 3700 MHz = waves 3.3 to 3.2 inches long
- 5G: 3700 MHz to 4200 MHz = waves 3.2 to 2.8 inches long
- 5G: 4200 to 4900 MHz = waves 2.8 to 2.4 inches long
- Wi-Fi: 5800 MHz = waves 2.0 inches long (unlicensed)
- 5G: 24,250 to 24,450 MHz = waves 0.5 inch long
- 5G: 25,050 to 25,250 MHz = waves 0.5 inch long
- 5G: 25,250 to 27,500 MHz = waves 0.4 inch long
- 5G: 27,500 to 29,500 MHz = waves 0.4 inch long
- 5G: 31,800 to 33,400 MHz = waves 0.4 inch long
- 5G: 37,000 to 40,000 MHz = waves 0.3 inch long
- 5G: 42,000 to 42,500 MHz = waves 0.3 inch long
- 5G: 57,000 to 64,500 MHz = waves 0.2 inch long (unlicensed)
- 5G: 64,000 to 71,000 MHz = waves 0.2 inch long
- 5G: 71,000 to 76,000 MHz = waves 0.2 inch long
- 5G: 81,000 to 86,000 MHz = waves 0.1 inch long
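The inch figures in the list above follow directly from c = f λ; a short check (an added sketch, not from the original) reproduces several rows:

```python
# Reproduce a few rows of the frequency/wavelength list from c = f * lambda.
C_MPS = 299_792_458.0      # speed of light, m/s
INCHES_PER_METRE = 39.3701

def wavelength_inches(frequency_mhz: float) -> float:
    return C_MPS / (frequency_mhz * 1e6) * INCHES_PER_METRE

for mhz, quoted in [(600, "20"), (700, "17"), (2450, "5"), (5800, "2.0")]:
    print(f"{mhz} MHz -> {wavelength_inches(mhz):.1f} in (list says {quoted} in)")
```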
All of the waves listed above are examples of both Microwaves and Radio waves; therefore, scientists use the term Radio-Frequency Microwave Radiation (RF/MW radiation) to describe this entire range of wavelengths/frequencies.
Radio waves are from 1 mm to 100,000,000 meters (frequency of 300,000 MHz down to 3 Hz)
Microwaves are from 1 mm to 1 meter (frequency of 300,000 MHz down to 300 MHz)
Microwaves have different properties, depending on their wavelength. The longer waves (20″ down to 5″) travel further and penetrate deeper into buildings and living tissue. The shorter waves (0.5″ down to 0.1″) are called millimeter waves (mm-waves) because they measure from 10 mm (at 30,000 MHz), down to 1 mm (at 300,000 MHz). The mm-waves are not as efficient because they don’t travel as far, tend to reflect off of buildings, and deposit mainly into the eyes and skins of living organisms.
Millimeter waves (from 10-mm|30GHz to 1-mm|300GHz) are readily absorbed by the atmosphere and by the eyes and skin of living organisms
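A small classifier (an added sketch; the cut-offs are the ones quoted above) makes the band boundaries concrete:

```python
# Band boundaries as given above: radio waves span 3 Hz-300 GHz,
# microwaves 300 MHz-300 GHz, millimetre waves 30-300 GHz. The
# classifier returns the most specific label that applies.
def classify_band(frequency_hz: float) -> str:
    if 30e9 <= frequency_hz <= 300e9:
        return "millimetre wave"
    if 300e6 <= frequency_hz < 30e9:
        return "microwave"
    if 3 <= frequency_hz < 300e6:
        return "radio wave"
    return "outside the radio range"

print(classify_band(600e6))   # low-band 5G
print(classify_band(39e9))    # mm-wave 5G
```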
- 700 million to 2.1 billion microwaves per second for 2G/3G/4G mobile data sent to cell phones
- 2.4 billion to 5.8 billion microwaves per second for Wi-Fi data to tablets/laptops
In the second half of 2017, if Verizon, AT&T and other wireless carriers have their way, the US population will be radiated with additional pulsed microwaves (24 billion to 90 billion microwaves per second) for 5G services and for navigation-assisted cars.
As with any toxic agent, the proper way to evaluate its toxicity is to consider not just the rate of exposure (as the Federal RF/MW radiation guidelines do), but consider total exposure over time. Below is a graph of RF/MW radiation exposures from Wi-Fi of an elementary school student using a wireless iPad. One can see extremely high peaks of electrical power. These peaks cannot be seen using SAR tests.
The intense peaks of RF/MW radiation and total exposure over time (not the average rate of exposure), are what impacts health the most.
In May of 2016, scientists at the US Federal National Toxicology Program released "partial findings" from the $25 million study on cellphone radiation, which found that both hyperplasias (abnormal increases in the volume of a tissue or organ caused by the formation and growth of new normal cells) and tumors occur at significantly higher rates in the presence of continuous RF/MW radiation.
Disregarding these findings, six short weeks later, the FCC approved a move to 5G, and the wireless industry got to work installing Distributed Antenna Systems on utility poles as quickly as they possibly could. Some antennas have been placed as close as 20 feet from second-story bedroom windows and will spray 4G or 5G RF/MW radiation 24/7/365 into these bedrooms. Cancer clusters have been documented for people living closer than 2,000 feet to mobile communications base stations. Antennas for mobile communications base stations should never be lower than 200 feet, and never closer than 2,000 feet to people and other living organisms.
Massive electromagnetic pollution is spiraling out of control, with both Industry and Government denying the scientific proof of harm from RF/MW radiation. Our Government and the Wireless Industry should not transmit digital data wirelessly, using the data-dense modulation schemes: Orthogonal Frequency-Division Multiplexing (OFDM/OFDMA) used in Wi-Fi, 4G/LTE and 5G — because the US Government has already proven that the data-sparse 2G modulation is hazardous. We must, instead, transmit data from Point A to Point B to every business, every home, every school and farm with far superior fiber optic cables. | <urn:uuid:ee210cb2-66eb-4185-8777-ee3f31ab064c> | 3.078125 | 2,725 | Knowledge Article | Science & Tech. | 50.895168 | 95,624,471 |
Two Kansas State University biologists are studying streams to prevent tallgrass prairies from turning into shrublands and forests.
By looking at 25 years of data on the Konza Prairie Biological Station, Allison Veach, doctoral student in biology, Muncie, Indiana, and Walter Dodds, university distinguished professor of biology, are researching grassland streams and the expansion of nearby woody vegetation, such as trees and shrubs. They have found that burn intervals may predict the rate of woody vegetation expansion along streams.
Kansas State University
Walter Dodds, university distinguished professor of biology (pictured), and Allison Veach, doctoral student in biology, are researching grassland streams and the expansion of nearby woody vegetation. They have studied 25 years of data on the Konza Prairie Biological Station and found that increasing fire frequency reduces the rate of woody vegetation expansion.
Their latest research appears in the peer-reviewed journal PLOS ONE in an article "Fire and Grazing Influences on Rates of Riparian Woody Plant Expansion along Grassland Streams."
Grasslands in North America and across the globe are rapidly disappearing, Veach said, and woody plants are expanding and converting grasslands into forest ecosystems. This change in environment can affect stream hydrology and biogeochemistry, said Dodds, who has studied streams and watersheds on the Konza prairie for more than 20 years.
"This is an important issue regionally, because as trees expand into these grassland areas, people who are using grassland for cattle production have less grass for animals, too," Dodds said.
In their latest research, the biologists studied 25 years of aerial photography on Konza and observed the expansion of trees and shrubs in riparian areas, which include areas within 30 meters of streambeds. The researchers focused on three factors that affect grassland streams: burn intervals; grazers, such as bison; and the historical presence of woody vegetation.
Their analysis revealed an important finding: Burn intervals predicted the rate of woody vegetation expansion. Burning every one to two years slowed the growth of trees and shrubs, Veach said.
"Although we can reduce woody expansion by burning more frequently, we can't prevent it from occurring over time," Veach said. "Woody plant encroachment may not be prevented by fire alone."
The research shows the importance of burning to maintain the tallgrass prairie, Dodds said. While burning can help to slow the expansion of trees and shrubs, additional actions are needed to maintain quickly disappearing grassland ecosystems.
"It's clear from this research that if you don't burn at all, these grassland streams basically are going to switch to forests and will not be grassland streams anymore," Dodds said.
Dodds and Veach also found that bison do not significantly affect woody vegetation expansion along streams. Previous Konza research has shown that bison do not spend significant time near stream areas, so they may not influence the growth of nearby trees and shrubs, Veach said.
Woody vegetation also may be expanding in grasslands because of more carbon dioxide in the atmosphere, Dodds said. Grasses and trees compete for carbon dioxide, and grasses are much better at conserving water and efficiently using carbon dioxide. As atmospheric carbon dioxide levels increase, it becomes easier for trees to gather carbon dioxide and gives them a growing advantage over grasses.
"The tallgrass prairie is almost nonexistent on the globe," Veach said. "In order for us to preserve tallgrass prairie, we need to look at woody encroachment because it has been an issue. Things like no fire or differences in climate change may allow woody plant species to competitively take over grasslands."
The biologists plan to continue studying water quality and quantity issues at Konza. Konza is an 8,600-acre tallgrass prairie ecological research site jointly owned by the university and The Nature Conservancy.
Veach and Dodds received research funding from the National Science Foundation's Konza Prairie Long-Term Ecological Research program and the Kansas Experimental Program to Stimulate Competitive Research. The research also involved Adam Skibbe at the University of Iowa.
Jennifer Torline Tidball | newswise
A new manufacturing technique uses a process similar to newspaper printing to form smoother and more flexible metals for making ultrafast electronic devices.
The low-cost process, developed by Purdue University researchers, combines tools already used in industry for manufacturing metals on a large scale, but uses...
For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth.
To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength...
For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications.
Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar...
Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction.
A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in in the synthesis of new chemical...
Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy.
"Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy....
Artist’s impression on an asteroid impact with the Earth
In the past five weeks two asteroids have passed close by Earth, at distances of 1.2 and 3 times the distance to the Moon. Another asteroid has recently been shown to be on course for a collision with Earth in 2880.
Monitoring known asteroids allows astronomers to predict which may collide with Earth. But that is only true for the asteroids we know of. What about those that lie in the asteroid blind spot between the Sun and Earth? The European Space Agency is studying ways in which its missions can assist in monitoring these unseen but potentially hazardous asteroids.
It is difficult to estimate the danger posed by asteroids. This is, in part, because astronomers do not yet know how many asteroids there are. A recent discovery, made using data from ESA's Infrared Space Observatory (ISO), showed that there could be nearly two million asteroids larger than one kilometre in the main asteroid belt, between Mars and Jupiter. That is more than twice as many as previously thought.
Monica Talevi | alphagalileo
|WikiProject Physics||(Rated Start-class, High-importance)|
|WikiProject Chemistry||(Rated Start-class, High-importance)|
- 1 Merger proposal
- 2 Other talk
- 3 Potential Inconsistency?
- 4 Graham's law
- 5 Boyle's Law
- 6 Use of SI units
- 7 Disputed
- 8 Clarification of purpose?
- 9 Avogadro's law mistake
- 10 Combined and ideal gas laws: k5
- 11 References
- 12 Conundrum: Inverse relationship between P and T
- 13 Going beyond classical ideal gases
I propose that Gas laws be merged into Gas. I think that the content in the Gas laws article can easily be explained in the context of Gas, and the Gas article is of a reasonable size in which the merging of Gas laws will not cause any problems as far as article size or undue weight is concerned. Katanada (talk) 07:33, 27 March 2011 (UTC)
I disagree. If we include everything about gas in the gas article it will get too large. I think it is better if we provide a short section and a link (as already exists with many other gas related topics). Modularity is great. — Preceding unsigned comment added by Youarefunny (talk • contribs) 15:57, 12 June 2011 (UTC)
I think that the articles should not be merged as the Gas article would become too large. Also, if a user is searching for Gas laws it would be harder for them to find the information if the article was merged. — Preceding unsigned comment added by Pineapple Head0000 (talk • contribs) 17:04, 11 July 2011 (UTC)
Ideal gas and Gas laws are both currently stubs. They both deal with the same topic, as far as I can tell: Gas laws are laws of Ideal Gases, and Ideal Gases are hypothetical gases which obey those laws. Does anyone have objections to me merging the two & leaving one as a redirect? (and which one to choose as the main page?) -- Tarquin 04:59 Jul 30, 2002 (PDT)
Hmm... how about we axe everything under the summary? I mean, we really don't need brief summaries of two different random gas laws when there are individual entries for them. I think we should leave this up otherwise, because I think someone looking up "gas laws" wants a brief overview of various gas laws... maybe they're looking for one in particular, or wants to know the major ones. Have a link to ideal gases, sure, but not the whole page... anyway, my 2 cents. Lepidoptera 03:59, 16 July 2005 (UTC)
Is it a contradiction between this article, where:
- p=const*T or p/T=const
and the Law of Charles and Gay-Lussac, where:
- T/V=const or V/T=const
or am I wrong? First version presumes V=const, the second p=const. Could someone explain this to me in here? Excuse my language but it's not my native. C4 10:57, 6 May 2004 (UTC)
There's no inconsistency. There are three laws: Charles's, Boyle's, and Gay-Lussac's. In Charles's law, you assume that pressure is constant, and V1/T1=V2/T2. In Boyle's law, you assume that temperature is constant, and P1V1=P2V2. In Gay-Lussac's law, you assume that volume is constant, and P1/T1=P2/T2.
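Each of the three laws is a special case of the ideal gas law PV = nRT with one variable held fixed, and a quick numerical sketch (added here as an illustration, not part of the original thread) confirms all three identities:

```python
# Each law below is a special case of the ideal gas law P*V = n*R*T
# with one variable held constant (illustrative values, not from the thread).
R = 8.314  # gas constant, J/(mol*K)

def pressure(n: float, T: float, V: float) -> float:
    return n * R * T / V

def volume(n: float, T: float, P: float) -> float:
    return n * R * T / P

n, T, V = 1.0, 300.0, 0.025  # 1 mol at 300 K in 25 L

# Boyle's law (T constant): P1*V1 == P2*V2
assert abs(pressure(n, T, V) * V - pressure(n, T, 2 * V) * (2 * V)) < 1e-9

# Charles's law (P constant): V1/T1 == V2/T2
P0 = pressure(n, T, V)
assert abs(volume(n, 300.0, P0) / 300.0 - volume(n, 600.0, P0) / 600.0) < 1e-12

# Gay-Lussac's law (V constant): P1/T1 == P2/T2
assert abs(pressure(n, 300.0, V) / 300.0 - pressure(n, 600.0, V) / 600.0) < 1e-9

print("all three identities hold")
```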
The gas laws are valid only for special conditions and ideal gas behavior. Ideal gas properties (laws) operate only when attractions between neighboring gas molecules are small compared to their average kinetic energy. The volume of the space available has to be big compared to the size of the gas particles.
Boyle's law, P1V1 = P2V2, is valid only when the temperature and number of moles of gas are constant.

Charles's law, V1/T1 = V2/T2, is valid only when the number of moles of gas and the pressure are constant.

All of these describe what happens to a fixed amount of gas (constant moles) when one variable in the combined gas law is held constant. Moleculedoctor (talk) 20:34, 24 February 2009 (UTC)
- Graham's law, named after Thomas Graham, states that the kinetic energy of two samples of different gases at the same temperature is identical.
Yeah, Graham's law is about diffusion... Lepidoptera 03:54, 16 July 2005 (UTC)
If an exhaust pump is connected to a fixed 100cm^3 flask, the volume of gas swept out by the piston is 25cm^3 each stroke and the pressure in the vessel is originally 1 atm, what is the pressure in the flask after 2 strokes?
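The question above went unanswered in the thread; assuming an isothermal ideal gas, a short sketch (added here for illustration) gives one standard answer: on each stroke the flask's gas expands into the extra swept volume (Boyle's law), and the cylinder's share is then expelled.

```python
# Each stroke: gas expands from 100 cm^3 into 100 + 25 cm^3 (Boyle's law),
# then the 25 cm^3 in the cylinder is expelled, so P -> P * 100/125.
def pressure_after(strokes: int, p0: float = 1.0,
                   flask: float = 100.0, swept: float = 25.0) -> float:
    p = p0
    for _ in range(strokes):
        p *= flask / (flask + swept)
    return p

print(round(pressure_after(2), 2))  # 1 atm * 0.8 * 0.8 = 0.64 atm
```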
Use of SI units
I think the article should be converted to SI units. The official unit for pressure is Pascal, not atmosphere. The unit for volume is m^3, not liters.
I definitely agree. Also, it says Kelvins where I'm pretty sure it is just Kelvin.
Do the units even matter as long as they are consistent throughout the equation one uses for each of the three "things"? In other words, the article needs to mention whether the units have to be in those mentioned for the laws to work. I don't think they do. Bayerischermann 01:49, 7 June 2006 (UTC)
- Quite right; I have made a suitable addition. --Runcorn 19:30, 7 June 2006 (UTC)
Indeed physical quantities, hence quantity relationship equations, are independent of units as long as you use a coherent system such as the SI. What isn't independent of units is the value of constants. 188.8.131.52 14:24, 26 April 2007 (UTC)
- Yes, I think that the article already says that.--Runcorn 21:10, 26 April 2007 (UTC)
Hi, I've corrected 'amount' to 'number' in several places throughout this page. the rule of thumb: If you can count it, use 'number'; if you can weigh it, use 'amount'. Steev (talk) 23:58, 14 October 2011 (UTC)
Clarification of purpose?
The lead implies that the article is describing the historical "gas laws" rather then a detailed treatment of the laws themselves - better placed elsewhere. Have tidied up a bit to emphasise this. Pdch (talk) 12:59, 1 January 2008 (UTC)
Avogadro's law mistake
There is a mistake in Avogadro's law, says P1/T1=P2/T2 when it should have mole number, not temperature, and should probably have a fancy-pants image for the equation like all the other sections. I don't have the expertise or impetus to make it look nice, so I'll leave it to someone who is actually a legitimate Wikipedia editor instead of a random guy, so I thought I'd just point it out here. — Preceding unsigned comment added by 184.108.40.206 (talk) 05:21, 4 December 2011 (UTC)
Combined and ideal gas laws: k5
It's a constant, named "constant 5". Usually, when teaching Gas Laws, you are dealing with many constants. So you use subscript to avoid confusion. As in: PV=K1, V/T=K2, etc.
Conundrum: Inverse relationship between P and T
If I combine Boyle’s and Charles’ laws I get an absurd relationship.
(1) Boyle’s Law: PV=k1
(2) Charles’ Law: V/T=k2 or V=k2T.
Substituting for V in (1),
k2PT=k1 or PT=k1/k2, or P=k3/T. (Inverse relationship. Absurd!)
I would appreciate it very much if a person more learned than me would point out the error in this operation. --Prof. Bleent (talk) 16:12, 17 April 2015 (UTC)
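One way to see where the operation goes astray (an added note, not part of the original thread): k1 is constant only while T is held fixed, and k2 only while P is held fixed, so the two cannot both be treated as constants of a single process. A numerical sketch:

```python
# With the ideal gas law P*V = n*R*T: Boyle's k1 = P*V = n*R*T is constant
# only at fixed T, and Charles's k2 = V/T = n*R/P only at fixed P. Neither
# stays constant across a process that changes both T and P.
R = 8.314  # gas constant, J/(mol*K)

def k1(n: float, T: float) -> float:
    return n * R * T       # Boyle's "constant" P*V

def k2(n: float, P: float) -> float:
    return n * R / P       # Charles's "constant" V/T

# Heat 1 mol from 300 K to 600 K at constant volume: k1 doubles,
# so k3 = k1/k2 is not a constant either.
print(k1(1.0, 300.0), k1(1.0, 600.0))
```

Substituting both at once implicitly assumes T and P are each held fixed, which is only possible for a single state, not a process.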
Going beyond classical ideal gases
As the article currently stands, all the stuff only applies up to the classical ideal gas law. No attempt is made at handling the general "gas laws". Hence, no Van der Waals equation of state, which is equally a valid gas law that one would think comes under this heading. I think a change in the name of this article and at least a hint and link to stuff beyond the simplistic classical ideal gas is both the cheapest way to handle this problem, and also in accordance with the reason of existence of this article. 220.127.116.11 (talk) 11:28, 3 November 2015 (UTC)
A McGill-led research team using the Herschel Space Observatory has discovered a giant, galaxy-packed filament ablaze with billions of new stars. The filament connects two clusters of galaxies that, along with a third cluster, will smash together and give rise to one of the largest galaxy superclusters in the universe.
The filament is the first structure of its kind spied in a critical era of cosmic buildup when colossal collections of galaxies called superclusters began to take shape. The glowing galactic bridge offers astronomers a unique opportunity to explore how galaxies evolve and merge to form superclusters.
"We are excited about this filament, because we think the intense star formation we see in its galaxies is related to the consolidation of the surrounding supercluster," said Kristen Coppin, a postdoctoral fellow in astrophysics at McGill and lead author of a new paper in Astrophysical Journal Letters.
"This luminous bridge of star formation gives us a snapshot of how the evolution of cosmic structure on very large scales affects the evolution of the individual galaxies trapped within it," said Jim Geach, a co-author also based at McGill.
The intergalactic filament, containing hundreds of galaxies, spans 8 million light-years and links two of the three clusters that make up a supercluster known as RCS2319. This emerging supercluster is an exceptionally rare, distant object whose light has taken more than seven billion years to reach us.
RCS2319 is the subject of a huge observational study, led by Professor Tracy Webb and her group at McGill's Department of Physics. Previous observations in visible and X-ray light had found the cluster cores and hinted at the presence of a filament. It was not until astronomers trained Herschel on the region, however, that the intense star-forming activity in the filament became clear. Dust obscures much of the star-formation activity in the early universe, but telescopes like Herschel can detect the infrared glow of this dust as it is heated by nascent stars. (The Herschel Space Observatory is a European Space Agency mission with important NASA contributions.)
The amount of infrared light suggests that the galaxies in the filament are cranking out the equivalent of about 1,000 solar masses (the mass of our sun) of new stars per year. For comparison's sake, our Milky Way galaxy is producing about one solar mass-worth of new stars per year.
Researchers chalk up the blistering pace of star formation in the filament to the fact that galaxies within it are being crunched into a relatively small cosmic volume under the force of gravity. "A high rate of interactions and mergers between galaxies could be disturbing the galaxies' gas reservoirs, igniting bursts of star formation," said Geach.
By studying the filament, astronomers will be able to explore the fundamental issue of whether "nature" versus "nurture" matters more in the life progression of a galaxy. "Is the evolution of a galaxy dominated by intrinsic properties such as total mass, or do wider-scale cosmic environments largely determine how galaxies grow and change?" Geach asked. "The role of the environment in influencing galactic evolution is one of the key questions of modern astrophysics."
The galaxies in the RCS2319 filament will eventually migrate toward the center of the emerging supercluster. Over the next seven to eight billion years, astronomers think RCS2319 will come to look like gargantuan superclusters in the local universe, like the nearby Coma cluster. These advanced clusters are chock-full of "red and dead" elliptical galaxies that contain aged, reddish stars instead of young ones.
"The galaxies we are seeing as starbursts in RCS2319 are destined to become dead galaxies in the gravitational grip of one of the most massive structures in the universe," said Geach. "We're catching them at the most important stage of their evolution."
Herschel is a European Space Agency cornerstone mission, with science instruments provided by consortia of European institutes and with important participation by NASA. NASA's Herschel Project Office is based at NASA's Jet Propulsion Laboratory, Pasadena, Calif. The NASA Herschel Science Center, part of the Infrared Processing and Analysis Center at the California Institute of Technology in Pasadena, supports the United States astronomical community. Caltech manages JPL for NASA. More information on Herschel is available at http://www.herschel.caltech.edu, http://www.nasa.gov/herschel and http://www.esa.int/SPECIALS/Herschel
Large carnivore populations are recovering in many protected areas in North America, but the effect of increasing carnivore numbers on existing predator–prey and predator–predator interactions is poorly understood. We studied diet and spatial overlap among cougars (Puma concolor) and gray wolves (Canis lupus) in Banff National Park, Alberta (1993–2004) to evaluate how wolf recovery in the park influenced diet choice and space use patterns of resident cougars. Cougars (n = 13) and wolves (n = 8 in 2 packs) were monitored intensively over 3 winters (2000–2001 to 2002–2003) via radio telemetry and snow-tracking. We documented a 65% decline in the local elk population following the arrival of wolves, with cougars concurrently switching from a winter diet primarily constituted of elk to one consisting mainly of deer and other alternative prey. Elk also became less important in wolf diet, but this latter diet switch lagged 1 y behind that of cougars. Wolves were responsible for cougar mortality and usurping prey carcasses from cougars, but cougars failed to exhibit reciprocal behaviour. Cougar and wolf home ranges overlapped, but cougars showed temporal avoidance of areas recently occupied by wolves. We conclude that wolves can alter the diet and space use patterns of sympatric large carnivores through interference and exploitative interactions. Understanding these relationships is important for the effective conservation and management of large mammals in protected areas where carnivore populations are recovering.
1. When a high energy electron impacts molecule M in the ionization chamber of a mass spectrometer, what type of species is initially produced?
c) radical cation
d) radical anion
What compound would be expected to show intense IR absorption at 3300 cm⁻¹?
Ethyne (H–C≡C–H) does not show IR absorption in the region 2000–2500 cm⁻¹ because:
a. there is no change in the dipole moment when the C≡C bond in ethyne stretches
b. there is a change in the dipole moment when the C≡C bond in ethyne stretches.
c. C–H stretches occur at lower energies.
d. C≡C stretches occur at about 1640 cm⁻¹.
Which compound would show a larger than usual M+2 peak?
Which of the following fragment peaks would not be expected in n-hexane?
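Since the answer choices for the n-hexane question are not reproduced above, here is a hedged sketch (nominal integer masses; the helper name is mine, not from the solution) of the fragment series one would actually expect from n-hexane under electron ionization, so that any listed peak outside this series can be recognized as the "not expected" option.

```python
C, H = 12, 1  # nominal (integer) atomic masses of carbon and hydrogen

def alkyl_cation_mass(n_carbons):
    """Nominal m/z of a CnH(2n+1)+ alkyl cation: 14n + 1."""
    return n_carbons * C + (2 * n_carbons + 1) * H

molecular_ion = 6 * C + 14 * H  # n-hexane, C6H14 -> m/z 86
# Cleaving each C-C bond gives the alkyl cation series CH3+ ... C5H11+:
fragments = [alkyl_cation_mass(n) for n in range(1, 6)]  # expected m/z values
```

A peak such as m/z 77 (typical of a phenyl cation from aromatics) would not fit this series.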
We OTAs at BrainMass do not know the time that you submitted the posting. All we know is the Deadline that you have set for this posting. To what your 45 minutes refers, we cannot tell. However, the deadline that you posted is 5:46 EST, Friday the 17th.
I trust that you understand that, and that I have completed this work well before the deadline.
Authors: Anders Lindman
The fundamental set theory (FST) is defined as an axiomatic set theory using nonclassical three-valued logic in the foundation and classical two-valued logic in its applications. In this way the nonclassical logic becomes encapsulated and is only used for resolving inconsistencies such as Russell's paradox.
Comments: 3 Pages.
[v1] 2017-10-21 10:32:06
Unique-IP document downloads: 106 times
Wheeler–Feynman absorber theory
The Wheeler–Feynman absorber theory (also called the Wheeler–Feynman time-symmetric theory), named after its originators, the physicists Richard Feynman and John Archibald Wheeler, is an interpretation of electrodynamics derived from the assumption that the solutions of the electromagnetic field equations must be invariant under time-reversal transformation, as are the field equations themselves. Indeed, there is no apparent reason for the time-reversal symmetry breaking, which singles out a preferential time direction and thus makes a distinction between past and future. A time-reversal invariant theory is more logical and elegant. Another key principle, resulting from this interpretation and reminiscent of Mach's principle due to Tetrode, is that elementary particles are not self-interacting. This immediately removes the problem of self-energies.
- 1 T-symmetry and causality
- 2 T-symmetry and self-interaction
- 3 Criticism
- 4 Developments since original formulation
- 5 Woodward effect
- 6 Conclusions
- 7 See also
- 8 Notes
- 9 Sources
T-symmetry and causality
The requirement of time-reversal symmetry, in general, is difficult to conjugate with the principle of causality. Maxwell's equations and the equations for electromagnetic waves have, in general, two possible solutions: a retarded (delayed) solution and an advanced one. Accordingly, any charged particle generates waves, say at time t₀ and point x₀, which will arrive at point x₁ at the instant t₁ = t₀ + r/c (here c is the speed of light and r is the distance between x₀ and x₁), after the emission (retarded solution), and other waves, which will arrive at the same place at the instant t₂ = t₀ − r/c, before the emission (advanced solution). The latter, however, violates the causality principle: advanced waves could be detected before their emission. Thus the advanced solutions are usually discarded in the interpretation of electromagnetic waves. In the absorber theory, instead charged particles are considered as both emitters and absorbers, and the emission process is connected with the absorption process as follows: Both the retarded waves from emitter to absorber and the advanced waves from absorber to emitter are considered. The sum of the two, however, results in causal waves, although the anti-causal (advanced) solutions are not discarded a priori.
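The two families of solutions can be checked numerically. A minimal sketch (illustrative units; the Gaussian profile, wave speed, and step sizes are my arbitrary choices, not from the article) verifies with finite differences that both a retarded-type and an advanced-type profile solve the same one-dimensional wave equation, while a profile moving at the wrong speed does not.

```python
import math

c = 2.0    # wave speed (illustrative units)
h = 1e-3   # finite-difference step

def g(s):
    """Smooth Gaussian pulse profile."""
    return math.exp(-s * s)

def residual(u, x, t):
    """|u_tt - c^2 u_xx| estimated with centered second differences."""
    u_tt = (u(x, t + h) - 2.0 * u(x, t) + u(x, t - h)) / h**2
    u_xx = (u(x + h, t) - 2.0 * u(x, t) + u(x - h, t)) / h**2
    return abs(u_tt - c * c * u_xx)

retarded = lambda x, t: g(x - c * t)  # arrives after emission
advanced = lambda x, t: g(x + c * t)  # "arrives" before emission

r_ret = residual(retarded, 0.3, 0.1)  # both residuals are tiny:
r_adv = residual(advanced, 0.3, 0.1)  # both are exact solutions
```

Nothing in the wave equation itself prefers one family; the preference comes from the boundary conditions we impose.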
Feynman and Wheeler obtained this result in a very simple and elegant way. They considered all the charged particles (emitters) present in our universe and assumed all of them to generate time-reversal symmetric waves. The resulting field is

E_tot(x, t) = Σ_n [E_n^ret(x, t) + E_n^adv(x, t)] / 2.

Then they observed that if the relation

E_free(x, t) = Σ_n [E_n^ret(x, t) − E_n^adv(x, t)] / 2 = 0

holds, then E_free, being a solution of the homogeneous Maxwell equation, can be used to obtain the total field

E_tot(x, t) = Σ_n [E_n^ret + E_n^adv] / 2 + Σ_n [E_n^ret − E_n^adv] / 2 = Σ_n E_n^ret(x, t).

The total field is retarded, and causality is not violated.
The assumption that the free field is identically zero is the core of the absorber idea. It means that the radiation emitted by each particle is completely absorbed by all other particles present in the universe. To better understand this point, it may be useful to consider how the absorption mechanism works in common materials. At the microscopic scale, it results from the sum of the incoming electromagnetic wave and the waves generated from the electrons of the material, which react to the external perturbation. If the incoming wave is absorbed, the result is a zero outgoing field. In the absorber theory the same concept is used, however, in presence of both retarded and advanced waves.
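The bookkeeping in this argument can be illustrated with toy numbers (the "field" values below are arbitrary placeholders, not physics): whenever the free-field half-difference sums to zero, the time-symmetric half-sum reduces to the purely retarded sum.

```python
import random

random.seed(0)
N = 5
# Toy per-particle "retarded fields" (arbitrary placeholder numbers):
E_ret = [random.uniform(-1.0, 1.0) for _ in range(N)]

# Choose "advanced fields" so the absorber condition holds, i.e. the
# half-difference (free field) sums to zero: E_adv differs from E_ret
# only by perturbations that cancel in the sum.
deltas = [0.3, -0.1, 0.25, -0.45, 0.0]  # sums to zero
E_adv = [r + d for r, d in zip(E_ret, deltas)]

free = sum(r - a for r, a in zip(E_ret, E_adv)) / 2.0
total_symmetric = sum(r + a for r, a in zip(E_ret, E_adv)) / 2.0
total_retarded = sum(E_ret)
```

The identity ½(ret + adv) + ½(ret − adv) = ret is trivial per particle; the physical content is the claim that the summed half-difference really vanishes, i.e. that the universe is a perfect absorber.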
The resulting wave appears to have a preferred time direction, because it respects causality. However, this is only an illusion. Indeed, it is always possible to reverse the time direction by simply exchanging the labels emitter and absorber. Thus, the apparently preferred time direction results from the arbitrary labelling.
T-symmetry and self-interaction
One of the major results of the absorber theory is the elegant and clear interpretation of the electromagnetic radiation process. A charged particle that experiences acceleration is known to emit electromagnetic waves, i.e., to lose energy. Thus, the Newtonian equation for the particle (F = ma) must contain a dissipative force (damping term), which takes into account this energy loss. In the causal interpretation of electromagnetism, Lorentz and Abraham proposed that such a force, later called Abraham–Lorentz force, is due to the retarded self-interaction of the particle with its own field. This first interpretation, however, is not completely satisfactory, as it leads to divergences in the theory and needs some assumptions on the structure of charge distribution of the particle. Dirac generalized the formula to make it relativistically invariant. While doing so, he also suggested a different interpretation. He showed that the damping term can be expressed in terms of a free field acting on the particle at its own position:

E_damping = [E_self^ret − E_self^adv] / 2.

However, Dirac did not propose any physical explanation of this interpretation.
A clear and simple explanation can instead be obtained in the framework of absorber theory, starting from the simple idea that each particle does not interact with itself. This is actually the opposite of the first Abraham–Lorentz proposal. The field acting on particle j at its own position (the point x_j) is then

E(x_j, t) = Σ_{n≠j} [E_n^ret(x_j, t) + E_n^adv(x_j, t)] / 2.

If we sum the free-field term of this expression, we obtain

E(x_j, t) = Σ_{n≠j} E_n^ret(x_j, t) + [E_j^ret(x_j, t) − E_j^adv(x_j, t)] / 2

and, thanks to Dirac's result,

E(x_j, t) = Σ_{n≠j} E_n^ret(x_j, t) + E_damping(x_j, t).

Thus, the damping force is obtained without the need for self-interaction, which is known to lead to divergences, and also giving a physical justification to the expression derived by Dirac.
The Abraham–Lorentz force is, however, not free of problems. Written in the non-relativistic limit, it gives

F_damping = (q² / 6πε₀c³) (da/dt).
Since the third derivative with respect to the time (also called the "jerk" or "jolt") enters in the equation of motion, to derive a solution one needs not only the initial position and velocity of the particle, but also its initial acceleration. This apparent problem, however, can be solved in the absorber theory by observing that the equation of motion for the particle has to be solved together with the Maxwell equations for the field. In this case, instead of the initial acceleration, one only needs to specify the initial field and the boundary condition. This interpretation restores the coherence of the physical interpretation of the theory.
Other difficulties may arise trying to solve the equation of motion for a charged particle in the presence of this damping force. It is commonly stated that the Maxwell equations are classical and cannot correctly account for microscopic phenomena, such as the behavior of a point-like particle, where quantum-mechanical effects should appear. Nevertheless, with absorber theory, Wheeler and Feynman were able to create a coherent classical approach to the problem (see also the "paradoxes" section in the Abraham–Lorentz force).
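The role of the extra condition can be seen in the simplest case. A sketch (illustrative units and parameter values of my choosing) integrates the force-free nonrelativistic equation, in which the damping term alone reduces to da/dt = a/τ and therefore produces the well-known exponential "runaway" in the acceleration for any nonzero initial value.

```python
import math

# With no external force the nonrelativistic Abraham-Lorentz equation
# m*a = m*tau*(da/dt) reduces to da/dt = a/tau, so a nonzero initial
# acceleration grows as exp(t/tau): the classic runaway solution. This
# is why initial position and velocity alone do not single out a
# physical trajectory.
tau = 1.0       # characteristic time (illustrative, not a physical value)
dt = 1e-4       # Euler time step
steps = 10_000  # integrate up to t = 1.0

a = 1.0         # nonzero initial acceleration
for _ in range(steps):
    a += (a / tau) * dt  # Euler step of da/dt = a/tau

expected = math.exp(1.0)  # analytic a(t) = a0 * exp(t/tau) at t = 1
```

Picking the physical (non-runaway) solution amounts to imposing an extra condition on the acceleration, which is exactly the role the field boundary condition plays in the absorber formulation.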
Also, the time-symmetric interpretation of the electromagnetic waves appears to be in contrast with the experimental evidence that time flows in a given direction and, thus, that the T-symmetry is broken in our world. It is commonly believed, however, that this symmetry breaking appears only in the thermodynamical limit (see, for example, the arrow of time). Wheeler himself accepted that the expansion of the universe is not time-symmetric in the thermodynamic limit. This, however, does not imply that the T-symmetry must be broken also at the microscopic level.
Finally, the main drawback of the theory turned out to be the result that particles are not self-interacting. Indeed, as demonstrated by Hans Bethe, the Lamb shift necessitated a self-energy term to be explained. Feynman and Bethe had an intense discussion over that issue, and eventually Feynman himself stated that self-interaction is needed to correctly account for this effect.
Developments since original formulation
Inspired by the Machian nature of the Wheeler–Feynman absorber theory for electrodynamics, Fred Hoyle and Jayant Narlikar proposed their own theory of gravity in the context of general relativity. This model still exists in spite of recent astronomical observations that have challenged the theory. Stephen Hawking had criticized the original Hoyle-Narlikar theory believing that the advanced waves going off to infinity would lead to a divergence, as indeed they would, if the universe were only expanding. However, as emphasized in the revised version of the Hoyle-Narlikar theory devoid of the "Creation Field" (generating matter out of empty space) known as the Gravitational absorber theory, the universe is also accelerating in that expansion. The acceleration leads to a horizon type cutoff and hence no divergence. Gravitational absorber theory has been used to explain the mass fluctuations in the Woodward effect (see section on Woodward effect below).
Transactional interpretation of quantum mechanics
Again inspired by the Wheeler–Feynman absorber theory, the transactional interpretation of quantum mechanics (TIQM), first proposed in 1986 by John G. Cramer, describes quantum interactions in terms of a standing wave formed by retarded (forward-in-time) and advanced (backward-in-time) waves. Cramer claims it avoids the philosophical problems of the Copenhagen interpretation and the role of the observer, and resolves various quantum paradoxes, such as quantum nonlocality, quantum entanglement and retrocausality.
Shu-Yuan Chu's quantum theory in the presence of gravity
In 1993, Chu developed a model of how to do quantum mechanics in the presence of gravity, which combines some of the latest ideas in particle physics, superstrings, and a time-symmetric Wheeler–Feynman description of gravity and inertia. In 1998 he extended this work to derive Einstein's equation for the "adjunct gravitational field" using concepts from statistics and entropy maximization.
Attempted resolution of causality
T. C. Scott and R. A. Moore demonstrated that the apparent acausality suggested by the presence of advanced Liénard–Wiechert potentials could be removed by recasting the theory in terms of retarded potentials only, without the complications of the absorber idea. The Lagrangian describing a particle (p₁) under the influence of the time-symmetric potential generated by another particle (p₂) is

L₁ = T₁ − ½ [(V_ret)₂→₁ + (V_adv)₂→₁],

where Tᵢ is the relativistic kinetic energy functional of particle i, and (V_ret)ⱼ→ᵢ and (V_adv)ⱼ→ᵢ are respectively the retarded and advanced Liénard–Wiechert potentials acting on particle i and generated by particle j. The corresponding Lagrangian for particle p₂ is

L₂ = T₂ − ½ [(V_ret)₁→₂ + (V_adv)₁→₂].

The difference

(V_adv)ⱼ→ᵢ − (V_ret)ᵢ→ⱼ

is a total time derivative, i.e. a divergence in the calculus of variations, and thus it gives no contribution to the Euler–Lagrange equations. Thanks to this result the advanced potentials can be eliminated; here the total derivative plays the same role as the free field. The Lagrangian for the N-body system is therefore

L_N = Σᵢ Tᵢ − Σᵢ Σ_{j<i} [(V_ret)ⱼ→ᵢ + (V_ret)ᵢ→ⱼ].

The resulting Lagrangian is symmetric under the exchange of pᵢ with pⱼ. For N = 2 this Lagrangian will generate exactly the same equations of motion as L₁ and L₂. Therefore, from the point of view of an outside observer, everything is causal. This formulation reflects particle-particle symmetry with the variational principle applied to the N-particle system as a whole, and thus Tetrode's Machian principle. Only if we isolate the forces acting on a particular body do the advanced potentials make their appearance. This recasting of the problem comes at a price: the N-body Lagrangian depends on all the time derivatives of the curves traced by all particles, i.e. the Lagrangian is infinite-order. However, much progress was made in examining the unresolved issue of quantizing the theory. Also, this formulation recovers the Darwin Lagrangian, from which the Breit equation was originally derived, but without the dissipative terms. This ensures agreement with theory and experiment, up to but not including the Lamb shift. Numerical solutions for the classical problem were also found. Furthermore, Moore showed that a model by Feynman and Hibbs is amenable to the methods of higher than first-order Lagrangians and revealed chaotic-like solutions. Moore and Scott showed that the radiation reaction can be alternatively derived using the notion that, on average, the net dipole moment is zero for a collection of charged particles, thereby avoiding the complications of the absorber theory. An important bonus from their approach is the formulation of a total preserved canonical generalized momentum, as presented in a comprehensive review article in the light of quantum nonlocality.
This acausality may then be viewed as merely apparent, and the entire problem goes away. An opposing view was held by Einstein.
Alternative Lamb shift calculation
As mentioned previously, a serious criticism against the absorber theory is that its Machian assumption that point particles do not act on themselves does not allow (infinite) self-energies and consequently an explanation for the Lamb shift according to quantum electrodynamics (QED). Ed Jaynes proposed an alternative model in which the Lamb-like shift is instead due to interaction with other particles, very much along the lines of the Wheeler–Feynman absorber theory itself. One simple model is to calculate the motion of an oscillator coupled directly with many other oscillators. Jaynes has shown that it is easy to get both spontaneous emission and Lamb shift behavior in classical mechanics. Furthermore, Jaynes's alternative provides a solution to the process of "addition and subtraction of infinities" associated with renormalization.
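Jaynes's many-oscillator model is not reproduced in the sources quoted here, but its flavor can be sketched with just two coupled classical oscillators (all parameter values are my illustrative choices): the coupling splits the bare frequency into normal modes sqrt(w0² ± g), a Lamb-like level shift, and the initially excited oscillator hands its energy to its partner, an emission-like behavior obtained purely from classical mechanics.

```python
import math

w0 = 1.0   # bare oscillator frequency (illustrative)
g = 0.1    # weak coupling (illustrative)
w_plus = math.sqrt(w0**2 + g)   # shifted normal-mode frequencies:
w_minus = math.sqrt(w0**2 - g)  # the coupling-induced "level shift"

def energy(x, v):
    """Energy of one oscillator measured with the bare frequency."""
    return 0.5 * (v * v + w0 * w0 * x * x)

# Integrate x0'' = -w0^2 x0 - g x1 and x1'' = -w0^2 x1 - g x0 (leapfrog).
x0, v0, x1, v1 = 1.0, 0.0, 0.0, 0.0
E_init = energy(x0, v0)
dt = 1e-3
t_transfer = math.pi / (w_plus - w_minus)  # half a beat period
for _ in range(int(t_transfer / dt)):
    a0 = -w0**2 * x0 - g * x1
    a1 = -w0**2 * x1 - g * x0
    v0 += 0.5 * dt * a0
    v1 += 0.5 * dt * a1
    x0 += dt * v0
    x1 += dt * v1
    a0 = -w0**2 * x0 - g * x1
    a1 = -w0**2 * x1 - g * x0
    v0 += 0.5 * dt * a0
    v1 += 0.5 * dt * a1

E_final = energy(x0, v0)  # almost all energy has moved to the partner
```

With many bath oscillators instead of one partner, the same mechanism yields irreversible-looking decay (spontaneous emission) rather than a clean beat.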
This model leads to essentially the same type of Bethe logarithm, an essential part of the Lamb shift calculation, vindicating Jaynes's claim that two different physical models can be mathematically isomorphic to each other and therefore yield the same results, a point also apparently made by Scott and Moore on the issue of causality.
The Woodward effect is a physical hypothesis about the possibility for a body to see its mass change when the energy density varies in time. Proposed in 1990 by James Woodward, the effect is based on a formulation of Mach's principle proposed in 1953 by Dennis Sciama.
If confirmed experimentally (see timeline of results in the main article), the Woodward effect would open pathways in astronautics research, as it could be used to propel a spacecraft by propellantless propulsion meaning that it would not have to expel matter to accelerate. As previously formulated by Sciama, Woodward suggests that the Wheeler–Feynman absorber theory would be the correct way to understand the action of instantaneous inertial forces in Machian terms.
This universal absorber theory is mentioned in the chapter titled "Monster Minds" in Feynman's autobiographical work Surely You're Joking, Mr. Feynman! and in Vol. II of the Feynman Lectures on Physics. It led to the formulation of a framework of quantum mechanics using a Lagrangian and action as starting points, rather than a Hamiltonian, namely the formulation using Feynman path integrals, which proved useful in Feynman's earliest calculations in quantum electrodynamics and quantum field theory in general. Both retarded and advanced fields appear respectively as retarded and advanced propagators and also in the Feynman propagator and the Dyson propagator. In hindsight, the relationship between retarded and advanced potentials shown here is not so surprising in view of the fact that, in field theory, the advanced propagator can be obtained from the retarded propagator by exchanging the roles of field source and test particle (usually within the kernel of a Green's function formalism). In field theory, advanced and retarded fields are simply viewed as mathematical solutions of Maxwell's equations whose combinations are decided by the boundary conditions.
- Symmetry in physics and T-symmetry
- Transactional interpretation
- Abraham–Lorentz force
- Two-state vector formalism
- Paradox of a charge in a gravitational field
- Ni Guangjiong
- Gleick, James (1993). Genius: The Life and Science of Richard Feynman. New York : Vintage Books. ISBN 978-0679747048.
- F. Hoyle and J. V. Narlikar (1964). "A New Theory of Gravitation". Proceedings of the Royal Society A. Bibcode:1964RSPSA.282..191H. doi:10.1098/rspa.1964.0227.
- "Cosmology: Math Plus Mach Equals Far-Out Gravity". Time. June 26, 1964. Retrieved 7 August 2010.
- Hoyle, F.; Narlikar, J. V. (1995). "Cosmology and action-at-a-distance electrodynamics". Reviews of Modern Physics. 67 (1): 113–155. Bibcode:1995RvMP...67..113H. doi:10.1103/RevModPhys.67.113.
- Edward L. Wright. "Errors in the Steady State and Quasi-SS Models". Retrieved 7 August 2010.
- Fearn, Heidi (September 2016). Gravitational Absorber Theory & the Mach Effect (PDF). Exotic Propulsion Workshop. Estes Park, CO: Space Studies Institute. pp. 89–109.
- Cramer, John G. (July 1986). "The Transactional Interpretation of Quantum Mechanics". Reviews of Modern Physics. 58 (3): 647–688. Bibcode:1986RvMP...58..647C. doi:10.1103/RevModPhys.58.647.
- Cramer, John G. (February 1988). "An Overview of the Transactional Interpretation" (PDF). International Journal of Theoretical Physics. 27 (2): 227–236. Bibcode:1988IJTP...27..227C. doi:10.1007/BF00670751.
- Cramer, John G. (3 April 2010). "Quantum Entanglement, Nonlocality, Back-in-Time Messages" (PPT). John G. Cramer's Home Page. University of Washington.
- Cramer, John G. (2016). The Quantum Handshake: Entanglement, Nonlocality and Transactions. Springer Science+Business Media. ISBN 978-3319246406.
- S. Y. Chu, Phys. Rev. Lett., 71, 2847 (1993)
- Richard Feynman: A life in science, p.273 et seq., John Gribbin, Mary Gribbin, Dutton, Penguin Books, 1997
- Shu-Yuan Chu, Time-Symmetric Approach to Gravity, arXiv:gr-qc/98020v1, Feb 27, 1998
- Moore, R. A.; Scott, T. C.; Monagan, M. B. (1987). "Relativistic, many-particle Lagrangean for electromagnetic interactions". Phys. Rev. Lett. 59 (5): 525–527. Bibcode:1987PhRvL..59..525M. doi:10.1103/PhysRevLett.59.525.
- Moore, R. A.; Scott, T. C.; Monagan, M. B. (1988). "A Model for a Relativistic Many-Particle Lagrangian with Electromagnetic Interactions". Can. J. Phys. 66 (3): 206–211. Bibcode:1988CaJPh..66..206M. doi:10.1139/p88-032.
- Scott, T. C.; Moore, R. A.; Monagan, M. B. (1989). "Resolution of Many Particle Electrodynamics by Symbolic Manipulation". Comput. Phys. Commun. 52 (2): 261–281. Bibcode:1989CoPhC..52..261S. doi:10.1016/0010-4655(89)90009-X.
- Scott, T. C. (1986). "Relativistic Classical and Quantum Mechanical Treatment of the Two-body Problem". MMath thesis. U. of Waterloo, Canada.
- Scott, T. C.; Moore, R. A. (1989). "Quantization of Hamiltonians from High-Order Lagrangians". Nucl. Phys. B. 6 (Proc. Suppl.): 455–457. Bibcode:1989NuPhS...6..455S. doi:10.1016/0920-5632(89)90498-2.
- Moore, R. A.; Scott, T. C. (1991). "Quantization of Second-Order Lagrangians: Model Problem". Phys. Rev. A. 44 (3): 1477–1484. Bibcode:1991PhRvA..44.1477M. doi:10.1103/PhysRevA.44.1477.
- Moore, R. A.; Scott, T. C. (1992). "Quantization of Second-Order Lagrangians: The Fokker-Wheeler-Feynman model of electrodynamics". Phys. Rev. A. 46 (7): 3637–3645. Bibcode:1992PhRvA..46.3637M. doi:10.1103/PhysRevA.46.3637.
- Moore, R. A.; Qi, D.; Scott, T. C. (1992). "Causality of Relativistic Many-Particle Classical Dynamics Theories". Can. J. Phys. 70 (9): 772–781. Bibcode:1992CaJPh..70..772M. doi:10.1139/p92-122.
- Moore, R. A. (1999). "Formal quantization of a chaotic model problem". Can. J. Phys. 77 (3): 221–233. Bibcode:1999CaJPh..77..221M. doi:10.1139/p99-020.
- Scott, T. C.; Andrae, D. (2015). "Quantum Nonlocality and Conservation of momentum". Phys. Essays. 28 (3): 374–385. Bibcode:2015PhyEs..28..374S. doi:10.4006/0836-1398-28.3.374.
- E. T. Jaynes, "The Lamb Shift in Classical Mechanics" in "Probability in Quantum Theory", pp. 13–15, (1996) Jaynes' analysis of Lamb shift.
- E. T. Jaynes, "Classical Subtraction Physics" in "Probability in Quantum Theory", pp. 15–18, (1996) Jaynes' analysis of handing the infinities of the Lamb shift calculation.
- Woodward, James F. (October 1990). "A new experimental approach to Mach's principle and relativistic gravitation". Foundations of Physics Letters. 3 (5): 497–506. Bibcode:1990FoPhL...3..497W. doi:10.1007/BF00665932.
- Sciama, D. W. (1953). "On the Origin of Inertia". Royal Astronomical Society. 113: 34–42. Bibcode:1953MNRAS.113...34S. doi:10.1093/mnras/113.1.34.
- Woodward, James F. (May 2001). "Gravity, Inertia, and Quantum Vacuum Zero Point Fields". Foundations of Physics. 31 (5): 819–835. doi:10.1023/A:1017500513005.
- Wheeler, J. A.; Feynman, R. P. (April 1945). "Interaction with the Absorber as the Mechanism of Radiation" (PDF). Reviews of Modern Physics. 17 (2–3): 157–181. Bibcode:1945RvMP...17..157W. doi:10.1103/RevModPhys.17.157.
- Wheeler, J. A.; Feynman, R. P. (July 1949). "Classical Electrodynamics in Terms of Direct Interparticle Action" (PDF). Reviews of Modern Physics. 21 (3): 425–433. Bibcode:1949RvMP...21..425W. doi:10.1103/RevModPhys.21.425. | <urn:uuid:c0a55f74-bb0c-4047-a755-01445b83d862> | 3.03125 | 5,070 | Knowledge Article | Science & Tech. | 52.803124 | 95,624,514 |
When a highly electronegative element forms a covalent bond with a hydrogen atom, the electrons constituting the covalent bond are shifted towards the more electronegative atom. This develops a partial positive charge on the hydrogen atom, which enables bond formation with the electronegative atoms of other molecules. This bond is called the hydrogen bond, and it is considerably weaker than a covalent bond. For example, in HF there is a hydrogen bond between the hydrogen atom of one molecule and the fluorine atom of a neighbouring molecule.
– – – Hδ+–Fδ− – – – Hδ+–Fδ− – – – Hδ+–Fδ−
In this arrangement the hydrogen atom acts as a bridge between two electronegative atoms: it is held to one by a covalent bond and to the other by a hydrogen bond. In the structure above, the hydrogen bond is shown by the dashed line (– – –) and the covalent bond by the solid line (–).
So it can be said that a hydrogen bond is just an attractive force which binds the hydrogen atom of one molecule to the electronegative atoms (F, O or N) of another molecule.
Cause of hydrogen bond formation
Because the hydrogen atom is bonded to a highly electronegative element, the shared pair of electrons is drawn away from the hydrogen atom towards that element. This leaves a partial positive charge on the hydrogen atom and a partial negative charge on the electronegative element, producing a polar molecule with an electrostatic force of attraction between molecules. The strength of hydrogen bonding depends on the physical state of the compound: it is greatest in the solid state and least in the gaseous state.
Types of Hydrogen bonding
There are two types of H-bonds, and it is classified as the following:
- Intermolecular hydrogen bonding – This type of bond forms between different molecules of the same or different compounds. For example, hydrogen bonding in water and in alcohols.
- Intramolecular hydrogen bonding – This type of bond forms when the hydrogen atom lies between two electronegative atoms present in the same molecule.
| Compound | Molar Mass | Normal Boiling Point |
|----------|------------|----------------------|
| H2O | 18 g/mol | 373 K |
| HF | 20 g/mol | 292.5 K |
| NH3 | 17 g/mol | 239.8 K |
| H2S | 34 g/mol | 212.9 K |
| HCl | 36.4 g/mol | 197.9 K |
| PH3 | 34 g/mol | 185.2 K |
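The anomaly the table illustrates can be made explicit with a few lines of code: sorting the same six hydrides by molar mass and by boiling point gives different orderings, because the hydrogen-bonded compounds boil far higher than their masses alone would suggest. This is an illustrative sketch using only the values from the table above.

```python
# Boiling-point data from the table above: (molar mass g/mol, normal b.p. K)
hydrides = {
    "H2O": (18.0, 373.0),   # hydrogen-bonded
    "HF":  (20.0, 292.5),   # hydrogen-bonded
    "NH3": (17.0, 239.8),   # hydrogen-bonded
    "H2S": (34.0, 212.9),   # no hydrogen bonding
    "HCl": (36.4, 197.9),   # no hydrogen bonding
    "PH3": (34.0, 185.2),   # no hydrogen bonding
}

# If boiling point tracked molar mass alone, the heaviest hydrides would
# boil highest.  Instead the three lightest (H2O, HF, NH3) top the list,
# because intermolecular hydrogen bonds must be broken on vaporization.
by_mass = sorted(hydrides, key=lambda k: hydrides[k][0], reverse=True)
by_bp   = sorted(hydrides, key=lambda k: hydrides[k][1], reverse=True)
print("heaviest first:   ", by_mass)
print("highest b.p. first:", by_bp)
```

H2O, HF and NH3 head the boiling-point list even though H2S, HCl and PH3 are roughly twice as heavy.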
We have seen what hydrogen bonds are, why they form, and their types. Hydrogen bonding has a powerful effect on the structure and properties of many compounds, and it is the reason for the anomalous expansion of water.
Describe the nature and characteristics of PCBs. Why are they of importance to an EH&S or FS professional who encounters them when coming upon an incident scene where they are present? What precautions, or actions, should the EH&S or FS professional take when faced with this situation? What happens to FS personnel and their turnout gear once they are exposed/splashed with PCBs?
Polychlorinated biphenyl compounds, commonly known as PCBs, comprise a category of 209 congeners: organochlorine compounds consisting of two benzene rings connected by a single carbon–carbon bond, usually carrying 1 to 10 chlorine atoms attached to the phenyl rings. The compounds are classified as persistent organic pollutants and their use has been banned in the US since 1979. They were used for many years in various industrial and electrical applications, including as coolants, but toxic effects were observed in wildlife, which led to growing public disapproval of the use of these chemicals, and they have since been included in the 2001 Stockholm Convention on Persistent Organic Pollutants. ...
This solution discusses poly-chlorinated biphenyl compounds, commonly known as PCB's, in 442 words. | <urn:uuid:9aed9a05-8870-4ce1-8652-4008cdb3edca> | 3.296875 | 267 | Truncated | Science & Tech. | 49.69167 | 95,624,539 |
FUNGI AS DRIVERS OF ECOSYSTEM RESPONSE TO GLOBAL CHANGE
Microbes are the engines of carbon (C) cycling in soils, which hold 2-3 times more C than the atmosphere. Fungi represent a significant component of the microbial community in many ecosystems, and in particular, are the primary agents of decomposition in temperate forests. Fungal-mediated decomposition is known to be sensitive to stressors such as atmospheric nitrogen (N) deposition and rising global temperatures, but we lack a mechanistic understanding of the traits that determine responses to environmental change. Here I use a combination of soil biochemical assays, environmental metabarcoding, lab-based culture techniques, and whole-genome sequencing to describe the responses of whole fungal communities and underlying physiological attributes to simulated N deposition and increased temperature. I show that a shift from lignin-decomposing saprotrophs to yeast species with low relative activity levels can partly explain reduced decomposition rates in N-fertilized forests. Similarly, experimental soil warming favors ectomycorrhizal fungi over saprotrophs causing reduced decay rates relative to thermodynamic assumptions. Finally, I show that the efficiency with which fungi convert C resources into biomass is generally decreased by temperature and is linked to life-history traits such as dispersal strategy and growth rate. Together my results provide an understanding of both how and why fungal communities respond to a changing climate. | <urn:uuid:edb46b9c-987c-4cb7-923c-72f2cdb5feae> | 3.03125 | 297 | Academic Writing | Science & Tech. | -0.357316 | 95,624,558 |
Enthalpy of fusion
The enthalpy of fusion of a substance, also known as (latent) heat of fusion, is the change in its enthalpy resulting from providing energy, typically heat, to a specific quantity of the substance to change its state from a solid to a liquid, at constant pressure. For example, when melting 1 kg of ice (at 0°C under a wide range of pressures), 333.55 kJ of energy is absorbed with no temperature change. The heat of solidification (when a substance changes from liquid to solid) is equal and opposite.
This energy includes the contribution required to make room for any associated change in volume by displacing its environment against ambient pressure. The temperature at which the phase transition occurs is the melting point or the freezing point, according to context. By convention, the pressure is assumed to be 1 atm (101.325 kPa) unless otherwise specified.
The enthalpy of fusion is a latent heat: during melting, the heat supplied to change the substance from solid to liquid at atmospheric pressure does not raise its temperature, which remains constant at the melting point. The latent heat of fusion is the enthalpy change of any amount of substance when it melts. When the heat of fusion is referenced to a unit of mass, it is usually called the specific heat of fusion, while the molar heat of fusion refers to the enthalpy change per amount of substance in moles.
The liquid phase has a higher internal energy than the solid phase. This means energy must be supplied to a solid in order to melt it and energy is released from a liquid when it freezes, because the molecules in the liquid experience weaker intermolecular forces and so have a higher potential energy (a kind of bond-dissociation energy for intermolecular forces).
When liquid water is cooled, its temperature falls steadily until it drops just below the line of freezing point at 0 °C. The temperature then remains constant at the freezing point while the water crystallizes. Once the water is completely frozen, its temperature continues to fall.
The enthalpy of fusion is almost always a positive quantity; helium is the only known exception. Helium-3 has a negative enthalpy of fusion at temperatures below 0.3 K. Helium-4 also has a very slightly negative enthalpy of fusion below 0.77 K (−272.380 °C). This means that, at appropriate constant pressures, these substances freeze with the addition of heat. In the case of 4He, this pressure range is between 24.992 and 25.00 atm (2,533 kPa).
| Substance | Heat of fusion (cal/g) | Heat of fusion (J/g) |
|-----------|------------------------|----------------------|
| Paraffin wax (C25H52) | 47.8–52.6 | 200–220 |
These values are mostly from the CRC Handbook of Chemistry and Physics, 62nd edition. The conversion between cal/g and J/g in the above table uses the thermochemical calorie (calth) = 4.184 joules rather than the International Steam Table calorie (calINT) = 4.1868 joules.
A) To heat 1 kg (1.00 liter) of water from 283.15 K to 303.15 K (10 °C to 30 °C) requires 83.6 kJ. However, to melt ice also requires energy. We can treat these two processes independently; thus, to heat 1 kg of ice from 273.15 K to water at 293.15 K (0 °C to 20 °C) requires:
- (1) 333.55 J/g (heat of fusion of ice) = 333.55 kJ/kg = 333.55 kJ for 1 kg of ice to melt
- (2) 4.18 J/(g·K) × 20K = 4.18 kJ/(kg·K) × 20K = 83.6 kJ for 1 kg of water to increase in temperature by 20 K
- = 417.15 kJ
From these figures it can be seen that one part ice at 0 °C will cool almost exactly four parts water from 20 °C to 0 °C.
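The arithmetic of example A can be checked directly; the constants below are the ones quoted in the text (333.55 J/g for the heat of fusion of ice, 4.18 J/(g·K) for liquid water). A sketch only:

```python
H_FUS = 333.55   # J/g, heat of fusion of ice (as quoted above)
C_P   = 4.18     # J/(g*K), specific heat of liquid water

mass_g, dT = 1000.0, 20.0           # 1 kg heated from 0 C to 20 C
melt  = H_FUS * mass_g / 1000.0     # kJ to melt the ice      (333.55 kJ)
warm  = C_P * mass_g * dT / 1000.0  # kJ to warm the meltwater (83.6 kJ)
total = melt + warm                 # 417.15 kJ in all

# How many parts of 20 C water can one part of 0 C ice cool to 0 C?
parts = H_FUS / (C_P * dT)          # ~4 parts
print(f"total = {total:.2f} kJ; ice cools ~{parts:.2f} parts of water")
```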
B) Silicon has a heat of fusion of 50.21 kJ/mol. 50 kW of power can supply the energy required to melt about 100 kg of silicon in one hour, after it is brought to the melting point temperature:
50 kW = 50 kJ/s = 180,000 kJ/h
180,000 kJ/h × (1 mol Si / 50.21 kJ) × (28 g Si / 1 mol Si) × (1 kg Si / 1000 g Si) ≈ 100.4 kg/h
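The same check for example B, using the figures quoted in the text:

```python
H_FUS_SI = 50.21   # kJ/mol, heat of fusion of silicon
M_SI     = 28.0    # g/mol, approximate molar mass used in the text
power_kw = 50.0    # heating power, already at the melting point

kj_per_hour = power_kw * 3600.0            # 180,000 kJ/h
mol_per_h   = kj_per_hour / H_FUS_SI       # moles of Si melted per hour
kg_per_h    = mol_per_h * M_SI / 1000.0    # ~100.4 kg/h
print(f"{kg_per_h:.1f} kg of silicon melted per hour")
```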
The heat of fusion can also be used to predict solubility for solids in liquids. Provided an ideal solution is obtained, the mole fraction of solute at saturation, x, is a function of the heat of fusion, the melting point of the solid (T_fus) and the temperature (T) of the solution:

$$\ln x = -\frac{\Delta H_\text{fus}}{R}\left(\frac{1}{T} - \frac{1}{T_\text{fus}}\right)$$
This corresponds to a solubility in grams per liter of:
the heat of fusion being the difference in chemical potential between the pure liquid and the pure solid, it follows that

$$\Delta G_\text{fus} = -RT\ln x$$

Application of the Gibbs–Helmholtz equation:

$$\left(\frac{\partial\left(\frac{\Delta G_\text{fus}}{T}\right)}{\partial T}\right)_p = -\frac{\Delta H_\text{fus}}{T^2}$$

and with integration (noting that $\Delta G_\text{fus}=0$ at $T=T_\text{fus}$):

$$\ln x = -\frac{\Delta H_\text{fus}}{R}\left(\frac{1}{T}-\frac{1}{T_\text{fus}}\right)$$

the end result is obtained.
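The ideal-solubility expression this derivation yields, ln x = −(ΔH_fus/R)(1/T − 1/T_fus), is easy to turn into a small function. The numerical inputs below (ΔH_fus = 20 kJ/mol, melting point 350 K) are hypothetical, chosen only to illustrate the behaviour; this is a sketch, not data from the text.

```python
import math

R = 8.314  # J/(mol*K), gas constant

def ideal_mole_fraction_solubility(dH_fus, T, T_fus):
    """Ideal-solution saturation mole fraction:
    ln x = -(dH_fus / R) * (1/T - 1/T_fus).
    dH_fus in J/mol, temperatures in K."""
    return math.exp(-(dH_fus / R) * (1.0 / T - 1.0 / T_fus))

# Hypothetical solid: dH_fus = 20 kJ/mol, melting point 350 K.
x_cold = ideal_mole_fraction_solubility(20_000, 300.0, 350.0)
x_melt = ideal_mole_fraction_solubility(20_000, 350.0, 350.0)
print(x_cold, x_melt)  # solubility rises toward x = 1 at the melting point
```

As expected from the formula, the predicted solubility increases with temperature and reaches x = 1 (complete miscibility of the ideal melt) at T = T_fus.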
- Atkins & Jones 2008, p. 236.
- Ott & Boerio-Goates 2000, pp. 92–93.
- Hoffer, J. K.; Gardner, W. R.; Waterfield, C. G.; Phillips, N. E. (April 1976). "Thermodynamic properties of 4He. II. The bcc phase and the P-T and VT phase diagrams below 2 K". Journal of Low Temperature Physics. 23 (1): 63–102. Bibcode:1976JLTP...23...63H. doi:10.1007/BF00117245.
- Ibrahim Dincer and Marc A. Rosen. Thermal Energy Storage: Systems and Applications, page 155
- Hojjati, H.; Rohani, S. (2006). "Measurement and Prediction of Solubility of Paracetamol in Water-Isopropanol Solution. Part 2. Prediction". Org. Process Res. Dev. 10 (6): 1110–1118. doi:10.1021/op060074g
A Gold Catalyst for Clear Water
News Dec 24, 2014
A new catalyst could have dramatic environmental benefits if it can live up to its potential, suggests research from Singapore. A*STAR researchers have produced a catalyst with gold-nanoparticle antennas that can improve water quality in daylight and also generate hydrogen as a green energy source.
This water purification technology was developed by He-Kuan Luo, Andy Hor and colleagues from the A*STAR Institute of Materials Research and Engineering (IMRE). “Any innovative and benign technology that can remove or destroy organic pollutants from water under ambient conditions is highly welcome,” explains Hor, who is executive director of the IMRE and also affiliated with the National University of Singapore.
Photocatalytic materials harness sunlight to create electrical charges, which provide the energy needed to drive chemical reactions in molecules attached to the catalyst’s surface. In addition to decomposing harmful molecules in water, photocatalysts are used to split water into its components of oxygen and hydrogen; hydrogen can then be employed as a green energy source.
Hor and his team set out to improve an existing catalyst. Oxygen-based compounds such as strontium titanate (SrTiO3) look promising, as they are robust and stable materials and are suitable for use in water. One of the team’s innovations was to enhance its catalytic activity by adding small quantities of the metal lanthanum, which provides additional usable electrical charges.
Catalysts also need to capture a sufficient amount of sunlight to catalyze chemical reactions. So to enable the photocatalyst to harvest more light, the scientists attached gold nanoparticles to the lanthanum-doped SrTiO3 microspheres. These gold nanoparticles are enriched with electrons and hence act as antennas, concentrating light to accelerate the catalytic reaction.
The porous structure of the microspheres results in a large surface area, as it provides more binding space for organic molecules to dock to. A single gram of the material has a surface area of about 100 square meters. “The large surface area plays a critical role in achieving a good photocatalytic activity,” comments Luo.
To demonstrate the efficiency of these catalysts, the researchers studied how they decomposed the dye rhodamine B in water. Within four hours of exposure to visible light 92 per cent of the dye was gone, which is much faster than conventional catalysts that lack gold nanoparticles.
These microparticles can also be used for water splitting, says Luo. The team showed that the microparticles with gold nanoparticles performed better in water-splitting experiments than those without, further highlighting the versatility and effectiveness of these microspheres.
A study of human–mammal interaction across the globe found animals are more prone to take to the night around humans. Jason G. Goldman reports.
Most invertebrates get smaller on average in cities, although a few very mobile species respond to urbanization by growing.
Hippo poop is piling up in Tanzania’s freshwater fisheries—which is bad news for biodiversity, and deleterious for the dinner plate. Jason G. Goldman reports.
Sea lions and fur seals in Uruguay have become a tourist attraction—but the animals have become less, not more, accepting of humans. Jason G. Goldman reports.
Building houses at the edge of the wilderness increases the danger of catastrophic blazes
Hunting regulations in Sweden prohibit killing brown bear mothers in company of cubs—causing mama bears to care for their young longer. Jason G. Goldman reports.
In a study of children interacting with toy animals Native American kids and non-Native kids imagined the animals very differently.
Lawns mowed every two weeks hosted more bees than lawns mowed every three weeks. Jason G. Goldman reports.
Humans may be driving large mammals to extinction
Non-native milkweed species planted in the southern U.S. could harm monarch butterflies as temperatures rise. Jason G. Goldman reports.
Rather than always making the same call in response to the same stimuli, North Atlantic right whales are capable of changing their vocalizations.
Native American kids and non-Native kids conceptualize wild animals differently
Ravens produce different types of calls depending on their age and sex—which might help ravens size up other individuals. Jason G. Goldman reports.
Lemurs consume far less fruit than other primates
For blue tits, timing can be a factor in whether they remain together or part ways
Areas of Kenya without large wildlife saw tick populations rise as much as 370 percent—meaning more danger to humans. Jason G. Goldman reports.
Having lions and giraffes together in protected areas means far lower survival rates for juvenile giraffes. Jason Goldman reports.
The apes may focus on dominance rather than morality when it comes to interpreting social behaviors
It takes months for members of a mongoose breeding society to trust newcomers with important tasks like watching for predators. Jason G. Goldman reports.
Leopard geckos compensate for the lost appendage’s movement | <urn:uuid:cb42fece-4654-477b-bf8a-aa1d0e5be749> | 3 | 489 | Content Listing | Science & Tech. | 43.731817 | 95,624,639 |
Introduction: On the Origin of the Terms Fluorescence, Phosphorescence, and Luminescence
As an introduction to the historical part of this book, it is worth recalling the origin of the terms fluorescence, phosphorescence and luminescence. When and how these phenomena were discovered are of course the basic questions, but the first step of a historical research is the understanding of the etymology of a word invented for designating a phenomenon. Fluorescence is a beautiful example of a term whose etymology is not obvious at all; in particular, it is strange, at first sight, that it contains fluor, which is not a fluorescent element! In contrast, the etymologies of phosphorescence and luminescence are straightforward: both these terms contain light (φως in Greek and lumen in Latin, respectively). Let us examine first these two terms.
Keywords: Zinc Sulfide · Calcium Fluoride · Barium Sulfate · Quinine Sulfate · Calcium Sulfide
- 1. Harvey EN (1957) History of Luminescence. The American Philosophical Society, Philadelphia
- 2. Brewster D (1833) Trans Roy Soc Edinburgh 12:538–545
- 3. Herschel JFW (1845) Phil Trans 143–145; (1845) Phil Trans 147–153
- 5. Edmond Becquerel is the father of Henri Becquerel, who discovered radioactivity. Edmond Becquerel invented the famous phosphoroscope which bears his name
- 6. Cosmos (1854) 3:509–510
- 7. Becquerel E (1842) Annales de Chimie et Physique (3) 9:257–322
- 9. Macquer PJ (1779) Dictionnaire de Chymie, p 462
- 10. Macquer PJ (1779) Dictionnaire de Chymie, p 464
- 11. Robbins M (1994) Fluorescence: Gems and minerals under ultraviolet light. Geoscience Press
- 12. Perrin F (1929) Doctoral thesis, Paris; Annales de Physique 12:2252–2254
- 13. Nickel B (1996) Pioneers in photochemistry: From the Perrin diagram to the Jablonski diagram. EPA Newsletter 58:9–38
Elementary Differential Geometry
by Gilbert Weinstein
Publisher: UAB 2009
Number of pages: 62
These notes are for a beginning graduate level course in differential geometry. It is assumed that this is the students' first course in the subject. Thus the choice of subjects and presentation has been made to facilitate as much as possible a concrete picture.
by Edward Nelson - Princeton Univ Pr
The lecture notes for the first part of a one-term course on differential geometry given at Princeton in the spring of 1967. They are an expository account of the formal algebraic aspects of tensor analysis using both modern and classical notations.
by Gabriel Lugo - University of North Carolina at Wilmington
These notes were developed as a supplement to a course on Differential Geometry at the advanced undergraduate level, which the author has taught. This texts has an early introduction to differential forms and their applications to Physics.
by Richard Koch - University of Oregon
These are differential geometry course notes. From the table of contents: Preface; Curves; Surfaces; Extrinsic Theory; The Covariant Derivative; The Theorema Egregium; The Gauss-Bonnet Theorem; Riemann's Counting Argument.
by Dmitri Zaitsev - Trinity College Dublin
From the table of contents: Chapter 1. Introduction to Smooth Manifolds; Chapter 2. Basic results from Differential Topology; Chapter 3. Tangent spaces and tensor calculus; Tensors and differential forms; Chapter 4. Riemannian geometry. | <urn:uuid:38f2f806-7917-4651-b459-381410d3c38b> | 2.609375 | 334 | Content Listing | Science & Tech. | 30.090475 | 95,624,660 |
Darwin’s win-win for Global Worming? R.J.B 12th Feb., 2017.
Few realize Charles Darwin had a soft spot for the humble earthworm, and scientific data to show why. The topic of his swan-song book was Worms & Vegetable Mould (= earthworms & topsoil humus), in which he calculated that all fertile soils have passed many times through their bodies, and should continue to do so. Darwin (1881: 158) estimated from Hensen (1877: 360) that there must exist 133,000 living worms in a hectare of land (ca. 13.3 m-2) with 3 g per worm (Darwin mistakes this for 1 g) (= 40 g/m2 or 0.04 kg m-2), or 53,767 in an acre, which latter number of worms he calculated as weighing 356 pounds per acre. Such a modest estimate gives a global earthworm population of around 1.3 x 10^15, or 1.3 quadrillion, with (0.4 t ha-1 x 9.5 Gha of productive land) = 3.8 Gigatonne biomass, or about ten times all humanity.
As a cross-check of accuracy, Dr Ken Lee (1985: tab. 7) has summary with average of 273 m-2 and 63 g/m2 (= 0.63 t ha-1) for a range of biome habitats (0.63 x 9.5 Gha) = ~6.0 Gt total earthworm biomass. If global non-ice/sand topsoil land were taken as 12.1 Gha (Ref), this gives (0.63 x 12.1 Gha) = 7.6 Gt or just about double the result from Darwin’s figures.
Interestingly, although often thought of as good fishing bait, the earthworm fresh weight of 3.8-7.6 Gt is double or quadruple the “wet weight” of global fish stocks confidently calculated as between 0.89-2.05 Gt (Ref1, Ref2) of which just 0.15 Gt are total annual combined fish catch & aquaculture (Ref3), the highest on record so far.
Since Dr Ken Lee’s (1985: 33) reference text on Earthworm Ecology has their moisture content at 70%, thus 30% dry, a total is of between 1.14 to 2.28 Gt dry biomass.
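The running totals in the preceding paragraphs can be reproduced in a few lines; all inputs (13.3 worms per m², 3 g per worm, 0.63 t/ha from Lee, 9.5 or 12.1 Gha of land, 70% moisture) are the figures quoted above. A back-of-envelope sketch:

```python
LAND_GHA  = 9.5    # productive (non-ice/desert) land, gigahectares
LAND_ALT  = 12.1   # alternative non-ice/sand topsoil area, gigahectares
M2_PER_HA = 1e4

# Darwin (via Hensen): 13.3 worms per m^2 at 3 g each
worms_per_m2, g_per_worm = 13.3, 3.0
global_count = worms_per_m2 * LAND_GHA * 1e9 * M2_PER_HA  # ~1.3e15 worms
darwin_gt = worms_per_m2 * g_per_worm / 100.0 * LAND_GHA  # g/m2 -> t/ha, x Gha -> Gt
# 40 g/m^2 = 0.4 t/ha, so 0.4 x 9.5 Gha = ~3.8 Gt fresh biomass

# Lee (1985): 63 g/m^2 = 0.63 t/ha
lee_gt = 0.63 * LAND_GHA                                  # ~6.0 Gt

# 70% water content -> 30% dry matter, over both land-area assumptions
dry_low, dry_high = 0.30 * darwin_gt, 0.30 * (0.63 * LAND_ALT)
print(global_count, darwin_gt, lee_gt, dry_low, dry_high)
```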
Richly organic topsoil populations of earthworms are much higher – averaging 500 worms m-2 and up to 400 gm-2 – such that, for the 7 billion of us, each person alive today has support of 7 million earthworms. There are 7,000 currently described earthworm species (see Blakemore, 2015: 542; Ref) now working day & night, rain or shine, relentlessly recycling organic matter for healthy plants that sustain all Life on Earth from the Sun’s energy.
Darwin (1881: 173) also estimated that earthworms annually eject in the order of 15 tons (his actual mean value was 14.09 tons) of surface castings per acre of pasture/commons land (= 33.6 t ha-1 yr-1 x 9.5 Gha of non-ice/desert land = 319 Gt yr-1 globally, cf. FAO)*. Although this too is surely an underestimate as earthworms can process their own body weight of humus per day and often they cast below ground – sometimes as deep as 15 m – as well as on the surface. Nevertheless, fertile topsoil has >24 kg m-2 of recyclable Organic Carbon (= 2,300 Gt C globally)** in living soil organisms or locked up in humic matter (+ glomalin) produced from activities when alive or their bodies when they die.
Organic farming/gardening preserves substantially higher levels of earthworms (as all good farmers & gardeners know), more water and carbon with greater yields & lower costs (financial, environmental, medical). This was clearly shown at Lady Eve Balfour’s pioneering Haughley farm 36 yrs ago and in a 1981 study to mark Darwin’s book’s Centenary (this study eventually published by Blakemore, 2000); see also Blakemore (2016a, 2016b).
Why all this is especially relevant today is twofold: Firstly, topsoils are degrading with 50-70% now lost to erosion and pollution giving us perhaps only 60 years more of harvests. Secondly, and inter-related, is that we are rapidly losing earthworms too. Populations decline under intensive agriculture – as demonstrated at Haughley – and following deforestation, with many species becoming extinct: e.g., recent review of New Zealand’s 200 native earthworms listed 20 or so (ca. 10%) as extinct or likely soon to be.
Regarding Climate Change (and Global ‘Worming’), the only way proven to remove CO2 from the air (so-called Carbon Capture and Storage or CCS) is via plant photosynthesis and, since Darwin’s earthworms reprocess leaf litter, therefore all atmospheric CO2 is recycled via worm intestines in regular 12-yr cycles (as estimated from NASA data)**.
A positive outcome of the 2015 Paris COP21 meeting is a proposal (4 per 1000 Initiative) to increase soil carbon. Yet it seems Darwin, the “Father of Evolution”, had good prescience in 1881 to already offer us this practical, safe and reasonable soil-based solution to critical trilemma of global Species Extinction, Climate Change & Food Supply.
My humble proposal, along the lines of the 4 per 1000 Initiative (that refers to the extra 0.04% organic carbon in topsoil needed to reduce global greenhouse gasses), is to aim for “4 worms per 1,000 g soil”. This would give us an entirely reasonable earthworm population of ~400 worms per m2 – as was found in Haughley's permanent pasture – or 4 million worms per hectare, thereby ensuring not only extra carbon, but also greater aeration, drainage, soil water and yields. Plus, importantly, greater biodiversity of earthworms and of whatever else depends upon them (which ultimately is most other organisms)…
Thanks CD; we salute you and our under-appreciated earthworm friends with a debt of gratitude on your birthday. The beauty is Darwin again showing with his worms that, as with his evolution, small changes in time may have massive beneficial consequences. Either way, both earthworms and evolution have brought us this far…
*FAO (2015: 103) estimates median soil formation globally now as just 0.15 t ha-1 yr-1.
**NASA has carbon in SOIL (~2,300 Gt) = all in plants (550), air (800) + oceans (1,000) combined! As >60 Gt atmospheric C is reprocessed via soil per yr (800 / 60) = ~12-13 yrs.
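The two footnoted calculations, as a quick check using the NASA figures quoted in the text:

```python
soil_c, plants_c, air_c, ocean_c = 2300, 550, 800, 1000  # Gt C, as quoted
flux_through_soil = 60                                   # Gt C/yr (">60" in text)

# Soil holds roughly as much carbon as plants + atmosphere + oceans combined
combined = plants_c + air_c + ocean_c                    # 2350 Gt vs 2300 Gt

# Mean recycling time of atmospheric CO2 with respect to soil processing
residence_yr = air_c / flux_through_soil                 # ~13 years
print(combined, round(residence_yr, 1))
```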
Details of data presented are in latest Cosmopolitan Earthworms CD (Blakemore, 2016) and on VermEcology websites: Annelida (http://www.annelida.net/earthworm/) & veop (https://veop.wordpress.com/). Date: 12th Feb., 2017. Email firstname.lastname@example.org.
Submission of this, and similar articles, to BBC, Guardian and Nature got nary a reply…
[Update 10th May, 2017. I stand corrected, Nature did graciously accept my small submission – “Soil: Restore earthworms to rebuild topsoil” – https://www.nature.com/nature/journal/v545/n7652/full/545030b.html. So, for the worms, things are looking up!].
[Update 27th May, 2018. A new publication recalculates global biomass with some figures compliant with above estimates, and others, as is to be expected, considerably off: http://www.pnas.org/content/early/2018/05/15/1711842115 ; http://www.pnas.org/content/pnas/suppl/2018/05/16/1711842115.DCSupplemental/pnas.1711842115.sapp.pdf ].
Photos below are of The Mount, Shrewsbury, Shropshire, UK; the stone steps are for Dr Robert Darwin, who was portly, to mount his horse. And I believe Chaz’s birth room is actually the 2nd window of the 1st floor, or 2nd window of 2nd floor in Japanese.
© R.J. Blakemore May, 2017; May, 2018. | <urn:uuid:a5f4e3de-94a6-4750-9c09-4b855a3a2e16> | 3.046875 | 1,848 | Personal Blog | Science & Tech. | 74.893956 | 95,624,665 |
The dictionary defines plasma as an "almost completely ionized gas, containing equal numbers of free electrons and positive ions... formed by heating low-pressure gases until the atoms have sufficient energy to ionize each other." Plasma has been used in the restoration of metal objects, and an article in Restauro 1/1996 reports initial results of treatment of paper, focussing on eradication of mold spores and stains. The authors suggest that it might be useful for deacidifying books as well, by using alkaline plasma (nitrogen, ammonia) and strengthening them by a polymerization process.
One of the authors of the Restauro article, Manfred Anders from Stuttgart University, was invited to meet with six other scientists in Delft on November 30, 1995, for the following purposes:
Ronald van Deventer, project leader from the TNO Center for Paper and Board Research, Delft, reviewed current strengthening processes that coat the paper and concluded that they are not suitable for strengthening brittle paper. John Havermans, project manager of conservation research at the TNO Center, surveyed plasma treatments and suggested that they be used to solve the brittle paper problem. Svetlana Uspenskaya, head of the research group in the Conservation Department at BAN, St. Petersburg, described her research using plasma to decrease the water absorbency of paper. Manfred Anders described the positive and negative effects of plasma on paper.
TUNICATA : STOLIDOBRANCHIA : Styelidae (SEA SQUIRTS)
Description: A smooth rounded red sea squirt found in densely aggregated colonies. The small individuals have a common basal test, and increase in number by budding. The siphons have a six lobed appearance on partial contraction, which is distinctive. 5-8mm in diameter.
Habitat: Characteristic of the infralittoral, occurring on rock, kelp holdfasts and algae.
Distribution: Apparently a southern species in the British Isles, quite common on the south coast of England as far east as Dorset, and occurring on the west coast of Ireland.
Similar Species: Easily and frequently mistaken for Dendrodoa grossularia, leading to confusion over distribution.
Key Identification Features:
Distribution Map from NBN: Interactive map : National Biodiversity Network mapping facility, data for UK.
WoRMS: Species record : World Register of Marine Species.
Picton, B.E. & Morrow, C.C. (2016). Distomus variolosus Gaertner in Pallas, 1774. [In] Encyclopedia of Marine Life of Britain and Ireland. http://www.habitas.org.uk/marinelife/species.asp?item=ZD1990 Accessed on 2018-07-18
Copyright © National Museums of Northern Ireland, 2002-2015
Learn Kotlin programming with the best free online courses and tutorials for beginners and advanced learners aggregated from Udemy, Edx, Skillshare, Coursera, Udacity, Treehouse, YouTube and other MOOCs .
Learn basics of Kotin for Android
In this course we will learn the basics of Android programming - Free Course
Learn the fundamentals of the Kotlin programming language from Kotlin experts at Google.
how to use Kotlin for Android Development while creating a practical real world application
This Android Kotlin course describes the usage of intents to communicate between Android components.
In this course you'll master the basics of Android applications development with Kotlin.
The Most Comprehensive Kotlin Course! Packed With Everything You Need To Know To Become A Professional Kotlin Developer!
Nitrogen (N) generally limits plant growth and controls biosphere responses to climate change. We introduce a new mathematical model of plant N acquisition, called Fixation and Uptake of Nitrogen (FUN), based on active and passive soil N uptake, leaf N retranslocation, and biological N fixation. This model is unified under the theoretical framework of carbon (C) cost economics, or resource optimization. FUN specifies C allocated to N acquisition as well as remaining C for growth, or N-limitation to growth. We test the model with data from a wide range of sites (observed versus predicted N uptake r2 is 0.89, and RMSE is 0.003 kg N m−2·yr−1). Four model tests are performed: (1) fixers versus nonfixers under primary succession; (2) response to N fertilization; (3) response to CO2 fertilization; and (4) changes in vegetation C from potential soil N trajectories for five DGVMs (HYLAND, LPJ, ORCHIDEE, SDGVM, and TRIFFID) under four IPCC scenarios. Nonfixers surpass the productivity of fixers after ∼150–180 years in this scenario. FUN replicates the N uptake response in the experimental N fertilization from a modeled N fertilization. However, FUN cannot replicate the N uptake response in the experimental CO2 fertilization from a modeled CO2 fertilization; nonetheless, the correct response is obtained when differences in root biomass are included. Finally, N-limitation decreases biomass by 50 Pg C on average globally for the DGVMs. We propose this model as being suitable for inclusion in the new generation of Earth system models that aim to describe the global N cycle.
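The abstract quotes goodness-of-fit statistics for observed versus predicted N uptake (r² = 0.89, RMSE = 0.003 kg N m⁻²·yr⁻¹). As a hedged sketch of how such statistics are conventionally computed — this is not the paper's code, and the class name, method names, and illustration arrays below are invented for this example:

```java
// Sketch: conventional RMSE and r^2 between observed and predicted values.
// The numeric arrays in main() are made-up illustration data, not the paper's.
public class FitStats {
    // Root-mean-square error of predictions against observations.
    public static double rmse(double[] obs, double[] pred) {
        double sum = 0.0;
        for (int i = 0; i < obs.length; i++) {
            double d = obs[i] - pred[i];
            sum += d * d;
        }
        return Math.sqrt(sum / obs.length);
    }

    // Coefficient of determination: 1 - (residual sum of squares / total sum of squares).
    public static double rSquared(double[] obs, double[] pred) {
        double mean = 0.0;
        for (double o : obs) mean += o;
        mean /= obs.length;
        double ssRes = 0.0, ssTot = 0.0;
        for (int i = 0; i < obs.length; i++) {
            ssRes += (obs[i] - pred[i]) * (obs[i] - pred[i]);
            ssTot += (obs[i] - mean) * (obs[i] - mean);
        }
        return 1.0 - ssRes / ssTot;
    }

    public static void main(String[] args) {
        double[] obs  = {0.010, 0.020, 0.030, 0.040};  // e.g. kg N m^-2 yr^-1
        double[] pred = {0.012, 0.018, 0.031, 0.041};
        System.out.printf("RMSE=%.5f r2=%.3f%n", rmse(obs, pred), rSquared(obs, pred));
    }
}
```

A perfect model gives RMSE of zero and r² of one, which makes a convenient sanity check.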
A common refrain by climate sceptics that surface temperatures have not warmed over the past 17 years, implying climate models predicting otherwise are unreliable, has been refuted by new research led by James Risbey, a senior CSIRO researcher.
Setting aside the fact that the equal-hottest years on record, 2005 and 2010, fall well within the past 17 years, Dr Risbey and fellow researchers examined claims, including by some members of the Intergovernmental Panel on Climate Change, that models overestimated global warming.
In a study published in Nature Climate Change on Monday, the team found that models actually generate good estimates of recent and past trends provided they also took into account natural variability, particularly the key El Nino-La Nina phases in the Pacific.
“You’re always going to get periods when the warming slows down or speeds up relative to the mean rate because we have these strong natural cycles,” Dr Risbey said.
In roughly 30-year cycles, the Pacific alternates between periods of more frequent El Ninos - when the ocean gives back heat to the atmosphere - to La Ninas, when it acts as a massive heat sink, setting in train relatively cool periods for surface temperatures.
By selecting climate models in phase with natural variability, the research found that model trends have been consistent with observed trends, even during the recent “slowdown” period for warming, Dr Risbey said.
“The climate is simply variable on short time scales but that variability is superimposed on an unmistakable long-term warming trend,” he said.
While sceptics have lately relied on a naturally cool phase of the global cycle to fan doubts about climate change, the fact temperature records continue to fall even during a La-Nina dominated period is notable, Dr Risbey said.
The temperature forcing from the build-up of greenhouse gases in the atmosphere “is beginning to overwhelm the natural variability on even shorter decadal time scales”, he said.
“We will always set more heat records during an El Nino [phase] ... than we will during the opposite but we’re still setting records even during the cold phase because we’re still warming,” Dr Risbey said.
While climatologists are wary about picking when the Pacific will switch back to an El-Nino dominated phase, the world may get an inkling of what is in store if an El Nino event is confirmed later this year.
The Bureau of Meteorology last week maintained its estimate of a 70 per cent chance of an El Nino this year. It noted, though, that warming sea-surface temperatures in the central and eastern Pacific had yet to trigger the consistent reinforcing atmospheric patterns such as a stalling or reversal in the easterly trade winds.
Even without the threshold being reached, however, El-Nino-like conditions had already contributed to the warmest May and June on record and equal-warmest April. Australia too has continued to see well-above average temperatures, with last year and the 12 months to June 30 setting records for warmth.
Data out this week from the US may confirm early readings that June's sea-surface temperatures were the biggest departure from long-term averages for any month.
Latest revision as of 11:10, 19 July 2018
Samuel's Article Summary Page
Article 1 Why This Zoo Lion Killed a Lost Water Bird
In this article, a lioness kills a heron that came for water, even though the lioness was raised in captivity.
Cats, big and small, are instinctual hunters. Without practicing or being instructed, they know how to move silently, stalk, and pounce on prey. Even domesticated cats hunt small animals like birds and rodents with these keen skills. This is why the lioness killed the heron: when the heron entered the lion's exhibit, it triggered the lioness's instinct to kill. Even captive or domestic cats are instinctual hunters.
Article 2 Tracking Animal Migrations
In this article I explain how tracking animals during migration helps the animals.
Tracking animals during migration shows us how individuals and populations move within local areas and migrate across oceans and continents. This information is being used to address environmental challenges: knowing an animal's migration path and environment helps us respond to climate and land-use changes, biodiversity loss, invasive species, and the spread of infectious diseases. So, tracking animal migration paths helps us keep the animals safe and address the challenges that await them during their migrations.
Article3 First snake found in amber is a baby from the age of the dinosaurs
In this article a baby snake is found in amber, and it is the first snake ever found from 100 million years ago.
Around 100 million years ago, a baby snake hatched on a tropical island in the Indian Ocean. The tiny snake, just 10 centimetres long, got stuck in resin oozing from a tree. That chunk of resin remained buried as the island drifted north and became part of what is now Myanmar. When it was finally dug up a few years ago, the skeletal remains were misidentified as a centipede and the amber sold to a private collector. But it has now been studied by an international team, who have scanned the amber to build up a 3D image of the skeleton. "The baby is unquestionably a snake," says team member Michael Caldwell of the University of Alberta, Canada, a palaeontologist who specialises in studying ancient snakes and lizards. This would make it the first ever snake found in amber.
Authors: George Rajna
New research gives insight into a recent experiment that was able to manipulate an unprecedented number of atoms through a quantum simulator. This new theory could provide another step on the path to creating the elusive quantum computers. Chinese scientists Xianmin Jin and his colleagues from Shanghai Jiao Tong University have successfully fabricated the largest-scaled quantum chip and demonstrated the first two-dimensional quantum walks of single photons in real spatial space, which may provide a powerful platform to boost analog quantum computing for quantum supremacy. To address this technology gap, a team of engineers from the National University of Singapore (NUS) has developed an innovative microchip, named BATLESS, that can continue to operate even when the battery runs out of energy. Stanford researchers have developed a water-based battery that could provide a cheap way to store wind or solar energy generated when the sun is shining and wind is blowing so it can be fed back into the electric grid and be redistributed when demand is high. Researchers at AMOLF and the University of Texas have circumvented this problem with a vibrating glass ring that interacts with light. They thus created a microscale circulator that directionally routes light on an optical chip without using magnets.
Comments: 56 Pages.
[v1] 2018-05-14 11:00:15
A finally block executes when the try block exits, even after an unexpected exception has occurred. The runtime always executes the statements in the finally block, regardless of what happens in the try block. This makes it the right place for cleanup that recovers resources and prevents resource leaks; for example, code that closes a file belongs in the finally block.
Understand with Example.
In this tutorial we describe code that shows how to handle exceptions with a finally block. The finally block runs when the try block exits, including when an unexpected exception occurs. The program described below attempts to open the file "girish.txt"; if the file does not exist, a FileNotFoundException is thrown, and the finally block still executes.
1) FileInputStream - used to read raw bytes from a file, which can then be decoded into characters.
Inside the main method, a FileInputStream object reads bytes from a file named "girish.txt". If there is no file named "girish.txt", the catch block catches the FileNotFoundException, and the finally block then runs and prints its message.
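The tutorial's original listing was not preserved in this copy, so the following is a minimal, hypothetical reconstruction of the behavior described (the file name "girish.txt" comes from the tutorial text; the class and method names are invented for this sketch):

```java
// Sketch of try/catch/finally around a FileInputStream.
// "girish.txt" is the file name used by the tutorial and likely does not exist,
// which is exactly the case the tutorial wants to demonstrate.
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.IOException;

public class FinallyDemo {
    public static String readFirstByte(String path) {
        FileInputStream in = null;
        try {
            in = new FileInputStream(path);   // throws FileNotFoundException if missing
            return "read: " + in.read();      // first byte of the file, as an int
        } catch (FileNotFoundException e) {
            return "not found: " + path;
        } catch (IOException e) {
            return "io error: " + e.getMessage();
        } finally {
            // Runs whether or not an exception was thrown: the right place
            // to release resources such as open streams.
            System.out.println("finally block executed");
            if (in != null) {
                try { in.close(); } catch (IOException ignored) {}
            }
        }
    }

    public static void main(String[] args) {
        System.out.println(readFirstByte("girish.txt"));
    }
}
```

Note that the finally block runs even though the catch block has already produced the method's return value; this is the behavior the tutorial is illustrating.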
Output of the program
Applying more than 50 volts across a droplet of weak hydrochloric acid causes the drop to rise into the air above a glowing plasma layer
Researchers in France have discovered a new way to levitate liquid droplets, which surprisingly also creates a mini light show, with the droplet sparking as it floats above a faint blue glowing gap.
Described this week in the journal Applied Physics Letters, from AIP Publishing, the work may offer an inexpensive new way to generate a freely movable microplasma, as well as yield insights into fundamental physics questions.
The floating effect is similar to Leidenfrost levitation -- in which droplets dance on a hot vapor cushion. But by creating the vapor with a strong jolt of electricity instead of heat, the researchers found they could ionize the gas into a plasma that glowed a soft blue light.
"This method is probably an easy and original way to make a plasma," said Cedric Poulain, a physicist at the French Alternative Energies and Atomic Energy Commission. Poulain speculates that the deformability of a liquid drop would let the researchers rig up a device to move the plasma along a surface, but he admits that such applications were far from his and his colleagues' minds when they first conceived the experiment.
At first, the researchers wanted to explore the limits of the analogy between the boiling phenomenon and water electrolysis, which is the breakup of water into hydrogen and oxygen gases by an electric current.
As an example of boiling behavior, Poulain described the case of a liquid droplet at the surface of a hot pan. If the pan temperature is just above 100 degrees Celsius, the drop spreads and water vapor bubbles grow at the pan surface. However, if the pan is very hot (more than 280 degrees Celsius), a cushion of vapor is formed between the drop and the pan, levitating the drop and preventing contact between the liquid water and the pan, a phenomenon called the Leidenfrost effect. "This is a classical ‘grandmother’ trick to test the temperature of a pan," Poulain said.
The team wondered if a similar transition exists in the case of water electrolysis. The analogy interested the authors, because they study an event called "boiling crisis" in nuclear power plant steam generators. If the core of a nuclear reactor gets too hot, bubbles in the cooling water can suddenly coalesce to form a vapor film that limits further heat transfer and leads to a dangerous increase in temperature.
A Cushion of Vapor from a Jolt of Electricity
In their lab, Poulain and his colleagues devised a set-up to run electricity through conductive droplets and film the droplets' behavior at high speed. They suspended a small drop of weak hydrochloric acid, which conducts electricity, above a metal plate and applied a voltage across the drop. When the drop touched the plate, electricity began to flow, and the water in the hydrochloric acid solution started to break down into hydrogen and oxygen gas.
Above 50 volts, the bottom of the droplet started sparking. It levitated, rising over the surface of the plate, and a faint blue glow emanated from the gap.
At first the researchers believed that the drop might be resting on a cushion of hydrogen gas from the breakup of water, but further analysis revealed that the gaseous cushion was in fact mostly water vaporized by energy from the electric current.
The blue light emission was unexpected and probably the most exciting feature of the experiment, the team said. Although fifty volts is a relatively low voltage, Poulain explained that the tiny gap between the droplet and the metal plate is what gives rise to the very high electric field necessary to generate a long-term and dense plasma with little energy.
Exploring the Blue Light
The researchers next plan to analyze the composition of the plasma layer. They say it appears to be a superposition of two types of plasma that is not well understood. They will also study the fast dynamics at the bottom of the drop just as the sparks begin to fly, which should yield additional insights into the plasma.
Although plasma dynamics may seem far removed from the problem of film boiling in nuclear reactors, Poulain is happy about the path the curiosity-driven research has taken the team.
"It's very exciting," he said of the team's foray into plasma levitation.
The article, "The plasma levitation of droplets," is authored by Cedric Poulain, Antoine Dugue, Antoine Durieux, Nader Sadeghi, and Jerome Duplat. It will be published in the journal Applied Physics Letters on August 11, 2015 (DOI: 10.1063/1.4926964). After that date, it can be accessed at: http://scitation.aip.org/content/aip/journal/apl/107/6/10.1063/1.4926964
The authors of this paper are affiliated with the French Alternative Energies and Atomic Energy Commission (CEA), Ecole Polytechnique, and the University of Grenoble Alpes.
ABOUT THE JOURNAL
Applied Physics Letters features concise, rapid reports on significant new findings in applied physics. The journal covers new experimental and theoretical research on applications of physics phenomena related to all branches of science, engineering, and modern technology. See: http://apl.aip.org
Jason Socrates Bardi | newswise
Computer model predicts how fracturing metallic glass releases energy at the atomic level
20.07.2018 | American Institute of Physics
What happens when we heat the atomic lattice of a magnet all of a sudden?
18.07.2018 | Forschungsverbund Berlin
A new manufacturing technique uses a process similar to newspaper printing to form smoother and more flexible metals for making ultrafast electronic devices.
The low-cost process, developed by Purdue University researchers, combines tools already used in industry for manufacturing metals on a large scale, but uses...
For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth.
To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength...
For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications.
Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar...
Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction.
A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in in the synthesis of new chemical...
Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy.
"Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy....
When you think of plastic pollution, you may think of water bottles piling up in landfills or even littering public beaches.
But offshore, swirling in ocean currents, plastic debris persists, never truly breaking down, but instead breaking into ever smaller pieces. These particles are called microplastics, and research shows they’re threatening aquatic and marine habitats. The fragments are small enough to be ingested by fish and other marine species, and they can also leach toxic chemicals into the water.
For the past two years, Maia McGuire has been working to raise public consciousness here in Florida about the issue and how it negatively affects the viability of marine life. McGuire is the Florida Sea Grant agent for the UF/IFAS Extension in Flagler and St. Johns counties. In late 2015, she was awarded an outreach and education grant through the marine debris program of the National Oceanic and Atmospheric Administration to begin the Florida Microplastics Awareness Project.
McGuire will hold a public lecture on her work with the project, especially at the local level, at Marineland on Tuesday morning as part of an ongoing series sponsored by the Guana Tolomato Matanzas (GTM) National Estuary Research Reserve.
As a marine biologist specializing in educating the public on ways to protect the health of coastal and aquatic areas, McGuire became interested in microplastics when she began hearing people talk about the Great Pacific Garbage Patch, an area in the North Pacific Ocean that has been found to be especially concentrated with plastic particles suspended just beneath the surface.
"And I’d hear people say, ‘Oh, that’s so horrible … for them … over there,’" said McGuire, "and I would have to say, ‘Hey, it’s all one ocean.’ So it was very obvious there was this disconnect."
That’s why she’s employed citizen scientists from across the state to collect data in a kind of grassroots campaign. The volunteers take samples from local bodies of water and return them to a center, where they are filtered and examined under a microscope for microplastics. If everyday folks rather than lab technicians are involved in the process, McGuire hopes, they’ll be more invested in the outcomes. Perhaps even become ambassadors in letting others know what they can do to curb the problem.
The project has trained 26 regional coordinators in coastal communities from Nassau County to Key West to lead the efforts.
At first, volunteers began by measuring bodies of salt water, including estuaries, and then expanded out to include lakes, ponds and other waterways. The goal was to show the extent of the problem in Florida’s waters.
"It’s really about making people more conscious and aware that there is plastic out there in the water, even if you can’t see it," said McGuire.
Fragments of plastic enter the water from many environmental sources, such as polyethylene microbeads used in toothpastes, poorly discarded plastic products and even little bits of fibers from clothing that are released into the wastewater stream from doing laundry.
Justina Dacey is the community engagement coordinator for the St. Johns Riverkeeper, which has worked with McGuire on the project here in Northeast Florida, training volunteers to collect and filter samples.
"It’s just this great opportunity for people to understand more about the process and that it can be tedious, but that it’s helping raise awareness," Dacey said.
McGuire said the public can play a role in reducing the amount of plastic that ends up in waterways. Just a few ways are using reusable shopping bags instead of plastic ones; using a refillable coffee mug instead of a foam one; and choosing more natural fabrics instead of synthetics such as microfiber, acrylic and polyester.
The United Nations Ocean Conference estimates the world’s oceans might contain more weight in plastics than fish by the year 2050.
If McGuire and others working to reduce these pollutants have anything to do with it, that will not be the case.
Tapah was downgraded from a tropical storm to a tropical depression and is located 239 nautical miles southeast of Iwo To.
Tapah rapidly dissipated due to the effect of strong vertical wind shear from the west and a sharp decrease in sea surface temperature.
The storm is currently tracking northwest at 10 knots and is expected to recurve to the northeast and accelerate.
Maximum wave height is currently 10 feet. The storm will be monitored for signs of regeneration.
NASA captured this image of the storm with the Moderate Resolution Imaging Spectroradiometer or MODIS instrument aboard NASA's Aqua satellite on April 29, 2014 at 11:55 p.m. EDT (3:55 UTC) in the western Pacific Ocean.
Rob Gutro | EurekAlert!
The GRAPES-3 muon telescope located at TIFR's Cosmic Ray Laboratory in Ooty recorded a burst of galactic cosmic rays of about 20 GeV, on 22 June 2015 lasting for two hours.
The burst occurred when a giant cloud of plasma ejected from the solar corona, and moving with a speed of about 2.5 million kilometers per hour struck our planet, causing a severe compression of Earth's magnetosphere from 11 to 4 times the radius of Earth.
It triggered a severe geomagnetic storm that generated aurora borealis, and radio signal blackouts in many high latitude countries.
Earth's magnetosphere extends over a radius of a million kilometers, which acts as the first line of defence, shielding us from the continuous flow of solar and galactic cosmic rays, thus protecting life on our planet from these high intensity energetic radiations. Numerical simulations performed by the GRAPES-3 collaboration on this event indicate that the Earth's magnetic shield temporarily cracked due to the occurrence of magnetic reconnection, allowing the lower energy galactic cosmic ray particles to enter our atmosphere.
Earth's magnetic field bent these particles about 180 degrees, from the day side to the night side of the Earth, where they were detected as a burst by the GRAPES-3 muon telescope around midnight on 22 June 2015. The data were analyzed and interpreted through extensive simulation over several weeks using the 1280-core computing farm that was built in-house by the GRAPES-3 team of physicists and engineers at the Cosmic Ray Laboratory in Ooty.
This work has recently been published in Physical Review Letters
Solar storms can cause major disruption to human civilization by crippling large electrical power grids, global positioning systems (GPS), satellite operations and communications.
The GRAPES-3 muon telescope, the largest and most sensitive cosmic ray monitor operating on Earth is playing a very significant role in the study of such events. This recent finding has generated widespread excitement in the international scientific community, as well as electronic and print media.
Links to articles
Research paper: P. K. Mohanty et al., Phys. Rev. Lett. 117, 171101 (2016).
APS Physics highlight: http://physics.
Pravata K Mohanty | EurekAlert!
Astronomical Spectroscopy Notes from Richard Gray, Appalachian State, and D. J. Schroeder 1974 in “Methods of Experimental Physics, Vol. 12-Part A Optical and Infrared”, p.463. See also Chapter 3 in “Stellar Photospheres” textbook. Elements Resolution Grating Equation Designs.
Astronomical SpectroscopyNotes from Richard Gray, Appalachian State, andD. J. Schroeder 1974 in “Methods of Experimental Physics, Vol. 12-Part A Optical and Infrared”, p.463.See also Chapter 3 in “Stellar Photospheres” textbook
A spectrograph should be
designed so that the slit
width is approximately
the same as the average
seeing. Otherwise you
will lose a lot of light.
Without the disperser, the spectrograph optics
would simply reimage the slit on the detector.
With the disperser, monochromatic light passing
through the spectrograph would result in a single
slit image on the detector; its position on the detector
is determined by the wavelength of the light.
This implies a spectrum is made up of overlapping
images of the slit. A wide slit lets in a lot of light,
but results in poor resolution. A narrow slit lets in
limited light, but results in better resolution.
Collimator focal length
Camera focal length
Let s = slit width, p = projected slit width (width of slit on detector).
Then, to first order:
Optimally, p should have a width equal to two pixels on the detector.Resolution element Δλ = wavelength span associated with p.
Prisms: disperse light into a spectrum
because the index of refraction is a
function of the wavelength. Usually:
n(blue) > n(red).
Diffraction gratings: work through
the interference of light. Most modern
spectrographs use diffraction gratings.
Most astronomical spectrographs use
reflection gratings instead of transmission
A combination of the two is called a
Diffraction gratings are made up of very narrow grooves which
have widths comparable to a wavelength of light. For instance,a 1200g/mm grating has spacings in which the groove width is
about 833nm. The wavelength of red light is about 650nm.
Light reflecting off these grooves will interfere. This leads
Light reflecting from grooves A and
B will interfere constructively if the
difference in path length is an
integer number of wavelengths.
The path length difference will
be a + b, where a = d sinα and
b = d sinβ. Thus, the two
reflected rays will interfere
a grating of groove spacing d at an angle α with the grating
Normal, it will be diffracted at an angle β from the grating.
If m, d and α are kept constant, λ is clearly a function of β.
Thus, we have dispersion.
produce multiple spectra. If m = 0, we have the zeroth order,
undispersed image of the slit. If m = 1, we have two first order
spectra on either side of the m = 0 image, etc.
illustrated is a
These orders will overlap, which produces problems for grating
If, for instance, you want to observe at 8000Å in 1st order,
you will have to deal with the 4000Å light in the 2nd order.
This is done either with blocking filters or with cross dispersion.
Massey & Hanson 2011arXiv 1010.5270v2.pdf
Meaning that a wavelength of λm in the mth order overlaps with a
wavelength of λm+1 in the m+1th order.
Dispersion is the degree to which the spectrum is spread out.
To get high resolution, it is really necessary to use a diffraction
grating that has high dispersion. Dispersion (dβ/dλ) is given by:
Thus, to get high resolution, three strategies are possible:
long camera focal length (f3), high order (m), or small
grating spacing (d). The last has some limitations. The
first two lead to the two basic designs for high-resolution
spectrographs: coudé (long f3) and echelle (high m).
Littrow (not commonly used in
Ebert: used in astronomy, but
p = s. Note camera = collimator.
Czerny-Turner: most versatile
design. Most commonly used
Echelle grating: coarse grating (big d) used
at high orders (m ~ 100; tan θB = 2).
Kitt Peak 4-m Echelle
Orders are separated by cross
dispersion: using a second
disperser to disperse λ in a direction perpendicular to the | <urn:uuid:39b991dd-ece0-4b1a-aa4b-009f1ea6b565> | 3.71875 | 1,066 | Knowledge Article | Science & Tech. | 61.883786 | 95,624,802 |
Green stink bug
This article includes a list of references, but its sources remain unclear because it has insufficient inline citations. (September 2015) (Learn how and when to remove this template message)
|Green stink bug|
The green stink bug or green soldier bug (Chinavia hilaris) is a stink bug belonging to the family Pentatomidae.
It was historically placed in the genus Acrosternum, but according to Dr. David Rider of North Dakota State University and other experts, this placement is inaccurate; the genus name Acrosternum should be restricted to a handful of Old World, small, pale green species that live in dry arid areas, while the larger, brighter green species that live in both the Old and New Worlds should go by the genus name Chinavia, therefore this species is called Chinavia hilaris in more recent literature (e.g., Schwertner and Grazia, 2006).
The green stink bug's color is typically bright green, with narrow yellow, orange, or reddish edges. It is a large, shield-shaped bug with an elongate, oval form and a length between 13–18 mm. It can be differentiated from the species Nezara viridula by its black outermost three antennal segments. Its anterolateral (= in front and away from the middle) pronotal margin is rather straight and not strongly arced such as in Acrosternum pennsylvanicum.
Both adults and nymphs have large stink glands on the underside of the thorax extending more than half-way to the edge of the metapleuron. They discharge large amounts of this foul-smelling liquid when disturbed. This liquid, dried and pulverized, was once used at industrial level to reinforce the smell of some acids. Now it's been replaced by artificial composites.
It is found in orchards, gardens, woodlands and crop fields throughout North America, feeding with their needle-like mouthparts on the juices of a wide variety of plants from May until the arrival of frost. Adults develop a preference for developing seeds and thus become crop pests (tomato, bean, pea, cotton, corn, soybean, eggplant). When no seeds are present, they also feed on stems and foliage, thus damaging several fruit trees, such as the apple, cherry, orange and peach trees. Moreover, it can be found in Queensland and New South Wales. Green stink bugs bored well on bonfire salvia as well as tomatoes and mulberry. Also it has been found on peaches, apricots, grapes, silver beet and french beans. The difference here is that they don't seem to be a pest for them.
Adults appear in the field early September and become plentiful in sheltered positions. Then, mating happens in early October and finally, the eggs can be found mid to late October. Nymphs appear in late October and early November. Two or three generations occur in the summer months in the field and in the laboratory at 26 °C.
They attach their keg-shaped eggs on the underside of foliage in double rows of twelve eggs or more. The green stink bug produces one generation in the North and two generations in the South. The early instar nymphs are rather brightly colored and striped, turning green when approaching adulthood. The eggs are usually laid in clusters of 14 (some clusters contain fewer eggs, with 9 being the smallest number recorded out of 77 observations). The eggs are laid either on the undersurfaces of leaves or on the stems of plants or on the flowers of salvia.
It is parasitized by the tachinid fly Trichopoda pennipes. The green stink bug uses the pheromone methyl (E,Z,Z)-2,4,6-decatrienoate in its communication system and this may be used to attract the bug away from crop fields.
|Wikimedia Commons has media related to Acrosternum.|
- McDonald, F. J. D. "LIFE CYCLE OF THE GREEN STINK BUG". Dallas. doi:10.1111/j.1440-6055.1971.tb00040.x/asset/j.1440-6055.1971.tb00040.x.pdf;jsessionid=ed764334ae5a384d989da2759d4d01b4.f02t02.
- Susan Mahr. "Trichopoda pennipes". University of Wisconsin-Madison. Retrieved 2008-03-18.
- The Pherobase. "Semiochemical - me-E2Z4Z6-decatrienoate". Pest Management Information System. Retrieved 2008-03-18.
- Chinavia hilaris BugGuide. Iowa State University Entomology. Retrieved 6 October 2010.
- Lorus and Margery Milne : National Audubon Society : Field Guide to North American Insects and Spiders; Alfred A. Knopf, New York, fourteenth printing, 1996; ISBN 0-394-50763-0
- McPherson, J.E. (1982). The Pentatomoidea (Hemiptera) of Northeastern North America. Southern Illinois University Press. ISBN 0-8093-1040-6.
- Schwertner, C. F. and J. Grazia. 2006. Descrição de seis espécies de Chinavia (Hemiptera, Pentatomidae, Pentatominae) da América do Sul. Iheringia (Zool.) 96(2): 237-248. | <urn:uuid:42514285-deec-43b9-99a1-cdb188e07ed3> | 3.109375 | 1,175 | Knowledge Article | Science & Tech. | 60.101684 | 95,624,817 |
- Published on Wednesday, 11 July 2018 09:47
Insights into its 100-year history reveal how the cosmological constant was marginalised by physicists before being reinstated by astronomers to explain the accelerated expansion of the universe
Physicists are now celebrating the 100th anniversary of the cosmological constant. On this occasion, two papers recently published in EPJ H highlight its role in modern physics and cosmology. Although the term was first introduced when the universe was thought to be static, today the cosmological constant has become the main candidate for representing the physical essence believed to be responsible for the accelerated expansion of our universe. Before becoming widely accepted, the cosmological constant was during decades the subject of many discussions about its necessity, its value and its physical essence. Today, there are still unresolved problems in understanding the deep physical nature of the phenomena associated with the cosmological constant.>
- Published on Wednesday, 16 May 2018 18:01
Revisiting the roots of a physics field known as computational statistical mechanics
It may sound like the stuff of fairy tales, but in the 1950s two numerical models initially developed as a pet project by physicists led to the birth of an entirely new field of physics: computational statistical mechanics. This story has recently appeared in a paper published in EPJ H, authored by Michel Mareschal, an Emeritus Professor of Physics at the Free University of Brussels, Belgium. The article outlines the long journey leading to the acceptance of such models - namely Monte Carlo and Molecular Dynamics simulations - as reliable evidence for describing matter. This happened at a time when the computing power required to run simulations was scarce. Today, these techniques are used by thousands of researchers to model the behaviour of materials, in contexts ranging from fusion to biological systems.
- Published on Monday, 16 April 2018 13:26
Personal recollections of an astrophysicist shed new light on the 1995 discovery on 51 Pegasi b
In recent history, a very important achievement was the discovery, in 1995, of 51 Pegasi b, the first extrasolar planet ever found around a normal star other than the Sun. In a paper published in EPJ H, Davide Cenadelli from the Aosta Valley Astronomical Observatory (Italy) interviews Michel Mayor from Geneva Observatory (Switzerland) about his personal recollections of discovering this exoplanet. They discuss how the development of better telescopes made the discovery possible. They also delve into how this discovery contributed to shaping a new community of scholars pursuing this new field of research. In closing, they reflect upon the cultural importance that the 51 Pegasi b discovery had in terms of changing our view of the cosmos.
- Published on Tuesday, 27 February 2018 17:41
The personal memories of Jayant Narlikar point to the need for restoring cosmology as the flagship of astronomy
"Cosmologists are often wrong but never in doubt”, Russian physicist Lev Landau once said . In the early days, astronomers began by observing and modelling stars in different stages of evolution and comparing their findings with theoretical predictions. Stellar modelling uses well-tested physics, with concepts such as hydrostatic equilibrium, law of gravitation, thermodynamics, nuclear reactions etc. Yet in contrast, cosmology is based on a large number of untested physical assumptions, like nonbaryonic dark matter and dark energy whose physics has no proven link with the rest of physics. In a recent paper published in EPJ H, Jayant V. Narlikar, professor emeritus at the Inter-University Centre for Astronomy and Astrophysics in Pune, India, shares his personal reminiscences of the evolution of the subject of cosmology over six decades. He tells of the increase in our confidence in the standard model of cosmology to the extent that it has become a dogma.
- Published on Tuesday, 05 December 2017 10:21
The personal recollections of a physicist involved in developing a reference model in particle physics, called the Standard Model, particularly in Italy
Understanding the Universe requires first understanding its building blocks, a field covered by particle physics. Over the years, an elegant model of particle physics, dubbed the Standard Model, has emerged as the main point of reference for describing the fundamental components of matter and their interactions. The Standard Model is not confined to particle physics; it also provides us a guide to understanding phenomena that take place in the Universe at large, down to the first moments of the Big Bang, and it sets the stage for a novel cosmic problem, namely the identification of dark matter. Placing the Standard Model in a historical context sheds valuable light on how the theory came to be. In a remarkable paper published in EPJ H, Luciano Maiani from the University of Rome and the National Institute of Nuclear Physics, Italy, shares his personal recollections with Luisa Bonolis from the Max Planck Institute for the History of Science, Berlin, Germany. During an interview recorded over several days in March 2016, Maiani outlines the role of those researchers who were instrumental in the evolution of theoretical particle physics in the years when the Standard Theory was developed.
- Published on Monday, 26 June 2017 23:06
Journey into the post-war transformation leading to the return of General Relativity within physics
Einstein’s 1915 theory of gravitation, also known as General Relativity, is now considered one of the pillars of modern physics. It contributes to our understanding of cosmology and of fundamental interactions between particles. But that was not always the case. Between the mid-1920s and the mid-1950s, General Relativity underwent a period of stagnation, during which the theory was mostly considered as a stepping-stone for a superior theory. In a special issue of EPJ H just published, historians of science and physicists actively working on General Relativity and closely related fields share their views on the process, during the post-World War II era, in particular, which saw the “Renaissance” of General Relativity, following progressive transformation of the theory into a bona fidae physics theory.
EPJ H Highlight - Historical account of how donut-shaped fusion plasmas managed to decrease adverse turbulence
- Published on Monday, 20 February 2017 16:35
Achieving fusion has become more realistic since plasma flow was identified as regulating turbulence in the 1980s
Fusion research has been dominated by the search for a suitable way of ensuring confinement as part of the research into using fusion to generate energy. In a recent paper published in EPJ H, Fritz Wagner from the Max Planck Institute for Plasma Physics in Germany, gives a historical perspective outlining how our gradual understanding of improved confinement regimes for what are referred to as toroidal fusion plasmas –- confined in a donut shape using strong magnetic fields-- have developed since the 1980s. He explains the extent to which physicists’ understanding of the mechanisms governing turbulent transport in such high-temperature plasmas has been critical in improving the advances towards harvesting fusion energy.
- Published on Wednesday, 12 October 2016 10:12
History shows experiments to be just as key as theory in gravity physics
In the 1950s and earlier, the gravity theory of Einstein's general relativity was largely a theoretical science. In a new paper published in EPJ H, Jim Peebles, a physicist and theoretical cosmologist who is currently the Albert Einstein Professor Emeritus of Science at Princeton University, New Jersey, USA, shares a historical account of how the experimental study of gravity evolved.
- Published on Tuesday, 22 March 2016 08:47
On the evolution of how we have defined time, time interval and frequency since antiquity
The earliest definitions of time and time-interval quantities were based on observed astronomical phenomena, such as apparent solar or lunar time, and as such, time as measured by clocks, and frequency, as measured by devices were derived quantities. In contrast, time is now based on the properties of atoms, making time and time intervals themselves derived quantities. Today’s definition of time uses a combination of atomic and astronomical time. However, their connection could be modified in the future to reconcile the divergence between the astronomic and atomic definitions. These are some of the observations made by Judah Levine, author of a riveting paper just published in EPJ H, which provides unprecedented insights into the nature of time and its historical evolution.
- Published on Wednesday, 02 December 2015 11:42
The Abraham Pais Prize for History of Physics is given annually to recognize outstanding scholarly achievements in the history of physics. Professor Allan Franklin, who is an Editor of EPJ H and author of the Springer book The Rise and Fall of the Fifth Force, receives the 2016 Abraham Pais Prize for History of Physics for "path-breaking historical analyses of the roles of experiment in physics and for explicating the nature of evidence and error in scientific argument". | <urn:uuid:5b4b707c-372e-4c4a-b192-a21256eef162> | 2.53125 | 1,808 | Content Listing | Science & Tech. | 15.337685 | 95,624,847 |
abort man page
abort — cause abnormal process termination
#include <stdlib.h> void abort(void);
The abort() first unblocks the SIGABRT signal, and then raises that signal for the calling process (as though raise(3) was called). This results in the abnormal termination of the process unless the SIGABRT signal is caught and the signal handler does not return (see longjmp(3)).
If the SIGABRT signal is ignored, or caught by a handler that returns, the abort() function will still terminate the process. It does this by restoring the default disposition for SIGABRT and then raising the signal for a second time.
The abort() function never returns.
For an explanation of the terms used in this section, see attributes(7).
Up until glibc 2.26, if the abort() function caused process termination, all open streams were closed and flushed (as with fclose(3)). However, in some cases this could result in deadlocks and data corruption. Therefore, starting with glibc 2.27, abort() terminates the process without flushing streams. POSIX.1 permits either possible behavior, saying that abort() "may include an attempt to effect fclose() on all open streams".
SVr4, POSIX.1-2001, POSIX.1-2008, 4.3BSD, C89, C99.
gdb(1), sigaction(2), assert(3), exit(3), longjmp(3), raise(3)
This page is part of release 4.16 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at https://www.kernel.org/doc/man-pages/.
assert(3), assert_perror(3), guestfs(3), jemalloc(3), mallopt(3), mcheck(3), pmemobj_tx_begin(3), rawshark(1), signal(7), signal-safety(7), stdio(3), Tcl_Panic(3), tshark(1), wireshark(1). | <urn:uuid:c1606afe-fdc6-4f93-b024-3e3fa9870505> | 2.78125 | 463 | Documentation | Software Dev. | 74.438333 | 95,624,849 |
Researchers at the University of Basel’s Biozentrum have discovered that Bartonella bacteria exchange genes efficiently using a domesticated virus encoded in their genome. As the findings published in «Cell Systems» demonstrate, the exchange of genetic material only takes place between bacteria with a high level of fitness. The gene transfer between pathogens prevents the accumulation of genetic defects, promotes the spread of beneficial gene mutations and thus keeps the bacteria fit.
Bartonella are bacteria that can cause diverse infectious diseases in man, such as cat-scratch disease. In order to prevent the accumulation of mutations during the infection cycle, pathogens require efficient DNA repair mechanisms. Therefore, the sharing of intact genes within bacterial populations plays an important role, as errors in the gene pool can be eliminated and the genetic material kept fresh.
In collaboration with the ETH Zurich Prof. Christoph Dehio’s team at the Biozentrum, University of Basel, has discovered that for the efficient exchange of genes Bartonella use virus-like particles, so-called gene transfer agents. They also demonstrated that damaged bacteria are excluded from this gene transfer process and so it is much less likely that detrimental genetic material is spread in the population.
Gene transfer using domesticated viruses
Gene transfer agents evolved as derivatives of bacteriophages, viruses that attack bacteria. However, other than bacteriophages packing their own genome they package random pieces of the bacterial genome and transfer these to other bacteria. Using these domesticated bacteriophages, bacterial populations can efficiently exchange DNA fragments. This type of gene transfer, however, comes at a high price: The fraction of the bacterial population that produces gene transfer agents dies while releasing the particles. But what are the advantages for the surviving bacterial population that takes up the gene fragments?
As the bacterial populations grow, bacteria divide regularly. For each cell division, the genome is duplicated and passed on to the two daughter cells. Errors creep in regularly during this recurrent process. Only efficient repair mechanisms, including the exchange of flawless genetic material, can prevent the accumulation of genetic aberrations. In short: The genetic material is kept fresh.
“A further evolutionary advantage of gene transfer agents is the spread of new genetic material throughout the bacterial population, endowing it with new properties. This may also include antibiotic resistance”, explains Dehio. But this survival advantage for bacteria means, on the other hand, a threat to humans.
Only the fittest bacteria transfer genes
It has long remained unknown how the exchange of genetic material between bacteria using gene transfer agents works and how it is regulated. In their study, Dehio’s team has comprehensively identified the involved components. In particular, stress signals are key players in this process. Only bacteria in good condition exchange genetic material, whereas bacteria stressed as a result of unfavorable gene mutations do not transfer genes.
“In other words only the fittest and genetically most promising bacteria in a population divide and exchange genetic material. In genetically weakened and therefore stressed bacteria this mechanism is switched off”, says Maxime Québatte, the first author of the study.
The sharing of intact genetic material endows the fittest part of a bacterial population to persist in the host and to be passed onto new hosts successfully. This knowledge may, in turn, be used to develop new strategies to combat infections caused by the pathogen Bartonella.
Maxime Québatte, Matthias Christen, Alexander Harms, Jonas Körner, Beat Christen, and Christoph Dehio
Gene transfer agent promotes evolvability within the fittest subpopulation of a bacterial pathogen
Cell Systems (2017), doi: 10.1016/j.cels.2017.05.011
Prof. Dr. Christoph Dehio, University of Basel, Biozentrum, Tel. +41 61 207 21 40, email: email@example.com
Heike Sacher, University of Basel, Biozentrum, Communications, Tel. +41 61 207 14 49, email: firstname.lastname@example.org
Heike Sacher | Universität Basel
Scientists uncover the role of a protein in production & survival of myelin-forming cells
19.07.2018 | Advanced Science Research Center, GC/CUNY
NYSCF researchers develop novel bioengineering technique for personalized bone grafts
18.07.2018 | New York Stem Cell Foundation
A new manufacturing technique uses a process similar to newspaper printing to form smoother and more flexible metals for making ultrafast electronic devices.
The low-cost process, developed by Purdue University researchers, combines tools already used in industry for manufacturing metals on a large scale, but uses...
For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth.
To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength...
For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications.
Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar...
Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction.
A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in in the synthesis of new chemical...
Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy.
"Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy....
13.07.2018 | Event News
12.07.2018 | Event News
03.07.2018 | Event News
20.07.2018 | Power and Electrical Engineering
20.07.2018 | Information Technology
20.07.2018 | Materials Sciences | <urn:uuid:80b9b97b-4312-4f7d-9a4c-78eb4e9bc047> | 3.890625 | 1,445 | Content Listing | Science & Tech. | 30.61169 | 95,624,862 |
An environmentally concerned graduate student uses the more earth-friendly anti-freeze, propylene glycol (C3H8O2, density = 1.038). The final composition in her car's radiator is 30% (volume/volume) ethylene glycol (C2H6O2, density 1.114 g/ml) and 20% (V/V) propylene glycol in water. At what temperature will her engine overheat? (Kb for water =0.52).© BrainMass Inc. brainmass.com July 22, 2018, 6:39 pm ad1c9bdddf
Calculate the molality of the solution:
Remember, for coligative properties the type of solute does not matter. Therefore, the number of moles solute is simply the total number of moles of all coolants. Since the volume of the solution does not matter because we are dealing with percentages, we will pick 1L.
The volume of propylene glycol is ...
Boiling point of a solution given density and molecular weight is determined in the solution. | <urn:uuid:220cc0ea-0567-4541-ad58-2451c4db46ae> | 3.3125 | 230 | Tutorial | Science & Tech. | 54.245185 | 95,624,874 |
Following a decade of intensive research into graphene and two-dimensional materials a new semiconductor material shows potential for the future of super-fast electronics. The new semiconductor named Indium Selenide (InSe) is only a few atoms thick, similarly to graphene.
Two-dimensional (2D) nanomaterials have been made by dissolving layered materials in liquids, according to new research. The liquids can be used to apply the 2D nanomaterials over large areas and at low costs, enabling a variety of important future applications.
Scientists analytically studied the optical absorption efficiency of a TiN nanoparticle and found that it has a broad and strong absorption peak thanks to lossy plasmonic resonances. Surprisingly, the sunlight absorption efficiency of a TiN nanoparticle outperforms that of a carbon nanoparticle and a gold nanoparticle.
Using new photonics technology, European scientists are developing a multi-gas detector that can spot dozens of harmful emissions with a single sensor in milliseconds, delivering a breakthrough for the prevention climate change.
By carefully incorporating strands of custom DNA into different layers of flexible films, researchers can force those films to bend, curl and even flip over by introducing the right DNA cue. They could also reverse these changes by way of different DNA cues.
A team of mechanical engineers has successfully used acoustic waves to move fluids through small channels at the nanoscale. The breakthrough is a first step toward the manufacturing of small, portable devices that could be used for drug discovery and microrobotics applications. | <urn:uuid:74ea51be-e432-4447-812a-88c962987164> | 3.15625 | 314 | Content Listing | Science & Tech. | 18.718836 | 95,624,876 |
http://www.VenturaCountyStar.com (5 August 2007)
Developers Trying to Harness Earth's Energy in New Way
The Power of Wind
By Allison Bruce
Gene Kelley, above, shows off his WindWing, which he and fellow W2 Energy partners believe can replace current propeller-driven wind turbines. According to Kelley, the WindWing can produce much more energy at a fraction of the cost. Photo by Karen Quincy Loberg
Website : http://w2energycorp.com
W2 Energy Development Corp.
402 E. Gutierrez Street
Santa Barbara, CA 93101
Phone: (805) 879-5215
A Santa Barbara company may have a simple solution for wind energy -- all from taking a look at a different part of the plane.
While most wind turbines these days are built as propellers, Gene Kelley is convinced that wings are a better answer for capturing wind energy. Though the physics and work that has gone into his invention can get complex, the underlying concept of his "WindWing" is basic enough for a child to understand.
Anyone who has stuck a hand out of a car window has felt how the WindWing works. As the hand is tilted upward, the wind pushes the hand up. As it tilts downward, the wind pushes it down.
The resulting up-and-down motion, or oscillation, is what gives the WindWing its power.
Kelley, a flight buff with decades of aviation experience added to his work as a "human factors engineer," said the use of a wing as opposed to a propeller creates a simpler and more efficient way to capture the energy of the wind.
The wing concept could be applied to water as well. Kelley said it could apply to any flowing, fluid medium. For now, the company is focusing on the WindWing to prove the concept.
He filed a patent application in 2005 and is awaiting its approval.
"We want to build a better way of harvesting energy from whatever renewable source there is," Kelley said.
Kelley, who started a company called InnovaTech LLC for his research about 14 months ago, has a diverse background when it comes to invention. In his 40 years of experience, he's worked on projects from coal mines to aircraft carriers. He was on the research team that developed the "rumble strips" that let drivers know when they're veering off the road.
Kelley started W2 Energy Development Corp. about five months ago to develop the concept of the WindWing.
His original thought was to create inexpensive, portable power for emergencies around the world, such as tsunamis and earthquakes. He had planned to create a foundation to build and give away the technology.
But it turned out the best way to develop the technology was through a corporation, complete with opportunities for investment that would bring in money to drive the development. Kelley still hopes to build a foundation to achieve his original philanthropic goal.
The first step after starting W2 was to build a prototype that would prove that the basic concept was valid. Months of work and scavenged parts went into building the prototype that now sits in a small room in the back of a hangar at the Santa Barbara Airport.
Kelley and David Buckalew, W2 vice president for information resources, built the prototype with a lot of trial and error. The wings were constructed of sheets of metal in Kelley's backyard and parts of the device come from the mechanism used to control a car window, plus the kinds of weights you would find at your local gym. Other parts were custom made by a machine shop.
The prototype consists of four wings on one end and weights on the other. The weighted end of the bar is short -- one foot long compared to the 10 feet on the other end where the wings sit. Kelley compares the balancing of the bar on its central support pole to balancing weight on a teeter-totter.
Because the lever is built at a 10-1 ratio, the force of the wind is magnified, so that 200 pounds of lift on the wings translates into one ton of useful force. In practical application, the ratio will be determined by factors such as wind conditions and wing length.
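As a quick sanity check, the lever arithmetic quoted above (10-foot force arm, 1-foot load arm, 200 pounds of lift) can be reproduced in a few lines. The function and variable names here are ours, not the company's:

```python
# Back-of-envelope check of the lever figures above: 10 ft force arm, 1 ft
# load arm, 200 lb of aerodynamic lift on the wing end.
SHORT_TON_LB = 2000

def lever_output_lb(lift_lb, force_arm_ft, load_arm_ft):
    """Force delivered at the short (load) arm for lift applied at the long arm."""
    return lift_lb * (force_arm_ft / load_arm_ft)

out = lever_output_lb(200.0, force_arm_ft=10.0, load_arm_ft=1.0)
print(out, out / SHORT_TON_LB)  # 2000.0 lb, i.e. 1.0 short ton, matching the article
```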
When a fan is turned on in the prototype room, the 6.7 mph breeze starts to push the wing end upward. When the wings reach the top, a position sensor is tripped and the orientation of the wings tilts downward, changing their "angle of attack." The wings are then pushed downward until they reach the bottom and the wing angle changes again.
The system moves gently, with springs on the central pole that compress as the bar reaches the top and bottom of its movement and springs back to give the bar a shove in the opposite direction. That means energy isn't lost in turning the lever.
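The flip-at-the-limits behavior just described can be sketched as a toy state machine. The sensor thresholds and step size below are invented for illustration; they are not measurements from the prototype:

```python
# Toy state machine for the oscillation cycle: the wing end rises under
# positive lift until a top position sensor trips, the angle of attack is
# reversed, and it falls until a bottom sensor trips. Thresholds and step
# size are made-up example values.
def simulate_cycles(steps, top=1.0, bottom=-1.0, step=0.25):
    """Return how many times the angle of attack flips over `steps` iterations."""
    position, lift_sign, flips = 0.0, +1, 0
    for _ in range(steps):
        position += lift_sign * step
        if lift_sign > 0 and position >= top:
            lift_sign, flips = -1, flips + 1   # top sensor tripped: tilt wings down
        elif lift_sign < 0 and position <= bottom:
            lift_sign, flips = +1, flips + 1   # bottom sensor tripped: tilt wings up
    return flips

print(simulate_cycles(32))  # 4 full reversals in 32 steps with these settings
```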
The up-and-down flapping hints at one of the system's benefits over its propeller-equipped kin -- that it is less likely to kill birds.
In the actual working model, a rod and pump or generator would be attached to the weighted end. As it moves up and down with the wind, it could be used to compress air, pump water or generate electricity.
"It's going to be a very, very successful technology," said Ron Pretlac, chief operating officer. "Because of the applications, it creates a whole variety of new solutions."
Wind energy currently provides less than 1 percent of the electricity used in the United States, according to the American Wind Energy Association. The association supports President Bush's assertion that up to 20 percent of the nation's electricity could come from wind.
Reducing carbon dioxide emissions
If 10 percent of wind power potential from the nation's 10 windiest states was captured, it could reduce U.S. carbon dioxide emissions by almost one-third, the association reports.
Wind energy is a hot place to be these days, said Christine Real de Azua, assistant director of communications for the American Wind Energy Association.
"There's a lot of opportunity and wind energy attracts a lot of new concepts," she said. "It's all very exciting."
She said demand has grown at such a rate that wind turbines are sold out through this year and into next. Companies are investing in more manufacturing facilities to meet that demand.
Those at W2 are convinced their approach could convince more people of the merits of wind power. Most important is the matter of efficiency.
Real de Azua said the industry "manufactures and produces wind turbines that are proven and reliable and efficient in the market."
A wind turbine today will convert up to 45 percent of the energy from the wind it encounters into electricity, said John Dunlop, technical services senior engineer for the American Wind Energy Association.
Propeller turbines have increased in productivity as blades were improved, maintenance was scheduled for when the wind was slow and people were able to operate turbines for longer periods of time, he said.
Kelley doesn't disagree that propeller turbines can capture energy from the wind they come in contact with, but he's concerned about all the wind those three blades are missing.
That's because the three blades have a small surface area that makes contact with the wind at any point in time. He said it ends up equating to capturing only 5 percent of the wind energy in a column of air. The wind also tends to hit different parts of the blades at different speeds, so the propeller is constantly having to make adjustments, which takes energy.
With the WindWing, the wings put more surface area in contact with the wind. This provides more lift, which translates into more power. The WindWing is about 40 to 60 percent efficient at getting power from the wind, W2 reports.
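To put those percentages in context, the kinetic power in a moving column of air follows the standard relation P = ½ρAv³. The sketch below uses an arbitrary swept area and wind speed (our assumptions, not either machine's specifications) to compare what the 45 percent propeller figure and W2's claimed 40 to 60 percent range would each extract:

```python
# Sketch comparing the efficiency figures quoted above. The area and wind
# speed are arbitrary example values, not specifications of either machine.
AIR_DENSITY = 1.225  # kg/m^3, sea-level air

def wind_power_watts(area_m2, wind_speed_ms):
    """Kinetic power of the air flowing through an area A: P = 0.5 * rho * A * v**3."""
    return 0.5 * AIR_DENSITY * area_m2 * wind_speed_ms ** 3

area, speed = 100.0, 8.0           # 100 m^2 of swept area in an 8 m/s wind
raw = wind_power_watts(area, speed)
print(round(raw))                  # 31360 W available in the air column
print(round(raw * 0.45))           # 14112 W at the 45% propeller-turbine figure
print(round(raw * 0.40), round(raw * 0.60))  # 12544-18816 W at the claimed 40-60%
```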
Several WindWings could be stacked on a single tower, so that those at different levels could each be adjusted to get the most out of the different wind speeds. The angle of the wings can be adjusted so that there is a high angle for a light wind and a low angle for a strong wind.
The company is researching how many WindWings can be stacked on the same pole. The design is also scalable, so that the wings could range from the size of a conference room table to that of a Boeing 747 "jumbo jet."
Those at W2 said it would take far fewer towers to get the same amount of power generated by propeller turbines. A single WindWing could replace eight to 12 propellers, Kelley said.
There's also the lower cost.
Because of its simple design, the WindWing would be less expensive to make.
A cheaper option
A utility-scale propeller turbine, with blades that can reach more than 40 meters in length and generate 1.8 megawatts of power, can run more than $1.5 million. Smaller, residential- or farm-sized turbines run from a few thousand to up to $80,000, according to the American Wind Energy Association.
Though it is still in the early stages of development, Kelley said the WindWing could cost as little as one-tenth of what it costs for a propeller turbine. Of course, the location of the installation and other factors could affect the cost.
Then there are the little pluses, such as the ability to mount solar cells on the wings, letting them do double-duty in power generation.
It all comes down to a system that makes more sense for users and the environment, Kelley said.
The idea has generated international interest. Though those with the company did not want to talk specifics, they said they were pursuing possibilities for both the WindWing and its close companion, the WaterWing, which could be used to pump water out of aqueducts or generate power from pipelines.

Kelley said W2 is trying to raise from $5 million to $7 million in its first round of financing as it moves from the proof-of-concept prototype to a new WindWing prototype that is larger and more functional with more controls.
The company also is looking for land to lease where it can construct a cluster of three different sizes. That could be accomplished in the next six months. Once constructed, the site could act as a live demonstration of the technology. That could help W2 move into the market.
The company is looking at all the different entry points to the market, including the farmer who needs to water crops, the household user and the industrial complex in need of power. W2 also wants to build political support.
But when the people behind W2 stand around the prototype as it waves up and down in the hangar, the discussion isn't about money or political will.
"Can you imagine taking this to Africa and using this to drill for water?" Buckalew asks.
Pretlac notes it could be used to drill for water, pump and purify the water and then distribute it.
"That's a dream of ours, and you can see it works," he said.
US Patent Application # 20070040389
Adaptable Flow-Driven Energy Capture System
February 22, 2007
Gene R. Kelley
A scalable fluid-driven assembly that is uniquely configured to oscillate in the presence of fluid-flow. The assembly includes an adjustable electromechanically controlled fluidfoil. The fluidfoil is controlled to permit a consistently optimum angle of attack into the prevailing flow and to remain parallel with respect thereto.
Correspondence Name and Address:
Kenneth L. Stein
402 East Gutierrez
U.S. Current Class: 290/55
U.S. Class at Publication: 290/055
Intern'l Class: F03D 9/00 20060101 F03D009/00
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention generally relates to capturing and storing the kinetic energy of a flowing fluid. More particularly, the present invention relates to the capture and storage of wind power and hydropower.
2. Description of Related Art
Windmills and wind turbines are generally well known in the art. Windmills traditionally include a plurality of blades or vanes connected to a rotatable shaft. Wind (or other fluids) act upon the blades to create an aerodynamic or hydrodynamic reaction upon the blades causing the shaft and blades to rotate about the axis of the shaft. Windmills have traditionally been employed across the world to pump water, grind grain and crush stone. Additionally, windmills have been employed in systems that convert kinetic energy, namely wind, into electrical energy. The rotation of the blades of a windmill drives a generator, which in turn produces an electric current. For applications that require linear actuation, additional mechanical systems are required to translate the rotation of the blades into such linear motion, further complicating a windmill's operation.
Wind turbines are designed to work between certain wind speeds. The lower speed, called the `cut in speed`, is generally 4-5 m/s, as there is too little energy below this speed to overcome system losses. The `cut out speed` is determined by the ability of the particular machine to withstand high wind. The `rated speed` is the wind speed at which the particular machine achieves its maximum rated output. Above this speed, it may have mechanisms that maintain the output at a constant value with increasing wind speed.
Windmills and wind turbines require frequent repair and maintenance. Blades can be damaged by high winds and the complex mechanisms that have been devised to accommodate for such must be frequently inspected and maintained. Additionally, while windmills and wind turbines present emission-free options to oil- and gas-fueled power plants, they have been implicated in the annual deaths of tens of thousands of birds, some of which are endangered. Besides the loss of life, repair and maintenance are necessitated as a result of a number of such avian fatalities.
Hydropower plants operate similarly to harness the kinetic energy of flowing water to generate electricity. Hydropower plants generally include a dam, one or more turbines and a corresponding number of generators. Each turbine is positioned at the dam such that water flowing through the dam strikes and turns the turbine's blades. Each turbine is attached to a generator via a shaft such that rotation of the turbine turns the generator producing an electrical current. However, while wind turbines are designed to rotate orthogonal to airflow, hydropower turbines are generally designed to rotate parallel with water flow. Therefore, improvements to wind turbines are not easily translatable to hydropower turbines.
Therefore, what is needed in the art is a system for capturing and storing the kinetic energy of a flowing fluid. What is further needed is such a system that is simpler in construction and provides greater efficiencies than current wind turbines and/or hydropower turbines. Additionally, what is needed is a system that requires less maintenance and repair.
SUMMARY OF THE INVENTION
It is to the solution of the hereinabove mentioned problems to which the present invention is directed. In accordance with the present invention there is provided an adaptable flow-driven energy capture system comprising:
a support mast having a first end and a second end, the support mast affixed a surface at said first end thereof;
a balance beam having a first end and a second end and extending therebetween, said balance beam comprising a force arm side extending from said first end thereof in the direction of said second end and a load arm side extending from said second end thereof in the direction of said first end, said force arm side and said load arm side coterminating at a balance beam fulcrum, said balance beam pivotally attached the second end of the support mast at the balance beam fulcrum;
a compensatory weight attached the load arm side of the balance beam at about the second end thereof, said compensatory weight selected to equalize the weight disposed about the balance beam fulcrum;
a fluidfoil mast comprising and extending between and defining two ends thereof, said fluidfoil mast pivotally connected the force arm side of said balance beam;
at least one fluidfoil pivotally attached the fluidfoil mast, said at least one fluidfoil having a leading edge and a trailing edge cooperatively defining an edge axis extending therebetween, said fluidfoil further having an orthogonally disposed longitudinal axis;
an angle of attack positioner attached at each of and disposed between the at least one fluidfoil and the fluidfoil mast, said positioner moderating fluidfoil angle of attack with respect to fluid flow therepast;
a vane disposed anterior the at least one fluidfoil, said vane registering fluid flow forces that are not parallel with the fluidfoil edge axis;
at least one control rod having a support mast end and a fluidfoil mast end, said control rod pivotally attached thereto and extending parallel the balance beam affixed to the support mast.
It is an object of the present invention to provide a fluidfoil and associated electromechanical assembly capable of extracting energy from low to high velocity prevailing winds for the capture of such.
It is further an object of the present invention to provide for the selectable control of positive and negative lift on a fluidfoil by changing fluidfoil attitude.
It is another object of the present invention to provide an energy recapture device for conserving and reusing the energy forces required for controlling fluidfoil transitions from positive to negative lift related orientations.
The adaptable fluid-driven system of the present invention is uniquely configured to oscillate in the presence of and orthogonal to the direction of fluid flow. Each of the at least one fluidfoil is dynamically positioned to promote a constant and optimum angle of attack.
A balance beam is rotatably affixed a support mast at a fulcrum point. The balance beam comprises a force arm and a load arm with each extending from opposed ends of the balance beam and coterminating at the fulcrum. The force arm and the load arm are different lengths thereby providing the mechanical advantage that enables the oscillatory motion even in the presence of low energy fluid flow. Energy of such fluid flow is a function of the fluid density and velocity.
The support mast is affixed a surface and includes a rotational portion disposed at a point along the length thereof such that the mast may rotate at a side of the rotational portion opposed the ground.
A counterweight is attached to the load arm such that the weight at either side of the balance beam fulcrum is substantially equivalent. Given the unequal lengths of the force arm and the load arm, there is a mechanical advantage at the load arm end of the balance beam equal to the ratio of the length of the force arm to the length of the load arm.
A fluidfoil is aligned with a fluid flow by a vane attached at the load arm side of the fulcrum. Lift is created across the fluidfoil in proportion to fluid flow velocity and the characteristics of the airfoil well known to those skilled in the art of fluidfoils, such as airfoils. Control rods each extend equidistantly and parallel to the balance beam and are pivotally affixed to the support mast.
A fluidfoil mast is attached to the balance beam and control rods in a like manner and extends in parallel to the support mast. This arrangement forms a dynamic rhomboid assembly that allows the fluidfoil to maintain an optimum angle of attack into fluid flow by adjusting that angle.
An angle of attack positioning mechanism adjusts the fluidfoil's angle of attack to a constant positive or negative lift position thus enabling an up and down motion that produces lift in both directions and creating an energy harvesting capability from low velocity as well as high velocity fluid flows including wind and water flow.
Harvested energy from the fluidfoil is transferred by the lever action of the rhomboid assembly to a connector for energy transfer to one of a variety of energy storage systems for converting the energy of the linear oscillating motion to other desired forms of energy. Such systems include generators or compressors or the like.
As the fluid foil oscillates through positive and negative lift modes, the energy expended to make the transition is partially recaptured by an energy recapture device. This is a dual function device that dampens and stops the upward or downward motion of the fluidfoil as the angle of attack positioner changes the fluidfoil from a positive to a negative lift or vice versa.
The transition point at which the foil changes from positive to negative lift and vice versa requires energy to be extracted from the positive upward momentum and stored as the action is stopped and turned around. An energy accumulator in conjunction with cam actions, solenoids, air compression pistons or calibrated springs is employed for this purpose. When this action is completed and the foil reverses its lift generating capability, the stored energy is transferred back to the foil by the energy releasing function of the accumulator to aid in quickly regenerating a negative lift component in the downward cycle. The same occurs in the negative to positive lift transition.
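For a calibrated-spring accumulator of the kind listed above, the recoverable energy follows the usual elastic-storage relation E = ½kx². The stiffness and compression values below are purely illustrative assumptions, not figures from the patent:

```python
# Illustrative elastic-storage numbers for a calibrated-spring accumulator.
# The stiffness and compression are made-up example values.
def spring_energy_joules(stiffness_n_per_m, compression_m):
    """Energy stored in a linear spring: E = 0.5 * k * x**2."""
    return 0.5 * stiffness_n_per_m * compression_m ** 2

energy = spring_energy_joules(stiffness_n_per_m=5000.0, compression_m=0.2)
print(round(energy))  # 100 J banked at the end of the stroke, returned on reversal
```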
For a more complete understanding of the present invention, reference is made to the following detailed description and accompanying drawings. In the drawings, like reference characters refer to like parts, in which:
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a plan view of a preferred embodiment of an adaptable flow-driven energy capture system in accordance the present invention;
FIG. 2 is an elevated lateral perspective view of a balance beam and fluidfoil mast portions of an adaptable flow-driven energy capture system in accordance with a preferred embodiment of the present invention; and
FIG. 3 is a side perspective view of the preferred embodiment according to the present invention.
DESCRIPTION OF THE PREFERRED EMBODIMENTS
Referring to the drawings more particularly by reference numbers, FIGS. 1 and 2 show an adaptable flow-driven energy capture system 10 according to the present invention. The system 10 is uniquely configurable to oscillate in the presence of and orthogonal to the direction of fluid flow (shown as `X`). Fluid flow may include airflow, running water, or some other fluid the properties of which fall within about the properties of water and air.
The system 10 generally includes a support mast 12, a balance beam 14, a counterweight 16, an angle-of-attack positioner 17 and an at least one fluidfoil 18. The support mast 12 has a first end 20 and a second end 22, the support mast 12 is attached to a ground 24 at said first end 20 thereof. The support mast 12 may be formed from corrosion resistant strong lightweight materials. Additionally, the material should withstand the forces associated with the reciprocating movement of the balance beam 14 resulting from movement of the at least one fluidfoil 18. Aluminum, titanium, composite or some other material well known to one skilled in the art may be used.
The balance beam 14 has a first end 30 and a second end 32 and extends therebetween. The balance beam 14 is preferably formed from a strong, lightweight material that resists corrosion. Such materials are well known in the art and include aluminum, titanium, or some other material well known for such properties. The balance beam 14 comprises a force arm side 34 extending from said first end 30 thereof in the direction of said second end 32 and a load arm side 36 extending from said second end 32 thereof in the direction of said first end 30, said force arm side 34 and load arm side 36 each coterminate at a balance beam fulcrum 38. The balance beam 14 is pivotally and rotatably attached at the second end 22 of the support mast 12 at the balance beam fulcrum 38.
The force arm side 34 and the load arm side 36 are different lengths. More particularly, the force arm side 34 of the balance beam 14 is longer than the load arm side 36, providing a mechanical advantage at the load arm side 36 of the balance beam. As discussed further hereinbelow, by configuring the relative lengths of the force arm side 34 and the load arm side 36, one is able to configure the system 10 depending upon the conditions under which the system is operating.
As shown in FIG. 3, the support mast 12 houses a bearing 50 to which is affixed a support masthead 53 that extends coaxially and rotates about a longitudinal axis of the support mast 12. Force sensing means 55 sense rotational forces at the bearing 50 and preclude rotation of the support masthead 53. Such force sensors are known to those skilled in the art and as such shall not be further discussed herein. The support masthead 53 is preferably formed from materials known to those skilled in the art to function similarly to those comprising the balance beam 14 and the support mast 12.
The counterweight 16 is attached the load arm side 36 of the balance beam 14 at the second end 32. The counterweight 16 is selected to equalize the weight at either side of the fulcrum 38. The means for attaching the counterweight 16 preferably provide for removably attaching the counterweight 16 such as clamping or bolting, or some other means for removable attachment well known to those skilled in the art. The unequal lengths of the force arm side 34 and the load arm side 36 create a mechanical advantage at the load arm end of the balance beam.
A fluidfoil mast 40 has a first end 42 and a second end 44 and extends therebetween. The fluidfoil mast 40 is pivotally connected at the force arm side 30 of the balance beam 14. Each at least one fluidfoil 18 is pivotally attached the fluidfoil mast 40 at a fluidfoil pivot point 22. The fluidfoil mast 40 additionally comprises a center section 45 having two opposed ends 49, 51. End sections 53, 55 are rotatably attached one at each end 49, 51 through a motor or some other well-known means for rotating 71 one element relative another. In this fashion, each of the at least one fluid foils 18 can be rotated about the longitudinal axis of the foil support mast 40 as described hereinbelow in greater detail.
Each fluidfoil 18 comprises a leading edge 50 and a trailing edge 52 that define an edge axis (Y) extending therebetween. As most easily viewed in FIG. 2, each at least one fluidfoil 18 further defines a longitudinal axis (Z). While the system 10 will function with at least one fluidfoil 18 as disclosed, it is to be appreciated that the at least one fluidfoil 18 in the preferred embodiment comprises two substantially identical airfoils 26, 26.
Lift is created across the at least one fluidfoil 18 in proportion to fluid flow velocity and characteristics of the fluidfoil 18 well known to those skilled in the art of fluidfoils, including airfoils. Control rods 70, 72 each extend preferably equidistantly and parallel to the balance beam 14 and are pivotally affixed to the support masthead 52 and the fluidfoil mast 40 respectively via well-known pivotal mounting means. This arrangement forms a dynamic rhomboid assembly that allows the fluidfoil 18 to maintain an optimum angle of attack into fluid flow by restricting the travel of the fluidfoil mast 40 to remain perpendicular to the ground 24.
The angle of attack positioner 17 is attached at each of and disposed between the at least one fluidfoil 18 and the fluidfoil mast 40. By pivoting the at least one fluidfoil 18 about pivot point 22, the angle of attack positioner 17 moderates the at least one fluidfoil's 18 angle of attack with respect to fluid flow X therepast. As such, each of the at least one fluidfoil 18 is alternatingly positioned to maintain the angle of attack at a generally constant positive or negative lift position depending upon the direction of travel of the balance beam. Sensing means 80, such as an optical encoder, potentiometer or other well-known rotational sensors is preferably disposed about the fulcrum.
When the foil reaches the top/bottom of travel as indicated by the position indicated by the sensing means 80, the angle of attack positioner 17 is activated to reverse the angle of attack. As such, given the configuration of the preferred embodiment, the angle of attack positioner is configured to receive such control signals. Varying the angle of attack enables the reciprocating up and down motion that produces lift in both directions and facilitates energy harvesting from low velocity as well as high velocity fluid flows. Note that the terms `up` and `down` are with respect to the defined ground 24.
While fluid flow velocity is within a predetermined range, the positioner 17 maintains the fluidfoil 18 at an optimum angle of attack to provide maximum lift. When fluid flow exceeds such a range, positioner 17 alters the angle of attack, effectively reducing lift to guard against damaging the system 10. A wind meter, such as the WindMate wind meter produced by SpeedTech, Inc., located in Great Falls, Va. 22066, can be housed within one of the at least one fluidfoil 18 to measure wind speed. Such information is used to adjust the angle of attack at times when wind speeds exceed a selected threshold. Wind meters are well known to those skilled in the art and as such shall not be discussed further herein.
As the at least one fluidfoil 18 oscillates through positive and negative lift modes, the energy expended to make the transition between such is partially recaptured by an energy recapture device 61. The energy recapture device 61 dampens and stops the upward or downward motion of the at least one fluidfoil 18 as the angle of attack positioner 17 changes the fluidfoil from a positive to a negative lift or vice versa.
A transition point at which the fluidfoil 18 changes from positive to negative lift and vice versa requires energy to be extracted from the travel momentum and stored as the action is stopped and turned around. An energy accumulator in conjunction with cam actions, solenoids, air compression pistons or calibrated springs is employed for this purpose. Such devices and their function with regard to reciprocating motion are well known in the art. As the fluidfoil reaches its maximum travel, the energy recapture device 61 drives the movement of the balance beam 14 in the opposite direction from that in which it was traveling, to aid in quickly regenerating a negative lift component in the downward cycle. The same occurs in the negative to positive lift transition.
A vane 60 is attached anterior the at least one fluid foil 18. Preferably the vane is positioned at the second end 32 of the balance beam 14. The vane 60 is configured so that fluid flow incident thereto serves to apply rotational force at the force sensing means 55 at the bearing 50. The rotational force, or torque, at the bearing 50 is communicated to the means for rotating 71 at the end sections 53, 55 to rotate the at least one fluidfoil 18 in response to the sensed torque. As such, no rotation takes place at the bearing 50.
The vane 60 is attached to the balance beam 14 via well-known mounting means including brackets, or bolts and is preferably removably mounted to ease in repair or replacement if such is required. Alternatively, the vane 60 may be permanently affixed by welding or some other well-known means for permanent attachment. Additionally, the vane 60 is preferably formed from a lightweight corrosion-resistant material consistent with the other elements of the preferred embodiment of the present invention.
Harvested energy from the fluidfoil is transferred by the lever action of the rhomboid assembly to a connector 80 for energy transfer to one of a variety of energy storage systems for converting the energy of the linear oscillating motion to other desired forms of energy. Such systems include, for example electrical generators. Alternatively, the connector 80 may drive a compressor 82 for compressing air.
While certain exemplary embodiments of the present invention have been described and shown on the accompanying drawings, it is to be understood that such embodiments are merely illustrative of and not restrictive on the broad invention, and that this invention not be limited to the specific constructions and arrangements shown and described, since various other modifications may occur to those ordinarily skilled in the art. As such, what is claimed is:
U.S. installed wind energy generating capacity: 11,603 megawatts
Worldwide wind energy capacity: 74,223 megawatts
U.S. electricity generated from wind: Estimated 31 billion kilowatt hours, about 0.7 percent of the U.S. electricity supply
Amount of carbon dioxide emitted if 31 billion kilowatt hours were generated from the U.S. electricity fuel mix: 19 million tons
U.S. wind energy potential: 10,777 billion kilowatt hours a year
U.S. industry growth rate: 22 percent average over last five years
Average American homes served by one megawatt of wind capacity: 250-300
Source: American Wind Energy Association
OSCILLATION WIND DRIVEN MACHINE
Inventor(s): BOZOVIC ZORAN [YU]
Classification: - international: (IPC1-7): F03D5/00
Abstract -- The oscillation wind driven machine represents a new approach to using the potential energy of wind in that it works as an oscillator within resonance, in the regime of laminar fluid flow. It thereby differs from a standard windmill in that: it has no losses induced by turbulent fluid flow; the driver wing (1) is suspended at two or more points, so there is no moment problem as seen with the propeller of a standard windmill; the driver wing engages 20 times the quantity of fluid engaged over the surface circumscribed by the propeller of a standard windmill; the machine is a module that can be combined into a larger whole, where the dimensions and number of modules are limited only by the projected power of the plant; and the machine is designed for a range of fluid speeds within which it maintains a constant degree of exploitation, transferring less power at slower speeds and more at faster speeds within the same oscillation frequency.
Discovered by: Asaph Hall
Discovery date: 17 August 1877
Semi-major axis: 9,376 km (2.76 Mars radii)
Orbital period: 7 h 39.2 min
Average orbital speed: 2.138 km/s
Inclination: 1.093° (to Mars's equator); 0.046° (to local Laplace plane); 26.04° (to the ecliptic)
Dimensions: 27 × 22 × 18 km
Mean radius: 11.27 km (1.769 mEarths)
Volume: 5,784 km³ (5.339 nEarths)
Mass: 1.066 × 10^16 kg (1.784 nEarths)
Surface gravity: ≈ 0.0057 m/s² (581.4 µg)
Equatorial rotation velocity: 11.0 km/h (6.8 mph) (at longest axis)
Temperature: ≈ 233 K
Phobos (systematic designation: Mars I) is the larger and innermost of the two natural satellites of Mars, the other being Deimos. Both moons were discovered in 1877 by American astronomer Asaph Hall.
Phobos is a small, irregularly shaped object with a mean radius of 11 km (7 mi), and is seven times more massive than Deimos, Mars's outer moon. Phobos is named after the Greek god Phobos, a son of Ares (Mars) and Aphrodite (Venus) and the personification of horror. The name "Phobos" is pronounced FOH-bəs, or like the Greek Φόβος.
Phobos orbits 6,000 km (3,700 mi) from the Martian surface, closer to its primary than any other known planetary moon. It is so close that it orbits Mars faster than Mars rotates, and completes an orbit in just 7 hours and 39 minutes. As a result, from the surface of Mars it appears to rise in the west, move across the sky in 4 hours 15 min or less, and set in the east, twice each Martian day. Phobos is one of the least reflective bodies in the Solar System, and features a large impact crater, Stickney. The temperatures range from about −4 °C (25 °F) to −112 °C (−170 °F), on the sunlit and shadowed sides respectively.
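The 7-hour-39-minute period can be recovered from the orbit's size with Kepler's third law. A minimal Python sketch, using the 9,376 km semi-major axis from the infobox (the ~6,000 km figure above is altitude above the surface, not distance from Mars's center) and a standard value for Mars's gravitational parameter, which is an assumption not given in the article:

```python
import math

# Kepler's third law: T = 2*pi*sqrt(a^3 / GM).
GM_MARS = 4.2828e4    # km^3/s^2, Mars's gravitational parameter (assumed standard value)
a = 9376.0            # km, Phobos's semi-major axis (from the infobox)

period_s = 2.0 * math.pi * math.sqrt(a**3 / GM_MARS)
period_h = period_s / 3600.0
minutes = (period_h - int(period_h)) * 60.0
print(f"Orbital period: {int(period_h)} h {minutes:.1f} min")
```

This lands within a fraction of a minute of the quoted 7 h 39.2 min; the small residual comes from the rounded input values.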
Images and models indicate that Phobos may be a rubble pile held together by a thin crust, and that it is being torn apart by tidal interactions. Phobos is drawing closer to Mars by 2 meters every one hundred years, and it is predicted that in 30 to 50 million years it will collide with the planet or break up into a planetary ring.
- 1 Discovery
- 2 Physical characteristics
- 3 Orbital characteristics
- 4 Origin
- 5 Shklovsky's "Hollow Phobos" hypothesis
- 6 Exploration
- 7 In fiction
- 8 See also
- 9 References
- 10 External links
Phobos was discovered by astronomer Asaph Hall on 18 August 1877, at the United States Naval Observatory in Washington, D.C., at about 09:14 Greenwich Mean Time (contemporary sources, using the pre-1925 astronomical convention that began the day at noon, give the time of discovery as 17 August at 16:06 Washington mean time). Hall also discovered Deimos, Mars's other moon, on 12 August 1877 at about 07:48 UTC. The names, originally spelled Phobus and Deimus respectively, were suggested by Henry Madan (1838–1901), Science Master of Eton, based on Greek mythology, in which Phobos is a companion to the god Ares.
Phobos has dimensions of 27 × 22 × 18 km, and retains too little mass to be rounded under its own gravity. It has no atmosphere, owing to its low mass and low gravity. It is one of the least reflective bodies in the Solar System. Spectroscopically it appears similar to the D-type asteroids, and is apparently of a composition similar to carbonaceous chondrite material. Phobos's density is too low for solid rock, and it is known to have significant porosity. These results led to the suggestion that Phobos might contain a substantial reservoir of ice. Spectral observations indicate that the surface regolith layer lacks hydration, but ice below the regolith is not ruled out.
Phobos is heavily cratered. The most prominent surface feature is the crater Stickney, named after Asaph Hall's wife, Angeline Stickney Hall, Stickney being her maiden name. As with Mimas's crater Herschel, the impact that created Stickney must have nearly shattered Phobos. Many grooves and streaks also cover the oddly shaped surface. The grooves are typically less than 30 meters (98 ft) deep, 100 to 200 meters (330 to 660 ft) wide, and up to 20 kilometers (12 mi) in length, and were originally assumed to have been the result of the same impact that created Stickney. Analysis of results from the Mars Express spacecraft, however, revealed that the grooves are not in fact radial to Stickney, but are centered on the leading apex of Phobos in its orbit (which is not far from Stickney). Researchers suspect that they have been excavated by material ejected into space by impacts on the surface of Mars. The grooves thus formed as crater chains, and all of them fade away as the trailing apex of Phobos is approached. They have been grouped into 12 or more families of varying age, presumably representing at least 12 Martian impact events.
Faint dust rings produced by Phobos and Deimos have long been predicted but attempts to observe these rings have, to date, failed. Recent images from Mars Global Surveyor indicate that Phobos is covered with a layer of fine-grained regolith at least 100 meters thick; it is hypothesized to have been created by impacts from other bodies, but it is not known how the material stuck to an object with almost no gravity.
The unique Kaidun meteorite that fell on a Soviet military base in Yemen in 1980 has been hypothesized to be a piece of Phobos, but this has been difficult to verify because little is known about the exact composition of Phobos.
Named geological features
Geological features on Phobos are named after astronomers who studied Phobos and after people and places from Jonathan Swift's Gulliver's Travels. There is one named regio, Laputa Regio, and one named planitia, Lagado Planitia; both are named after places in Gulliver's Travels (the fictional flying island of Laputa, and Lagado, imaginary capital of the fictional nation Balnibarbi). The only named ridge on Phobos is Kepler Dorsum, named after the astronomer Johannes Kepler. The named craters are listed below.
Crater: Named after
Clustril: Character in Gulliver's Travels
D'Arrest: Heinrich Louis d'Arrest, astronomer
Drunlo: Character in Gulliver's Travels
Flimnap: Character in Gulliver's Travels
Grildrig: Character in Gulliver's Travels
Gulliver: Main character of Gulliver's Travels
Hall: Asaph Hall, discoverer of Phobos
Limtoc: Character in Gulliver's Travels
Öpik: Ernst J. Öpik, astronomer
Reldresal: Character in Gulliver's Travels
Roche: Édouard Roche, astronomer
Sharpless: Bevan Sharpless, astronomer
Shklovsky: Iosif Shklovsky, astronomer
Skyresh: Character in Gulliver's Travels
Stickney: Angeline Stickney, wife of Asaph Hall
Todd: David Peck Todd, astronomer
Wendell: Oliver Wendell, astronomer
The orbital motion of Phobos has been intensively studied, making it "the best studied natural satellite in the Solar System". Its close orbit around Mars produces some unusual effects. At an altitude of 5,989 km (3,721 mi), Phobos orbits Mars below the synchronous orbit radius, meaning that it moves around Mars faster than Mars itself rotates. Therefore, from the point of view of an observer on the surface of Mars, it rises in the west, moves comparatively rapidly across the sky (in 4 h 15 min or less) and sets in the east, approximately twice each Martian day (every 11 h 6 min). Because it is close to the surface and in an equatorial orbit, it cannot be seen above the horizon from latitudes greater than 70.4°.

Its orbit is so low that its angular diameter, as seen by an observer on Mars, varies visibly with its position in the sky. Seen at the horizon, Phobos is about 0.14° wide; at zenith it is 0.20°, one-third as wide as the full Moon as seen from Earth. By comparison, the Sun has an apparent size of about 0.35° in the Martian sky. Phobos's phases, inasmuch as they can be observed from Mars, take 0.3191 days (Phobos's synodic period) to run their course, a mere 13 seconds longer than Phobos's sidereal period.

As seen from Phobos, Mars would appear 6,400 times larger and 2,500 times brighter than the full Moon appears from Earth, taking up a quarter of the width of a celestial hemisphere. The Mars–Phobos L1 Lagrangian point is 2.5 kilometers (1.6 mi) above Stickney, which is unusually close to the surface.
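The horizon-versus-zenith difference in apparent size follows from simple geometry: at zenith the distance to Phobos is its orbital radius minus Mars's radius, while at the horizon the line of sight runs tangent to the planet's surface. A minimal Python sketch, where Mars's mean radius (~3,389.5 km) and a ~22.2 km mean diameter for Phobos are assumed values not given in this section:

```python
import math

# Apparent angular diameter of Phobos from the Martian equator,
# at zenith versus at the horizon (small-angle approximation).
R_MARS = 3389.5   # km, mean radius of Mars (assumed)
a = 9376.0        # km, Phobos's orbital radius (from the infobox)
d_phobos = 22.2   # km, roughly twice Phobos's mean radius (assumed)

dist_zenith = a - R_MARS                     # Phobos directly overhead
dist_horizon = math.sqrt(a**2 - R_MARS**2)   # line of sight tangent to the surface

zenith_deg = math.degrees(d_phobos / dist_zenith)
horizon_deg = math.degrees(d_phobos / dist_horizon)
print(f"horizon: {horizon_deg:.2f} deg, zenith: {zenith_deg:.2f} deg")
```

This yields roughly 0.15° at the horizon and 0.21° at zenith, close to the quoted 0.14° and 0.20°; the residual difference reflects the assumed diameter.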
An observer situated on the Martian surface, in a position to observe Phobos, would see regular transits of Phobos across the Sun. Several of these transits have been photographed by the Mars rover Opportunity. During a transit, Phobos's shadow is cast on the surface of Mars, an event that has been photographed by several spacecraft. Phobos is not large enough to cover the Sun's disk, and so cannot cause a total eclipse.
Tidal deceleration is gradually decreasing the orbital radius of Phobos by 2 meters every one hundred years. Scientists estimate that Phobos will be destroyed in approximately 30–50 million years, with one study's estimate being about 43 million years.
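As a sanity check on these figures, the 2 m-per-century decay can be extrapolated down to the ~2.1-Mars-radii breakup distance discussed in this article. A back-of-envelope Python sketch, where Mars's mean radius is an assumed value: a constant-rate extrapolation gives on the order of 100 million years, noticeably longer than the published 30–50 million-year estimates, because tidal decay accelerates as the orbit shrinks.

```python
# Naive linear extrapolation of Phobos's orbital decay toward breakup.
# Mars's mean radius is an assumed value; the 2 m/century decay rate and
# the ~2.1-Mars-radii breakup distance are taken from the article.
R_MARS_M = 3_389_500.0         # m, mean radius of Mars (assumed)
a_now = 9_376_000.0            # m, current orbital radius of Phobos
a_breakup = 2.1 * R_MARS_M     # m, approximate tidal breakup radius
decay_per_year = 0.02          # m/yr (2 m per one hundred years)

years = (a_now - a_breakup) / decay_per_year
print(f"Naive linear estimate: {years / 1e6:.0f} million years")
```

The gap between this naive number and the modeled estimates is itself informative: the decay rate is not constant over tens of millions of years.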
Phobos's grooves were long thought to be fractures caused by the impact that formed Stickney crater. Other modelling since the 1970s has supported the idea that the grooves are more like "stretch marks" that occur as Phobos is deformed by tidal forces. In 2015, however, when the tidal forces were calculated and used in a new model, the stresses proved too weak to fracture a solid moon of that size unless Phobos is a rubble pile surrounded by a layer of powdery regolith about 100 m (330 ft) thick. Given Phobos's irregular shape and assuming that it is a pile of rubble (specifically a Mohr–Coulomb body), it will eventually break up when it reaches approximately 2.1 Mars radii.
Stress fractures calculated for this rubble-pile model line up with the grooves observed on Phobos. The model is further supported by the discovery that some of the grooves are younger than others, implying that whatever process produces the grooves is ongoing.
When Phobos is eventually torn apart by tidal forces, it is likely that a fraction of the debris will form a planetary ring around Mars. This ring may last for between one and one hundred million years.
The origin of the Martian moons is still controversial. Phobos and Deimos both have much in common with carbonaceous C-type asteroids, with spectra, albedo, and density very similar to those of C- or D-type asteroids. Based on their similarity, one hypothesis is that both moons may be captured main-belt asteroids. Both moons have very circular orbits which lie almost exactly in Mars's equatorial plane, and hence a capture origin requires a mechanism for circularizing the initially highly eccentric orbit, and adjusting its inclination into the equatorial plane, most probably by a combination of atmospheric drag and tidal forces, although it is not clear that sufficient time is available for this to occur for Deimos. Capture also requires dissipation of energy. The current Martian atmosphere is too thin to capture a Phobos-sized object by atmospheric braking. Geoffrey Landis has pointed out that the capture could have occurred if the original body was a binary asteroid that separated under tidal forces.
Another hypothesis is that Mars was once surrounded by many Phobos- and Deimos-sized bodies, perhaps ejected into orbit around it by a collision with a large planetesimal. The high porosity of the interior of Phobos (based on the density of 1.88 g/cm3, voids are estimated to comprise 25 to 35 percent of Phobos's volume) is inconsistent with an asteroidal origin. Observations of Phobos in the thermal infrared suggest a composition containing mainly phyllosilicates, which are well known from the surface of Mars. The spectra are distinct from those of all classes of chondrite meteorites, again pointing away from an asteroidal origin. Both sets of findings support an origin of Phobos from material ejected by an impact on Mars that reaccreted in Martian orbit, similar to the prevailing theory for the origin of Earth's moon.
Shklovsky's "Hollow Phobos" hypothesis
In the late 1950s and 1960s, the unusual orbital characteristics of Phobos led to speculations that it might be hollow.
Around 1958, Russian astrophysicist Iosif Samuilovich Shklovsky, studying the secular acceleration of Phobos's orbital motion, suggested a "thin sheet metal" structure for Phobos, a suggestion which led to speculations that Phobos was of artificial origin. Shklovsky based his analysis on estimates of the upper Martian atmosphere's density, and deduced that for the weak braking effect to be able to account for the secular acceleration, Phobos had to be very light—one calculation yielded a hollow iron sphere 16 kilometers (9.9 mi) across but less than 6 cm thick. In a February 1960 letter to the journal Astronautics, Fred Singer, then science advisor to U.S. President Dwight D. Eisenhower, said of Shklovsky's theory:
If the satellite is indeed spiraling inward as deduced from astronomical observation, then there is little alternative to the hypothesis that it is hollow and therefore Martian made. The big 'if' lies in the astronomical observations; they may well be in error. Since they are based on several independent sets of measurements taken decades apart by different observers with different instruments, systematic errors may have influenced them.
Subsequently, the systematic data errors that Singer predicted were indeed found to exist, and the claim was called into doubt; accurate measurements of the orbit available by 1969 showed that the discrepancy did not exist. Singer's critique was vindicated when earlier studies were found to have used an overestimated value of 5 cm/yr for the rate of altitude loss, which was later revised to 1.8 cm/yr. The secular acceleration is now attributed to tidal effects, which had not been considered in the earlier studies.
The density of Phobos has now been directly measured by spacecraft to be 1.887 g/cm3. Current observations are consistent with Phobos being a rubble pile. In addition, images obtained by the Viking probes in the 1970s clearly showed a natural object, not an artificial one. Nevertheless, mapping by the Mars Express probe and subsequent volume calculations do suggest the presence of voids and indicate that it is not a solid chunk of rock but a porous body. The porosity of Phobos was calculated to be 30% ± 5%, or a quarter to a third being empty. This void space is mostly on small scales (millimeters to meters), between individual grains and boulders.
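The quoted porosity can be roughly recovered from the measured bulk density via porosity = 1 − ρ_bulk / ρ_grain. A minimal Python sketch, where the ~2.6 g/cm³ grain density (typical of carbonaceous-chondrite-like rock) is an assumption not stated in the article:

```python
# Back-of-envelope porosity estimate from Phobos's measured bulk density.
rho_bulk = 1.887   # g/cm^3, measured bulk density (from the article)
rho_grain = 2.6    # g/cm^3, assumed density of the solid grains

porosity = 1.0 - rho_bulk / rho_grain
print(f"Implied porosity: {porosity:.0%}")
```

This gives roughly 27%, consistent with the 30% ± 5% figure above; the exact value depends entirely on the assumed grain density.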
Phobos has been photographed in close-up by several spacecraft whose primary mission has been to photograph Mars. The first was Mariner 7 in 1969, followed by Viking 1 in 1977, Mars Global Surveyor in 1998 and 2003, Mars Express in 2004, 2008, and 2010, and Mars Reconnaissance Orbiter in 2007 and 2008. On August 25, 2005, the Spirit Rover, with an excess of energy due to wind blowing dust off of its solar panels, took several short-exposure photographs of the night sky from the surface of Mars. Phobos and Deimos are both clearly visible in the photograph.
The Soviet Union undertook the Phobos program with two probes, both launched successfully in July 1988. Phobos 1 was dedicated to a flyby of Phobos and carried landers, including a hopping lander; had it succeeded, Phobos 2 might have been redirected to Deimos. In the event, the first probe was lost en route to Mars, whereas the second returned some data and images but failed shortly before beginning its detailed examination of Phobos's surface, which was to have included deploying a lander. Other Mars missions collected more data, but the next dedicated attempt would be a sample return mission.
The Russian Space Agency launched a sample return mission to Phobos in November 2011, called Fobos-Grunt. The return capsule also included a life science experiment of The Planetary Society, called Living Interplanetary Flight Experiment, or LIFE. A second contributor to this mission was the China National Space Administration, which supplied a surveying satellite called "Yinghuo-1", which would have been released in the orbit of Mars, and a soil-grinding and sieving system for the scientific payload of the Phobos lander. However, after achieving Earth orbit, the Fobos-Grunt probe failed to initiate subsequent burns that would have sent it off to Mars. Attempts to recover the probe were unsuccessful and it crashed back to Earth in January 2012.
In 1997 and 1998, the Aladdin mission was selected as a finalist in the NASA Discovery Program. The plan was to visit both Phobos and Deimos, launching projectiles at the satellites and collecting the ejecta as the probe performed a slow flyby (~1 km/s). These samples would be returned to Earth for study three years later. The Principal Investigator was Dr. Carle Pieters of Brown University. The total mission cost, including launch vehicle and operations, was $247.7 million. Ultimately, the mission chosen to fly was MESSENGER, a probe to Mercury.
In 2007, the European aerospace subsidiary EADS Astrium was reported to be developing a mission to Phobos as a technology demonstrator. Astrium was involved in developing a European Space Agency plan for a sample return mission to Mars as part of the ESA's Aurora programme, and sending a mission to Phobos, with its low gravity, was seen as a good opportunity for testing and proving the technologies required for an eventual sample return mission to Mars. The mission was envisioned to start in 2016 and last three years. The company planned to use a "mothership" propelled by an ion engine, which would release a lander to the surface of Phobos. The lander would perform tests and experiments, gather samples into a capsule, then return to the mothership and head back to Earth, where the samples would be jettisoned for recovery on the surface.
In 2007, the Canadian Space Agency funded a study by Optech and the Mars Institute for an unmanned mission to Phobos known as Phobos Reconnaissance and International Mars Exploration (PRIME). A proposed landing site for the PRIME spacecraft is at the "Phobos monolith", a bright object near Stickney that casts a prominent shadow. Astronaut Buzz Aldrin referred to this "monolith" in a July 22, 2009 interview with C-SPAN: "We should go boldly where man has not gone before. Fly by the comets, visit asteroids, visit the moon of Mars. There’s a monolith there. A very unusual structure on this potato shaped object that goes around Mars once in seven hours. When people find out about that they’re going to say ‘Who put that there? Who put that there?’ The universe put it there. If you choose, God put it there..." The PRIME mission would be composed of an orbiter and lander, each carrying four instruments designed to study various aspects of Phobos's geology. As of 30 April 2009, PRIME did not have a projected launch date.
In 2008, NASA Glenn Research Center began studying a Phobos and Deimos sample return mission that would use solar electric propulsion. The study gave rise to the "Hall" mission concept, a New Frontiers-class mission currently under further study.
As of January 2013, a new Phobos Surveyor mission is currently under development by a collaboration of Stanford University, NASA's Jet Propulsion Laboratory, and the Massachusetts Institute of Technology. The mission is currently in the testing phases, and the team at Stanford plans to launch the mission between 2023 and 2033.
In March 2014, a Discovery-class mission was proposed to place an orbiter in Mars orbit by 2021 to study Phobos and Deimos through a series of close flybys. The mission is called Phobos And Deimos & Mars Environment (PADME). Two other Phobos missions proposed for the Discovery 13 selection were Merlin, which would fly by Deimos but orbit and land on Phobos, and Pandora, which would orbit both Deimos and Phobos.
As part of a manned mission to Mars
Phobos has been proposed as an early target for a manned mission to Mars. The tele-operation of robotic scouts on Mars by humans on Phobos could be conducted without significant time delay, and planetary protection concerns in early Mars exploration might be addressed by such an approach.
Phobos has also been proposed as an early target for a manned mission to Mars because a landing on Phobos would be considerably less difficult and expensive than a landing on the surface of Mars itself. A lander bound for Mars would need to be capable of atmospheric entry and subsequent return to orbit, without any support facilities (a capacity that has never been attempted in a manned spacecraft), or would require the creation of support facilities in-situ (a "colony or bust" mission); a lander intended for Phobos could be based on equipment designed for lunar and asteroid landings. Additionally, the delta-v to land on Phobos and return is only 80% of that for a trip to and from the surface of the Moon, partly due to Phobos's very weak gravity.
The human exploration of Phobos could serve as a catalyst for the human exploration of Mars and be exciting and scientifically valuable in its own right.
- Arthur C. Clarke's short story "Hide-and-Seek" (in Expedition to Earth) is set on and near Phobos.
- Ty Drago's novel "Phobos" is a "mysterious deaths investigation" thriller set on Mars and Phobos.
- Id Software's 1993 game Doom has its first third set on Phobos.
- In Andy Weir's novel The Martian the path of Phobos rising and setting is used to aid navigation.
- List of natural satellites
- List of missions to the moons of Mars
- Phobos and Deimos in fiction
- Transit of Phobos from Mars
- "Mars: Moons: Phobos". NASA Solar System Exploration. 30 September 2003. Retrieved 2 December 2013.
- "Planetary Satellite Physical Parameters". JPL (Solar System Dynamics). 13 July 2006. Retrieved 29 January 2008.
- "NASA – Phobos". Solarsystem.nasa.gov. Retrieved 2014-08-04.
- "Phobos is Slowly Falling Apart". NASA. SpaceRef. 10 November 2015. Retrieved 2015-11-11.
- "Notes: The Satellites of Mars". The Observatory. 1 (6): 181–185. 20 September 1877. Bibcode:1877Obs.....1..181. Retrieved 4 February 2009.
- Hall, A. (17 October 1877). "Observations of the Satellites of Mars". Astronomische Nachrichten (Signed 21 September 1877). 91 (2161): 11/12–13/14. Bibcode:1877AN.....91...11H. doi:10.1002/asna.18780910103.
- Morley, T. A. (February 1989). "A Catalogue of Ground-Based Astrometric Observations of the Martian Satellites, 1877–1982". Astronomy and Astrophysics Supplement Series. 77 (2): 209–226. Bibcode:1989A&AS...77..209M. (Table II, p. 220: first observation of Phobos on 18 August 1877.38498)
- Madan, H. G. (4 October 1877). "Letters to the Editor: The Satellites of Mars". Nature (Signed 29 September 1877). 16 (414): 475. Bibcode:1877Natur..16R.475M. doi:10.1038/016475b0.
- Hall, A. (14 March 1878). "Names of the Satellites of Mars". Astronomische Nachrichten (Signed 7 February 1878). 92 (2187): 47–48. Bibcode:1878AN.....92...47H. doi:10.1002/asna.18780920304.
- "Solar System Exploration: Planets: Mars: Moons: Phobos: Overview". Solarsystem.nasa.gov. Retrieved 19 August 2013.
- "New Views of Martian Moons".
- Lewis, J. S. (2004). Physics and Chemistry of the Solar System. Elsevier Academic Press. p. 425. ISBN 0-12-446744-X.
- "Porosity of Small Bodies and a Reassessment of Ida's Density".
When the error bars are taken into account, only one of these, Phobos, has a porosity below 0.2...
- "Close Inspection for Phobos".
It is light, with a density less than twice that of water, and orbits just 5,989 kilometers (3,721 mi) above the Martian surface.
- Busch, M. W.; et al. (2007). "Arecibo Radar Observations of Phobos and Deimos". Icarus. 186 (2): 581–584. Bibcode:2007Icar..186..581B. doi:10.1016/j.icarus.2006.11.003.
- Murchie, S. L.; Erard, S.; Langevin, Y.; Britt, D. T.; et al. (1991). "Disk-resolved Spectral Reflectance Properties of Phobos from 0.3–3.2 microns: Preliminary Integrated Results from PhobosH 2". Abstracts of the Lunar and Planetary Science Conference. 22: 943. Bibcode:1991pggp.rept..249M.
- Rivkin, A. S.; et al. (March 2002). "Near-Infrared Spectrophotometry of Phobos and Deimos". Icarus. 156 (1): 64–75. Bibcode:2002Icar..156...64R. doi:10.1006/icar.2001.6767.
- Fanale, F. P.; Salvail, J. R. (1989). "Loss of water from Phobos". Geophys. Res. Lett. 16 (4): 287–290. Bibcode:1989GeoRL..16..287F. doi:10.1029/GL016i004p00287.
- Fanale, Fraser P.; Salvail, James R. (Dec 1990). "Evolution of the water regime of Phobos". Icarus. 88: 380–395. Bibcode:1990Icar...88..380F. doi:10.1016/0019-1035(90)90089-R.
- "Stickney Crater-Phobos".
One of the most striking features of Phobos, aside from its irregular shape, is its giant crater Stickney. Because Phobos is only 28 by 20 kilometers (17 by 12 mi), it must have been nearly shattered from the force of the impact that caused the giant crater. Grooves that extend across the surface from Stickney appear to be surface fractures caused by the impact.
- Murray, J. B.; et al. "New Evidence on the Origin of Phobos' Parallel Grooves from HRSC Mars Express" (PDF). 37th Annual Lunar and Planetary Science Conference, March 2006.
- Showalter, M. R.; Hamilton, D. P.; Nicholson, P. D. (2006). "A Deep Search for Martian Dust Rings and Inner Moons Using the Hubble Space Telescope" (PDF). Planetary and Space Science. 54 (9–10): 844–854. Bibcode:2006P&SS...54..844S. doi:10.1016/j.pss.2006.05.009.
- Britt, Robert Roy (13 March 2001). "Forgotten Moons: Phobos and Deimos Eat Mars' Dust". space.com. Retrieved 12 May 2010.
- Ivanov, Andrei V. (March 2004). "Is the Kaidun Meteorite a Sample from Phobos?". Solar System Research. 38 (2): 97–107. Bibcode:2004SoSyR..38...97I. doi:10.1023/B:SOLS.0000022821.22821.84.
- Ivanov, Andrei; Michael Zolensky (2003). "The Kaidun Meteorite: Where Did It Come From?" (PDF). Lunar and Planetary Science. 34.
The currently available data on the lithologic composition of the Kaidun meteorite– primarily the composition of the main portion of the meteorite, corresponding to CR2 carbonaceous chondrites and the presence of clasts of deeply differentiated rock – provide weighty support for considering the meteorite’s parent body to be a carbonaceous chondrite satellite of a large differentiated planet. The only possible candidates in the modern Solar System are Phobos and Deimos, the moons of Mars.
- Gazetteer of Planetary Nomenclature USGS Astrogeology Research Program, Categories
- Gazetteer of Planetary Nomenclature USGS Astrogeology Research Program, Phobos
- Gazetteer of Planetary Nomenclature USGS Astrogeology Research Program, Craters
- USGS Staff. "Phobos Map – Shaded Relief" (PDF). USGS. Retrieved 18 August 2013.
- Bills, Bruce G.; Gregory A. Neumann; David E. Smith; Maria T. Zuber (2005). "Improved estimate of tidal dissipation within Mars from MOLA observations of the shadow of Phobos" (PDF). Journal of Geophysical Research. 110 (E07004). Bibcode:2005JGRE..110.7004B. doi:10.1029/2004je002376.
- Efroimsky, M.; Lainey, V. (2007). "Physics of bodily tides in terrestrial planets and the appropriate scales of dynamical evolution.". Journal of Geophysical Research. 112 (E12): E12003. Bibcode:2007JGRE..11212003E. arXiv: . doi:10.1029/2007JE002908.
- Holsapple, K. A. (December 2001). "Equilibrium Configurations of Solid Cohesionless Bodies". Icarus. 154 (2): 432–448. Bibcode:2001Icar..154..432H. doi:10.1006/icar.2001.6683.
- Hurford, T.; et al. (2015). Annual Meeting of the Division for Planetary Sciences of the American Astronomical Society, National Harbor, Maryland.
- Black, B. A., and T. Mittal (2015), The demise of Phobos and development of a Martian ring system, Nature Geosci, advance online publication, doi:10.1038/ngeo2583.
- Burns, J. A. "Contradictory Clues as to the Origin of the Martian Moons," in Mars, H. H. Kieffer et al., eds., U. Arizona Press, Tucson, 1992
- "Close Inspection for Phobos".
One idea is that Phobos and Deimos, Mars's other moon, are captured asteroids.
- Landis, G. A. "Origin of Martian Moons from Binary Asteroid Dissociation," American Association for the Advancement of Science Annual Meeting; Boston, MA, 2001; abstract.
- Martin Pätzold & Olivier Witasse (4 March 2010). "Phobos Flyby Success". ESA. Retrieved 4 March 2010.
- Craddock, R. A.; (1994); The Origin of Phobos and Deimos, Abstracts of the 25th Annual Lunar and Planetary Science Conference, held in Houston, TX, 14–18 March 1994, p. 293
Python library for symbolic mathematics
This package is based on the package 'python-sympy' from project 'science'. SymPy is a Python library for symbolic mathematics. It aims to become a full-featured computer algebra system (CAS) while keeping the code as simple as possible in order to be comprehensible and easily extensible. SymPy is written entirely in Python and does not require any external libraries.
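A minimal sketch of the kind of symbolic manipulation SymPy supports (the expressions below are illustrative examples, not taken from the package description):

```python
import sympy as sp

x = sp.symbols("x")

# Expand a polynomial symbolically.
expanded = sp.expand((x + 1) ** 3)

# Differentiate a product of functions.
derivative = sp.diff(sp.sin(x) * sp.exp(x), x)

# Evaluate a definite integral in closed form: the Gaussian integral.
gaussian = sp.integrate(sp.exp(-x ** 2), (x, -sp.oo, sp.oo))
```

Because SymPy is pure Python, examples like this run without any compiled extensions or external computer algebra systems.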
Astronomers from Bonn and Tautenburg in Thuringia (Germany) used the 100-m radio telescope at Effelsberg to observe several galaxy clusters. At the edges of these large accumulations of dark matter, stellar systems (galaxies), hot gas, and charged particles, they found magnetic fields that are exceptionally ordered over distances of many million light years. This makes them the most extended magnetic fields in the universe known so far.
The results will be published on March 22 in the journal „Astronomy & Astrophysics“.
Galaxy clusters are the largest gravitationally bound structures in the universe. With a typical extent of about 10 million light years, i.e. 100 times the diameter of the Milky Way, they host a large number of such stellar systems, along with hot gas, magnetic fields, and charged particles, all embedded in large haloes of dark matter, whose composition is unknown.
Fig. 1: Radio map of the relic at the outskirts of the galaxy cluster CIZA J2242+53, at a distance of about two billion light years, observed with the Effelsberg radio telescope at 3 cm wavelength. (Image: Maja Kierdorf et al., 2017, A&A 600, A18)
Collision of galaxy clusters leads to a shock compression of the hot cluster gas and of the magnetic fields. The resulting arc-like features are called “relics” and stand out by their radio and X-ray emission. Since their discovery in 1970 with a radio telescope near Cambridge/UK, relics were found in about 70 galaxy clusters so far, but many more are likely to exist. They are messengers of huge gas flows that continuously shape the structure of the universe.
Radio waves are excellent tracers of relics. The compression of magnetic fields orders the field lines, which also affects the emitted radio waves. More precisely, the emission becomes linearly polarized. This effect was detected in four galaxy clusters by a team of researchers at the Max Planck Institute for Radio Astronomy in Bonn (MPIfR), the Argelander Institute for Astronomy at the University of Bonn (AIfA), the Thuringia State Observatory at Tautenburg (TLS), and colleagues in Cambridge, USA.
They used the MPIfR’s 100-m radio telescope near Bad Münstereifel-Effelsberg in the Eifel hills at wavelengths of 3 cm and 6 cm. Such short wavelengths are advantageous because the polarized emission is not diminished when passing through the galaxy cluster and our Milky Way. Fig. 1 shows the most spectacular case.
Linearly polarized relics were found in the four galaxy clusters observed, in one case for the first time. The magnetic fields are of similar strength as in our Milky Way, while the measured degrees of polarization of up to 50% are exceptionally high, indicating that the emission originates in an extremely ordered magnetic field. “We discovered the so far largest ordered magnetic fields in the universe, extending over 5-6 million light years”, says Maja Kierdorf from MPIfR Bonn, the project leader and first author of the publication.
She also wrote her master's thesis at Bonn University on this subject. For this project, co-author Matthias Hoeft from TLS Tautenburg developed a method that makes it possible to determine the "Mach number", i.e. the ratio of the relative velocity between the colliding gas clouds and the local sound speed, from the observed degree of polarization. The resulting Mach numbers of about two tell us that the galaxy clusters collide at velocities of about 2000 km/s, which is faster than previously derived from measurements of the X-ray emission.
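The relation the Mach number encodes is simply M = v / c_s, where c_s is the sound speed of the hot intracluster gas. A rough sketch of the arithmetic (the gas temperature and mean molecular weight below are assumed typical values, not numbers from the article):

```python
import math

# Assumed conditions for hot intracluster gas (illustrative values, not from the article).
T = 5.0e7          # gas temperature in kelvin (assumption)
mu = 0.6           # mean molecular weight of fully ionized gas (assumption)
gamma = 5.0 / 3.0  # adiabatic index of a monatomic gas

k_B = 1.380649e-23  # Boltzmann constant, J/K
m_p = 1.67262e-27   # proton mass, kg

# Adiabatic sound speed: c_s = sqrt(gamma * k_B * T / (mu * m_p)), in m/s.
c_s = math.sqrt(gamma * k_B * T / (mu * m_p))

# A Mach number of about two from the polarization analysis then implies:
mach = 2.0
v_collision_kms = mach * c_s / 1000.0
```

With these assumed gas conditions, Mach 2 corresponds to a collision speed of roughly 2000 km/s, consistent with the figure quoted above.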
The new Effelsberg telescope observations show that the polarization plane of the radio emission from the relics turns with wavelength. This “Faraday rotation effect”, named after the English physicist Michael Faraday, indicates that ordered magnetic fields also exist between the clusters and, together with hot gas, cause the rotation of the polarization plane. Such magnetic fields may be even larger than the clusters themselves.
„The Effelsberg radio telescope proved again to be an ideal instrument to detect magnetic fields in the universe“, emphasizes co-author Rainer Beck from MPIfR, who has worked on this topic for more than 40 years. "Now we can systematically search for ordered magnetic fields in galaxy clusters using polarized radio waves."
The research team comprises Maja Kierdorf, Rainer Beck, Matthias Hoeft, Uli Klein, Reinout van Weeren, William Forman, and Christine Jones. First author Maja Kierdorf and Rainer Beck are MPIfR employees.
Relics in galaxy clusters at high radio frequencies, M. Kierdorf, R. Beck, M. Hoeft, U. Klein, R. J. van Weeren, W. R. Forman, and C. Jones, 2017, Astronomy & Astrophysics 600, A18 (March 22, 2017): https://doi.org/10.1051/0004-6361/201629570
Max-Planck-Institut für Radioastronomie, Bonn.
Phone: +49 228 525-180
Dr. Rainer Beck,
Max-Planck-Institut für Radioastronomie, Bonn
Phone: +49 6221 528-323
Dr. Norbert Junkes,
Press and Public Outreach
Max-Planck-Institut für Radioastronomie, Bonn.
Phone: +49 228 525-399
Norbert Junkes | Max-Planck-Institut für Radioastronomie
Drone projectiles are usually deadly, but this kind is decidedly more Earth-friendly.
A drone designed to plant trees from the sky is helping a non-profit organization grow forests around the world. The quadcopters fire tree-sprouting bullets from 300 feet up in a process that is 10 times as fast, and half as expensive, as planting by hand, Fast Company reports.
Developed by startup BioCarbon Engineering, the tree-planting process works in three phases. First, a drone maps out the area, collecting data on obstructions, biodiversity, and topography. Then, the startup’s program develops a flight pattern that is optimized for tree growth, according to the magazine.
Finally, the drone flies the pattern that was mapped out and fires seed-filled biodegradable pods into the soil below. Loaded with 300 “seedpods,” the drone can cover almost two and a half acres in less than 20 minutes. With BioCarbon’s current system, it is possible to fly six drones at once; Fast Company reports they have a collective output of 100,000 trees planted in a single day.
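As a sanity check of the figures quoted above (the pod count, flight time, and fleet size come from the article; the assumption of back-to-back flights with no reloading time is mine):

```python
pods_per_flight = 300    # seedpods loaded per drone flight (from the article)
minutes_per_flight = 20  # time to cover ~2.5 acres (from the article)

# Per-drone planting rate, assuming back-to-back flights (an idealization).
pods_per_hour = pods_per_flight * (60 / minutes_per_flight)

fleet = 6
fleet_pods_per_hour = fleet * pods_per_hour

# Hours of continuous operation needed to hit the claimed daily output.
hours_for_100k = 100_000 / fleet_pods_per_hour
```

The claimed 100,000 trees per day therefore implies nearly round-the-clock flying (about 18.5 hours), before accounting for reloading and battery swaps.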
In September, BioCarbon will deploy the drones in the skies above Myanmar and work with the Worldview International Foundation to plant and regrow mangrove forests along the Irrawaddy River.
Isotope mass: 55.9349375 u
Excess energy: −60601.003 ± 1.354 keV
Binding energy: 492253.892 ± 1.356 keV
Nickel-62, a relatively rare isotope of nickel, has a higher nuclear binding energy per nucleon than iron-56; this is consistent with nickel-62 having a higher mass per nucleon, because it has a greater proportion of neutrons, which are slightly more massive than protons. See the nickel-62 article for more information regarding the ordering of binding energy per nucleon, and mass per nucleon, for various nuclides.
Thus, light elements undergoing nuclear fusion and heavy elements undergoing nuclear fission release energy as their nucleons bind more tightly, and the resulting nuclei approach the maximum total energy per nucleon, which occurs at 62Ni. However, during nucleosynthesis in stars the competition between photodisintegration and alpha capturing causes more 56Ni to be produced than 62Ni (56Fe is produced later in the star's ejection shell as 56Ni decays). This means that as the Universe ages, more matter is converted into extremely tightly bound nuclei, such as 56Fe. This progression of matter towards iron and nickel is one of the phenomena responsible for the heat death of the universe.
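The ordering can be checked directly from total binding energies. The iron-56 value comes from the table above; the nickel-62 total binding energy (about 545,259 keV) is a commonly quoted literature value, not a number given in this text:

```python
# Total binding energies in keV.
be_fe56 = 492253.892  # iron-56, from the table above
be_ni62 = 545259.0    # nickel-62 (assumed literature value, not from this text)

# Binding energy per nucleon: total binding energy divided by mass number.
bepn_fe56 = be_fe56 / 56
bepn_ni62 = be_ni62 / 62
```

The per-nucleon values come out to roughly 8790 keV for iron-56 and 8795 keV for nickel-62: nickel-62 is indeed slightly more tightly bound per nucleon, even though iron-56 dominates the output of stellar nucleosynthesis for the reasons described above.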
Iron-56 is an isotope of iron.
Spotty thunderstorms that develop over the High Plains are likely to bring isolated severe weather incidents from the western part of the Dakotas to western Texas into the start of the weekend.
Where dry air from the mountains and deserts meets up with more humid air from the Gulf of Mexico, showers and thunderstorms will be common over the High Plains during the spring and early summer.
The weather pattern into Thursday will be no exception.
"Widely scattered storms over the Plains may become strong to locally severe along that boundary between the dry and moist air each day," according to AccuWeather Storm Warning Meteorologist Richard Schraeger.
On Tuesday afternoon, isolated storms produced hail as large as baseballs and wind gusts up to 80 mph in western Texas.
During Wednesday evening, storms may tend to focus right along the dry and moist boundary from western Texas to the western part of the Dakotas and eastern Montana.
On Thursday, a disturbance is forecast to move slowly eastward from the northern Rockies.
This system may help to focus strong to locally severe thunderstorms over parts of the northern and central Plains toward nightfall.
Isolated flash flooding and strong wind gusts may be the greatest threats from the Thursday night storms. However, incidents of hail are also likely.
While a small number of tornadoes may be spawned from the strongest storms into Wednesday night over the High Plains, the overall setup is not conducive to a tornado outbreak.
It is possible that conditions may become more favorable for a few tornadoes with the approach of the storm from the northern Rockies beginning Thursday night and perhaps continuing into this weekend.
The greatest risk may focus on, but may not be limited to areas from Nebraska and eastern Wyoming to parts of Oklahoma and northern Texas.
This sequence of images taken by NASA's Hubble Space Telescope shows Comet 252P/LINEAR as it passed by Earth. The visit was one of the closest encounters between a comet and our planet.
The images were taken on April 4, 2016, roughly two weeks after the icy visitor made its closest approach to Earth on March 21. The comet traveled within 3.3 million miles of Earth, or about 14 times the distance between our planet and the moon. These observations also represent the closest celestial object Hubble has observed, other than the moon.
The images reveal a narrow, well-defined jet of dust ejected by the comet's icy, fragile nucleus. The nucleus is too small for Hubble to resolve. Astronomers estimate that it is less than one mile across.
A comet produces jets of material as it travels close to the sun in its orbit. Sunlight warms ices in a comet's nucleus, resulting in large amounts of dust and gas being ejected, sometimes in the form of jets. The jet in the Hubble images is illuminated by sunlight.
The jet also appears to change direction in the images, which is evidence that the comet's nucleus is spinning. The spinning nucleus makes the jet appear to rotate like the water jet from a rotating lawn sprinkler. The images underscore the dynamics and volatility of a comet's fragile nucleus.
Comet 252P/LINEAR is traveling away from Earth and the sun; its orbit will bring it back to the inner solar system in 2021, but not anywhere close to the Earth.
These visible-light images were taken with Hubble's Wide Field Camera 3.
The Hubble Space Telescope is a project of international cooperation between NASA and the European Space Agency. NASA's Goddard Space Flight Center in Greenbelt, Maryland, manages the telescope.
The Space Telescope Science Institute (STScI) in Baltimore, Maryland, conducts Hubble science operations. STScI is operated for NASA by the Association of Universities for Research in Astronomy in Washington, D.C.
For images, video, and more information about Comet 252P/LINEAR and Hubble, visit:
Planetary Science Institute, Tucson, Arizona
Rob Gutro | EurekAlert!
Relativistic plasma jets are observed in many systems that host accreting black holes. According to theory, coiled magnetic fields close to the black hole accelerate and collimate the plasma, leading to a jet being launched [1,2,3]. Isolating emission from this acceleration and collimation zone is key to measuring its size and understanding jet formation physics. But this is challenging because emission from the jet base cannot easily be disentangled from other accreting components. Here, we show that rapid optical flux variations from an accreting Galactic black-hole binary are delayed with respect to X-rays radiated from close to the black hole by about 0.1 seconds, and that this delayed signal appears together with a brightening radio jet. The origin of these subsecond optical variations has hitherto been controversial [4,5,6,7,8]. Not only does our work strongly support a jet origin for the optical variations but it also sets a characteristic elevation of ≲10³ Schwarzschild radii for the main inner optical emission zone above the black hole [9], constraining both internal shock [10] and magnetohydrodynamic [11] models. Similarities with blazars [12,13] suggest that jet structure and launching physics could potentially be unified under mass-invariant models. Two of the best-studied jetted black-hole binaries show very similar optical lags [8,14,15], so this size scale may be a defining feature of such systems.
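The link between the ~0.1 s optical lag and the quoted elevation of ≲10³ Schwarzschild radii is essentially light-travel time. A back-of-the-envelope version, assuming a ~10 solar-mass black hole (a typical value for a Galactic black-hole binary; the mass is not stated in the abstract):

```python
import math

c = 2.998e8       # speed of light, m/s
G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30  # solar mass, kg

# Distance light travels during the observed ~0.1 s optical lag.
lag_s = 0.1
distance = c * lag_s  # ~3e7 m

# Schwarzschild radius R_s = 2GM/c^2 for an assumed 10 M_sun black hole.
M = 10 * M_sun
r_s = 2 * G * M / c**2  # ~3e4 m

# Elevation of the optical emission zone in units of R_s.
elevation_in_rs = distance / r_s
```

The ratio lands right at the ~10³ R_s scale quoted in the abstract, which is why the measured lag translates directly into a size constraint on the jet's inner emission zone.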
Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
This research has made use of data from the NuSTAR mission, a project led by the California Institute of Technology, managed by the Jet Propulsion Laboratory, and funded by the National Aeronautics and Space Administration. We thank the NuSTAR Operations, Software and Calibration teams for support with the execution and analysis of these observations. This research has made use of the NuSTAR Data Analysis Software (NuSTARDAS) jointly developed by the ASI Science Data Center (ASDC, Italy) and the California Institute of Technology (USA), as well as the High Energy Astrophysics Science Archive Research Center. P.G. thanks the Science and Technology Facilities Council (STFC) for support (grant reference ST/J003697/2). ULTRACAM and V.S.D. are supported by STFC grant ST/M001350/1. P.G. thanks C.B. Markwardt, C.M. Boon, A.B. Hill, M. Fiocchi, K. Forster, A. Zoghbi and T. Muñoz-Darias for help and discussions. J.C. acknowledges financial support from the Spanish Ministry of Economy, Industry and Competitiveness (MINECO) under the 2015 Severo Ochoa Program MINECO SEV-2015-0548, and from the Leverhulme Trust through grant VP2-2015-04. T.R.M. acknowledges STFC (ST/L000733/1). J.M. acknowledges financial support from the French National Research Agency (CHAOS project ANR-12-BS05-0009), and D.A. thanks the Royal Society. S.M. acknowledges support from Netherlands Organisation for Scientific Research (NWO) VICI grant no. 639.043.513. We thank P. Wallace for use of his SLA C library. P.A.C. is grateful to the Leverhulme Trust for the award of a Leverhulme Emeritus Fellowship. Part of this research was supported by the UK-India UKIERI/UGC Thematic Partnership grants UGC 2014-15/02 and IND/CONT/E/14-15/355. This work profited from discussions carried out during a meeting organized at the International Space Science Institute (ISSI) Beijing by T. Belloni and D. Bhattacharya.
Electronic supplementary material
5 supplementary figures, 7 sections, 47 references | <urn:uuid:4a215a65-3cd7-4ec4-86c3-dca054ebf0f4> | 2.546875 | 907 | Truncated | Science & Tech. | 54.788081 | 95,624,945 |
Probability Density Function
The probability density function (also called a probability distribution function) shown above covers all possible values for Y, which in this case has infinitely many possibilities. For example, the random variable Y could equal 180 pounds, 151.2 pounds, or 201.9999999999 pounds.
We can use probability density functions to answer a question like:
What is the probability that a person will weigh between 150 lbs and 250 lbs?
Written in notation, the question becomes:
P(150 < Y < 250)
To answer the question, shade the area on the graph:
Then approximate the area. From looking at the shaded area, it looks like it’s about 75 percent.
So, P(150 < Y < 250) = 75%.
You can use the same technique for figuring out the probability for less than or greater than a certain number. Just shade in the area to the right of the number (for greater than) or to the left of the number (for less than).
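For a concrete version of this calculation, suppose the weights happened to follow a normal distribution with a mean of 180 lbs and a standard deviation of 30 lbs (illustrative parameters, not values read from the figure). The shaded area is then the difference of two cumulative distribution function (CDF) values, which can be computed with only the standard library:

```python
import math

def normal_cdf(x, mu, sigma):
    """CDF of a normal distribution, via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

mu, sigma = 180.0, 30.0  # assumed mean and standard deviation (illustrative)

# P(150 < Y < 250) = F(250) - F(150)
p = normal_cdf(250, mu, sigma) - normal_cdf(150, mu, sigma)
```

With these parameters the area comes out near 0.83, i.e. about an 83% chance: the same kind of answer the shaded-area estimate gives, but exact for the assumed curve.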
Use Caution When Reading Probability Density Function Graphs!
Probability functions are great for figuring out intervals (because then you have an area to measure). However, you have to use a little caution when reading probability density function graphs, especially when it comes to exact numbers. For example: what is the probability that a person weighs exactly 180 lbs? Written in notation, the question would be:

P(Y = 180)

Looking at the graph, you might think that the probability of a person weighing 180 lbs is about 50%. But that doesn’t make sense, right? That would mean half of all people weigh exactly 180 lbs! What you have to consider is that someone could weigh 180 pounds, or 180 pounds plus or minus a fraction of an ounce either way: they could weigh 180.00001 pounds or 179.999999999 pounds. In fact, the odds of someone weighing exactly 180 pounds are so tiny they are practically zero.
Another way to look at this is that if you drew the “area” for a question like this, it would actually just be a line. And a line has zero area!
TI 83 NormalPDF Function
The TI 83 normalPDF function, accessible from the DISTR menu will calculate the normal probability density function, given the mean μ and standard deviation σ. The function doesn’t actually give you a probability, because the normal distribution curve is continuous. However, you can use it to plot a bell curve and to find x-values and y-values for points on the curve.
TI 83 NormalPDF function: Steps
Watch the video or read the steps below:
Sample Problem: Graph a bell curve on the TI 83 calculator with a mean of 100 and standard deviation of 15. Use the NormalPDF function.
Step 1: Press Y=.
Step 2: Press 2nd VARS 1 to get “normalPDF.”
Step 3: Press the X,T,θ,n button, then the mean (100), then the standard deviation, 15. Close the parentheses.
Step 4: Press WINDOW.
Step 5: Change the window values to the following (type the values into the relative boxes):
- The x-min/x-max is set to the mean, minus/plus three standard deviations, as this is a bell curve so +/- 3 standard deviations will show the entire curve.
- Ymax uses the normalpdf function to determine the maximum y-value at the mean (the peak of the curve)
Step 6: Press GRAPH. The TI 83 will graph a normal distribution curve on your screen.
Step 7: Press TRACE and then type in any number to find the y-value. For this example, type 80 and then press ENTER.
That’s how to use the TI 83 NormalPDF!
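Away from the calculator, the same y-values can be computed directly from the normal density formula f(x) = (1/(σ√(2π)))·exp(−(x−μ)²/(2σ²)). Below is a minimal Python sketch (plain Python, not TI software) that reproduces the values in this example, μ = 100 and σ = 15:

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Normal probability density, matching the calculator's normalPDF(x, mu, sigma)."""
    coeff = 1.0 / (sigma * math.sqrt(2.0 * math.pi))
    return coeff * math.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2))

mu, sigma = 100.0, 15.0

# Window settings from the tutorial: mean plus/minus 3 standard deviations.
x_min, x_max = mu - 3 * sigma, mu + 3 * sigma   # 55.0 and 145.0
y_max = normal_pdf(mu, mu, sigma)               # peak of the curve

# The TRACE step for x = 80:
y_at_80 = normal_pdf(80.0, mu, sigma)

print(round(y_max, 4))     # 0.0266
print(round(y_at_80, 4))   # ~0.0109
```

The default arguments mu=0, sigma=1 mirror the calculator's defaults mentioned in the tip below the steps.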
Tip: A mean of zero and a standard deviation of one are the default values for a normal distribution on the calculator, if you don't set those values.
Lost your guidebook? Download a new one here from the TI website.
- Category: Climate Change
- Published: Thursday, 12 April 2018 00:08
- Written by Bill Jaynes
Patrick D. Nunn
Professor of Geography
University of the Sunshine Coast, Australia
23rd March 2018
People have been living on islands in Micronesia for as much as 3,500 years. We know that the first people arrived at Ritidian on Guam from the Philippines this long ago, and that their descendants have been in these islands ever since. But to listen to some discussions about future climate change and how vulnerable some islands in this region apparently are, you might justifiably wonder how people have survived for more than three millennia in this part of the world.
This summarises the point that sometimes ‘western’ science is not as well-informed as you might expect. In the last few decades, science has correctly identified a challenge for livelihoods everywhere in the world from climate change. Temperatures are rising, sea level is rising, the intensity and frequency of typhoons and droughts are changing, all of which pose complex challenges to the way people live, whether they be in Micronesia or Mexico, Pohnpei or Pakistan. Scientists use global models of the Earth’s climate to understand what is happening, how the complex climate system responds to particular ‘forcings’ and, in doing so, arrive at particular ‘projections’ of what may happen in particular places at certain times in the future.
RISING SEA LEVEL
For island countries, rising sea level is naturally a key concern. In Micronesia, where sea level is currently rising at 2-3 times the global average, scientists are thinking about how 21st-century sea-level rise might reconfigure coastal geographies, especially in low-lying coastal areas. Science has taken on the additional burden of advising countries like FSM and its neighbors how they should best prepare for and respond to such climate-driven changes. Global solutions suggest we might either ‘protect’ our shorelines, perhaps by building hard structures like seawalls; or we might ‘accommodate’ the effects of sea-level rise by rethinking the ways in which we use the coast; or we might ‘retreat’ from the shoreline, moving our activities and infrastructure to more secure locations.
These three options are often touted as new ways in which communities in countries like FSM should think about responding to sea-level rise, both now and in the future. Commonly such suggestions overlook the fact that people have lived in Micronesia for more than three millennia, during which time they have overcome climate changes, including swings of sea level up and down. How did this happen?
Prescribed Burning and Erosion Potential in Mixed Hardwood Forests of Southern Illinois
Abstract: Prescribed fire has several benefits for managing forest ecosystems, including reduction of fuel loading and invasive species and enhanced regeneration of desirable tree species. Along with these benefits there are some limitations, like nutrient and sediment loss, which have not been studied extensively in mixed hardwood forests. The objective of our research was to quantify the amount of sediment movement occurring on a watershed scale due to prescribed fire in a southern Illinois mixed hardwood ecosystem. The research site was located at Trail of Tears State Forest in western Union County, IL, USA and included five watershed pairs. One watershed in each pair was randomly assigned the prescribed burn treatment and the other remained as control (i.e., unburned). The prescribed burn treatment significantly reduced the litter depth, with 12.6%–31.5% of litter remaining in the prescribed burn treatment watersheds. When data were combined across all watersheds, no significant differences were obtained between burn treatment and control watersheds for total suspended solids and sediment concentrations or loads. The annual sediment losses varied from 1.41 to 90.54 kg·ha⁻¹·year⁻¹ in the four prescribed burn watersheds and 0.81 to 2.54 kg·ha⁻¹·year⁻¹ in the four control watersheds. Prescribed burn watershed 7 showed an average soil sediment loss of 4.2 mm, whereas control watershed 8 showed an average accumulation of sediments (9.9 mm), possibly due to steeper slopes. Prescribed burning did not cause a significant increase in soil erosion and sediment loss and can be considered acceptable in managing mixed hardwood forests of the Ozark uplands and the Shawnee Hills physiographic regions of southern Illinois.
Singh, G.; Schoonover, J.E.; Monroe, K.S.; Williard, K.W.J.; Ruffner, C.M. Prescribed Burning and Erosion Potential in Mixed Hardwood Forests of Southern Illinois. Forests 2017, 8, 112.
Northern Dwarf Siren, Gulf Hammock Dwarf Siren, Slender Dwarf Siren, Broad-striped Dwarf Siren
Photo: © 2001 John White
Pseudobranchus striatus (LeConte, 1824)
Paul E. Moler1
1. Historical versus Current Distribution. Northern dwarf sirens (Pseudobranchus striatus) occur in the lower Gulf and Atlantic Coastal Plains from Orangeburg County, South Carolina (Folkerts, 1971), south to central peninsular Florida (Hernando and Volusia counties; Moler and Kezer, 1993), then west to Baker and Lee counties, Georgia (Goin and Crenshaw, 1949), and Walton County, Florida (Moler and Thomas, 1982). Three subspecies are recognized (Moler and Kezer, 1993; Crother et al., 2000): Gulf Hammock dwarf sirens (P. s. lustricolus), slender dwarf sirens (P. s. spheniscus), and broad-striped dwarf sirens (P. s. striatus). The current range of northern dwarf sirens is unchanged from their historical range, although populations have been lost as wetland habitats have been reduced through drainage of surface waters associated with residential, agricultural, and silvicultural development.
2. Historical versus Current Abundance. Northern dwarf sirens are often common in suitable habitats, but such habitats have been reduced through drainage of surface waters, and current numbers are reduced relative to their historical abundance.
3. Life History Features.
A. Breeding. Reproduction is aquatic.
i. Breeding migrations. Do not migrate. Northern dwarf sirens breed and permanently reside in the same aquatic habitats.
ii. Breeding habitat. Northern dwarf sirens live and breed in cypress (Taxodium sp.) or gum (Nyssa sp.) ponds and other shallow, acidic, wetlands of the flatwoods.
i. Egg deposition sites. Eggs are laid singly or in small bunches among aquatic vegetation (Noble, 1930).
ii. Clutch size. Unknown.
C. Larvae. Not well known. Petranka (1998) notes that larvae are similar to those of southern dwarf sirens (Pseudobranchus axanthus). Ashton and Ashton (1988) indicate Pseudobranchus larvae may take 2 yr to reach sexual maturity.
D. Juvenile Habitat. Same as adult habitat.
E. Adult Habitat. Most often associated with cypress or gum ponds and other shallow, acidic wetlands of the flatwoods. Unlike southern dwarf sirens (P. axanthus), northern dwarf sirens are not normally found among water hyacinths, which are typically absent from acidic wetlands of the flatwoods. Northern dwarf sirens have been collected from similar floating mats of frog's-bit (Limnobium spongium; Moler and Kezer, 1993), but they more typically inhabit decaying bottom vegetation and the soft, mucky soils of pond margins (LeConte, 1824; Goin and Crenshaw, 1949).
F. Home Range Size. Unknown.
G. Territories. Unknown.
H. Aestivation/Avoiding Desiccation. Northern dwarf sirens burrow into bottom sediments when wetlands dry (Harper, 1935; Goin and Crenshaw, 1949). They will remain buried until their wetland refills.
I. Seasonal Migrations. None.
J. Torpor (Hibernation). Northern dwarf sirens remain buried in mud and bottom debris during cold weather.
K. Interspecific Associations/Exclusions. Northern dwarf sirens occur sympatrically, but only occasionally syntopically, with southern dwarf sirens in northern peninsular Florida. In areas of sympatry, northern dwarf sirens are typically found in more acidic habitats than are southern dwarf sirens, but they are known to occur syntopically at a few sites (Moler and Kezer, 1993).
L. Age/Size at Reproductive Maturity. Unknown. Collections from Okefenokee Swamp, Georgia, suggest that northern dwarf sirens mature in < 1 yr (B. Freeman, personal communication). A maximum size of 203 mm was noted by Moler and Mansell (1986).
M. Longevity. Unknown.
N. Feeding Behavior. Unknown, but probably similar to that of southern dwarf sirens.
O. Predators. Unknown. Southern banded water snakes (Nerodia fasciata), black swamp snakes (Seminatrix pygaea), mud snakes (Farancia abacura), and crayfish snakes (Regina sp.) are common associates and likely prey on northern dwarf sirens. Various species of wading birds are likely major predators on Pseudobranchus, especially when dwarf sirens are concentrated by falling water levels. Predaceous fish are also probable predators.
P. Anti-Predator Mechanisms. Flight. Pseudobranchus are slippery and thus may escape when gripped. When seized, Pseudobranchus may produce high-pitched squeaks, but the utility of vocalizations as an anti-predator mechanism is unknown.
Q. Diseases. Unknown.
R. Parasites. Unknown.
4. Conservation. The current range of northern dwarf sirens is unchanged from their historical range, although populations have been lost as wetland habitats have been reduced through drainage of surface waters associated with residential, agricultural, and silvicultural development. Northern dwarf sirens are often common in suitable habitats, but such habitats have been reduced through drainage of surface waters, and current numbers are reduced relative to their historical abundance. Northern dwarf sirens are considered Threatened in South Carolina (www.dnr.state.sc.us).
1Paul E. Moler
Literature references for Amphibian Declines: The Conservation Status of United States Species, edited by Michael Lannoo, are here.
Citation: AmphibiaWeb. 2018. <http://amphibiaweb.org> University of California, Berkeley, CA, USA. Accessed 22 Jul 2018.
Himalayan balsam (Impatiens glandulifera) is a non-native annual plant that was introduced into parts of Europe during the mid-nineteenth century as an ornamental plant for parks and gardens.
This plant species was first recognised as an invasive species and a threat to ecological stability in the 1930’s. However, since then the problem has escalated and is now of international concern, due to its negative impact on ecosystem biodiversity. This is primarily due to its ability to out-compete and overcrowd native vegetation. Himalayan balsam has now naturalised in many countries, resulting in a shift in management strategies from attempting to remove the invasive plant, to limiting its territory and further spread.
Alan Gange, Amanda Currie & Nadia Ab Razak (Royal Holloway, University of London), Carol Ellison, Norbert Maczey & Suzy Wood (CABI) and Robert Jackson & Mojgan Rabiey (University of Reading) discuss the threat of invasive species to biodiversity, including the biological control of Himalayan balsam
Invasive species are one of the main threats to biodiversity across the world, being second only to habitat destruction in causing biodiversity decline. In the UK, they cost the economy £1.7 billion annually, through costs of control, losses to agriculture and damage to infrastructure.
Building on CABI research into the biological control of Himalayan balsam (Impatiens glandulifera) using a rust fungus (Puccinia komarovii var. glanduliferae), a Natural Environment Research Council (NERC) funded collaboration between Royal Holloway, CABI and the University of Reading is investigating the role of the microbial community associated with the plant and how these microbes may be exploited to enhance biocontrol efficacy and aid in the recovery of invaded sites. It is hoped that the findings of the study may be applicable to biocontrol programmes more widely.
Himalayan balsam is one of the UK’s most widespread invasive weed species, colonising river banks, wasteland, damp woodlands, roadways and railways. Research by CABI scientists has shown local invertebrate biodiversity is negatively affected by the presence of Himalayan balsam. This leads to fragmented, destabilised ecosystems, which has serious consequences on processes and functioning, and complicates habitat restoration unless remedial actions are implemented. | <urn:uuid:9746727d-8dc2-4d03-92c7-a37cdfc9032b> | 3.578125 | 489 | News (Org.) | Science & Tech. | 9.359429 | 95,624,965 |
When NASA and the Japan Space Agency's Tropical Rainfall Measuring Mission or TRMM satellite passed over Tropical Storm Flossie, it measured rainfall rates occurring throughout the storm. TRMM noticed that the heaviest rainfall was occurring at a rate of 1.2 inches per hour north of the center. The heavy rain wrapped around the storm from the north to the east. Most of the remaining rainfall was light to moderate. Microwave satellite imagery shows some inner core features trying to form.
This image shows TRMM rainfall data laid over an infrared image from NOAA's GOES-15 satellite to show rainfall within Flossie's cloud cover. Heaviest rainfall (red) was falling north of the center. Image Credit: NRL/NASA/NOAA
An image created at the Naval Research Laboratory, Washington, combined the TRMM data with an infrared image from NOAA's GOES-15 satellite to show rainfall within Flossie's cloud cover. Rain was falling throughout much of the storm.
On Friday, July 26 at 8 a.m. PDT (1500 UTC) the center of Tropical Storm Flossie was located near latitude 16.1 north and longitude 132.3 west, almost midway between the southern tip of Baja California and Hilo, Hawaii (about 1,530 miles/2,465 km from each place). Maximum sustained winds were near 50 mph (85 kph). Flossie was moving to the west-northwest at 18 mph (30 kph) and had a minimum central pressure of 1,000 millibars.
As Flossie continues its westward track, the National Hurricane Center notes that residents of Hawaii should monitor the storm's approach.
Text credit: Rob Gutro
Rob Gutro | EurekAlert!
Local weather observations, soundings, and computer models, and data from satellites like GOES-13 give forecasters information about developing weather situations. The GOES-13 satellite data in animated form showed the forecasters how the area of severe weather was developing, helping to prompt watches and warnings.
The GOES-13 (Geostationary Operational Environmental Satellite) satellite is operated by the National Oceanic and Atmospheric Administration. NASA/NOAA's GOES Project at the NASA Goddard Space Flight Center in Greenbelt, Md. created the animation of GOES-13 satellite data that covered the period during the massive tornado outbreak.
The GOES animation of the severe weather outbreak is in a large-format HDTV movie that runs 30 seconds. "The animation runs through the period of April 14-15, 2012 and the GOES imagery reveals the strong flow of warm, moist air from the Gulf into the advancing cold front," said Dennis Chesters of NASA's GOES Project.
The destructive outbreak was Saturday night, April 14 to Sunday morning, April 15, and appears half way through the GOES video, when the long streak of clouds springs into view in the middle of the frame. Although there is not much detail in the infrared-only cloud tops, there is evidence of sudden violence.
Meteorologists had predicted the setup for severe weather days in advance. In fact, the NOAA Storm Prediction Center sent out alerts days ahead, warning residents of more than five states to be on guard for developing "extremely dangerous" or "catastrophic" weather conditions. The states included Nebraska, Kansas, Iowa, Oklahoma, Missouri, Texas, and Illinois.
As factors came together, the National Weather Service forecast this week's Great Plains Tornado outbreak 24 hours in advance, and gave prompt and urgent warnings that saved lives. Six fatalities were recorded, and there were 213 severe thunderstorm warnings and 124 tornado warnings.
Rob Gutro | EurekAlert!
Matz and his colleagues recently discovered the grape-sized protists and their complex tracks on the ocean floor near the Bahamas. This is the first time a single-celled organism has been shown to make such animal-like traces.
The finding is significant, because similar fossil grooves and furrows found from the Precambrian era, as early as 1.8 billion years ago, have always been attributed to early evolving multicellular animals.
"If our giant protists were alive 600 million years ago and the track was fossilized, a paleontologist unearthing it today would without a shade of doubt attribute it to a kind of large, multicellular, bilaterally symmetrical animal," says Matz, an assistant professor of integrative biology. "We now have to rethink the fossil record."
The team's discovery was published online today in Current Biology and will appear in a subsequent print issue.
Most animals, from humans to insects, are bilaterally symmetrical, meaning that they can be roughly divided into halves that are mirror images.
The bilateral animals, or "Bilateria," appeared in the fossil record in the early Cambrian about 542 million years ago, quickly diversifying into all of the major animal groups, or phyla, still alive today. This rapid diversification, known as the Cambrian explosion, puzzled Charles Darwin and remains one of the biggest questions in animal evolution to this day.
Very few fossils exist of organisms that could be the Precambrian ancestors of bilateral animals, and even those are highly controversial. Fossil traces are the most accepted evidence of the existence of these proto-animals.
"We used to think that it takes bilateral symmetry to move in one direction across the seafloor and thereby leave a track," explains Matz. "You have to have a belly and a backside and a front and back end. Now, we show that protists can leave traces of comparable complexity and with a very similar profile."
With their find, Matz and his colleagues argue that fossil traces cannot be used alone as evidence that multicellular animals were evolving during the Precambrian, slowly setting the stage for the Cambrian explosion.
"I personally think now that the whole Precambrian may have been exclusively the reign of protists," says Matz. "Our observations open up this possible way of interpreting the Precambrian fossil record."
He says the appearance of all the animal body plans during the Cambrian explosion might not just be an artifact of the fossil record. There are likely other mechanisms that explain the burst-like origin of diverse multicellular life forms.
DNA analysis confirmed that the giant protist found by Matz and his colleagues in the Bahamas is Gromia sphaerica, a species previously known only from the Arabian Sea.
They did not observe the giant protists in action, and Matz says they likely move very slowly. The sediments on the ocean floor at their particular location are very stable and there is no current—perfect conditions for the preservation of tracks.
Matz says the protists probably move by sending leg-like extensions, called pseudopodia, out of their cells in all directions. The pseudopodia then grab onto mud in one direction and the organism rolls that way, leaving a track.
He aims to return to the location in the future to observe their movement and investigate other tracks in the area.
Matz says the giant protists' bubble-like body design is probably one of the planet's oldest macroscopic body designs, which may have existed for 1.8 billion years.
"Our guys may be the ultimate living fossils of the macroscopic world," he says.
Dr. Misha Matz | EurekAlert!
A scalar theory of surface scattering phenomena has been formulated by utilizing the same Fourier techniques that have proven so successful in the area of image formation. An analytical expression has been obtained for a surface transfer function which relates the surface micro-roughness to the scattered distribution of radiation from that surface. The existence of such a transfer function implies a shift-invariant scattering function which does not change shape with the angle of the incident beam. This is a rather significant development which has profound implications regarding the quantity of data required to completely characterize the scattering properties of a surface. This theory also provides a straightforward solution to the inverse scattering problem (i.e., determining surface characteristics from scattered light measurements) and results in a simple method of predicting the wavelength dependence of the scattered light distribution. Both theoretical and experimental results will be presented along with a discussion of the capabilities and limitations of this treatment of surface scatter phenomena.
James E. Harvey, "Light-Scattering Characteristics of Optical Surfaces," Proc. SPIE 0107, Stray Light Problems in Optical Systems (26 September 1977); https://doi.org/10.1117/12.964594
The Mathematical Analysis of Electromagnetic Fields around Surface Cracks in Metals
The work described in this paper arises from a program for the detection and measurement of surface cracks in metals carried out at University College London. The instrument developed for the purpose, the Crack Microgauge, employs the a.c.p.d. (alternating current potential difference) method. An alternating electric current at a frequency of 6 kHz is applied to the specimen, and the instrument measures the voltage between the probe terminals, which are applied to the surface of the specimen. By examining the variation of the voltage readings with position on the surface and, in particular, the jump in readings obtained when the probe crosses the crack, the crack can be detected and features of its geometry deduced. The correlation between instrument readings and information about the crack geometry must be made by means of a theoretical model of the electromagnetic field produced in the crack neighborhood. The authors have been principally concerned with the study of this mathematical problem. In this paper we have attempted to bring together, in summary form, the most significant results arising from studies on several different projects.
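How voltage readings translate into crack geometry can be illustrated with the simplest one-dimensional model used in a.c.p.d. work: in the thin-skin limit, the alternating current follows the metal surface down one crack face and up the other, so the conduction path between the probe contacts lengthens from Δ to Δ + 2d for a surface-breaking crack of depth d. Comparing the voltage across the crack with a reference reading on uncracked material then gives d directly. This textbook one-dimensional interpretation is a hedged simplification for illustration, not the full field solution developed by the authors.

```python
def crack_depth_mm(v_crack_uV, v_ref_uV, probe_gap_mm):
    """One-dimensional thin-skin a.c.p.d. estimate of crack depth.

    v_crack_uV   : voltage with the probe straddling the crack
    v_ref_uV     : voltage over uncracked surface, same probe gap
    probe_gap_mm : contact spacing (Delta)

    Thin-skin model: the voltage is proportional to surface path length,
    which grows from Delta to Delta + 2*d across a crack of depth d, so
    d = (Delta / 2) * (v_crack / v_ref - 1).
    """
    if v_ref_uV <= 0:
        raise ValueError("reference voltage must be positive")
    return 0.5 * probe_gap_mm * (v_crack_uV / v_ref_uV - 1.0)

# Illustrative readings: with a 10 mm probe gap, 180 uV across the crack
# against 100 uV on plain material implies a crack about 4 mm deep.
depth = crack_depth_mm(180.0, 100.0, 10.0)
```

The model's main assumption — that the skin depth is much smaller than the crack depth — is exactly the issue examined in the skin-depth studies this chapter summarizes, which is why the full treatment requires the field model rather than this one-line formula.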
Keywords: Fatigue Crack, Crack Depth, Skin Depth, Normal Crack, Crack Geometry