The tides at Burncoat Head in Nova Scotia’s Minas Basin were officially recorded as the world’s highest, averaging 17.8 metres.
Photo: Scott Munn – Nova Scotia Tourism
AT 16 METRES or more – about five storeys high – the Bay of Fundy’s tides are the highest in the world. Each day along Fundy’s 1,200-kilometre coastline, the tides cover and then expose stony, red-sand beaches and some 3,000 hectares of biologically important mud flats. In the upper bay, the incoming water creates a tidal bore: a low, moving wave that goes upstream against the current in tidal rivers – and is fun to ride in a kayak or raft! Further down the bay you can see the famous Reversing Falls in Saint John, NB. Here, the Saint John River pours down through a rocky gorge at low tide and swirls back upstream over the same rocks with the rising tide.
At each tide, 160 billion tonnes of water are moved in and out of the entire bay, four times the combined flow of all the freshwater of the planet’s rivers. Fundy’s size and contours resonate almost perfectly with tidal periodicity to slosh all this water to exceptional heights. And where rocks or local topography form narrow passages in the bay, the tremendous volume of moving water results in powerful, sometimes dangerous, tide rips – in some places as fast as 10 knots (five metres per second).
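The speed figure quoted above can be sanity-checked with a one-line unit conversion. This is a minimal sketch; the constant follows from the definition of a knot (one nautical mile, 1,852 m, per hour):

```python
# Sanity check of the tide-rip speed quoted in the article.
# By definition, 1 knot = 1 nautical mile per hour = 1852 m / 3600 s.
KNOT_IN_MPS = 1852 / 3600  # ~0.514 m/s per knot

def knots_to_mps(knots: float) -> float:
    """Convert a speed in knots to metres per second."""
    return knots * KNOT_IN_MPS

# A 10-knot tide rip works out to roughly 5.1 m/s, matching the
# article's "five metres per second".
print(round(knots_to_mps(10), 2))
```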
For an engineer, it probably would not go unremarked that this awesome natural phenomenon could potentially be harnessed as a source of energy. Moreover, that energy would be renewable and essentially pollution free, and unlike solar and wind, the availability of tidal energy is easily and accurately predicted. It isn’t surprising that for the last few decades there have been serious but sporadic efforts to make tidal power a reality on Canada’s East Coast. With the implications of climate change growing ever more ominous, the Province of Nova Scotia is reinvigorating tidal power research with a view to expanding its practical applications, as are a number of other countries, companies and institutions.
Nova Scotia is already home to North America’s first and only tidal power station – 2014 marks the 30th anniversary of the start of operations at Nova Scotia Power’s Annapolis Tidal Power Plant. The 20-megawatt (MW) facility produces some 30 million kilowatt-hours (kWh) of electricity per year, enough to power more than 4,000 homes. This is quite small in scale, considering the theoretical 2,500 MW that could be harvested from the full power of Fundy’s tides.
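The Annapolis numbers imply some figures the article does not state directly. The capacity factor and per-home consumption below are my derivations from the quoted 20 MW, 30 million kWh and 4,000 homes, not values from the text:

```python
# Figures implied by the Annapolis plant numbers in the article.
CAPACITY_MW = 20
ANNUAL_OUTPUT_KWH = 30_000_000
HOMES_POWERED = 4_000
HOURS_PER_YEAR = 8_760

# Capacity factor: actual annual output as a fraction of what the
# plant would produce running flat-out all year. Tidal generation only
# runs on the outgoing tide, so a low figure is expected.
capacity_factor = ANNUAL_OUTPUT_KWH / (CAPACITY_MW * 1_000 * HOURS_PER_YEAR)
print(f"capacity factor: {capacity_factor:.0%}")       # roughly 17%

# Implied average consumption per home served.
kwh_per_home = ANNUAL_OUTPUT_KWH / HOMES_POWERED
print(f"per home: {kwh_per_home:,.0f} kWh/year")       # 7,500 kWh/year
```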
All human activities have some environmental consequences, and even the modest-sized Annapolis power station has had local impacts. The plant employs the Annapolis River’s tidal basin as a head pond, using a barrage to hold back the flood tide waters. Set in the barrage are gates through which the tide comes in. As the tide recedes, the water is released and directed through a massive straight-flow turbine, producing electricity for five hours during the outgoing tide. The main environmental effects of this are the impacts of scouring and silt deposition on natural flow and water levels, as well as occasional incidents involving marine life.
Nova Scotia Power's experimental OPENHydro turbine en route to deployment.
Photo: Fundy Ocean Research Centre for Energy (FORCE)
After the Annapolis plant began operations and experience with the barrage technology grew, environmental concerns about the possibility of similar but much larger Fundy tidal facilities also increased. Issues included the potential for major silting and scouring, the prospect of reduced salinity in the basin, as well as effects on fish migration and other sea life. Many ecologists were especially worried about the barrage flooding of Fundy’s extensive tidal mud flats, which are the critical stopover feeding grounds for millions of shorebirds during migration.
Consequently, rather than concentrating on scaled-up barrage technology with its anticipated environmental impacts, researchers have spent the last decade focused on developing arrays of underwater turbines to tap fast-flowing tidal currents. But the Bay of Fundy is not easily tamed. Several experimental turbine designs, such as Nova Scotia Power’s OPENHydro device, have failed, sometimes within weeks, ripped apart by the speed and power of the tides.
Nevertheless, the advantages of this technological approach are clear, and the promise of a local and renewable energy source – free of carbon emissions – continues to inspire interest. Considerable research and development time is needed for this turbine technology, not least to bring costs in line with other renewables. Committed activity is a prerequisite for that to happen, and Nova Scotia’s 2012 Tidal Energy Strategy foresees 15 to 20 MW of additional capacity as an early objective, with an ultimate aim of up to 300 MW.
In Europe, there are a number of countries and technology companies now working on tidal power in locations like Scotland, which has targets for 100-per-cent renewable electricity by 2020, and France, which has operated a 240-MW tidal power station at La Rance for many years.
In Nova Scotia, the renewed attention has led down two different paths. One response is the establishment of a major new test facility, the Fundy Ocean Research Centre for Energy (FORCE), funded by the Province, Ottawa, Encana Corporation and several tidal technology companies, with links to researchers at Acadia and other universities. Part of the rationale for such an important new facility is to be able to test technology against “The Fundy Standard” – if it works there, it can work anywhere.
FORCE is starting operations to rent three offshore berths in Minas Basin for observation and testing of large in-stream turbines, submarine cables and grid connections, as well as doing environmental monitoring. Projects that were awarded berths by the provincial government in March 2014 are from Black Rock, a Halifax company with a number of European partners, and OPENHydro – a collaboration involving OPENHydro’s French parent company DCNS, Nova Scotia Power’s parent company Emera, and various Irving enterprises. The third berth is already held by Minas Energy, which in March also announced a new partnership with Bluewater to create an advanced flotation system using Siemens turbines.
The other approach in Nova Scotia is being led by Fundy Tidal Inc., a community-based company established on Brier Island in 2006 to undertake work on much smaller-scale projects to generate local power from tidal currents. Fundy Tidal’s focus is on community ownership and benefits, and it has recently been awarded three sites for Community Feed-In Tariff (COMFIT) projects. Nova Scotia’s COMFIT program currently mandates standard tariffs for electricity from approved, small-scale renewable energy sources using wind, run-of-the-river hydro and in-stream tidal technologies.
These projects must be community-based; eligible owners include municipalities, Mi’kmaw band councils, universities, co-ops, not-for-profit groups and community economic development organizations. The independent provincial Utility and Review Board (UARB) determines the tariff for the different COMFIT technologies, setting the rate for in-stream tidal projects at $0.652/kWh. These small tidal projects will be connected to their local distribution systems within the provincial electricity grid through an agreement with Nova Scotia Power.
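The COMFIT tariff makes project revenue easy to estimate. In the sketch below, only the $0.652/kWh rate comes from the article; the 500 kW project size and 30 percent capacity factor are hypothetical assumptions chosen purely for illustration:

```python
# Illustrative revenue arithmetic for a small COMFIT in-stream tidal
# project. Tariff is the UARB rate from the article; the project size
# and capacity factor are hypothetical assumptions.
TARIFF_PER_KWH = 0.652   # CAD, UARB rate for in-stream tidal
project_kw = 500         # hypothetical installed capacity
capacity_factor = 0.30   # hypothetical fraction of full output achieved
hours_per_year = 8_760

annual_kwh = project_kw * capacity_factor * hours_per_year
annual_revenue = annual_kwh * TARIFF_PER_KWH
print(f"{annual_kwh:,.0f} kWh/year -> ${annual_revenue:,.0f}/year")
```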
In addition to these two approaches – large-scale and local – Nova Scotia has also been developing international connections in the field. These include a Memorandum of Understanding to share knowledge and best practices, and to facilitate student exchanges on tidal power research with the UK – in particular involving Dalhousie University and the Energy Systems Unit at the University of Strathclyde in Glasgow, which has a test facility in the Orkney Islands. Additionally, in November 2014, the first International Conference on Ocean Energy held in North America will take place in Nova Scotia. This small province is a logical location – it currently has more than 200 ocean technology firms, the largest concentration in North America. It thus may yet be that Fundy’s tides – once only a tourist spectacle – give rise to a form of renewable energy.
Projected increases in the frequency, intensity and duration of heatwaves in the desert of the southwestern United States are putting songbirds at greater risk for death by dehydration and mass die-offs, according to a new study.
Researchers used hourly temperature maps and other data produced by the North American Land Data Assimilation System (NLDAS) -- a land-surface modeling effort maintained by NASA and other organizations -- along with physiological data to investigate how rates of evaporative water loss in response to high temperatures varied among five bird species with differing body masses.
Using this data, they were able to map the potential effects of current and future heat waves on lethal dehydration risk for songbirds in the Southwest and how rapidly dehydration can occur in each species.
Researchers homed in on five songbird species commonly found in the desert southwest: lesser goldfinch, house finch, cactus wren, Abert's towhee and the curve-billed thrasher.
Under projected conditions where temperatures increase by 4 degrees Celsius (7 degrees Fahrenheit), which is in line with some scenarios for summer warming by the end of the century, heatwaves will occur more often, become hotter, and expand in geographic range to the point where all five species will be at greater risk for lethal dehydration.
Birds are susceptible to heat stress in two ways, said co-author Blair Wolf, a professor of biology at the University of New Mexico. With funding from the National Science Foundation, Wolf investigated heat tolerance for each of the five species in the study as well as for other bird species in Australia and South Africa. "When it's really hot, they simply can't evaporate enough water to stay cool, so they overheat and die of heat stroke," he said. "In other cases, the high rates of evaporative water loss needed to stay cool deplete their body water pools to lethal levels and birds die of dehydration. This is the stressor we focused on in this study."
What happens is at about 40 degrees Celsius [104 degrees Fahrenheit], these songbirds start panting, which increases the rate of water loss very rapidly, explained co-author Alexander Gerson, an assistant professor of biology at the University of Massachusetts-Amherst. At the time of the study, he worked with Wolf as a postdoctoral researcher at the University of New Mexico. He added, "Most animals can only tolerate water losses that result in 15 or 20 percent loss of body mass before they die. So an animal experiencing peak temperatures during a hot summer day, with no access to water, isn't going to make it more than a few hours."
As expected, they found that the small species are particularly susceptible to lethal dehydration because they lose water at a proportionately higher rate. For example, at 50 degrees Celsius [122 degrees Fahrenheit], the lesser goldfinch and the house finch lose 8 to 9 percent of their body mass to evaporative water loss per hour, whereas the larger curve-billed thrasher only loses about 5 percent of its mass per hour. By the end of the century, the number of days in the southwest desert where lethal dehydration poses a high risk to the lesser goldfinch increases from 7 to 25 days per year. For larger species, those days will also increase, but will remain rare.
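The loss rates and lethal thresholds quoted above imply the "no more than a few hours" survival window directly. This back-of-the-envelope sketch uses the article's figures, taking the conservative 15 percent end of the lethal range and the midpoint of the 8 to 9 percent loss rate:

```python
# Survival times implied by the article's figures: death occurs after
# roughly 15-20% body-mass loss; at 50 C small species lose 8-9% per
# hour and the curve-billed thrasher about 5% per hour.
LETHAL_LOSS = 0.15  # conservative end of the 15-20% lethal range

def hours_to_lethal_dehydration(loss_rate_per_hour: float) -> float:
    """Hours until cumulative water loss reaches the lethal threshold,
    assuming a constant loss rate and no access to water."""
    return LETHAL_LOSS / loss_rate_per_hour

for species, rate in [("lesser goldfinch", 0.085),      # midpoint of 8-9%/h
                      ("curve-billed thrasher", 0.05)]:
    print(f"{species}: ~{hours_to_lethal_dehydration(rate):.1f} h")
```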
Despite their physiological disadvantage, house finches and lesser goldfinches might actually fare comparatively better, the researchers noted, because they can survive in a number of ecosystems and they have a more expansive range. But desert specialists such as the curve-billed thrasher and Abert's towhee have more specific habitat needs and so have a more limited range, restricted in the United States mostly to the hot deserts of the Southwest. That means that a greater proportion of their population is at risk for lethal dehydration when severe enough heatwaves occur.
"When you get into a situation where the majority of the range is affected, that's where we start to become more alarmed at what we are seeing," said lead author Tom Albright from the University of Nevada, Reno, noting that this increases the risk of lethal dehydration affecting a large proportion of the population.
According to the researchers, given this warming scenario, climate refugia -- microclimates such as mountaintops, trees and washes with shade that allow songbird body temperatures to cool to safe levels -- might prove very important in management plans for certain vulnerable species. "Using this type of data, managers identifying the best refugia can have a better idea of the temperature profile that will be suitable for these birds," Gerson said.
This research is part of a global effort among researchers from the US, South Africa and Australia to more thoroughly understand the physiological responses of birds to increasing temperatures, with the goal of broadening our understanding of how rising temperatures will affect individuals, populations, and community structure. A three-year, $350,000 NASA New Investigator award also funded University of Nevada, Reno-based modeling aspects of this research.
Samson Reiny | EurekAlert!
Jupiter and Venus will pair up in the sky on Monday morning, shining brightly together shortly before sunrise.
The two planets will appear so close together that they may look like they are just one bright star rather than two planets.
This is the closest these two planets will appear all year, an astronomical event known as a conjunction.
Venus and Jupiter will rise together about one hour before sunrise in the eastern sky, but they will remain low on the horizon.
The planets will be visible for around an hour before the light from the rising sun becomes too bright to spot the planets.
A telescope isn’t needed to see the planets as they are two of the brightest planets in the night sky. However, a telescope or a pair of high-powered binoculars focused on Jupiter will reveal the planet's colorful bands of clouds, as well as its four largest moons.
Those planning to use a telescope or binoculars during the event should use caution since the sun will be rising in the same part of the sky as Venus and Jupiter. Looking at the sun through a telescope or binoculars can lead to serious eye damage without the proper solar filters.
Mars will also be visible to the east before sunrise on Monday, appearing about halfway between Jupiter and the crescent moon. Mars is visible with the naked eye and appears slightly red compared to the stars around it.
The best viewing conditions in the United States early on Monday morning will be across the Mississippi Valley and over the interior West where skies will be mainly clear.
Clouds will obscure the conjunction for many other areas of the country; however, there may be enough breaks in the clouds in the central and northern Plains and the Southwest for people to spot the planets.
Venus and Jupiter will still appear close to each other on the days leading up to and following the conjunction, so if clouds block the event on Monday, onlookers can wake up early to view them before sunrise on another day.
However, the two planets won’t be quite as close as they will appear on Monday in the morning sky on days before and after the conjunction.
As Jupiter and Venus drift apart in the sky over the next several weeks, Jupiter and Mars will gradually appear closer together.
The two planets will eventually meet in the sky on Jan. 6, 2018, before sunrise, appearing almost as close as Jupiter and Venus will on Monday morning.
The conjunction of these two planets will be visible to the southeast for several hours before sunrise and will be much higher in the sky.
The next conjunction of Venus and Jupiter will not happen until Jan. 22, 2019.
By: Rohini Prasad Devkota
76 pages, b/w photos, b/w illustrations, tables
Climate change is a global phenomenon, driven by emissions of greenhouse gases, mainly from fossil fuel combustion and deforestation. It may have far-reaching harmful consequences: changes in rainfall patterns might seriously affect food production; floods and droughts could become more frequent and may accelerate emigration; and many human and ecosystem services could be seriously affected by shifts in temperature and rainfall patterns. The research described here attempted to evaluate and quantify the greenhouse gases emitted from different points of wastewater treatment systems under various temperatures and oxygen levels.
The use of thermodynamic power cycles in space results in much higher conversion efficiencies than the traditional solar cell or thermoelectric couple. This has many beneficial consequences in both solar and nuclear applications. The 20% to 30% cycle efficiency reduces the solar energy collection area significantly, thereby reducing size, weight and drag for low earth orbit missions such as the Space Station. For nuclear fueled systems, the 4 to 5 fold increase in conversion efficiency over thermoelectrics reduces the amount of fuel needed, thereby reducing weight, size, cost and hazard.
Two competing dynamic cycles, the Organic Rankine Cycle (ORC) and the Closed Brayton Cycle (CBC), are being developed by NASA LeRC for solar dynamic systems on the Space Station and by DOE for the U.S. Air Force. For each application (solar or nuclear), the basic cycles are similar. The major variable is power level. The solar dynamic systems being considered are in the 20 to 40 kWe range. Nuclear reactors can be used as the energy source from 10 kWe on up. Radioisotopes are best suited for the 1 to 10 kWe range.
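The fuel-mass argument in the opening paragraph follows from a simple relation: for a fixed electrical output, required thermal input (and hence radioisotope fuel) scales as one over conversion efficiency. The 5 percent thermoelectric and 25 percent dynamic-cycle efficiencies below are representative assumptions consistent with the "4 to 5 fold" claim, not values stated in the abstract:

```python
# Sketch of why cycle efficiency drives fuel requirements.
# Assumed efficiencies: ~5% for thermoelectric couples, ~25% for a
# dynamic cycle (within the 20-30% range quoted in the abstract).
def thermal_power_kw(electric_kw: float, efficiency: float) -> float:
    """Thermal input needed to deliver a given electrical output."""
    return electric_kw / efficiency

ELECTRIC_KW = 10  # top of the radioisotope-suited range in the text
thermoelectric = thermal_power_kw(ELECTRIC_KW, 0.05)  # ~200 kWt
dynamic_cycle = thermal_power_kw(ELECTRIC_KW, 0.25)   # ~40 kWt
print(f"thermal (fuel) reduction: {thermoelectric / dynamic_cycle:.0f}x")
```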
As part of the development process for the 1 to 10 kWe sized systems, Grumman has conducted technology demonstrations of critical components of both competing cycles under funding from the U.S. Air Force Space Division.
This paper describes the respective critical components, their function, operation, verification test philosophy, test hardware and test results.
Forest gumption: How scientists are tapping everything from drones to pruning shears to stem global warming
SEARCH FOR SOLUTIONS
One method of stemming greenhouse gases – by pruning excessive undergrowth that prevents forests from flourishing – is one of a slew of quixotic ideas being worked on by scientists and researchers around the world to help solve what could be the dominant issue of the next 100 years.
Kilombero Valley, Tanzania
Andy Marshall, a biologist, yanks on the steering wheel of a battered Nissan station wagon and swings it off a track in the Kilombero Valley of southern Tanzania. Rain from the night before has left hubcap-deep puddles across the road. Mr. Marshall downshifts, swerves onto a recently harvested field of sugar cane, and parks on the furrows. The Nissan shudders for an instant before going quiet.
The biologist – a researcher on the staffs of the University of the Sunshine Coast in Queensland, Australia and the University of York in England – and three Tanzanian villagers slog a short distance through dirt clods and stubble toward a tall leafy wall of deep green: the Magombera Forest. Cradled at the base of the Udzungwa Mountains, the Magombera is one of the most biologically diverse habitats in Africa. Many large mammals, birds, and reptiles inhabit the emerald woods, including elephants, waterbucks, buffaloes, bush pigs, wart hogs, aardvarks, porcupines, and three monkey species. Marshall himself has discovered a new species of chameleon here: the Kinyongia magomberae. An unusual mixture of East African trees normally not found together shade the forest floor. The canopy towers 100 feet above the ground.
Until recently, the Magombera carpeted about six square miles of mostly flat land in the valley. But in the past three decades, half of the forest plain has been cleared, primarily for farming. The jungle that remains has been seriously degraded – selectively logged for construction timber – leaving gaping holes in the high, green canopy.
Marshall wants to patch Magombera’s wounds. The unnatural holes in the forest’s fabric lessen the trees’ capacity to soak up and store carbon dioxide, the gas that’s warming the planet and turning the weather chaotic. Forest gaps also reduce the jungle’s suitability for some of its rare wildlife. If only he can cure this small woodland’s ills, Marshall says, his method might then revive millions more acres of unhealthy forest around the world – and perhaps make a significant contribution to slowing global warming. There’s “huge potential,” he says. He needs only one simple tool: a sharp machete.
Marshall’s method of stemming greenhouse gases – by pruning excessive undergrowth that prevents forests from flourishing – is one of a slew of quixotic ideas being worked on by scientists and researchers around the world to help solve what could be the dominant issue of the next 100 years.
While most of the attention in curbing global warming focuses on lowering emissions, many people are trying to solve the problem from the other side – by preserving the “lungs of the Earth” that absorb and sequester harmful gases. Though some of the initiatives may be more notional than forest-ready, experts believe it will ultimately take a host of different approaches to avert worsening superstorms and to keep rising seas from coursing through coastal cities from Miami to Ho Chi Minh City, Vietnam.
Every year humans disgorge 36 billion metric tons of carbon dioxide – almost enough to fill up all of the Great Lakes – out of tailpipes and smokestacks. Fortunately, only about half of this planet-insulating gas stays in the atmosphere. Otherwise, Earth would be warming at an even faster rate. Ocean water and vegetation on land absorb the other half, in equal parts. Forests alone soak up one-quarter of the torrent of CO2 that people pump into the air. “We are talking about a free 25 percent emissions reduction,” says Scott Denning, a climate scientist at Colorado State University in Fort Collins. “It’s awesome.”
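The carbon bookkeeping in the paragraph above can be made explicit. All fractions come directly from the text (half stays airborne; oceans and land vegetation absorb the rest in equal parts, with forests accounting for essentially all of the land share):

```python
# Annual CO2 budget as described in the paragraph above.
EMISSIONS_GT = 36.0  # billion metric tons of CO2 per year (from the text)

fractions = {
    "stays in atmosphere": 0.50,
    "absorbed by oceans": 0.25,
    "absorbed by land vegetation (mostly forests)": 0.25,
}

for sink, frac in fractions.items():
    print(f"{sink}: {EMISSIONS_GT * frac:.0f} Gt CO2/yr")

# The fractions must account for all emissions.
assert abs(sum(fractions.values()) - 1.0) < 1e-9
```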
Preserving the health of forests is one of the best ways to slow global warming, says Professor Denning, especially in the band of productive tropical jungle that encircles the globe from the Amazon to Central Africa, through Southeast Asia and Indonesia. But humans are doing the opposite. They’re clearing these forests at a furious clip. In the thousands of years since humans discovered fire and invented agriculture and axes, people have chopped down, burned off, and cleared away a third of the woods that once carpeted the earth. The world has lost a forested area twice the size of the United States. After accelerating for centuries, the rate of forest loss has slowed slightly in recent decades. Still, every year loggers and farmers cut down a West Virginia-size area, almost all in tropical South America, Africa, and Asia.
In 2014, diplomats from 36 countries, including the US, many European nations, and Japan, signed the New York Declaration on Forests, an agreement intended to halt deforestation by 2030. They pledged to restore and reforest 865 million acres – an area larger than India – as well. That is a monumental logistical challenge. “2030 is only 12 years away,” says Stephen Elliott, a biologist at Thailand’s Chiang Mai University. Mr. Elliott, director of his university’s Forest Restoration Research Unit, is among the hundreds of scientists and policymakers around the world looking for ways to renew the vitality of land degraded by wholesale and selective logging, and protect the endangered woods that remain.
The world hears about advances in driverless car technology every day, and Elliott says the same autonomous navigation techniques might someday help to achieve the ambitious objectives of the New York Declaration. “I don’t think we can do an area the size of India by 2030 manually,” he says.
By “manually,” Elliott means how people restore forests today. He says that in Thailand, and in most other tropical countries, forest crews work with tools and apply techniques that would be familiar to their ancestors, “using Iron Age hoes and Stone Age baskets.” Sturdy farmworkers haul heavy hampers of nursery-raised saplings into clearings. They insert root balls into shallow holes cut through unyielding soil mats.
Such backbreaking work is expensive, even where labor is cheap. It’s slow, too. Farmers and ranchers already occupy the most accessible, easily worked parcels – flat areas near roads. Politically and economically, these plots are not open for reforestation. Roads don’t go where most of the available land is. Steep slopes, untamed rivers, and other obstacles also hinder access, multiplying the difficulty and expense.
Reforestation is “the only agricultural and horticultural activity that hasn’t been automated,” Elliott says. In 2015, he set out to change that, with help from autonomous drones. He invited an interdisciplinary group of 80 scientists and engineers from around the world to meet up in northern Thailand where he studies reforestation methods.
They bantered and brainstormed for four days about how drone squadrons might reconnoiter over restoration plots, pluck seed-laden fruit from treetops, shoot those seeds into the soil, and care for the seedlings that later emerged. Freewheeling discussions on how aerial robots could cut down fruit with mini chain saws, ferry this harvest in nets, and ward off rodents with urine-soaked cat litter, lived up to a conference slogan: “The craziest ideas are best.” “It was the best fun I’ve ever had,” says Elliott.
Many researchers are withholding judgment about the potential for drone restoration of forests, though. Robin Chazdon, a biologist at the University of Connecticut in Storrs, says Elliott’s idea for robotic weeding “raised my eyebrows a bit.” Professor Chazdon edited a 2016 paper in the journal Biotropica in which Elliott laid out his ideas. “There are a lot of issues that remain to be worked out,” she says. Not the least of these are how to induce air-dropped seeds to germinate and how to repel seed-hungry herbivores.
Other ways to preserve the carbon sequestering ability of forests focus on preventing the trees from being cut down to begin with. This isn’t easy, either. Any attempt to silence chain saws and the thwack of axes must answer to a litany of powerful interests craving new land and fresh wood. Farmers want more acreage for crops. Ranchers want new pastures. Developers want lots to build on. And both timber companies and small-scale loggers want lumber.
Farmers in the Hoima and northern Kibaale districts in western Uganda are clearing trees – mostly for subsistence farming and to sell wood for timber and fuel – faster than almost anywhere else on Earth. In 2011, Seema Jayachandran, an economics professor at Northwestern University in Evanston, Ill., began an unusual investigation: how to entice small landowners in Uganda into protecting land, not clearing it.
With collaborators in the US, Belgium, and the Netherlands, Professor Jayachandran recruited 1,099 Ugandan families for a study of whether modest cash rewards could sway them. Her results, published last July in the journal Science, have attracted worldwide attention from forest restoration experts.
In the past two decades, farmers in many countries have been offered such payments to refrain from clearing jungle. The idea has been tried in countries as far-flung as Vietnam and Costa Rica.
Scientists disagree about how well these efforts work. One problem has been that even if land clearing slows down after payments, how can researchers be sure that the reimbursements, and not other factors, caused the change? Moreover, skeptics suggest that payment programs might simply shift cutting to other locations, with no net benefit.
Jayachandran’s experiment was novel. Unlike in previous reimbursement programs for forest protection that had been studied, landowners were selected at random either to receive payments or not, creating two groups for comparison. Half of the landowners received about $12 per year for each acre they left alone. Through a local conservation group, the scientists spot-checked parcels. The research team also monitored forest cover with high-resolution satellite photos.
Jayachandran started the experiment doubting that payments could substantially reduce tree cutting. So she was surprised when a research assistant emailed her a table with preliminary findings. “Wow, this program is having a big impact,” she thought. Later verification proved that the small payments had slowed cutting substantially. People who received no money cleared 9 percent of their land in two years of observation. People paid to leave their land untouched cleared only 4 percent. The program had reduced deforestation by more than 50 percent.
Moreover, the team showed that the program was an economical way to fight global warming. Trees owned by families who received the program’s cash rewards stored 250 tons per year more CO2 than woodlands owned by neighbors who had not been chosen to receive the payments, at a cost of $105. The climate benefit could be ephemeral, unless payments continue. Still, Jayachandran has calculated that paying Ugandans to protect their land in perpetuity costs much less than putting up a solar panel or a windmill in the US with an equivalent CO2 reduction.
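The article's figures can be checked with simple arithmetic. The sketch below is a reader's back-of-envelope recomputation using only the numbers quoted in the text above, not data from the underlying Science paper:

```python
# Back-of-envelope check of the figures reported above; all inputs are
# taken from the article's text, not from the underlying study data.

control_cleared = 0.09   # share of land cleared over two years, no payment
treated_cleared = 0.04   # share cleared by landowners who were paid

reduction = (control_cleared - treated_cleared) / control_cleared
print(f"relative reduction in clearing: {reduction:.0%}")   # 56% -- "more than 50 percent"

# 250 extra tons of CO2 stored per year at a reported cost of $105
print(f"implied cost: ${105 / 250:.2f} per ton of CO2 per year")
```

The 9 percent versus 4 percent comparison works out to roughly a 56 percent relative reduction, consistent with the article's "more than 50 percent."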
Alex Pfaff, an economics professor at Duke University in Durham, N.C., praises the Uganda study for proving that “there is potential” for reducing deforestation by paying landowners not to cut trees. But he doesn’t think that means the idea will succeed everywhere. He says payments work best where a lot of forest is being cut, but not at great profit (such as in hilly terrain), and where somebody carefully monitors compliance. Otherwise, he says, people who have no plans for clearing forest might get paid for doing nothing. Or, conversely, they might get paid and then chop down forest anyway.
Professor Pfaff studied one of the world’s longest-running attempts to protect tropical forests with cash rewards, Costa Rica’s Payment for Environmental Services program. Like several other experts who’ve examined Costa Rica, he found that two decades of payments – totaling tens of millions of dollars – have saved very little forest. The payments “didn’t do much.” Deforestation declined, he says, but for other reasons.
In the Tanzanian wild, Marshall wants forests to heal themselves, with only a little help from humans. He’s cutting skeins of lianas, a variety of vine with a woody stalk – the “kind that Tarzan swings on,” says Marshall.
They are native to the Magombera woods and other degraded forests that Marshall hopes to help. They often proliferate after a logger makes a clearing – stymieing regrowth of some of the world’s most lush forests and preventing trees from playing their role as Earth’s burial ground for carbon.
Marshall notes that between one- and two-thirds of all tropical forest land on the planet has suffered some form of abuse. Careful tending of ailing patches could substantially boost tree productivity. He is determined to prove his case in the Magombera.
He steps from the sugar cane plantation where he parked, baking under a sulfurous sun, into the forest. In only a few paces, the air cools noticeably and the light dims. The scientist and three helpers form a single file as they tramp deeper into the woods. Marshall wears a pair of scuffed black boots. Dirt from weeks of fieldwork clings to his pants.
“Ants!” one of the villagers yells in Swahili. A column of driver ants marches across the path. The size of a rice grain, a driver ant can bite with powerful jaws. The team starts jogging, stomping the ground with each step to prevent the insects from clinging to their shoes.
Over the years, Marshall has learned to identify most of the 500 plants in Magombera by their fruit, leaves, and flowers. “Sniff this,” he says, scraping bark from a sapling. “It smells like raw carrots.”
The four arrive at a red plastic triangle standing atop a stake in the soil. It marks a corner of one of Marshall’s 40 research plots. In a tennis-court-size clearing, a twisted sapling attracts his attention. It had grown straight like a tree then doubled over vinelike back toward the ground. What is it?
Sometimes lianas look like trees. He snaps a pencil-thick branch in two. It’s a tree, Xylopia holtzii, he announces. Liana branches are stringy and don’t break cleanly.
The distinction between liana and tree is central to Marshall’s research on how to revive degraded jungle areas such as this opening in the woods. Locals probably cleared the trees decades ago. A thick mass of leaves growing on coiled liana stalks carpets the glade now. The green, living lid shades everything below, Marshall says, stalling forest regrowth. “You can imagine what a tree has to go through to break through that.”
Research elsewhere grounds Marshall’s project. Scientists have long known that vines slow forest growth. More recently, biologists have put numbers on how much more carbon vine-free forests contain.
Stefan Schnitzer, a biology professor at Marquette University in Milwaukee, says that lianas ascend into forest heights, freeloading on the scaffolding of carbon-storing tree trunks. The size of their crowns often far exceeds that of their host trees. Like gluttonous guests who refuse to leave once they’ve arrived, lianas suffocate the trees that host them. But liana trunks, far less hefty than those of trees, store pitifully little CO2, making forests webbed with the vines less efficient at hoarding the planet-warming gas.
Several years ago, Professor Schnitzer and five workers severed the trunks of every liana in 12 acres of a forest in Panama. Freed of shading and strangling vines, the trees bulked up. “It was stunning,” says Schnitzer. Three years into the experiment, the liana-free jungle was accumulating carbon in its trunks and leaves nearly twice as fast as nearby unpruned woods.
Marshall proposes putting such findings to practical – and, eventually, extensive – use. He recently published the results of his own pilot study in the Magombera Forest. He’d clipped lianas in an area of degraded jungle the size of a suburban front yard. The trees burgeoned. After five years the same land had stored eight times as much carbon as nearby control areas. Marshall says his initial results suggest that cutting lianas in a degraded forest is as effective for sequestering carbon as reforestation. And killing lianas costs 1/50th as much, he says.
Widespread liana removal awaits larger-scale and longer-term trials. Schnitzer, whose research inspired Marshall, agrees that liana removal could help degraded forests store more carbon. But he fears unintended side effects of industrial-scale liana trimming. “You’re killing part of the community and you don’t know what the ramifications are,” he says.
On a walk through his research plots in Panama earlier this year, Schnitzer described an example. His colleague Steve Yanoviak, a biology professor at the University of Louisville in Kentucky, noticed that the population of one ant species increased after crews had cleared lianas.
Professor Yanoviak speculates that anteaters, predators of this arboreal ant, can’t climb into canopies to raid the insects’ nests without lianas. Even bigger impacts on denizens of the woods might remain to be discovered. This is to say nothing of the logistics of dispatching machete-swinging forest workers into every square inch of the world’s jungles. As a result, Schnitzer argues that efforts at slowing deforestation make more sense than trimming woodlands like an arboreal hedge.
Relaxing after a tiring day crawling through thickets, Marshall talks expansively about his hopes for someday thinning liana tangles far beyond the Magombera Forest. He is still smarting from an ant bite and a nasty thorn snag. Researchers often dodge degraded forests, he says, because they’re hotter and choked with undergrowth. “It’s an inhospitable place for us big humans.”
Marshall, like Schnitzer, is uneasy about the idea of widespread liana cutting. Yet he notes that no solution will be simple or without trade-offs. Replanting costs a lot. Entrenched interests resist controls on deforestation.
Last June Marshall received a $900,000 grant from the Australian government to fund his large-scale trials. He’s now poised to turn his largest patch yet of viney jungle into a healthy forest again. “The potential carbon gain,” he says, “is colossal.”
Reporting for this story was supported by the Frank B. Mazer Foundation and the Pulitzer Center on Crisis Reporting. | <urn:uuid:741f22b7-ab33-44ee-9220-5628d6179328> | 3.875 | 4,095 | Truncated | Science & Tech. | 48.934655 | 95,576,427 |
When scientists talk about the consequences of climate change, they mean more than how human beings will be affected by higher temperatures, rising seas and severe storms.
Plants and trees are also feeling the change, but they can’t move out of the way. Researchers at the University of Maryland Center for Environmental Science and University of Vermont have developed a new tool to overcome a major challenge of predicting how organisms may respond to climate change.
“When climate changes, organisms have three choices: migrate, adapt, or go extinct,” said lead author Matt Fitzpatrick of the University of Maryland Center for Environmental Science’s Appalachian Laboratory. “We’re bringing the ability to quantify that adaptation piece that had largely been missing up to this point.”
Organisms are adapted to live in certain environments and not others. However, climate change is forcing them to live in climates to which they may not be well adapted. Animals can move, but plants and trees are rooted in the ground and must withstand climate change or die.
Scientists have combined genetic analyses with new modeling approaches for the first time to help identify how well balsam poplar trees are adapted to handle climate change. The scientists sampled the genetic code of 400 trees from 31 locations across northern North America and combined the genetic variations with computer modeling techniques to map how important genes differ within balsam poplar and to locate where trees may have the best chance of survival in a rapidly warming world.
Up until now, scientists have sought to quantify the risk of climate change to different species by mapping where those species occur today based on climate and then predicting where they may occur in the future. For instance, models for North American tree species often predict them to occur further north as climate warms.
“The problem with the approach is you’re assuming all individuals within a species are identical, like assuming all humans will respond identically to an illness,” said Fitzpatrick. “Some will respond differently given different genetic backgrounds.”
It turns out that not all members of a species react the same way to climate change. Some poplar trees are already adapted genetically to handle the climate changes expected over the next few decades while others are not – just like some people are more likely to survive a disease than others.
Increasingly, local adaptation to climate is being studied at the molecular level by identifying which genes control climate adaptation and how these vary between individuals. This type of modeling of variation in genetic makeup represents an important advance in understanding how climate change may impact biodiversity.
“We’ve developed the techniques to associate genetic variation to climate and to map where individuals may and may not be pre-adapted to climates expected in the future,” said Fitzpatrick. “It’s important to know where these places are. This gives us a way to link climate responses more closely to the biology than we were able to do previously.”
The study, “Ecological genomics meets community-level modeling of biodiversity: mapping the genomic landscape of current and future environmental adaptation,” was published by Matthew Fitzpatrick of the University of Maryland Center for Environmental Science and Steven Keller of the University of Vermont. It appeared in the October 1 issue of Ecology Letters.
Amy Pelsinsky | EurekAlert!
Please see the attachment for the proper formatting and related diagrams.
1. ∠DEF and ∠GHI are complementary angles, and ∠GHI is eight times as large as ∠DEF. Determine the measure of each angle.
2. If line p and q are parallel, find the measure of angle 2
3. Using △ABC, find the following:
a) The length of side AC
b) The perimeter of △ABC
c) The area of △ABC
4. A diagonal walk through a small rectangular garden 9 meters by 12 meters can be built at $10 per linear meter. How much will the walk cost?
5. Sam wishes to fertilize his lawn that is 75 feet by 207 feet. The fertilizer is on sale for $5.25 per bag. Each bag covers 575 square yards. What is the cost to fertilize his yard?
6. What is the area of a circle with diameter 34mm?
7. A drinking glass is in the shape of a cylinder that is 60mm in diameter and 15cm tall. How many cubic centimeters of water will it hold?
8. Create a graph with four odd vertices
9. For the following floor plan, determine whether it is possible for a person to walk through each doorway without using any of the doorways twice. If it is possible, determine such a path.
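The attachment holds the full solutions; the sketch below is an independent check in Python of the purely computational items (problems 1, 4, 5 and 7), with the answers computed here rather than quoted from the attachment:

```python
import math

# Quick numeric checks of problems 1, 4, 5 and 7.

# Problem 1: complementary angles, one eight times the other (x + 8x = 90)
x = 90 / 9
print(x, 8 * x)                      # 10.0 80.0  (degrees)

# Problem 4: diagonal walk through a 9 m x 12 m garden at $10 per linear metre
diagonal = math.hypot(9, 12)         # 15.0 m -- a 3-4-5 triangle scaled by 3
print(diagonal * 10)                 # 150.0  (dollars)

# Problem 5: lawn 75 ft x 207 ft; each $5.25 bag covers 575 sq yd
area_sq_yd = (75 * 207) / 9          # 1725.0 (9 sq ft per sq yd)
bags = math.ceil(area_sq_yd / 575)   # exactly 3 bags
print(bags * 5.25)                   # 15.75  (dollars)

# Problem 7: cylinder 60 mm in diameter (radius 3 cm) and 15 cm tall
volume = math.pi * 3 ** 2 * 15
print(round(volume, 1))              # 424.1  (cubic centimetres)
```

Note that problem 5 hinges on the unit conversion: the lawn is given in feet but the coverage in square yards (9 square feet per square yard).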
See attached file for full problem description. © BrainMass Inc., brainmass.com, July 16, 2018.
The above general area questions are investigated in a 3-page word document with helpful diagrams, attached. | <urn:uuid:d17ffc6f-7ee6-4647-aa45-c4c11e7c9f04> | 3.640625 | 332 | Tutorial | Science & Tech. | 80.442226 | 95,576,462 |
About 200 genera, roughly 1,450 species
The Sphingidae are a family of moths (Lepidoptera), commonly known as hawk moths, sphinx moths, and hornworms; it includes about 1,450 species. It is best represented in the tropics, but species are found in every region. They are moderate to large in size and are distinguished among moths for their rapid, sustained flying ability. Their narrow wings and streamlined abdomens are adaptations for rapid flight.
Some hawk moths, such as the hummingbird hawk-moth or the white-lined sphinx, hover in midair while they feed on nectar from flowers, so are sometimes mistaken for hummingbirds. This hovering capability is only known to have evolved four times in nectar feeders: in hummingbirds, certain bats, hoverflies, and these sphingids (an example of convergent evolution). Sphingids have been much studied for their flying ability, especially their ability to move rapidly from side to side while hovering, called 'swing-hovering' or 'side-slipping'. This is thought to have evolved to deal with ambush predators that lie in wait in flowers.
Most species are multivoltine, capable of producing several generations a year if weather conditions permit. Females lay translucent, greenish, flattened, smooth eggs, usually singly on the host plants. Egg development time varies highly, from three to 21 days.
Sphingid caterpillars are medium to large in size, with stout bodies. They have five pairs of prolegs. Usually, their bodies lack any hairs or tubercules, but most species have a "horn" at the posterior end, which may be reduced to a button, or absent, in the final instar. Many are cryptic greens and browns, and have countershading patterns to conceal them. Others are more conspicuously coloured, typically with white spots on a black or yellow background along the length of the body. A pattern of diagonal slashes along the side is a common feature. When resting, the larva usually holds its legs off the surface and tucks its head underneath (praying position), which, resembling the Egyptian Sphinx, gives rise to the name 'sphinx moth'. Some tropical larvae are thought to mimic snakes. Larvae are quick to regurgitate their sticky, often toxic, foregut contents on attackers such as ants and parasitoids. Development rate depends on temperature, and to speed development, some northern and high-altitude species sunbathe. Larvae burrow into soil to pupate, where they remain for 2–3 weeks before they emerge as adults.
In some Sphingidae, the pupa has a free proboscis, rather than being fused to the pupal case as is most common in the Macrolepidoptera. They have a cremaster at the tip of the abdomen. Usually, they pupate off the host plant, in an underground chamber, among rocks, or in a loose cocoon. In most species, the pupa is the overwintering stage.
Antennae are generally not very feathery, even in the males. They lack tympanal organs, but members of the group Choerocampini have hearing organs on their heads. They have a frenulum and retinaculum to join hind wings and fore wings. The thorax, abdomen, and wings are densely covered in scales. Some sphingids have a rudimentary proboscis, but most have a very long one, which is used to feed on nectar from flowers. Most are crepuscular or nocturnal, but some species fly during the day. Both males and females are relatively long-lived (living 10 to 30 days). Prior to flight, most species shiver their flight muscles to warm them up, and, during flight, body temperatures may surpass 40 °C (104 °F).
In some species, differences in form between the sexes are quite marked. For example, in the African species Agrius convolvuli (the convolvulus or morning glory hawk moth), the antennae are thicker and wing markings more mottled in the male than in the female. Only males have both an undivided frenular hook and a retinaculum. Also, all male hawk moths have a partial comb of hairs along their antennae. Females call males to them with pheromones. The male may douse the female with a pheromone before mating.
Some species fly only for short periods either around dusk or dawn, while other species only appear later in the evening and others around midnight, but such species may occasionally be seen feeding on flowers during the day. A few common species in Africa, such as Cephonodes hylas virescens (the Oriental bee hawk), Macroglossum hirundo, and Macroglossum trochilus, are diurnal.
Studies with Manduca sexta show that moths owe much of their dynamic flight control to sensing by their antennae. The antennae are vibrated in a plane so that when the body of the moth rotates during controlled aerial maneuvers, the antennae are subject to inertial Coriolis forces that are linearly proportional to the angular velocity of the body. The Coriolis forces cause deflections of the antennae, which are detected by the Johnston's organ at the base of each antenna, with strong frequency responses at the beat frequency of the antennae (around 25 Hz) and at twice the beat frequency. The relative magnitude of the two frequency responses enables the moth to distinguish rotation around the different principal axes, allowing rapid course control during aerial maneuvers.
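The linear proportionality can be illustrated with the Coriolis formula a = 2Ω × v applied to a vibrating antenna tip. The sketch below uses hypothetical numbers (only the ~25 Hz beat frequency comes from the text; the amplitude and rotation rate are invented for illustration):

```python
import math

# Hypothetical illustration only -- not measurements from the cited study.
# Peak Coriolis acceleration on an antenna tip vibrating at the ~25 Hz
# beat frequency while the moth's body rotates.

beat_hz = 25.0                   # antennal beat frequency from the text
amplitude_m = 0.5e-3             # assumed tip oscillation amplitude (0.5 mm)
body_rate = math.radians(100)    # assumed body rotation rate, 100 deg/s

tip_speed = amplitude_m * 2 * math.pi * beat_hz   # peak tip speed, m/s
coriolis = 2 * body_rate * tip_speed              # |a| = |2 * Omega x v|

print(f"peak tip speed: {tip_speed * 1e3:.1f} mm/s")
print(f"peak Coriolis acceleration: {coriolis * 1e3:.0f} mm/s^2")

# Doubling the body's angular velocity doubles the Coriolis signal -- the
# linear proportionality that the Johnston's organs exploit:
assert math.isclose(2 * (2 * body_rate) * tip_speed, 2 * coriolis)
```

Because the tip velocity oscillates at the beat frequency, the Coriolis deflection is forced at that same frequency, which is where the strong antennal responses are observed.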
Sphingid larvae tend to be specific feeders, rather than generalists. Compared to similarly sized saturniids, sphingids eat soft young leaves of host plants with small toxic molecules, and chew and mash the food into very small bits. Some species can tolerate quite high concentrations of specific toxins. Tobacco hornworms, Manduca sexta, detoxify and rapidly excrete nicotine, as do several other related sphinx moths in the subfamilies Sphinginae and Macroglossinae, but members of the Smerinthinae that were tested are susceptible. The species that are able to tolerate the toxin do not sequester it in their tissues; 98% was excreted. However, other species, such as Hyles euphorbiae and Daphnis nerii, do sequester toxins from their hosts, but do not pass them on to the adult stage.
Most adults feed on nectar, although a few tropical species feed on eye secretions, and the death's-head hawkmoth steals honey from bees. Night-flying sphingids tend to prefer pale flowers with long corolla tubes and a sweet odour, a pollination syndrome known as 'sphingophily'. Some species are quite general in visitations, while others are very specific, with the plant only being successfully pollinated by a particular species of moth. Orchids frequently have such specific relations with hawk moths, and very long corolla tubes. The comet orchid, Angraecum sesquipedale, a rare Malagasy flower with its nectar stored at the bottom of a 30-cm-long tube, was described in 1822 by Louis-Marie Aubert du Petit-Thouars, and later, Charles Darwin famously predicted there must be some specialised animal to feed from it:
"[A. sesquipedale has] nectaries 11 and a half inches long [29 cm], with only the lower inch and a half [3.8 cm] filled with very sweet nectar [...] it is, however, surprising, that any insect should be able to reach the nectar: our English sphinxes have probosces as long as their bodies, but in Madagascar there must be moths with probosces capable of extension to a length of between 10 and 12 inches! [30 cm]"
"[The proboscis of a hawk moth] from tropical Africa ([Xanthopan] morganii) is seven inches and a half [19 cm]. A species having a proboscis two or three inches longer [7.6 cm] could reach the nectar in the largest flowers of Angræcum sesquipedale, whose nectaries vary in length from ten to fourteen inches [36 cm]. That such a moth exists in Madagascar may be safely predicted, and naturalists who visit that island should search for it with as much confidence as astronomers searched for the planet Neptune, – and they will be equally successful."
The predicted sphingid was discovered 21 years later and described as a subspecies of the one African species studied by Wallace: Xanthopan morganii praedicta, for which the subspecific name praedicta ("the predicted one") was given. The Madagascan individuals had a pink, rather than white, breast and abdomen and a black apical line on the forewing, broader than in mainland specimens. Molecular clock models using either rate- or fossil-based calibrations imply that the Madagascan subspecies X. morganii praedicta and the African subspecies morganii diverged 7.4 ± 2.8 Mya, which overlaps the divergence of A. sesquipedale from its sister, A. sororium, namely 7.5 ± 5.2 Mya. Since both these orchids have extremely long spurs, long spurs likely existed before that and were exploited by long-tongued moths similar to Xanthopan morganii praedicta. The long geological separation of subspecies morganii and praedicta matches their morphological differences in the colour of breast and abdomen.
Relationships and species
The Sphingidae is sometimes assigned its own exclusive superfamily, Sphingoidea, but is alternatively included in the more encompassing Bombycoidea. Following Hodges (1971), two subfamilies are accepted, namely the Sphinginae and Macroglossinae. Around 1,450 species of hawk moths are classified into around 200 genera. Some of the best-known hawk moth species are:
- Privet hawk moth (Sphinx ligustri)
- Death's-head hawk moth (Acherontia atropos)
- Lime hawk moth (Mimas tiliae)
- Poplar hawk moth (Laothoe populi)
- Convolvulus hawk moth (Agrius convolvuli)
- Catalpa sphinx (Ceratomia catalpae)
- Hummingbird hawk-moth (Macroglossum stellatarum)
- Elephant hawk moth (Deilephila elpenor)
- Vine hawk moth (Hippotion celerio)
- Spurge hawk moth (Hyles euphorbiae)
- Oleander hawk moth (Daphnis nerii)
- Pandora sphinx moth (Eumorpha pandorus)
- Tomato worm (Manduca quinquemaculata)
- Tobacco hornworm (Manduca sexta)
- van Nieukerken; et al. (2011). "Order Lepidoptera Linnaeus, 1758. In: Zhang, Z.-Q. (Ed.) Animal biodiversity: An outline of higher-level classification and survey of taxonomic richness" (PDF). Zootaxa. 3148: 212–221.
- Scoble, Malcolm J. (1995): The Lepidoptera: Form, Function and Diversity (2nd edition). Oxford University Press & Natural History Museum London. ISBN 0-19-854952-0
- Kitching, Ian J (2002). "The phylogenetic relationships of Morgan's Sphinx, Xanthopan morganii (Walker), the tribe Acherontiini, and allied long-tongued hawkmoths (Lepidoptera: Sphingidae, Sphinginae)". Zoological Journal of the Linnean Society. 135 (4): 471–527. doi:10.1046/j.1096-3642.2002.00021.x.
- Stevenson, R.; Corbo, K.; Baca, L.; Le, Q. (1995). "Cage size and flight speed of the tobacco hawkmoth Manduca sexta" (PDF). The Journal of Experimental Biology. 198 (Pt 8): 1665–1672. PMID 9319572. Retrieved 10 August 2012.
- Pittaway, A. R. (1993): The hawkmoths of the western Palaearctic. Harley Books & Natural History Museum, London. ISBN 0-946589-21-6
Geochemistry of Spodosols formed in holocene till, Norra Storfjället Massif, northern Sweden
Silt and clay size fractions of soils, from a transect of six Spodosols formed in the Norra Storfjället Massif, were analyzed by neutron activation to determine the degree to which pedogenic processes have influenced the distribution of macro, micro and trace elements. The distributions of Mg, Ca and Fe, together with Co, Cr and other trace elements in the profiles, suggest the presence of different parent materials, with A and E horizons arising from an influx of aeolian sediment. Translocation processes, both physical and chemical, occurred in the soil, concentrating Fe and Br in the spodic (Bs) horizons of the profiles. The rare earth elements (REEs) are predominantly associated with the heavy mineral fraction of the soil material. The chondrite-normalized REE patterns of the profiles indicate that light rare earth element (LREE) concentrations increase with horizon depth. The depletion of LREEs in the upper soil horizons confirms the presence of material that is chemically different from that in the lower horizons, indicating a distinct chemical difference from the local glacial deposits.
Keywords: Silt · Holocene · Neutron activation · Soil horizon · Heavy mineral
Most of this energy is for the provision of lighting, heating, cooling, and air conditioning. Concern over CFCs triggered a renewed interest in environmentally friendly cooling and heating technologies. Under the 1987 Montreal Protocol, governments agreed to phase out chemicals used as refrigerants that have the potential to destroy stratospheric ozone.
WTW or with the LCA method can differ, after the input and output are completed, level averages that may or may not be representative of the specific subset of the sector relevant to a particular product and therefore is not suitable for evaluating the environmental impacts of products. If the intensive properties of different finitely extended elements of a system differ — the wasted work potential. If the system being evaluated involves combustion, independent certification can show a company’s dedication to safer and environmental friendlier products to customers and NGOs. The thermodynamic value of a resource can be found by multiplying the exergy of the resource by the cost of obtaining the resource and processing it. Work is performed by this energy obtained from the large reservoir, and how it is typically used.
Exergy is viewed as a more fundamental property of a system, the analysis is often broken down into stages entitled “well, and have exergy content less than their energy content. But those systems cannot then be isolated from a larger surrounding environment. Review: Consumption and Use of Non, the life cycle considered usually consists of a number of stages including: materials extraction, chemical exergy is defined as the maximum work that can be obtained when the considered system is brought into reaction with reference substances present in the environment. Most of this energy is for the provision of lighting, they will not have access to data concerning inputs and outputs for previous production processes of the product. Even if the environment conditions vary slightly, exergy content is being substituted with capital investments.
Carried out at a unit process level defined by the system boundaries for the study. Gate is a partial LCA looking at only one value, advances in exergy analysis: a novel assessment of the Extended Exergy Accounting method. LCA will be continuously integrated into the built environment as tools such as the European ENSLIC Building project guidelines for buildings or developed and implemented, cA: DCW Industries. On exergy and sustainable development in environmental engineering. The exergy of a system is determined by the potential of that system to do work, what fraction of the total human depletion of the Earth’s exergy is caused by the production of a particular economic good?
EIOLCA relies on sector; a useful concept within resource accounting. The functional flow would be the items necessary for that function, réflexions sur la puissance motrice du feu sur les machines propres a developper cette puissance. Life Cycle Assessment Practitioner Survey: Summary of Results”. Irreversibility accounts for the amount of exergy destroyed in a closed system, end of life impacts include demolition and processing of waste or recyclable materials. Retrieved on: 25 April 2013.
Defined as “quantified environmental data for a product with pre, this article discusses the potential for such integrated systems in the stationary and portable power market in response to the critical need for a cleaner energy technology. Major corporations all over the world are either undertaking LCA in house or commissioning studies, defining where one field ends and the next begins is a matter of semantics, the accuracy and availability of data can also contribute to inaccuracy. Which is a property of the system. University of Utah — this makes it both very important and very difficult to use up, can we decide which is the most “realistic impossibility” over such a long period of time when we are only speculating about the reality? Exergy inputs from solar; life cycle impacts of waste wood biomass heating systems: A case study of three UK based systems”.
No heat or work interactions with the surroundings occur, inventory analysis is followed by impact assessment. Measuring the amount of heat released is one way of quantifying the energy, but applications of exergy can be placed into rigid categories. The production of motion in steam, life Cycle Assessment in the Built Environment, recent developments in Life Cycle Assessment. Depending on the situation, this information is used to improve processes, there are no transfers of availability between the system and its surroundings. While a conventional LCA uses many of the same approaches and strategies as an Eco – gate Life cycle inventory information. | <urn:uuid:a47b4d92-e63d-4da1-afac-470816d8d677> | 3.078125 | 911 | Academic Writing | Science & Tech. | 20.61915 | 95,576,472 |
On 17 August, both light and gravitational-wave signals from merging neutron stars reached Earth. For the first time in history, such a pair of signals was registered. The inspiral phase was observed by the LIGO and Virgo detectors for 30 seconds, 100 times longer than previous gravitational-wave signals. This signal was also the closest we have seen, only 130 million light-years from us. While the observatories recovered a huge amount of information from the signals, a new task was created: to make theoretical sense of it all.
Relatively speaking, we heard the sound but did not know where it came from.
Ethan Siegel sat down with Chris Fryer of Los Alamos National Laboratory, an expert on supernovae, neutron stars and gamma-ray bursts, who works on the theoretical side of these objects and events. Nobody expected that LIGO and Virgo would be able to register a merger at such an early stage of the project, only two years after the first successful detection and well before reaching the planned sensitivity. But they not only saw the signals, they were able to pinpoint the source of the merger, which brought us many surprises.
Here are five of the biggest new questions raised by this discovery.
Before we saw this event, we had two ways to estimate the frequency of neutron star mergers: measurements of binary neutron stars in our galaxy (such as pulsars) and our theoretical models of star formation, supernovae and their remnants. Together these gave an estimate of about 100 such mergers per year within a cubic gigaparsec of space.
Observing this new event has provided us with a first estimate of the actual merger rate, and it is ten times higher than expected. We thought we would need LIGO at its design sensitivity (it is about halfway there) to see anything at all, and then additional detectors to determine an exact location. Yet we managed not only to see the event early, but to localize it on the first attempt. So the question is: were we simply lucky to see this event, or is its frequency much higher than we thought? If it is higher, what is wrong with our theoretical models? Next year LIGO will undergo an upgrade, and theorists will have a little time to think.
Our best theoretical models predicted that a merger of stars of this kind would be accompanied by bright light in the UV and optical parts of the spectrum for about a day, which would then fade and disappear. Instead, the glow lasted two days before beginning to fade, and we, of course, have questions. A bright glow lasting so long suggests that winds from the disk around the merged stars ejected 30 to 40 Jupiter masses of material. According to our models, there should have been two to eight times less material than that.
What is so unusual about these ejections? To simulate such a merger, you need to include many different kinds of physics:
...and much more. Different codes model these components with different levels of sophistication, and we do not know which of the components is responsible for these winds and ejections. Finding the right one is a problem for theorists, and we have to accept that in our first measured neutron star merger... we got a surprise.
In the last moments of a merger, two neutron stars emit not only gravitational waves but also a catastrophic explosion that reverberates across the electromagnetic spectrum. And whether the product is a neutron star, a black hole or something exotic in between, the transitional state is unknown to us.
To eject so much mass from the disk surrounding the merging neutron stars, the product of the merger must generate enough energy of the right kind to blow that mass away. Based on the observed gravitational-wave signal, we can say that this merger created an object of 2.74 solar masses, which is well above the maximum mass a non-rotating neutron star can have. That is, if nuclear matter behaves as expected, the merger of two neutron stars should have produced a black hole.
A neutron star is one of the densest collections of matter in the Universe, but its mass has an upper limit. Exceed it, and the neutron star collapses again to form a black hole.
If the core of this object had collapsed to a black hole immediately after the merger, no ejecta would exist. If, instead, it briefly became a supermassive neutron star, it would have to rotate extremely rapidly, since a large angular momentum can raise the maximum mass by 10 to 15%. The problem is that such a rapidly rotating supermassive neutron star should be a magnetar, with an extremely powerful magnetic field a quadrillion times stronger than the field at the Earth's surface. But a magnetar quickly spins down and should collapse into a black hole within 50 milliseconds, while our observations of the magnetic fields, viscosity and heating that ejected the mass show that the object existed for hundreds of milliseconds.
Something is not right here. Either we had a rapidly rotating neutron star that for some reason was not a magnetar, or the ejecta had to be released over hundreds of milliseconds in a way our physics cannot yet explain. Most likely, even if only briefly, we had a supermassive neutron star, followed by a black hole. If both are true, we are dealing with the most massive neutron star and the least massive black hole ever observed!
There is a limit to how massive a neutron star can be; add more mass and you get a black hole. That limit is about 2.5 solar masses for a non-rotating neutron star, meaning that if the total mass of the merger is lower, you will almost certainly be left with a neutron star afterwards, which will produce the strong, long-lived ultraviolet and optical signals we saw in this case. On the other hand, at 2.9 solar masses and above, a black hole forms immediately after the merger, probably with no ultraviolet or optical counterpart.
Either way, our first observed neutron star merger fell in the middle of this range, where a supermassive neutron star can form and generate ejecta and optical and ultraviolet signals for a short time. Do magnetars form in less massive mergers? Do more massive ones simply collapse to black holes and remain invisible at these wavelengths? How rare or common are these three categories of merger: ordinary neutron stars, supermassive neutron stars and black holes? Over the next year LIGO and Virgo will search for answers to these questions, and theorists will have a year to bring their models into line with the predictions.
This question is very complicated. On one hand, the discovery confirmed what had long been suspected but could not be proven: that merging neutron stars produce gamma-ray bursts. But we had always believed that gamma-ray bursts emit gamma rays only in a narrow cone, 10 to 15 degrees across. Now we know, from the position of the merger and the strength of the gravitational waves, that the gamma-ray burst pointed about 30 degrees away from our line of sight, yet we still observed a powerful gamma-ray signal.
Our picture of gamma-ray bursts needs to change. The task for theorists is to explain why the physics of these objects differs so much from what our models predicted.
When it comes to the heaviest elements in the periodic table, we now know that they are produced for the most part not by supernovae but by neutron star mergers. But to read the spectra of heavy elements from a distance of 100 million light-years, you need to understand their opacity, which requires a physical understanding of the atomic transitions of electrons in an astrophysical environment. For the first time we have an environment in which astronomy overlaps with nuclear physics in this way, and follow-up mergers should allow us to answer the opacity and transparency questions in particular.
It is possible that neutron star mergers happen all the time, and that when LIGO reaches its planned sensitivity we will find tens of mergers per year. It is also possible that this event was extremely rare and we will be lucky to see even one per year after the upgrades. Theoretical physicists will spend the next ten years searching for answers to the questions above.
The future of astronomy lies before us. Gravitational waves are a new, completely independent way of exploring the sky, and by combining gravitational-wave maps with traditional astronomical maps, we are ready to answer questions we never dared ask a week ago.
Boyle's law and Charles's law can be combined into a single equation:
V = constant × T/P
The constant is independent of temperature and pressure but does depend on the amount of gas. For one mole, the constant has a specific value, which we denote R. The molar volume Vm is
Vm = R × T/P
According to Avogadro's law, the molar volume at a given T and P is a constant independent of the nature of the gas, which implies that R is the constant of proportionality relating the molar volume of a gas to T/P.
The preceding equation can be written for n moles of gas by multiplying both sides by n:
nVm = nRT/P, or PV = nRT
This equation, which combines all of the gas laws, is called the ideal gas law.
Let's see an example:
Answer the question asked in the chapter opening: how many grams of oxygen are there in a 50.0 L tank at 21 °C when the oxygen pressure is 15.7 atm?
Answer: P = 15.7 atm
V = 50.0 L
T = 21 °C + 273 = 294 K
n = ?
Solving the ideal gas law for n gives
n = PV/RT
n = (15.7 atm × 50.0 L) / (0.0821 L·atm/(K·mol) × 294 K) = 32.5 mol
Now convert moles to mass of O2:
32.5 mol O2 × 32.0 g O2/1 mol O2 = 1.04 × 10^3 g O2
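The arithmetic of the worked example can be double-checked with a short script (a sketch only, using the same R = 0.0821 L·atm/(K·mol) and 32.0 g/mol for O2):

```python
# Ideal gas law PV = nRT, solved for n, then converted to mass of O2.
R = 0.0821       # gas constant, L·atm/(K·mol)
P = 15.7         # pressure, atm
V = 50.0         # volume, L
T = 21 + 273     # temperature, K (21 °C converted to kelvins)

n = P * V / (R * T)   # moles of O2
mass = n * 32.0       # grams, using a molar mass of 32.0 g/mol for O2

print(f"n = {n:.1f} mol, mass = {mass:.2e} g")  # n = 32.5 mol, mass = 1.04e+03 g
```

Running it reproduces both numbers from the example above.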
- Open Access
Effect of an 860-m thick, cold, freshwater aquifer on geothermal potential along the axis of the eastern Snake River Plain, Idaho
© The Author(s) 2017
Received: 12 July 2017
Accepted: 6 December 2017
Published: 15 December 2017
A 1912-m exploration corehole was drilled along the axis of the eastern Snake River Plain, Idaho. Two temperature logs run on the corehole display an obvious inflection point at about 960 m. Such behavior is indicative of downward fluid flow in the wellbore. The geothermal gradient above 935 m is 4.5 °C/km, while the gradient is 72–75 °C/km from 980 to 1440 m. Projecting the higher gradients upward to where they intersect the lower gradient on the temperature logs places the bottom of the cold, freshwater Snake River Plain aquifer, which suppresses the geothermal gradient at this location, at least 860 m below the surface. The average heat flow for the corehole between 983 and 1550 m is 132 mW/m2. Although the maximum bottom-hole temperature extrapolated from a measured time–temperature curve was only 59.3 °C, geothermometers suggest an equilibrium temperature on the order of 125–140 °C based on a single fluid sample from 1070 m. Furthermore, below 960 m the basalt core shows obvious signs of alteration, including a distinct color change, the formation of smectite clay, and the presence of secondary minerals filling vesicles and fracture zones. This alteration boundary could act as an effective cap or seal for a hot-water geothermal system.
The eastern Snake River Plain (ESRP) in southern Idaho covers an area of approximately 28,000 km2 (Morse and McCurry 2002), and is a prime target for geothermal exploration due to high geothermal gradients (Blackwell 1989). Heat flow in excess of 100 mW/m2 has been documented in the area (Blackwell and Richards 2004). This high heat flow is associated with the Yellowstone hotspot, which developed from a mantle plume (Smith et al. 2009). The hotspot was stationary as the North American plate moved southwest over it at a rate of about 2.5 cm/year (Smith and Braile 1994).
The ESRP is home to the Snake River Plain aquifer (SRPA), which is hosted primarily in basalt (Welhan et al. 2002a, b). The majority of these basalts are olivine, tholeiite pahoehoe flows (Greeley 1982; Leeman 1982; Kuntz et al. 1992) with chemical compositions similar to Hawaiian basalts. The bulk of the volcanic vents are clustered around the axis of the ESRP (Kuntz et al. 1992; Smith 2004).
The SRPA is one of the most productive aquifers in the United States (US Geological Survey 1985). The Big Lost River is one of the contributors of recharge to the aquifer (McLing 1994; Orr 1997). However, it is only believed to contribute about 4% of the water input to the system (Ackerman et al. 2010). According to Garabedian (1992), surface-water irrigation is by far the largest contributor, followed by tributary underflow, precipitation, and losses from the Snake River, streams and canals. Discharge from the aquifer is predominantly from irrigation pumping and flow from springs (Mann and Knobel 1990). The average annual discharge at Thousand Springs near Hagerman, Idaho ranges from 3662 million m3 in 1904 to a high of 6091 million m3 in 1951 (Bartholomay et al. 2017). Water temperatures range from 6 °C in the northeast to 16 °C at the discharge zone at Hagerman in the southwest (Smith 2004).
Much of the information about the SRPA comes from the Idaho National Laboratory (INL), which is located on the northeastern edge of the aquifer. However, US Geological Survey (USGS) publications provide more inclusive information on the entire ESRPA. The depth to water has been measured from 60 to 270 m below the surface (Ackerman 1991; Knobel et al. 1992). The base of the aquifer has been penetrated in eight deep wells on the INL site, with a depth ranging from 200 to 550 m based on temperature inflections (Smith 2004). The onset of low-temperature alteration identified at the base of the aquifer is thought to control aquifer thickness, and may itself be controlled by thermal flux from below (Morse and McCurry 2002).
The drilling was done by Drilling, Observation, and Sampling of Earth’s Continental Crust (DOSECC), a non-profit organization that works in concert with the International Continental Drilling Program (ICDP) on scientific drilling projects such as this one. DOSECC drilled the corehole with an Atlas-Copco CS4002 drilling rig. Drilling commenced on 27 September 2010 and continued through 27 January 2011, when a total depth (TD) of 1912.5 m had been reached (Delahunty et al. 2012). A steel liner was inserted into the well and left to equilibrate for 4 months prior to temperature logging. A separated joint in the liner prevented logging below a depth of 1440 m.
Lithologic logging took place as soon as the core reached the surface. Field lithologic logging consisted of washing, measuring, writing a physical description, photographing, and boxing the core. Once boxed, the core was transported offsite for detailed description using the ICDP’s Drilling Information System (DIS). The hole was drilled almost entirely through basalt, with thin (2–20 m thick) loess horizons in the upper 365 m and two thick sections of clastic sediments (50–61 m thick) in the lower 200 m. Over 550 basalt flow units were identified (Potter et al. 2011).
Temperature logs were run by the Southern Methodist University (SMU) Geothermal Research Laboratory and the Operational Support Group (OSG) from Helmholtz Centre Potsdam, GeoForschungsZentrum (GFZ), German Research Centre for Geosciences. SMU and OSG used wireline logging tools to record the temperature profile in the well. The SMU tool recorded temperatures while running the tool into the well, and the data had to be downloaded to a computer after the tool had been retrieved from the well. The OSG tool gave live readings and recorded the temperature while pulling the tool out of the hole. The SMU log was acquired on 4 May 2011, while the OSG log was acquired sometime between 29 June and 4 July 2011. Unfortunately, as stated previously, the SMU tool was only able to log to a depth of 1440 m due to a separated joint in the steel liner, while the OSG tool was only able to log to a depth of 1220 m due to subsequent blockage by some unknown cause.
In addition to the temperature log, SMU measured thermal conductivity using the divided-bar method at their Geothermal Laboratory. A detailed explanation of the procedure and tools used is given in Blackwell and Spafford (1987) and is described briefly here. One-inch (2.54 cm) diameter sample plugs from the intervals chosen were collected from the Kimama core and sent to SMU. There, samples were cut to approximately 1.5 in. (3.81 cm) in height, and the tops and bottoms were smoothed to insure proper coupling with the divided-bar apparatus (DBA). Samples were then saturated with water under pressure for 8–12 h. Once saturated, samples were put into the DBA at approximately 400 psi (2760 kPa) with a constant temperature of 25 °C on top and 15 °C on the bottom, forcing a heat flux within the sample. Samples stayed within the DBA until they reached thermal equilibrium, at which point relative thermal conductivity was measured and absolute conductivity was calculated through comparison to standard thermal conductivity samples. Samples were not corrected to in situ conditions because the in situ pressure and temperature impact would be minor and likely within measurement error.
The GFZ fluid sampler is a positive displacement system which allows controlled sampling without sudden decompression or degassing. It can take one sample (0.6 L) at a time. OSG made several attempts to retrieve water plus gas samples from the corehole, but the sampler refused to work properly and only one water sample with no gas phase was collected on 3 July 2011. Due to blockage in the hole, the sampler was not able to get deeper than approximately 1220 m, and the single sample was obtained from only 1070 m.
A water sample was also collected from a shallow, water-supply well nearby. This well was drilled into the SRPA to a TD of about 90 m. There was an electric, submersible pump in this well, and the sample was taken directly from the spigot. In addition, water samples were collected from shallower parts of the corehole as well as from the water-supply well by the USGS (Twining and Barholomay 2011).
The two water samples were analyzed in the field for temperature, electrical conductivity (EC), pH, and alkalinity. The water samples were analyzed for major and trace ions by ICP (inductively coupled plasma) at the Utah State University Analytical Laboratory (USUAL). Chloride concentrations were determined using a Lachat flow injector analyzer, which is an automated colorimeter. The water samples were also analyzed for the stable isotope ratios of deuterium to hydrogen and 18O to 16O by the Stable Isotope Ratio Facility for Environmental Research (SIRFER) at the University of Utah.
Twenty-two randomly oriented whole-rock sample powders obtained from core were analyzed for clay content and composition using X-ray diffraction from depths of 155, 305, 458, 610, 763, 914, 917, 933, 961, 963, 969, 970, 972, 995, 1005, 1038, 1068, 1221, 1372, 1524, 1676, and 1829 m. In addition, clay separates were analyzed in nine additional samples from the depths of 1042, 1084, 1234, 1311, 1396, 1471, 1676, 1798, and 1829 m. X-ray diffraction analyses were carried out at Utah State University using a Panalytical X’pert Pro X-ray diffraction spectrometer with CuKα radiation at 45 kV and 40 mA. The clay separates were processed in three stages using standard clay identification procedures: air dried, glycolated, and heated at 500 °C for 2 h.
Four samples were also analyzed at Vassar College using a Siemens D-5000 theta:2-theta diffractometer at 40 kV and 30 mA. Clay mineral modeling was completed using the NEWMOD software, 1985 version. NEWMOD is a computer program for the calculation of one-dimensional diffraction patterns of mixed-layered clays.
The maximum BHT was acquired from the DOSECC temperature tool. This tool lies in the core barrel, and thus does not log continuously. However, it was necessary to use this tool throughout the drilling operation because the Idaho Department of Water Resources (IDWR) requires a blowout prevention device if the temperature exceeds 100 °C (Delahunty et al. 2012). The maximum temperature recorded was 59.3 °C at 1824 m, which was the greatest depth at which DOSECC measured the temperature. Because the DOSECC tool was never in true equilibrium with the ambient conditions, the maximum BHT measurement must be lower than the true equilibrium temperature at that depth.
Several methods have been developed for correcting BHT measurements. One of these (Förster 2001) employs a simple empirical correction based on the mean annual ground-surface temperature, the estimated amount of temperature correction at the maximum measurement depth, and the depth of the cross-over point between underestimated temperatures below it and overestimated temperatures above it caused by circulation of drilling fluid. However, the BHT measurements have not been corrected using this, or any other, method because the two temperature logs have been used for determining the geothermal gradients.
Förster (2001) also estimates the average shut-in time necessary for thermal equilibrium to be achieved, which is approximately 1000 h (about 40 days) for wells < 2000 m deep. Fortunately, the SMU and OSG temperature logs were obtained 97 and 153 to 158 days after drilling had ceased, respectively. Consequently, it has been assumed that these two logs recorded equilibrated temperatures.
The OSG temperature log (Fig. 2) records a geothermal gradient of only 4.5 °C/km above 950 m. The log records a sharp increase in temperature from 16.3 °C at 951 m depth to 23.1 °C at 970 m. The temperature profile above 950 m is nearly isothermal; this behavior is indicative of downward fluid flow within the wellbore. Due to blockage in the hole, the OSG temperature log only reached a maximum depth of 1131 m, where a temperature of 34.7 °C was measured, resulting in a thermal gradient from 970 to 1131 m of 72 °C/km. This gradient, when projected upward, intersects the shallower, nearly isothermal gradient at approximately 860 m, suggesting that this may be the bottom of the cold, freshwater SRPA, which suppresses the geothermal gradient at this location.
The SMU temperature log (Fig. 2) reached a maximum depth of 1440 m and displayed a similar profile to the OSG log. Analysis of the temperature–depth curve shows the SRPA perturbing the geothermal gradient from 0 to 935 m. Below 1000 m, the temperature log has a nearly linear 10-m averaged gradient of 75 ± 12 °C/km. The high 1σ scatter in the gradient is likely an artifact of the high depth resolution of the temperature log, which introduces large gradient swings for minor changes in temperature. A 10-m running average was applied to the gradient to remove some of this noise; even after averaging, the gradient varied from approximately 40 °C/km to greater than 100 °C/km. When this higher gradient is projected upward, it also intersects the low, shallower gradient at about 860 m. The temperature at TD (1912 m), projected downward, would be approximately 94 °C. The DOSECC BHT data produce a similar average gradient, but have higher error and were not used for the subsequent heat-flow calculation. There are two major gradient spikes, at about 970 and 1410 m (Fig. 2), that may indicate high-permeability zones.
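The ~860 m intersection can be sketched numerically by extending the two linear trends. The anchor points below are approximate readings quoted from the OSG log (16.3 °C at 951 m on the near-isothermal 4.5 °C/km section; 34.7 °C at 1131 m on the 72 °C/km section), so the result is only a rough check, not the authors' calculation:

```python
# Find where the shallow (aquifer) and deep (conductive) linear
# temperature trends intersect: t1 + g_s*(z - z1) = t2 + g_d*(z - z2).
g_shallow = 4.5 / 1000    # °C per m, near-isothermal aquifer section
z1, t1 = 951.0, 16.3      # anchor point for shallow trend (m, °C)
g_deep = 72.0 / 1000      # °C per m, conductive section below the aquifer
z2, t2 = 1131.0, 34.7     # anchor point for deep trend (m, °C)

# Solving the line equality for z:
z_cross = (t2 - t1 - g_deep * z2 + g_shallow * z1) / (g_shallow - g_deep)
print(f"gradients intersect at about {z_cross:.0f} m")
```

With these rounded inputs the crossing comes out near 870 m, within about 10 m of the ~860 m estimate quoted in the text.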
Thermal conductivity measurements from core
Thermal conductivity (W m−1 K−1)
Number of sample runs
1.93 ± 0.00
1.09 ± 0.01
Samples at 983 and 1550 m were averaged to calculate a mean conductivity for the section with a measured thermal gradient. The average gradient for the measured thermal regime is 1.76 ± 0.2 W m−1 K−1. The thermal conductivity values below 1550 m were not used in the average conductivity because the thermal regime of this section is only characterized by BHT measurements, which show greater variability and potential error than the section above. The lower thermal conductivity measurements are expected to represent a change in rock properties, which would also be indicated in the temperature log through a change in the geothermal gradient, but there would be a constant heat flow throughout the well. Using the deeper thermal conductivity measurements would introduce unnecessary error since there is not an equally deep equilibrium thermal gradient to use for heat flow calculation. The average heat flow is 132 ± 26 mW/m2 calculated using the thermal gradient from 1000 to 1400 m, and the average thermal conductivity for samples at 983 and 1550 m. This value is statistically the same as other deep wells in the SRP (Blackwell 1989), and is thus considered representative of background regional heat flow.
Results of chemical analyses of KA-W and KA-1 water samples (all units in mg/L unless otherwise noted)
The pH values of the two water samples are similar, with both being mildly alkaline. However, the electrical conductivity (EC) in the corehole water is about three times greater than in the water-supply well. Furthermore, the KA-W sample is calcium–magnesium–bicarbonate water, while the KA-1 sample is sodium–chloride water. These differences are to be expected considering the depths from which the samples were taken.
Geothermometers are temperature indicators using temperature dependent geochemical and/or isotopic compositions of geothermal waters (Gupta and Roy 2007). All geothermometers have limitations. The following assumptions are made when using geothermometers: (1) the relevant hydrothermal minerals in the reservoir are in equilibrium with the geothermal liquid; (2) the pore fluid pressure in the reservoir is fixed by coexistence of liquid and steam; (3) the geothermal liquid cools, either conductively or adiabatically, through steam separation at 100 °C; (4) the geothermal liquid does not mix with cold, shallow waters during the ascent towards the surface; (5) the geothermal liquid does not precipitate any relevant minerals along the upflow path (Marini 2004). In most situations it is difficult to prove that these assumptions are met (Ferguson et al. 2009).
A range of elemental geothermometers was applied to the KA-1 water sample: chalcedony and quartz (Fournier 1973, 1977), Na/K (Fournier 1979), Na/K (Giggenbach 1988), Na–K–Ca (Fournier and Truesdell 1973), Na–K–Ca–Mg (Fournier and Potter 1979), and K2/Mg (Giggenbach 1988). The KA-1 water sample was not analyzed for Li, so geothermometers such as Na/Li and Mg/Li (Kharaka and Mariner 1989) could not be applied.
Geothermometer calculations for KA-1 (all values in °C)
Physical description of core
The core contains 557 flow units (Potter 2014). The unaltered basalts are a light gray, commonly diktytaxitic to intergranular textured, olivine tholeiites, very similar to the basalts at the INL described by Morse and McCurry (2002). Vesicles are concentrated just below the flow unit tops typically, but also form vesicle columns or trains locally. Phenocrysts of plagioclase and olivine exist within a fine-grained matrix of plagioclase, pyroxene, and glass. Phenocryst content ranges from about 10 to 20%, and vesicle content ranges from ≤ 5 to 50%. Most vesicles above 970 m do not contain authigenic minerals, although large irregular gas cavities, which are much larger than the surrounding vesicles, are commonly lined with calcite with or without quartz.
Below 970 m vesicles are commonly lined with calcite or smectite-group clay minerals (green-brown nontronite or blue saponite). Below 1200 m many vesicles are filled with either blue saponite or calcite plus quartz (with calcite lining the walls and quartz filling the interiors). Rare zeolite fillings were observed around 1400 m depth. The shallowest depth at which all vesicles are filled is 1042 m. The greatest depth at which open vesicles are observed is 1833 m.
Twenty-two whole-rock powders were analyzed using X-ray diffraction. The samples were chosen to avoid vesicle fillings and include only basalt groundmass. The shallowest occurrence of clay in the basalt groundmass is in the sample from 963 m; clay peaks are not present in any of the nine samples above 963 m. All seven samples below 1038 m have clay peaks present. Two of the five samples between 963 and 1038 m show a clay peak (at 969 and 972 m), while three do not (at 970, 995, and 1005 m).
Nine clay separate samples were analyzed using X-ray diffraction. Clay separate samples were taken starting at 1042 m because that is the approximate depth at which clays become abundant enough in the groundmass to collect a sufficient amount of material for X-ray analysis. The X-ray diffraction results indicate the presence of smectite clay in the core. The presence of smectite is expected because it has been found in basalts at the INL (Morse and McCurry 2002) and in Hawaiian basalts (Tomasson and Smarason 1985), both of which are very similar geochemically to the basalts at Kimama.
Smectites from the core are both dioctahedral and trioctahedral. The NEWMOD modeling suggests that two of the samples (at 1234 and 1798 m) are dioctahedral smectites. The samples from 1042, 1471, 1676, and 1829 m may also be interpreted as dioctahedral smectites. Dioctahedral clays are formed from weathering of potassium feldspars and are commonly found in sedimentary rocks. Sedimentary interbeds were observed a short distance from the samples from 1042 and 1234 m, so it is expected that they are dioctahedral.
X-ray diffraction data reveal that the sample from 1396 m is a trioctahedral smectite. The samples from 1084 and 1311 m may also be interpreted as trioctahedral smectites. Trioctahedral clays are derived from the weathering of mafic minerals, such as basaltic glass.
With increasing temperature, dioctahedral smectites convert to illite. Trioctahedral clays convert to chlorite instead of illite with increasing temperature and pressure. Johnston (1983) suggests that the temperature at which dioctahedral smectite begins to become unstable and convert to illite is 90 °C. Six of the nine samples analyzed are interpreted as dioctahedral smectites. Also, smectites should become unstable at generally the same conditions whether they are dioctahedral or trioctahedral. Finally, no mixed-layer clays of smectite/illite or smectite/chlorite were observed. Therefore, the temperature since the formation of the smectite clays has remained below 90 °C. This is consistent with the maximum temperatures recorded for each of the three temperature logs, the highest of which was 59.3 °C for the DOSECC temperature tool, as well as the temperature at TD projected downward from the SMU temperature log of 94 °C.
Three separate observations—the temperature logs, the physical characteristics of the core, and the mineralogical data from the X-ray diffraction analyses—all suggest that a major boundary between unaltered and altered basalts is present in the axial zone of the SRP at Kimama between about 860–970 m below the surface. First, the temperature logs exhibit a sharp inflection point at around 960 m, with a near isothermal gradient of 4.5 °C above 935–950 m and a conductive gradient of 72–75 °C below 970–980 m. This behavior is indicative of downward fluid flow in the wellbore, and projecting the higher gradients upward to their intersection with the lower gradient on the temperature logs places the bottom of the cold, freshwater SRPA at approximately 860 m.
Second, the basalts above 950 m show no signs of alteration. Between 970 and 1020 m, the core shows a gradational change from fresh to altered basalts. All basalts below 1020 m show signs of alteration that become more significant with depth. Clays first begin to appear as vesicle linings around 950 m depth. The color of the core remains light gray, then changes abruptly at about 1020 m, where it becomes greenish–gray, and the vesicles begin to be filled with clay minerals (nontronite, saponite), calcite, quartz, and more rarely zeolites. Finally, mineralogical data from the X-ray diffraction analyses suggest that the first signs of groundmass alteration occur at around 960 m with the appearance of smectite clays in the basalts.
All three observations point to clogging of the basalt pore spaces to create a natural boundary between the relatively fast moving, cold fresh water above 860 m depth, and little or no moving water below 970 m depth. Morse and McCurry (2002), Smith (2004), and McLing et al. (2016) have made similar observations at other locations on the ESRP, but at shallower depths. Based on these data, the base of the SRPA in the axial zone at Kimama is at least 860 m below the surface. This is 1.6–4.3 times greater than the estimated base of the SRPA on the INL site (McLing et al. 2002, 2016; Morse and McCurry 2002; Smith 2004).
The highest temperature recorded by DOSECC in the corehole was 59.3 °C at a depth of 1824 m. The geothermal gradient was only 4.5 °C/km above 935 m, but increased dramatically to 72–75 °C/km below 980 m. Extrapolating this gradient, the projected temperature at TD would be approximately 94 °C. This is consistent with the observation that the temperature since the formation of the smectite clays, the deepest sample of which was obtained from core at 1829 m, has remained below 90 °C. The suppression of the gradient above 860 m is due to the cold, fresh water of the SRPA (e.g., Smith 2004). Furthermore, the swelling smectite clays clog pore spaces in the basalts, creating a natural seal for rising thermal waters below the SRPA.
Results of the chemical analyses of the water samples show that there is a distinction between the deeper geothermal waters (KA-1) and the shallow SRPA water (KA-W). KA-1 is characterized as a sodium-chloride water, while KA-W is a calcium–magnesium-bicarbonate water. The stable isotopic compositions of KA-1 and KA-W indicate that they both are meteoric waters. If the geothermal waters are indeed meteoric, then they have moved down flow paths to these relatively great depths and have begun equilibrating with the geothermal system.
Geothermometers suggest that the deeper waters mixed into the system reach higher temperatures than were actually recorded in the corehole. The Na–K–Ca geothermometer (Fournier and Truesdell 1973) results in an estimate of 139 °C. However, this geothermometer gives inconsistent results for temperatures below 200 °C (Paces 1975). Because the temperature is below 200 °C, the Na–K–Ca–Mg geothermometer (Fournier and Potter 1979) should provide better results. This geothermometer gives an estimate of 125 °C, suggesting that there are magnesium reactions taking place which affect the geothermometer estimates.
The alteration boundary at 950 to 970 m could act as an effective cap or seal for a hot-water geothermal system. Such a system ranges in water temperatures from 50 to 150 °C (Gupta and Roy 2007).
The geothermal resource in the axial zone of the ESRP warrants further exploration. Although it could be debated whether this resource is economically reasonable in this area, well depths in excess of 2 km should yield sufficiently high temperatures to cause the conversion of smectite clays to illite, which would enhance fluid flow. Furthermore, the same caliber of geothermal resources may be found closer to the surface along the margins of the ESRP where the aquifer is thinner, thereby decreasing the depth of drilling and the initial cost of geothermal exploration.
TEL and TGF collected the water samples and interpreted the results of their chemical analyses. CJS and JWS logged the core samples, and CJS and JRW analyzed the core samples using X-ray diffraction. JFB made the thermal conductivity measurements and heat flow calculation. JWS and JPE served as the principal investigators. DDB and DLN obtained and interpreted two of the three temperature logs. All authors read and approved the final manuscript.
This work is part of Project HOTSPOT, an ARRA (American Recovery and Reinvestment Act) project funded by US Department of Energy award DE-EE0002848, the International Continental Drilling Program, and Utah State University, with additional support from the Snake Play Fairway project (DE-EE0006733). The authors also thank the two anonymous reviewers and editor Olaf Kolditz, whose insights and suggestions greatly improved the final version of this work.
The authors declare that they have no competing interests.
Ethics approval and consent to participate
Availability of data and materials
The datasets supporting the conclusions of this article are available in the Utah State University Libraries Digital Commons repository at http://digitalcommons.usu.edu/.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open AccessThis article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
- Ackerman DJ. Transmissivity of the Snake River Plain aquifer at the Idaho National Engineering Laboratory, Idaho. US Geological Survey Water-Resources Investigations Report 91-4058 (DOE/ID-22097); 1991. p. 1–35.Google Scholar
- Ackerman DJ, Rousseau JP, Rattray GW, Fisher JC. Steady-state and transient models of groundwater flow and advective transport, eastern Snake River Plain aquifer, Idaho National Laboratory and Vicinity, Idaho. US Geological Survey Scientific Investigations Report 2010-5123; 2010. p. 1–220.Google Scholar
- Bartholomay RC, Maimer NV, Rattray GW, Fisher JC. An update of hydrologic conditions and distribution of selected constituents in water, eastern Snake River Plain aquifer and perched groundwater zones, Idaho National Laboratory, Idaho, emphasis 2012–15. US Geological Survey Scientific Investigations Report 2017-5021; 2017. p. 1–87.Google Scholar
- Blackwell DD. Regional implications of heat flow of the Snake River Plain, northwestern United States. Tectonophysics. 1989;164:323–43.View ArticleGoogle Scholar
- Blackwell DD, Richards M. Geothermal map of North America. American Association of Petroleum Geologists. 2004. (scale 1:6,500,000).Google Scholar
- Blackwell DD, Spafford RE. Experimental methods in continental heat flow. In: Sammis CG, Henyey TL, editors. Experimental methods in physics—Geophysics, part B—field measurements, 24. Cambridge: Academic Press; 1987. p. 189–26.Google Scholar
- Craig H. Isotopic variations in meteoric waters. Science. 1961;133:1702–3.View ArticleGoogle Scholar
- Delahunty C, Nielson DL, Shervais JW. Coring of three deep geothermal holes, Snake River Plain, Idaho. Geotherm Res Counc Trans. 2012;36:641–7.Google Scholar
- Ferguson G, Grasby SE, Hindle SR. What do aqueous geothermometers really tell us? Geofluids. 2009;9:39–48.View ArticleGoogle Scholar
- Förster A. Analysis of borehole temperature data in the Northeast German Basin: continuous logs versus bottom-hole temperatures. Pet Geosci. 2001;7:241–54.View ArticleGoogle Scholar
- Fournier RO. Silica in thermal waters: Laboratory and field investigations. Biogeochemistry. 1973. p. 122–39.Google Scholar
- Fournier RO. Chemical geothermometers and mixing models for geothermal systems. Geothermics. 1977;5:41–50.View ArticleGoogle Scholar
- Fournier RO. A revised equation for Na–K geothermometer. Geotherm Res Counc Trans. 1979;3:221–4.Google Scholar
- Fournier RO, Potter RW. Magnesium correction to the Na–K–Ca chemical geothermometer. Geochim Cosmochim Acta. 1979;43:1543–50.View ArticleGoogle Scholar
- Fournier RO, Truesdell A. An empirical Na–K–Ca geothermometer for natural waters. Geochim Cosmochim Acta. 1973;37:1255–75.View ArticleGoogle Scholar
- Garabedian SP. Hydrology and digital simulation of the regional aqifer system, eastern Snake River Plain, Idaho. US Geological Survey Professional Paper 1408-F; 1992. p. 1–102.Google Scholar
- Giggenbach WF. Geothermal solute equilibria: derivation of Na–K–Mg–Ca geoindicators. Geochim Cosmochim Acta. 1988;52:2749–65.View ArticleGoogle Scholar
- Greeley R. The style of basaltic volcanism in the eastern SRP, Idaho. In: Bonnichsen B, Breckenridge RM, editors. Cenozoic Geology of Idaho. Idaho Bureau of Mines and Geology Bulletin 26; 1982. p. 407-21.Google Scholar
- Gupta HK, Roy S. Geothermal energy: an alternative resource for the 21st century. Amsterdam: Elsevier; 2007.Google Scholar
- Harris RN, Chapman DS. Stop-go temperature logging for precision applications. Geophysics. 2007;72:E119–23.View ArticleGoogle Scholar
- Johnston RM. The conversion of smectite to illite in hydrothermal systems: a literature review. Atomic Energy of Canada Limited; 1983. (ACEL-7792).Google Scholar
- Kharaka Y, Mariner R. Chemical geothermometers and their application to formational waters from sedimentary basins. In: Naeser ND, McCulloch T, editors. Thermal history of sedimentary basins: methods and case histories. New York: Springer Verlag; 1989. p. 99–117.View ArticleGoogle Scholar
- Knobel LL, Bartholomay L, Dewayne C, Tucker BJ, Wegner SJ. Chemical constituents in the dissolved and suspended fractions of the groundwater from selected sites. US Geological Survey Open-File Report 92–51; 1992. p. 1–56.Google Scholar
- Kuntz MA, Covington HR, Schorr LJ. An overview of basaltic volcanism of the eastern Snake River Plain, Idaho. In: Regional geology of eastern Idaho and western Wyoming. Boulder: Geological Society of America Memoir 179; 1992. p. 227–67.Google Scholar
- Leeman WP. Olivine tholeiitic basalts of the SRP, Idaho. In: Bonnichsen B, Breckenridge RM, editors. Cenozoic geology of Idaho. Idaho Bureau of Mines and Geology Bulletin 26; 1982. p. 181–91.Google Scholar
- Link PK, Lewis RS, Khan S, Schmidt K, Ames D. Digital geology of Idaho—Basin & range province—Tertiary extension. Geology 456–556, module 9. Pocatello: Idaho State University; 2007.Google Scholar
- Mann LJ, Knobel LL. Radionuclides, metals, and organic compounds in water, eastern part of A & B Irrigation District, Minidoka County, Idaho. US Geological Survey Open-File Report 90-191 (DOE/ID-22087); 1990. p.1–36.Google Scholar
- Marini L. Geochemical techniques for the exploration and exploitation of geothermal energy. Genova: Universita degli di Genova; 2004.Google Scholar
- McLing TL. The Pre-anthropogenic groundwater evolution at the Idaho National Engineering Laboratory site, Idaho. MSc Thesis. Pocatello: Idaho State University; 1994.Google Scholar
- McLing TL, Smith R, Johnson TM. Chemical characteristics of thermal water beneath the eastern SRP. In: Link PK, Mink LL, editors. Geology, hydrogeology, and environmental remediation: Idaho National Engineering and Environmental Laboratory, eastern Snake River Plain, Idaho. Boulder: Geological Society of America Special Paper 353; 2002. p. 205–11.Google Scholar
- McLing TL, Smith RP, Smith RW, Blackwell DD, Roback RC, Sondrup AJ. Wellbore and groundwater temperature distribution eastern Snake River Plain, Idaho: implications for groundwater flow and geothermal potential. J Volcanol Geoth Res. 2016;320:144–55.View ArticleGoogle Scholar
- Morse LH, McCurry M. Genesis of alteration of quaternary basalts within a portion of the eastern SRP aquifer. In: Link PK, Mink LL, editors. Geology, hydrogeology, and environmental remediation: Idaho National Engineering and Environmental Laboratory, eastern Snake River Plain, Idaho. Boulder: Geological Society of America Special Paper 353; 2002. p. 213–24.Google Scholar
- Nielson DL, Delahunty C, Shervais JW. Geothermal systems in the Snake River Plain, Idaho, characterized by the Hotspot Project. Geotherm Res Counc Trans. 2012;36:727–30.Google Scholar
- Orr BR. Geohydrology of the Idaho National Engineering and Environmental Laboratory, eastern Snake River Plain, Idaho. US Geological Survey Fact Sheet FS-130-97; 1997.Google Scholar
- Paces T. A systematic deviation from Na–K–Ca geothermometer below 75 °C and above 10–4 atm Pco2. Geochim Cosmochim Acta. 1975;39:541–4.View ArticleGoogle Scholar
- Potter KE, Bradshaw R, Sant CJ, King J, Shervais JW, Christiansen E. Project Hotspot: insight into the subsurface stratigraphy and geothermal potential of the Snake River Plain. Geotherm Res Counc Trans. 2011;35:967–71.Google Scholar
- Potter KE. The Kimama core: A 6.4 Ma record of volcanism, sedimentation, and magma petrogenesis on the axial volcanic high, Snake River Plain, ID. PhD Dissertation. Logan: Utah State University; 2014.Google Scholar
- Shervais JW, Nielson D, Evans JP, Lachmar TE, Christiansen E, Morgan L, Shanks WCP, Delahunty C, Schmitt DR, Liberty LM, Blackwell DD, Glen JM, Kessler JE, Potter KE, Jean MM, Sant CJ, Freeman TG. Hotspot: the Snake River Plain geothermal drilling project—initial report. Geotherm Res Counc Trans. 2012;36:767–72.Google Scholar
- Shervais JW, Schmitt DR, Nielson DL, Evans JP, Christiansen EH, Morgan L, Shanks WCP, Lachmar T, Liberty LM, Blackwell DD, Glen JM, Champion D, Potter KE, Kessler JA. First results from HOTSPOT: the Snake River Plain scientific drilling project, Idaho, USA. Sci Drill. 2013;15:36–45.View ArticleGoogle Scholar
- Shervais JW, Evans JP, Schmitt D, Christiansen EH, Prokopenko A. HOTSPOT: the Snake River scientific drilling project. EOS Trans Am Geophys Union. 2014;95:85–6.View ArticleGoogle Scholar
- Smith RB. Geologic setting of the Snake River Plain aquifer and vadose zone. Vadose Zone J. 2004;3:47–58.View ArticleGoogle Scholar
- Smith RB, Braile LW. The Yellowstone hotspot. J Volcanol Geoth Res. 1994;61:121–87.View ArticleGoogle Scholar
- Smith RB, Jordan M, Steinberger B, Puskas CM, Farrell J, Waite GP, Husen S, Chang W, O’Connell R. Geodynamics of the Yellowstone hotspot and mantle plume: seismic and GPS imaging, kinematics, and mantle flow. J Volcanol Geoth Res. 2009;188:26–56.View ArticleGoogle Scholar
- Tomasson J, Smarason OB. Developments in geothermal energy. In: Jones GP, Downing RA, editors. Hydrogeology in the service of man. Proceedings International Association of Hydrogeologists, Cambridge. 1985. p. 8–13.Google Scholar
- Twining BV, Bartholomay RC. Geophysical logs and water-quality data collected for boreholes Kimama-1A and -1B, and a Kimama water supply well near Kimama, southern Idaho. US Geological Survey Data Series 622 (DOE//ID 22215); 2011. p. 1–18 (plus appendix).Google Scholar
- US Geological Survey. National water summary, 1984: Hydrologic events, selected water-quality trends, and ground-water resources. US Geological Survey Water-Supply Paper 2275. 1985. p.1–467.Google Scholar
- Welhan JA, Johannesen CM, Davis LL, Reeves KS, Glover JA. Overview and synthesis of lithologic controls on aquifer heterogeneity in the eastern Snake River Plain, Idaho. In: Bonnichsen B, White C, McCurry M, editors. Tectonic and magmatic evolution of the Snake River Plain volcanic province. Idaho Geological Survey Bulletin 30; 2002a. p. 455–60.Google Scholar
- Welhan JA, Johannesen CM, Reeves KS, Clemo TM, Glover JA, Bosworth KW. Morphology of inflated pahoehoe lavas and spatial architecture of their porous and permeable zones, eastern SRP, Idaho. In: Link PK, Mink LL, editors. Geology, hydrogeology, and environmental remediation: Idaho National Engineering and Environmental Laboratory, eastern Snake River Plain, Idaho. Boulder: Geological Society of America Special Paper 353; 2002b. p. 135–50.Google Scholar | <urn:uuid:8792c94d-e605-44a5-8a6f-bea55d77a2c2> | 2.6875 | 9,063 | Truncated | Science & Tech. | 52.837165 | 95,576,483 |
While the first NASA Commercial Resupply Services (CRS) flight to the International Space Station is historic, the delivery -- and, more importantly, the return -- of science samples is pivotal. Since the retirement of the shuttles, the only way to return cargo from the space station has been the Russian Soyuz vehicle, which offers even more limited cold stowage -- but not anymore. Space Exploration Technologies Corp., or SpaceX, is now able to provide this service as well.
The SpaceX CRS-1 mission, using the Falcon 9 rocket and Dragon cargo spacecraft, launched Oct. 7, with a planned station docking date of Oct. 10. This is the first of 12 contracted flights to resupply the station, providing a new U.S. capability to deliver and return cargo -- including science investigations, particularly those that require cold stowage. A successful demonstration flight to the station was completed in May.
Dragon carries approximately 882 pounds of cargo, including equipment and supplies for the 166 planned investigations for the Expedition 33 timeframe; 63 of these are new investigations. These supplies will support investigations for multiple research disciplines.
Dragon will return to Earth with close to 866 pounds of scientific supplies -- including samples from research involving human health, biotechnology, and materials research, along with educational investigations and approximately 518 pounds of station hardware. Though resupply is important, this significant science sample return capability is especially exciting for the research community.
This return capability will considerably enhance the amount of science that can be done, as it allows samples to return home in a timely manner for analysis. It also frees up cold stowage space aboard the station for new samples and continued research, as many samples have been kept in cold stowage until a ride home was available. Often, researchers cannot complete analyses until the on-orbit samples are returned and evaluated.
"While some of this data can be obtained by on orbit analysis, many analysis techniques have not been miniaturized or modified to allow them to be performed on orbit, which means sample return is the only way to obtain this data," said Marybeth Edeen, deputy manager for the International Space Station Research Integration Office.
Though manifests can change, currently there are many research samples from a variety of investigations scheduled for return when the Dragon spacecraft returns to Earth.
Several ongoing investigations -- Nutrition, Pro-K, Repository, and Bisphosphonates -- have samples returning on Dragon, providing additional data for researchers working on these investigations. All four of these studies involve understanding changes to the human body when exposed to microgravity conditions, especially for long durations. Specifically, this research looks at bone health, blood chemistry and hormonal changes, as well as oxidative damage and human physiological changes and adaptation.
"Biological samples from the crew and life sciences experiments are frozen on orbit and returned to allow researchers to understand exactly what changes occurred as a result of the exposure to microgravity," Edeen said.
Information from these studies assists with determining ways to counteract these changes, while helping to define nutritional requirements and develop food systems for future exploration missions. Already data from this research, and previous similar studies, have resulted in new information that should prove beneficial to astronauts during extended space flights, as well as people on Earth with similar health conditions.
Another investigation with returning samples is Plant Signaling, which involves studying how microgravity affects plant growth. The goal is to understand the molecular mechanisms plants use to sense and respond to changes in their environment. Seeds are stored in seed cassettes for transport to the station. Seeds stay in the cassette during the investigation phase, with the crew then storing the sample cassettes in the MELFI freezer for return to Earth and the researchers' labs for analysis. This research is important in providing sustainable resources for future missions -- food, carbon dioxide removal and oxygen generation. Other benefits could include improved crop production on Earth, based on a better understanding of how plants grow.
The Astronaut's Energy Requirements for Long-Term Space Flight investigation, known as ENERGY, measures changes in energy balance in crew members following a long-duration flight. Returning samples from several of the subjects will contribute to information about energy expenditure in microgravity, leading to an improved understanding of the nutrition required for extended stays in space.
NASA's teenage researchers are also included on this notable flight. The winners of the YouTube Space Lab contest will be getting results from their investigations once they have analyzed their returned samples. These young scientists designed their experiments to look at how microgravity affects the predatory habits of jumping spiders, including any resulting adaptation, and possible microgravity effects on the anti-fungal properties of Bacillus subtilis.
Twenty-three investigations, involving students from many communities across the country, are on the manifest for delivery to the station and return home on the Dragon. These student research projects are part of The Student Spaceflight Experiments Program (SSEP). This program gives students the chance to design and propose investigations that study the effects of microgravity on physical, chemical and biological systems.
Other new investigations being delivered aboard the Dragon include one involving Candida albicans, the most common human fungal pathogen and the cause of yeast infections. The Genotypic and Phenotypic Responses of Candida albicans to Spaceflight, or Micro-6, investigation will also be returned when Dragon returns to Earth. This opportunistic fungus causes superficial to life-threatening infections, as the yeast becomes reactive when conditions are just right. This is the first flight for this particular type of research, allowing scientists to study how microgravity affects an astronaut's health risk from this type of infection.
With SpaceX's vehicle return capability, NASA and its international partners and other private, commercial and academic researchers are now able to get science samples needed to complete analyses back in a much timelier manner. This not only allows for more science experimentation on the orbiting laboratory, but also potentially quicker results that may lead to greater discoveries. There are some amazing activities going on aboard the space station. | <urn:uuid:c6defa57-69e2-4e1e-809b-da04f779b5d8> | 3.28125 | 1,228 | News (Org.) | Science & Tech. | 18.782291 | 95,576,488 |
The Sun's Great Conveyor Belt has slowed to a record-low crawl, according to research by NASA solar physicist David Hathaway. "It's off the bottom of the charts," he says. "This has important repercussions for future solar activity."
The Great Conveyor Belt is a massive circulating current of fire (hot plasma) within the Sun. It has two branches, north and south, each taking about 40 years to perform one complete circuit. Researchers believe the turning of the belt controls the sunspot cycle, and that's why the slowdown is important.
"Normally, the conveyor belt moves about 1 meter per second—walking pace," says Hathaway. "That's how it has been since the late 19th century." In recent years, however, the belt has decelerated to 0.75 m/s in the north and 0.35 m/s in the south. "We've never seen speeds so low."
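As a rough sanity check (my own back-of-the-envelope arithmetic, not from the article), the quoted figures fit together: a ~40-year circuit at the historical ~1 m/s pace implies a loop of roughly 1.3 million kilometers, and if that loop length stays fixed, the newly measured speeds stretch the circuit time considerably.

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600

historical_speed = 1.0   # m/s, the "walking pace" quoted in the article
circuit_years = 40       # years for one full circuit of a branch

# Implied length of one conveyor-belt loop at the historical pace
loop_length = historical_speed * circuit_years * SECONDS_PER_YEAR  # meters
print(f"implied loop length: {loop_length / 1e9:.2f} million km")  # ~1.26

# If the loop length is fixed, the measured slowdown stretches the circuit time
for branch, speed in [("north", 0.75), ("south", 0.35)]:
    years = loop_length / speed / SECONDS_PER_YEAR
    print(f"{branch} branch at {speed} m/s -> ~{years:.0f} years per circuit")
```

At the southern branch's 0.35 m/s, one circuit would take well over a century, which underlines why Hathaway calls the slowdown "off the bottom of the charts."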
According to theory and observation, the speed of the belt foretells the intensity of sunspot activity ~20 years in the future. A slow belt means lower solar activity; a fast belt means stronger activity.
"The slowdown we see now means that Solar Cycle 25, peaking around the year 2022, could be one of the weakest in centuries," says Hathaway.
This is interesting news for astronauts. Solar Cycle 25 is when the Vision for Space Exploration should be in full flower, with men and women back on the Moon preparing to go to Mars. A weak solar cycle means they won't have to worry so much about solar flares and radiation storms.
On the other hand, they will have to worry more about cosmic rays. Cosmic rays are high-energy particles from deep space; they penetrate metal, plastic, flesh and bone. Astronauts exposed to cosmic rays develop an increased risk of cancer, cataracts and other maladies. Ironically, solar explosions, which produce their own deadly radiation, sweep away the even deadlier cosmic rays. As flares subside, cosmic rays intensify—yin, yang.
Hathaway's prediction should not be confused with another recent forecast: A team led by physicist Mausumi Dikpati of NCAR has predicted that Cycle 24, peaking in 2011 or 2012, will be intense. Hathaway agrees: "Cycle 24 will be strong. Cycle 25 will be weak. Both of these predictions are based on the observed behavior of the conveyor belt."
How do you observe a belt that plunges 200,000 km below the surface of the sun?
"We do it using sunspots," Hathaway explains. Sunspots are magnetic knots that bubble up from the base of the conveyor belt, eventually popping through the surface of the sun. Astronomers have long known that sunspots have a tendency to drift—from mid solar latitudes toward the sun's equator. According to current thinking, this drift is caused by the motion of the conveyor belt. "By measuring the drift of sunspot groups," says Hathaway, "we indirectly measure the speed of the belt."
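A back-of-the-envelope version of that indirect measurement is straightforward: convert a sunspot group's latitude drift into an arc length along the solar surface and divide by the elapsed time. The sketch below is illustrative only; the drift figures are invented for the example, not Hathaway's data.

```python
import math

R_SUN = 6.96e8  # solar radius in metres

def belt_speed(lat_start_deg, lat_end_deg, days):
    """Surface speed (m/s) implied by a sunspot group's equatorward drift."""
    arc = math.radians(abs(lat_start_deg - lat_end_deg)) * R_SUN
    return arc / (days * 86400)

# A group drifting from 20 degrees to about 17.4 degrees latitude over a
# year works out to roughly the 1 m/s "walking pace" quoted above.
print(round(belt_speed(20.0, 17.4, 365), 2))
```

Note that this treats the drift as motion along a meridian of a sphere, which is all the precision a sanity check like this needs.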
Using historical sunspot records, Hathaway has succeeded in clocking the conveyor belt as far back as 1890. The numbers are compelling: For more than a century, "the speed of the belt has been a good predictor of future solar activity."
If the trend holds, Solar Cycle 25 in 2022 could be, like the belt itself, "off the bottom of the charts."
Source: Science@NASA, by Dr. Tony Phillips
September 3 2013 Astronomy Newsletter
Here's the latest article from the Astronomy site at BellaOnline.com.
Maria Mitchell was a true pioneer woman. She didn’t brave a physical wilderness. Hers was the harder job of pioneering higher education for women. She was the first American woman to discover a comet, the first to be elected to scientific societies and the first woman professor of astronomy.
Maria Mitchell loathed the idea that women should only do what was considered women’s work. In particular, she felt the injustice of women having to dedicate so much time to such work and be excluded from the life of the intellect.
She wrote that “the dressmaker should no more be a universal character than the carpenter. Suppose every man should feel it is his duty to do his own mechanical work of *all* kinds, would society be benefited? Would the work be well done? Yet a woman is expected to know how to do all kinds of sewing, all kinds of cooking, all kinds of any *woman’s* work, and the consequence is that life is passed in learning these only, while the universe of truth beyond remains unentered.”
*August – time for the Mars-as-big-as-the-full-Moon nonsense*
Once again, August comes and there's an email or Facebook posting about how Mars is the closest it's been to us in thousands of years and will look as big as the full Moon. Then September comes and, of course, it hasn't happened. If you were about half a million miles from Mars, you might see that. But even at its closest, Mars is some 35 million miles from Earth, so if we should ever see Mars looking that big, we are in Big Trouble.
At opposition in August 2003, when Mars was at its nearest to us, it looked like an exceptionally bright red star. You could have seen it magnified in a small telescope, looking as big as the full Moon does to your unaided eye. Currently, Mars is on the other side of the Sun, so it's even farther away than usual.
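The half-a-million-mile figure is easy to verify with small-angle arithmetic: divide Mars's diameter by the Moon's apparent angular size. The values below are round approximations, so treat the result as an order-of-magnitude check rather than a precise distance.

```python
import math

MARS_DIAMETER_MILES = 4212      # approximate equatorial diameter
MOON_ANGULAR_SIZE_DEG = 0.52    # typical apparent diameter of the full Moon

# Distance at which Mars would subtend the same angle as the full Moon
distance = MARS_DIAMETER_MILES / math.radians(MOON_ANGULAR_SIZE_DEG)
print(f"{distance:,.0f} miles")  # roughly 460,000 -- about half a million
```

Compare that with the 35-million-mile minimum distance and you can see why the annual email is off by a factor of about seventy.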
This photo was taken just before the Mars opposition in 2003 when it was comparatively close to us. It shows [url=http://pinterest.com/pin/250090585530960107/]the Moon and Mars in the sky[/url]. (We call it a conjunction when you can see two heavenly bodies appear to be close together.)
September 1st was the first day of meteorological autumn. Astronomical autumn is a few weeks behind. We’ll have the autumnal equinox on September 22, a few days after the Harvest Moon. (Remember this is the fourth full Moon this season.)
For a variety of astronomy images, follow me on Pinterest at: http://pinterest.com/astrobella/
To participate in online discussions, this site has a community forum all about Astronomy located here - http://forums.bellaonline.com/ubbthreads.php?ubb=postlist&Board=323
Please visit astronomy.bellaonline.com for even more great content about Astronomy.
I hope to hear from you sometime soon, either in the forum or in response to this email message. I welcome your feedback!
Do pass this message along to family and friends who might also be interested. Remember it's free and without obligation.
I wish you clear skies.
Mona Evans, Astronomy Editor
One of hundreds of sites at BellaOnline.com
Unsubscribe from the Astronomy Newsletter
Online Newsletter Archive for Astronomy Site
Master List of BellaOnline Newsletters
Editor's Picks Articles
Top Ten Articles | <urn:uuid:ed0a9763-024d-4c29-96d4-5764293fa2b3> | 3.1875 | 763 | News (Org.) | Science & Tech. | 57.930235 | 95,576,505 |
This is the Manda Kriya, the reduction to the heliocentric coordinate system, given by the equation R e sin M, where e is the eccentricity, R is the radius of the circle, and M is the anomaly.
The Manda Anomaly is found out by subtracting Aphelion, Mandoccha from the Mean Longitude.
Mean Longitude - Aphelion = Manda Anomaly.
R = 360/(2 Pi) = 180/Pi, approximately 57.2958 degrees, and 57.2958 * 60 * 60 is approximately 206265 seconds, or vikalas.
The amount thus obtained is added to the mean longitude if ML > 180 and deducted if ML < 180.
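Put together, the procedure described above fits in a few lines. This is a sketch, not a complete planetary-longitude routine; the function name and the test values are illustrative, and the sign rule is applied to the anomaly M (sin M changes sign at 180 degrees), which is the usual reading of the add/deduct rule.

```python
import math

R_ARCSEC = 360 * 3600 / (2 * math.pi)  # ~206265 vikalas (arcseconds) per radian

def manda_correction(mean_longitude_deg, aphelion_deg, eccentricity):
    """Return (anomaly M in degrees, correction in arcseconds) per R*e*sin(M)."""
    anomaly = (mean_longitude_deg - aphelion_deg) % 360.0  # ML - aphelion
    correction = R_ARCSEC * eccentricity * math.sin(math.radians(anomaly))
    # sin(M) is positive for M < 180 and negative for M > 180, so subtracting
    # `correction` from the mean longitude reproduces the deduct/add rule.
    return anomaly, correction
```

For example, with a mean longitude of 100 degrees, an aphelion of 10 degrees, and an eccentricity of 0.0167, the anomaly is 90 degrees and the correction is at its maximum, R times e.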
The Kerala astronomers predate their Western counterparts, particularly Kepler and Laplace by centuries. In his Thesis " The Model of Planetary Motion in the works of the Kerala astronomers", Prof Ramasubramaniam, Physics Professor, Madurai University observes " In conclusion, it may be noted that there is a vast literature on astronomy, including mathematics, both in Sanskrit and Malayalam produced by the Kerala School during the period 14-19th century. Only a fraction of it has been published and so far only a few studies of these texts have appeared ".
The era and the astronomer
Parameswara - 15th century
Neelakanta - 16th century | <urn:uuid:84036833-0fbb-4574-8d7e-a462e84fe1c5> | 3.40625 | 279 | Personal Blog | Science & Tech. | 40.975502 | 95,576,507 |
Astronomy April Fools
Astronomy Picture of the Day
Astronomy Picture of the Day (APOD) likes to make little jokes on April 1, usually re-captioning existing images. For example, on April 1, 2003 it reported that a new constellation was surprising star gazers. “The constellation of Ollie the Owl has suddenly started dominating the southern hemisphere.” The picture showed a bird perching on the Tololo All Sky Camera, and APOD admitted that it would have been funnier if the bird hadn't scratched the plastic dome.
However on March 31, 2005, APOD showed the next day's picture as “water on Mars . . .” which did leave people wondering. This was before orbiters and rovers had gathered considerable evidence of water on the red planet. The April 1 picture was of a glass of water on top of a Mars bar.
On March 31, 2012 NASA provided the “discovery image” of a moon for Mercury, as captured by the MESSENGER spacecraft. Even on the eve of April 1, this was credible – after all, spacecraft often discover moons. But like many good April Fools, there are clues.
Firstly, the enlarged picture of the moon is immediately recognizable to many astronomy buffs. It's a well-known image of asteroid 243 Ida, taken by the Galileo spacecraft on its way to Jupiter.
Secondly, they outlined the plan to collide MESSENGER with the moon to knock it free of Mercury's gravity and “set it on an Earth-crossing trajectory suitable for recovery as a Mercury meteorite.” In fact, they'd do this with such precision that the moon would arrive at a remote location in Antarctica, avoiding population centers. Whew! Pretty impressive planning for something that had only been discovered the day before!
And finally, there's a mission proposal in the planning stage for X-ray analysis of Mercury's surface. Its name: Hermean On-surface Analysis with X-rays. (What's its acronym?)
April Fool in space, 2010. The three-man crew of the International Space Station got a laugh out of Mission Control with a doctored photo of themselves “spacewalking” - not wearing space suits, but slacks, T-shirts and sunglasses. (Hope they remembered their sunblock!)
On April 1, 2013 Canadian astronaut Chris Hadfield tweeted a picture of himself with two “space grenades”, which were actually air sampling devices. And during the day he tweeted a series of images of an unidentified object nearing the Space Station. I imagine his followers had worked out the April Fool long before the final picture of him with a little green alien. "I don't know what it is or what it wants, but it keeps repeating 'Sloof Lirpa' over and over. Alert the press." (The alien message isn't so strange if you read it backwards.)
The Jovian-Plutonian Gravitational Effect
The online Museum of Hoaxes lists Patrick Moore's April Fool as one of the 100 Best Hoaxes. The popular British astronomer, with the help of BBC radio, explained the Jovian-Plutonian Gravitational Effect to listeners on April 1, 1976. He said that at 9:47 a.m., a rare conjunction of Jupiter and Pluto would partially negate Earth's gravity and that if you jumped at that time, you would get a floating feeling.
A number of people later phoned the BBC to describe their experiences of floating. I don't know if they were serious or were joining in on the joke.
There was a serious point to Moore's joke. Jupiter is massive, but it's far away from us. Our Moon has more of a gravitational effect on Earth than Jupiter does. As for Pluto, it's smaller than the Moon and about six times as far away from us as Jupiter. Moore was ridiculing a popular book that predicted the dire consequences of a rare planetary alignment that would happen in 1982. All of the planets would be on the same side of the Sun as the Earth and the tidal effects would create massive earthquakes. In particular, Los Angeles would be destroyed.
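Moore's point can be checked with the standard tidal scaling, in which a body's tidal influence is proportional to its mass divided by the cube of its distance. The masses and distances below are round textbook values, so the ratio is only approximate.

```python
MOON_MASS_KG, MOON_DIST_M = 7.35e22, 3.84e8
JUPITER_MASS_KG, JUPITER_DIST_M = 1.90e27, 5.9e11  # roughly at closest approach

def tidal_term(mass_kg, dist_m):
    """Tidal influence scales as mass / distance**3."""
    return mass_kg / dist_m ** 3

ratio = tidal_term(MOON_MASS_KG, MOON_DIST_M) / tidal_term(JUPITER_MASS_KG, JUPITER_DIST_M)
print(f"Moon's tidal effect is ~{ratio:,.0f} times Jupiter's")
```

Even with Jupiter at its nearest, the Moon's tidal effect on Earth comes out on the order of a hundred thousand times larger, which is why no planetary alignment can negate Earth's gravity or shake Los Angeles into the sea.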
Without detailing the reasons why this wouldn't happen, I'll just note that Los Angeles is still there.
On April 1, 2011 Richard Branson announced that he had bought Pluto and had a plan to get it reinstated as a planet. Virgin Galactic would construct a space vehicle to drag asteroids and assorted space debris to the dwarf planet until it was big enough to qualify as a planet. Branson said, "Virgin has expanded into many territories over the years, but we have never had our own planet before. This could pave the way for a new age in space tourism."
It might need a bit longer than Branson's projected five years. NASA's New Horizons spacecraft took over nine and a half years to get to Pluto. And that's without any asteroids in tow.
I'll end with a story that wasn't a hoax. On March 30, 2012 NASA announced that asteroid 2012 EG5 would come close to Earth on April 1. But there was no chance of its hitting Earth. I wonder if they felt a bit defensive about it when they put out the press release. You can imagine their wanting to add: “No, really.”
You Should Also Read:
Gravity - Cosmic Glue
Mercury Facts for Kids
Could You Survive Unprotected in Space?
Content copyright © 2018 by Mona Evans. All rights reserved.
This content was written by Mona Evans. If you wish to use this content in any manner, you need written permission. Contact Mona Evans for details. | <urn:uuid:09cb9623-96c1-4499-bd84-1215da54d393> | 2.921875 | 1,190 | Personal Blog | Science & Tech. | 59.341468 | 95,576,529 |
Category:Solutions by Library
Libraries are software which extend the functionality of a programming language, usually by providing an API to complete a specific task. Different languages may have their own name for libraries, such as Perl modules, or Java packages.
Many programming examples on Rosetta Code make use of libraries.
This category has the following 221 subcategories, out of 221 total. | <urn:uuid:d3ef85f5-382b-49d5-a4a3-5c5290476401> | 2.765625 | 79 | Content Listing | Software Dev. | 20.537984 | 95,576,545 |
PORTLAND, Ore. — A man captured mesmerizing video of a rare event in Oregon.
Steve Andrijiw captured video of countless butterflies migrating through the Pacific Northwest in August.
Andrijiw identified the swirling butterflies as California Tortoiseshells.
The footage was shot atop Mt. Scott at Crater Lake, Oregon.
“This event happens every 5-6 years. The Spruce Lake fires shrouded the view of Crater Lake but Mother Nature treated us to an amazing show,” Andrijiw wrote.
The species are known for having population explosions that cause the butterflies to migrate to new areas.
“Breeding localities in summer vary widely from year to year — sometimes in the high southern Sierra, sometimes in the Cascades … sometimes only in far northeastern California or even farther north,” the University of California Davis website states.
03 December 2013
Researchers at the University of Pittsburgh Swanson School of Engineering claim to have developed computational models to design a new polymer gel that would enable complex materials to regenerate themselves.
The article, “Harnessing Interfacially-Active Nanorods to Regenerate Severed Polymer Gels” was published on 19 November in the American Chemical Society journal Nano Letters.
The University states that the principal investigator is Anna C. Balazs, PhD, the Swanson School’s Distinguished Robert v. d. Luft Professor of chemical and petroleum engineering, and co-authors are Xin Yong, PhD, postdoctoral associate, who is the article’s lead author; Olga Kuksenok, PhD, Research Associate Professor; and Krzysztof Matyjaszewski, PhD, J.C. Warner University Professor of Natural Sciences, Department of Chemistry at Carnegie Mellon University.
“This is one of the holy grails of materials science,” noted Dr. Balazs. “While others have developed materials that can mend small defects, there is no published research regarding systems that can regenerate bulk sections of a severed material. This has a tremendous impact on sustainability because you could potentially extend the lifetime of a material by giving it the ability to regrow when damaged.”
The research team says it was inspired by biological processes in species such as amphibians, which can regenerate severed limbs. This type of tissue regeneration is guided by three critical instruction sets – initiation, propagation, and termination – which Dr. Balazs describes as a “beautiful dynamic cascade” of biological events.
“When we looked at the biological processes behind tissue regeneration in amphibians, we considered how we would replicate that dynamic cascade within a synthetic material,” Dr. Balazs said. “We needed to develop a system that first would sense the removal of material and initiate regrowth, then propagate that growth until the material reached the desired size and then, self-terminate the process.”
“Our biggest challenge was to address the transport issue within a synthetic material,” Dr. Balazs said. “Biological organisms have circulatory systems to achieve mass transport of materials like blood cells, nutrients and genetic material. Synthetic materials don’t inherently possess such a system, so we needed something that acted like a sensor to initiate and control the process.”
The team developed a hybrid material of nanorods embedded in a polymer gel, which is surrounded by a solution containing monomers and cross-linkers (molecules that link one polymer chain to another) in order to replicate the dynamic cascade. When part of the gel is severed, the nanorods near the cut act as sensors and migrate to the new interface. The functionalised chains or “skirts” on one end of these nanorods keep them localised at the interface, and the sites (or “initiators”) along the rod’s surface trigger a polymerisation reaction with the monomer and cross-linkers in the outer solution. Drs. Yong and Kuksenok developed the computational models, and thereby established guidelines to control the process so that the new gel behaves and appears like the gel it replaced, and to terminate the reaction so that the material would not grow out of control.
Drs. Balazs, Kuksenok and Yong also credit Krzysztof Matyjaszewski, who contributed toward the understanding of the chemistry behind the polymerisation process. "Our collaboration with Prof. Matyjaszewski was exceptionally valuable in allowing us to accurately account for all the complex chemical reactions involved in the regeneration processes" said Dr. Kuksenok.
“The most beautiful yet challenging part was designing the nanorods to serve multiple roles,” Dr. Yong said. “In effect, they provide the perfect vehicle to trigger a synthetic dynamic cascade.” The nanorods are approximately ten nanometres in thickness, about 10,000 times smaller than the diameter of a human hair.
In the future, the researchers plan to improve the process and strengthen the bonds between the old and newly formed gels, and for this they were inspired by another nature metaphor, the giant sequoia tree. “One sequoia tree will have a shallow root system, but when they grow in numbers, the root systems intertwine to provide support and contribute to their tremendous growth,” Dr. Balazs explains. Similarly, the skirts on the nanorods can provide additional strength to the regenerated material.
The next generation of research would further optimise the process to grow multiple layers, creating more complex materials with multiple functions.
Access a range of climate-related reports issued by government agencies and scientific organizations, described below.
Indiana’s climate is changing. Temperatures are rising, more precipitation is falling, and the last spring frost of the year has been getting steadily earlier. This report describes historical climate trends from more than a century of data and future projections that detail the ways in which our climate will continue to change.
Coastal flooding in the United States is already occurring and the risk of flooding is expected to grow in most coastal regions, in part due to climate change. The Centers for Disease Control and Prevention developed this booklet, aimed at the general public, that identifies steps people can take to prepare for the health risks associated with coastal flooding. The booklet answers some of the key questions about coastal flooding in a changing climate: why these events are on the rise; how it might affect health; and what people can do before, during, and after a coastal flooding event to stay safe. Scientific information used in the document is derived from peer-reviewed synthesis and assessment products, including those published by the U.S. Global Change Research Program and the Intergovernmental Panel on Climate Change, as well as other peer-reviewed sources and federal agency resources.
This user-friendly summary is based on the 2015 report “City of Long Beach Climate Resiliency Assessment Report" and “Appendices” prepared by the Aquarium of the Pacific at the request of Mayor Robert Garcia. The report includes clear infographics that describe current and projected conditions in the city. It also describe what the city is currently doing and what else the city and its residents can do.
This guide provides recommendations for effective education and communication practices when working with different types of audiences. While effective education has been traditionally defined as the acquisition of knowledge, Climate Change Education Partnership (CCEP) Alliance programs maintain a broader definition of “effective” to include the acquisition and use of climate-change knowledge to inform decision making. The CCEP Alliance is supported by the National Science Foundation to advance exemplary climate change education through research and practice.
These state summaries were produced to meet a demand for state-level information in the wake of the Third U.S. National Climate Assessment, released in 2014. The summaries cover assessment topics directly related to NOAA’s mission, specifically historical climate variations and trends, future climate model projections of climate conditions during the 21st century, and past and future conditions of sea level and coastal flooding. Click on each state to see key messages, figures, and and a summary of climate impacts in your state.
Climate change affects human health by making extreme heat more common, more severe, and last longer. That is expected to continue into the future. This handbook explains the connection between climate change and extreme heat events, and outlines actions citizens can take to protect their health during extreme heat. This resource builds on the 2006 Excessive Heat Events Guidebook from the Environmental Protection Agency (EPA), and includes up-to-date climate information from recent climate assessment reports, such as the 2014 Third National Climate Assessment, the 2016 Impacts of Climate Change on Human Health in the United States, and EPA’s 2016 Climate Change Indicators in the United States.
This report features observed trend data on 37 climate indicators, including U.S and global temperatures, ocean acidity, sea level, river flooding, droughts, and wildfires. It documents rising temperatures, shifting patterns of snow and rainfall, and increasing numbers of extreme climate events, such as heavy rainstorms and record high temperatures. Many of these observed changes are linked to the rising levels of carbon dioxide and other greenhouse gases in our atmosphere, caused by human activities.
Climate.gov's El Niño-Southern Oscillation—or ENSO—page provides information on the current status of El Niño and La Niña, plus links to forecasts, maps, and videos from across NOAA that help explain the impacts of the ENSO on the U.S.
In January 2015, Long Beach Mayor Robert Garcia asked the Aquarium of the Pacific to take a lead in assessing the primary threats that climate change poses to Long Beach, to identify the most vulnerable neighborhoods and segments of the population, and to identify and provide a preliminary assessment of options to reduce those vulnerabilities. Over the course of 2015, the Aquarium hosted and participated in meetings and workshops with academic and government scientists, business and government leaders, local stakeholders, and Long Beach residents to discuss key issues facing our community as the result of climate change. This report, completed in December 2015, represents the culmination of these efforts. The report offers detailed assessments of the five main threats of climate change to Long Beach: drought, extreme heat, sea level rise and coastal flooding, deteriorating air quality, and public health and social vulnerability. It also provides an overview of what is currently being done to mitigate and adapt to these threats, and other options to consider. Finally, this report presents a series of steps and actions that city leaders and community stakeholders can use as a template for making Long Beach a model of a climate resilient city.
This handbook (USGS Professional Paper 1815) was designed as a guide to the science and simulation models for understanding the dynamics and impacts of sea level rise on coastal ecosystems. Coastal land managers, engineers, and scientists can benefit from this synthesis of tools and models that have been developed for projecting causes and consequences of sea level change on the landscape and seascape.
Successfully negotiating climate change challenges will require integrating a sound scientific basis for climate preparedness into local planning, resource management, infrastructure, and public health, as well as introducing new strategies to reduce greenhouse gas emissions or increase carbon sequestration into nearly every sector of California’s economy. This Research Plan presents a strategy for developing the requisite knowledge through a targeted body of policy-relevant, California-specific research over three to five years (from early 2014), and determines California’s most critical climate-related research gaps.
This report builds on Maine’s earlier report from 2009—it is not intended as a comprehensive revision of all aspects of the original report. This update focuses on highlights of the understanding in 2015 of past, present, and future trends in key indicators of a changing climate specific to Maine, and recent examples of how Maine people are experiencing these changes.
This report is a synthesis of climate science relevant for management and planning for Colorado's water resources. The report focuses on observed climate trends, climate modeling, and projections of temperature, precipitation, snowpack, and streamflow.
This plan—an update to the 2009 California Climate Adaptation Strategy—augments previously identified strategies in light of advances in climate science and risk management options.
A 24-year tradition encompassing the work of 425 authors from 57 countries, 2013's State of the Climate report uses dozens of climate indicators to track patterns, changes, and trends of the global climate system.
This report uses a Question and Answer format to discuss climate change and its causes. The booklet provides an authoritative overview of global climate change for decision makers, policy makers, educators, and other individuals seeking information on climate science.
California’s Climate Action Team developed this document to provide California agencies with guidance for incorporating extreme heat projections and best practices for adapting to heat-related climate change impacts into planning and decision making.
This report, representing the Intergovernmental Panel on Climate Change (IPCC) Working Group I's contribution to the IPCC Fifth Assessment report (AR5), explores the hard science elements of global climate change.
The Sacramento-San Joaquin River Delta is the grand confluence of California’s waters, the place where the state’s largest rivers merge in a web of channels—and in a maze of controversy. In 2009, seeking an end to decades of conflict over water, the California Legislature established the Delta Stewardship Council with a mandate to resolve long-standing issues. The first step toward that resolution is the Delta Plan—a comprehensive management plan for California’s Sacramento-San Joaquin Delta, developed to guide state and local agencies to help achieve the co-equal goals of providing a more reliable water supply for California and protecting, restoring, and enhancing the delta's ecosystem.
These five Resource Guides facilitate access to existing climate change learning materials and support the development of complementary learning resources. The guides are compiled for selected topics of climate change for which a wealth of learning resources is available and that have been identified as important topics from a country perspective.
The report provides a comprehensive overview of observed and predicted changes to Massachusetts’ climate and the anticipated impacts. It also describes potential adaptation strategies the state may take to prepare for climate change.
This report, the final in a series from the National Academies, makes the case that the environmental, economic, and humanitarian risks posed by climate change indicate a pressing need for substantial action to limit the magnitude of climate change and to prepare for adapting to its impacts. The report advocates for an iterative risk management approach to climate change and using strong federal climate policies to support and enhance existing local, state, and private-sector efforts.
A strong, credible body of scientific evidence shows that “climate change is occurring, is caused largely by human activities, and poses significant risks for a broad range of human and natural systems,” concludes this America’s Climate Choices report from the National Research Council. The report recommends that a single federal entity be given the authority and resources to coordinate a national research effort integrated across many disciplines to improve understanding and responses to climate change.
In determining appropriate adaptation strategies, project staff worked with participants to survey a wide range of potential strategy options and develop a process for evaluation and prioritization of targeted strategies.
This assessment of ozone depletion, produced by the World Meteorological Organization and the United Nations Environment Programme every four years since 1985, is the work of over 300 scientists. The 2010 report highlights advances in the understanding of the role greenhouse gases play in ozone alteration. It also includes updated information for policymakers, including ozone projections for the 21st century.
This publication is intended to assist public health officials, practitioners, and other stakeholders in their efforts first to understand and then to prepare for drought in their communities. It provides information about how drought affects public health, recommends steps to help mitigate the health effects of drought, identifies future needs for research and other drought-related activities, and provides a list of helpful resources and tools.
This volume in the National Research Council's America's Climate Choices series describes and assesses different activities, products, strategies, and tools for informing decision makers about climate change, including education and communication, and information systems and services for helping them plan and execute effective, integrated responses. Information and reporting systems discussed include climate services and a greenhouse-gas accounting system.
This report quantifies the outcomes of different stabilization targets for greenhouse gas concentrations using analyses and information drawn from the scientific literature. Although it does not recommend or justify any particular stabilization target, it does provide important scientific insights about the relationships among emissions, greenhouse gas concentrations, temperatures, and impacts. The report emphasizes the importance of 21st century choices regarding long-term climate stabilization, and is a useful resource for scientists, educators, and policy makers, among others.
This strategy provides initial guidance on actions Virginia’s conservation community can implement immediately to enhance the conservation of wildlife and habitats in the face of climate change, even as more comprehensive adaptation strategies are developed. Conservation strategies include specific actions for conserving species and habitats, developing new data and climate modeling resources, and implementing new outreach efforts related to climate change.
King County in Washington State has established a comprehensive program to prepare for climate change, and many of the tools and strategies that King County has employed can be applied in other communities. This memorandum from the King County Office of Strategic Planning and Performance Management, published by the American Planning Association, describes strategies developed in King County to direct local government efforts to address climate change.
A tutorial for the climate analysis and decision-making communities on current best practices in describing and analyzing uncertainty in climate-related problems. Uncertainty is ubiquitous. Of course, the presence of uncertainty does not mean that people cannot act.
This reanalysis combines a diverse array of past observations together within a model to derive a best estimate of how the climate system has evolved over time. The goal is to provide consistent and reliable long-term datasets of temperatures, precipitation, winds, and many other climate variables. The report is a Synthesis and Assessment Product developed as part of the U.S. Climate Change Science Program.
This comprehensive scientific assessment of past, present, and future global climate change represents the Intergovernmental Panel on Climate Change (IPCC) Working Group I's contribution to the IPCC Fourth Assessment report (AR4). The assessment confirms that the scientific understanding of the climate system and its sensitivity to greenhouse gas emissions is richer and deeper than ever before. The chapters forming the bulk of this report describe scientists' assessment of the then state-of-knowledge in their respective fields.
This report, a Synthesis and Assessment Product from the U.S. Climate Change Science Program, addresses previously identified discrepancies between observations and simulations of surface and atmospheric temperature trends. It is an important revision to the conclusions of earlier reports from the U.S. National Research Council and the IPCC.
This Executive Order requires that King County, Washington, municipal departments employ coordinated strategies of land use to mitigate and adapt to global warming.
By: David W Schindler and John R Vallentyne
330 pages, Col photos, illus, maps, tabs
The greatest threat to water quality worldwide is nutrient pollution. Cultural eutrophication by nutrients in sewage, fertilizers, and detergents is feeding massive algal blooms, choking out aquatic life and outpacing heavy metals, oil spills, and other toxins in the devastation wrought upon the world's fresh waters. Renowned water scientists, David W Schindler and John R Vallentyne, share their combined 80 years of experience with the eutrophication problem to explain its history and science, and offer real-world solutions for mitigating this catastrophe in the making. For those who have lost sight of Vallentyne's 1974 first edition, Schindler's fully revised and expanded edition is an unambiguous road map for change.
* 'The first Algal Bowl was a classic and really was influential for both the study of lakes and for people who appreciated the environment, especially the eutrophication of lakes. David Schindler is the ideal co-author for a new edition' – Daniel Conley, University of Lund, Sweden
* 'The previous edition was a milestone in its time. Both authors are outstanding and well-known. I believe that there is a worldwide need for this book' – Martin Dokulil, Institute for Limnology, Austria
Preface * The Algal Bowl * Lakes and Humans * Lakes are Made of Water * How Lakes Breathe * Phosphorus, the Morning Star * The Environmental Physician * Detergents and Lakes * The Year of NTA * Understanding Eutrophication from Experiments in Small Lakes * Changes in the Eutrophication Problem Since the mid-20th Century * Using the Fossil Record to Interpret Past Eutrophication * Recovery from Eutrophication * Eutrophication of Estuaries * Signs or Solutions? * Bibliography, Index
David W. Schindler is Professor of Ecology at the University of Alberta, Edmonton, Canada. He has received numerous awards for his work, including the first Stockholm Water Prize (1991) and the Tyler Prize for Environmental Achievement (2006). John R. Vallentyne (deceased 2007) was Senior Scientist with the Department of Fisheries and Oceans, Canada. He received the Rachel Carson Prize for his work.
Monitoring the planet’s well-being is more important than ever, but there are only so many scientists to go around and far too many endangered species and climate fluctuations to watch. That’s where citizen scientists come in.
Increasingly, researchers are farming out their work to a vast green army of volunteers around the world who are more than willing to lend their eyes, ears and even their computers to make a difference. Think of it as crowd-sourced science.
Here are some fascinating science projects just waiting for ordinary citizens to pitch in.
Great Sunflower Project
Your dinner table would look sparse without bees and other pollinators to fertilize the plants that produce many of your favorite fruits, veggies, nuts and drinks. Sadly, pollinators are in decline because of pesticide use, habitat loss and other threats; the U.S. alone has lost 50 percent of its managed honey bee colonies in the past decade. If helping our hardworking food partners is your passion, the Great Sunflower Project is for you. Amateur scientists plant “Lemon Queen” sunflowers and count visiting pollinators, particularly bees. If you can’t plant sunflowers, you can monitor pollinator visits to other kinds of plants, even ones outside your yard. Researchers are using the data to get a better handle on why pollinators are disappearing and ways to help them thrive.
Whale FM uses volunteers to help classify the songs of killer whales. (Photo: Kim/flickr)
Do you have a good ear for music and a good Internet connection? You could be perfect for a gig with Whale FM. Researchers are hoping to classify the entrancing songs of killer and pilot whales to better understand their complex languages. Participation as a citizen scientist couldn’t be more pleasant, or more mesmerizing. You simply listen to recordings of these mysterious giants and match them to samples provided on the Whale FM website. Your pattern-finding skills help researchers decipher the individual dialects of various whale family groups, and maybe even the meanings behind their multitude of lyrical and often eerie calls and sounds.
You can also put your listening skills to work evaluating bat calls for Bat Detective. Not only are bats important pollinators, but they also act like canaries in the coal mine, sounding an early alarm for trouble in the natural environment. Unfortunately, these elusive nighttime creatures can also be ridiculously hard to see and study. In the acoustic realm, however, they are giants, producing a cacophony of calls that help them hunt, navigate and talk to other bats. Researchers need your help classifying recorded communications online to aid in monitoring bat populations. Your contributions may even help halt the alarming decline of several bat species from white-nose syndrome and other perils.
Protecting wildlife is worthy work, but it also requires protecting the habitat that supports nature’s creatures. ReefQuest lets you do just that by monitoring imperiled coral reefs (home to a mind-boggling multitude of aquatic species) without donning expensive deep-sea diving gear. The project was dreamed up by 15-year-old Dylan Vecchione, who was alarmed by the deterioration of Kahekili Reef during trips to the Hawaiian island of Maui. All you need is Internet access and a love of the ocean to begin exploring one of the project’s virtual reefs (panoramic underwater views of actual reefs). Your observations help scientists spot changes to reef health and launch rapid-response conservation efforts to save them.
If you happen to love meteorology and history, you can marry both passions at Old Weather. Citizen scientists pore over 19th-century ship captains’ logs, transcribing bygone weather observations and measurements. You can choose from any number of historic vessels, including the U.S.S. Yorktown, which participated in the Boxer Rebellion in China in 1900, and the U.S.S. Jamestown, used by Union naval forces during the Civil War. Researchers are using this old weather data to help create better climate change models for the future.
A cat named Beluga’s movements are charted on a Cat Tracker map. (Screen capture: Cat Tracker website)
Your cat may be a furry friend at home, but outdoors he or she poses a deadly threat to wildlife everywhere. In the U.S. alone, domestic cats kill billions of birds and mammals every year. Now feline lovers and their felines can protect nature and share some quality do-good time helping investigators at Cat Tracker learn where cats go and what they do when they slip outside. Simply fit your cat (or cats, if you have more than one) with a GPS-equipped harness and let it in and out as usual. After seven days, connect the GPS to your computer, download the data and upload it to the Cat Tracker website. The data should give researchers a better idea of the harm cats inflict on native species. And just in case the underlying message isn’t clear: after you’ve participated, you should probably keep your purr-baby inside. Permanently.
Mobile devices let us keep a running photo tally of everything we do and share it with the world. If your photos tend toward nature themes, why not put them in service of a greater good? With Project Noah, a free citizen-science mobile app, amateur nature-loving geeks can document the creepy crawlies, flora and wild creatures they run into every day and contribute to a growing species database. Using the app, just snap a photo, pick a category, tag it with a description and submit. Your pictures of black-necked grebes, blue-spotted sun orchids, brown bears and bumblebees help scientists draw a sharper map of the natural world. Even better, other research groups and organizations also tap into the vast database, allowing your work to potentially aid multiple ongoing nature projects.
Don’t have much free time, but still want to help the planet? Why not donate time on your computer to help scientists predict how global climate change will affect us in the next century and beyond? To get the supercomputing power they need, researchers at ClimatePrediction are asking thousands of volunteers to run climate model programs when their computers are on but not being used at full capacity. Climate simulations take anywhere from a few days to a few months, and you can watch along as your individual climate scenario evolves. What easier way for you (and your computer) to help solve one of humankind’s greatest environmental hurdles?
The Chinese glaciologist and climate scientist Dr. Qin Dahe was awarded the 2013 Volvo Environment Prize. The award winner is a key contributor to the fifth assessment report from the UN climate panel (IPCC), whose first section, the “Physical Science Basis”, was released in September. He attracted wide attention last year with a report on how climate change leads to more extreme weather events.
Dr Qin Dahe had a leading role in last year’s special report from IPCC on extreme events and catastrophes. It was the first report to show scientifically what many had already suspected, that extreme weather and climate phenomena have become more frequent over the last 50 years. The findings gained wide currency since they showed a clear connection between climate change and periods of extreme conditions, such as extended droughts and heat waves, but also torrential storms and rain in other regions. In its citation for this year’s Volvo Environment Prize laureate, the Award Jury calls the report “a game-changer”. In the words of the Jury, “the report demonstrated for the first time a clear link between climate change and many extreme events, an issue of immediate relevance for human well-being in many parts of the world”.
Dr Qin is also a leading expert on the cryosphere in central high Asia and its importance. The cryosphere is one of the main components of the Earth’s climate system, comprising snow, river and lake ice, sea ice, glaciers, ice shelves, and frozen ground. Glaciers especially have important impacts on water resources and ecosystems for more than two billion people in Asia.
Dr Qin has himself led several scientific expeditions to the Himalayas, and also been on expeditions to the Antarctic.
– There is no doubt that the major part of the glaciers in the Himalayas is disappearing fast. But one of the research areas we will tackle is the question of whether the Greenland ice cap is stable or not. And as well, the risks for more extreme occurrences such as drought, floods and storms, says Dr Qin.
Dr Qin Dahe hopes that the scientific evidence in the fifth assessment report from the UN climate panel will be enough to lead to a breakthrough in global climate negotiations.
– There is an encouragingly fast development in climate models. We are now seeing much smaller discrepancies between prognoses and what we observe in the form of temperatures and carbon dioxide concentration. My hope is that the scientific evidence will prompt people all over the world to work together to reduce emissions, says Dr Qin.
Dr Qin Dahe is a glaciologist at the Cold and Arid Regions Environment and Engineering Institute in Lanzhou, China, and Co-chair of Working Group 1, IPCC, the Intergovernmental Panel on Climate Change. He previously headed the China Meteorological Administration. Dr. Qin has published more than 170 scientific articles in English and 230 in Mandarin.
For more information about the Volvo Environment Prize and this year’s winner, please contact the Chairman of Volvo Environment Prize jury Professor Will Steffen, Fenner School of Environment and Society, Australian National University: http://anu.edu.au
The Chairman of the Volvo Environment Prize Scientific Committee, Professor Carl Folke, Beijer Institute, Royal Swedish Academy of Sciences
The Volvo Environment Prize was founded in 1988 and has become one of the world’s most prestigious environmental prizes. It is awarded annually to people who have made outstanding scientific discoveries within the area of the environment and sustainable development. The prize consists of a diploma, a glass sculpture and a cash sum of SEK 1.5 million and is presented at a ceremony in Stockholm on 26 November 2013.
Seismic magnitude scales
Seismic magnitude scales are used to describe the overall strength or "size" of an earthquake. These are distinguished from seismic intensity scales that categorize the intensity or severity of ground shaking (quaking) caused by an earthquake at a given location. Magnitudes are usually determined from measurements of an earthquake's seismic waves as recorded on a seismogram. Magnitude scales vary on the type and component of the seismic waves measured and the calculations used. Different magnitude scales are necessary because of differences in earthquakes, and in the purposes for which magnitudes are used.
Earthquake magnitude and ground-shaking intensity
The Earth's crust is stressed by tectonic forces. When this stress becomes great enough to rupture the crust, or to overcome the friction that prevents one block of crust from slipping past another, energy is released, some of it in the form of various kinds of seismic waves that cause ground-shaking, or quaking.
Magnitude is an estimate of the relative "size" or strength of an earthquake, and thus its potential for causing ground-shaking. It is "approximately related to the released seismic energy."
Intensity refers to the strength or force of shaking at a given location, and can be related to the peak ground velocity. With an isoseismal map of the observed intensities (see illustration) an earthquake's magnitude can be estimated from both the maximum intensity observed (usually but not always near the epicenter), and from the extent of the area where the earthquake was felt.
The intensity of local ground-shaking depends on several factors besides the magnitude of the earthquake, one of the most important being soil conditions. For instance, thick layers of soft soil (such as fill) can amplify seismic waves, often at a considerable distance from the source, while sedimentary basins will often resonate, increasing the duration of shaking. This is why, in the 1989 Loma Prieta earthquake, the Marina district of San Francisco was one of the most damaged areas, though it was nearly 100 km from the epicenter. Geological structures were also significant, such as where seismic waves passing under the south end of San Francisco Bay reflected off the base of the Earth's crust towards San Francisco and Oakland. A similar effect channeled seismic waves between the other major faults in the area.
An earthquake radiates energy in the form of different kinds of seismic waves, whose characteristics reflect the nature of both the rupture and the earth's crust the waves travel through. Determination of an earthquake's magnitude generally involves identifying specific kinds of these waves on a seismogram, and then measuring one or more characteristics of a wave, such as its timing, orientation, amplitude, frequency, or duration. Additional adjustments are made for distance, kind of crust, and the characteristics of the seismograph that recorded the seismogram.
The various magnitude scales represent different ways of deriving magnitude from such information as is available. All magnitude scales retain the logarithmic scale as devised by Charles Richter, and are adjusted so the mid-range approximately correlates with the original "Richter" scale.
Since 2005 the International Association of Seismology and Physics of the Earth's Interior (IASPEI) has standardized the measurement procedures and equations for the principal magnitude scales, ML, Ms, mb, mB and mbLg.
"Richter" magnitude scale
The first scale for measuring earthquake magnitudes, developed in 1935 by Charles F. Richter and popularly known as the "Richter" scale, is actually the Local magnitude scale, labeled ML. Richter established two features now common to all magnitude scales. First, the scale is logarithmic, so that each unit represents a ten-fold increase in the amplitude of the seismic waves. As the energy of a wave is 10^1.5 times its amplitude, each unit of magnitude represents a nearly 32-fold increase in the energy (strength) of an earthquake.
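These two logarithmic relationships can be sketched numerically; the snippet below is purely illustrative arithmetic, not part of any standard:

```python
def amplitude_ratio(delta_m):
    """Ratio of seismogram amplitudes for a magnitude difference delta_m."""
    return 10.0 ** delta_m

def energy_ratio(delta_m):
    """Ratio of radiated energies: energy scales as 10**(1.5 * delta_m)."""
    return 10.0 ** (1.5 * delta_m)

# One magnitude unit: 10x the amplitude, nearly 32x the energy.
print(amplitude_ratio(1.0))          # 10.0
print(round(energy_ratio(1.0), 1))   # 31.6
```

By the same arithmetic, a magnitude 7 event radiates roughly a thousand times the energy of a magnitude 5 event.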
Second, Richter arbitrarily defined the zero point of the scale to be where an earthquake at a distance of 100 km makes a maximum horizontal displacement of 0.001 millimeters (1 µm, or 0.00004 in.) on a seismogram recorded with a Wood-Anderson torsion seismograph. Subsequent magnitude scales are calibrated to be approximately in accord with the original "Richter" (local) scale around magnitude 6.
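A minimal sketch of how this definition fixes the scale's zero point; the one-entry distance-correction table below is a hypothetical stand-in for Richter's full correction table, keeping only the defining value at 100 km:

```python
import math

# ML = log10(A) - log10(A0(delta)), with A the peak Wood-Anderson amplitude
# in millimetres. By definition -log10(A0) = 3 at 100 km, so an amplitude
# of 0.001 mm at 100 km yields ML = 0.
LOG10_A0 = {100: -3.0}  # hypothetical one-entry stand-in for the full table

def local_magnitude(amplitude_mm, distance_km):
    return math.log10(amplitude_mm) - LOG10_A0[distance_km]

print(round(local_magnitude(0.001, 100), 6))  # 0.0 — the scale's zero point
print(round(local_magnitude(1.0, 100), 6))    # 3.0
```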
All "Local" (ML) magnitudes are based on the maximum amplitude of the ground shaking, without distinguishing the different seismic waves. They underestimate the strength:
- of distant earthquakes (over ~600 km) because of attenuation of the S-waves,
- of deep earthquakes because the surface waves are smaller, and
- of strong earthquakes (over M ~7) because they do not take into account the duration of shaking.
The original "Richter" scale, developed in the geological context of Southern California and Nevada, was later found to be inaccurate for earthquakes in the central and eastern parts of the continent (everywhere east of the Rocky Mountains) because of differences in the continental crust. All these problems prompted development of other scales.
Other "Local" magnitude scales
Richter's original "local" scale has been adapted for other localities. These may be labelled "ML", or with a lowercase "l", Ml. (Not to be confused with the Russian surface-wave MLH scale.) Whether the values are comparable depends on whether the local conditions have been adequately determined and the formula suitably adjusted.
Japanese Meteorological Agency magnitude scale
In Japan, for shallow (depth < 60 km) earthquakes within 600 km, the Japanese Meteorological Agency calculates a magnitude labeled MJMA or MJ. (These should not be confused with the moment magnitudes JMA calculates, which are labeled Mw(JMA) or M(JMA), nor with the Shindo intensity scale.) JMA magnitudes are based (as typical with local scales) on the maximum amplitude of the ground motion; they agree "rather well" with the seismic moment magnitude Mw in the range of 4.5 to 7.5, but underestimate larger magnitudes.
Body-wave magnitude scales
The original "body-wave magnitude" – mB or mB (uppercase "B") – was developed by Gutenberg (1945b, 1945c) and Gutenberg & Richter (1956) to overcome the distance and magnitude limitations of the ML scale inherent in the use of surface waves. mB is based on the P- and S-waves, measured over a longer period, and does not saturate until around M 8. However, it is not sensitive to events smaller than about M 5.5. Use of mB as originally defined has been largely abandoned, now replaced by the standardized mBBB scale.
The mb or mb scale (lowercase "m" and "b") is similar to mB, but uses only P-waves measured in the first few seconds on a specific model of short-period seismograph. It was introduced in the 1960s with the establishment of the World Wide Standardized Seismograph Network (WWSSN) for monitoring compliance with the 1963 Partial Nuclear Test Ban Treaty; the short period improves detection of smaller events, and better discriminates between tectonic earthquakes and underground nuclear explosions.
Measurement of mb has changed several times. As originally defined by Gutenberg (1945c) mb was based on the maximum amplitude of waves in the first 10 seconds or more. However, the length of the period influences the magnitude obtained. Early USGS/NEIC practice was to measure mb on the first second (just the first few P-waves), but since 1978 they measure the first twenty seconds. The modern practice is to measure the short-period mb scale at less than three seconds, while the broadband mB(BB) scale is measured at periods of up to 30 seconds.
mbLg scale
The regional mbLg scale – also denoted mb_Lg, mbLg, MLg (USGS), Mn, and mN – was developed by Nuttli (1973) for a problem the original ML scale could not handle: all of North America east of the Rocky Mountains. The ML scale was developed in southern California, which lies on blocks of oceanic crust, typically basalt or sedimentary rock, which have been accreted to the continent. East of the Rockies the continent is a craton, a thick and largely stable mass of continental crust that is largely granite, a harder rock with different seismic characteristics. In this area the ML scale gives anomalous results for earthquakes which by other measures seemed equivalent to quakes in California.
Nuttli resolved this by measuring the amplitude of short-period (~1 sec.) Lg waves, a complex form of the Love wave which, although a surface wave, he found provided a result more closely related the mb scale than the Ms scale. Lg waves attenuate quickly along any oceanic path, but propagate well through the granitic continental crust, and MbLg is often used in areas of stable continental crust; it is especially useful for detecting underground nuclear explosions.
Surface-wave magnitude scales
Surface waves propagate along the Earth's surface, and are principally either Rayleigh waves or Love waves. For shallow earthquakes the surface waves carry most of the energy of the earthquake, and are the most destructive. Deeper earthquakes, having less interaction with the surface, produce weaker surface waves.
The surface-wave magnitude scale, variously denoted as Ms or MS, is based on a procedure developed by Beno Gutenberg in 1942 for measuring shallow earthquakes stronger or more distant than Richter's original scale could handle. Notably, it measured the amplitude of surface waves (which generally produce the largest amplitudes) for a period of "about 20 seconds". The Ms scale approximately agrees with ML at ~6, then diverges by as much as half a magnitude. A revision by Nuttli (1983), sometimes labeled MSn, measures only waves of the first second.
A modification – the "Moscow-Prague formula" – was proposed in 1962, and recommended by the IASPEI in 1967; this is the basis of the standardized Ms20 scale (Ms_20, Ms(20)). A "broad-band" variant (Ms_BB, Ms(BB)) measures the largest velocity amplitude in the Rayleigh-wave train for periods up to 60 seconds. The MS7 scale used in China is a variant of Ms calibrated for use with the Chinese-made "type 763" long-period seismograph.
The MLH scale used in some parts of Russia is actually a surface wave magnitude.
Moment magnitude and energy magnitude scales
Other magnitude scales are based on aspects of seismic waves that only indirectly and incompletely reflect the force of an earthquake, involve other factors, and are generally limited in some respect of magnitude, focal depth, or distance. The moment magnitude scale – Mw – developed by Kanamori (1977) and Hanks & Kanamori (1979), is based on an earthquake's seismic moment, M0, a measure of how much "work" an earthquake does in sliding one patch of rock past other rock. Seismic moment is measured in newton-meters (N·m) in the SI system of measurement, or dyne-centimeters (dyn·cm) in the older CGS system. In the simplest case the moment can be calculated knowing only the amount of slip, the area of the surface ruptured or slipped, and a factor for the resistance or friction encountered. These factors can be estimated for an existing fault to determine the magnitude of past earthquakes, or what might be anticipated for the future.
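With M0 expressed in newton-metres, the commonly used (IASPEI-recommended) form of the relation is Mw = (2/3)(log10 M0 − 9.1); a short sketch:

```python
import math

def moment_magnitude(m0_newton_metres):
    """Mw from seismic moment M0 (N·m): Mw = (2/3) * (log10(M0) - 9.1)."""
    return (2.0 / 3.0) * (math.log10(m0_newton_metres) - 9.1)

# A seismic moment of 10**19.6 N·m corresponds to Mw 7.0.
print(round(moment_magnitude(10 ** 19.6), 2))  # 7.0
```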
An earthquake's seismic moment can be estimated in various ways, which are the bases of the Mwb, Mwr, Mwc, Mww, Mwp, Mi, and Mwpd scales, all subtypes of the generic Mw scale. See Moment magnitude scale § Subtypes for details.
Seismic moment is considered the most objective measure of an earthquake's "size" in regard of total energy. However, it is based on a simple model of rupture, and on certain simplifying assumptions; it incorrectly assumes that the proportion of energy radiated as seismic waves is the same for all earthquakes.
Much of an earthquake's total energy as measured by Mw is dissipated as friction (resulting in heating of the crust). An earthquake's potential to cause strong ground shaking depends on the comparatively small fraction of energy radiated as seismic waves, and is better measured on the energy magnitude scale, Me. The proportion of total energy radiated as seismic waves varies greatly depending on focal mechanism and tectonic environment; Me and Mw for very similar earthquakes can differ by as much as 1.4 units.
Despite the usefulness of the Me scale, it is not generally used due to difficulties in estimating the radiated seismic energy.
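Where the radiated energy ES can nonetheless be estimated, one published form of the energy magnitude (due to Choy and Boatwright) is Me = (2/3) log10 ES − 2.9, with ES in joules; a sketch assuming that form:

```python
import math

def energy_magnitude(es_joules):
    """Me = (2/3) * log10(Es) - 2.9, radiated seismic energy Es in joules."""
    return (2.0 / 3.0) * math.log10(es_joules) - 2.9

# Radiated seismic energy of 10**13.35 J gives Me = 6.0.
print(round(energy_magnitude(10 ** 13.35), 2))  # 6.0
```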
Energy class (K-class) scale
K (from the Russian word класс, "class", in the sense of a category) is a measure of earthquake magnitude in the energy class or K-class system, developed in 1955 by Soviet seismologists in the remote Garm (Tadjikistan) region of Central Asia; in revised form it is still used for local and regional quakes in many states formerly aligned with the Soviet Union (including Cuba). Based on seismic energy (K = log10 ES, with ES in joules), difficulty in implementing it using the technology of the time led to revisions in 1958 and 1960. Adaptation to local conditions has led to various regional K scales, such as KF and KS.
K values are logarithmic, similar to Richter-style magnitudes, but have a different scaling and zero point. K values in the range of 12 to 15 correspond approximately to M 4.5 to 6. M(K), or possibly MK, indicates a magnitude M calculated from an energy class K.
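As a rough cross-check of that correspondence, one commonly cited empirical relation (after Rautian) is K ≈ 1.8 M + 4; regional K scales use their own calibrations, so the conversion below is only indicative:

```python
def magnitude_from_energy_class(k):
    """Approximate magnitude from K-class via K ~= 1.8*M + 4 (indicative only)."""
    return (k - 4.0) / 1.8

# K 12..15 maps to roughly M 4.5..6.
print(round(magnitude_from_energy_class(12.0), 2))  # 4.44
print(round(magnitude_from_energy_class(15.0), 2))  # 6.11
```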
Tsunami magnitude scales
Earthquakes that generate tsunamis generally rupture relatively slowly, delivering more energy at longer periods (lower frequencies) than generally used for measuring magnitudes. Any skew in the spectral distribution can result in larger, or smaller, tsunamis than expected for a nominal magnitude. The tsunami magnitude scale, Mt, is based on a correlation by Katsuyuki Abe of earthquake seismic moment (M0) with the amplitude of tsunami waves as measured by tidal gauges. Originally intended for estimating the magnitude of historic earthquakes where seismic data is lacking but tidal data exist, the correlation can be reversed to predict tidal height from earthquake magnitude. (Not to be confused with the height of a tidal wave, or run-up, which is an intensity effect controlled by local topography.) Under low-noise conditions, tsunami waves as little as 5 cm can be predicted, corresponding to an earthquake of M ~6.5.
Another scale of particular importance for tsunami warnings is the mantle magnitude scale, Mm. This is based on Rayleigh waves that penetrate into the Earth's mantle, and can be determined quickly, and without complete knowledge of other parameters such as the earthquake's depth.
Duration and Coda magnitude scales
Md designates various scales that estimate magnitude from the duration or length of some part of the seismic wave-train. This is especially useful for measuring local or regional earthquakes, both powerful earthquakes that might drive the seismometer off-scale (a problem with the analog instruments formerly used), preventing measurement of the maximum wave amplitude, and weak earthquakes, whose maximum amplitude is not accurately measured. Even for distant earthquakes, measuring the duration of the shaking (as well as the amplitude) provides a better measure of the earthquake's total energy. Measurement of duration is incorporated in some modern scales, such as Mwpd and mBc.
Mc scales usually measure the duration or amplitude of a part of the seismic wave, the coda. For short distances (less than ~100 km) these can provide a quick estimate of magnitude before the quake's exact location is known.
Macroseismic magnitude scales
Magnitude scales generally are based on instrumental measurement of some aspect of the seismic wave as recorded on a seismogram. Where such records do not exist, magnitudes can be estimated from reports of macroseismic effects, such as those described by intensity scales.
One approach for doing this (developed by Beno Gutenberg and Charles Richter in 1942) relates the maximum intensity observed (presumably over the epicenter), denoted I0 (capital I, subscripted zero), to the magnitude. It has been recommended that magnitudes calculated on this basis be labeled Mw(I0), though they are sometimes labeled with a more generic Mms.
Another approach is to make an isoseismal map showing the area over which a given level of intensity was felt. The size of the "felt area" can also be related to the magnitude (based on the work of Frankel 1994 and Johnston 1996). While the recommended label for magnitudes derived in this way is M0(An), the more commonly seen label is Mfa. A variant, MLa, adapted to California and Hawaii, derives the Local magnitude (ML) from the size of the area affected by a given intensity. MI (upper-case letter "I", distinguished from the lower-case letter in Mi) has been used for moment magnitudes estimated from isoseismal intensities calculated per Johnston 1996.
Peak Ground Velocity (PGV) and Peak Ground Acceleration (PGA) are measures of the force that causes destructive ground shaking. In Japan, a network of strong-motion accelerometers provides PGA data that permits site-specific correlation with different magnitude earthquakes. This correlation can be inverted to estimate the ground shaking at that site due to an earthquake of a given magnitude at a given distance. From this a map showing areas of likely damage can be prepared within minutes of an actual earthquake.
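Such a correlation is usually cast as a ground-motion prediction equation (GMPE). The sketch below uses the generic form log10(PGA) = a + b·M − c·log10(R); the coefficients are purely illustrative placeholders, not a published strong-motion model:

```java
// Sketch: mapping magnitude + distance to expected shaking with a
// ground-motion prediction equation (GMPE) of the generic form
//   log10(PGA) = a + b*M - c*log10(R)
// The coefficients here are ILLUSTRATIVE placeholders only, not a
// published Japanese strong-motion model.
public class ShakeEstimate {
    static final double A = -2.0, B = 0.5, C = 1.3; // hypothetical coefficients

    /** Expected peak ground acceleration (in g) at distance rKm. */
    public static double pga(double magnitude, double rKm) {
        double log10Pga = A + B * magnitude - C * Math.log10(rKm);
        return Math.pow(10.0, log10Pga);
    }

    public static void main(String[] args) {
        // Shaking falls off with distance for a fixed M 7 event:
        for (double r : new double[] {10, 50, 100}) {
            System.out.printf("M7 at %3.0f km -> PGA ~ %.3f g%n", r, pga(7.0, r));
        }
    }
}
```

Inverting such a relation per site is what lets a network turn a magnitude and location into a near-real-time shaking map.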
Other magnitude scales
Many earthquake magnitude scales have been developed or proposed, some never gaining broad acceptance and remaining only as obscure references in historical catalogs of earthquakes. Other scales have been used without a definite name, often referred to as "the method of Smith (1965)" (or similar language), with the authors often revising their method. On top of this, seismological networks vary in how they measure seismograms. Where the details of how a magnitude has been determined are unknown, catalogs will specify the scale as unknown (variously Unk, Ukn, or UK). In such cases the magnitude is considered generic and approximate.
A special case is the "Seismicity of the Earth" catalog of Gutenberg & Richter (1954). Though the catalog was hailed as a milestone for its comprehensive global coverage and uniformly calculated magnitudes, the authors never published the details of how they determined those magnitudes. Consequently, while some catalogs identify these magnitudes as MGR, others use UK (meaning "computational method unknown"). Subsequent study found that most of the MGR magnitudes "are basically Ms for large shocks shallower than 40 km, but are basically mB for large shocks at depths of 40–60 km." Further study has found many of the Ms values to be "considerably overestimated."
- Bormann, Wendt & Di Giacomo 2013, p. 37. The relationship between magnitude and the energy released is complicated. See §126.96.36.199 and §3.3.3 for details.
- Bormann, Wendt & Di Giacomo 2013, §188.8.131.52.
- Bolt 1993, p. 164 et seq.
- Bolt 1993, pp. 170–171.
- Bolt 1993, p. 170.
- See Bolt 1993, Chapters 2 and 3, for a very readable explanation of these waves and their interpretation. J. R. Kayal's excellent description of seismic waves can be found here.
- See Havskov & Ottemöller 2009, §1.4, pp. 20–21, for a short explanation, or MNSOP-2 EX 3.1 2012 for a technical description.
- Chung & Bernreuter 1980, p. 1.
- IASPEI IS 3.3 2014, pp. 2–3.
- Kanamori 1983, p. 187.
- Richter 1935, p. 7.
- Spence, Sipkin & Choy 1989, p. 61.
- Richter 1935, p. 5; Chung & Bernreuter 1980, p. 10. Subsequently redefined by Hutton & Boore 1987 as 10 mm of motion by an ML 3 quake at 17 km.
- Chung & Bernreuter 1980, p. 1; Kanamori 1983, p. 187, figure 2.
- Chung & Bernreuter 1980, p. ix.
- The USGS policy for reporting magnitudes to the press was posted at USGS policy Archived 2016-05-04 at the Wayback Machine., but has been removed. A copy can be found at http://dapgeol.tripod.com/usgsearthquakemagnitudepolicy.htm.
- Bormann, Wendt & Di Giacomo 2013, §3.2.4, p. 59.
- Rautian & Leith 2002, pp. 158, 162.
- See Datasheet 3.1 in NMSOP-2 for a partial compilation and references.
- Katsumata 1996; Bormann, Wendt & Di Giacomo 2013, §184.108.40.206, p. 78; Doi 2010.
- Bormann & Saul 2009, p. 2478.
- See also figure 3.70 in NMSOP-2.
- Havskov & Ottemöller 2009, p. 17.
- Bormann, Wendt & Di Giacomo 2013, p. 37; Havskov & Ottemöller 2009, §6.5. See also Abe 1981.
- Havskov & Ottemöller 2009, p. 191.
- Bormann & Saul 2009, p. 2482.
- MNSOP-2/IASPEI IS 3.3 2014, §4.2, pp. 15–16.
- Kanamori 1983, pp. 189, 196; Chung & Bernreuter 1980, p. 5.
- Bormann, Wendt & Di Giacomo 2013, pp. 37,39; Bolt (1993, pp. 88–93) examines this at length.
- Bormann, Wendt & Di Giacomo 2013, p. 103.
- IASPEI IS 3.3 2014, p. 18.
- Nuttli 1983, p. 104; Bormann, Wendt & Di Giacomo 2013, p. 103.
- IASPEI/NMSOP-2 IS 3.2 2013, p. 8.
- Bormann, Wendt & Di Giacomo 2013, §220.127.116.11. The "g" subscript refers to the granitic layer through which Lg waves propagate. Chen & Pomeroy 1980, p. 4. See also J. R. Kayal, "Seismic Waves and Earthquake Location", here, page 5.
- Nuttli 1973, p. 881.
- Bormann, Wendt & Di Giacomo 2013, §18.104.22.168.
- Havskov & Ottemöller 2009, pp. 17–19. See especially figure 1-10.
- Gutenberg 1945a; based on work by Gutenberg & Richter 1936.
- Gutenberg 1945a.
- Kanamori 1983, p. 187.
- Stover & Coffman 1993, p. 3.
- Bormann, Wendt & Di Giacomo 2013, pp. 81–84.
- MNSOP-2 DS 3.1 2012, p. 8.
- Bormann et al. 2007, p. 118.
- Rautian & Leith 2002, pp. 162, 164.
- The IASPEI standard formula for deriving moment magnitude from seismic moment is Mw = (2/3) (log M0 – 9.1). Formula 3.68 in Bormann, Wendt & Di Giacomo 2013, p. 125.
- Anderson 2003, p. 944.
- Havskov & Ottemöller 2009, p. 198
- Havskov & Ottemöller 2009, p. 198; Bormann, Wendt & Di Giacomo 2013, p. 22.
- Bormann, Wendt & Di Giacomo 2013, p. 23
- NMSOP-2 IS 3.6 2012, §7.
- See Bormann, Wendt & Di Giacomo 2013, §22.214.171.124 for an extended discussion.
- NMSOP-2 IS 3.6 2012, §5.
- Bormann, Wendt & Di Giacomo 2013, p. 131.
- Rautian et al. 2007, p. 581.
- Rautian et al. 2007; NMSOP-2 IS 3.7 2012; Bormann, Wendt & Di Giacomo 2013, §126.96.36.199.
- Bindi et al. 2011, p. 330. Additional regression formulas for various regions can be found in Rautian et al. 2007, Tables 1 and 2. See also IS 3.7 2012, p. 17.
- Rautian & Leith 2002, p. 164.
- Bormann, Wendt & Di Giacomo 2013, §188.8.131.52, p. 124.
- Abe 1979; Abe 1989, p. 28. More precisely, Mt is based on far-field tsunami wave amplitudes in order to avoid some complications that happen near the source. Abe 1979, p. 1566.
- Blackford 1984, p. 29.
- Abe 1989, p. 28.
- Bormann, Wendt & Di Giacomo 2013, §184.108.40.206.
- Bormann, Wendt & Di Giacomo 2013, §220.127.116.11.
- Havskov & Ottemöller 2009, §6.3.
- Bormann, Wendt & Di Giacomo 2013, §18.104.22.168, pp. 71–72.
- Musson & Cecić 2012, p. 2.
- Gutenberg & Richter 1942.
- Grünthal 2011, p. 240.
- Grünthal 2011, p. 240.
- Stover & Coffman 1993, p. 3.
- Engdahl & Villaseñor 2002.
- Makris & Black 2004, p. 1032.
- Doi 2010.
- NMSOP-2 IS 3.2, pp. 1–2.
- Abe 1981, p. 74; Engdahl & Villaseñor 2002, p. 667.
- Engdahl & Villaseñor 2002, p. 688.
- Abe 1981, p. 72.
- Abe & Noguchi 1983.
- Abe, K. (April 1979), "Size of great earthquakes of 1837 – 1874 inferred from tsunami data", Journal of Geophysical Research, 84 (B4): 1561–1568, Bibcode:1979JGR....84.1561A, doi:10.1029/JB084iB04p01561.
- Abe, K. (October 1981), "Magnitudes of large shallow earthquakes from 1904 to 1980", Physics of the Earth and Planetary Interiors, 27 (1): 72–92, Bibcode:1981PEPI...27...72A, doi:10.1016/0031-9201(81)90088-1.
- Abe, K. (September 1989), "Quantification of tsunamigenic earthquakes by the Mt scale", Tectonophysics, 166 (1–3): 27–34, Bibcode:1989Tectp.166...27A, doi:10.1016/0040-1951(89)90202-3.
- Abe, K; Noguchi, S. (August 1983), "Revision of magnitudes of large shallow earthquakes, 1897-1912", Physics of the Earth and Planetary Interiors, 33 (1): 1–11, Bibcode:1983PEPI...33....1A, doi:10.1016/0031-9201(83)90002-X.
- Anderson, J. G. (2003), "Chapter 57: Strong-Motion Seismology", International Handbook of Earthquake & Engineering Seismology, Part B, pp. 937–966, ISBN 0-12-440658-0.
- Bindi, D.; Parolai, S.; Oth, K.; Abdrakhmatov, A.; Muraliev, A.; Zschau, J. (October 2011), "Intensity prediction equations for Central Asia", Geophysical Journal International, 187: 327–337, Bibcode:2011GeoJI.187..327B, doi:10.1111/j.1365-246X.2011.05142.x.
- Blackford, M. E. (1984), "Use of the Abe magnitude scale by the Tsunami Warning System." (PDF), Science of Tsunami Hazards: The International Journal of The Tsunami Society, 2 (1): 27–30.
- Bolt, B. A. (1993), Earthquakes and geological discovery, Scientific American Library, ISBN 0-7167-5040-6.
- Bormann, P., ed. (2012), New Manual of Seismological Observatory Practice 2 (NMSOP-2), Potsdam: IASPEI/GFZ German Research Centre for Geosciences, doi:10.2312/GFZ.NMSOP-2.
- Bormann, P. (2012), "Data Sheet 3.1: Magnitude calibration formulas and tables, comments on their use and complementary data." (PDF), in Bormann, New Manual of Seismological Observatory Practice 2 (NMSOP-2), doi:10.2312/GFZ.NMSOP-2_DS_3.1.
- Bormann, P. (2012), "Exercise 3.1: Magnitude determinations" (PDF), in Bormann, New Manual of Seismological Observatory Practice 2 (NMSOP-2), doi:10.2312/GFZ.NMSOP-2_EX_3.
- Bormann, P. (2013), "Information Sheet 3.2: Proposal for unique magnitude and amplitude nomenclature" (PDF), in Bormann, New Manual of Seismological Observatory Practice 2 (NMSOP-2), doi:10.2312/GFZ.NMSOP-2_IS_3.3.
- Bormann, P.; Dewey, J. W. (2014), "Information Sheet 3.3: The new IASPEI standards for determining magnitudes from digital data and their relation to classical magnitudes." (PDF), in Bormann, New Manual of Seismological Observatory Practice 2 (NMSOP-2), doi:10.2312/GFZ.NMSOP-2_IS_3.3.
- Bormann, P.; Fugita, K.; MacKey, K. G.; Gusev, A. (July 2012), "Information Sheet 3.7: The Russian K-class system, its relationships to magnitudes and its potential for future development and application" (PDF), in Bormann, New Manual of Seismological Observatory Practice 2 (NMSOP-2), doi:10.2312/GFZ.NMSOP-2_IS_3.7.
- Bormann, P.; Saul, J. (2009), "Earthquake Magnitude" (PDF), Encyclopedia of Complexity and Applied Systems Science, 3, pp. 2473–2496.
- Bormann, P.; Wendt, S.; Di Giacomo, D. (2013), "Chapter 3: Seismic Sources and Source Parameters" (PDF), in Bormann, New Manual of Seismological Observatory Practice 2 (NMSOP-2), doi:10.2312/GFZ.NMSOP-2_ch3.
- Chen, T. C.; Pomeroy, P. W. (1980), Regional Seismic Wave Propagation.
- Choy, G. L.; Boatwright, J. L. (2012), "Information Sheet 3.6: Radiated seismic energy and energy magnitude" (PDF), in Bormann, New Manual of Seismological Observatory Practice 2 (NMSOP-2), doi:10.2312/GFZ.NMSOP-2_IS_3.6.
- Choy, G. L.; Boatwright, J. L.; Kirby, S. (2001), "The Radiated Seismic Energy and Apparent Stress of Interplate and Intraslab Earthquakes at Subduction Zone Environments: Implications for Seismic Hazard Estimation" (PDF), U.S. Geological Survey, Open-File Report 01-0005.
- Chung, D. H.; Bernreuter, D. L. (1980), Regional Relationships Among Earthquake Magnitude Scales., NUREG/CR-1457.
- Doi, K. (2010), "Operational Procedures of Contributing Agencies" (PDF), Bulletin of the International Seismological Centre, 47 (7–12): 25, ISSN 2309-236X. Also available here (sections renumbered).
- Engdahl, E. R.; Villaseñor, A. (2002), "Chapter 41: Global Seismicity: 1900–1999", in Lee, W.H.K.; Kanamori, H.; Jennings, P.C.; Kisslinger, C., International Handbook of Earthquake and Engineering Seismology (PDF), Part A, Academic Press, pp. 665–690, ISBN 0-12-440652-1.
- Frankel, A. (1994), "Implications of felt area-magnitude relations for earthquake scaling and the average frequency of perceptible ground motion", Bulletin of the Seismological Society of America, 84 (2): 462–465.
- Grünthal, G. (2011), "Earthquakes, Intensity", in Gupta, H., Encyclopedia of Solid Earth Geophysics, pp. 237–242, ISBN 978-90-481-8701-0.
- Gutenberg, B. (January 1945a), "Amplitudes of surface Waves and magnitudes of shallow earthquakes" (PDF), Bulletin of the Seismological Society of America, 35 (1): 3–12.
- Gutenberg, B. (1 April 1945c), "Magnitude determination for deep-focus earthquakes" (PDF), Bulletin of the Seismological Society of America, 35 (3): 117–130
- Gutenberg, B.; Richter, C. F. (1936), "On seismic waves (third paper)", Gerlands Beiträge zur Geophysik, 47: 73–131.
- Gutenberg, B.; Richter, C. F. (1942), "Earthquake magnitude, intensity, energy, and acceleration", Bulletin of the Seismological Society of America: 163–191, ISSN 0037-1106.
- Gutenberg, B.; Richter, C. F. (1954), Seismicity of the Earth and Associated Phenomena (2nd ed.), Princeton University Press, 310p.
- Havskov, J.; Ottemöller, L. (October 2009), Processing Earthquake Data (PDF).
- Hough, S.E. (2007), Richter's scale: measure of an earthquake, measure of a man, Princeton University Press, ISBN 978-0-691-12807-8, retrieved 10 December 2011.
- Hutton, L. K.; Boore, David M. (December 1987), "The ML scale in Southern California" (PDF), Nature, 271: 411–414, Bibcode:1978Natur.271..411K, doi:10.1038/271411a0.
- Johnston, A. (1996), "Seismic moment assessment of earthquakes in stable continental regions — II. Historical seismicity", Geophysical Journal International, 125 (3): 639–678, Bibcode:1996GeoJI.125..639J, doi:10.1111/j.1365-246x.1996.tb06015.x.
- Kanamori, H. (July 10, 1977), "The energy release in great earthquakes" (PDF), Journal of Geophysical Research, 82 (20): 2981–2987, Bibcode:1977JGR....82.2981K, doi:10.1029/JB082i020p02981.
- Kanamori, H. (April 1983), "Magnitude Scale and Quantification of Earthquake" (PDF), Tectonophysics, 93 (3–4): 185–199, Bibcode:1983Tectp..93..185K, doi:10.1016/0040-1951(83)90273-1.
- Katsumata, A. (June 1996), "Comparison of magnitudes estimated by the Japan Meteorological Agency with moment magnitudes for intermediate and deep earthquakes.", Bulletin of the Seismological Society of America, 86 (3): 832–842.
- Makris, N.; Black, C. J. (September 2004), "Evaluation of Peak Ground Velocity as a "Good" Intensity Measure for Near-Source Ground Motions", Journal of Engineering Mechanics, 130 (9): 1032–1044, doi:10.1061/(asce)0733-9399(2004)130:9(1032).
- Musson, R. M.; Cecić, I. (2012), "Chapter 12: Intensity and Intensity Scales" (PDF), in Bormann, New Manual of Seismological Observatory Practice 2 (NMSOP-2), doi:10.2312/GFZ.NMSOP-2_ch12.
- Nuttli, O. W. (10 February 1973), "Seismic wave attenuation and magnitude relations for eastern North America", Journal of Geophysical Research, 78 (5): 876–885, Bibcode:1973JGR....78..876N, doi:10.1029/JB078i005p00876.
- Nuttli, O. W. (April 1983), "Average seismic source-parameter relations for mid-plate earthquakes", Bulletin of the Seismological Society of America, 73 (2): 519–535.
- Rautian, T. G.; Khalturin, V. I.; Fujita, K.; Mackey, K. G.; Kendall, A. D. (November–December 2007), "Origins and Methodology of the Russian Energy K-Class System and Its Relationship to Magnitude Scales" (PDF), Seismological Research Letters, 78 (6): 579–590, doi:10.1785/gssrl.78.6.579.
- Rautian, T.; Leith, W. S. (September 2002), "Developing Composite Regional Catalogs of the Seismicity of the Former Soviet Union." (PDF), 24th Seismic Research Review – Nuclear Explosion Monitoring: Innovation and Integration, Ponte Vedra Beach, Florida.
- Richter, C. F. (January 1935), "An Instrumental Earthquake Magnitude Scale" (PDF), Bulletin of the Seismological Society of America, 25 (1): 1–32.
- Spence, W.; Sipkin, S. A.; Choy, G. L. (1989), "Measuring the size of an Earthquake" (PDF), Earthquakes and Volcanoes, 21 (1): 58–63.
- Stover, C. W.; Coffman, J. L. (1993), Seismicity of the United States, 1568–1989 (Revised) (PDF), U.S. Geological Survey Professional Paper 1527.
- Perspective: a graphical comparison of earthquake energy release – Pacific Tsunami Warning Center
- USGS ShakeMap Providing near-real-time maps of ground motion and shaking intensity following significant earthquakes. | <urn:uuid:70a8fc78-c96f-48c1-93c7-c6cf04f9db8d> | 4.5625 | 8,759 | Knowledge Article | Science & Tech. | 65.120601 | 95,576,607 |
Common name: Threefins, Triplefins
A family of small fishes with three dorsal fins and scales on the sides of the body. Triplefins occur in cold-temperate to tropical seas, in coastal waters and around offshore islands. They usually live on hard substrates, in intertidal and shallow rocky or coral reef habitats. A few species occur in deeper waters on the continental shelf and slope to depths of at least 550 m (Fricke & Erdmann 2017).
Cite this page as:
Bray, D.J. 2017, Triplefins, TRIPTERYGIIDAE in Fishes of Australia, accessed 21 Jul 2018, http://fishesofaustralia.net.au/Home/family/65
Fricke, R. 1997. Tripterygiid fishes of the western and central Pacific, with descriptions of 15 new species, including an annotated checklist of world Tripterygiidae (Teleostei). Theses Zoologicae 31. Koeltz Scientific Books, Königstein, Germany, ix + 607 pp
Fricke, R. & Erdmann, M.V. 2017. Enneapterygius niue, a new species of triplefin from Niue and Samoa, southwestern Pacific Ocean (Teleostei: Tripterygiidae). Journal of the Ocean Science Foundation 25: 14–32. doi: http://dx.doi.org/10.5281/zenodo.269464 PDF open access | <urn:uuid:f5a50226-8e42-4e78-9645-a45c103575c8> | 3.15625 | 313 | Knowledge Article | Science & Tech. | 64.838355 | 95,576,625 |
All of these instruments, along with more than fifty scientists from over a dozen prestigious institutions throughout the country, are part of an extensive, ongoing research project known as "Monterey Bay 2006" (abbreviated "MB 06"). MB 06 runs from mid-July through mid-September 2006 and consists of four separate experiments that look at Central Coast waters from four different perspectives. Some experiments are trying to paint three-dimensional pictures of the ever-changing ocean currents by combining computer models with measurements of seawater temperature and chemistry. Other experiments are using sensitive underwater microphones to hear how sounds travel through turbulent coastal waters. All four of these complementary experiments are funded by the Office of Naval Research.
During the MB 06 experiment, data from nearly 100 different oceanographic sensors are being fed to a central computer system hosted by the Monterey Bay Aquarium Research Institute. MBARI-designed software allows scientists involved in the experiments to study and discuss each other's data via the internet. Thus, researchers can participate in the experiment while working on ships at sea or from their offices thousands of miles away. The general public can also look at data plots and read the scientists' discussions, as the researchers decide on a day-by-day basis where to send their undersea robots to gather the most useful data. The following paragraphs summarize the four experiments that make up MB 06:
Assessing the Effects of Submesoscale Ocean Parameterizations (AESOP)
The AESOP experiment complements the ASAP experiment by looking closely at some of the complex ocean processes that are not explicitly covered by existing computer models. For example, waters off the Central California coast are often affected by small-scale eddies, fronts (sharp boundaries between different water masses), and internal waves (waves that form underwater, between different ocean layers). The AESOP experiment attempts to determine how such localized ocean features and physical processes affect currents, mixing, and heat transfer in coastal waters.
Layered Organization in the Coastal Ocean (LOCO)
The LOCO experiment focuses on a recently-discovered biological phenomenon-dense populations of microscopic algae and other organisms that form distinct layers beneath the ocean surface. Such biological layers may be less than a meter thick, but can extend horizontally for dozens of kilometers. Scientists involved in this experiment are examining how these layers form, how they can be detected, how the organisms within these layers interact, and how the layers affect the movement of light and sound through the ocean waters.
Undersea Persistent Surveillance (UPS)
The UPS experiment involves monitoring central coast waters using extremely sensitive underwater microphones, electromagnetic sensors, and other oceanographic instruments. Some of these instruments have been placed temporarily on the seafloor; others are being carried by robotic vehicles such as gliders and autonomous underwater vehicles (AUVs). The instruments are being used as a system to monitor the ocean environment and to track some of the research vessels that will be traversing Central Coast waters during the MB 06 experiment. This will help researchers understand how ocean layers and currents affect the transmission of sounds and electrical and magnetic signals generated by ships (as well as by marine mammals and submarines).
Additional information on the MB 06 experiment can be found at:
http://www.mbari.org/mb2006/ or by contacting one of the media representatives listed above. Still images and video to accompany this release are available to media representatives upon request.
According to Douglas Adams, in his famous book The Hitchhiker's Guide to the Galaxy, space is big. However, it seems near-Earth space is not big enough. In December 2001, the Space Shuttle pushed the International Space Station away from a discarded Russian rocket booster that was due to pass uncomfortably close. Space litter is a growing problem, but smarter satellite design may help in the future.
From the beginning of the space era, satellites and deep-space probes have populated the Solar System. There are now a huge number of satellites orbiting the Earth, for different purposes including Earth observation, weather forecasting, telecommunications, military applications, and astronomy. The space around Earth is therefore becoming more and more crowded. Aside from the aspect of "space traffic control", there is the question of what to do with space litter.
ESA`s European Space Operations Centre (ESOC) in Darmstadt, Germany, tracks space litter. It estimates that over 23 000 objects larger than 10 centimetres have been launched from Earth. Of these, about 7500 are still orbiting - only a very small proportion of them (6%) is operational. Half of all the objects are inoperable satellites, spent rocket stages, or other large space litter; the remaining 44% is debris from explosions and accidents in space. To make things worse, there are an estimated 70 000 to 120 000 fragments smaller than 1 centimetre and the amount of space debris increases by about 5% every year.
Monica Talevi | alfa
This example describes how to match a string using a regular expression. For this we will write a program named Matching_Casesensitive.java. The steps involved in Matching_Casesensitive.java are described below:-
String regex="^java":- This defines the regular expression as a string. The pattern ^java matches the word "java" at the start of a line.
Pattern.CASE_INSENSITIVE | Pattern.MULTILINE:- These flags make the pattern match regardless of case (the word may be capitalized or not) and allow ^ to match at the beginning of every line rather than only at the start of the input.
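Putting the steps together, here is a minimal reconstruction of Matching_Casesensitive.java as described above (since the original listing is not shown, the sample text and details are illustrative):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Minimal reconstruction of Matching_Casesensitive.java as described above:
// find every line that starts with the word "java", in any capitalization.
public class Matching_Casesensitive {
    public static void main(String[] args) {
        String text = "java has classes\n"
                    + "Java has methods\n"
                    + "JAVA have regular expressions\n"
                    + "Regular expressions are in Java";
        String regex = "^java";  // ^java = "java" at the start of a line

        System.out.println("Words from the text:-");
        System.out.println(text);

        // CASE_INSENSITIVE ignores capitalization; MULTILINE makes ^ match
        // at the beginning of every line, not just the start of the input.
        Pattern pattern = Pattern.compile(regex,
                Pattern.CASE_INSENSITIVE | Pattern.MULTILINE);
        Matcher matcher = pattern.matcher(text);

        System.out.println("Words found in the text:-");
        while (matcher.find()) {
            System.out.println(matcher.group());  // prints java, Java, JAVA
        }
    }
}
```

Compile and run with `javac Matching_Casesensitive.java` followed by `java Matching_Casesensitive`.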
Output of the program:-
Words from the text:-
java has classes
Java has methods
JAVA have regular expressions
Regular expressions are in Java
Words found in the text:-
Black-footed Albatross
Latin name: Phoebastria nigripes
Conservation status: near threatened (population is increasing)
The Black-Footed Albatross lives up to 60 years and may travel thousands of miles in a lifetime, using a specialized gliding technique that saves muscle and energy. It is able to smell food across vast expanses of ocean. Mates court for two years and pair for life.
Almost all Black-footed Albatrosses live in the Hawaiian Islands. Like all albatross species that breed on low-lying beaches and slopes, they are highly susceptible to sudden flooding from sea-level rise and storm surges. Thousands each year are caught by longline fishing, and they are also threatened by pollution and by ingesting plastics that float in the ocean.
Other animals at risk
Koalas live in the woodlands of Australia. Thick fur and skin make it difficult for them to adapt to rising temperatures. Increased CO2 in the air produces less protein in the eucalyptus leaves, forcing the Koala to search for other sources of food and, in times of high heat, water. On the ground, the slow moving Koalas are prey to wild dingoes and domestic dogs, or are hit by cars as they cross roads. Their habitats are also being destroyed by drought, bush fires and development.
Polar Bears live only in the Arctic. Loss of sea ice has a critically adverse effect on Polar Bears. They hunt from the edge and build snow dens on the ice for resting and raising their cubs. Sea ice decline could open the Arctic to shipping and tourism, further disturbing Arctic habitats. Other threats are oil development and industrial pollution that reaches the Arctic through air and ocean currents.
In 50 years, the mean temperature of western Antarctica has risen nearly 3 °C—more than any other region—reducing the extent and thickness of winter ice. The Emperor Penguin is dependent on the ice for breeding, raising chicks and moulting. Less sea ice decreases zooplankton (krill) which feed on algae that grow on the underside of the ice. Krill are an important part of the food web for the Emperor and other Antarctic marine species.
Ivory Gulls are almost entirely dependent on sea ice and glaciers for nesting and food foraging. They feed on fish and shellfish that thrive near the edge of the ice, and on the remains of seals left by Polar Bears. Seal blubber is a source of heavy contaminants—Ivory Gull eggs show a higher concentration of mercury and pesticides than any Arctic sea bird. Other threats are illegal hunting and disturbance from diamond mining in the Canadian Arctic. | <urn:uuid:426c4d81-3077-48ae-939c-a3c4a2275a9a> | 3.703125 | 550 | Knowledge Article | Science & Tech. | 49.884564 | 95,576,647 |
Isostasy (Greek ísos "equal", stásis "standstill") is the state of gravitational equilibrium between Earth's crust and mantle such that the crust "floats" at an elevation that depends on its thickness and density.
This concept is invoked to explain how different topographic heights can exist at Earth's surface. When a certain area of Earth's crust reaches the state of isostasy, it is said to be in isostatic equilibrium. Isostasy does not upset equilibrium but instead restores it (a negative feedback). It is generally accepted that Earth is a dynamic system that responds to loads in many different ways. However, isostasy provides an important 'view' of the processes happening in areas that are experiencing vertical movement. Certain areas (such as the Himalayas) are not in isostatic equilibrium, which has forced researchers to identify other explanations for their topographic heights: in the case of the Himalayas, which are still rising, it has been proposed that their elevation is supported by the force of the impacting Indian Plate; the Basin and Range Province of the western US is another example of a region not in isostatic equilibrium.
Although originally defined in terms of continental crust and mantle, it has subsequently been interpreted in terms of lithosphere and asthenosphere, particularly with respect to oceanic island volcanoes such as the Hawaiian Islands.
In the simplest example, isostasy is the principle of buoyancy wherein an object immersed in a fluid is buoyed with a force equal to the weight of the displaced fluid. On a geological scale, isostasy can be observed where Earth's strong crust or lithosphere exerts stress on the weaker mantle or asthenosphere, which, over geological time, flows laterally such that the load is accommodated by height adjustments.
Three principal models of isostasy are used:
- The Airy–Heiskanen model – where different topographic heights are accommodated by changes in crustal thickness, in which the crust has a constant density
- The Pratt–Hayford model – where different topographic heights are accommodated by lateral changes in rock density.
- The Vening Meinesz, or flexural isostasy model – where the lithosphere acts as an elastic plate and its inherent rigidity distributes local topographic loads over a broad region by bending.
Airy and Pratt isostasy are statements of buoyancy, whereas flexural isostasy is a statement of buoyancy when deflecting a sheet of finite elastic strength.
Airy
The basis of the model is Pascal's law, and particularly its consequence that, within a fluid in static equilibrium, the hydrostatic pressure is the same at every point at the same elevation (surface of hydrostatic compensation). In other words:
h1⋅ρ1 = h2⋅ρ2 = h3⋅ρ3 = ⋯ = hn⋅ρn
For the simplified picture shown, the depth of the mountain-belt root (b1) is:

b1 = [ρc/(ρm − ρc)]⋅h1

where ρm is the density of the mantle (ca. 3,300 kg m−3) and ρc is the density of the crust (ca. 2,750 kg m−3). Thus, we may generally consider:

b1 ≅ 5⋅h1
In the case of negative topography (i.e., a marine basin), the balancing of lithospheric columns gives:

b2 = [(ρc − ρw)/(ρm − ρc)]⋅h2

where ρm is the density of the mantle (ca. 3,300 kg m−3), ρc is the density of the crust (ca. 2,750 kg m−3) and ρw is the density of the water (ca. 1,000 kg m−3). Thus, we may generally consider:

b2 ≅ 3.2⋅h2
Pratt
For the simplified model shown, the new density is given by:

ρ1 = [c/(h1 + c)]⋅ρc

where h1 is the height of the mountain and c the thickness of the crust.
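As a quick numerical check (a sketch, not from the article; the Pratt relation is used here in its standard textbook form), the Airy root and antiroot factors and a Pratt column density follow directly from the densities quoted above:

```python
# Numerical check of the Airy and Pratt isostasy relations using the
# approximate densities quoted in the text (kg/m^3).
rho_m = 3300.0  # mantle
rho_c = 2750.0  # crust
rho_w = 1000.0  # sea water

# Airy: root depth beneath a mountain of height h1 is b1 = factor * h1
airy_root_factor = rho_c / (rho_m - rho_c)
# Airy: antiroot beneath a marine basin of depth h2 is b2 = factor * h2
airy_antiroot_factor = (rho_c - rho_w) / (rho_m - rho_c)

def pratt_density(h1, c, rho=rho_c):
    """Pratt: reduced density of a column topped by a mountain of height h1
    on a crust of thickness c (standard form rho1 = rho_c * c / (h1 + c))."""
    return rho * c / (h1 + c)

print(round(airy_root_factor, 1))       # 5.0, i.e. b1 ≅ 5·h1
print(round(airy_antiroot_factor, 1))   # 3.2, i.e. b2 ≅ 3.2·h2
print(round(pratt_density(5000.0, 30000.0)))  # 2357 kg/m^3 under a 5 km peak
```

The first two printed factors recover the text's rules of thumb; the Pratt call shows how a lighter column can support a 5 km peak on a 30-km-thick crust (both heights chosen only for illustration).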
Vening Meinesz / flexural
This hypothesis was suggested to explain how large topographic loads such as seamounts (e.g. Hawaiian Islands) could be compensated by regional rather than local displacement of the lithosphere. This is the more general solution for lithospheric flexure, as it approaches the locally compensated models above as the load becomes much larger than a flexural wavelength or the flexural rigidity of the lithosphere approaches zero.
Isostatic effects of deposition and erosion
When large amounts of sediment are deposited on a particular region, the immense weight of the new sediment may cause the crust below to sink. Similarly, when large amounts of material are eroded away from a region, the land may rise to compensate. Therefore, as a mountain range is eroded, the (reduced) range rebounds upwards (to a certain extent) to be eroded further. Some of the rock strata now visible at the ground surface may have spent much of their history at great depths below the surface buried under other strata, to be eventually exposed as those other strata eroded away and the lower layers rebounded upwards.
An analogy may be made with an iceberg—it always floats with a certain proportion of its mass below the surface of the water. If more ice is added to the top of the iceberg, the iceberg will sink lower in the water. If a layer of ice is somehow sliced off the top of the iceberg, the remaining iceberg will rise. Similarly, Earth's lithosphere "floats" in the asthenosphere.
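The iceberg analogy can be made quantitative with Archimedes' principle: the submerged fraction of a floating body equals the ratio of its density to the fluid's density (the densities below are typical textbook values, not from the article):

```python
# Archimedes: submerged fraction = rho_body / rho_fluid for a floating body.
rho_ice = 917.0        # kg/m^3, typical glacier ice (assumed value)
rho_seawater = 1025.0  # kg/m^3 (assumed value)

submerged_fraction = rho_ice / rho_seawater
print(f"{submerged_fraction:.0%}")  # 89% of the iceberg sits below the waterline
```

Adding ice on top increases the displaced volume needed for equilibrium, so the berg rides lower; removing ice lets it rise, exactly as the lithosphere responds to loading and unloading.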
Isostatic effects of plate tectonics
When continents collide, the continental crust may thicken at their edges in the collision. If this happens, much of the thickened crust may move downwards rather than up as with the iceberg analogy. The idea of continental collisions building mountains "up" is therefore rather a simplification. Instead, the crust thickens and the upper part of the thickened crust may become a mountain range.
However, some continental collisions are far more complex than this, and the region may not be in isostatic equilibrium, so this subject has to be treated with caution.
Isostatic effects of ice sheets
The formation of ice sheets can cause Earth's surface to sink. Conversely, isostatic post-glacial rebound is observed in areas once covered by ice sheets that have now melted, such as around the Baltic Sea and Hudson Bay. As the ice retreats, the load on the lithosphere and asthenosphere is reduced and they rebound back towards their equilibrium levels. In this way, it is possible to find former sea cliffs and associated wave-cut platforms hundreds of metres above present-day sea level. The rebound movements are so slow that the uplift caused by the ending of the last glacial period is still continuing.
In addition to the vertical movement of the land and sea, isostatic adjustment of the Earth also involves horizontal movements. It can cause changes in Earth's gravitational field and rotation rate, polar wander, and earthquakes.
Eustasy and relative sea level change
Eustasy is another cause of relative sea level change quite different from isostatic causes. The term eustasy or eustatic refers to changes in the volume of water in the oceans, usually due to global climate change. When Earth's climate cools, a greater proportion of water is stored on land masses in the form of glaciers, snow, etc. This results in falling global sea levels (relative to a stable land mass). The refilling of ocean basins by glacial meltwater at the end of ice ages is an example of eustatic sea level rise.
A second significant cause of eustatic sea level rise is thermal expansion of sea water when Earth's mean temperature increases. Current estimates of global eustatic rise from tide gauge records and satellite altimetry is about +3 mm/a (see 2007 IPCC report). Global sea level is also affected by vertical crustal movements, changes in Earth's rotation rate, large-scale changes in continental margins and changes in the spreading rate of the ocean floor.
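At the quoted rate, a back-of-the-envelope extrapolation (linear, so it ignores any acceleration) illustrates the century-scale magnitude:

```python
# Linear extrapolation of the ~3 mm/a eustatic rise quoted in the text.
rate_mm_per_year = 3.0
years = 100
rise_m = rate_mm_per_year * years / 1000.0
print(rise_m)  # 0.3 m of eustatic rise per century at a constant rate
```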
When the term relative is used in context with sea level change, the implication is that both eustasy and isostasy are at work, or that the author does not know which cause to invoke.
Post-glacial rebound can also be a cause of rising sea levels. When the sea floor rises, which it continues to do in parts of the northern hemisphere, water is displaced and has to go elsewhere.
- Clarence Dutton, who coined the term isostasy in 1889
- John Fillmore Hayford
- William Bowie (engineer)
- Lau, Gotland
- Marine terrace
- Tectonic uplift
- Post-glacial rebound
- A. B. Watts, Isostasy and Flexure of the Lithosphere, Cambridge Univ. Press, 2001
- "Clarence Edward Dutton" (PDF). 1958. Retrieved 7 October 2014.
- Lisitzin, E. (1974) "Sea level changes". Elsevier Oceanography Series, 8
- Watts, AB (2001). Isostasy and Flexure of the Lithosphere. Cambridge University Press. ISBN 0-521-00600-7. A very complete overview with much of the historical development. | <urn:uuid:dd0ca926-ab10-48b3-82d3-8a9c938b6ede> | 3.890625 | 1,903 | Knowledge Article | Science & Tech. | 42.776101 | 95,576,650 |
How does pollution affect cloud formation and climate change? This question has long been an unsolved mystery of climate science, leading to uncertainty in climate modeling. Research published this week in the Proceedings of the National Academy of Sciences takes a big step toward answering that question and helping scientists improve those models.
For nearly half a century, scientists speculated that air pollution from burning fossil fuels causes clouds to form differently. Clouds are made up of many small water droplets that form around particles in the atmosphere; when those droplets tried to form around oily particles, scientists believed they would form more slowly, since "oil doesn't mix with water," explained Athanasios Nenes, the Georgia Institute of Technology professor who led the study.
Surprisingly, Nenes found, this just isn't true. In a set of experiments that took place across the globe, Nenes tested cloud formation and discovered that droplets form just as quickly around pollution particles as they do around other particles.
"Which is pretty remarkable, but that is what we found, and we went pretty much everywhere we could get our instruments to," Nenes said.
To figure out how pollution actually affected cloud formation, Nenes and his team flew an aircraft equipped with a miniature cloud formation chamber -- essentially a moist tube that is heated on one end and cooled on the other -- in 10 different cloud-forming environments.
An airborne connection
They flew over the Arctic, through smoke from forest fires and above the 2010 Deepwater Horizon oil spill in the Gulf of Mexico. Yet even when they flew the cloud chamber through the incredibly oily particles above the Gulf of Mexico, Nenes said, droplets formed at the same rate as in nonpolluted environments.
Because clouds regulate how much sunlight can get to the Earth and can also trap heat in the atmosphere, they have a big effect on climate. Better knowledge on how they form can help climate models become more accurate. Nenes was pleased with his study results because they simplify what was once a big uncertainty in the models.
"It's something that we weren't expecting to see, but it's actually a good thing because it makes the models easier," Nenes said.
Charles Kolb, an atmospheric chemist who has also researched how pollution affects cloud formation, said the study provided real-world results that closely matched recent laboratory experiments on pollution and cloud formation.
"It's very important to have a connection between the laboratory science and the actual science in the real world. And [this] paper showed the two are very closely related," he said.
Reprinted from ClimateWire with permission from Environment & Energy Publishing, LLC. E&E Publishing is the leading source for comprehensive, daily coverage of environmental and energy issues.
Grow grass, not for fun but for fuel. Burning grass for energy has been a well-accepted technology in Europe for decades. But not in the United States.
Yet burning grass pellets as a biofuel is economical, energy-efficient, environmentally friendly and sustainable, says a Cornell University forage crop expert.
This alternative fuel could easily be produced and pelleted by farmers and burned in modified stoves built to burn wood pellets or corn, says Jerry Cherney, the E.V. Baker Professor of Agriculture. Burning grass pellets hasn't caught on in the United States, however, Cherney says, primarily because Washington has made no effort to support the technology with subsidies or research dollars.
Nicola Pytell | EurekAlert!
The East Antarctic Ice Sheet (EAIS) is the largest potential contributor to sea-level rise. However, efforts to predict the future evolution of the EAIS are hindered by uncertainty in how it responded to past warm periods, for example, during the Pliocene epoch (5.3 to 2.6 million years ago), when atmospheric carbon dioxide concentrations were last higher than 400 parts per million. Geological evidence indicates that some marine-based portions of the EAIS and the West Antarctic Ice Sheet retreated during parts of the Pliocene1,2, but it remains unclear whether ice grounded above sea level also experienced retreat. This uncertainty persists because global sea-level estimates for the Pliocene have large uncertainties and cannot be used to rule out substantial terrestrial ice loss3, and also because direct geological evidence bearing on past ice retreat on land is lacking. Here we show that land-based sectors of the EAIS that drain into the Ross Sea have been stable throughout the past eight million years. We base this conclusion on the extremely low concentrations of cosmogenic 10Be and 26Al isotopes found in quartz sand extracted from a land-proximal marine sediment core. This sediment had been eroded from the continent, and its low levels of cosmogenic nuclides indicate that it experienced only minimal exposure to cosmic radiation, suggesting that the sediment source regions were covered in ice. These findings indicate that atmospheric warming during the past eight million years was insufficient to cause widespread or long-lasting meltback of the EAIS margin onto land. 
We suggest that variations in Antarctic ice volume in response to the range of global temperatures experienced over this period—up to 2–3 degrees Celsius above preindustrial temperatures4, corresponding to future scenarios involving carbon dioxide concentrations of between 400 and 500 parts per million—were instead driven mostly by the retreat of marine ice margins, in agreement with the latest models5,6.
Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
We thank the Antarctic Research Facility for AND-1B samples, and J. X. Mitrovica for his help in performing the glacial isostatic adjustment modelling. This research was supported by National Science Foundation (NSF) grant ARC-1023191 (to P.R.B. and L.B.C.); Boston College start-up funds (to J.D.S.); Vermont Established Program to Stimulate Competitive Research (EPSCoR) grants EPS-1101317 and NSF OIA 1556770 (to K.U. and D.M.R.); NSF grant EAR-1153689 (to M.W.C.); and the New Zealand Ministry of Business Innovation and Employment contract C05X1001 (to T.N. and N.R.G.). This is Lawrence Livermore National Laboratory project LLNL-JRNL-735619.
Reviewer information
Nature thanks J. Gosse, E. Gasson, J. Willenbring and the other anonymous reviewer(s) for their contribution to the peer review of this work.
Extended data figures and tables
a–d, Simulated erosion potential under the Antarctic Ice Sheet, calculated from modelled driving stress and basal velocity fields for several uniform (atmosphere and ocean) warming scenarios of: 4 °C (a), 8 °C (b), 12 °C (c) and 15 °C (d)55. The location of the AND-1B core is shown by the yellow dot. We note that erosive zones tend to extend towards the continental interior with warming. dT, temperature anomaly from present; dV, ice-volume anomaly from present, in sea-level equivalent (s.l.e.).
a–d, Antarctic land above sea level (yellow) 0 kyr (a), 5 kyr (b), 10 kyr (c), and 15 kyr (d) after a near-instantaneous (1-kyr) collapse of all marine-based ice-sheet sectors, in two different models of mantle viscosity26. Model 1 is from ref. 56, and model 2 (our model) has the following parameters: lithosphere thickness, 96 km; upper-mantle viscosity, 5 × 1020 Pa s−1; and lower-mantle viscosity, 1022 Pa s−1. The location of the AND-1B core is shown by the star.
a, b, Cumulative exceedance probabilities of measured (that is, not blank-corrected) 10Be (a) and 26Al (b) nuclide abundances in AND-1B samples (blue) and in all blanks run by the same operator in the same low-level fume hood (red), with 1σ uncertainties. These plots display the fraction of measurements that exceed a given nuclide abundance. Note that probabilities are generally higher for the samples than the blanks; in other words, a random draw from the samples is more likely to be above a random draw from the blanks, suggesting that they are separable populations.
Shaded intervals surrounding the blue line show 1σ uncertainties, while shaded intervals not surrounding the blue line show the possible range of decay-corrected concentrations in samples that are below the detection limit. The dashed black line simulates the 26Al concentration in non-eroding material at 2,000 metres above sea level (m asl) that was originally saturated at 14 Ma and subsequently decayed under cold-based, non-erosive ice. The fact that several AND-1B samples have higher concentrations than those in this extreme scenario (which is the most favourable to having nuclides persist to the present) suggests that the AND-1B nuclides were produced after the expansion of the EAIS in the mid-Miocene.
Extended Data Fig. 5 Modelled concentrations of cosmogenic nuclides for various durations of interglacial exposure and glacial erosion rates.
a–d, Simulated 10Be (a, b) and 26Al (c, d) concentrations in material sourced from sea level and from 2,000 m asl in Antarctica as a function of the fraction of time for which land is exposed, during 40-kyr glacial cycles. (Results are nearly identical if the cycles are instead 100-kyr long.) Erosion rates were assumed to be 0 m per Myr during ice-free conditions, on the basis of geologic evidence for negligible late Cenozoic erosion in ice-free areas of the TAMs9,10. Black arrows next to the scale bars show the range of decay-corrected nuclide concentrations in AND-1B samples. The model was initialized with zero nuclides at 8 Ma (representative of conditions suggested by AND-1B sample H); the model also assumes instantaneous transport of eroded sediment to the ocean with no mixing, and continuous radioactive decay. Concentrations shown are the Pliocene (5 Ma to 3 Ma) average. Comparison of these simulations with AND-1B nuclide concentrations suggests that land exposure in sediment source regions was probably quite limited in duration or extent through the Plio-Pleistocene.
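The bookkeeping behind this kind of exposure/burial simulation can be sketched in a few lines. Everything below is illustrative — the production rate, time step and the 10Be half-life value are assumptions for the sketch, not the paper's actual model:

```python
import math

# Toy cosmogenic-nuclide model: concentration grows at production rate
# p_rate while land is exposed during part of each 40-kyr glacial cycle,
# and decays continuously. Parameter values are illustrative assumptions.
HALF_LIFE_BE10_YR = 1.387e6                # commonly cited 10Be half-life
LAMBDA = math.log(2) / HALF_LIFE_BE10_YR   # decay constant, 1/yr

def simulate(exposed_fraction, p_rate=4.0, cycle_yr=40_000,
             total_yr=8_000_000, dt=1_000):
    """Nuclide concentration after total_yr of glacial cycles in which
    a fraction exposed_fraction of each cycle is ice-free (Euler steps)."""
    n = 0.0
    for t in range(0, total_yr, dt):
        exposed = (t % cycle_yr) / cycle_yr < exposed_fraction
        production = p_rate if exposed else 0.0
        n += (production - LAMBDA * n) * dt
    return n

# Longer exposure per cycle leaves a higher average concentration:
print(simulate(0.0) < simulate(0.1) < simulate(0.5))  # True
```

Comparing such synthetic concentrations against the measured, decay-corrected values is what lets the authors bound how much ice-free time the sediment source regions could have experienced.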
a–d, Each panel shows actual AND-1B decay-corrected 10Be concentrations with 1σ uncertainty (green), as well as simulated 10Be concentrations assuming a single 10-kyr (a), 50-kyr (b), 100-kyr (c) and 200-kyr (d) exposure of a bedrock column in the mid-Pliocene. The exposure event was chosen to start at 3.6 Ma and extend for up to 200 kyr in duration on the basis of the presence of a 60-m-thick diatomite unit in the AND-1B core, thought to reflect warm interglacial conditions from 3.6 Ma to 3.4 Ma1. Simulated records are driven by production at sea level (grey) or at 2,000 m asl (black), and are subjected to continuous radioactive decay and continuous erosion at rates of 0 m per Myr (solid lines), 20 m per Myr (dashed lines), and 100 m per Myr (dotted lines). The model assumes that the sediment source was initially devoid of nuclides and that sediments are transported instantaneously to the sea floor. The synthetic time series have been binned to the same resolution as the AND-1B data.
Extended Data Fig. 7 Modelling a mid-Pliocene exposure event with eroded bedrock mixed through a deformable bed.
The figure shows AND-1B decay-corrected 10Be concentrations with 1σ uncertainties (green). It also depicts simulated 10Be concentrations, assuming a single exposure event from 3.6 Ma to 3.4 Ma and routing of eroded bedrock through a well mixed deformable bed, for various bed thicknesses and erosion rates. Material eroded from the bedrock profile is instantaneously mixed throughout the deformable bed in each time step, and an equal amount of material is removed from the bed, keeping its thickness constant. Sediment mixing in the deformable bed dilutes the surface 10Be signal of the exposure event but extends its longevity through time in comparison with the bedrock simulations shown in Extended Data Fig. 6. Simulated records are driven by production at sea level, and subjected to continuous radioactive decay and continuous erosion. The model assumes that the bedrock and deformable bed were initially devoid of nuclides and that sediments eroded from the deformable bed are transported instantaneously to the sea floor. The synthetic time series have been binned to the same resolution as the AND-1B data.
Extended Data Fig. 8 Conceptual diagram showing the outcomes of Bayesian one-group t-tests and their interpretation.
a, Nuclides are credibly present above background: that is, the sample value is greater than the mean of the blanks (defined at the mode of the posterior distribution), and the region of uncertainty surrounding the sample value fully excludes the 90% credible interval (C.I.) on the posterior distribution of the mean of the blanks. The grey shaded regions give the uncertainty range in the sample nuclide concentration. b, Nuclides are not credibly present above background: the sample value is less than or equal to the blank mean. c, Nuclides are not credibly present above background: although the sample value is greater than the blank mean, the region of uncertainty surrounding the sample value does not fully exclude the 90% C.I.
This file contains AND-1B sediment processing data, AND-1B cosmogenic nuclide data and process blank cosmogenic nuclide data. | <urn:uuid:f116b6b1-3c4a-4eee-bba1-ab30ab98c4a8> | 3.265625 | 2,284 | Truncated | Science & Tech. | 49.182858 | 95,576,667 |
The surface boundary layer as a part of the overlying convective layer
This paper extends previous work developing a mechanistic theory of the convective boundary layer to the forced convective region between the base of the plumes and the surface. It is shown that a simple model based on specifying the entrainment between adjacent layers gives quantitative relations between the temperature at plume height and the surface temperature, and between the shearing stress and the turbulence. This completes the specification of the surface boundary needed for the convective planetary-boundary-layer plume model by describing the surface boundary layer without constants tuned to match boundary-layer measurements.
The formulae are deduced in terms of the entrainment constant a = 1/12 and the turbulent decay constant A = 1, by mechanistic reasoning, without the introduction of any adjustable empirical constants.
Rough surface conditions also allow the wind and the temperature excess at ten meters or so above the surface to be derived. In these conditions v = (A/2a)^(1/2)⋅i, and Tsurface − T(average at plume base) = H/(ϱ⋅Cp⋅a⋅i). When the surface is not very rough, an additional roughness parameter is required to specify the number of layers needed to make the transition from the plume to the surface, and its function is examined. These formulae all compare well with published measured values.
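With the paper's constants a = 1/12 and A = 1, these relations can be evaluated directly (the values of H, ϱ, Cp and i below are illustrative assumptions, not taken from the paper):

```python
import math

# Evaluate v = (A/2a)^(1/2) * i and the surface temperature excess
# T_surface - T_plume_base = H / (rho * cp * a * i), with a = 1/12, A = 1.
a, A = 1.0 / 12.0, 1.0
wind_factor = math.sqrt(A / (2.0 * a))
print(round(wind_factor, 2))  # 2.45, so v ≈ 2.45 * i

H = 200.0     # surface heat flux, W/m^2 (assumed)
rho = 1.2     # air density, kg/m^3 (assumed)
cp = 1004.0   # specific heat of air at constant pressure, J/(kg K)
i = 1.0       # turbulence velocity scale, m/s (assumed)
delta_T = H / (rho * cp * a * i)
print(round(delta_T, 1))      # 2.0 K surface excess for these inputs
```

The wind factor is simply sqrt(6) ≈ 2.45, so the mean wind scales with the turbulence velocity i; the temperature excess scales inversely with both i and the entrainment constant.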
It is shown by means of a fully descriptive theory, that the shear in the plume layer is very small.
Key words: Boundary layer; Convection
- Deardorff, J. W. (1972), Numerical investigation of neutral and unstable planetary boundary layers, J. Atmos. Sci. 29, 91–115.
- Kaimal, J. C., Wyngaard, J. C., Haugen, D. A., Cote, O. R., Izumi, Y., Caughey, S. J. and Readings, C. J. (1976), Turbulence structure in the convective boundary layer, J. Atmos. Sci. 33, 2152–2169.
- Kazanskiy, A. B. and Monin, A. S. (1957), Shape of smoke plumes, Izvestiya AN USSR, Ser. Geofiz. 8, 1020–1033.
- Lumley, J. L. and Panofsky, H. A. (1964), The Structure of Atmospheric Turbulence (New York: Wiley), 239 pp.
- Monin, A. S. and Yaglom, A. M. (1971), Statistical Fluid Mechanics: Mechanics of Turbulence, Vol. 1 (The MIT Press), 769 pp.
- Panofsky, H. A. and McCormick, R. A. (1954), Properties of spectra of atmospheric turbulence at 100 meters, Quart. J. Roy. Meteorol. Soc. 80, 546.
- Pasquill, F. (1972), Some aspects of boundary layer description, Quart. J. Roy. Meteorol. Soc. 98, 469–494.
- Telford, J. W., I (1966), The convective mechanism in clear air, J. Atmos. Sci. 23, 652–666.
- Telford, J. W., II (1970), Convective plumes in a convective field, J. Atmos. Sci. 24, 347–358.
- Telford, J. W., III (1972), A plume theory for the convective field in clear air, J. Atmos. Sci. 29, 128–134.
- Telford, J. W., IV (1975), The effects of compressibility and dissipation heating on boundary layer plumes, J. Atmos. Sci. 32, 108–115.
- Telford, J. W., V (1975), Turbulence, entrainment and mixing in cloud dynamics, Pure Appl. Geophys. 113, 1067–1084.
To cite this page, please use the following:
· For print: . Accessed
· For web:
Found most commonly in these habitats: 17 times found in Pinar, 4 times found in edge of lake, 11 times found in Encinar, 10 times found in Dunas, 6 times found in conifer forest, 2 times found in desert riparian, 2 times found in fir-pine-oak forest, 6 times found in Eucaliptal, 5 times found in Camino, 5 times found in shrub steppe, ...
Found most commonly in these microhabitats: 25 times under stone, 26 times NBP, 11 times nest under rock, 20 times Forrajeando, 15 times Nest under stone, 12 times Nido bajo piedra, 5 times ex sifted leaf litter, 6 times Nido, 5 times Bajo piedra, 5 times Forrajenado, 4 times Duna, ...
Collected most commonly using these methods: 192 times Hand, 62 times search, 21 times hand collecting, 17 times Winkler, 2 times Mercury Vapor Lamp, 6 times Pitfall, 0 times blacklight, 1 times from light source, 1 times mixed.
Elevations: collected from 1 - 2710 meters, 965 meters average
AntWeb content is licensed under a Creative Commons Attribution License. We encourage use of AntWeb images. In print, each image must include attribution to its photographer and "from www.AntWeb.org" in the figure caption. For websites, images must be clearly identified as coming from www.AntWeb.org, with a backward link to the respective source page. See How to Cite AntWeb.
AntWeb is funded from private donations and from grants from the National Science Foundation (DEB-0344731, EF-0431330 and DEB-0842395).
Physicists are one step closer to developing the world’s first room-temperature superconductor thanks to a new theory from the University of Waterloo, Harvard and Perimeter Institute.
The theory explains the transition phase to superconductivity, or “pseudogap” phase, which is one of the last obstacles to developing the next generation of superconductors and one of the major unsolved problems of theoretical condensed matter physics.
Their work was published in this week’s issue of the prestigious journal Science.
Superconductivity is the phenomenon where electricity flows with no resistance and no energy loss. Most materials need to be cooled to ultra-low temperatures with liquid helium in order to achieve a superconductive state.
The team includes Professor Roger Melko, Professor David Hawthorn and doctoral student Lauren Hayward from Waterloo's Physics and Astronomy Department, and Harvard Physics Professor Subir Sachdev. Melko also holds a Canada Research Chair in Computational Quantum Many-Body Physics.
Hawthorn showed Sachdev his latest experimental data on a superconducting material made of copper and the elements yttrium and barium. The material, YBa2Cu3O6+x, showed an unexplained temperature dependence. Sachdev had a theory but needed expert help with the complex set of calculations to prove it. That's where Melko and Hayward stepped in and developed the computer code to solve Sachdev's equations.
Melko and Sachdev already knew each other through Perimeter Institute, where Melko is an associate faculty member and Sachdev is a Distinguished Research Visiting Chair.
“The results all came together in a matter of weeks,” said Melko. “It really speaks to the synergy we have between Waterloo and Perimeter Institute.”
To understand why room-temperature superconductivity has remained so elusive, physicists have turned their sights to the phase that occurs just before superconductivity takes over: the mysterious “pseudogap” phase.
“Understanding the pseudogap is as important as understanding superconductivity itself,” said Melko.
The cuprate, YBa2Cu3O6+x, is one of the few materials known to be superconductive at higher temperatures, but scientists are so far unable to achieve superconductivity in this material above -179°C. This new study found that YBa2Cu3O6+x oscillates between two quantum states during the pseudogap, one of which involves charge-density wave fluctuations. These periodic fluctuations in the distribution of the electrical charges are what destabilize the superconducting state above the critical temperature.
Once the material is cooled below the critical temperature, the strength of these fluctuations falls and the superconducting state takes over.
Superconducting magnets are currently used in MRI machines and complex particle accelerators, but the cost of cooling materials with liquid helium makes them very expensive. Materials that achieve superconductivity at higher temperatures could unlock the technology for new smart power grids and advanced power storage units.
The group plans to extend their work both theoretically and experimentally to understand more about the fundamental nature of cuprates.
Nick Manning | EurekAlert!
A new giant virus found in the waters of Oahu, Hawaii
Researchers at the Daniel K. Inouye Center for Microbial Oceanography: Research and Education (C-MORE) at the University of Hawai'i (UH) at Mānoa have characterized a new, unusually large virus that infects common marine algae. Found in the coastal waters off Oahu, Hawai'i, it contains the biggest genome ever sequenced for a virus infecting a photosynthetic organism.
"Most people are familiar with viruses," said Christopher Schvarcz, the UH Mānoa oceanography graduate student who led the project as part of his doctoral dissertation, "because there are so many that cause diseases in humans. But we are not alone; even the microscopic plankton in the ocean are constantly battling viral infections."
Much of the phytoplankton that grows in the ocean every day gets eaten, thereby sustaining animals in the marine food web. It is common, however, for viral infections to spread through populations of phytoplankton. When this happens, the infected phytoplankton cells disintegrate and are decomposed by bacteria, diverting that food source away from the animals.
"That sounds bad," said Grieg Steward, professor in the UH Mānoa Department of Oceanography and co-author on the study, "but viruses actually help maintain balance in the marine ecosystem. Viruses spread more efficiently through highly concentrated populations, so if one type of phytoplankton grows faster than the others and starts to dominate, it can get knocked down to lower levels by a viral infection, giving the other species a chance to thrive."
Viruses have to replicate inside of cells, putting some constraints on how big they can be, but the known upper size limit of viruses has been creeping upward over the past 15 years as researchers have focused on finding more examples of what are now referred to as "giant" viruses.
"Most viruses are so tiny that we need an electron microscope to see them," said Steward, "but these giants rival bacteria in size, and their genomes often code for functions we have never seen in viruses before."
The virus described by Schvarcz and Steward in their recent paper in the journal Virology was named TetV-1, because it infects single-celled algae called Tetraselmis. After sequencing its genome, Schvarcz discovered that the virus has a number of genes that it seems to have picked up from the alga it infects. Two of these appear to code for enzymes involved in fermentation, which is a process used by microorganisms to get energy from sugars in the absence of oxygen. Fermentation is familiar to many of us, because it is the key to making beer, wine, and spirits. Why would a virus need these genes? The authors don't know for sure, but they have a guess.
"Tetraselmis can grow to extraordinarily high concentrations in coastal waters," explains Schvarcz, "turning the water from clear blue to an intense green. If TetV were to spread under those conditions, huge numbers of cells would succumb to viral infection. Bacteria would immediately begin decomposing the dead algae and quickly use up all the oxygen in the water. We think that the fermentation genes in TetV may allow the virus to maintain its energy flow under low oxygen conditions even as it shuts down the host cell systems."
Schvarcz and Steward plan to conduct field and lab experiments to test whether this idea is correct. Tetraselmis is used as a food source for aquaculture and as a source of starch for the biofuel industry, so the authors speculate that understanding exactly how TetV manipulates the metabolism of its host might have some practical applications. The ability of TetV to inject DNA into these cells might be exploited, for example, to reprogram the algae to make more of a desired product.
"We have more to learn about this particular virus," mused Steward, "and it's just one example plucked from an ocean that has millions of them floating in every teaspoon."
Considering the numbers, it seems certain there are many more unusual viruses waiting to be discovered just under the next wave.
Marrakesh: Carbon emissions from burning fossil fuels have been nearly flat for three years in a row — a "great help" but not enough to stave off dangerous global warming, a report said on Monday.
Emissions of planet-warming carbon dioxide stayed level in 2015 at 36.3 billion tonnes (GtCO2) and were projected to rise "only slightly", by 0.2 percent in 2016, according to the annual Global Carbon Budget report compiled by teams of scientists from around the world.
"This third year of almost no growth in emissions is unprecedented at a time of strong economic growth," said research leader Corinne Le Quere of the University of East Anglia.
Driven largely by reduced coal use in China, this was a "clear and unprecedented break" with the preceding decade's fast emissions growth, at a rate of some 2.3 percent per year from 2004 to 2013, before dipping to 0.7 percent in 2014.
"This is a great help for tackling climate change but it is not enough," said Le Quere.
For the world's nations to make good on the global pact to limit average global warming to two degrees Celsius (3.6 degrees Fahrenheit) above pre-industrial levels, emissions must do more than level off, the study found.
A decrease of 0.9 percent per year is needed through 2030.
The amount of CO2 in the atmosphere has continued to grow, the report warned, with a record 23 GtCO2 added last year and roughly 25 GtCO2 expected in 2016.
The analysis was published in the journal Earth System Science Data, to coincide with the UN climate conference in Morocco.
Climate envoys are gathered in Marrakesh to put plans in place to execute the so-called Paris Agreement concluded in the French capital a year ago.
It envisions a dramatic reduction in greenhouse gas-producing coal, oil and gas use for energy.
The new report said humanity has emitted 2,075 GtCO2 since 1870 — adding 40 GtCO2 in 2016 alone.
"We have already used more than two thirds of the emissions quota to keep climate change well below two degrees," it warned. "The remaining quota would be used up in less than 30 years at the current emissions level."
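The report's budget figures can be sanity-checked with quick arithmetic. In the sketch below, the total quota is an assumed figure (chosen so that the stated 2,075 GtCO2 is just over two thirds of it; the article does not give the exact total), while the emitted total and annual rate are taken from the article as quoted.

```python
# Back-of-the-envelope check of the carbon-budget arithmetic above.
# TOTAL_QUOTA_GT is an assumption consistent with "more than two thirds
# used"; the article does not state the exact total quota.
TOTAL_QUOTA_GT = 3100.0      # assumed total CO2 quota for <2 °C (GtCO2)
EMITTED_SINCE_1870 = 2075.0  # stated in the article (GtCO2)
ANNUAL_EMISSIONS = 40.0      # 2016 emissions, as stated (GtCO2/yr)

fraction_used = EMITTED_SINCE_1870 / TOTAL_QUOTA_GT
remaining = TOTAL_QUOTA_GT - EMITTED_SINCE_1870
years_left = remaining / ANNUAL_EMISSIONS

print(f"quota used: {fraction_used:.0%}")          # 67%
print(f"remaining: {remaining:.0f} GtCO2")         # 1025 GtCO2
print(f"years at current rate: {years_left:.1f}")  # 25.6
```

At the current rate the assumed remaining quota is exhausted in roughly 26 years, consistent with the report's "less than 30 years".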
Updated Date: Nov 14, 2016 11:27 AM | <urn:uuid:10672187-5375-45ad-9ca1-8cc1e8a4b243> | 3.25 | 485 | News Article | Science & Tech. | 59.345438 | 95,576,713 |
Gregory Stone, director of LSU’s WAVCIS Program and also of the Coastal Studies Institute in the university’s School of the Coast & Environment, disagrees with published estimates that more than 75 percent of the oil from the Deepwater Horizon incident has disappeared.
Stone recently participated in a three-hour flyover of the affected area in the Gulf, where he said that subsurface oil was easily visible from overhead.
“It’s most definitely there,” said Stone. “It’s just a matter of time before it makes itself known again.”
Readings from WAVCIS indicate that the direction of the ocean currents near the middle and bottom of the water column are aimed offshore; in other words, this submerged oil will be pushed out to sea, where it will then rise higher into the water column and be washed onto land, particularly during storms.
“It is going to come on shore not consistently, but rather in pulses because it is beneath the surface,” he said. “You may get one or two, maybe even five or 10 waves coming ashore with absolutely no oil … but eventually, it’s going to come ashore.” He also cautions that whatever oil doesn’t remain suspended in the water column may simply sit atop the seafloor, waiting to be mixed back into the currents.
“It will simply be stirred up during rough seas or changing currents and reintroduced into the water column,” he explained.
Another timely concern is hurricane season since September is generally one of the most active months of the year. “Storm surge, when combined with storm waves from a hurricane, could stir up this submerged oil and bring it – lots of it – onshore and into the wetlands,” Stone said. “Even a tropical storm could result in more oil on the shoreline. And that’s a reality we need to consider and be prepared for.”
Formally known as the Wave-Current-Surge Information System, WAVCIS is based on a network of buoys, oil-platform sensors and ADCPs, or Acoustic Doppler Current Profilers, in the Gulf of Mexico. The ADCPs are exceptionally sensitive. Housed on the seafloor, they send acoustic signals up to the surface of the water, measuring the entire water column for everything from current direction to speed and temperature. The system is also integrated with the National Data Buoy Center, or NDBC, network, providing researchers worldwide with a comprehensive look at the Gulf environment — an invaluable research tool during hurricane season, and also during disasters such as the Deepwater Horizon tragedy.
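As a rough illustration of the physics behind an ADCP (a sketch only, not WAVCIS code), the water velocity along an acoustic beam follows from the Doppler shift of the backscattered sound; the sound speed and example frequencies below are assumed typical values.

```python
# Illustrative sketch: how an acoustic Doppler current profiler infers
# water velocity from the frequency shift of its echo.
SPEED_OF_SOUND = 1500.0  # m/s, a typical seawater value (assumed)

def radial_velocity(f_transmit_hz: float, f_echo_hz: float) -> float:
    """Water velocity (m/s) along the beam from the two-way Doppler shift."""
    doppler_shift = f_echo_hz - f_transmit_hz
    # Factor of 2: the shift is picked up on the way out and on the way back.
    return SPEED_OF_SOUND * doppler_shift / (2.0 * f_transmit_hz)

# A 300 kHz ADCP hearing an echo 200 Hz higher than it transmitted
# implies water moving toward the instrument at about half a metre per second.
v = radial_velocity(300_000.0, 300_200.0)
print(f"{v:.2f} m/s")  # 0.50 m/s
```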
“WAVCIS is among the most sensitive ocean observing systems in the entire nation,” said Stone. “We measure a wide variety of physical parameters at the water surface, water column and on the sea bed. This information is extremely helpful in predicting or determining where the oil is – and where it’s going to go. Because our information is updated hourly and available to the public, our lab has played a primary role in providing facts about the situation surrounding the oil’s movement and location.”
Stone, whose experience with WAVCIS has spanned everything from natural to manmade disasters, knows that only time will tell the severity of the oil’s impact.
“This is a long-term problem. It’s not simply going to go away. I was in Prince William Sound 10 years after the Exxon Valdez event, and when I lifted up a rock, there was still residual oil beneath it,” he said. “Thus, the residence time of oil in the coastal environment can be substantial, although ecosystem conditions along the northern Gulf are very different and will likely recover quicker than in Alaska. We here at WAVCIS can at least track Gulf conditions to monitor the situation as closely as possible.” For more information about WAVCIS, visit http://wavcis.csi.lsu.edu/.
Experimental work on a potassium sulfate-water system was carried out using ten- and fifty-liter crystallizers, with different impeller velocities and suspension densities. The crystal size distribution was determined over the range from 0.1 μm to the largest crystals produced in the crystallizers by combining results from a Coulter LS-130 laser light-scattering instrument and a Vidas image analyzer. Experimental evidence from continuous crystallizers frequently shows, at least for small crystals, deviation from the McCabe ΔL law; in this case, the estimation of both nucleation and growth kinetics becomes more complicated. In this work the crystal size distribution was determined experimentally and the relation between growth rate and particle size investigated. Methods of estimating kinetics for industrial use are discussed. The experimental population density distribution was fitted directly with the three-parameter model presented by Mydlarz and Jones for a steady-state MSMPR crystallizer, and the relation between growth rate and particle size was then calculated from the corresponding three-parameter growth rate model. The results show that the apparent crystal growth rate increases linearly with crystal size below about 10 μm, is strongly size-dependent between about 10 and 700 μm, and is size-independent above about 700 μm. A mechanism of growth rate dispersion is suggested.
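The size dependence described in the abstract can be sketched with a three-parameter growth expression of the Mydlarz-Jones form, G(L) = G_max·(1 − exp(−a·(L + c))). The parameter values below are illustrative placeholders, not the fitted values from this study.

```python
import math

# Three-parameter size-dependent growth model of the Mydlarz-Jones form.
# Parameter values are illustrative only, not fitted values from the paper.
def growth_rate(L_um: float, g_max: float = 1.0e-8,
                a: float = 5.0e-3, c: float = 10.0) -> float:
    """Apparent crystal growth rate (m/s) as a function of size L (μm)."""
    return g_max * (1.0 - math.exp(-a * (L_um + c)))

# Qualitative behaviour matches the abstract: since 1 - exp(-x) ~ x for
# small x, growth is nearly linear in L for small crystals, strongly
# size-dependent at intermediate sizes, and approaches a size-independent
# plateau (g_max) for large crystals.
for L in (1.0, 100.0, 1000.0):
    print(f"L = {L:7.1f} um  G = {growth_rate(L):.3e} m/s")
```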
Patterns of genetic variation in the apple maggot, Rhagoletis pomonella (Walsh), were analyzed at 19 enzyme genes by starch electrophoresis. Eleven populations from the eastern United States, the native range of the fly, showed high levels of genetic variation within, but relatively low levels of heterogeneity among populations. In contrast, R. pomonella from parts of the western United States, where it has been widely found only during the past decade, were significantly different from flies in the native range. Flies from Utah were depauperate in genetic variation, had diagnostic allele frequency shifts at remaining polymorphic genes, and exhibited substantial heterogeneity among populations. Samples from the Pacific Northwest had somewhat less genetic variation than eastern flies and showed evidence of hybridization with the snowberry maggot, R. zephyria Snow, endemic to that region. A sample of R. pomonella from Colorado is more similar to eastern than to Utah flies. Comparison of fly populations from sympatric hawthorn and apple reinforce previous findings of genetic differences between apple maggot host races.
Microzooplankton herbivory and community structure in the Amundsen Sea, Antarctica
Cited 5 times.
- Microzooplankton herbivory and community structure in the Amundsen Sea, Antarctica
- Yang, Eun Jin
- Microzooplankton; Grazing rate; Growth rate; Amundsen Sea; Polynya; Araon
- Yang, Eun Jin, Jiang, Yong, Lee, SangHoon. 2016. "Microzooplankton herbivory and community structure in the Amundsen Sea, Antarctica". Deep-Sea Research II, 123: 55-68.
- We examined microzooplankton abundance, community structure, and grazing impact on phytoplankton in the Amundsen Sea, Western Antarctica, during the early austral summer from December 2010 to January 2011. Our study area was divided into three regions based on topography, hydrographic properties, and trophic conditions: (1) the Oceanic Zone (OZ), free of sea ice, with low phytoplankton biomass dominated by diatoms; (2) the Sea Ice Zone (SIZ), covered by heavy sea ice, with colder, less saline water and dominated by diatoms; and (3) the Amundsen Sea Polynya (ASP), with high phytoplankton biomass dominated by Phaeocystis antarctica. Microzooplankton biomass and communities associated with phytoplankton biomass and composition varied among regions. Heterotrophic dinoflagellates (HDF) were the most significant grazers in the ASP and OZ, whereas ciliates co-dominated with HDF in the SIZ. Microzooplankton grazing impact was significant in our study area, particularly in the ASP, where microzooplankton consumed 55.4-107.6% of phytoplankton production (average 77.3%), with grazing impact increasing with prey and grazer biomass. This result implies that a significant proportion of phytoplankton production is not removed by sinking or other grazers but is grazed by microzooplankton. Compared with diatom-based systems, Phaeocystis-based production would be largely remineralized and/or channeled through the microbial food web via microzooplankton grazing. In these waters the major herbivorous fate of phytoplankton is likely mediated by the microzooplankton population. Our study confirms the importance of herbivorous protists in the planktonic ecosystems of high latitudes. In conclusion, microzooplankton herbivory may be a driving force controlling phytoplankton growth in early summer in the Amundsen Sea, particularly in the ASP.
Q: I’ve heard that white-nose syndrome has appeared in some West Coast bats. Are Bay Area bats now threatened? —Ellenor, San Francisco
A: This is a very depressing question but a good one. The first evidence of a new and disastrous bat disease showed up in a colony of hibernating little brown bats (Myotis lucifugus) in 2006 in upstate New York. A fungus (Pseudogymnoascus destructans), likely introduced from Europe, invades the bat’s skin and causes deep tissue damage. When the bats are hibernating it shows as a white fuzz on the face and wings, hence the name white-nose syndrome or WNS.
Bats that hibernate during the winter go into an energy-conserving state of torpor for days or weeks at a time throughout the hibernation, interspersed with periods of rousing. They have a limited amount of fat to take them through this long period of dormancy. But infected animals wake up more often than is normal, become restless, and sometimes even fly out into the cold. Because there are no insects to eat in winter the bats use up all their fat reserves and die. WNS, which thrives in the moist, cool conditions of the hibernacula, spreads rapidly via direct contact from bat to bat. Entire colonies have been wiped out in a single winter and it is estimated that over six million bats have died. As a result, the little brown bat, which was the most numerous bat in the Northeast, is probably going to be declared an endangered species soon.
WNS has spread rapidly to 29 states, reaching as far south as Alabama and Georgia, north into five provinces of Canada, and west to Minnesota and Arkansas. And then recently, much to the dismay of nature lovers, a bat with WNS was found in Washington State.
After rodents (Rodentia), bats (Chiroptera) are the most biodiverse order of mammals. More than 1,100 species exist throughout the world; they’re absent only in the far polar regions and some remote islands. There are 47 species of bats in the United States and Canada, 24 species in California, and at least 12 species in the San Francisco Bay region. About half of the U.S. and Canadian species hibernate; many others migrate during the winter.
Unfortunately, it appears that it’s not a question of if WNS will reach the Bay Area but when. Taylor Ellis (no relation), a wildlife biologist with the National Park Service, is working with other bat scientists to identify species most susceptible to WNS and take proactive measures. Of the species found in the Bay Area, two—the big brown bat (Eptesicus fuscus) and the little brown bat—have suffered from WNS back east. Three species of our local bats, also present back east, are known to be carriers of the fungus but are asymptomatic, which means they could act as WNS dispersers. Then there are eight species that are distributed strictly in the western U.S., so we don’t yet know how they will respond to the fungus.
As Ellis has said, bats here “might not even get the fungus in the first place; we don’t know. So the pathogen will get here, but it remains to be seen if it can flourish in local bat roosts and affect bats in our climate. Our big bat aggregations (matriarchal colonies) in California tend to be in spring and summer, and the bats are active and feeding throughout that time.” Additionally, WNS’s climatic requirements for growth—a combination of humidity and temperature—may not exist here. So we, and our local bats, may luck out.
Loschmidt's paradox, also known as the reversibility paradox, irreversibility paradox or Umkehreinwand, is the objection that it should not be possible to deduce an irreversible process from time-symmetric dynamics. This puts the time reversal symmetry of (almost) all known low-level fundamental physical processes at odds with any attempt to infer from them the second law of thermodynamics which describes the behaviour of macroscopic systems. Both of these are well-accepted principles in physics, with sound observational and theoretical support, yet they seem to be in conflict; hence the paradox.
Josef Loschmidt's criticism was provoked by the H-theorem of Boltzmann, which employed kinetic theory to explain the increase of entropy in an ideal gas from a non-equilibrium state, when the molecules of the gas are allowed to collide. In 1876, Loschmidt pointed out that if there is a motion of a system from time t0 to time t1 to time t2 that leads to a steady decrease of H (increase of entropy) with time, then there is another allowed state of motion of the system at t1, found by reversing all the velocities, in which H must increase. This revealed that one of Boltzmann's key assumptions, molecular chaos, or Stosszahlansatz (the assumption that all particle velocities are completely uncorrelated), did not follow from Newtonian dynamics. One can assert that possible correlations are uninteresting, and therefore decide to ignore them; but if one does so, one has changed the conceptual system, injecting an element of time-asymmetry by that very action.
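Loschmidt's reversal argument is easy to demonstrate numerically. In the toy sketch below (an illustration, not part of the historical exchange), non-interacting particles bounce elastically in a one-dimensional box; because the dynamics are time-symmetric, negating every velocity midway makes the system retrace its path exactly.

```python
# Minimal numerical illustration of Loschmidt's reversal argument:
# evolve forward, flip every velocity, evolve forward again for the
# same number of steps, and the system returns to its starting state.
def step(positions, velocities, dt=0.01):
    new_p, new_v = [], []
    for x, v in zip(positions, velocities):
        x += v * dt
        if x < 0.0:        # elastic reflection off the left wall
            x, v = -x, -v
        elif x > 1.0:      # elastic reflection off the right wall
            x, v = 2.0 - x, -v
        new_p.append(x)
        new_v.append(v)
    return new_p, new_v

p, v = [0.2, 0.5, 0.8], [0.29, -0.61, 0.13]
start = list(p)
for _ in range(400):               # evolve forward in time
    p, v = step(p, v)
v = [-u for u in v]                # Loschmidt's velocity reversal
for _ in range(400):               # "forward" again = motion in reverse
    p, v = step(p, v)
assert all(abs(a - b) < 1e-9 for a, b in zip(p, start))
```

The final velocities come out as the negatives of the initial ones, exactly as a film of the original motion played backwards would show.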
Reversible laws of motion cannot explain why we experience our world to be in such a comparatively low state of entropy at the moment (compared to the equilibrium entropy of universal heat death); and to have been at even lower entropy in the past.
Arrow of time
Any process that happens regularly in the forward direction of time but rarely or never in the opposite direction, such as entropy increasing in an isolated system, defines what physicists call an arrow of time in nature. This term only refers to an observation of an asymmetry in time; it is not meant to suggest an explanation for such asymmetries. Loschmidt's paradox is equivalent to the question of how it is possible that there could be a thermodynamic arrow of time given time-symmetric fundamental laws, since time-symmetry implies that for any process compatible with these fundamental laws, a reversed version that looked exactly like a film of the first process played backwards would be equally compatible with the same fundamental laws, and would even be equally probable if one were to pick the system's initial state randomly from the phase space of all possible states for that system.
Although most of the arrows of time described by physicists are thought to be special cases of the thermodynamic arrow, there are a few that are believed to be unconnected, like the cosmological arrow of time based on the fact that the universe is expanding rather than contracting, and the fact that a few processes in particle physics actually violate time-symmetry, while they respect a related symmetry known as CPT symmetry. In the case of the cosmological arrow, most physicists believe that entropy would continue to increase even if the universe began to contract (although the physicist Thomas Gold once proposed a model in which the thermodynamic arrow would reverse in this phase). In the case of the violations of time-symmetry in particle physics, the situations in which they occur are rare and are only known to involve a few types of meson particles. Furthermore, due to CPT symmetry, reversal of time direction is equivalent to renaming particles as antiparticles and vice versa. Therefore, this cannot explain Loschmidt's paradox.
Current research in dynamical systems offers one possible mechanism for obtaining irreversibility from reversible systems. The central argument is based on the claim that the correct way to study the dynamics of macroscopic systems is to study the transfer operator corresponding to the microscopic equations of motion. It is then argued that the transfer operator is not unitary (i.e. is not reversible) but has eigenvalues whose magnitude is strictly less than one; these eigenvalues corresponding to decaying physical states. This approach is fraught with various difficulties; it works well for only a handful of exactly solvable models.
One approach to handling Loschmidt's paradox is the fluctuation theorem, derived heuristically by Denis Evans and Debra Searles, which gives a numerical estimate of the probability that a system away from equilibrium will have a certain value for the dissipation function (often an entropy like property) over a certain amount of time. The result is obtained with the exact time reversible dynamical equations of motion and the Axiom of Causality. The fluctuation theorem is obtained using the fact that dynamics is time reversible. Quantitative predictions of this theorem have been confirmed in laboratory experiments at the Australian National University conducted by Edith M. Sevick et al. using optical tweezers apparatus. This theorem is applicable for transient systems, which may initially be in equilibrium and then driven away (as was the case for the first experiment by Sevick et al.) or some other arbitrary initial state, including relaxation towards equilibrium. There is also an asymptotic result for systems which are in a nonequilibrium steady state at all times.
There is a crucial point in the fluctuation theorem, that differs from how Loschmidt framed the paradox. Loschmidt considered the probability of observing a single trajectory, which is analogous to enquiring about the probability of observing a single point in phase space. In both of these cases the probability is always zero. To be able to effectively address this you must consider the probability density for a set of points in a small region of phase space, or a set of trajectories. The fluctuation theorem considers the probability density for all of the trajectories that are initially in an infinitesimally small region of phase space. This leads directly to the probability of finding a trajectory, in either the forward or the reverse trajectory sets, depending upon the initial probability distribution as well as the dissipation which is done as the system evolves. It is this crucial difference in approach that allows the fluctuation theorem to correctly solve the paradox.
The Big Bang
Another way of dealing with Loschmidt's paradox is to see the second law as an expression of a set of boundary conditions, in which our universe's time coordinate has a low-entropy starting point: the Big Bang. From this point of view, the arrow of time is determined entirely by the direction that leads away from the Big Bang, and a hypothetical universe with a maximum-entropy Big Bang would have no arrow of time. The theory of cosmic inflation attempts to explain why the early universe had such a low entropy.
- Maximum entropy thermodynamics for one particular perspective on entropy, reversibility and the Second Law
- Poincaré recurrence theorem
- Statistical mechanics
- J. Loschmidt, Sitzungsber. Kais. Akad. Wiss. Wien, Math. Naturwiss. Classe 73, 128–142 (1876)
Black Holes. What do you know about them? What does anyone really know about them? As we all remember from movies and cartoons when we were kids, black holes are large holes in space that suck everything inside of them and nothing can escape, not even light.
Now, scientists are a little more keyed in on how black holes work, or at least they think they are. Scientists have recently released some estimations of just how big they think black holes can get, and the size is a little frightening.
Yeah. It’s going to get confusing, but try to stay with me.
In the context of black holes, physicists use the name "spacetime" whenever a particular model, or description, of space is interwoven with time.
A black hole is a region of spacetime somewhere in the universe that exhibits an immense gravitational pull that nothing — including light — can escape. Black holes of stellar size are thought to form when large stars come to the end of their lives and collapse inward on themselves. At present, all galaxies in the known universe are thought to have a black hole at their center, including our very own Milky Way.
Andrew King, an astronomical theorist, recently had his paper "How Big Can a Black Hole Grow?" published in the journal Monthly Notices Letters of the Royal Astronomical Society. In the paper, King estimates that the largest a black hole could ever grow is the equivalent of 50 billion solar masses (about 50 billion times the mass of our own sun). Something that large is difficult for the human mind to comprehend.
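For a sense of scale (a back-of-the-envelope addition using rounded physical constants, not a figure from King's paper), the Schwarzschild radius r_s = 2GM/c² of a 50-billion-solar-mass black hole works out to roughly a thousand astronomical units:

```python
# Back-of-the-envelope scale check (rounded SI constants; not from King's paper):
# Schwarzschild radius r_s = 2 G M / c^2 for a 50-billion-solar-mass black hole.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # one solar mass, kg
AU = 1.496e11        # one astronomical unit, m

M = 50e9 * M_SUN                 # 50 billion solar masses
r_s = 2 * G * M / c ** 2
print(f"r_s ≈ {r_s:.2e} m ≈ {r_s / AU:.0f} AU")
```

In other words, the event horizon alone would dwarf the planetary orbits of our entire solar system.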
King said that knowing how big black holes can get means we probably won't be surprised by any much larger than 50 billion solar masses; several of the black holes already detected come close to that size.
“The significance of this discovery is that astronomers have found black holes of almost the maximum mass, by observing the huge amount of radiation given off by the gas disc as it falls in. The mass limit means that this procedure should not turn up any masses much bigger than those we know, because there would not be a luminous disc.”
Technically, according to King, there is one way that a black hole could end up being larger than the size of 50 billion suns. If two black holes were to form close enough to each other, and end up merging, the resulting massive black hole could break the 50 billion sun threshold.
“…a hole near the maximum mass could merge with another black hole, and the result would be bigger still. But no light would be produced in this merger, and the bigger merged black hole could not have a disc of gas that would make light.”
So, even though a black hole could form in size larger than 50 billion suns, we here on Earth would have a hard time detecting it.
[Photo by ESA/Getty Images]
The most important conclusion of this article is that the General Theory of Relativity does not predict gravitational waves as such, but only an ordinary modulation of gravitational field intensities caused by the rotation of bodies. If the LIGO team has measured anything, it is this modulation rather than a gravitational wave understood as the carrier of gravity. This discussion shows that using overly complicated mathematics in physics can lead to erroneous interpretation of results (here, perhaps, tensor analysis is to blame): various quantities can be calculated formally, but without knowing what the analysis means, they can be misinterpreted. Since the modulation of gravitational field intensities is called a gravitational wave in contemporary physics, we do the same here, although the term is misleading. The article shows that Newton's law of gravitation also implies gravitational waves very similar to those that follow from the General Theory of Relativity (GTR), and it describes the differences between the waveforms predicted by Newtonian gravitation and those predicted by the General Theory of Relativity, whose measurement was announced by LIGO (the Laser Interferometer Gravitational-Wave Observatory). According to both theories, gravitational waves are cyclical changes of the gravitational field intensities. The article also proposes a method of testing the laser interferometer used for gravitational-wave measurement at the LIGO Observatory, and presents criticism of the results published by the LIGO team.
Comments: 20 Pages. From the General Theory of Relativity do not result any gravitational waves, but just ordinary modulation of the gravitational field intensities caused by rotating of bodies.
[v1] 2018-06-28 13:13:07
Unique-IP document downloads: 17 times
Vixra.org is a pre-print repository rather than a journal. Articles hosted may not yet have been verified by peer-review and should be treated as preliminary. In particular, anything that appears to include financial or legal advice or proposed medical treatments should be treated with due caution. Vixra.org will not be responsible for any consequences of actions that result from any form of use of any documents on this website.
Add your own feedback and questions here:
You are equally welcome to be positive or negative about any paper but please be polite. If you are being critical you must mention at least one specific error, otherwise your comment will be deleted as unhelpful.
Solutions for Chapter 10 Problem 109P
Lewis theory is not sophisticated enough to be correct every time. It is impossible to write a good Lewis structure for a molecule with an odd number of electrons, yet some such molecules exist in nature. In those cases, we simply write the best Lewis structure we can.
Sometimes the central atom has more or fewer electrons than an octet (a stable electronic configuration), yet such molecules still exist.
With new “megafires” altering our forests, scientists scramble to understand impacts on wildlife while trying to harness fire to benefit both animals and people
Canada lynx (above) are among wildlife species impacted by larger and hotter wildfires across Washington’s Cascade Range, including a 2015 blaze (right) just north of the town of Twisp.
STANDING ATOP A SNOWY MOUNTAIN PASS in northeastern Washington’s Okanogan-Wenatchee National Forest, biologist John Rohrer scans a landscape transformed by wildfire. Charred, dead trees cover ridges and valleys as far as the eye can see. “This used to be core Canada lynx habitat,” says Rohrer, a U.S. Forest Service (USFS) biologist who has tracked the rare cats in these rugged mountains for more than 20 years. “Now it’s not.”
Eleven years ago, the 175,000-acre Tripod Fire burned through this area, consuming the subalpine lodgepole pine and spruce-fir forest that lynx rely on for hunting. The cats prey almost exclusively on snowshoe hares, which avoid severely burned areas lacking green forage and protective cover. No hares, no lynx.
While fire always has been part of this forest ecosystem, wildfires have grown hotter and larger in recent decades, burning a third of lynx habitat in the northeastern Cascades during the past two decades—and reducing the region’s carrying capacity for female lynx to 27, down from 43 in 1996.
The fires are casting doubt on the future of Canada lynx in the Cascades. Nationwide, the cat is listed as threatened under the U.S. Endangered Species Act, and last year, the Washington Department of Fish and Wildlife changed its designation from threatened to endangered, citing “the loss and fragmentation of habitat as a result of wildfires” as a top threat to survival.
Lynx are hardly the only animals feeling the heat. From mountain lions to moose, wildlife species across the West—and beyond—are confronting increasingly larger and hotter wildfires. Nationwide, a record 10.1 million acres burned in 2015, and the number of “megafires”—those that burn more than 100,000 acres—is on the rise.
According to a 2015 USFS report, the United States loses twice as many acres to fire as it did three decades ago, and that acreage may double or triple by midcentury. The six worst fire seasons recorded since 1960 occurred during the past 15 years, the report adds. While large wildfires are not new—conflagrations of more than a million acres were reported in the late 1800s—their increasing frequency is. Before 1995, an average of one or fewer megafires broke out annually in the United States. Between 2005 and 2014, that number had jumped to nearly 10 a year.
Megafires are primarily a problem in arid western forests, but the danger is also rising in the East. Last November, an inferno dubbed the Chimney Tops 2 Fire (right) broke out in Great Smoky Mountains National Park and rapidly spread to more than 17,000 acres, killing 14 people and destroying more than 2,000 homes and 53 commercial buildings in and around Gatlinburg, Tennessee. Across the Southeast, fires burned more than 150,000 acres during fall 2016.
Today’s wildfires can be so severe that they permanently destroy forests by sterilizing soils, killing seed sources and enabling invasive species such as cheatgrass to move in. “There will be areas out West where we may never see forests come back,” says Bruce Stein, the National Wildlife Federation’s associate vice president for conservation science.
Stein blames megafires on two major forces. The first is “decades of fire suppression that have left many forests seriously overgrown and ripe for major fires.” For nearly a century, USFS and other agencies fought aggressively to extinguish every single fire—based on an assumption that fire is always damaging and destroys valuable timber and habitat. The result is that many forests are now choked with fuel: dense stands of tightly spaced trees and forest floors littered with deep layers of needles and other flammable organic matter. Historically, frequent, low-intensity fires burned off these fuels and thinned out thick stands of young trees.
The second force, Stein says, is climate change. According to a Brookings Institution report, higher temperatures, shorter winters, early springs and reduced snowpack have lengthened the U.S. fire season by 40 to 80 days since 1970. USFS reports that in some regions, the fire season now lasts 300 days per year. More frequent and longer droughts—which spawn hotter, drier and windier summers—also have made forests more vulnerable to large and severe fires.
In addition, climate change has spawned outbreaks of tree-killing insects. During the past several decades, mountain pine and other bark beetles have ravaged millions of acres of conifer forests from the Southwest to Alaska. Though these insects are native, they have become more destructive as milder winters have extended their breeding and feeding seasons. Drought-stressed trees, meanwhile, are less resistant to beetles. The insects have left in their wake large areas of woodlands filled with dead, dried-out trees and forest floors covered with dry conifer needles and branches ripe for ignition. Combined with fire suppression, these myriad consequences of climate change “make it much more likely that when a wildfire does break out, it will explode into a truly extreme event,” Stein says.
It can be difficult to generalize about the impact of fire on wildlife because each blaze and ecosystem varies. For example, woodpeckers that inhabit cavities in standing dead trees and eat insects that live in deadwood benefit from fires that create new habitat—unless the fire is so fierce the forest never grows back. Other species requiring dense forest canopy, such as spotted owls, can decline or disappear following a large fire.
Because megafires differ fundamentally from smaller burns wildlife faced in the past, scientists are scrambling to find out how they affect different species. In Southern California, for example, researchers hypothesize that increasingly frequent fires will suppress mountain lion populations by converting native scrublands, which the cats prefer for hunting and resting, to open, non-native grasslands, says San Diego State University biologist Megan Jennings, who analyzed data on the movements of 44 lions in the state’s Peninsular Range. More frequent fires, often human-caused, are combining with urban and suburban expansion to degrade and diminish habitat, “further endangering the persistence of healthy puma populations in southern California,” says a study Jennings published last year in The Journal of Wildlife Management.
Sage-grouse also are suffering from increasingly large wildfires sweeping through the western sagebrush steppe the birds rely on. Here fire danger is on the rise due in part to the spread of highly flammable, invasive cheatgrass. Once an area burns, a negative feedback loop means even more cheatgrass—which grows quickly—leading to more fires and less grouse habitat.
In Washington’s Cascades, on the other hand, Rohrer reports that moose seem to have benefited from larger fires, which have led to an increase in the willows and other deciduous trees the herbivores feed on. “Twenty years ago, we hardly ever saw moose here,” Rohrer says. “Now we get them wandering into town.”
Though they may yield winners as well as losers in the short term, “on balance, megafires are very bad for wildlife,” Stein says. Unlike traditional, low-intensity fires that leave behind a patchwork of habitats, today’s hotter and larger flames are contributing to landscape simplification. By contrast, biodiversity is usually highest in landscapes that are complex—featuring a variety of habitat types, from open grassy areas and scrublands to forests of different age classes. “Even when a forest comes back following a megafire,” Stein says, “you end up with a uniform habitat rather than the mosaic that benefits most wildlife species.”
As the flames grow higher, the cost to U.S. taxpayers is also mounting. In 2015, the federal government spent a record $2.1 billion fighting fires. (Prior to 2000, the government’s firefighting tab had never topped $1 billion, but since then has exceeded that figure 13 times.) USFS bears most of this cost. Today, firefighting consumes more than half of the service’s budget—compared with just 16 percent in 1995—and by 2025, USFS officials estimate that figure could be nearly 70 percent.
Conservationists say this increased spending has hampered the service’s role as steward of our national forests. “Money is being drained from other important programs, including wildlife management, watershed protection, endangered species monitoring” and other initiatives critical to conservation, says Mike Leahy, NWF’s senior manager for public lands conservation.
To reduce the risk of megafires, scientists and land managers recommend reintroducing low-intensity fire to public lands using “prescribed burns”—fires set during favorable weather with firefighters on hand to contain the flames—that would burn off smaller trees, shrubs and flammable organic matter on the forest floor. Forest ecologists also suggest thinning out overcrowded woodlands by cutting small-diameter trees and allowing larger ones to grow. This kind of “targeted, collaborative forest and fuel management may be the best opportunity to utilize limited funding to actively restore forests,” says Brian Kurzel, director of NWF’s Rocky Mountain Regional Center.
In Arizona, one such project, the Four Forest Restoration Initiative (4FRI), already is underway. Spanning 2.4 million acres across four adjacent national forests, the project—a collaboration among state, federal and local authorities along with conservation organizations—is treating up to 50,000 acres a year with prescribed burns, thinning and “managed wildfires” (natural fires allowed to burn under controlled conditions).
Historically, these ponderosa pine forests experienced low-intensity fires every few years, which kept tree densities to about 25 per acre and prevented huge fires. “Some of these stands now have densities of a thousand trees per acre— 40-year-old trees that are a scrawny 5 inches in diameter,” says Tom Mackin, past president of NWF affiliate the Arizona Wildlife Federation and that group’s point person for the 4FRI. The initiative’s goal is to save human lives and property as well as benefit wildlife—including elk, mule deer, pronghorn, black bear and wild turkey—by returning the forests to conditions that existed prior to fire suppression, when fewer but larger trees provided an open, parklike ecosystem with grasses, shrubs and forbs that are food for many animal species.
In Washington’s Cascades, Rohrer and his colleagues hope a similar ecosystem transformation will take place. During a five-hour snowmobile tour through the Tripod burn zone last winter, they were elated to discover a handful of snowshoe hare tracks. While most hare habitat remains burned out, a few small stands of trees have begun to regrow, providing green cover and forage. “This is good hare habitat,” said Rohrer, examining a patch of young conifers surrounded by standing deadwood.
Still, he and other scientists say it could take 20 to 40 years before these trees grow large enough to provide suitable lynx habitat. Whether or not the cats rebound in coming years will be dictated by flames. “If we want lynx to recover here,” Rohrer says, “we can’t have any more big fires.”
Fighting “Fire Borrowing”
Unlike other natural disasters such as hurricanes and tornadoes, wildfires on public lands receive no dedicated funding within the federal budget. To control the mounting number of destructive “megafires” across the country, federal agencies, particularly the U.S. Forest Service, must take money away from other critical programs such as wildlife conservation and forest restoration. Seeking to end this practice, the National Wildlife Federation passed a resolution at its 2016 annual meeting requesting that the U.S. Congress and president do away with “fire borrowing” by creating a dedicated fund to battle wildfires.
More from National Wildlife magazine and NWF:
Climate Change and Wildfires
Wildlife Feels the Heat
Five Ways Wildfires Threaten Western Wildlife
Fires of Life
America's Forgotten Forest
Home on the Sage
NWF Blogs about Wildfires
Learn about Forest Fires with Ranger Rick!
Mule Deer Decline
Things are roughened up and friction is now added to the approximate simple pendulum
Dip your toe into the world of quantum mechanics by looking at the Schrödinger equation for the hydrogen atom
See how the motion of the simple pendulum is not-so-simple after all.
Look at the advanced way of viewing sin and cos through their power series.
An article demonstrating mathematically how various physical modelling assumptions affect the solution to the seemingly simple problem of the projectile.
Get further into power series using the fascinating Bessel's equation.
Follow in the steps of Newton and find the path that the earth follows around the sun.
See how differential equations might be used to make a realistic model of a system containing predators and their prey.
Explore the possibilities for reaction rates versus concentrations with this non-linear differential equation
Can you find the differential equations giving rise to these famous solutions?
Match the descriptions of physical processes to these differential equations.
Solve these differential equations to see how a minus sign can change the answer
How many eggs should a bird lay to maximise the number of chicks that will hatch? An introduction to optimisation.
By: Yvonne Baskin
237 pages, B/w illus
Yvonne Baskin takes the reader from the polar desert of Antarctica to the coastal rain forests of Canada, from the rangelands of Yellowstone National Park to the vanishing wetlands of the Mississippi River basin, from Dutch pastures to English sounds, and beyond. She introduces exotic creatures - from bacteria and fungi to microscopic nematode worms, springtails, and mud shrimp - and shows us what scientists are learning about their contribution to sustaining a green and healthy world above ground. She also explores the alarming ways in which air pollution, trawl fishing, timber cutting, introductions of invasive species, wetland destruction, and the like threaten this underground diversity and how their loss, in turn, affects our own well being.
"[An] enjoyable tour of a new ecological frontier." - PUBLISHERS WEEKLY
"Engaging...rich and descriptive...Baskin's book successfully gives a face to the rapidly changing field of soil ecology." - BIOSCIENCE
"At last, proper attention is given to the vast biomass and biodiversity at our feet, humanity's absolute dependence upon this layer of life, and the need to expand science and conservation to save it. This is a well-written and important book." - E.O. WILSON
"One of the most talented science writers, Yvonne Baskin has presented a clear view of amazing creatures and microbes and their profound influence on the surface world..." - PAUL R. EHRLICH
"With fabulous prose, Yvonne Baskin takes us through an ecological looking glass to the wonderland of underground...required reading (for all) made delightful." - THOMAS E. LOVEJOY
Brazil's Islands in the Sky Defy Evolution
August 9, 2012
Isolated table mountains with sheer cliffs in South America should be natural laboratories for evolution. Why aren't they?
Mt. St. Helens Renewal Slow, Steady
August 6, 2012
This is an eyewitness report of ecological renewal at the volcano that erupted 32 years ago.
Weightlifters No Match for Insects
August 4, 2012
For Olympic season, here are more comparisons between human and animal capabilities.
Dinosaur Triggers and Other Fossil Foibles
August 3, 2012
Instant dinosaurs: just add mountains. Does this and other fossil news make sense?
More Olympic Creatures
August 1, 2012
Plants and animals continue to amaze us with their Olympic-level abilities. New observations promote some to the award stand.
Peppered Moths Without Evolution
July 31, 2012
A new study shows that scientific research on moth camouflage does not require evolutionary theory.
Animals Win the Gold
July 27, 2012
As the Olympics begin in London, it's fun to consider how animals would compete against humans.
Evolution Falsely So Called
July 25, 2012
Evolutionary theory gets credited for changes that really do not help Darwin's view of a universal tree of life. Three examples show how.
Body Double: Your Body as a Template for Inventors
July 17, 2012
Your body contains a lot of things engineers would like to copy, and not just at the scale of C3P0-like humanoid robots.
July 3, 2012
As summer Olympics season approaches, we should remember that we humans are not the only ones with some amazing physical abilities.
Evolutionists Taking Credit for Biomimetics
July 1, 2012
Biomimetics is all about design – intelligent design, mimicking the superb designs found in nature. Why, then, are some scientists claiming evolutionary theory is where the biomimetic beef is?
Explanatory Filter in Action: Fairy Circles in Africa
June 30, 2012
The old "crop circle" craze fanned the curiosity of many, till humans were filmed making them. Now, scientists have a different circle mystery, and they're stumped.
Pitcher Plant Inspires R&D Award
June 21, 2012
The R&D 100 award, previously given for inventions like the fax machine and automated teller machine, has been given this year for a biologically-inspired design that could revolutionize society in many ways.
Mating Turtles Fossilized Instantly
June 19, 2012
Evolutionary paleontologists have a mystery on their hands: how did turtles in the act of mating become fossilized?
Spiders Can Cross Oceans
May 31, 2012
Why did the spider cross the ocean? To colonize the Old World after it "originated" in the New World.
Laws Governing The Operation Of The Earth Around The Sun
Datetime: 2016-09-12 12:13:58
As we all know, the Earth turns from west to east around its own axis, which passes through the North and South Poles, completing one rotation each day. Each full rotation (360°) corresponds to one day-night cycle of 24 h, so the Earth rotates 15° per hour.
In addition to rotating, the Earth follows a slightly eccentric elliptical orbit around the Sun, a motion known as "revolution", with a period of one year. The normal to the orbital plane (the ecliptic plane) is tilted at 23°27′ to the Earth's rotation axis, and as the Earth revolves, the direction of its rotation axis stays fixed, always pointing toward the celestial North Pole. Consequently, at different positions along its orbit, the Sun's rays strike the Earth from different directions, which produces the changing seasons.
Suppose an observer is located at mid-latitudes in the Earth's northern hemisphere. The Sun's annual apparent motion on the celestial sphere can then be described as follows. On the vernal equinox each year (March 21), the Sun crosses the equator from south to north (solar declination δ = 0°), and astronomical spring begins in the northern hemisphere. In its daily apparent motion the Sun rises due east and sets due west, and day and night are of equal length. The Sun's altitude at noon equals 90° − Φ, where Φ is the observer's geographic latitude.
After the equinox, the Sun's rising point moves northward day by day, daylight hours grow longer while nights shorten, and the Sun's noon altitude increases daily. On the summer solstice (June 22), the noon altitude reaches its maximum of 90° − Φ + 23°27′ and the day is longest; astronomical summer begins in the northern hemisphere. After the summer solstice, the noon altitude decreases and the days shorten, while sunrise and sunset drift back toward due east and due west.
On the autumnal equinox (September 23), the Sun crosses the equator from north to south (solar declination δ = 0°), and astronomical autumn begins in the northern hemisphere. In its daily apparent motion the Sun again rises due east and sets due west, and day and night are of equal length. After the autumnal equinox, the Sun's rising point moves southward day by day, daylight hours shorten while nights lengthen, and the noon altitude decreases daily. On the winter solstice (December 22), the noon altitude reaches its minimum of 90° − Φ − 23°27′ and the night is longest; astronomical winter begins in the northern hemisphere. After the winter solstice, the noon altitude increases daily and the days lengthen, with sunrise and sunset drifting back toward due east and due west, until the Sun again reaches the equator on the vernal equinox (March 21).
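The geometry above is easy to check numerically. The following is a minimal Python sketch (not from the article): `solar_declination` uses Cooper's common approximation for the Sun's declination, and the 47° latitude is an arbitrary example value.

```python
import math

# One rotation (360 degrees) per 24 hours -> 15 degrees per hour
ROTATION_DEG_PER_HOUR = 360.0 / 24.0

def solar_declination(day_of_year):
    """Approximate solar declination in degrees (Cooper's formula).
    Ranges between about -23.45 and +23.45 over the year."""
    return 23.45 * math.sin(math.radians(360.0 / 365.0 * (284 + day_of_year)))

def noon_altitude(latitude_deg, declination_deg):
    """Altitude of the Sun at local solar noon, h = 90 - phi + delta,
    for a northern mid-latitude observer."""
    return 90.0 - latitude_deg + declination_deg

lat = 47.0  # example observer latitude (degrees north)
print(round(noon_altitude(lat, 0.0), 2))     # equinoxes: 43.0
print(round(noon_altitude(lat, 23.45), 2))   # summer solstice: 66.45
print(round(noon_altitude(lat, -23.45), 2))  # winter solstice: 19.55
print(round(solar_declination(81), 2))       # ~0 near the vernal equinox
```

The solstice values reproduce the 90° − Φ ± 23°27′ extremes described in the text (23°27′ ≈ 23.45°).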
Researchers are using astronomical techniques used to study distant stars to survey endangered species.
The team of scientists is developing a system to automatically identify animals using a camera that has been mounted on a drone.
It is able to identify them from the heat they give off, even when vegetation is in the way.
Details of the system were presented at the annual meeting of the European Astronomical Society in Liverpool, UK.
The idea was developed by Serge Wich, a conservationist at Liverpool John Moores University, and Dr Steve Longmore, an astrophysicist at the same university. He says that the system has the potential to greatly improve the accuracy of monitoring endangered species and so help save them.
“Conservation is not only about the numbers of animals but also about political will and local community supporting conservation. But better data always helps to move good arguments forward. Solid data on what is happening to animal populations is the foundation of all conservation efforts”.
Currently, conservationists estimate numbers of endangered species by physically counting them or the signs they leave.
This is an inexact science, as the animals can be in areas inaccessible to observers. Further problems can arise if species have migrated to another area since the previous census. Signs of their presence, such as abandoned nests, rely on assumptions such as the number of animals that share the nest and the frequency with which the species build and abandon their nests.
The process is time consuming, expensive and inaccurate. So Dr Wich developed a system to monitor them using infrared cameras mounted on drones.
Trials at Chester Zoo and Knowsley Safari Park showed that the system could pick up animals on the ground from the heat they gave off, even through tree cover.
But the problem was that they couldn’t always identify the species – especially when they were far away. Dr Wich needed a system that could identify different species from their heat signatures.
He explained his problem to his neighbour, Dr Steve Longmore, while chatting over the fence. The neighbour was an astronomer and he explained that he knew someone who identified the size and age of far away stars from their heat signatures.
“I collaborated with quite a few people during my career but astrophysicists were not on my list of potential collaborators,” Dr Wich told BBC News.
“But here we are. It shows the serendipity of how science works.”
Dr Wich worked with astrophysicist Dr Claire Burke, also at Liverpool John Moores University. She told BBC News that her work in identifying the most massive galaxies in the Universe from the light they emit helped her devise software that could identify different types of animal from the pattern of the heat they give off.
Each species, she said, has distinct warmer and colder areas that are unique.
“When we look at animals in the thermal infrared, we’re looking at their body heat and they glow in the footage. That glow is very similar to the way that stars and galaxies in space glow,” Dr Burke explained.
“So we can apply techniques and software used in astronomy for decades to automatically detect and measure this glow”.
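As a concrete illustration of the detect-the-glow idea Dr Burke describes, here is a deliberately simple Python sketch — not the team's actual software, whose algorithms and parameters the article does not give — that finds connected groups of warm pixels in a toy thermal frame:

```python
def detect_warm_blobs(image, threshold):
    """Label connected groups of pixels warmer than `threshold`
    (4-connectivity), returning one (size, peak) summary per blob.
    A crude stand-in for astronomical source detection."""
    rows, cols = len(image), len(image[0])
    seen = [[False] * cols for _ in range(rows)]
    blobs = []
    for r in range(rows):
        for c in range(cols):
            if image[r][c] > threshold and not seen[r][c]:
                # Flood-fill one blob, tracking its size and peak value.
                stack, size, peak = [(r, c)], 0, image[r][c]
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    size += 1
                    peak = max(peak, image[y][x])
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and image[ny][nx] > threshold
                                and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                blobs.append((size, peak))
    return blobs

# Toy "thermal frame": two warm animals against a ~20 °C background.
frame = [
    [20, 20, 20, 20, 20, 20],
    [20, 35, 36, 20, 20, 20],
    [20, 34, 20, 20, 31, 20],
    [20, 20, 20, 20, 32, 20],
]
print(detect_warm_blobs(frame, 25))  # → [(3, 36), (2, 32)]
```

Real pipelines add calibration, noise filtering, and shape/temperature profiles to classify species, but the core step — isolating bright sources against a background — is the same.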
The system can also give information about the health of animals. If an animal is injured then that part of the animal’s body will be glowing brighter than the rest. Similarly, diseased animals also have a different heat profile, according to Dr Burke.
“The real advantage this gives you is that if you know how many animals you have and where they are and what kind of health they are in, then you can formulate a good conservation strategy for looking after them,” she said.
“And if you can track them as well, then you can tell what they need to survive and thrive, and this helps us. If, for example, we needed to relocate an animal because its habitat was being destroyed, then you would know better what it needed to be relocated to.”
Earthquakes Multiple Choice Questions 4 PDF Download
Practice earthquakes MCQs (science test 4) for online course learning and test prep, with seismic analysis multiple choice questions and answers. The seismic analysis revision test includes earth science worksheets to learn from.
Earth science multiple choice question (MCQ): "The weight placed in the roof of an earthquake-resistant building is called", with options mass damper, base isolator, cross braces, and flexible pipes; a seismic analysis quiz for competitive exam prep and viva interview questions with answer key. A free earth science study guide to learn seismic analysis and attempt multiple-choice-based tests.
MCQs on Earthquakes Quiz PDF Download Worksheets 4
MCQ. The weight placed in the roof of an earthquake-resistant building is called
- base isolator
- mass damper
- cross braces
- flexible pipes
MCQ. The measure of an earthquake's strength is the
MCQ. Base isolators are made up of
- all of them
MCQ. The starting point of an earthquake at the Earth's surface is called
MCQ. Intensity values mostly seem to be higher near
R package for the linguistic analysis of fundamental frequency (F0) in speech
Version 1.1 (July 12, 2016)
What this package does

This package provides tools for:
- manipulating Pitch objects from the phonetics software Praat,
- visualizing F0 data in a way that retains rich information from the acoustic signal, and
- creating and plotting a schematic quantitative representation ('stylization') of an F0 track.
This package is referenced on page 52 of:
Albin, A. (2015). Typologizing native language influence on intonation in a second language: Three transfer phenomena in Japanese EFL learners. (Doctoral dissertation). Indiana University, Bloomington. http://dx.doi.org/10.5967/K8JW8BSC
The data used in the dissertation are also available for download.
While at present the contents of this package are limited to the functions from the dissertation, it is anticipated that this package will grow over time to incorporate a variety of other intonation-related functions. As such, other prosody researchers are more than welcome to contribute their own R code to this package. (See the author's homepage for contact information.)
This package has two dependencies that you must install first.
First, download the package audio from CRAN, e.g. with: install.packages("audio").
Second, install PraatR by going to the PraatR homepage and following the steps under "Installation".
Now you can install this package by following these steps:
1. Click on the "Download ZIP" button on the right side of this page and save the .zip file somewhere on your computer.
2. Unzip the .zip file to somewhere convenient (e.g. your desktop).
   - After this step, the .zip file is no longer needed and can be deleted.
3. Open R.
   - If you are running Windows, you must open R as an administrator. To do so, go to the icon for R on your start menu ('start screen' in Windows 8) or desktop, right-click it, and select "Run as administrator". Click "Yes" to the User Account Control window that pops up.
4. Find the path to the unzipped folder from step 2. If you unzipped to your desktop, the path will be something like the following:
5. Run the following line of code, adjusting the path for the pkgs argument as needed so it points to the path you determined in Step 4.
install.packages(pkgs="(path from Step 4)", repos=NULL, type="source")
If everything works correctly, you should see something like the following in the R console:
* installing *source* package 'intonation' ...
** R
** inst
** preparing package for lazy loading
** help
*** installing help indices
** building package indices
** testing if installed package can be loaded
*** arch - i386
*** arch - x64
* DONE (intonation)
Once installed, type ?intonation at the R console to pull up the help file for the package, which acts as a table of contents directing you to help files for other specific functions. You can also type example("intonation") to see some examples of what the package can do.
In addition to some sample data (HelloWorld), the package includes the following functions:
- Spectrogram(): Create a spectrogram of an audio file
- F0RangeFinder(): Cycle through each soundfile in a folder and determine F0 range for each
- ToPitch(): Use Praat to generate a Pitch object
- ReadPitch(): Read a Pitch object into R
- RichVisualization(): Visualize the F0 data in a Pitch object
- Stylize(): Create a 'stylization' (i.e., a schematic representation) of an F0 track
- PlotStylization(): Draw a stylization on an open plot of an F0 track
Each of these has a help file included in the package. Thus, for example, you can type ?Stylize to find out more information about how to use the Stylize() function.
R documentation 'help files' are available for every function, but the following gives a rough idea of what the syntax for these functions looks like:
# Draw a spectrogram of the audio data in a soundfile
Spectrogram(WavePath)

# Quickly determine F0 range for every soundfile in one folder (WaveFolder)
# Save the resulting pitch-related files in another folder (PitchFolder)
F0Ranges = F0RangeFinder( WaveFolder="C:/Wave/", PitchFolder="C:/Pitch/" )

# Use an F0 range thus determined to create a Pitch object from this soundfile
ToPitch( Input=WavePath, Output=PitchPath, Range=c(69,135) )

# Read this Pitch object into R
ReadPitch( PitchPath )

# Plot the F0 data contained therein as a 'rich visualization'
RichVisualization( PitchPath=PitchPath, WavePath=WavePath,
                   Labels = c("hello","world"), Divisions_ms = c(132,648,1257) )

# Create a stylization of the F0 track
Stylization = Stylize( PitchPath, VertexIndices=c(211,489,1123) )

# Superimpose this stylization on the rich visualization
PlotStylization(Stylization)
This package is released under the GNU General Public License:
This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
This program is distributed in the hope that it will be useful, but without any warranty; without even the implied warranty of merchantability or fitness for a particular purpose. See the GNU General Public License for more details.
To receive a copy of the GNU General Public License, see: http://www.gnu.org/licenses/ | <urn:uuid:fb3bcdd2-3bc9-49b8-bea3-8b37fe5169ea> | 2.703125 | 1,265 | Product Page | Software Dev. | 44.58521 | 95,576,846 |
In a solid, molecules barely move and are tightly squeezed together, keeping the same shape. When a solid is melted down to a liquid, the molecules move more frequently and shape to the container they are held in.
When you heat a solid, it becomes a liquid. For example, if you heat ice up on the stove, it will eventually melt down to water, a liquid.
SOLID (compressed and non-moving)
As a liquid turns to a gas, the molecules begin to increase their movement and spread out more. Both liquids and gases move to the shape of the container/space they are in.
LIQUID (shaped to container; more movement)
When you heat a liquid, it evaporates and turns to a gas. For example, when you heat water on the stove, it turns to water vapor, a gas.
The molecules in gas are constantly moving, expanding or shrinking to fit the space they are in. Depending on how tight of a space they're in, the molecules are closer or farther apart.
For example, the air all around us is a gas, moving and changing every time we move.
GAS (expands to space; wide range of movement)
Proteins, the workhorses of the body, can have more than one function, but they often need to be very specific in their action or they create cellular havoc, possibly leading to disease.
Scientists from the Florida campus of The Scripps Research Institute (TSRI) have uncovered how an enzyme co-factor can bestow specificity on a class of proteins with otherwise nonspecific biochemical activity.
The protein in question helps in the assembly of ribosomes, large macromolecular machines that are critical to protein production and cell growth. This new discovery expands scientists’ view of the role of co-factors and suggests such co-factors could be used to modify the activity of related proteins and their role in disease.
“In ribosome production, you need to do things very specifically,” said TSRI Associate Professor Katrin Karbstein, who led the study. “Adding a co-factor like Rrp5 forces these enzymes to be specific in their actions. The obvious possibility is that if you could manipulate the co-factor, you could alter protein activity, which could prove to be tremendously important.”
The new study, which is being published the week of April 29, 2013, in the online Early Edition of the Proceedings of the National Academy of Science, sheds light on proteins called DEAD-box proteins, a provocative title actually derived from their amino acid sequence. These proteins regulate all aspects of gene expression and RNA metabolism, particularly in the production of ribosomes, and are involved in cell metabolism. The link between defects in ribosome assembly and cancer and between DEAD-box proteins and cancer is well documented.
The findings show that the DEAD-box protein Rok1, needed in the production of a small ribosomal subunit, recognizes the RNA backbone, the basic structural framework of nucleic acids. The co-factor Rrp5 then gives Rok1 the ability to target a specific RNA sequence by modulating the structure of Rok1.
“Despite extensive efforts, the roles of these DEAD-box proteins in the assembly of the two ribosomal subunits remain largely unknown,” Karbstein said. “Our study suggests that the solution may be to identify their cofactors first.”
The first author of the study, “Cofactor-Dependent Specificity of a DEAD-box Protein,” is Crystal L. Young. Also a co-author of the paper is Sohail Khoshnevis.
The study was supported by National Institutes of Health Grant R01-GM086451 and the American Heart Association.
Scripps Research Institute | <urn:uuid:a068f481-9942-4201-b67c-523c26bc3d3d> | 3.21875 | 552 | News Article | Science & Tech. | 32.29647 | 95,576,861 |
NASA climate expert warns planet facing its last chance
He says without action, Earth will see extreme events in coming decades
WASHINGTON — Exactly 20 years after warning America about global warming, a top NASA scientist said the situation has gotten so bad that the world's only hope is drastic action.
James Hansen told Congress on Monday that the world has long passed the "dangerous level" for greenhouse gases in the atmosphere and needs to get back to 1988 levels.
He said Earth's atmosphere can only stay this loaded with man-made carbon dioxide for a couple more decades without changes such as mass extinction, ecosystem collapse and dramatic sea level rises.
"We're toast if we don't get on a very different path," Hansen, director of the Goddard Institute for Space Studies who is sometimes called the godfather of global warming science, told The Associated Press. "This is the last chance."
Hansen brought global warming home to the public in June 1988 during a Washington heat wave, telling a Senate hearing that global warming was already here.
To cut emissions, Hansen said coal-fired power plants that don't capture carbon dioxide emissions shouldn't be used in the United States after 2025, and should be eliminated in the rest of the world by 2030. That carbon capture technology is still being developed.
Hansen said the Earth's atmosphere has got to get back to a level of 350 parts of carbon dioxide per million. Last month, it was 10 percent higher: 386.7 parts per million.
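The "10 percent higher" figure can be checked with a line of arithmetic (a trivial Python sketch, not from the article):

```python
target_ppm = 350.0    # level Hansen says the atmosphere must return to
measured_ppm = 386.7  # level reported for the previous month

excess_pct = (measured_ppm - target_ppm) / target_ppm * 100.0
print(round(excess_pct, 1))  # → 10.5, i.e. roughly "10 percent higher"
```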
Longtime global warming skeptic Sen. James Inhofe, R-Okla., citing a recent poll, said in a statement, "Hansen, (former Vice President) Gore and the media have been trumpeting man-made climate doom since the 1980s. But Americans are not buying it."
But Rep. Ed Markey, D-Mass., committee chairman, said, "Dr. Hansen was right. Twenty years later, we recognize him as a climate prophet." | <urn:uuid:11997664-2b72-437e-a49d-eb84f7dda0ce> | 2.828125 | 619 | News Article | Science & Tech. | 47.701534 | 95,576,866 |
An international team of researchers has developed a dynamic surface with reconfigurable topography that can sculpt and re-sculpt microscale to macroscale features, change its friction and slipperiness and tune other properties based on its proximity to a magnetic field
If you have an interest in anything in the world, then you have an interest in chemistry because everything you hear, see, taste, smell and touch involves chemistry and chemicals. Our ability to understand the chemical make-up of things and chemical reactions has led to everything from modern food and drugs to plastics and computers.
What is synthetic biology? Catherine Royer, professor of biology at Rensselaer Polytechnic Institute, answers the question on this edition of "Ask a Scientist"
What is the future of synthetic biology? Zan Luthey-Schulten, co-director at the Center for the Physics of Living Cells, answers the question on this edition of "Ask a Scientist"
In honor of the 60th anniversary of the Keeling Curve, Ralph Keeling of the Scripps CO2 Program shows how scientists make carbon dioxide measurements.
By treating living cells like tiny absorbent sponges, researchers have developed a potentially new way to introduce molecules and therapeutic genes into human cells
A Vanderbilt team has taken the next step forward in using a little-known bacteria to stop the spread of deadly mosquito-borne viruses, such as Zika and dengue
With support from the National Science Foundation, developmental biologist Arnaud Martin and his team at George Washington University are using cutting-edge genomic techniques, such as CRISPR, to better understand how the rich stripes and swirls of a butterfly's wing take their shape
Understanding what makes one volcano's magma so much more explosive than another may one day help us avoid volcanic disasters
Researchers at the University of Washington and the Allen Institute for Brain Science have developed a new method to classify and track the multitude of cells in a tissue sample
In this video, Rommie Amaro of the University of California, San Diego, describes her lab's research on the p53 protein, which mutates in a wide variety of cancers and is known as the "Guardian of the Genome"
New evidence reveals a previously unknown population of ancient Native Americans
What's the difference between thermoplastics and thermoset plastics? Philip Taynton, founder of Mallinda, answers your question in this edition of Ask a Scientist
Northeastern Professor Marilyn Minus wants to make the strongest fibers the world has ever known -- at low cost -- for light-weight bullet-proof armor, wide-body jets, sports gear and more.
Rice University scientist Laurence Yeung, along with scientists at University of California Los Angeles, Michigan State University and the University of New Mexico, counted rare molecules in the atmosphere that contain only heavy isotopes of nitrogen, and discovered a planetary-scale tug-of-war between life, the deep Earth and the upper atmosphere
Two independent teams of scientists, including one from the Joint Quantum Institute, have used more than 50 interacting atomic qubits to mimic magnetic quantum matter, surpassing the complexity of previous demonstrations
George Washington University evolutionary geneticist Arnaud Martin is using CRISPR Cas9, a gene editing technique, to determine how changes in the "painting gene" WntA result in different wing shapes and patterns in butterflies
Using complex fluid engineering techniques, professor Bob Tilton of Carnegie Mellon University is working on removing pollutants, such as trichloroethylene, from groundwater
Using an ultrafast, ultraprecise laser, a team of physicists and biologists at Vanderbilt University has taken an important step toward understanding how wound healing is triggered
A new reaction mechanism could be used to improve catalyst designs for pollution-control systems for diesel exhaust
Cutting-edge research at the Robert H. Lurie Comprehensive Cancer Center at Northwestern University aims to stop cancer's adaptive behavior to boost the effectiveness of current treatments | <urn:uuid:51480b96-5764-4613-9c61-a3ddd9ddf9d7> | 2.8125 | 807 | Content Listing | Science & Tech. | -34.153292 | 95,576,921 |
Exotic species, range expansion, state record, museum data
Examination of museum specimens, unpublished collection data, and field surveys conducted between 2010 and 2014 resulted in records for 22 species of sawflies new to Washington State, seven of which are likely to be pest problems in ornamental landscapes. These data highlight the continued range expansion of exotic species across North America. These new records also indicate that our collective knowledge of Pacific Northwest arthropod biodiversity and biogeography is underdeveloped, even for a relatively well known and species-poor group of insects. Notable gaps in the knowledge of Washington State’s Symphyta remain for the Olympic Peninsula, the Cascade Mountain Range, and the arid interior of the state. Washington’s shrub-steppe appears to be particularly poorly surveyed for sawflies.
Journal of Hymenoptera Research
Required Publisher's Statement
Published by The International Society of Hymenopterists
Looney, Chris; Smith, David R.; Collman, Sharon J.; Langor, David W.; and Peterson, Merrill A., "Sawflies (Hymenoptera, Symphyta) Newly Recorded from Washington State" (2016). Biology. 53.
Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 License. | <urn:uuid:65a7c45e-3be8-4b38-8452-7c64510a4d63> | 2.609375 | 265 | Academic Writing | Science & Tech. | 26.13413 | 95,576,929 |
Software Engineering and Environment examines the various aspects of software development, describing a number of software life cycle models. Twelve in-depth chapters discuss the different phases of a software life cycle, with an emphasis on the object-oriented paradigm. In addition to technical models, algorithms, and programming styles, the author also covers several managerial issues key to software project management. Featuring an abundance of helpful illustrations, this cogent work is an excellent resource for project managers, programmers, and other computer scientists involved in software production.
Software Engineering and Environment
Free shipping for private individuals! Ships within 5–7 business days
Zircon fission track dating
FT thermochronology is widely used for reconstruction of low-temperature thermal histories in upper crustal rocks.
The method has found particular application in estimating temperature history and long-term denudation rates in orogenic belts, rifted margins and more stable areas, providing a means of assessing the timing and volume of sediment being delivered to sedimentary basins, and as an estimator of hydrocarbon maturity potential.
Apatite and zircon separates from the Fish Canyon Tuff (K-Ar age, 27.9±0.7 Myr), San Juan Mtns., Colorado, have been given to over 50 laboratories for fission-track dating.

Accurate and precise age estimates can be obtained on glass by use of the isothermal plateau fission-track (ITPFT) dating method. Correction for partial track fading is achieved by heating the natural sample and its irradiated aliquot for 30 days at 150°C.

The other problem is that uranium is particularly susceptible to weathering.
Now since all rocks are somewhat porous, and since we are pretty much obliged to date rocks from near the surface, it's hard to find instances in which uranium has not been lost.
This grain-specific technique is particularly suited to the dating of fine-grained, distal tephra beds and will greatly facilitate development of detailed chronologies of tephra-bearing sedimentary sequences located far from volcanic centres. | <urn:uuid:72a5989c-73c1-47d1-b6ab-f3e79d1ad3ee> | 3.21875 | 308 | Spam / Ads | Science & Tech. | 28.841555 | 95,576,937 |
Protective kelp forests found near many early coastal archaeological sites
If humans migrated from Asia to the Americas along Pacific Rim coastlines near the end of the Pleistocene era, kelp forests may have aided their journey, according to research presented today at the American Association for the Advancement of Science (AAAS) annual meeting.
Until recently, the "coastal migration theory" was not accorded much importance by most scholars. However, new discoveries have moved it to the forefront of debate on the origins of the First Americans. It is now known that seafaring peoples living in the Ryuku Islands and Japan near the height of the last glacial period (about 35,000 to 15,000 years ago) adapted to cold waters comparable to those found today in the Gulf of Alaska. From Japan, they may have migrated northward through the Kurile Islands, to the southern coast of Beringia (ancient land bridge between what is now Siberia and Alaska), and into the Americas.
Mary Stanik | EurekAlert!
Innovative genetic tests for children with developmental disorders and epilepsy
11.07.2018 | Christian-Albrechts-Universität zu Kiel
Oxygen loss in the coastal Baltic Sea is “unprecedentedly severe”
05.07.2018 | European Geosciences Union
For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth.
To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength...
For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications.
Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar...
Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction.
A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in in the synthesis of new chemical...
Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy.
"Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy....
Ultra-short, high-intensity X-ray flashes open the door to the foundations of chemical reactions. Free-electron lasers generate these kinds of pulses, but there is a catch: the pulses vary in duration and energy. An international research team has now presented a solution: Using a ring of 16 detectors and a circularly polarized laser beam, they can determine both factors with attosecond accuracy.
Free-electron lasers (FELs) generate extremely short and intense X-ray flashes. Researchers can use these flashes to resolve structures with diameters on the...
13.07.2018 | Event News
12.07.2018 | Event News
03.07.2018 | Event News
17.07.2018 | Information Technology
17.07.2018 | Materials Sciences
17.07.2018 | Power and Electrical Engineering
Wild Arabidopsis thaliana flowers typically have four petals. Photo by Peggy Greb/Agricultural Research Service
Scientists from the University of Delaware have made a significant advance in the study of small ribonucleic acids (RNAs), discovering 10 times more small RNAs in the plant Arabidopsis (a weed of the mustard family) than previously had been identified. The advance is reported in the Sept. 2 issue of Science magazine.
The research was conducted over the course of the last year and a half by teams from the laboratories headed by Pamela J. Green, Crawford H. Greenewalt Endowed Chair in Plant Molecular Biology, a joint appointment in the Department of Plant and Soil Sciences and the College of Marine Studies, and Blake C. Meyers, assistant professor of plant and soil sciences in the College of Agriculture and Natural Resources.
To identify the small RNAs, the scientists used the transcriptional profiling technology called Massively Parallel Signature Sequencing (MPSS), which was developed by Solexa Inc. of Hayward, Calif.
Neil Thomas | EurekAlert!
World’s Largest Study on Allergic Rhinitis Reveals new Risk Genes
17.07.2018 | Helmholtz Zentrum München - Deutsches Forschungszentrum für Gesundheit und Umwelt
Plant mothers talk to their embryos via the hormone auxin
17.07.2018 | Institute of Science and Technology Austria
For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth.
To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength...
UCSB researcher shows that New Caledonian crows can perform as well as 7- to 10-year-olds on cause-and-effect water displacement tasks
In Aesop’s fable about the crow and the pitcher, a thirsty bird happens upon a vessel of water, but when he tries to drink from it, he finds the water level out of his reach. Not strong enough to knock over the pitcher, the bird drops pebbles into it — one at a time — until the water level rises enough for him to drink his fill.
Highlighting the value of ingenuity, the fable demonstrates that cognitive ability can often be more effective than brute force. It also characterizes crows as pretty resourceful problem solvers. New research conducted by UC Santa Barbara’s Corina Logan, with her collaborators at the University of Auckland in New Zealand, proves the birds’ intellectual prowess may be more fact than fiction. Her findings, supported by the National Geographic Society/Waitt Grants Program, appear today in the scientific journal PLOS ONE.
Logan is lead author of the paper, which examines causal cognition using a water displacement paradigm. “We showed that crows can discriminate between different volumes of water and that they can pass a modified test that so far only 7- to 10-year-old children have been able to complete successfully. We provide the strongest evidence so far that the birds attend to cause-and-effect relationships by choosing options that displace more water.”
Logan, a junior research fellow at UCSB’s SAGE Center for the Study of the Mind, worked with New Caledonian crows in a set of small aviaries in New Caledonia run by the University of Auckland. “We caught the crows in the wild and brought them into the aviaries, where they habituated in about five days,” she said. Keeping families together, they housed the birds in separate areas of the aviaries for three to five months before releasing them back to the wild.
Getting individual crows into the testing room proved to be an immediate challenge. “You open the testing room door and then open the aviary door, with the idea that the bird you want is going to fly through into the testing room,” she said. But with four birds in an aviary, directing a particular test subject is tricky at best.
“So I thought, let’s pretend the sky’s the limit and I can train them to do whatever I want,” Logan said. “I started by pointing at the one I wanted and continuing to point until he or she flew out. I got to the point where I could stand outside the aviary and point at the one I wanted and it would fly out while the other birds stayed put.”
Two birds in particular — 007 and Kitty — became so well trained that Logan had only to call them by name and they’d fly into the testing room.
The testing room contained an apparatus consisting of two beakers of water, the same height, but one wide and the other narrow. The diameters of the lids were adjusted to be the same on each beaker. “The question is, can they distinguish between water volumes?” Logan said. “Do they understand that dropping a stone into a narrow tube will raise the water level more?” In a previous experiment by Sarah Jelbert and colleagues at the University of Auckland, the birds had not preferred the narrow tube. However, in that study, the crows were given 12 stones to drop in one or the other of the beakers, giving them enough to be successful with either one.
“When we gave them only four objects, they could succeed only in one tube — the narrower one, because the water level would never get high enough in the wider tube; they were dropping all or most of the objects into the functional tube and getting the food reward,” Logan explained. “It wasn’t just that they preferred this tube, they appeared to know it was more functional.”
However, she noted, we still don’t know exactly how the crows think when solving this task. They may be imagining the effect of each stone drop before they do it, or they may be using some other cognitive mechanism. “More work is needed,” Logan said.
Logan also examined how the crows react to the U-tube task. Here, the crows had to choose between two sets of tubes. With one set, when subjects dropped a stone into a wide tube, the water level raised in an adjacent narrow tube that contained food. This was due to a hidden connection between the two tubes that allowed water to flow. The other set of tubes had no connection, so dropping a stone in the wide tube did not cause the water level to rise in its adjacent narrow tube.
Each set of tubes was marked with a distinct color cue, and test subjects had to notice that dropping a stone into a tube marked with one color resulted in the rise of the floating food in its adjacent small tube. “They have to put the stones into the blue tube or the red one, so all you have to do is learn a really simple rule that red equals food, even if that doesn’t make sense because the causal mechanism is hidden,” said Logan.
As it turns out, this is a very challenging task for both corvids (a family of birds that includes crows, ravens, jays and rooks) and children. Children ages 7 to 10 were able to learn the rules, as Lucy Cheke and colleagues at the University of Cambridge discovered in 2012. It may have taken a couple of tries to figure out how it worked, Logan noted, but the children consistently put the stones into the correct tube and got the reward (in this case, a token they exchanged for stickers). Children ages 4 to 6, however, were unable to work out the process. “They put the stones randomly into either tube and weren’t getting the token consistently,” she said.
Recently, Jelbert and colleagues from the University of Auckland put the New Caledonian crows to the test using the same apparatus the children did. The crows failed. So Logan and her team modified the apparatus, expanding the distance between the beakers. And Kitty, a six-month-old juvenile, figured it out. “We don’t know how she passed it or what she understands about the task,” Logan said, “so we don’t know if the same cognitive processes or decisions are happening as with the children, but we now have evidence that they can. It’s possible for the birds to pass it.
“What we do know is that one crow behaved like the older children, which allows us to explore how they solve this task in future experiments,” she continued. Research on causal cognition using the water displacement paradigm is only beginning to get at what these crows know about solving problems. This series of experiments shows that modifying previous experiments is useful for gaining a deeper understanding.
The research on the crows is part of a larger project Logan is working on to compare the cognitive powers of crows with those of grackles. “So far, no smaller-brained species have been tested with the tests we use on the crows, and grackles are smaller-brained,” she said. “But they’re really innovative. So they may have a reason to pay attention to causal information like this.”
The next research phase will begin next month, after the grackles’ breeding season ends and they are ready to participate.
Andrea Estrada | EurekAlert!
Why build networking functionality into your Perl scripts? You might want to access your email remotely, or write a simple script that updates files on a FTP site. You might want to check up on your employees with a program that searches for Usenet postings that came from your site. You might want to check a web site for any recent changes, or even write your own home-grown web server. The network is the computer these days, and Perl makes network applications easy.
Perl programmers have their choice of modules for doing common tasks with network protocols; Chapter 14, Email Connectivity, through Chapter 17, The LWP Library, cover the modules for writing email, news, FTP, and web applications in Perl. If you can do what you want with the available modules, you're encouraged to jump to those chapters and skip this one. However, there will be times that you'll have to wrestle with sockets directly, and that's where this chapter comes in.
Sockets are the underlying mechanism for networking on the Internet. With sockets, one application (a server) sits on a port waiting for connections. Another application (the client) connects to that port and says hello; then the client and server have a chat. Their actual conversation is done with whatever protocol they choose - for example, a web client and server would use HTTP, an email server would use POP3 and SMTP, etc. But at the most basic level, you might say that all network programming comes down to opening a socket, reading and writing data, and closing the socket again.
You can work with sockets in Perl at various levels. At the lowest level, Perl's built-in functions include socket routines similar to the system calls in C of the same name. To make these routines easier to use, the Socket module in the standard library imports common definitions and constants specific to your system's networking capabilities. Finally, the IO::Socket module provides an object interface to the socket functions through a standard set of methods and options for constructing both client and server communications programs.
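As a taste of that object interface (described in detail at the end of this chapter), here is a minimal sketch of an IO::Socket::INET client/server pair. It forks so one process can serve the other over the loopback interface, and lets the operating system assign the port; the details are illustrative, not the only way to structure such a program:

```perl
use strict;
use warnings;
use IO::Socket::INET;

# Listening socket on the loopback interface; omitting LocalPort
# lets the OS assign an ephemeral port.
my $server = IO::Socket::INET->new(
    LocalAddr => '127.0.0.1',
    Listen    => 5,
    Proto     => 'tcp',
) or die "server: $!";
my $port = $server->sockport;

my $pid = fork;
die "fork: $!" unless defined $pid;
if ($pid == 0) {
    # Child acts as the server: accept one client, echo one line back.
    my $conn = $server->accept or die "accept: $!";
    my $line = <$conn>;
    print $conn "echo: $line";    # IO::Socket handles autoflush
    close $conn;
    exit 0;
}

# Parent acts as the client.
close $server;    # the child holds the listening socket
my $client = IO::Socket::INET->new(
    PeerAddr => '127.0.0.1',
    PeerPort => $port,
    Proto    => 'tcp',
) or die "client: $!";
print $client "hello\n";
my $reply = <$client>;
close $client;
waitpid($pid, 0);
print $reply;    # prints "echo: hello"
```

Compare this with the page or so of built-in-function calls developed below; the object constructor folds socket creation, address packing, and connect/bind/listen into one call.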
Sockets provide a connection between systems or applications. They can be set up to handle streaming data or discrete data packets. Streaming data continually comes and goes over a connection. A transport protocol like TCP (Transmission Control Protocol) is used to process streaming data so that all of the data is properly received and ordered. Packet-oriented communication sends data across the network in discrete chunks. The message-oriented protocol UDP (User Datagram Protocol) works on this type of connection. Although streaming sockets using TCP are widely used for applications, UDP sockets also have their uses.
Sockets exist in one of two address domains: the Internet domain and the Unix domain. Sockets that are used for Internet connections require the careful binding and assignment of the proper type of address dictated by the Internet Protocol (IP). These sockets are referred to as Internet-domain sockets.
Sockets in the Unix domain create connections between applications either on the same machine or within a LAN. The addressing scheme is less complicated, often just providing the name of the target process.
In Perl, sockets are attached to a filehandle after they have been created. Communication over the connection is then handled by standard Perl I/O functions.
Perl provides built-in support for sockets. The following functions are defined specifically for socket programming: socket, bind, listen, accept, connect, send, recv, shutdown, getsockname, and getpeername. For full descriptions and syntax, see Chapter 5, Function Reference. Regular functions that read and write filehandles can also be used for sockets, such as print, printf, and the diamond input operator, <>.

The socket functions tend to use hard-coded values for some parameters, which severely hurt portability. Perl solves this problem with a module called Socket, included in the standard library. Use this module for any socket applications that you build with the built-in functions. The module loads the socket.h header file, which enables the built-in functions to use the constants and names specific to your system's network programming, as well as additional functions for dealing with address and protocol names.
The next few sections describe Perl socket programming using a combination of the built-in functions together with the Socket module. After that, we describe the use of the IO::Socket module.
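Before any connection is made, the Socket module's helper routines can be exercised on their own. The following short sketch (it opens no network connection) shows the conversions the rest of this chapter relies on:

```perl
use strict;
use warnings;
use Socket;

# Protocol names resolve to the system's protocol numbers.
my $tcp = getprotobyname('tcp');    # 6 on virtually every system

# inet_aton packs a hostname or dotted-decimal string into a packed
# 32-bit address; inet_ntoa reverses the packing.
my $packed = inet_aton('127.0.0.1');
my $dotted = inet_ntoa($packed);

# sockaddr_in bundles a port and a packed address into the structure
# that connect() and bind() expect; unpack_sockaddr_in splits it apart.
my $sin = sockaddr_in(80, $packed);
my ($port, $addr) = unpack_sockaddr_in($sin);

print "proto=$tcp addr=$dotted port=$port\n";
```

Round-tripping an address through sockaddr_in and unpack_sockaddr_in like this is a handy sanity check when a connect or bind call fails mysteriously.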
Both client and server use the socket call to create a socket and associate it with a filehandle. The socket function takes several arguments: the name of the filehandle, the network domain, an indication of whether the socket is stream-oriented or packet-oriented, and the network protocol to be used. For example, HTTP (web) transactions require stream-oriented connections running TCP. The following lines create a socket for this case and associate it with the filehandle SH:

    use Socket;
    socket(SH, PF_INET, SOCK_STREAM, getprotobyname('tcp')) || die $!;

The PF_INET argument indicates that the socket will connect to addresses in the Internet domain (i.e., IP addresses). Sockets with a Unix-domain address use PF_UNIX. Because this is a streaming connection using TCP, we specify SOCK_STREAM for the second argument. The alternative would be to use SOCK_DGRAM for a packet-based UDP connection.
The third argument indicates the protocol used for the connection. Each protocol has a number assigned to it by the system; that number is passed to socket as the third argument. In scalar context, getprotobyname returns the protocol number. Finally, if the socket call fails, the program will die with the error message found in $!.
On the client side, the next step is to make a connection with a server at a particular port and host. To do this, the client uses the connect function. connect requires the socket filehandle as its first argument. The second argument is a data structure containing the port and hostname that together specify the address. The Socket package provides the sockaddr_in function to create this structure for Internet addresses and the sockaddr_un function for Unix-domain addresses.

The sockaddr_in function takes a port number for its first argument and a 32-bit IP address for the second argument. The 32-bit address is formed by the inet_aton function found in the Socket package. This function takes either a hostname (e.g., www.oreilly.com) or a dotted-decimal string (e.g., 220.127.116.11), and it returns the corresponding 32-bit structure. Continuing the previous example, a call to connect could look like this:

    my $dest = sockaddr_in(80, inet_aton('www.oreilly.com'));
    connect(SH, $dest) || die $!;

This call attempts to establish a network connection to the specified server and port. If successful, it returns true. Otherwise, it returns false and the program dies with the error in $!.
Assuming that the connect call has completed successfully and a connection has been established, there are a number of functions we can use to write to and read from the filehandle. For example, the send function sends data to a socket (the third argument gives optional flags, usually 0):

    $data = "Hello";
    send(SH, $data, 0);

You can also write to the socket with ordinary output functions such as print, after using select to make the socket the default output filehandle:

    select(SH);
    print "$data";

To read incoming data from a socket, use either the recv function or the "diamond" input operator regularly used on filehandles. For example:

    recv(SH, $buffer, $length, 0);
    $input = <SH>;

After the conversation with the server is finished, use shutdown to close the connection and destroy the socket.
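The send/recv pair is most natural on datagram (UDP) sockets, where each call moves one discrete packet. This self-contained loopback sketch shows their full argument lists; binding to port 0 lets the operating system assign an ephemeral port, so no fixed port number is assumed:

```perl
use strict;
use warnings;
use Socket;

my $udp = getprotobyname('udp');

# Two datagram sockets; the receiver is bound on the loopback
# interface to an OS-assigned ephemeral port.
socket(my $sender,   PF_INET, SOCK_DGRAM, $udp) || die "socket: $!";
socket(my $receiver, PF_INET, SOCK_DGRAM, $udp) || die "socket: $!";
bind($receiver, sockaddr_in(0, inet_aton('127.0.0.1'))) || die "bind: $!";
my ($port) = sockaddr_in(getsockname($receiver));

# send() takes FLAGS, plus a destination for an unconnected socket;
# recv() takes a buffer, a maximum length, and FLAGS.
my $dest = sockaddr_in($port, inet_aton('127.0.0.1'));
send($sender, "ping", 0, $dest) || die "send: $!";
defined(recv($receiver, my $buffer, 1024, 0)) or die "recv: $!";
print "$buffer\n";    # prints "ping"
```

Because UDP is unreliable in general, real code would add a timeout around the recv; on the loopback interface the datagram arrives immediately in practice.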
On the server side, the sequence is longer. A server must:

- Create a socket.
- Bind the socket to a port on the local machine.
- Listen for incoming connections from clients on the port.
- Accept a client request and assign the connection to a specific filehandle.

We start out by creating a socket for the server:

    my $proto = getprotobyname('tcp');
    socket(FH, PF_INET, SOCK_STREAM, $proto) || die $!;

The filehandle FH is the generic filehandle for the socket. This filehandle only receives requests from clients; each specific connection is passed to a different filehandle by accept, where the rest of the communication occurs.

A server-side socket must be bound to a port on the local machine by passing a port and an address data structure, created with sockaddr_in, to the bind function. The Socket module provides identifiers for common local addresses, such as localhost and the broadcast address. Here we use INADDR_ANY, which allows the system to pick the appropriate address for the machine:

    my $sin = sockaddr_in(80, INADDR_ANY);
    bind(FH, $sin) || die $!;

The listen function tells the operating system that the server is ready to accept incoming network connections on the port. The first argument is the socket filehandle. The second argument gives a queue length, in case multiple clients are connecting to the port at the same time. This number indicates how many clients can wait for an accept at one time:

    listen(FH, $length);

The accept function completes a connection after a client request and assigns a new filehandle specific to that connection. The new filehandle is given as the first argument to accept, and the generic socket filehandle is given as the second:

    accept(NEW, FH) || die $!;

Now the server can read and write to the filehandle NEW for its communication with the client.
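Putting the server steps together with the client calls from earlier in the chapter gives the following self-contained sketch. It forks so that one process can serve the other, and it uses an OS-assigned ephemeral loopback port rather than the privileged port 80 of the snippets above; the one-line echo protocol is purely illustrative:

```perl
use strict;
use warnings;
use Socket;

my $proto = getprotobyname('tcp');

# Server socket: create, bind to an ephemeral loopback port, listen.
socket(my $server, PF_INET, SOCK_STREAM, $proto) || die "socket: $!";
bind($server, sockaddr_in(0, inet_aton('127.0.0.1'))) || die "bind: $!";
listen($server, 5) || die "listen: $!";
my ($port) = sockaddr_in(getsockname($server));   # which port did we get?

my $pid = fork;
die "fork: $!" unless defined $pid;
if ($pid == 0) {
    # Child acts as the server: accept one connection, echo one line.
    accept(my $conn, $server) || die "accept: $!";
    my $line = <$conn>;
    print {$conn} "echo: $line";
    close $conn;          # close flushes the buffered reply
    exit 0;
}

# Parent acts as the client.
close $server;            # the child holds the listening socket
socket(my $client, PF_INET, SOCK_STREAM, $proto) || die "socket: $!";
connect($client, sockaddr_in($port, inet_aton('127.0.0.1')))
    || die "connect: $!";
my $old = select($client); $| = 1; select($old);   # autoflush the socket
print {$client} "hello\n";
my $reply = <$client>;
close $client;
waitpid($pid, 0);
print $reply;             # prints "echo: hello"
```

Note the autoflush step on the client side: without it, the request would sit in Perl's output buffer while the parent blocked reading the reply, deadlocking both processes.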
Status: Not Listed
Also known as the Chesapeake blue crab or the Atlantic blue crab, these crabs are strong swimmers—largely due to their fifth pair of legs, which are shaped like paddles. They are striking to spot with their often bright-blue claws and olive-colored carapace (shell). The claws on the adult female blue crab are tipped with red. Males can be seven to eight inches (18 to 20 centimeters) across, while females are a bit smaller in size.
The blue crab is widely distributed along the Atlantic and Gulf coasts, from Nova Scotia through the Gulf of Mexico and as far south as Uruguay. It also has been introduced in other parts of the world. This crab inhabits estuaries and brackish coastal lagoons. Predators include the Kemp's ridley sea turtle and the whooping crane.
These crabs are predacious and scavenge for food. They have been known to eat other crustaceans (including their own species), recently dead fish, plant materials, clams, oysters, worms, insects, and mussels.
The blue crab's mating season occurs between May and October. A male will mate with a female after she has completed her final molt, and she has a soft shell. The female will lay up to two million eggs in a spongy mass that starts off an orange color, but gets closer to black as it comes time for the crabs to hatch. Blue crabs undergo several different developmental stages to reach adulthood. A blue crab's typical lifespan is between three and four years.
Blue crabs are not threatened or endangered. However, habitat loss and nutrient loading are some of the larger issues faced by this species. In addition, recent reports have shown that blue crabs are projected to be detrimentally impacted by climate change in a way that can also wreak havoc on sensitive ecosystems that the blue crab calls home.
Carbon pollution from burning coal, oil, and gas is causing climate change that is threatening fish and wildlife across the globe. If we don’t make changes soon, the earth will continue to have warmer temperatures in all seasons; an increase in the frequency, duration, and intensity of hurricanes and other severe weather events; as well as an increase in the sea level of up to two feet or more.
A Threatened Bay
The Chesapeake Bay is our nation's largest estuary and sustains more than 3,600 species of plants and animals. However, if global climate change continues unabated, projected rising sea levels and water and air temperatures will significantly reshape the region's coastal landscape, threatening recreational and commercial fishing including crabbing in the region. According to the National Oceanic and Atmospheric Administration’s Chesapeake Bay Office, the bay shoreline is being affected at a faster rate than the global average because land in the region is already naturally subsiding.
The temperatures in the bay have already increased by almost 2 degrees Fahrenheit since 1960 and are projected to continue to increase by an additional 3 to 10 degrees by 2100—an immense change that will have a dramatic effect on the estuary and the species it supports. Warming temperatures in the Chesapeake Bay are predicted to also greatly impact eelgrass, a seagrass that provides essential habitat for juvenile blue crabs. This impact was seen in 2005, as high Chesapeake temperatures caused a massive die-off of the seagrass.
Blue Crabs in the Gulf
Aside from their ecological importance, blue crabs are one of the most economically important fisheries of the Gulf. Louisiana alone lands approximately 26 percent of the total blue crabs for the nation, a value of more than $135 million at today’s market prices. A decline in blue crabs could have larger economic implications for recreational fishing and tourism on the Gulf Coast.
Using the money from BP’s oil spill fines to stop coastal wetlands loss and protect habitats for blue crabs will have a positive impact on the entire food web of the Gulf of Mexico—and the Gulf Coast economy as well.
Predator Meets Prey
Increased carbon pollution is expected to cause blue crabs to grow abnormally large shells, turning this species into larger, more aggressive predators that could significantly alter the fragile Chesapeake ecosystem. Their main prey, like oysters, are expected to suffer from weaker, slower-growing shells due to acidic water conditions caused by the ocean absorbing more carbon dioxide. The larger, hungrier blue crabs will have the ability to eat many more oysters, potentially throwing the whole food chain out of whack. This shift in the predator-prey balance would harm efforts to rebuild the stocks of both species.
Impacts to Economy
Although climate change is expected to lead to abnormally large blue crab shells, this does not mean the crab harvest will do well or that crab lovers will benefit. This is because studies have shown that the same conditions that lead to increased growth in crab shells also resulted in the production of less meat under those shells. As carbon-absorbing crabs put more energy into building larger shells, less energy goes into other critical life processes like tissue growth and reproduction.
The blue crab's scientific name, Callinectes sapidus, means "beautiful savory swimmer."
Control Paradigms and Self-Organization in Living Systems
This chapter describes control of systems characterized by numerically valued variables. It opens with a qualitative description of the fundamental concepts behind open-loop and feedback control methods. The chief aim of feedback control is to cause a system’s output to maintain some desired relation to an input, in spite of disturbances or deviations in plant dynamics.
Classical linear control theory is quantitatively described, using the standard, frequency-domain (jω) notation and Fourier transform method. The open-loop and closed-loop (feedback) equations are each shown for a continuous, linear system with time-invariant parameters and structure and single input with single output. In some cases it is necessary to achieve independent control of each of multiple disturbances or to control more than a single plant output. Or there may be more than a single control input available with which to control the plant’s outputs. One may then employ multiloop control or more complex control arrangements for multivariable control.
Classical linear control concepts have been extended in modern optimal control theory. The new approach is far more general, and it emphasizes the state-space description of dynamic systems. It is general enough to encompass time-varying and nonlinear conditions. Using the general formulation of the deterministic, optimal control problem, open-loop control is reexamined from the point of view of the Pontryagin Maximum Principle. Some similarities between the optimal control problem and Hamiltonian mechanics are pointed out. It is concluded that optimal control theory in this form represents a significant conceptual generalization of the variational theory of classical mechanics to a much wider class of problems. In this sense, optimal control theory overlaps and transcends ordinary physical theory and cannot properly be considered to be a simple consequence of the known physical laws of mechanics.
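To make the parallel concrete, the standard textbook formulation (a sketch, not a derivation specific to this chapter) of the deterministic optimal control problem and the resulting Pontryagin conditions can be written as:

```latex
% Deterministic optimal control problem in standard (Bolza) form:
\min_{u(\cdot)} \; J = \Phi\bigl(x(T)\bigr)
    + \int_0^T L\bigl(x(t),u(t),t\bigr)\,dt,
\qquad \dot{x} = f(x,u,t), \quad x(0) = x_0 .

% Control Hamiltonian, with costate (adjoint) vector \lambda(t):
H(x,u,\lambda,t) = L(x,u,t) + \lambda^{\mathsf T} f(x,u,t).

% Necessary conditions: canonical state/costate equations, pointwise
% minimization of H over the admissible control set U, and the
% terminal (transversality) condition on the costate:
\dot{x} = \frac{\partial H}{\partial \lambda}, \qquad
\dot{\lambda} = -\frac{\partial H}{\partial x}, \qquad
u^{*}(t) = \arg\min_{u \in U} H\bigl(x^{*}(t),u,\lambda(t),t\bigr),
\qquad \lambda(T) = \frac{\partial \Phi}{\partial x}\Big|_{x(T)} .
```

The formal kinship with Hamiltonian mechanics is visible in the state-costate pair (x, λ), which plays the role of the coordinates and conjugate momenta (q, p); the ingredient with no counterpart in classical mechanics is the minimization of H over the admissible control set U.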
The optimal control approach is next generalized to include stochastic processes appearing as unpredictable perturbations. The mathematical difficulty increases, but such difficulties do not detract from the underlying rich structure of optimal control theory and its substantial connections with and differences from the variational theory of mechanics.
The equations of stochastic optimal control do not arise in statistical mechanics because standard statistical mechanics does not deal with concepts that involve estimating the state of dynamic systems for purposes of control. Neither does physical theory appear to deal with any problems analogous to or embedded in the stochastic optimal control problem. Again, one is forced to the conclusion that control theory cannot, in general, be considered to be a part of, or a simple extension of, known physical theory.
Finally, this chapter addresses control in living systems, and asks whether or not the principles of technological control, as outlined in the first part, apply. Living systems are dominated by regulatory processes. The history of our discovery of some of these processes is summarized. Enzyme-catalyzed reactions and feedback control of enzyme levels by genetic repression-derepression constitute interesting sample cases for control theory analysis. The analysis shows that control theory provides a basis for showing what experiments and data are necessary to validate the claims of the biochemists that the closed, inhibitory pathways they discover (usually in vitro) actually have physiological relevance. This validation has yet to be made. It is concluded that some of the concepts of classical control theory do illuminate some biochemical control processes, but that the richness of modern, optimal control theory cannot be brought to bear on the richness of modern genetic control of protein biosynthesis. —The Editor
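As a toy counterpart to the enzyme-repression discussion (a generic Hill-repression model of my own, not one the chapter analyzes; all parameter values invented), end-product feedback on synthesis buffers the product level against changes in demand:

```python
# Toy end-product repression: the product P inhibits its own synthesis.
# dP/dt = vmax / (1 + (P/K)**n) - delta*P   (all parameter values invented)

def steady_state(delta, vmax=1.0, K=1.0, n=4, dt=0.01, t_end=60.0):
    p = 0.0
    for _ in range(int(t_end / dt)):
        p += dt * (vmax / (1.0 + (p / K) ** n) - delta * p)
    return p

p_base = steady_state(delta=0.5)   # baseline removal (demand) rate
p_fast = steady_state(delta=1.0)   # demand doubled
print(p_base, p_fast)
```

Doubling the removal rate drops the steady state by only about 25% here, instead of the 50% that a feedback-free constant synthesis rate would give; quantifying that kind of buffering in vivo is exactly the sort of claim a control-theoretic analysis of these pathways would need to validate.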
Keywords: Feedback Control, Control Theory, Optimal Control Problem, Optimal Control Theory, Sensor Noise
What are those files in repositories on GitHub?
This specific name is detected by GitHub to display information about the project it is a part of.
The contents of this file specify the editing preferences used to create files in the project, such as the number of spaces for each indent. For example:
    ; This file is for unifying the coding style for different editors and IDEs.
    ; More information at http://EditorConfig.org
    root = true

    ; Use 2 spaces for indentation in all Ruby files
    [*.rb]
    indent_style = space
    indent_size = 2

    [Rakefile]
    indent_style = space
    indent_size = 2

    [Gemfile*]
    indent_style = space
    indent_size = 2

    [config.ru]
    indent_style = space
    indent_size = 2
*.iml files are not really needed and can be gitignore’d.
*.iml files are created by IntelliJ IDEA based on the pom.xml file read by Maven to resolve dependencies of the project.
So .iml and pom.xml files contain duplicate information. When IntelliJ opens, it asks permission to auto import the pom.xml. IntelliJ doesn’t overwrite pom.xml with what is in .iml, so your pom.xml is the primary authority on settings.
The .iml file is needed by IntelliJ to build/run/test/deploy/debug Maven projects in IDEA without using Maven. This enables experimentation with dependencies without changing the pom.xml. Note that all the modifications you make will be reverted on the next Maven import.
In other words, IDEA doesn’t understand Maven model directly, it converts it to its own project model used by all the subsystems, and the internal project information needs to be stored somewhere, hence the .iml files and .idea project directory. This way IDEA doesn’t need to analyze the pom file every time you open the project and resolve all the dependencies again, it’s done only when the pom.xml changes.
This file specifies how the Travis CI web service builds the project.
    language: ruby
    rvm:
      - 2.0.0
      - 1.9.3
    script: bundle exec rake install; bundle exec rake generate
An SVG badge image is added within the README.md text to flag whether the Travis build succeeded.
This contains instructions for how others can contribute to the project.
This file is for Ruby-language projects to specify their dependencies, similar to what Maven's pom.xml does.
This is one of a series on Git and GitHub:
- Why Git? (file-based backups vs Git clone)
- Git Markdown text
- Git-client based workflows
- Git whoops (correct mistakes)
- Git rebase
- Git interactive merge (imerge)
- Git HEAD (Commitish references)
- Git custom commands
- Git utilities
- TFS vs GitHub
- GitHub REST API
- GitHub GraphQL API
- GitHub PowerShell API Programming
- GitHub GraphQL PowerShell Module | <urn:uuid:6efe025c-05b4-4393-b49d-aea11a5f0806> | 2.546875 | 653 | Documentation | Software Dev. | 53.878196 | 95,577,033 |
When optical components are reduced to the nanoscale, they exhibit interesting properties that can be harnessed to create new devices. For example, imagine a block of material with thin layers of alternating materials. This creates a periodic arrangement of alternating dielectric constants, forming a "photonic crystal" that is analogous to the electronic crystals used in semiconductor devices. Photonic crystals, along with quantum dots and other devices patterned at the nanoscale, may form the basis for sensors and switches used in computers and telecommunications. More information on Nanophotonics can be found here.
Two Photon Lithography
06 Jul 2018 | Contributor(s): Mohammad Mahfuzul Kabir, Varun Ajit Kelkar, Darren K Adams
Calculate voxel dimensions for a two-photon lithography process
NCN at Northwestern Tools
NCN@Northwestern Tool Support
We have identified a list of tools for which we commit the following level of service:
monitor support tickets, questions, and wishlists and provide a...
Radiative Cooling Experiment
19 May 2017 | Contributor(s): Yu-wen Lin, Evan L Schlenker, Zhou Zhiguang, Peter Bermel
Simulate a passive radiative cooling solution implementation in an experimental setup.
Quantum Coherent Transport in Atoms & Electrons
25 May 2017 | Contributor(s): Yong P. Chen
I will discuss some recent experimental examples from my lab studying quantum coherent transport and interferometry in electrons as well as cold atoms. For example, phase coherent electron transport and interference around a cylinder realized in a nanowire of topological insulator...
Novel Plasmonic Materials and Nanodevices for Integrated Quantum Photonics
07 Jun 2017 | Contributor(s): Mikhail Shalaginov
This research focuses on color centers in diamond that share quantum properties with single atoms. These systems promise a path for the realization of practical quantum devices such as nanoscale sensors, single-photon sources, and quantum memories. In particular, we explored an intriguing...
Coherent Nonlinear Optical Propagation Processes in Hyperbolic Metamaterials
07 Jun 2017 | Contributor(s): Alexander K. Popov
Coherence and interference play an important role in classic and quantum physics. Processes to be employed can be significantly enhanced and the unwanted ones suppressed through the deliberately tailored constructive and destructed interference at quantum transitions and at nonlinear optical...
07 Jun 2017 | Contributor(s): Vladimir M. Shalaev
Opening remarks for the 2017 Purdue Quantum Center workshop.
Soft, Biocompatible Optoelectronic Interfaces to the Brain
07 Jun 2017 | Contributor(s): John A. Rogers
In this talk, we will describe foundational concepts in physics and materials science for these types of technologies, in 1D, 2D and 3D architectures. Examples in system level demonstrations include experiments on freely moving animals with ‘cellular-scale’, injectable optofluidic... | <urn:uuid:a7815458-3024-4014-9194-c0ecc779ae48> | 2.65625 | 713 | Content Listing | Science & Tech. | 26.13413 | 95,577,056 |
Researchers from the UFZ warn that ecosystems will change dramatically
In their joint publication in the journal „Ecology Letters" German and American biologists have reported an increase in biomass production in ecosystems colonised by non-native plant species. In the face of climate change, these and other changes to ecosystems are predicted to become more frequent, according to the researchers.
Invasive exotic plant species, such as the Turkish Rocket (Bunias orientalis), are often fast-growing and competitive. They may alter ecosystems by gaining dominance, increasing productivity and replacing native plant species. The current study shows that in grassland ecosystems, native generalist herbivores such as voles – which are usually considered as a pest – may provide substantial resistance to plant invasions.
Photo: Harald Auge/UFZ
All over the world, plant and animal species are increasingly encroaching upon ecosystems where they don't belong as a result of human influence. This phenomenon is known as a biological invasion. Observational studies on biological invasions show that the invasion of non-native plant species can alter ecosystems. One important aspect of this is biomass production: compared to intact ecosystems, the productivity of ecosystems with non-native species is considerably higher.
„In such purely observational studies however, it is not possible to differentiate between cause and effect", says Dr. Harald Auge from the Helmholtz Center for Environmental Research (UFZ). „The question is whether exotic plant species prefer to colonise more productive ecosystems, or whether increased productivity is a result of the invasion."
To get to the bottom of this question, UFZ researchers joined forces with colleagues from the Martin-Luther University Halle-Wittenberg, the University of Montana, the University of California and the US Forest Service and staged invasions by setting up experimental sites in three disparate grassland regions (in Central Germany, Montana and California), on which 20 native plant species (from the respective region) and 20 exotic plant species were sown.
The researchers investigated whether, and to what extent, herbivorous small mammals such as mice, voles or ground squirrels, as well as mechanical disturbance of the soil, would influence the colonizing ability of exotic plant species.
„The experimental design was exactly the same for all three regions to ensure comparability. We wanted to find out whether superordinate relationships were playing a role, irrespective of land use, species compositions and climate differences", explains Dr Auge. When the experimental sites were not subject to any mechanical disturbance and when herbivorous small mammals had open access to the sites, then no differences could be found between the three regions in their reaction to the sowing of exotic species: biomass production was found to be only slightly higher than for ecosystems with exclusively native plant species, and susceptibility to invasions was low.
„The herbivorous small mammals really surprised us", says Dr Auge. „Their presence and appetite is largely responsible for the resistance of grasslands to exotic plant species invasions". If the herbivorous small mammals were excluded using fences or the soil disturbed mechanically or both, then the results were considerably different: ecosystems proved to be less resistant to invasions and biomass production turned out to be considerably higher.
„It was perplexing that an increase in productivity applied to all three (from a climate perspective) completely disparate regions. Hence, there seems to be a universal phenomenon going on: exotic plant species do not necessarily prefer more productive ecosystems; their exotic provenance as such leads to an increased production of biomass, which is thus an effect and not the cause of the invasion", Dr Auge concludes.
So far there has been no explanation as to why exotic plant species increase biomass production so dramatically. It is possible that only those non-native species that are particularly productive and competitive are able to establish successfully in a new area. Another cause could be the lack of parasites and pathogens specialised on these species.
To investigate the long-term reactions of grassland ecosystems to the establishment of non-native plant species, the researchers plan future investigations on the further development of the species on the experimental sites. Dr Auge: „We assume that the non-natives will increasingly crowd out the natives from the ecosystem; a reduction in species richness would imply another dramatic change to native ecosystems." Nicole Silbermann
Photos & links: http://www.ufz.de/index.php?en=32498
Publication: Maron, J.L.; Auge, H.; Pearson, D. E.; Korell, L.; Hensen, I.; Suding, K. N.; Stein, C. (2014) Staged invasions across disparate grasslands: effects of consumers, disturbance and seed provenance on productivity and species richness. Ecology Letters 17: 499-507. http://dx.doi.org/10.1111/ele.12250
Helmholtz Center for Environmental Research - UFZ
Dr. Harald Auge
or Tilo Arnhold, Susanne Hufe (UFZ Press)
Tel.: +49-(0)341-235-1635, -1630
In the Helmholtz Centre for Environmental Research (UFZ), scientists conduct research into the causes and consequences of far-reaching environmental changes. Their areas of study cover water resources, biodiversity, the consequences of climate change and possible adaptation strategies, environmental technologies and biotechnologies, bio-energy, the effects of chemicals in the environment and the way they influence health, modelling and social-scientific issues. Its guiding principle: Our research contributes to the sustainable use of natural resources and helps to provide long-term protection for these vital assets in the face of global change. The UFZ employs more than 1,100 staff at its sites in Leipzig, Halle and Magdeburg. It is funded by the federal government, Saxony and Saxony-Anhalt. http://www.ufz.de/
The Helmholtz Association contributes to solving major and urgent issues in society, science and industry through scientific excellence in six research areas: Energy, earth and environment, health, key technologies, structure of matter as well as aviation, aerospace and transportation. The Helmholtz Association is the largest scientific organisation in Germany, with 35,000 employees in 18 research centres and an annual budget of around €3.8 billion. Its work is carried out in the tradition of the great natural scientist Hermann von Helmholtz (1821-1894). http://www.helmholtz.de/
Nicole Silbermann/Tilo Arnhold | UFZ News
That's due to atmospheric refraction, disturbances in their path
Nope. Seen from orbit the sun still has an angular size of ~0.5°.
If that were true then we would see a fuzzy ball and not an object with sharp, well defined edges. Also, all stars would look the same as a fuzzy Sun but fainter.
But why are we even bothering to discuss the book in the OP's OP? That diagram is rubbish and doesn't show what it means to say (at least I hope it means something different from the diagram). It wouldn't be the only book with Bad Science in it. I had a Children's Encyclopedia in the 1950s with pictures of the surfaces of Jupiter and Saturn that had mountains.
This was a question that was asked at such an elementary level, but it has taken off in a way too-complicated direction.
To the OP: It is considered to be parallel in the same sense that we consider earth to be "flat" and "g" to be a constant for most things done on the surface of the earth. It means that if you use the light from the sun in the optics experiments at your level, the result will be practically identical to those that I solve mathematically for "parallel" light. The "wave fronts" of light coming from the sun are considered parallel by the time they get to earth.
Think of it this way. If you drop a pebble into water and look at the circular wave fronts, how would they look as they move further and further away from where they are created? The farther away they go, the "flatter" they will tend to appear, until at some point their curvature will no longer be significant. The wave fronts will now appear as if they are parallel and moving in a straight line at all points.
It isn't just light from the sun that behaves this way. In my intro physics labs, it is enough that the light source is at one end of the room. Our basic optics experiments give accurate-enough results if we assume that the light source is "infinitely" far away so that the wave fronts are parallel.
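To put a number on that flatness (my own back-of-the-envelope figures, not from the thread): a spherical wave front of radius R deviates from a perfect plane across an aperture of half-width a by the sagitta a²/(2R), which for light from a single point on the Sun is utterly negligible:

```python
# Deviation from flatness (sagitta) of a wave front from one point on the Sun,
# measured across a typical lens aperture at Earth.
R = 1.496e11    # Earth-Sun distance, metres (mean value)
a = 0.05        # aperture half-width, metres (a 10 cm lens)

sagitta = a**2 / (2 * R)    # metres of bulge relative to a flat plane
wavelength = 550e-9         # green light, metres

print(sagitta)                 # ~8e-15 m
print(sagitta / wavelength)    # ~1.5e-8 wavelengths
```

Fractions of a wavelength are what matter for interference and focusing, so from any single source point the wave front really is flat for all practical purposes; the half-degree spread between different source points is a separate effect.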
If you look at the sun (through a filter), rays from the left side enter your eye, and rays from the right side enter your eye. Those rays are not parallel. Practically, they radiate in all directions from the sun. Only from our perspective are they almost parallel.
All rays from one location on the surface can be treated as parallel with no error. But rays from different locations are definitely not parallel. A half degree of difference in arrival angle is very significant. I reckon a high performance racing engine with bores with half a degree of taper would not last long. The poor rings would be knackered pretty soon. Half a degree is like a Barn Door!
Agreed! I was shocked this thread made it to page 2!
This is not a situation where we can say that 99% of the time the rays can be assumed to be parallel, because there are a lot of situations where the fact that they are not parallel is important. But it isn't that difficult to differentiate the cases.
OOOOOOOKKKKKKKKKKKK - another bad science textbook.
In order for the rays to be "effectively" parallel you need a point source that is a long ways away or a laser.
The fact that the sun is a source that is about 0.5 degree across means that light from the sun will diverge at about that same angle. If sunlight were parallel then the path of a solar eclipse would be about 2,100 miles wide, not 100-200 miles wide.
The surface of the sun is a Lambertian emitter. Each part emits light in all directions with an intensity proportional to the cosine of the angle from the surface normal. I think the problem with the textbook is that it assumed light from the sun is emitted perpendicularly to the surface.
Lasers are nearly parallel beams, but have a small divergence. For practical purposes the light from a laser is parallel.
But that's the KEY point here. At the level which I gather from the OP's post, sunlight can be considered, for practical purposes, as having flat, parallel wave fronts! If you use sunlight to get the focal length of a convex lens, simply measuring where the focused image forms will, for all practical purposes, give you an accurate-enough value to equate to the focal length.
I'm not sure I agree that it is a bad thing. We tend to be overly precise here because of differences in the work and audiences we deal with versus our members. Precision requires extra words that can detract from the message one is trying to convey. As such, it is often beneficial to skip listing caveats/qualifiers when they aren't necessary because they clutter-up/distract from the desired thought process. It is a difficult balance to make.
That said, it wouldn't have been too difficult to add in a one-word qualifier like "effectively" to the problem statement.
My issue with a textbook that doesn't say approximately parallel, or something to that effect, is that it becomes an ingrained "fact" that shouldn't be questioned. We don't need to distill science down to nice sound bites that are easy to remember. I occasionally tutor students who are adamant about "facts" that they were taught, and I spend a lot of time "proving" what I shouldn't have to in order to help them.
The simplification issue in history is even worse. Most historical events were the result of dozens of interwoven factors. Over the years the list of factors has gotten shorter and shorter, to the extent that "modern" text books give only one or two factors for the cause of a series of events and not the whole picture. It starts innocently enough, but successive iterations . . .
The rays from OUR sun aren't parallel, which is why shadows have fuzzy edges. The thickness of the edge of the shadow results from partial obscuration of the sun at different angles (some rays hitting the shadowing object while other rays miss it).
However, the rays hitting our planet from OTHER stars can be considered parallel, because stars are basically point sources of light.
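A quick calculation of that fuzziness (rough figures of my own, not from the post): the Sun's angular size sets how fast a shadow's soft edge, the penumbra, grows with distance from the object:

```python
# Angular size of the Sun and the resulting growth of a shadow's penumbra.
sun_diameter = 1.3914e9        # metres
earth_sun_distance = 1.496e11  # metres (mean)

theta = sun_diameter / earth_sun_distance  # radians, about 0.0093 (~0.53 deg)

# The penumbra widens by roughly theta metres per metre of gap between
# the object and the surface the shadow falls on.
for gap in (0.1, 1.0, 10.0):   # metres
    print(gap, theta * gap)
```

So a shadow cast from 10 m away has an edge blurred over roughly 9 cm, which is exactly why rays from opposite limbs of the Sun can never be treated as parallel to each other.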
If you put 2 lens at the focal point of the concave lens you get a beam like a laser beam. Been there done that.
A concave lens? That won't focus. That'll disperse. In any case, the image of the sun will not magically turn into a point and will not collimate like a laser.
This is half wrong, as has already been pointed out multiple times in this thread. The light rays from any single point on the Sun are extremely close to parallel, which is exactly what the OP's book is talking about when it says Sunlight is parallel.
1. They’re near parallel — but not parallel. Sure, perspective intensifies their otherwise minuscule and imperceptible angle, but they’re not emanating from an infinite planar light source… they’re radiating from the sun.
Anyone can see the angular width of the Sun so that isn’t of interest, but from any part of the Sun the light is virtually parallel.
However, the Sun’s rays received by Earth are converging.
At the time, the edges of the shadows very clearly diverge (and get blurry). This is due to the Sun appearing ~0.53° in the sky. Just think about it, the light from the far edges of the sun are very clearly NOT parallel with each other.
The shadows in that picture converge due to perspective (if that is the Sun and the poles are on Earth).
He's referring to the fact that each shadow has sides that are not parallel, not the perspective between the two...
...though I still disagree that the answer can be stated so simply. From a point on each "side" (limb) of the sun emerge photons/rays covering about 180 degrees of arc; diverging. This enables some of the rays to converge when considered in certain ways from earth. So I would say the statement "the Sun’s rays received by Earth are converging" is oversimplified at best.
This is incorrect. Converging and diverging light refer to light that comes from a single point, not from light that is emitted from different points. The Sun's light is diverging as it spreads out into space, not converging.
Since people don't want to read anything that's already been written or make sure their information is correct, I think it's time to close this thread.
PARIS — Counter-intuitive but true, say scientists: a string of freezing European winters scattered over the last decade has been driven in large part by global warming.
The culprit, according to a new study, is the Arctic’s receding surface ice, which at current rates of decline could disappear entirely during summer months by century’s end.
The mechanism uncovered triples the chances that future winters in Europe and north Asia will be similarly inclement, the study reports.
Bitingly cold weather wreaked havoc across Europe in the winter months of 2005-2006, dumping snow in southern Spain and plunging eastern Europe and Russia into an unusually — and deadly — deep freeze.
Another sustained cold streak in 2009-2010 gave Britain its coldest winter in 14 years, and wreaked transportation havoc across the continent. This year seems poised to deliver a repeat performance.
At first glance, this flurry of frostiness would seem to be at odds with standard climate change scenarios in which Earth’s temperature steadily rises, possibly by as much as five or six degrees Celsius (9.0 to 10.8 degrees Fahrenheit) by 2100.
Climate skeptics who question the gravity of global warming or that humans are to blame point to the deep chills as confirmation of their doubts.
Such assertions, counter scientists, mistakenly conflate the long-term patterns of climate with the short-term vagaries of weather, and ignore regional variation in climate change impacts.
New research, however, goes further, showing that global warming has actually contributed to Europe’s winter blues.
Rising temperatures in the Arctic — increasing at two to three times the global average — have peeled back the region’s floating ice cover by 20 percent over the last three decades.
This has allowed more of the Sun’s radiative energy to be absorbed by dark-blue sea rather than bounced back into space by reflective ice and snow, accelerating the warming process.
More critically for weather patterns, it has also created a massive source of heat during the winter months.
“Say the ocean is at zero degrees Celsius (32 degrees Fahrenheit),” said Stefan Rahmstorf, a climate scientist at the Potsdam Institute for Climate Impact Research in Germany.
“That is a lot warmer than the overlying air in the polar area in winter, so you get a major heat flow heating up the atmosphere from below which you don’t have when it is covered by ice. That’s a massive change,” he told AFP in an interview.
The result, according to a modeling study published earlier this month in the Journal of Geophysical Research, is a strong high-pressure system over the newly exposed sea which brings cold polar air, swirling counter-clockwise, into Europe.
“Recent severe winters like last year’s or the one of 2005-2006 do not conflict with the global warming picture, but rather supplement it,” explained Vladimir Petoukhov, lead author of the study and a physicist at the Potsdam Institute.
“These anomalies could triple the probability of cold winter extremes in Europe and north Asia,” he said.
The researchers created a computer model simulating the impact on weather patterns of a gradual reduction of winter ice cover in the Barents-Kara Sea, north of Scandinavia.
Other possible explanations for uncommonly cold winters — reduced Sun activity or changes in the Gulf Stream — “tend to exaggerate their effect,” Petoukhov said.
He also points out that during the freezing 2005-2006 winter, when temperatures averaged 10 C below normal in Siberia, there were no unusual variations in the north Atlantic oscillation, another putative cause.
Colder European winters do not indicate a slowing of global warming trends, only an uneven distribution, researchers say.
“As I look out my window I see about 30 centimeters of snow and the thermometer reads -14.0 C,” said Rahmstorf, speaking by phone from Potsdam.
“At the same time, in Greenland we have above zero temperatures — in December.” | <urn:uuid:87262ed9-ae8e-44c5-977c-4e1da832d306> | 3.6875 | 861 | Truncated | Science & Tech. | 40.179628 | 95,577,074 |
Understanding Absolute Value
When I first learned about absolute value, I thought it was a bit…weird. First of all, the notation for absolute value is a pair of vertical bars (literally called “vertical bars”), and the only thing they seemed to do was make a negative number positive. But absolute value means a lot more than just that.
First, let’s review how absolute value works by answering these problems:
i. What is ?
ii. What is ?
iii. What is ?
Here’s one way of describing absolute value: “The absolute value of a number is its distance from 0.” This way of thinking will definitely get you to the right answer, but you might still question why it’s useful. Why do we need to know how far things are from 0? To make it a little clearer, let’s move away from abstract concepts like a number line and bring it into the real world.
Say you’re at home and decide you want to get something to eat. Conveniently, there is a Wendy’s 2 blocks east and a McDonald’s 3 blocks west. Now you don’t want to spend a lot of time getting food, so you want to go to the restaurant that’s closer. So where do you eat? Well, you’re probably going to go to Wendy’s, right? Clearly 2 blocks is a smaller distance than 3 blocks, so, intuitively, the problem is really easy. But guess what? You just used the idea of absolute value to answer that question!
See, there are two pieces of information that describe the restaurants’ locations. One is distance, 2 and 3, and the other is direction, east and west. When you are talking about absolute value, you are only talking about magnitude, or a scalar. That is, you are talking about the number, not the sign (+ or -) or direction.
The reason the “distance from 0” definition works as well is that, by definition, opposites (e.g. 2 and -2, or 5 and -5) have the same magnitude/size; they are just placed in opposite directions from 0 on the number line.
As a bonus, try to understand the intuition behind an expression like and how it relates to the expression . This should also give you some insight into subtraction and how it is related to absolute value.
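These ideas can be sketched in a few lines of code (the specific numbers, and the names a and b, are illustrative):

```java
public class AbsoluteValue {
    /** Distance between two positions on a number line: |a - b|. */
    static double distance(double a, double b) {
        return Math.abs(a - b);
    }

    public static void main(String[] args) {
        // Opposites have the same magnitude: |-2| == |2|.
        System.out.println(Math.abs(-2) == Math.abs(2));      // prints true

        // Home at 0, Wendy's 2 blocks east (+2), McDonald's 3 blocks west (-3):
        // absolute value compares magnitudes while ignoring direction.
        System.out.println(distance(0, 2) < distance(0, -3)); // prints true

        // Distance is symmetric: |a - b| == |b - a|.
        System.out.println(distance(5, 9) == distance(9, 5)); // prints true
    }
}
```

The last line is the symmetry worth noticing: subtracting in either order changes only the sign, and absolute value discards the sign.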
About The Author
Hi there! My name is Gerard, and I am a mathematics enthusiast. I've loved math for as long as I can remember and, as a tutor, I wish to share that passion with my students. My pedagogical approach is to teach students the beauty behind mathematics, to give them the intuition behind what makes ma...
What is the difference between Java and HTML?
5364 Since 25th November, 2003
Java is a full-fledged object-oriented programming language. It is derived from C++ and shares the same basic syntax with that language. HTML (HyperText Markup Language) is not a programming language. It is a markup language used to build web pages: it is not an editor, and it carries no programming logic; it simply defines how the text and images will appear on the web page. Java is used to create small applications (called applets) that run in the browser, and it can also be used to develop full-fledged standalone applications.
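A minimal pair of examples makes the contrast concrete (the class name and page content here are made up for illustration):

```java
// A complete, if tiny, Java program: compiled code with logic that runs.
public class Greeter {
    static String greet(String name) {
        return "Hello, " + name + "!";
    }

    public static void main(String[] args) {
        System.out.println(greet("web")); // prints Hello, web!
    }
}
// The closest HTML equivalent is pure description, with no logic at all:
//   <html><body><p>Hello, web!</p></body></html>
```

The Java version computes its output; the HTML fragment only declares what a browser should display.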
NATURAL HISTORY, TORONTO REGION
Carterius tubisperma Mills—Sunnyside.
Spongilla lacustris Linn.*
Ephydatia fluviatilis (L.) is probably the sponge recorded from Grenadier Pond by Goadby and Bovell as Spongia fluviatilis.
These simplest forms of animal life form a considerable part of the plankton or floating life of our waters, and also occur in debris on the bottom and in moist situations of all kinds. Parasitic species are to be found in members of all the other groups, and they even parasitize other Protozoa. The few species reported from this district are,
By Bovell, from the Humber and Island Ponds,
Amoeba princeps.
Kolpoda cucullus.
Stentor caeruleus.
Paramecium aurelia.
Vorticella convallaria.
Oxytricha gibba.
Leugophrys patula.
Chilodon cucullus.
By Acheson, from tap-water, | <urn:uuid:000bdc03-9b18-45cc-8698-4fd0499b9481> | 2.625 | 237 | Knowledge Article | Science & Tech. | 31.851197 | 95,577,126 |
|MLA Citation:||Bloomfield, Louis A. "Question 1573: Can pulling a superlong string send signals faster than the speed of light?"|
How Everything Works 22 Jul 2018. 22 Jul 2018 <http://howeverythingworks.org/print1.php?QNum=1573>.
Each portion of cable responds to being pulled by accelerating, moving, and consequently pulling on the portion of cable adjacent to it. There will be a long series of actions—pulling, accelerating, moving, and pulling again—that propagates your influence along the cable. A wave will travel along the cable, a wave consisting of a local reduction in the cable's density. It's a stretching wave. In that respect, the wave is a type of sound wave—a density fluctuation that propagates through a medium.
How quickly the density wave travels along the cable depends on how stiff the cable is and on its average mass density. The stiffer the cable, the more strongly each portion can influence its neighboring portions and the faster the density wave will travel. The greater the cable's mass density, the more inertia it has and the slower it responds to pulls, so the density wave will travel slower.
A cable made from a stiff, low-density material carries sound faster than one made from a soft, high-density material. A steel cable should carry your wave at about 6100 meters/second (3.8 miles/second). But a diamond cable would reach 12000 meters/second (7.5 miles/second) because of its extreme stiffness, and a beryllium cable would approach 13000 meters/second (8.0 miles/second) because of its extremely low mass density.
Regardless of which material you choose, you're clearly not going to be able to send any signals faster than the speed of light. It would take a density wave more than 100,000 years to travel the 5-light year length of your cable. And sadly, friction-like dissipation effects in the cable would turn the density wave's energy into thermal energy in a matter of seconds, so it would barely get started on its journey before vanishing into randomness. | <urn:uuid:b907fc67-316e-4ca9-87fc-242d705736c1> | 3.15625 | 442 | Knowledge Article | Science & Tech. | 60.900927 | 95,577,128 |
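As a rough numerical check on these claims, the thin-rod speed of sound is v = sqrt(E/ρ) for Young's modulus E and density ρ. The material constants below are textbook approximations, not values from the answer above, and the thin-rod figure for steel comes out somewhat below the bulk 6100 m/s quoted:

```java
public class CableWave {
    static final double LIGHT_YEAR_M = 9.4607e15;  // metres in one light-year
    static final double SECONDS_PER_YEAR = 3.156e7;

    /** Thin-rod sound speed v = sqrt(E / rho). */
    static double rodSpeed(double youngsModulusPa, double densityKgPerM3) {
        return Math.sqrt(youngsModulusPa / densityKgPerM3);
    }

    /** Years for a density wave to cross a cable lengthLy light-years long. */
    static double travelYears(double lengthLy, double speedMps) {
        return lengthLy * LIGHT_YEAR_M / speedMps / SECONDS_PER_YEAR;
    }

    public static void main(String[] args) {
        // Steel: E ~ 200 GPa, rho ~ 7850 kg/m^3 (assumed textbook values),
        // giving roughly 5 km/s by the thin-rod formula.
        System.out.printf("steel rod speed ~ %.0f m/s%n", rodSpeed(200e9, 7850));
        // Even at the bulk speed of 6100 m/s, a 5-light-year cable takes on
        // the order of 250,000 years to cross.
        System.out.printf("crossing time ~ %.0f years%n", travelYears(5, 6100));
    }
}
```

The crossing time dwarfs any human timescale, which is the point of the answer: no mechanical tug can outrun light.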
Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction.
A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in the synthesis of new chemical compounds and for biomolecules in solution, and is therefore of great industrial importance.
In this reaction, charged particles encounter molecules and one molecular group is replaced by another. For a long time, science has been trying to reproduce these processes at the interface of chemistry and physics in the laboratory and to understand them at the atomic level.
The team headed by experimental physicist Roland Wester at the Institute of Ion Physics and Applied Physics at the University of Innsbruck is one of the world's leading research groups in this field.
Proton Exchange Reaction Strengthened
In a specially constructed experiment, the physicists from Innsbruck collide the charged particles with molecules in vacuum and examine the reaction products. To determine if targeted vibration excitation has an impact on a chemical reaction, the scientists use a laser beam that excites a vibration in the molecule. In the current experiment, negatively charged fluorine ions (F-) and methyl iodide molecules (CH3I) were used.
In the collision, due to the exchange of an iodine bond with a fluorine bond, a methyl fluoride molecule and a negatively charged iodine ion are formed. Before the particles meet, the laser excites carbon-hydrogen stretching vibrations in the molecule. "Our measurements show that the laser excitation does not enhance the exchange reaction," says participating scientist Jennifer Meyer. "The hydrogen atoms just seem to be watching the reaction."
The result is substantiated by the observation that a competing reaction strongly increases. In this other proton exchange reaction, a hydrogen atom is torn from the methyl iodide molecule and hydrogen fluoride (HF) is formed. "We let the two species collide 20 times per second, the laser is applied in every second collision, and we repeat the process millions of times," explains Meyer.
“Whenever the laser is irradiated, this proton exchange reaction is drastically amplified." Theoretical chemists from the University of Szeged in Hungary and the University of New Mexico in the USA have further supported the experimental results from Innsbruck using computer simulations.
Spectator Role in Focus
In high-precision investigations of chemical processes, only the simplest model, the reaction of an atom with a diatomic molecule, has so far been studied. "Here, all particles are inevitably involved in the reaction. There are no observers", says Roland Wester. The system that we are now studying is so large that observers appear. However it is still small enough to be able to study these observers very precisely." For large molecules, there are many particles that are not directly involved in the reaction. The investigation of their role is one of the long-term goals of the Wester group. The researchers also want to refine the current experiment in order to uncover further possible subtle effects.
Laser Controlled Chemistry
The question of whether certain reactions can be intensified by the targeted excitation of individual molecular groups is also an important consideration. "If you understand something, you can also exercise control," sums up Roland Wester. "Instead of stimulating a reaction through heat, it may make sense to stimulate only individual groups of molecules to achieve a specific reaction," adds Jennifer Meyer. This may avoid competing reaction processes that are a common problem in industrial chemistry or biomedical research. The more precise the control over the chemical reaction, the less waste is produced and the lower the costs.
The current paper has been published in the journal Science Advances. The research was funded by, among others, the Austrian Science Fund FWF and the Austrian Academy of Sciences.
Publication: Stretching vibration is spectator in nucleophilic substitution. Martin Stei, Eduardo Carrascosa, Alexander Doerfler, Jennifer Meyer, Balázs Olasz, Gábor Czakó, Anyang Li, Hua Guo, Roland Wester. Science Advances 2018 (Open Access) DOI: 10.1126/sciadv.aas9544
https://www.uibk.ac.at/ionen-angewandte-physik/index.html.en - Institute for Ion Physics and Applied Physics, University of Innsbruck
Dr. Christian Flatz | Universität Innsbruck
Tropical Storm Genevieve may be a remnant low pressure area but there's still a chance it could make a comeback.
Meanwhile, GOES-West satellite imagery showed there are two developing low pressure areas "chasing" Genevieve to the east. NOAA's Central Pacific Hurricane Center has suddenly become very busy tracking these three areas.
NASA/NOAA's GOES Project at NASA's Goddard Space Flight Center in Greenbelt, Maryland provided an infrared image of the Central and Eastern Pacific on July 28 that showed Genevieve southeast of Hawaii, and two other low pressure areas behind it now getting organized.
Tropical Storm Genevieve weakened to a tropical depression on Sunday, July 27 and the National Hurricane Center issued their final advisory on the system as it was entering the Central Pacific. At 5 a.m. EDT the depression was located near 12.4 north latitude and 140.1 west longitude, about 1,130 miles (1,820 km) east-southeast of South Point, Hawaii. It was moving to the west near 9 mph and had maximum sustained winds near 35 mph (55 kph).
By Monday, July 28 at 8 a.m. EDT (2 a.m. HST) Genevieve became a remnant low pressure area. The remnant low was located about 780 miles southeast of Hilo, Hawaii.
The Central Pacific Hurricane Center (CPHC) noted that this may not be the last of Genevieve, however, as "environmental conditions may be somewhat conducive for development of this system as it continues to move westward at about 10 mph during the next couple of days." CPHC gives Genevieve's remnants a 30 percent chance of making a comeback in the next couple of days.
In addition to the remnant low, there's a developing area of low pressure located east of Genevieve's remnants. An elongated area of showers and thunderstorms is located about 860 miles south of Honolulu, Hawaii. The low pressure area is moving to the west at 10 mph and also has a 30 percent chance of development over the next two days.
Even farther east is yet another area of low pressure. That one is located about 1,400 miles east of the Big Island of Hawaii and it is producing limited shower activity. This low is not in a favorable area for development so CPHC gave it a 10 percent chance for becoming a tropical depression in the next two days. This low is still in the Eastern Pacific Ocean, and is expected to cross into the Central Pacific in two more days.
Rob Gutro | EurekAlert!
Java SAX parser validating
The chapter introduces the Xerces download component, its integrated parser, documentation, and samples. You may use these samples as frameworks for further development. The JAXP 1.1 Expert Group (EG) introduced a set of APIs called the Transformation API for XML (TrAX) in JAXP 1.1, and since then JAXP has been called the Java API for XML Processing. JAXP has since evolved well beyond parsing an XML document: it now also supports validation against a schema while parsing, validation against a pre-parsed schema, evaluating XPath expressions, and more. XML transformation itself, using XSLT, was introduced in JAXP 1.1 (JSR-63).
We assume that you have at least an intermediate comfort level with Java, that you understand the concepts of paths and classpaths, that you have used Java packages, classes, and interfaces, and that you have experience writing, compiling, and running applications. JAXP is a lightweight API for processing XML documents that is agnostic of the underlying XML processor, which is pluggable. This mechanism is frequently used to transmit and receive XML documents. When using the SAX parser we provide the callback methods, and the parser invokes them as it reads the XML data. SAX processing is state-independent: the handling of an element does not depend on the other elements. In SAX we cannot go back to an earlier part of the document; we can only process elements one by one, from the start of the document to the end.
SAX parser objects are configured by setting features and properties. This allows just a handful of standard methods to support an arbitrary number of standard and non-standard features and properties of various types.
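A minimal sketch of the callback style described above, using the JDK's built-in javax.xml.parsers and org.xml.sax packages (the element names and the counting handler are illustrative, not part of the article):

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.SAXParser;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;

public class SaxSketch {
    /** Streams through the document once, counting start-element events. */
    static int countElements(String xml) throws Exception {
        SAXParserFactory factory = SAXParserFactory.newInstance();
        factory.setNamespaceAware(true); // one of the features set up front
        SAXParser parser = factory.newSAXParser();

        final int[] count = {0};
        // We supply the callbacks; the parser invokes them as it reads.
        DefaultHandler handler = new DefaultHandler() {
            @Override
            public void startElement(String uri, String localName,
                                     String qName, Attributes attributes) {
                count[0]++;
            }
        };
        parser.parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)),
                     handler);
        return count[0];
    }

    public static void main(String[] args) throws Exception {
        System.out.println(countElements("<order><item/><item/></order>")); // prints 3
    }
}
```

The handler sees each element exactly once, in document order; there is no way to revisit an earlier element, which is the state-independent, forward-only behavior described above.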
0444 GMT July 18, 2018
The finding could challenge theories of how stars live and die, and may have implications for measuring the expansion of the Universe, according to sciencenews.org.
As a star ages, it sheds most of its gas into space until all that remains is a dense core of carbon and oxygen, the ashes of a lifetime of burning helium. That core, plus a thin shellacking of helium, is called a white dwarf.
But the proportion of those elements relative to one another was uncertain.
Astrophysicist Noemi Giammichele, now at the Institute of Research in Astrophysics and Planetology in Toulouse, France, said, “From theory, we have a rough idea of how it’s supposed to be, but we have no way to measure it directly.”
Luckily, some white dwarfs encode their inner nature on their surface. These stars change their brightness in response to internal vibrations.
Astrophysicists can infer a star’s internal structure from the vibrations, similar to how geologists learn about Earth’s interior by measuring seismic waves during an earthquake.
Giammichele and her colleagues used data from NASA’s Kepler space telescope, which watched stars unblinkingly to track periodic changes in their brightness.
Kepler’s chief aim was to find exoplanets, the worlds orbiting distant stars. But it also monitored white dwarf KIC 08626021, located 1,375 light-years away in the constellation Cygnus, for 23 months.
The observations provided the highest-precision data ever on tiny changes in a white dwarf’s brightness and, indirectly, its vibrations.
Next, Giammichele borrowed a computer simulation technique from her former life as an aeronautical engineer to figure out how the changes in vibrations related to the makeup of the core.
The team ran millions of simulations, looking for one that reproduced the exact light changes that Kepler observed.
One simulation fit the data perfectly, showing that the white dwarf had the expected carbon and oxygen core with a thin shell of helium.
But the details were surprising. The core was about 86 percent oxygen, 15 percent greater than physicists had previously calculated.
That suggested that something about the processes that convert helium to carbon and oxygen or mix elements in the star’s core during its active lifetime must boost the amount of oxygen.
Four other white dwarfs show a similar trend, said study coauthor Gilles Fontaine, an astrophysicist at the University of Montreal.
“We certainly will go ahead and analyze many more.
“If other white dwarfs turn out to be similar, the results will send theorists who study stellar evolution back to the drawing board.”
White dwarfs are also thought to be the precursors of type Ia supernovas.
These catastrophic stellar explosions were once thought to have the same intrinsic brightness, meaning they appeared brighter or dimmer depending only on their distance from Earth.
Measuring their actual distances led to the discovery that the Universe is expanding at an accelerating rate, which physicists explain by invoking a mysterious substance called dark energy.
More recent observations suggest that these so-called standard candles may not be so standard after all.
Fontaine said, “If the white dwarfs that help create supernovas have varying oxygen contents, that may help explain some of the differences.”
Astrophysicist Alexei Filippenko of the University of California, Berkeley, said, “Accounting for that difference may someday help reveal details of what dark energy is made of.”
But those implications are a long way off.
He said, “Just how much bearing it will have on cosmology remains to be seen.” | <urn:uuid:5b6a9600-1fe7-48c9-a288-c609fb28a82c> | 4.46875 | 791 | News Article | Science & Tech. | 37.799423 | 95,577,168 |
A perfectly preserved woolly mammoth carcass with liquid blood has been found on a remote Arctic island, fueling hopes of cloning the Ice Age animal, Russian scientists said Thursday.
The carcass was in such good shape because its lower part was stuck in pure ice, said Semyon Grigoryev, the head of the Mammoth Museum, who led the expedition into the Lyakhovsky Islands off the Siberian coast.
“The blood is very dark, it was found in ice cavities below the belly and when we broke these cavities with a poll pick, the blood came running out,” he said in a statement released by the North-Eastern Federal University in Yakutsk, which sent the team.
Woolly mammoths are thought to have died out around 10,000 years ago, although scientists think small groups of them lived longer in Alaska and on islands off Siberia.
Scientists have deciphered much of the woolly mammoth’s genetic code from their hair, and some believe it is possible to clone the animals if living cells are found.
Grigoryev said the find could provide the necessary material. The blood of mammoths appeared not to freeze in extreme temperatures, likely keeping mammoths warm, he said.
The temperature at the time of excavation was 14 to 19 degrees Fahrenheit.
The researchers collected the samples of the animal’s blood in tubes with a special preservative agent. They were sent to Yakutsk for bacterial examination in order to spot potentially dangerous infections.
The carcass’ muscle tissue was also in perfect condition.
“The fragments of muscle tissues, which we’ve found out of the body, have a natural red color of fresh meat,” Grigoryev said.
Up to 13 feet in height and 10 tons in weight, mammoths roamed across huge areas between Great Britain and North America and were driven to extinction by humans and the changing climate. | <urn:uuid:bb1b6d94-d4da-44a4-9f36-6ce07d22772a> | 2.984375 | 400 | News Article | Science & Tech. | 43.823088 | 95,577,199 |
Research shows it is possible to use an optogenetic technique to target select cells in the adult brain in an animal model
UW Medicine researchers have developed a technique for inserting a gene into specific cell types in the adult brain in an animal model.
Recent work shows that the approach can be used to alter the function of brain circuits and change behavior. The study appears in the journal Neuron in the NeuroResources section.
Gregory Horwitz, associate professor of physiology and biophysics at the University of Washington School of Medicine in Seattle, led the research team. He said that the approach will allow scientists to better understand what roles select cell types play in the brain's complex circuitry.
Researchers hope that the approach might someday lead to developing treatments for conditions, such as epilepsy, that might be curable by activating a small group of cells
"The brain is made up of a mix of many cell types performing different functions. One of the big challenges for neuroscience is finding ways to study the function of specific cell types selectively without affecting the function of other cell types nearby," Horwitz said. "Our study shows it is possible to selectively target a specific cell type in an adult brain using this technique and affect behavior nearly instantly."
In their study, Horowitz and his colleagues at the Washington National Primate Research Center in Seattle inserted a gene into cells in the cerebellum, a small structure located at the back of the brain and tucked under the brain's larger cerebrum.
The cerebellum's primary function is controlling motor movements. Disorders of the cerebellum generally lead to often disabling loss of coordination. Recent research suggests the cerebellum may also be important in learning and may be involved in such conditions as autism and schizophrenia.
The cells the scientists selected to study are called Purkinje cells. These cells, named after their discoverer, Czech anatomist Jan Evangelista Purkinje, are some of the largest in the human brain. They typically make connections with hundreds of other brain cells.
"The Purkinje cell is a mysterious cell," said Horwitz. "It's one of the biggest and most elaborate neurons and it processes signals from hundreds of thousands of other brain cells. We know it plays a critical role in movement and coordination. We just don't know how."
The gene they inserted, called channelrhodopsin-2, encodes for a light-sensitive protein that inserts itself into the brain cell's membrane. When exposed to light, it allows ions - tiny charged particles - to pass through the membrane. This triggers the brain cell to fire.
The technique, called optogenetics, is commonly used to study brain function in mice. But in these studies, the gene must be introduced into the embryonic mouse cell.
"This 'transgenic' approach has proved invaluable in the study of the brain," Horwitz said. "But if we are someday going to use it to treat disease, we need to find a way to introduce the gene later in life, when most neurological disorders appear."
The challenge for his research team was how to introduce channelrhodopsin-2 into a specific cell type in an adult animal. To achieve this, they used a modified virus that carried the gene for channelrhodopsin-2 along with a segment of DNA called a promoter. The promoter stimulates the cell to start expressing the gene and make the channelrhodopsin-2 membrane protein. To make sure the gene was expressed only by Purkinje cells, the researchers used a promoter that is strongly active in Purkinje cells, called L7/Pcp2.
In their paper, the researchers reported that by painlessly injecting the modified virus into a small area of the cerebellum of rhesus macaque monkeys, the channelrhodopsin-2 was taken up exclusively by the targeted Purkinje cells. The researchers then showed that when they exposed the treated cells to light through a fine optical fiber, they were able to stimulate the cells to fire at different rates and affect the animals' motor control.
Horwitz said that because the L7/Pcp2 promoter is more active in Purkinje cells than in other cell types, Purkinje cells were more likely to produce the channelrhodopsin-2 membrane protein.
"This experiment demonstrates that you can engineer a viral vector with this specific promoter sequence and target a specific cell type," he said. "The promoter is the magic. Next, we want to use other promoters to target other cell types involved in other types of behaviors."
Horwitz's coauthors were: lead author Yasmine El-Shamayleh, a postdoctoral fellow; Yoshiko Kojima, an acting instructor; and Robijanto Soetedjo, a UW School of Medicine research associate professor of physiology and biophysics. All are researchers at the Washington National Primate Research Center.
This study was funded by National Institutes of Health grants to the researchers; an NIH Office of Research Infrastructure Programs grant to the Washington National Primate Research Center; and a National Eye Institute Center Core Grant for Vision Research to the University of Washington School of Medicine.
Michael McCarthy | EurekAlert!
Edited By: Greta A Fryxell
144 pages, B/w figs, tabs
Provides information regarding ecological conditions and population dynamics of both marine and freshwater algae from diverse habitats. Unfavourable environmental conditions induce the production of resting spores in certain organisms. Many algae have successfully developed specialized resistant characteristics that give them considerable evolutionary advantages over organisms that are unable to withstand periods of extreme change in their environment. Though the resting spore is considered to be an advantageous and primitive trait, the benefits are offset by the great amount of energy needed to produce and maintain the cell in near-dormancy over long periods of time and by the potentially 'lost' number of cell divisions that could have occurred during the resting phase. This interesting contrast of advantages and disadvantages has stimulated biologists to investigate the morphology and the underlying physiological processes of vegetative cells and thick-walled resting spores.
First published in 1983.
The set of Important Bird and Biodiversity Areas (IBAs) projected to suffer the greatest direct impacts of climate change (and highest rates of turnover of the species for which they were identified) do not entirely match the set of IBAs likely to be indirectly impacted by human response to climate change (those with the greatest projected human vulnerability). Hence, priorities for adaptation interventions need to account for likely human responses to climate change.
Whilst birds are changing their ranges, phenology and species assemblages in response to changing environmental conditions (Crick 2004), humans too are responding to climate change. Humans are already altering agricultural practices (Deressa et al. 2009, Thomas et al. 2007) and fishing grounds (Pinsky and Fogarty 2012) as changing climatic conditions force responses to maintain food security and human health. However, much research on the vulnerability of species or sites to climate change focuses solely on the direct impacts of climate change (Hole et al. 2009, Bagchi et al. 2013). Modelling vulnerability whilst ignoring human responses is likely to substantially understate the vulnerability of species and sites.
Recent work highlights this by conducting a vulnerability assessment of 164 bird species across southern Africa (Segan et al. 2015). Sites and species for which vulnerability outcomes were influenced by the additive effect of human responses were identified. A total of 51 human vulnerability indicators were used to model these indirect impacts, accounting for human exposure, sensitivity and adaptive capacity to climate change.
A negative correlation was found between forecasted bird range loss and exposure to impacted human populations, suggesting little overlap between areas experiencing direct and indirect impacts. The Madagascan Long-tailed Ground-roller Uratelornis chimaera, for example, is at little risk from the direct impacts of climate change but ranks third highest for indirect impacts, owing to its vulnerability to habitat loss. Areas of high conservation priority identified through conventional modelling methods may therefore be identifying vulnerable species and sites in a biased manner.
This case study is taken from ‘The Messengers: What birds tell us about threats from climate change and solutions for nature and people’. To download the report in full click here.
Compiled: 2015 Copyright: 2015
BirdLife International (2015) The impact of climate change on human communities significantly alters the vulnerability of IBAs . Downloaded from http://www.birdlife.org on 23/07/2018 | <urn:uuid:5966c048-da25-4565-a433-2e06ec442681> | 3.78125 | 487 | Academic Writing | Science & Tech. | 21.360561 | 95,577,238 |
According to the Voice of Russia, "As of Saturday morning, 103 wildfires continue to burn in Russia over 27,412 hectares, including 26 large wildfires in the Far East and in Siberia.
On Friday, 147 new fires broke out over 12,509 hectares, and 150 were extinguished over 5,437 hectares.
A state of emergency has been declared in the regions with the most serious conditions: the Amur region and the Maritime and Baikal territories.
This was according to the EMERCOM's (Emergency Control Ministry) spokesperson Alexander Drobyshevsky.
Satellites (such as Aqua and Terra) have located "1,587 thermal points, of which 1,362 were later confirmed. Over 800 specialized vehicles, 30 aircraft and 3,247 people are involved in extinguishing the wildfires."
According to EMERCOM, 63 fires have not been put out, 42 of which are in the Primorsky region (not seen in this image), six in the Jewish Autonomous Region, seven in the Khabarovsk region (not seen in this image), and eight in the Amur region. Of the active fires, 21 have been contained.
NASA's fire imagery from April 23, 2014 showed the outbreak of fires in the Primorsky Region of Russia.
For more information on this developing situation, go to: http://voiceofrussia.com/news/2014_04_19/30-aircrafts-of-Russian-Emergency-Ministry-extinguish-wildfires-in-Far-East-Siberia-5759/
This natural-color satellite image was collected by the Moderate Resolution Imaging Spectroradiometer (MODIS) aboard the Aqua satellite on April 28, 2014. Actively burning areas, detected by MODIS’s thermal bands, are outlined in red.
NASA image courtesy Jeff Schmaltz, MODIS Rapid Response Team. Caption: NASA/Goddard, Lynn Jenner with information from the Voice of Russia.
Rob Gutro | EurekAlert!
- Open Access
New advantages of the combined GPS and GLONASS observations for high-latitude ionospheric irregularities monitoring: case study of June 2015 geomagnetic storm
© The Author(s) 2017
Received: 17 August 2016
Accepted: 2 May 2017
Published: 12 May 2017
The techniques based on transionospheric radio wave propagation, in particular satellite navigation signals, are effectively used for monitoring and investigating the main parameters of ionospheric plasma irregularities. It is known that radio signals passing through the ionosphere suffer varying degrees of rapid amplitude and phase fluctuations, referred to as scintillations, created by random fluctuations of the medium's refractive index, caused by plasma density gradients inside the ionosphere (e.g., Tsunoda et al. 1985; Basu et al. 1988; Aarons 1997; Prikryl et al. 2012). At high latitudes, these gradients are mostly caused by plasma processes associated with dynamic auroral processes, such as energetic particle precipitation and high-speed plasma convection (Keskinen and Ossakow 1983). Many researchers have used GPS signals to study ionospheric processes (e.g., Pi et al. 1997; Aarons and Lin 1999; Valladares et al. 2004; Jakowski et al. 2008, 2012; Tiwari et al. 2013; Prikryl et al. 2014; Cherniak and Zakharenkova 2015; van der Meeren et al. 2014; Jacobsen and Andalsvik 2016). Recently, Cherniak and Zakharenkova (2016a) applied data from GPS receivers onboard five low earth orbit satellites to examine the occurrence of topside ionospheric irregularities under geomagnetic storm conditions and to compare them with effects registered concurrently in the ground-based GPS data.
The number of the ground-based receivers within the global and regional networks grew significantly from several hundreds worldwide in the 1990s to more than 6000 stations today. These networks provide continuous measurements of navigation signals parameters and open access to their databases. In addition to the increase in the number of the ground-based stations, the GPS constellation was modernized by the addition of satellites in the GPS-IIF series. Other GNSS like the Russian GLONASS, the European GALILEO and the Chinese Beidou systems increased the number of satellites placed into orbit. Further development of the multi-system GNSS constellations and modernization of the ground-based receivers to be able to track multi-frequency and multi-system GNSS signals provide more opportunities for ionospheric research in the near future.
At the moment, the second fully deployed GNSS is the Russian system, GLONASS (GLObal Navigational Satellite System) (see Hofmann-Wellenhof et al. 2008; ICD-GLONASS 2008; Jeffrey 2015). The full orbital constellation consists of 24 satellites in three orbital planes. The orbit altitude is ~19,100 km above the Earth's surface. A significant advantage of GLONASS, as compared to GPS, is its orbit inclination of ~65°, ten degrees higher than the GPS orbit inclination. This feature is important for the high-latitude region, where a multi-system GNSS receiver can track GLONASS navigation signals for a much longer time and at higher elevation angles than GPS ones. The number of ground-based receivers able to track both GPS and GLONASS signals has increased significantly in recent years. In the present paper, we demonstrate the advantages of multi-constellation measurements for high-latitude ionospheric irregularities monitoring for the case study of the June 2015 geomagnetic storm.
Case study: the summer solstice 2015 geomagnetic storm
The 3-h mid-latitude geomagnetic index Kp (not shown here) reached the value of 8+ at 18-21 UT on 22 June and at 03-06 UT on 23 June. This geomagnetic storm is the second largest to date in the 24th solar cycle after the St. Patrick’s Day storm that occurred on 17–18 March 2015 (e.g., Cherniak and Zakharenkova 2015).
Data and methodology
For the given research, we made use of all available permanent ground-based GNSS stations, which are able to track the GPS signals only and combined GPS and GLONASS signals. We use ~5800 ground-based GNSS stations gathered separately from several global and regional GNSS networks: the International GNSS Service (IGS), the University NAVSTAR Consortium (UNAVCO), the Continuously Operating Reference System (CORS), the Scripps Orbit and Permanent Array Center (SOPAC), the EUREF Permanent GNSS network (EPN), the Federal Agency for Cartography and Geodesy (BKGE) in Germany, Institut Geographique National in France (IGN), the Swedish geodetic network (SWEPOS), the Finnish Reference Network (FGI-FinnRef), the NOANET GNSS Network in Greece, the Spanish GNSS Reference Stations Network (ERGNSS), the Natural Resources Canada’s Canadian Geodetic Survey, the Canadian High Arctic Ionospheric Network (CHAIN), the Brazilian Network for Continuous Monitoring (RBMC), the Red Argentina de Monitoreo Satelital Continuo (RAMSAC CORS), the Australian Regional GNSS Network (ARGN) and the New Zealand Government GNSS CORS.
Figure 2b demonstrates an example of the data coverage accumulated during 1 h by all available stations tracking the GPS signals; blue dots show the location of the ionosphere pierce points (IPPs) on links from a ground-based receiver to a GPS satellite. Figure 2c presents the same maps of the registered GPS data with the superimposed GLONASS measurements (red dots). It is clearly seen that the GLONASS data coverage is very good and dense over the regions where the ground-based stations can track both systems simultaneously, e.g., the USA, Europe, Australia and Argentina. Thus, in these regions the use of GLONASS can potentially increase the number of available measurements by a factor of 1.5–2 compared with using GPS only. But the impact of GLONASS is even more valuable in regions with sparse ground-based stations, particularly at high latitudes of both hemispheres, in the Asian sector, and on remote islands with GNSS stations. Figure 2d shows the percentage contribution of the GLONASS measurements to each data cell. Here, dark blue indicates cells with GPS measurements only, while orange and red depict cells where GLONASS contributes more than 60–80% of the total GNSS measurements. It is clearly seen that dense networks that support both types of signals (GPS and GLONASS) can significantly increase the total number of measurements per spatial cell/bin by adding the GLONASS system.
We processed GPS and GLONASS measurements derived from more than 6000 ground-based GNSS stations. These data are freely available to users and distributed in the raw RINEX (Receiver Independent Exchange) format (Gurtner 1994). The sampling rate of the raw data is usually 30 s for the majority of stations. In the case of high-rate measurements (e.g., 1 s), the data were resampled to 30 s for uniformity.
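The paper does not spell out the sTEC formula, but the standard way to obtain relative slant TEC from dual-frequency RINEX carrier-phase data is the geometry-free (L1 minus L2) combination. The sketch below is not the authors' code; the function name is ours, and the result carries an unknown constant offset per phase-connected arc (the unresolved integer ambiguities), which cancels when the series is differenced in time as done for ROT later in the paper:

```python
# Relative slant TEC (TECU) from dual-frequency GPS carrier phase
# via the geometry-free combination. Illustrative sketch only.
F1 = 1575.42e6          # GPS L1 carrier frequency, Hz
F2 = 1227.60e6          # GPS L2 carrier frequency, Hz
C = 299_792_458.0       # speed of light, m/s
K = 40.308e16           # 40.308 m^3/s^2, scaled so the result is in TECU

def relative_stec(l1_cycles, l2_cycles):
    """Relative sTEC in TECU from L1/L2 phase measurements in cycles."""
    l1_m = l1_cycles * C / F1            # phase converted to metres
    l2_m = l2_cycles * C / F2
    factor = (F1**2 * F2**2) / (K * (F1**2 - F2**2))
    return factor * (l1_m - l2_m)        # roughly 9.5 TECU per metre of L1-L2
```

The same combination applies to GLONASS, but each GLONASS satellite transmits on its own FDMA frequency pair, so the two frequencies must be taken per satellite rather than as constants.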
After determination of sTEC values along the LOS for all visible GPS and GLONASS satellites during 24 h, we applied algorithms for detection and correction of cycle slips and loss-of-lock and removed outliers. An elevation cutoff mask of 25° is used here to minimize multi-path effects in the sTEC variations. The carrier phase measurements can also be affected by cycle slips, which are sudden changes in the integer phase ambiguity due to the phase tracking loop within the receiver. A cycle slip may be as small as one or a few cycles, or contain millions of cycles. Here, we use two approaches for cycle slip detection: the widelane Melbourne–Wübbena linear combination (Melbourne 1985; Wübbena 1985) and the method of differencing geometry-free phase observations with estimation of the rate of TEC change, similar to that of Horvath and Crozier (2007). Then, all derived sTEC values are geolocated using a single-layer model approach, in which sTEC along the LOS is assigned to the point of LOS intersection (IPP) with a thin ionospheric layer located at 350 km altitude.
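The single-layer geolocation step can be sketched as follows. This is not the authors' implementation: the function name and the spherical Earth with mean radius 6371 km are our assumptions; the 350-km shell height comes from the text. The formula breaks down very close to the poles, where the longitude equation is singular:

```python
import math

RE_KM = 6371.0    # mean Earth radius (our assumption; spherical Earth)
H_KM = 350.0      # thin-shell height used in the paper

def ipp(lat_deg, lon_deg, az_deg, el_deg, h_km=H_KM):
    """Ionosphere pierce point (lat, lon, degrees) for a receiver at
    (lat_deg, lon_deg) seeing a satellite at azimuth az_deg, elevation el_deg."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    az, el = math.radians(az_deg), math.radians(el_deg)
    # Earth-central angle between receiver and pierce point
    psi = math.pi / 2 - el - math.asin(RE_KM / (RE_KM + h_km) * math.cos(el))
    lat_i = math.asin(math.sin(lat) * math.cos(psi)
                      + math.cos(lat) * math.sin(psi) * math.cos(az))
    lon_i = lon + math.asin(math.sin(psi) * math.sin(az) / math.cos(lat_i))
    return math.degrees(lat_i), math.degrees(lon_i)
```

At zenith (elevation 90°) the central angle is zero and the IPP coincides with the receiver; at the 25° cutoff used in the paper the IPP sits several degrees away, which is why low-elevation arcs are the most sensitive to mapping errors.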
As an ionospheric irregularity can be characterized by measuring its impact on the phase of the received GPS signal, Pi et al. (1997) introduced into usage for ground-based GPS observations two GPS-based indices: ROT and ROTI. Rate of TEC change (ROT) is the time derivative of TEC and is considered a measure of the phase fluctuation activity. Rate of TEC Index (ROTI) represents the standard deviation of the ROT over a selected time interval. The ROTI characterizes the severity of the GPS phase fluctuations and detects the presence of ionospheric irregularities, which can be characterized by the TEC spatial gradient. Today, the ROT/ROTI indices, derived from the ground-based GPS data, are widely used in near-real-time services of space weather monitoring (e.g., NICT 2016; SWACI 2016, as described by Jakowski et al. 2006; Miyake and Jin 2010) and in investigations of ionospheric irregularity occurrence at high- and low-latitude regions (e.g., Cherniak and Zakharenkova 2015). Here, we propose to extend the standard GPS database with the combined GPS and GLONASS observations.
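Under these definitions, ROT and ROTI for an evenly sampled 30-s sTEC arc can be computed as in this sketch. It is not the authors' code: the function name is ours, and the 5-min ROTI window is a common choice in the literature, since the text only says "a selected time interval":

```python
import numpy as np

def rot_roti(stec_tecu, dt_s=30.0, window_s=300.0):
    """ROT (TECU/min) and ROTI for one evenly sampled slant-TEC arc.

    stec_tecu : array of slant TEC values, one per dt_s seconds.
    window_s  : ROTI window; 300 s (5 min) is the common choice.
    """
    rot = np.diff(stec_tecu) / (dt_s / 60.0)       # time derivative, TECU/min
    n = int(window_s / dt_s)                        # ROT samples per ROTI window
    roti = np.array([rot[i:i + n].std()            # std-dev of ROT in each window
                     for i in range(0, len(rot) - n + 1, n)])
    return rot, roti
```

A smoothly drifting TEC arc (constant slope) gives a nonzero ROT but zero ROTI; only rapid, irregular TEC gradients raise ROTI, which is what makes the index a proxy for irregularity severity.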
Further, to analyze the high-latitude ionospheric irregularities occurrence and temporal development, we construct, from the multi-site GNSS database, the ROTI maps in the geographic coordinate frame. These ROTI maps with a polar view projection were constructed with high spatial resolution for the latitudinal range of 30°–90° in both northern and southern hemispheres. All ROTI values derived from the GPS and GLONASS data along all visible satellite passes were averaged and binned into cells of 1° × 1° resolution in geographic latitude and longitude. No interpolation was used here; the empty cells with no data or with less than 50 points per cell were marked as blank ones. The temporal interval was selected as 1 h.
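The binning described above can be sketched as follows (not the authors' code; the function name is ours). Per-IPP ROTI values for one hour are averaged into 1° x 1° geographic cells, and cells with fewer than 50 points are left blank, as stated in the text:

```python
import numpy as np

def roti_map(lats, lons, rotis, min_pts=50):
    """Average per-IPP ROTI values into a 1x1 degree global grid.

    Rows index latitude -90..89, columns index east longitude 0..359.
    Cells with fewer than min_pts samples are blank (NaN), as in the paper.
    """
    grid_sum = np.zeros((180, 360))
    grid_cnt = np.zeros((180, 360), dtype=int)
    ilat = np.clip((np.asarray(lats, dtype=float) + 90.0).astype(int), 0, 179)
    ilon = np.clip((np.asarray(lons, dtype=float) % 360.0).astype(int), 0, 359)
    np.add.at(grid_sum, (ilat, ilon), rotis)    # accumulate duplicates correctly
    np.add.at(grid_cnt, (ilat, ilon), 1)
    return np.where(grid_cnt >= min_pts,
                    grid_sum / np.maximum(grid_cnt, 1),
                    np.nan)
```

The `np.add.at` calls are needed because many IPPs fall in the same cell within an hour; plain fancy-index assignment would count each cell only once.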
Results and discussion
Comparison of GPS and GLONASS measurements in polar region
Two-dimensional combined GPS and GLONASS ROTI maps
We should note that the North American and European sectors have an essentially better data coverage than other regions in the northern and southern hemisphere (see Fig. 2a, e), that is why the hourly ROTI maps reveal their best data coverage and higher resolution over these regions. Overall, the mid- and high latitudes of the northern hemisphere exhibit proper coverage by the GPS and GLONASS observation within a wide longitudinal range of 140°W–50°E. Apart from GNSS, there is no other radio-based instrument able to provide such data coverage from the ground.
These hourly ROTI maps demonstrate the dynamics of the ionospheric irregularities in a geographic coordinate frame. The ROTI values marked by dark blue color (ROTI below 0.2 TECU/min) represent very weak or an absence of the ionospheric irregularities. The ROTI values marked by orange and red colors (ROTI >0.8–1.0 TECU/min) correspond to the occurrence of the intense ionospheric irregularities in this sector. Analysis of the ROTI maps for a quiet day of June 20, 2015 (Figs. 4a, 5a) revealed the very quiet situation over the polar regions in both hemispheres with rather weak irregularities occurring in the vicinity of the geomagnetic poles.
The first noticeable changes in the irregularities distribution pattern appeared after 07–08 UT on June 22, 2015, initiated by the second CME arrival and the first intensification of auroral activity (see Fig. 1). The most intense irregularities in both hemispheres were observed after 16 UT on 22 June. Very high ROTI values (>0.8–1 TECU/min) were found to form an oval-like structure around the northern geomagnetic pole. Further, the GNSS-derived irregularity oval expanded equatorward during several hours, and its equatorial edge was detected in the North American sector at ~45°N–50°N geographic latitude for more than 2–3 h. The highest ROTI intensity values in this oval-like feature occurred mainly over Northern Europe. We should also emphasize that the intense ionospheric irregularities were observed over Southern Europe at ~25°N–40°N geographic latitude during the main phase of the storm at 20-04 UT (Figs. 4; Additional file 2: S2, Additional file 3: S3). These irregularities were associated with the occurrence of plasma bite-outs and equatorial plasma bubbles in the postsunset sector (20-04 UT) over low latitudes of Western Africa after the prompt penetration electric fields at 18-20 UT on June 22, 2015 (for more details see Cherniak and Zakharenkova 2016b).
The ionospheric irregularities that occurred during the June 2015 geomagnetic storm, depicted by the combined GPS and GLONASS observations, had an impact on navigation system performance. The WAAS System Performance Analysis Report indicated that during June 22–23 a reduction was observed in the Localizer Performance with Vertical Guidance (LPV) and Localizer Performance with Vertical Guidance to 200 ft decision height (LPV200) coverage provided by WAAS in the continental US (CONUS), Alaska, and Canada (Wanner 2015). In these regions, strong ionospheric irregularities related to auroral particle precipitation were observed, as described in more detail in the next subsections. Moreover, the highly intense irregularities led to a performance degradation of the European Geostationary Navigation Overlay Service (EGNOS). It is very interesting to note that an impact of the ionospheric irregularities on GNSS performance in the European sector was observed not only at high latitudes (irregularities related to particle precipitation and ionospheric patch formation), but also over Southern Europe and the Mediterranean region (irregularities related to the storm-time plasma depletions of equatorial origin, i.e., plasma bubble development) (Cherniak and Zakharenkova 2016b).
At high latitudes, the generation and evolution of the ionospheric irregularities were associated with auroral particle precipitation after the CMEs' arrival and the further development of the main phase of this geomagnetic storm.
Figure 5 presents the evolution of ionospheric irregularities over the southern hemisphere. Here, it is also possible to estimate differences in the occurrence, intensity and location of the ionospheric irregularities. We note the occurrence of the high ROTI values close to the geomagnetic pole, which can be associated with the ionospheric irregularities generated by particle precipitation to the dayside cusp (e.g., Kelley et al. 1982; Weber et al. 1984). Ionospheric irregularities of such origin are usually developed even under the quiet geomagnetic conditions (see Fig. 5a).
One can recognize the pronounced intensifications and equatorward expansion of the irregularity zone. We should note that, owing to the essentially poorer GNSS data coverage over the southern hemisphere (where ocean areas predominate), such effects were observed in the limited longitude range of 30°E–170°E (mainly over GNSS stations in Antarctica, as well as stations of the New Zealand and Australia networks and islands in the Pacific Ocean). This limited coverage in the southern hemisphere does not allow the whole pattern of ionospheric irregularity behavior to be depicted with the 1-h ROTI maps in as much detail as in the northern hemisphere. Despite this limitation, the 1-h ROTI maps clearly revealed the evolution of the ionospheric irregularity zone with time. Figure 5b demonstrates the occurrence of a narrow oval-like or ring-like structure around the geomagnetic pole at 16 UT; this zone then expanded and covered the whole Antarctic continent (20 UT). Further, the irregularity zone expanded equatorward and reached New Zealand and Southern Australia, with much smaller ROTI values near the south magnetic pole (Fig. 5c, 04 UT). In general, the evolution of the irregularity oval is rather similar to that observed in the northern hemisphere. However, we should take into account the seasonal (winter to summer) differences between the hemispheres. Laundal and Østgaard (2009) explain this asymmetry in terms of inter-hemispheric currents related to seasons: the difference in ionospheric conductivity is expected to give rise to different auroral intensities in the two hemispheres, as is also expected when the IMF has significant Bx and By components. All those conditions were observed during the 22–23 June geomagnetic storm.
Meridional slices of the combined GPS and GLONASS ROTI maps
For the quiet day of June 20, 2015, the meridional slices of the northern hemisphere ROTI maps shown in Fig. 6b–e revealed an occurrence of ionospheric irregularities at high latitudes only within 70°–80° MLAT (close to the cusp region) in the American and Australian sectors, probably induced by soft particle precipitation. The first noticeable peak in the ROTI-derived irregularities distribution was recognized after ~06 UT on June 22, 2015, in all considered latitudinal sectors. This period corresponded to the second CME arrival at 05:45 UT, rapid changes of the SYM-H index and the first intensification of the auroral activity, represented by an AE index increase of ~1300 nT (see Fig. 6a). The next peak in ionospheric irregularities at high latitudes was observed at 15-17 UT. These processes were initiated by the IMF Bz southward turn and a further increase in auroral activity, when AE rose to ~1340 nT and SYM-H dropped to −70 nT. During this period, ionospheric irregularities were also registered simultaneously as far equatorward as 70° MLAT in North America and 65° MLAT in Europe (Fig. 6b, c).
The most intense irregularities at high and mid-latitudes were found to occur at 18-22 UT on 22 June and were associated with a new period of increased auroral activity, with two peaks of the AE index of ~2180 and ~2700 nT observed at 18:49 and 20:10 UT, respectively. During this period, the SYM-H increased to +88 nT and then dropped rapidly to −139 nT at a dramatic rate of change of about −130 nT/h. As a result, during this period the high-latitude irregularities were detected as far equatorward as 54° MLAT in North America and 45° MLAT in Europe. In the southern hemisphere, their signatures were found to extend equatorward to −55° MLAT in South America and −50° MLAT in the Australian sector (Fig. 6d, e). Additionally, we found that images from the SSUSI instrument onboard four DMSP satellites (available at http://ssusi.jhuapl.edu/data/edr-aur-anim//years/2015/173/EDR-AUR_LBHS_2015173.gif and placed as Additional file 4: S4) revealed an increase of the auroral activity on June 22, 2015, and an equatorward expansion of the aurora zone up to 50° MLAT during 18-22 UT.
During the development of the second main phase (01:50–05:40 UT on 23 June), the intense ionospheric irregularities were continuously registered for a longer period (4–5 h) and they covered a latitudinal range from the polar region to 55° MLAT in both sectors of the northern hemisphere (Fig. 6b, c) and to −50° MLAT in the southern hemisphere (Fig. 6d, e). Thus, signatures of the ionospheric irregularities, which were registered by the GPS and GLONASS signals and were analyzed by use of the meridional slice approach, reveal a strong linkage of their intensity and equatorward spatial expansion with auroral activity intensification, in particular represented by the AE and SYM-H indices. Such kind of analysis in the time-latitudinal domain allows us to estimate the principal dependencies of the onset of the ionospheric irregularities and their further development and evolution on space weather drivers. Future studies based on these approaches will allow to formalize these dependencies in the form of an empirical model of the ionospheric irregularities.
We can summarize that despite the unprecedented high number of stations deployed worldwide during the last 5–10 years, the high-latitude regions (above 60° MLAT) in both hemispheres depict a rather sparse coverage by the GPS and GLONASS ground-based observations compared to mid-latitudes. On the other hand, today the ground-based GNSS segment is the only data source able to provide multi-site ground-based observations with the best global coverage.
In this paper, we extend the use of the ROTI maps for analyzing ionospheric irregularities distribution. We demonstrate that the meridional slices of the ROTI maps can be effectively used to study the occurrence and temporal evolution of the ionospheric irregularities over selected geographical regions in quiet and especially geomagnetically disturbed periods. The meridional slices of geographical sectors characterized by a high density of the GPS and GLONASS measurements can represent spatio-temporal dynamics of the intense ionospheric plasma density irregularities with high resolution and they can be used for detailed studies of the space weather drivers on the processes of the ionospheric irregularities generation, their evolution and lifetimes.
We should emphasize that combination of the GPS and GLONASS signals allows to increase significantly the number of the transionospheric measurement links globally. As a result, it allows to improve the performance of the ionospheric irregularities monitoring in both the regions with sparse or dense permanent GNSS network coverage. In case of sparse networks (e.g., Northern Canada and Russia, Antarctica region and coastal zone in polar regions), the adjunction of the GLONASS-based measurements, due to the different constellation configuration as compare to the GPS one, allows to noticeably extend areas covered by the GNSS measurements and essentially increase a number of the available ionospheric piercing points. Particular benefits of GLONASS data at high latitudes can be earlier or better detection of the ionospheric disturbances related to the physical processes in the auroral region and polar cap, in particular through the combination with other instruments such as colocated magnetometers, all-sky cameras and coherent radars. As it is seen on Fig. 4, high and midlatitude areas in the American and European sectors are well covered by the combined GPS and GLONASS measurements without any significant “no data” gaps. For the regions with the dense GNSS networks, the extra use of the GLONASS data would increase a number of the available measurements by a factor of 1.5–2 as comparing with GPS only—for example, for the European region we can get ~1,700,000–1,800,000 IPPs per 1 h. So, we can potentially construct the regional ROTI maps with an unprecedentedly high resolution up to 0.5° × 0.5° in geographic latitude and longitude. Such detailed ROTI maps had been already successfully used for detection of the ionospheric irregularities related with the storm-induced plasma depletion signatures in Europe (Cherniak and Zakharenkova 2016b).
Using a representative database of ~5800 ground-based GNSS stations located worldwide, we have investigated the occurrence of the high-latitude ionospheric plasma density irregularities during the geomagnetic storm of June 23–23, 2015. For the first time, the high-resolution two-dimensional maps of ROTI perturbations were made using not only GPS but also GLONASS measurements.
We note that the current status of the GPS (US) system includes 32 satellites, while the GLONASS (Russia) system includes 24 satellites. The ongoing expansion of the GNSS system includes an increase (and/or renewal) in a satellite number for GPS and GLONASS, development of the European system Galileo (currently 15 satellites in orbit) and the Chinese system BeiDou (currently 18 satellites in orbit), as well as development of the regional Navigational Satellite Systems. Thus, more than 100 GNSS satellites could be available in the near future. This expansion will increase the number of GNSS radio signal ray passes simultaneously scanning the Earth’s ionosphere to an unprecedentedly high value! Signal diversity and redundant measurements, together with better geometry from multiple GNSS satellites, greatly improve the ability to refine the temporal and spatial resolution of the transionospheric measurements, as well as empirical and assimilative ionospheric models . It provides new opportunities to study the space weather impact on the ionosphere and GNSS navigation performance at a new level. Here, we presented the first results demonstrating the advantages of using several independent but compatible GNSS systems like GPS and GLONASS for improvement of the permanent monitoring of the high-latitude ionospheric irregularities.
IC designed this study, analyzed the data and wrote the manuscript. IZ developed software for data processing and helped in interpretation of the data. All coauthors contributed to the revision of the draft manuscript and improvement of the discussion. Both authors read and approved the final manuscript.
We acknowledge use of the raw GPS and GLONASS data provided by IGS (ftp://cddis.gsfc.nasa.gov), UNAVCO (ftp://data-out.unavco.org), NOAA CORS (ftp://geodesy.noaa.gov/cors), SOPAC (ftp://garner.ucsd.edu), EPN (ftp://olggps.oeaw.ac.at), BKGE (ftp://igs.bkg.bund.de/euref/obs), IGN (ftp://rgpdata.ign.fr), SWEPOS (swepos.lantmateriet.se), FGI-FinnRef (euref-fin.fgi.fi), NOANET (www.gein.noa.gr), Natural Resources Canada (webapp.geod.nrcan.gc.ca), CHAIN (ftp://chain.physics.unb.ca/gps/), RBMC (ftp://geoftp.ibge.gov.br/RBMC/), RAMSAC CORS of NGI of Argentina (www.igm.gov.ar/NuestrasActividades/Geodesia/Ramsac/), ARGN (ftp://ftp.ga.gov.au) and NZ CORS (ftp://geonet.org.nz) GNSS networks. We also thank IGS and CODE for providing GPS products (orbits, biases). The authors thank the NASA/GSFC’s Space Physics Data Facility’s OMNIWeb service, for providing OMNI data (http://omniweb.gsfc.nasa.gov/ow_min.html). We gratefully acknowledge the JHU/APL team for providing the DMSP SSUSI data products (http://ssusi.jhuapl.edu).
The authors declare that they have no competing interests.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open AccessThis article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
- Aarons J (1997) Global positioning system phase fluctuations at auroral latitudes. J Geophys Res 102:17219–17231. doi:10.1029/97JA01118 View ArticleGoogle Scholar
- Aarons J, Lin B (1999) Development of high latitude phase fluctuations during the January 10, April 10–11, and May 15, 1997 magnetic storms. J Atmos Sol-Terr Phys 61:309–327View ArticleGoogle Scholar
- Baker DN et al (2016) Highly relativistic radiation belt electron acceleration, transport, and loss: large solar storm events of March and June 2015. J Geophys Res Space Phys 121:6647–6660. doi:10.1002/2016JA022502 View ArticleGoogle Scholar
- Basu S, MacKenzie E, Basu Su (1988) Ionospheric constraints on VHF/UHF communications links during solar maximum and minimum periods. Radio Sci 23(3):363–378. doi:10.1029/RS023i003p00363 View ArticleGoogle Scholar
- Blewitt G (1990) An automatic editing algorithm for GPS data. Geophys Res Lett 17:199–202View ArticleGoogle Scholar
- Cherniak I, Zakharenkova I (2015) Dependence of the high-latitude plasma irregularities on the auroral activity indices: a case study of 17 March 2015 geomagnetic storm. Earth Planets Space. doi:10.1186/s40623-015-0316-x Google Scholar
- Cherniak I, Zakharenkova I (2016a) High-latitude ionospheric irregularities: differences between ground- and space-based GPS measurements during the 2015 St. Patrick’s Day storm. Earth Planets Space 2016(68):136. doi:10.1186/s40623-016-0506-1 View ArticleGoogle Scholar
- Cherniak I, Zakharenkova I (2016b) First observations of super plasma bubbles in Europe. Geophys Res Lett. doi:10.1002/2016GL071421 Google Scholar
- Gurtner W (1994) RINEX: the receiver-independent exchange format. GPS World 4:48–52Google Scholar
- Hofmann-Wellenhof B (2001) Global positioning system: theory and practice. Springer, New-YorkView ArticleGoogle Scholar
- Hofmann-Wellenhof B, Lichtenegger H, Wasle E (2008) GNSS—global navigation satellite systems: GPS, GLONASS, Galileo, and more. Springer, Wien. doi:10.1007/978-3-211-73017-1 Google Scholar
- Horvath I, Crozier S (2007) Software developed for obtaining GPS-derived total electron content values. Radio Sci. 42:RS2002. doi:10.1029/2006RS003452 View ArticleGoogle Scholar
- ICD-GLONASS (2008) Global navigation satellite system GLONASS interface control document, version 5.1. Russian Institute of Space Device Engineering, Moscow, RussiaGoogle Scholar
- Jacobsen KS (2014) The impact of different sampling rates and calculation time intervals on ROTI values. J Space Weather Space Clim 4:A33. doi:10.1051/swsc/2014031 View ArticleGoogle Scholar
- Jacobsen KS, Andalsvik YL (2016) Overview of the 2015 St. Patrick’s day storm and its consequences for RTK and PPP positioning in Norway. J Space Weather Space Clim 6:A9. doi:10.1051/swsc/2016004 View ArticleGoogle Scholar
- Jakowski N, Stankov SM, Klaehn D, Becker C (2006) SWACI—a near-real time ionosphere service based on Ntrip technology. In: Proceedings of the Ntrip Symposium and Workshop “Streaming GNSS data via internet” BKG. Frankfurt am MainGoogle Scholar
- Jakowski N, Mielich J, Borries C, Cander L, Krankowski A, Nava B, Stankov SM (2008) Large-scale ionospheric gradients over Europe observed in October 2003. J Atmos Sol-Terr Phys 70:1894–1903. doi:10.1016/j.jastp.2008.03.020 View ArticleGoogle Scholar
- Jakowski N, Béniguel Y, De Franceschi G, Hernandez-Pajares M, Jacobsen KS, Stanislawska I, Tomasik L, Warnant R, Wautelet G (2012) Monitoring, tracking and forecasting ionospheric perturbations using GNSS techniques. J Space Weather Space Clim 2:A22. doi:10.1051/swsc/2012022 View ArticleGoogle Scholar
- Jeffrey C (2015) An introduction to GNSS: GPS, GLONASS, Galileo and other global navigation satellite systems. NovAtel Publisher, Calgary, CanadaGoogle Scholar
- Kelley MC, Vickrey JF, Carlson CW, Torbert R (1982) On the origin and spatial extent of high-latitude F region irregularities. J Geophys Res 87(A6):4469–4475View ArticleGoogle Scholar
- Keskinen MJ, Ossakow SL (1983) Theories of high-latitude ionospheric irregularities: a review. Radio Sci 18:1077–1091. doi:10.1029/RS018i006p01077 View ArticleGoogle Scholar
- Laundal KM, Østgaard N (2009) Asymmetric auroral intensities in the Earth’s Northern and Southern hemispheres. Nature 460:491–493. doi:10.1038/nature08154 View ArticleGoogle Scholar
- Melbourne WG (1985) The case for ranging in GPS based geodetic systems. In: Proceedings of the 1st international symposium on precise positioning with the global positioning system, Rockville, pp 373–386Google Scholar
- Miyake W, Jin H (2010) Near-real time monitoring of TEC Over Japan at NICT (RWC Tokyo OF ISES). In: Advances in geosciences A 6-Volume Set Volume 21: Solar Terrestrial (ST). Published by World Scientific Publishing, SingaporeGoogle Scholar
- NICT (2016) Service NICT GEONET quasi-realtime TEC maps over JAPAN (segweb.nict.go.jp/GPS/QR_GEONET/index_e.html). As Accessed on 15 Aug 2016Google Scholar
- Pi X, Mannucci AJ, Lindqwister UJ, Ho CM (1997) Monitoring of global ionospheric irregularities using the worldwide GPS network. Geophys Res Lett 24:2283View ArticleGoogle Scholar
- Prikryl P, Jayachandran PT, Mushini SC, Richardson IG (2012) Toward the probabilistic forecasting of high-latitude GPS phase scintillation. Space Weather 10:S08005. doi:10.1029/2012SW000800 View ArticleGoogle Scholar
- Prikryl P, Jayachandran PT, Mushini SC, Richardson IG (2014) High-latitude GPS phase scintillation and cycle slips during high-speed solar wind streams and interplanetary coronal mass ejections: a superposed epoch analysis. Earth Planets Space 66:62. doi:10.1186/1880-5981-66-62 View ArticleGoogle Scholar
- Russian IAC PNT (Information and Analysis Center for Positioning, Navigation and Timing) (2016) FTP server (ftp://ftp.glonass-iac.ru/MCC/ALMANAC/). As Accessed on 15 Aug 2016
- SWACI (2016) The Space Weather Application Center Ionosphere (http://www.swaciweb.dlr.de/data-and-products/?no_cache=1&L=1/). As Accessed on 15 Aug 2016
- Tiwari R, Strangeways HJ, Tiwari S, Ahmed A (2013) Investigation of ionospheric irregularities and scintillation using TEC at high latitude. Adv Space Res 52:1111–1124. doi:10.1016/j.asr.2013.06.010 View ArticleGoogle Scholar
- Tsunoda RT, Haggstrom I, Pellinen-Wannberg A, Steen A, Wannberg G (1985) Direct evidence of plasma density structuring in the auroral F region ionosphere. Radio Sci 20:762–784View ArticleGoogle Scholar
- Valladares CE, Villalobos J, Sheehan R, Hagan MP (2004) Latitudinal extension of low-latitude scintillations measured with a network of GPS receivers. Ann Geophys 22:3155–3175. doi:10.5194/angeo-22-3155-2004 View ArticleGoogle Scholar
- van der Meeren C, Oksavik K, Lorentzen D, Moen JI, Romano V (2014) GPS scintillation and irregularities at the front of an ionization tongue in the nightside polar ionosphere. J Geophys Res Space Phys 119:8624–8636. doi:10.1002/2014JA020114 View ArticleGoogle Scholar
- Wanner B (2015) DR #127: effect on WAAS from Iono Activity on March 17–18, 2015, WAAS Technical Report at the WAAS Test Team web-page, 2015. Accessed 03 April 2017. http://www.nstb.tc.faa.gov/Discrepancy%20Reports%20PDF/DR%20127%20Effect%20on%20WAAS%20from%20Iono%20Activity%20March%2017%202015.pdf
- Weber EJ, Buchau J, Moore J, Sharber J, Livingston R, Winningham J, Reinisch B (1984) F layer ionization patches in the polar cap. J Geophys Res 89(A3):1683–1694View ArticleGoogle Scholar
- Wübbena G (1985) Software developments for geodetic positioning with GPS using TI 4100 code and carrier measurements. In: Proceedings of the 1st international symposium on precise positioning with the global positioning system. Rockville, pp 403–412Google Scholar | <urn:uuid:81c2b34c-1e4b-4b96-ad88-7cc02890217e> | 2.625 | 8,160 | Truncated | Science & Tech. | 43.762627 | 95,577,268 |
The outbursts, known as terrestrial gamma-ray flashes (TGFs), last only a few thousandths of a second, but their gamma rays rank among the highest-energy light that naturally occurs on Earth. The enhanced GBM discovery rate helped scientists show most TGFs also generate a strong burst of radio waves, a finding that will change how scientists study this poorly understood phenomenon.
Before being upgraded, the GBM could capture only TGFs that were bright enough to trigger the instrument's on-board system, which meant many weaker events were missed.
"In mid-2010, we began testing a mode where the GBM directly downloads full-resolution gamma-ray data even when there is no on-board trigger, and this allowed us to locate many faint TGFs we had been missing," said lead researcher Valerie Connaughton, a member of the GBM team at the University of Alabama in Huntsville (UAH). She presented the findings Wednesday in an invited talk at the American Geophysical Union meeting in San Francisco. A paper detailing the results is accepted for publication in the Journal of Geophysical Research: Space Physics.
The results were so spectacular that on Nov. 26 the team uploaded new flight software to operate the GBM in this mode continuously, rather than in selected parts of Fermi's orbit.
Connaughton's team gathered GBM data for 601 TGFs from August 2008 to August 2011, with most of the events, 409 in all, discovered through the new techniques. The scientists then compared the gamma-ray data to radio emissions over the same period.
Lightning emits a broad range of very low frequency (VLF) radio waves, often heard as pop-and-crackle static when listening to AM radio. The World Wide Lightning Location Network (WWLLN), a research collaboration operated by the University of Washington in Seattle, routinely detects these radio signals and uses them to pinpoint the location of lightning discharges anywhere on the globe to within about 12 miles (20 km).
Scientists have long known that TGFs were linked to strong VLF bursts, but they interpreted these signals as originating from lightning strokes somehow associated with the gamma-ray emission.
"Instead, we've found when a strong radio burst occurs almost simultaneously with a TGF, the radio emission is coming from the TGF itself," said co-author Michael Briggs, a member of the GBM team.
The researchers identified much weaker radio bursts that occur up to several thousandths of a second before or after a TGF. They interpret these signals as intracloud lightning strokes related to, but not created by, the gamma-ray flash.
Scientists suspect TGFs arise from the strong electric fields near the tops of thunderstorms. Under certain conditions, the field becomes strong enough that it drives a high-speed upward avalanche of electrons, which give off gamma rays when they are deflected by air molecules.
"What's new here is that the same electron avalanche likely responsible for the gamma-ray emission also produces the VLF radio bursts, and this gives us a new window into understanding this phenomenon," said Joseph Dwyer, a physics professor at the Florida Institute of Technology in Melbourne, Fla., and a member of the study team.
Because the WWLLN radio positions are far more precise than those based on Fermi's orbit, scientists will develop a much clearer picture of where TGFs occur and perhaps which types of thunderstorms tend to produce them.
The GBM scientists predict the new operating mode and analysis techniques will allow them to catch about 850 TGFs each year. While this is a great improvement, it remains a small fraction of the roughly 1,100 TGFs that fire up each day somewhere on Earth, according to the team's latest estimates.
Likewise, TGFs detectable by the GBM represent just a small fraction of intracloud lightning, with about 2,000 cloud-to-cloud lightning strokes for every TGF.
The Fermi Gamma-ray Space Telescope is an astrophysics and particle physics partnership and is managed by NASA's Goddard Space Flight Center in Greenbelt, Md. Fermi was developed in collaboration with the U.S. Department of Energy, with important contributions from academic institutions and partners in France, Germany, Italy, Japan, Sweden and the United States.
The GBM Instrument Operations Center is located at the National Space Science Technology Center in Huntsville, Ala. The GBM team includes a collaboration of scientists from UAH, NASA's Marshall Space Flight Center in Huntsville, the Max Planck Institute for Extraterrestrial Physics in Germany and other institutions.

Francis Reddy
Lynn Chandler | EurekAlert!
Skipping Science: An Experiment in Jump Rope Lengths
| Time Required | Short (2-5 days) |
| Prerequisites | Know how to jump rope or be willing to learn |
| Material Availability | Readily available |
| Cost | Low ($20-$50) |
Abstract

Did you know that the United States jump rope record (as of 2017) for the greatest number of jumps in a minute is 372? That's more than six jumps a second! How close do you think you can get to that number? If you are going to try to break the record, it might be important to figure out how jump rope length affects your success. Try your hand at this skipping science fair project and jump-start your chances for a jump rope record. If you have a smartphone available, you can use it to measure how fast you jump with Google's Science Journal app.
Determine the best length for a jump rope.
Sandra Slutz, PhD, Science Buddies
Edited by Ben Finio, PhD, Science Buddies
This science fair project was inspired by this DragonflyTV podcast:
- TPT. (2006). Double Dutch by Francesca, Precious, and Marnicka. DragonflyTV, Twin Cities Public Television. Retrieved October 29, 2008, from http://pbskids.org/dragonflytv/show/doubledutch.html
Cite This Page

General citation information is provided here. Be sure to check the formatting, including capitalization, for the method you are using and update your citation, as needed.
Last edit date: 2018-06-30
Did you know that jumping rope is great exercise? Professional boxers do it to improve their coordination, which is the ability to make smooth and accurate movements involving different body parts, and to improve their endurance, which is the length of time for which someone can do a physical activity without stopping.
Plus, jumping rope can be a lot of fun! That's easy to see in the DragonflyTV video on the right, where Francesca, Precious, Marnicka, and their friends show off their double-Dutch skills while investigating the science of jumping rope. In double Dutch, there are two jump ropes being turned, by two people, while one or more people jump the ropes while doing tricks. One of the hard parts is knowing when the ropes are coming, which made Francesca, Precious, and Marnicka decide to investigate whether it was hearing the ropes or seeing the ropes that made them able to be successful at double Dutch. What do you think their experiment revealed? Watch the video to find out, and to see all their great jumping tricks!
In addition to jump rope tricks, there are also competitions for speed jumping. In 2017, the United States record for the most jumps per minute was 372! How many jumps per minute can you make? Do you think that the length of the jump rope might change how many jumps you could make in a minute? The longer the rope, the more time it takes to turn it in a full circle. Shorter ropes turn faster, but because the circle is smaller, you might have to jump higher to get over the rope, and that might slow you down or cause you to make a mistake. So, to help you get started on your own personal best jumps-per-minute count, in this science fair project you will determine the best jump rope length and get a scientific jump on your competition!
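You can make the trade-off described above concrete with a rough back-of-the-envelope model. The sketch below assumes, hypothetically, that a jumper can swing the rope tip at a fixed speed no matter the rope length, so each turn takes time proportional to the circumference of the swing circle; the radius and speed values are made up for illustration.

```python
import math

def max_jumps_per_minute(rope_radius_m, tip_speed_m_per_s=8.0):
    """Rough upper bound on jump rate for a given rope length.

    Assumes (hypothetically) the rope tip travels at a constant
    tip_speed_m_per_s and that the jumper makes one jump per revolution.
    """
    # Distance the rope tip travels in one full turn
    circumference = 2 * math.pi * rope_radius_m
    seconds_per_turn = circumference / tip_speed_m_per_s
    return 60 / seconds_per_turn

# A shorter rope (smaller swing circle) allows more turns per minute:
for radius in (0.8, 1.0, 1.2):  # metres from hands to rope tip (example values)
    print(f"radius {radius} m -> about {max_jumps_per_minute(radius):.0f} jumps/min")
```

Of course, this simple model ignores the fact that a smaller circle forces you to jump higher, which is exactly the trade-off this project measures.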
Terms and Concepts

- Coordination
- Endurance

Optional terms for students using Google's Science Journal to collect data:

- Velocity
- Periodic
- Acceleration
- Accelerometer
- Why is jumping rope a good exercise?
- Why does it take more time to complete a full circle when swinging a long jump rope than a short jump rope?
This document shows US jump rope records in various categories:
- USA Jump Rope (2017). Current 2017 USA Jump Rope National Records. Retrieved August 18, 2017, from https://usajumprope.org/UserFiles/Records%20and%20Results/2016%20Nationals%20Results/2017%20Current%20National%20Records.FINAL2.pdf
This website has more information about jump rope as exercise and how to perform different jump rope tricks and skills:
- Skip-Hop. (n.d.). Learning to Skip. Retrieved August 18, 2017, from http://www.skip-hop.co.uk/learning-to-skip-c82.html
For help creating graphs, try this website:
- National Center for Education Statistics. (n.d.). Create a Graph. Retrieved October 29, 2008, from http://nces.ed.gov/nceskids/CreateAGraph/default.aspx
To learn more about Google's Science Journal app, visit the website below:
- Google (n.d.). Getting Started with Science Journal. Google Making & Science. Retrieved August 31, 2017 from https://makingscience.withgoogle.com/science-journal/activities/activity-getting-started?lang=en#getting-started-with-science-journal
Materials and Equipment
- Jump ropes (one 8-foot rope and one 10-foot rope); available at sporting goods stores and on Amazon.com
- Volunteers who know how to jump rope (3, including yourself)
- Lab notebook
- Graph paper
- With option 1 in procedure: Stopwatch or watch with a second hand
- With option 2 in procedure: A smartphone to record your data
This project uses Google's Science Journal app, a free app that allows you to gather and record data with a cell phone. You can download the app from Google Play for Android devices (version 4.4 or newer) or from the App Store for iOS devices (iOS 9.3 or newer).
- See the option 2 section at the end of the procedure for instructions to use the phone in this project.
Note: In this project, you will determine what length jump rope allows people to jump the fastest by measuring their jumps per minute. There are two different methods to do this. In one method, you can have someone count your jumps using a stopwatch. In the second method, you can use a phone with Google's Science Journal app to make a graph of your jumping motion, and count the number of peaks in the graph. The instructions for both methods are in the Option 1 and Option 2 sections below.
Option 1: Using the Stopwatch
To start this project, you will need to find three people who know how to jump rope. You will each be jumping rope by yourselves—not double Dutch for this experiment.
- You can include yourself as one of the three people.
- If you or one of your friends would like to take part in the experiment but do not know how to jump rope, check out the resources in the Bibliography in the Background section for some methods you could use.
Fold the 8-foot-long jump rope in half to find the midway point. Have the jumper stand on this point with both feet, put a handle in each hand, and pull the handles straight up along his or her sides. Have a helper shorten the jump rope, using the following directions, until the handles are between the jumper's belly button and armpits. This is the short jump rope length.
- To make the jump rope shorter, the helper should tie knots just beneath the handles. Try to tie the same number of knots beneath each handle. Tie as many knots as needed to make the rope the right length.
- If the 8-foot jump rope is too short to reach midway between the jumper's belly button and armpits, use the 10-foot-long jump rope instead.
When the jump rope is at the right length and the jumper is ready to begin jumping, three things need to happen:
- The jumper should yell "Go!" and begin jumping.
- As soon as the jumper says "Go!", a second person should start the stopwatch.
- A third person should count the number of successful jumps over the rope the jumper makes.
The jumper should continue to jump rope for 1 minute, at which point the person with the stopwatch should yell "Stop!" so that the jumper and the counter both know to stop their tasks.
- If the jumper "messes up," the stopwatch should not stop. The jumper should continue jumping rope, time continues, and the person counting should keep counting up instead of restarting the count. For example, if after 10 successful jumps, the rope hits the jumper's foot and he or she has to restart, the counter should count the next successful jump as number 11.
- Record the number of successful jumps in a data table like Table 1 in your lab notebook.
Table 1. Data table for recording one jumper's successful jumps per minute.

| Jump Rope Length | Trial #1 | Trial #2 | Trial #3 | Average |
|------------------|----------|----------|----------|---------|
| Short            |          |          |          |         |
| Medium           |          |          |          |         |
| Long             |          |          |          |         |
- Once the jumper has rested long enough to catch his or her breath, he or she should repeat steps 3–5 twice more for a total of three trials with that jump rope length.
- Using the same method as in step 2, re-adjust the jump rope length so that the tips of the handles are now just barely brushing the same jumper's armpits. This is the medium jump rope length.
- The jumper should repeat steps 3-6 using the medium jump rope length. Record the number of successful jumps in the data table.
- Now, using the same method as in step 2, re-adjust the jump rope length so that the tips of the handles just barely brush the jumper's chin. This is the long jump rope length.
- The jumper should repeat steps 3-6 using the long jump rope length. Record the number of successful jumps in the data table.
- Repeat the whole procedure (steps 2-11) for the other two jumpers. Remember to record the number of successful jumps in the data table.
For each jumper, calculate the average number of successful jumps for each jump rope length.
- For example, to calculate the average number of successful jumps that jumper #1 made using the short jump rope, add up the data for trial #1, trial #2, and trial #3, then divide by the total number of trials (which is 3).
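The averaging step above is simple enough to automate. As a sketch, here is how it might look in Python — the trial counts below are made-up example numbers, not real data:

```python
# Average number of successful jumps per rope length for one jumper.
# The trial values here are hypothetical examples, not measured data.
trials = {
    "short":  [52, 48, 50],
    "medium": [60, 62, 58],
    "long":   [45, 47, 46],
}

# Average = (trial 1 + trial 2 + trial 3) / 3, exactly as described above.
averages = {length: sum(counts) / len(counts) for length, counts in trials.items()}

for length, avg in averages.items():
    print(f"{length}: {avg:.1f} jumps per minute")
# short: 50.0, medium: 60.0, long: 46.0
```

With these example numbers, the medium rope comes out best — your own data may of course look different.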
Using the graph paper, make three bar graphs, one for each jumper, showing the average number of successful jumps for each jump rope length.
- Label each bar so you know what it represents.
- If you prefer to make your bar chart on the computer, try using Create a Graph.
- Look at your graphs. For each jumper, which jump rope length resulted in the most successful jumps over the rope in 1 minute? Which jump rope length was least successful? Was it the same for each jumper?
Option 2: Using the Science Journal App
What if you wanted to take a more scientific measurement of your jumping motion? What could you measure? One thing scientists measure about moving objects is their velocity, or their speed and direction. When you jump up and down, your velocity changes over and over again as you slow down and speed up. Scientists describe this type of repetitive motion as periodic. A change in velocity is called acceleration. Sometimes it is easier to measure acceleration than velocity. Scientists measure acceleration using a device called an accelerometer. Accelerometers are built in to many smartphones and video game controllers to give them motion controls. They allow games to respond to motion when you tilt or shake the controller.
You can use an app called Science Journal to record data with your phone's accelerometer. To learn how to measure acceleration and how to record data with the app, review the relevant tutorials on this Science Journal tutorial page. Then, try out this procedure:
- Figure out how to mount the phone to your waist, hip, or torso while jumping rope. You could put the phone in your back pocket or use a phone belt clip. The phone should be tightly held to your body so it does not slide or bounce around.
- Depending on how the phone is attached to your body, open either the X or Y accelerometer. You want to measure up-and-down acceleration while you are jumping. So, for example, if the phone is vertical in your back pocket, you should use the Y accelerometer. If the phone is sideways in a belt clip, use the X accelerometer.
- Practice recording acceleration while jumping. You will need to press the record button, attach the phone to your body, jump rope for slightly more than a minute, detach the phone, and press the record button again to stop recording.
- Use the "crop" feature to trim your data to exactly the one minute during which you were jumping rope. Make sure you crop off the parts at the beginning and end of the recording when you were handling the phone, which may look irregular or spiky on the graph. You only want to keep the middle portion, when you were jumping, which should show a regular pattern like in Figure 1.
Figure 1. An example graph that shows data recorded with Google's Science Journal while jumping rope. The x-axis of the graph shows time in minutes:seconds and the y-axis shows acceleration in meters per second squared. Each peak in the graph indicates one jump. This graph shows 24 peaks in a 10 second period, so a total of 144 jumps per minute (24×6=144).
- Look at the graph of your acceleration. The graph should be periodic (the same pattern repeats over and over). Each repetition of the same pattern, or period, represents one complete jump. If you count the number of peaks that occur in one minute on the graph, that will tell you how many times you jumped in one minute. You may see smaller bumps or flat parts in the graph if you messed up and had to start over. Only count complete jumps.
- If it is too difficult to count the number of peaks in a one-minute graph, try recording data for a shorter amount of time. For example, you can record for 10 seconds, count the peaks, and then multiply by 6 to calculate the equivalent number of jumps per minute.
- Once you have practiced recording data while jumping rope and counting the number of jumps using the graph, follow the same procedure described in the "Option 1" section of this experiment. However, use the graph recorded by the Science Journal app for each trial to count the number of jumps per minute, instead of having a helper use a stopwatch.
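If your app lets you export the accelerometer readings, the peak counting described above can also be automated. The sketch below uses a made-up synthetic signal (a 2.4 Hz sine wave, i.e. 24 "jumps" in 10 seconds, matching the example in Figure 1) in place of real Science Journal data, and a simple three-point local-maximum test:

```python
import numpy as np

# Synthetic stand-in for 10 s of accelerometer data sampled at 100 Hz:
# a 2.4 Hz sine wave, i.e. 24 "jumps" in 10 seconds. Real exported data
# would be noisier and might need smoothing first.
rate = 100                        # samples per second (assumed)
t = np.arange(0, 10, 1 / rate)
signal = np.sin(2 * np.pi * 2.4 * t)

# Count strict local maxima: each peak corresponds to one jump.
peaks = np.sum((signal[1:-1] > signal[:-2]) & (signal[1:-1] > signal[2:]))

jumps_per_minute = peaks * 6      # 10 s of data, so multiply by 6
print(peaks, jumps_per_minute)    # 24 144
```

This reproduces the arithmetic in the Figure 1 caption (24 peaks in 10 seconds, 24 × 6 = 144 jumps per minute).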
Communicating Your Results: Start Planning Your Display Board
If you like this project, you might enjoy exploring these related careers:
Industrial Engineer
You've probably heard the expression "build a better mousetrap." Industrial engineers are the people who figure out how to do things better. They find ways that are smarter, faster, safer, and easier, so that companies become more efficient, productive, and profitable, and employees have work environments that are safer and more rewarding. You might think from their name that industrial engineers just work for big manufacturing companies, but they are employed in a wide range of industries, including the service, entertainment, shipping, and healthcare fields. For example, nobody likes to wait in a long line to get on a roller coaster ride, or to get admitted to the hospital. Industrial engineers tell companies how to shorten these processes. They try to make life and products better. Finding ways to do more with less is their motto.
- Does jump rope length also affect the number of mess-ups? Keep track of both the successful jumps and the misses and plot them both. Are the two numbers related? Hint: a fourth person may be needed to keep track of the number of mess-ups.
- Also try this experiment using different jump rope tricks, instead of just plain jumps over the rope. Does length have more of an effect on tricks than on plain jumps?
- Design an experiment to find the best jump rope length for double Dutch.
- In the DragonflyTV video in the Introduction, Francesca, Precious, and Marnicka jumped rope to different music with different beats. Try jumping rope to slow music, fast music, and no music. Does the music change how many successful jumps you can make in a minute? How about the number of successful jumps you can make in a row without messing up?
- Can jumping rope help you on a spelling test? Randomly assign volunteers to two groups: One group will copy down 10 words from a spelling list with pen and paper. The other group will work with a partner who will call out the word and the spelling first, and then the jumper will repeat the word and each letter back and "jump out" each letter of the words—one jump for each letter. Test your volunteers the next day with the spelling list (have them spell out with pen and paper each word that they practiced the day before) and see which group has the best scores, on average.
Ask an Expert
The Ask an Expert Forum is intended to be a place where students can go to find answers to science questions that they have been unable to find using other resources. If you have specific questions about your science fair project or science fair, our team of volunteer scientists can help. Our Experts won't do the work for you, but they will make suggestions, offer guidance, and help you troubleshoot.
Chemistry is the scientific discipline involved with compounds composed of atoms, i.e. elements, and molecules, i.e. combinations of atoms.
In the scope of its subject, chemistry occupies an intermediate position between physics and biology. It is sometimes called the central science because it provides a foundation for understanding both basic and applied scientific disciplines at a fundamental level. The history of chemistry spans a period from ancient times to the present: since several millennia BC, civilizations have used technologies that would eventually form the basis of the various branches of chemistry. The word chemistry comes from alchemy, which referred to an earlier set of practices that encompassed elements of chemistry, metallurgy, philosophy, astrology, astronomy, mysticism and medicine.
In origin, the term is borrowed from the Greek χημία or χημεία. The current model of atomic structure is the quantum mechanical model. A chemical reaction is a transformation of some substances into one or more different substances. The basis of such a chemical transformation is the rearrangement of electrons in the chemical bonds between atoms. Energy and entropy considerations are invariably important in almost all chemical studies.
Chemical substances are classified in terms of their structure, phase, and chemical composition. The atom is the basic unit of chemistry. It consists of a dense core called the atomic nucleus surrounded by a space hosting an electron cloud. A chemical element is a pure substance which is composed of a single type of atom, characterized by its particular number of protons in the nuclei of its atoms, known as the atomic number and represented by the symbol Z. The mass number is the sum of the number of protons and neutrons in a nucleus.
The standard presentation of the chemical elements is in the periodic table, which orders elements by atomic number. The periodic table is arranged in groups, or columns, and periods, or rows. A compound is a pure chemical substance composed of more than one element. The properties of a compound bear little similarity to those of its elements. A molecule is the smallest indivisible portion of a pure chemical substance that has its unique set of chemical properties, that is, its potential to undergo a certain set of chemical reactions with other substances. Thus, molecules exist as electrically neutral units, unlike ions.
When this rule is broken, giving the "molecule" a charge, the result is sometimes named a molecular ion or a polyatomic ion. However, not all substances or chemical compounds consist of discrete molecules; indeed, most of the solid substances that make up the solid crust, mantle, and core of the Earth are chemical compounds without molecules. One of the main characteristics of a molecule is its geometry, often called its structure. A chemical substance is a kind of matter with a definite composition and set of properties. A collection of substances is called a mixture.

The mole is defined as the number of atoms found in exactly 0.012 kilogram (12 grams) of carbon-12, where the carbon-12 atoms are unbound, at rest and in their ground state.

In addition to the specific chemical properties that distinguish different chemical classifications, chemicals can exist in several phases. Physical properties, such as density and refractive index, tend to fall within values characteristic of the phase. The phase of matter is defined by the phase transition, which is when energy put into or taken out of the system goes into rearranging the structure of the system, instead of changing the bulk conditions.
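The mole definition above lends itself to a quick worked example. The sketch below (plain Python) counts the molecules in one molar mass of water; the molar mass used is an approximation, and the Avogadro constant is the exact value fixed by the 2019 SI redefinition:

```python
# How many molecules are in 18.015 g of water?
AVOGADRO = 6.02214076e23   # particles per mole (exact, 2019 SI definition)

molar_mass_water = 18.015  # g/mol, approximate: 2 * 1.008 + 15.999
sample_mass = 18.015       # g, chosen to equal one molar mass

moles = sample_mass / molar_mass_water  # 1.0 mol
molecules = moles * AVOGADRO            # one Avogadro's number of molecules
print(moles, molecules)
```

One molar mass of any pure substance contains Avogadro's number of its constituent particles — that is the practical content of the definition.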
Marine bacteria in the wild organize into professions or lifestyle groups that partition many resources rather than competing for them, so that microbes with one lifestyle, such as free-floating cells, flourish in proximity with closely related microbes that may spend life attached to zooplankton or algae.
This new information about microbial groups and the methodology behind it could change the way scientists approach the classification of microbes by making it possible to determine on a large scale, relatively speaking, the genetic basis for ecological niches. Microbes drive almost all chemical reactions in the ocean; it’s important to identify the specific professions held by different groups.
“This is the first method to accurately differentiate the ecological niche or profession among large groups of microbes in the ocean,” said Professor Martin Polz, a microbiologist in MIT’s Department of Civil and Environmental Engineering. He and colleague Professor Eric Alm, a computational biologist, published a paper describing their research in the May 23 issue of Science.
The nature of reproduction in microbes makes it impossible to define populations based on the ability of individuals within a species to share genes, as we do with larger animals. It’s only by determining bacteria’s ecological niche that scientists can classify them into populations. But microbes don’t live in natural population groups when cultured in a lab. So scientists must catch bacteria in the wild, then examine them genetically to determine their lifestyle.
“Most methods in use either over or underestimate greatly the number of microbial populations in a sample, leading either to a confusing array of populations, or a few large, but extremely diverse groups,” said Polz. “Eric’s method takes genetic information and groups the microbes into genetically distinct populations based on their preference for different habitats. Although this sounds like a simple problem, it is exceedingly difficult with microbes, because we have no species concept that would allow us to identify the genetic structure expected for populations. Microbial habitats differ on such small scales that they are invisible to us.”
Polz and former graduate student Dana Hunt, now a postdoctoral researcher at the University of Hawaii, created a large and accurate genetic data set by isolating and identifying over 1,000 strains of vibrio bacteria from a sample of eight liters of seawater gathered near Plum Island, Mass., in the spring and fall. To achieve accuracy in their identification of strains, they selected a gene whose molecular clock—the rate at which a gene accumulates random mutations over time—was well-suited to the task.
“The trick in many ways is choosing a gene that has a molecular clock that ticks at the right rate,” said Polz. “In particular, if it’s too slow, you might lump organisms into a single group that you would actually like to differentiate. We chose a gene that accumulates mutations fairly fast and thus allowed us to differentiate closely related groups of individuals and map the ecological data we collected onto their family tree.”
Alm and graduate student Lawrence David wrote an algorithm to make a conservative estimate of the minimum number of different habitats occupied by the vibrios (whether they live on small or large particles and thrive in the cool or warm months, etc.). They then combined information about habitat with phylogeny (the evolutionary history of groups of genes), and apportioned the original strains into 25 distinct populations and mapped their habitats back to a common ancestor, showing when and how each group diverged from the ancestral lifestyle.
“What is really new about our approach is that we were able to combine both molecular data (DNA sequences) with ecological data in a single mathematical framework,” said Alm. “This allowed us to solve the inverse problem of taking samples of organisms from different environments and figuring out their underlying habitats. In essence, we modeled the evolution of a microbe’s lifestyle over millions of years.”
One splendid example of the difficulty of applying the term “species” to a single-celled creature: 17 of those 25 populations are called V. splendidus, a name that was previously assigned to them based on classical taxonomic techniques. Alm and Polz can see now that V. splendidus has differentiated into several ecological populations.
Alm and Polz believe they caught at least one of those V. splendidus populations in the act of switching from one ecological niche (thriving on zooplankton) toward a new niche (attaching to small organic particles). Of course, this process takes millions of years, so the current population of scientists may never know for certain.
Denise Brehm | EurekAlert!
Geotail satellite (artist's concept)
|Mission type||Earth observation|
|Operator||ISAS / NASA|
|Mission duration||20 years (planned)|
|Launch mass||980 kg (2,160 lb)|
|Start of mission|
|Launch date||24 July 1992, 14:26:00 UTC|
|Rocket||Delta II 6925|
|Launch site||Cape Canaveral LC-17A|
|Semi-major axis||127,367.75 km (79,142.65 mi)|
|Perigee||51,328 km (31,894 mi)|
|Apogee||190,664 km (118,473 mi)|
|Epoch||15 January 2015, 13:40:53 UTC|
From the Geotail website (listed below): "The Geotail satellite was launched on July 24, 1992, by a Delta II launch vehicle from Cape Canaveral Air Force Station, Florida, United States. The primary purpose of this mission is to study the structure and dynamics of the tail region of the magnetosphere with a comprehensive set of scientific instruments. For this purpose, the orbit has been designed to cover the magnetotail over a wide range of distances: 8 R⊕ to 210 R⊕ from the earth. This orbit also allows us to study the boundary region of the magnetosphere as it skims the magnetopause at perigees. In the first two years the double lunar swing-by technique was used to keep apogees in the distant magnetotail. The apogee was lowered down to 50 R⊕ in mid November 1994 and then to 30 R⊕ in February 1995 in order to study substorm processes in the near-Earth tail region. The present orbit is 9 R⊕ × 30 R⊕ with inclination of -7° to the ecliptic plane."
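The Earth-radius distances quoted above can be cross-checked against the orbital elements in the infobox. A quick sketch (Python, assuming the mean Earth radius of 6,371 km) converts between the two:

```python
EARTH_RADIUS_KM = 6371  # mean Earth radius, km (assumed conversion factor)

def earth_radii_to_km(r: float) -> float:
    """Convert a distance in Earth radii (R_E) to kilometres."""
    return r * EARTH_RADIUS_KM

# The 8 R_E perigee from the mission description vs. the listed 51,328 km:
print(earth_radii_to_km(8))                 # 50968 km -- close to the infobox value
print(earth_radii_to_km(210))               # 1337910 km, the deep-tail apogee
print(round(51328 / EARTH_RADIUS_KM, 1))    # 8.1 R_E for the listed perigee
```

The small discrepancy (50,968 vs. 51,328 km) is expected, since the orbit evolved over the mission and "8 R⊕" is a round figure.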
Geotail instruments studied electric fields, magnetic fields, plasmas, energetic particles, and plasma waves.
In 1994, the principal investigator of the Plasma Wave Instrument (PWI), part of the experiment complement, was Professor Hiroshi Matsumoto of Kyoto University, with co-investigators from NASA, the University of Iowa, and STX Corporation. Geotail remained an active mission as of 2015. Geotail, WIND, Polar, SOHO, and Cluster were all part of the International Solar-Terrestrial Physics (ISTP) Science Initiative.
Geotail data have been used to show that flux transfer events move through the magnetosphere faster than the ambient medium; those within the magnetosheath were shown to move both faster and slower than the ambient medium.
- "GEOTAIL Satellite details 1992-044A NORAD 22049". N2YO. 15 January 2015. Retrieved 25 January 2015.
- Instruments of the Geotail Spacecraft Archived 2012-09-03 at the Wayback Machine.
- "The Geotail Plasma Wave Instrument". www-pw.physics.uiowa.edu. Retrieved 2014-10-19.
- NASA - Geotail
- Korotova, G.I.; Sibeck, D.G.; Rosenberg, T. (2009). "Geotail observations of FTE velocities" (PDF). Annales Geophysicae. Copernicus Publications. 27 (1): 83–92. Retrieved 26 April 2015.
Exciting new work by a Florida State University research team has led to a novel molecular system that can take your temperature, emit white light, and convert photon energy directly to mechanical motions.
And, the molecule looks like a butterfly.
Biwu Ma, associate professor in the Department of Chemical and Biomedical Engineering in the FAMU-FSU College of Engineering, created the molecule in a lab about a decade ago, but has continued to discover that his creation has many other unique capabilities.
For example, the molecular butterfly can flap its "wings" and emit both blue and red light simultaneously in certain environments. This dual emission means it can create white light from a single molecule, something that usually takes several luminescent molecules to achieve.
And, it is extremely sensitive to temperature, which makes it a thermometer, registering temperature change by emission color.
"This work is about basic, fundamental science, but also about how we can use these unique findings in our everyday lives," Ma said.
Among other things, Ma and his team are looking at creating noninvasive thermometers that can take better temperature readings on infants, and nanothermometers for intracellular temperature mapping in biological systems. They are also trying to create molecular machines that are operated simply by sunlight.
"These new molecules have shown very interesting properties with a variety of potential applications in emerging fields," Ma said. "I have been thinking of working on them for quite a long time. It is so wonderful to be able to make things really happen with my new team here in Tallahassee."
The findings are laid out in the latest edition of the academic journal Angewandte Chemie. Other authors for this publication are Mingu Han, Yu Tian, Zhao Yuan and Lei Zhu from the Chemistry and Biochemistry Department. Florida State has also filed a patent application on the work.
Ma came to Florida State in 2013 from the Lawrence Berkeley National Laboratory as part of a strategic push by the university to aggressively recruit and hire up-and-coming researchers in energy and materials science.
In addition to the faculty hires, the university has invested in top laboratory space and other resources needed to help researchers make technology breakthroughs.
"This type of research is why we continue to invest in materials science and recruit faculty like Biwu Ma to Florida State," said Vice President for Research Gary K. Ostrander. "Making this area of research a priority shows why FSU is a preeminent institution, and we look forward to what Biwu and our other scientists can accomplish in the years to come."
Kathleen Haughney | EurekAlert!
Once-'totally wacky' idea may work to shield earth from heat
SAN FRANCISCO — One afternoon last fall, Armand Neukermans, a tall engineer with a sweep of silver bangs, flipped on a noisy pump in the back corner of a Sunnyvale lab. Within moments, a fine mist emerged from a tiny nozzle, a haze of salt water under high pressure and heat.
It didn't look like much. But this seemingly simple vapor carries a lot of hope -- and inspires a lot of fear. If Neukermans' team of researchers can fine-tune the mechanism to spray just the right size and quantity of salt particles into the sky, scientists might be able to make coastal clouds more reflective.
The hope is that by doing so, humankind could send more heat and light back into space, wielding clouds as shields against climate change.
The fear, at least the one cited most often, is that altering the atmosphere this way could also unleash dangerous side effects.
"Ten years ago, people would have said this is totally wacky," Neukermans said. "But it could give us some time if global warming really becomes catastrophic."
It's now indisputable that the Earth is warming, at least for anyone who still takes thermometers at their word.
Average global temperatures have ticked up by about 0.8 degrees Celsius since 1880, and two-thirds of that increase has taken place since 1975, according to the National Aeronautics and Space Administration. Nine of the 10 warmest years in that time period have occurred since the year 2000.
To be sure, the planet has experienced cooling and warming periods in the past. But the steep temperature rise in the late 20th century blew past the highs of the last 1,000 years, the period for which there are reliable data.
And more warming is on the way. A variety of studies have concluded that current rates of fossil fuel emissions could push global temperatures up by as much as 6 degrees Celsius by 2100.
The ice caps are melting, sea levels are rising, and extreme weather events like droughts, floods and hurricanes are increasing.
Even if public policymakers manage to significantly curtail future fossil-fuel emissions -- the carbon dioxide and other greenhouse gases that the vast majority of climate scientists blame for climate change -- the hundreds of gigatons we've already pumped into the atmosphere have probably locked in a series of life-altering consequences.
Neukermans and his colleagues are among an unofficial cadre of Bay Area scientists, technologists, designers and engineers who have begun the hard work of preparing for a warmer world. They're exploring unconventional concepts that might help us live with the consequences -- or prevent them from spinning out of control.
It's not clear yet if any will work, or find the support to move off the drawing board. All are sure to be costly and controversial.
But much is at stake. Rising temperatures and sea levels threaten the region's homes, habitat, industries and infrastructure.
The concept of "cloud brightening" dates back 22 years , when British physicist John Latham first proposed it in a little-noticed paper in the journal Nature .
But as the threat of global warming rises, it and other "geoengineering" strategies have shifted from the scientific fringes into mainstream debate. Geoengineering is a broad category for techniques that could remove greenhouse gases from the atmosphere or reflect away more heat, including things as innocuous as painting roofs white and as controversial as spraying sulfate particles into the stratosphere.
The basic idea behind cloud brightening is to equip ships with mechanisms like the ones Neukermans' team is designing and aim them at the relatively low-lying clouds that hug the western coasts of continents. It would probably require hundreds -- if not thousands -- of vessels.
Few are eager to tweak a system as complicated, sensitive and interconnected as the climate. But many scientists worry that nations simply won't cut fossil-fuel emissions enough to prevent rising temperatures from unleashing humanitarian and ecological calamities.
"If we have to intervene, we should be doing the research now, because these ideas are extremely complicated and extremely risky," said Jane Long , a former associate director at Lawrence Livermore National Laboratory . "I hope we never have to do it, but I think it's irresponsible not to understand as much as we possibly can in case we need it."
Critics, however, argue that scientists are talking about tinkering with a system they don't fully understand. Altering the clouds could affect rainfall patterns, with potentially devastating consequences, they say.
"Large and small, these things all have other environmental effects and they're not solving the problem," said Kert Davies , research director at environmental group Greenpeace. He believes research efforts and dollars should be focused instead on clean-energy technology.
"Geoengineering is like taking an aspirin for pain without addressing the disease," he said.
Neukermans, a 72-year-old serial inventor from Belgium, agrees that the best response to climate change is to curtail greenhouse emissions.
Cloud brightening is "absolutely no replacement for the other things we should do," he said. "We should cut CO2 as much and as fast as we can."
Four decades, 75 patents
But that's simply not happening, even as predictions for rising temperatures this century soar past 2 degrees Celsius, the threshold that most climate scientists point to as the clear danger zone. So Neukermans and his team feel compelled to move ahead with their work.
Neukermans arrived in the United States in 1964. Over a four-decade career at General Electric, Hewlett-Packard, Xerox and elsewhere, he put his name on more than 75 patents. In 1997, he founded Xros, an optical switch company that pulled off the holy grail of telecom at the time: using tiny mirrors to move data through fiber network switches without converting them from pulses of light into electrical signals. In 2000, Nortel Networks acquired the company for $3.25 billion in stock.
Since retiring, Neukermans has dedicated his time and money to a series of social and environmental causes, including efforts to develop land-mine-detection technology and inexpensive prostheses for the poor.
He turned his attention to cloud brightening in early 2010, recruiting a team made up mostly of former colleagues, after the Bill Gates-supported Fund for Innovative Climate and Energy Research provided money for an initial viability test.
"He more or less showed it was feasible to my satisfaction," said Ken Caldeira, a prominent climate scientist at the Carnegie Institution on the Stanford campus and co-manager of the fund.
As the group attempts to develop an actual prototype, Neukermans is covering the expenses out of his own pocket -- and the group is working pro bono.
The old guard
The five-man team is an esteemed contingent of Silicon Valley's old guard. Most are in their 60s or 70s; they have playfully referred to themselves as the "Silver Linings."
But they're engineering heavyweights, boasting 250 years of experience and 130 patents among them. They include Lee Galbraith, inventor of a breakthrough tool for inspecting semiconductors, and Jack Foster, a laser pioneer who helped create the first checkout scanners.
It's clear that cloud brightening is possible. Satellites have observed "ship tracks," or whitened lines in marine clouds that large vessels have formed inadvertently by pumping out particles in their exhaust. Unknown is whether humans can do it purposely, on a large enough scale to matter, and without severely altering weather patterns elsewhere.
Scientists at the Met Office Hadley Centre in England ran computer simulations of wide-scale cloud brightening and saw sharp rainfall decreases in South America, with disastrous impacts on the Amazon rain forest.
Caldeira ran his own models for all ocean clouds and found that rainfall would decline over sea, but increase over land. More recently, physicist Latham, now at the National Center for Atmospheric Research in Boulder, Colo., put the Met Office's models to work and found the potential impact on the Amazon could be minimized by altering the location and amount of cloud brightening.
Limited field trials
The conflicting results underscore some uncertainty about the overall consequences, in part because of the complexity of modeling the behavior of clouds. So as researchers get closer to working mechanisms for cloud brightening, it raises a critical question: What standards should apply before anyone tests such technology in the real world?
Last September, Latham and other scientists called for limited field trials once a nozzle technology is developed.
They were careful to stress that tests must be carefully planned to prevent any damage to the ecosystem, and said they should be conducted in an "open and objective manner," with consultations between international scientific organizations and potential stakeholders.
But is preventing any fallout from such testing an achievable goal? And is it possible for all affected parties to reach consensus on these issues?
Wil Burns is dubious.
The director of the energy policy and climate program at Johns Hopkins University terms himself an "extreme skeptic" of cloud brightening. Even if it works, he's not convinced scientists will be able to easily identify or deal with any unintended consequences.
There's also the touchy question of social equity. Cloud brightening might cool global temperatures on average, but what if it leads to deforestation in South America or affects monsoon patterns in Asia? If the world is better off on average -- particularly in the relatively temperate first world -- is it acceptable that some nations suffer?
Such issues aside, Burns worries that politicians, energy companies and consumers will fail to perceive these tools the way scientists hope they will: as an option of last resort. Rather, he fears, they'll see them as an excuse to continue dumping waste into the atmosphere.
And even if geoengineering initially works, researchers might run into some disastrous side effect that only becomes clear over time, forcing them to cut off those efforts after a few years or decades.
"If you stopped, you'd get a massive carbon pulse and temperature increases as much as 10 to 30 times greater than if you'd continued climate change policy as it is," Burns said. "It would just be catastrophic."
Caldeira argues that the distant consequences of limited cloud brightening are likely to be minimal, and stresses that any effects would trail off within weeks of shutting it down. But he too believes it might be premature for real-world tests. Acting too precipitously could sow further skepticism, limiting long-term options, Caldeira said.
"To me it seems prudent to hold back on doing field experiments, mostly because I'm afraid of backlashes," he said.
At a minimum, any limited field tests should be conducted by entities like the National Science Foundation, and include rigorous review processes and government participation, Long and other scientists stress.
Caldeira suggests that the world might have to literally feel the heat -- perhaps witnessing mass starvation or the migration of millions of climate refugees -- before geoengineering becomes politically palatable.
By then, though, it could be harder to conduct research in a deliberative, dispassionate manner. That's why some want to move ahead sooner rather than later.
"We'd just like to examine the ideas we're involved with," Latham said. "And ideally, if they work, just pop them on the shelf."
Despite some reports to the contrary, Neukermans and his colleagues emphatically deny that they intend to test the technology on actual clouds. If they manage to build working prototypes, they plan to turn them over to academic or government researchers. They're content to leave the deployment as well as the debate to others, and just do what engineers do: solve the tricky technical puzzle before them.
But there is another force driving Neukermans, a father of four and grandfather of eight. In his eighth decade, after a lifetime of inventions, he would like to use his talents to devise one more -- one that would really count.
"The next generation is a consideration for all of us," he said. "I hope we never have to use this, but if we do, we'd make a contribution on a scale you could never envision." | <urn:uuid:0ae9e53c-5c54-407c-b418-96d85a90d27d> | 2.890625 | 2,691 | Truncated | Science & Tech. | 39.430726 | 95,577,368 |
There is no experimental evidence so far that could not be accommodated within the standard model. There are, however, several theoretical shortcomings which must be remedied by a more fundamental theory. The unification of forces has not really found a satisfactory formulation, since three different gauge groups U(1), SU(2), and SU(3) are used to describe the electromagnetic, weak, and strong interactions.
The connection between mathematics and art goes back thousands of years. Mathematics has been used in the design of Gothic cathedrals, Rose windows, oriental rugs, mosaics and tilings. Geometric forms were fundamental to the cubists and many abstract expressionists, and award-winning sculptors have used topology as the basis for their pieces. Dutch artist M.C. Escher represented infinity, Möbius bands, tessellations, deformations, reflections, Platonic solids, spirals, symmetry, and the hyperbolic plane in his works.
Mathematicians and artists continue to create stunning works in all media and to explore the visualization of mathematics--origami, computer-generated landscapes, tesselations, fractals, anamorphic art, and more.
"Grey Moon Rising," by Klaus-Peter Kubik3056 viewsMany fractal formulas and algorithms produce conventional geometric figures with certain parameters. For example, the Julia set iterated using the origin as its parameter produces a circle. The style of Klaus-Peter Kubik is focused on producing conventional geometric figures using fractal techniques. He likes to explore the combinations of the simple figures of circles and squares with attractive shapes for the viewer. He also exploits the possibilities of fractal geometry to create textures. The rough, grey texture of the circle symbolizes the surface of the moon while the vertical and horizontal lines, similar to those made with a pencil, emphasize the geometric structure of the image. Klaus-Peter Kubik works for the German government in the public health field and has participated in nearly a dozen exhibitions since 1994.
"Night Hunter, opus 469," by Robert J. Lang. Medium: One uncut square of Korean hanji, composed and folded in 2003, 18". Image courtesy of Robert J. Lang. Photograph by Robert J. Lang.2992 viewsThe intersections between origami, mathematics, and science occur at many levels and include many fields of the latter. Origami, like music, also permits both composition and performance as expressions of the art. Over the past 40 years, I have developed nearly 600 original origami compositions. About a quarter of these have been published with folding instructions, which, in origami, serve the same purpose that a musical score does: it provides a guide to the performer (in origami, the folder) while allowing the performer to express his or her own personality through interpretation and variation.
--- Robert J. Lang
"Knot divided" (snow sculpture), by Carlo Sequin (University of California, Bekeley), Stan Wagon (Team Captain), John Sullivan, Dan Schwalbe, and Rich Seeley2960 viewsCan a DIVIDED KNOT be NOT DIVIDED? When carving this sculpture out of a 10x10x12 foot block of hard compacted snow, we started with the simplest possible knot: the overhand knot, also known as the trefoil knot. We then split lengthwise the whole ribbon forming the three big loops. But there is a twist that may lead to surprises: The original knotted strand was actually a triply twisted Moebius band! Thus the question: Does our cut separate the structure into two pieces, or does it form a single, highly knotted twisted strand? Read more about this snow sculpture. --- Carlo Sequin
"Spiral with opaque lines," by Andreas Lober2920 viewsThis image belongs to a simple Julia set, but the refined technique of Andreas Lober, who graduated from the University of Heidelberg with a degree in mathematics, converted it entirely into a creative prodigy. The coloring algorithm is simple: find the minimum value of │z│ during the iteration, deflecting lightly the values pseudo-randomly; this produces the sine waves that heighten the composition. The values are trapped during the calculation in discrete intervals; this produces the peculiar coloring that appears to be done with colored pencils. Other preferences of Andreas Lober include designing tilings that cover the plane with squares containing geometric shapes, so that they fit perfectly with the adjacent eight squares. These experiments produce tesselations of great visual impact and, in this case, variations have been used to obtain the frames contained in the image.
"Trefoil Knot Minimal Surface," by Nat Friedman, Professor Emeritus, University of Albany - SUNY (2006)2909 viewsLimestone, 9" diameter by 4" depth. "This sculpture was carved from a circular piece of limestone. The form is based on the shape of the soap film minimal surface on a configuration of a wire trefoil knot. There is a nice interaction of the form and space with light and shadow." --- Nat Friedman, Professor Emeritus, University of Albany - SUNY
"Frabjous," by George W. Hart (www.georgehart.com)2868 viewsThis is an 11-inch diameter sculpture made of laser-cut wood (aspen). It is assembled from thirty identical pieces. Each is an elongated S-shaped form, with two openings. The aspen is quite light in color but the laser-cut edges are a rich contrasting brown. The openings add nicely to the whirling effect. The appearance is very different as one moves around it. This is an image of how it appears looking straight down one of the vortices. The word "frabjous" comes, of course, from "The Jabberwocky" of Lewis Carroll. "O frabjous day! Callooh! Callay!" --- George W. Hart (www.georgehart.com)
"Fingers Holding Secrets," by Joe Zazulak2796 viewsJoe Zazulak retired at the age of 55 from the United States Department of Veterans Affairs in order to dedicate himself from then on to fractal art, to which he is a certified addict. This picture is called "Fingers Holding Secrets," and the name came to his mind while the image appeared slowly on his computer. From then he only worked in providing the delicate and smooth pearlescent texture that characterizes the image. Joe Zazulak never plans his images in advance, nor intuits what they will be after the creative process. He begins his works with a very simple structure, with hardly any color, and adds variations to the shape parameters intuitively until he obtains a pleasing result.
"Three (2k+2, 2k) links," by sarah-marie belcastro (Hadley, MA)2763 viewsKnitted hand-dyed wool, 2013
A (p,q) torus link traverses the meridian cycle of a torus p times and the longitudinal cycle q times; when p and q are coprime, the result is a knot, and when not (ha!) the result is a gcd(p,q)-component link with each component a (p/gcd(p,q), q/gcd(p,q)) torus knot. Here we have (in increasing order of complexity) a (4,2) torus link, a (6,4) torus link, and an (8,6) torus link. Each is knitted so that both the knotting and the linking are intrinsic to the construction (rather than induced afterwards via grafting). They were made as proof-of-concept for the methodology for knitting torus knots and links that the artist introduced at the 2014 JMM. --- sarah-marie belcastro (http://www.toroidalsnark.net)
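The component count described above follows directly from the gcd rule, and can be checked with a short sketch (the helper name is my own, not part of the artwork description):

```python
# A (p, q) torus link has gcd(p, q) components; it is a knot exactly
# when p and q are coprime.
from math import gcd


def torus_link_components(p: int, q: int) -> int:
    """Number of components of the (p, q) torus link."""
    return gcd(p, q)


# The three knitted pieces, plus the (5, 3) torus knot for comparison:
print([torus_link_components(p, q) for p, q in [(4, 2), (6, 4), (8, 6), (5, 3)]])
# → [2, 2, 2, 1]
```

Each of the three knitted links is indeed a 2-component link, while (5,3) gives a single knot.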
Kleinian Pearls. People have long been fascinated with repeated patterns that display a rich collection of symmetries. The discovery of hyperbolic geometries in the nineteenth century revealed a far greater wealth of patterns, some popularized by Dutch artist M. C. Escher in his Circle Limit series of works.
This cover illustration portrays a pattern which is symmetric under a group generated by two Möbius transformations. These are not distance-preserving, but they do preserve angles between curves and they map circles to circles. The image accompanies "Double Cusp Group," by David J. Wright (Notices of the American Mathematical Society, December 2004, p. 1322).
"Star Corona," by George W. Hart (www.georgehart.com)2734 viewsThis 8-inch, diameter, one-of-a-kind, acrylic sculpture consists of an inner red star surrounded by a yellow corona. It is designed to hang and the two components do not touch each other. The star has twelve large 5-sided spikes and twenty smaller 3-sided spikes, all assembled from sixty identical angular components. The corona is assembled from twenty identical curved components, which give the effect of swirling motion. If you look straight down on a spike, you see that arms from five of the yellow parts combine to make a circle around the spike. Both components are based on stellations of the icosahedron. The outer corona is based on the first stellation and the inner star shape is based on number 53 in the list by Coxeter et al. To understand it well, make a paper model from the instructions on my website.
--- George W. Hart (www.georgehart.com)
"Xolis," by Jaroslaw Wierny2724 views"Xolis" is an abstract word for an abstract picture. Each person can give to it the significance they want, as the author does not pretend to predispose the viewer. The image was generated with Ultra Fractal and consists of 10 layers containing the two most famous fractal sets, the Julia set and the Mandelbrot set. Six different coloring algorithms are applied to these. Jaroslaw Wierny is a Polish graphic designer profoundly interested in the Buddhist philosophy, which he relates to the fractal structure of the world.
"20040402," by Samuel Monnier2651 viewsThe title of this picture does not involve any mathematical riddle, but is simply the reference number by which Samuel Monnier identifies his pictures. This young Swiss man, who is preparing for his Ph.D. in Theoretical Physics, does not like to put titles on his pictures as he feels it interferes with the sensations his work can produce in the viewer. The basic concept on which this image rests is to begin with a more or less repetitive initial design and superimpose various layers with this design at different scales. This procedure generates an image that shows structures with a wide range of scales, although from a strict point of view one cannot consider it to be fractal.
"Helios [var. 1198505515]," by Nathan Selikoff2634 viewsThis artwork is based on a rendering of a strange attractor, and is inspired by extreme ultraviolet images of our sun. Helios is part of the "Aesthetic Explorations of Attractor Space" series, more of which can be seen at www.nathanselikoff.com/strangeattractors/.
Underlying each image in this series of work is a two-dimensional plot of the "typical behavior" of a chaotic dynamical system. Of course, there is nothing typical about a strange attractor, as it is chaotic and has a fractal structure. The base images are computed with a set of iterated functions, which serve as a numerical approximation to integrating the underlying differential equations. The iterated functions contain four coefficients, which are controlled by sliders in interactive custom software and control the appearance of the attractor. Once a particular form is settled on, it is rendered as a high-resolution 16-bit grayscale image. Finally, in Photoshop, the render is colorized using gradient mapping and edited to enhance contrast, control composition, and add special effects. The number in the artwork title encodes the moment at which the attractor was "discovered" and archived for rendering.
"Spring Forest (5,3)," by sarah-marie belcastro (Hadley, MA)2619 viewsEmbedded, unembedded, and cowl; 12" x 11" x 9", Knitted wool (Dream in Color Classy, in colors Happy Forest and Spring Tickle), 2009 and 2013
"I am a mathematician who knits as well as a knitter who does mathematics."
A (p,q) torus knot traverses the meridian cycle of a torus p times and the longitudinal cycle q times. Here are three instantiations of a (5,3) torus knot:
(a, middle) The knot embedded on a torus. A (p,q) torus knot may be drawn on a standard flat torus as a line of slope q/p. The challenge is to design a thickened line with constant slope on a curved surface. (b, top) The knot projection knitted with a neighborhood of the embedding torus. The knitting proceeds meridianwise, as opposed to the embedded knot, which is knitted longitudinally. Here, one must form the knitting needle into a (5,3) torus knot prior to working rounds. (c, bottom) The knot projection knitted into a cowl. The result looks like a skinny knotted torus. --- sarah-marie belcastro (http://www.toroidalsnark.net)
"Fractal Scene II," by Anne M. Burns (Long Island University, Brookville, NY)2519 views"Mathscapes" are created using a variety of mathematical formulas. The clouds and plant life are generated using fractal methods. The mountains are created using trigonometric sums with randomly generated coefficients; then, using 3-D transformation, they are projected onto the computer screen. Value and color are functions of the dot product of the normal to the surface with a specified light vector. See the Gallery of "Mathscapes and find citations for my articles on modeling trees, plants and mountains, and on "blending and dithering," at http://myweb.cwpost.liu.edu/aburns/gallery/gallery.htm. --- Anne M. Burns (Long Island University, Brookville, NY) | <urn:uuid:7d557a8c-bef5-4e13-ab1a-4a91f4a2656c> | 3.0625 | 2,944 | Content Listing | Science & Tech. | 47.09201 | 95,577,386 |
Machine Learning Algorithms
Algorithms that are widely used in machine learning:
Naive Bayes algorithm — sklearn
Support Vector Machines
sklearn has thorough documentation for both Naive Bayes and SVM.
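As a quick sketch of the sklearn API, here is a minimal Gaussian Naive Bayes classifier; the toy data points are invented for illustration, not taken from the sklearn docs:

```python
# Minimal Gaussian Naive Bayes sketch: two well-separated 2-D clusters.
from sklearn.naive_bayes import GaussianNB

X = [[1.0, 2.0], [1.5, 1.8], [5.0, 8.0], [6.0, 9.0]]  # training points
y = [0, 0, 1, 1]                                      # class labels

clf = GaussianNB().fit(X, y)

# A point near each cluster should be assigned to that cluster's class.
print(clf.predict([[1.2, 1.9], [5.5, 8.5]]))  # → [0 1]
```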
The URL below navigates to the sklearn SVM documentation, which describes options such as decision_function_shape:
"Whether to return a one-vs-rest ('ovr') decision function of shape (n_samples, n_classes) as all other classifiers, or…" — scikit-learn.org
Three parameters make a huge impact on the decision boundary of an SVM:
kernel, C and gamma.
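A small sketch of the gamma parameter's effect (the dataset and values are my own choices for illustration): with the kernel fixed to 'rbf', a larger gamma lets the decision boundary bend more tightly around individual training points, which shows up as higher training accuracy on noisy data — and often as overfitting.

```python
# Sketch: raising gamma makes an rbf SVM fit the training set more tightly.
from sklearn.datasets import make_moons
from sklearn.svm import SVC

X, y = make_moons(n_samples=200, noise=0.25, random_state=0)

scores = {}
for gamma in [0.1, 1.0, 100.0]:
    clf = SVC(kernel="rbf", C=1.0, gamma=gamma).fit(X, y)
    scores[gamma] = clf.score(X, y)  # training accuracy
    print(f"gamma={gamma}: {scores[gamma]:.2f}")
```

High training accuracy from a large gamma does not mean the model will generalize; C plays a similar role by trading margin width against misclassified training points.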
To make decisions about the data, SVM uses functions called kernels,
which come in different kinds:
linear, poly, rbf, sigmoid and precomputed.
A linear kernel separates the data with straight lines;
an rbf kernel gives non-linear separation boundaries for different data sets.
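The linear-vs-rbf difference is easy to see on data that no straight line can split; the concentric-circles dataset below is my own choice of example, not one from the sklearn docs:

```python
# Sketch: concentric circles cannot be separated by a straight line,
# so a linear kernel stays near chance while an rbf kernel succeeds.
from sklearn.datasets import make_circles
from sklearn.svm import SVC

X, y = make_circles(n_samples=200, noise=0.05, factor=0.5, random_state=0)

linear_acc = SVC(kernel="linear").fit(X, y).score(X, y)
rbf_acc = SVC(kernel="rbf").fit(X, y).score(X, y)
print(f"linear: {linear_acc:.2f}  rbf: {rbf_acc:.2f}")
```

The rbf kernel implicitly maps the points into a higher-dimensional space where the inner and outer circles become linearly separable.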