If xyz = 1 and x+y+z =1/x + 1/y + 1/z show that at least one of these numbers must be 1. Now for the complexity! When are the other numbers real and when are they complex?
Find the exact values of x, y and a satisfying the following system of equations: 1/(a+1) = a - 1; x + y = 2a; x = ay
Find all the triples of numbers a, b, c such that each one of them plus the product of the other two is always 2.
Triangle ABC is an equilateral triangle with three parallel lines going through the vertices. Calculate the length of the sides of the triangle if the perpendicular distances between the parallel...
A bag contains red and blue balls. You are told the probabilities of drawing certain combinations of balls. Find how many red and how many blue balls there are in the bag.
Find a quadratic formula which generalises Pick's Theorem.
For any right-angled triangle find the radii of the three escribed circles touching the sides of the triangle externally.
Show that for any triangle it is always possible to construct 3 touching circles with centres at the vertices. Is it possible to construct touching circles centred at the vertices of any polygon?
Find all positive integers a and b for which the two equations: x^2-ax+b = 0 and x^2-bx+a = 0 both have positive integer solutions.
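Puzzles like this one invite a quick computational check before the algebra. The sketch below is not part of the original problem set; it exhaustively searches small coefficients, using the fact that x^2 - px + q = 0 has roots r and s with r + s = p and rs = q:

```python
def has_positive_integer_roots(p, q):
    """Does x^2 - p*x + q = 0 have two positive integer roots?
    Roots r and s satisfy r + s = p and r * s = q."""
    return any(r * (p - r) == q for r in range(1, p))

# Search small coefficient pairs for which BOTH equations work.
solutions = [(a, b)
             for a in range(1, 50)
             for b in range(1, 50)
             if has_positive_integer_roots(a, b)
             and has_positive_integer_roots(b, a)]
print(solutions)  # [(4, 4), (5, 6), (6, 5)]
```

The search turns up only (4, 4), (5, 6) and (6, 5), which suggests what a Vieta-style proof should aim to show.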
Solve the system of equations to find the values of x, y and z: xy/(x+y)=1/2, yz/(y+z)=1/3, zx/(z+x)=1/7
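One standard route (an illustration, not the only method) is to take reciprocals: xy/(x+y) = 1/2 is the same as 1/x + 1/y = 2, so in the variables a = 1/x, b = 1/y, c = 1/z the system becomes linear:

```python
# In reciprocal variables a=1/x, b=1/y, c=1/z the system reads
#   a + b = 2,  b + c = 3,  c + a = 7.
# Adding all three equations: 2(a + b + c) = 12, so a + b + c = 6.
s = (2 + 3 + 7) / 2            # a + b + c
a, b, c = s - 3, s - 7, s - 2  # subtract each pairwise sum from the total
x, y, z = 1 / a, 1 / b, 1 / c
print(x, y, z)                 # x = 1/3, y = -1, z = 1/4
```

Substituting back, xy/(x+y) = (-1/3)/(-2/3) = 1/2, as required.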
Solve the equations to identify the clue numbers in this Sudoku problem.
Use graphs to gain insights into an area and perimeter problem, or use your knowledge of area and perimeter to gain insights into the graphs...
Can you find the values at the vertices when you know the values on the edges of these multiplication arithmagons?
This is a variation of sudoku which contains a set of special clue-numbers. Each set of 4 small digits stands for the numbers in the four cells of the grid adjacent to this set.
Four numbers sit on an intersection and need to be placed in the surrounding cells. That is all you need to know to solve this sudoku.
A Sudoku with a twist.
A group of 20 people pay a total of £20 to see an exhibition. The admission price is £3 for men, £2 for women and 50p for children. How many men, women and children are there in the group?
A Sudoku with a twist.
There are lots of different methods to find out what the shapes are worth - how many can you find?
All CD Heaven stores were given the same number of a popular CD to sell for £24. In their two week sale each store reduces the price of the CD by 25% ... How many CDs did the store sell at...
How many intersections do you expect from four straight lines? Which three lines enclose a triangle with negative co-ordinates for every point?
Five equations... five unknowns... can you solve the system?
Change one equation in this pair of simultaneous equations very slightly and there is a big change in the solution. Why?
A, B & C own a half, a third and a sixth of a coin collection. Each grabs some coins, returns some, and then they share equally what they had put back, each finishing with their own share. How rich are they?
The sum of any two of the numbers 2, 34 and 47 is a perfect square. Choose three square numbers and find sets of three integers with this property. Generalise to four integers.
You are given the Lowest Common Multiples of sets of digits. Find the digits and then solve the Sudoku.
Can you make a tetrahedron whose faces all have the same perimeter?
Four jewellers share their stock. Can you work out the relative values of their gems?
Can you find the values at the vertices when you know the values on the edges?
The challenge is to find the values of the variables if you are to solve this Sudoku.
Making 11 kilograms of this blend of coffee costs £15 per kilogram. The blend uses more Brazilian, Kenyan and Mocha coffee... How many kilograms of each type of coffee are used?
If x, y and z are real numbers such that: x + y + z = 5 and xy + yz + zx = 3. What is the largest value that any of the numbers can have?
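One way to see how large any of the numbers can get is a discriminant argument (a sketch of one standard approach, not necessarily the intended solution): fix z, express x + y and xy from the constraints, and ask when real x and y still exist.

```python
import math

# With x + y + z = 5 and xy + yz + zx = 3, fixing z forces
#   x + y = 5 - z   and   xy = 3 - z*(5 - z),
# and real x, y exist only while (x + y)^2 - 4xy >= 0, i.e.
#   -3z^2 + 10z + 13 >= 0.
# The largest feasible z is therefore the bigger root of that quadratic.
A, B, C = -3.0, 10.0, 13.0
z_max = (-B - math.sqrt(B * B - 4 * A * C)) / (2 * A)  # larger root, since A < 0
print(z_max)   # 13/3 = 4.333...
```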
There is a particular value of x, and a value of y to go with it, which make all five expressions equal in value. Can you find that x, y pair?
Crack this code which depends on taking pairs of letters and using two simultaneous relations and modulus arithmetic to encode the message.
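The puzzle's exact scheme isn't spelled out here, but "pairs of letters, two simultaneous relations and modulus arithmetic" describes a Hill-style digraph cipher. A minimal sketch with an assumed 2×2 key (the puzzle's actual key will differ):

```python
# Hill-style digraph cipher: a letter pair (p1, p2), with A=0 ... Z=25,
# is encoded by two simultaneous congruences mod 26:
#   c1 = (3*p1 + 3*p2) mod 26
#   c2 = (2*p1 + 5*p2) mod 26
# The key matrix [[3, 3], [2, 5]] is an example of mine (det = 9 is
# coprime to 26, so the scheme is invertible), not the puzzle's key.
KEY = [[3, 3], [2, 5]]

def encrypt_pair(pair):
    p1, p2 = (ord(ch) - ord('A') for ch in pair.upper())
    c1 = (KEY[0][0] * p1 + KEY[0][1] * p2) % 26
    c2 = (KEY[1][0] * p1 + KEY[1][1] * p2) % 26
    return chr(c1 + ord('A')) + chr(c2 + ord('A'))

print(encrypt_pair("HI"))  # "TC" under this example key
```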
A simple method of defining the coefficients in the equations of chemical reactions with the help of a system of linear algebraic equations.
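The article itself isn't reproduced here, but the idea is standard: assign one unknown coefficient per species and one linear equation per element, and the balanced coefficients span the nullspace of the resulting matrix. A sketch for CH4 + O2 → CO2 + H2O (my own worked example, not the author's):

```python
import numpy as np

# Rows: elements C, H, O. Columns: species CH4, O2, CO2, H2O.
# Products get negative entries because they sit on the other side.
A = np.array([[1, 0, -1,  0],   # carbon
              [4, 0,  0, -2],   # hydrogen
              [0, 2, -2, -1]])  # oxygen

# The balanced coefficients span the (1-dimensional) nullspace of A,
# which the last row of V^T from the SVD provides.
_, _, vt = np.linalg.svd(A)
v = vt[-1]
coeffs = np.round(v / np.abs(v).min()).astype(int)
coeffs *= np.sign(coeffs[0])    # normalise the leading sign
print(coeffs)                   # [1 2 1 2]: CH4 + 2 O2 -> CO2 + 2 H2O
```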
Plane 1 contains points A, B and C and plane 2 contains points A and B. Find all the points on plane 2 such that the two planes are perpendicular.
When I park my car in Mathstown, there are two car parks to choose from. Which car park should I use?
Investigate the effects of the half-lives of the isotopes of cobalt on the mass of a mystery lump of the element.
Find out how to model a battery mathematically
Which is bigger, n+10 or 2n+3? Can you find a good method of answering similar questions?
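For comparisons of this shape, one dependable method is to find the crossover point where the two expressions are equal (n + 10 = 2n + 3 at n = 7) and then note which expression grows faster on each side of it. A tiny sketch:

```python
def bigger(n):
    """Say which of n + 10 and 2n + 3 is larger for a given n."""
    a, b = n + 10, 2 * n + 3
    if a > b:
        return "n+10"
    if b > a:
        return "2n+3"
    return "equal"

# The expressions cross where n + 10 = 2n + 3, i.e. at n = 7; beyond
# the crossing, the expression with the steeper slope wins.
print([bigger(n) for n in (0, 7, 100)])  # ['n+10', 'equal', '2n+3']
```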
Tropical forests have been called the lungs of the planet. They soak up vast quantities of carbon dioxide, hold the world’s greatest diversity of plants and animals, and employ millions of people. And these hot ecosystems–often a patchwork of trees and grasslands–are being deeply altered by logging and other land use change.
Now, a team of scientists have made a fundamental discovery about how fires on the edges of these forests control their shape and stability. Their study implies that when patches of tropical forest lose their natural shape it could contribute to the sudden, even catastrophic, transformation of that land from trees to grass.
The new knowledge could help protect tropical forests–and allow land managers to build new tools to predict the stability of both individual forest patches and larger regional-scale forests.
The study was published March 26 in the journal Ecology Letters.
LAW OF THE FOREST
Using high-resolution satellite data from protected forests in the savanna region of the Brazilian Cerrado, the scientists found that the shape of these natural forests follow a predictable mathematical relationship between a forest’s perimeter and its area–regardless of its climate region or its size. They call this a “3/4 power law” and it roughly means the forests all tend toward shapes that are neither skinny like a line, nor round and smooth like a circle. “If a forest could grow easily in all directions, we’d expect a circle,” says Laurent Hébert-Dufresne, a computer scientist at the University of Vermont who is the lead author on the new study, “but what we actually see is more dendritic, a bit like an octopus or deformed circle.”
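A perimeter-area power law of this kind is usually checked on log-log axes: if P ∝ A^k, then log P = k·log A + const, and the exponent falls out of a straight-line fit. A sketch with synthetic data (the study's actual data and exact formulation aren't reproduced here):

```python
import numpy as np

# Synthetic forest patches obeying P ~ A^(3/4), with mild noise.
rng = np.random.default_rng(0)
area = np.logspace(2, 8, 200)                 # patch areas over six decades
perimeter = area ** 0.75 * rng.lognormal(0.0, 0.05, size=area.size)

# On log-log axes a power law is a straight line; its slope is the
# scaling exponent.
k, _ = np.polyfit(np.log(area), np.log(perimeter), 1)
print(round(k, 2))                            # recovers roughly 0.75
```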
The team of six scientists–that included modelers, ecologists and physicists from UVM, the Santa Fe Institute, Stanford, Boston University, Princeton, and the University of Washington–show that the 3/4 law holds true for tiny forest fragments not much bigger than a basketball court up to large forest patches covering dozens of square miles.
The scientists combined their understanding of real-world data with the results of a new computer model to explain why this happens: fires, that burn easily in the grasslands surrounding forests–and singe the forests’ wet edges–are in constant battle with the forests’ expansive growth out into grasslands. This interplay at the edge between grass and forest, the scientists discovered, creates forest patches that converge on a steady-state shape.
The results of the scientists’ model matched the observed results from real forests in Brazil. And an experiment the scientists ran on their model shows that the fate of forest patches over time–whether they expand or contract–is determined by their initial shape. Those with compact shapes of all sizes, over time, converge on the more octopus-like 3/4-power-scaling relationship, while those with skinny shapes and larger perimeter-to-area ratios collapse, disappearing into grasslands or fragmenting into very small patches.
This means that the relationship between a forest’s perimeter and its area may help predict the stability of individual forest patches. The scientists are optimistic that the study can lead to practical tools: showing how far a managed forest patch deviates from this natural geometry will help to determine its stability over time.
And the new research presents insights at a larger, regional scale into the possible fate of Brazil’s forests. “Stepping back and considering the macro scale–not looking at the shape of every patch, but, instead, at the state of the entire system–the model suggests that the collapse from forest to grassland can be dramatic,” says Andrew Berdahl, a researcher at the Santa Fe Institute and the senior author on the study. “These local, small-scale effects–perimeter growth and edge burning–can lead to a critical transition across a whole forest region between a forest-dominated-state and a grass-dominated-state.” And once large areas of forest switch to grass it can be difficult to recover the forests. “It is like stepping off a cliff,” says Berdahl. “You can’t simply step back up.”
Ecologists have historically looked at the elements within a forest to understand its condition–often focusing on its plants and animals–but there has been little exploration of the geometry of forests and how this might matter. The new study shows a powerful role for fire driving the shape of Brazil’s tropical forests, “and we’d now like to see if this pattern holds true in other parts of the world,” says UVM’s Hébert-Dufresne, an assistant professor in the Department of Computer Science and part of UVM’s Complex Systems Center. “Say in Africa we find that elephants pushing over trees changes the equation, or dryness in Australia–that would be very interesting.” And he’d like to expand the research to see whether the relationship observed in the new model–derived from wild forests–holds true in logged and other managed forests in Brazil.
“Our fundamental point though is that a forest’s shape is very important,” he says, “and that its shape is directly related to its stability.”
(Credit: Todd Chandler) Researchers from Oregon State University (OSU) have discovered that a population of blue whales found between the North and South islands of New Zealand are genetically distinct from other blue whales, and live there year-round. (From Forbes/ By Fiona McMillan) -- The first inklings that something unusual was going on began [...]
What It Was The House Natural Resources Committee held an oversight hearing titled, “Deficiencies in the permitting process for offshore seismic research.” Why It Matters The decisions we make on ocean use affect everyone. Our coastal waters can be used for shipping, fishing, tourism, research, energy production, and security, among other things, and how we [...]
What It Was The House Natural Resources Committee held a markup of six environmental bills, including the Streamlining Environmental Approvals Act of 2017 (SEA Act; H.R. 3133), which amends the Marine Mammal Protection Act of 1972 (MMPA, P.L. 92-522). All bills passed out of committee. Why It Matters The ocean has myriad uses, from [...]
(Credit: University of East Anglia) Scientists at the University of East Anglia have been recording the sounds made by whales and porpoises off the coast of northern Scotland – using a fleet of pioneering marine robots. (From Phys.org) -- From the metallic clicks of deep-diving sperm whales to the eerie whistles made by pods of [...]
A new article by a UNSW Sydney-led team challenges the validity of current methods for forecasting the persistence of slow-growing species for conservation purposes, and provides a better approach to reducing the threat of extinction. (From Science Daily) -- Previous research on wild dolphins in Australia and wild bears in North America has revealed that [...]
(Credit: Flip Nicklin/ Minden Pictures/Getty Images) Narwhals — the unicorns of the sea — show a weird fear response after being entangled in nets. Scientists say this unusual reaction to human-induced stress might restrict blood flow to the brain and leave the whales addled. (From NPR/ by Nell Greenfieldboyce) -- The narwhals swim hard and dive [...]
To graduate student Sarah Fortune, the rocky crags off Baffin Island were just part of its stark beauty. Then, she saw a group of eight bowhead whales rubbing their bodies against the large boulders. Using aerial drones to watch the whales, she saw that they were using the rocks to help remove loose, dead skin.
The Steller's sea cow went extinct within 27 years of first being spotted by humans. An enormous skeleton of a sea cow, an extinct beast that roamed the icy waters surrounding the North Pacific near the Bering Sea, was found almost entirely intact, buried in the sands of a beach in the Komandorsky Nature Reserve in Siberia, Russia.
The Strengthening the Economy with Critical Untapped Resources to Expand (SECURE) American Energy Act (H.R. 4239) was discussed in a hearing by the House Natural Resources Subcommittee on Energy and Mineral Resources. The next day, it was marked up by the full committee, passing along a nearly party line vote (19-14).
Environmental disturbances such as El Niño shake up the marine food web off Southern California, new research shows, countering conventional thinking that the hierarchy of who-eats-who in the ocean remains largely constant over time.
HALIFAX, CANADA—In the fall of 1990, a few humpback whales showed up off the coast of western South Africa where they had rarely been seen before. Over the next couple years, a few more showed up, then a few more. Today, nearly 200 of the giant ocean mammals mill around a piece of ocean smaller than a U.S. football field for several months out of the year.
The University of Alaska has produced a procedure for what scientists on research vessels should do to avoid disrupting Indigenous communities’ traditional hunts. The university’s Brenda Konar hopes that other vessels will adopt codes of conduct. The Arctic Ocean is rapidly changing, and researchers are rushing to understand those changes. That means more research expeditions are coming into more frequent contact with Indigenous communities and the marine animals they depend on. To avoid those conflicts, a recent paper by researchers at the University of Alaska, Fairbanks lays out a “Community and Environmental Compliance Standard Operating Procedure,” or CECSOP.
by Rachel Murphy and Brian Greco, core 6
To examine how acidic substances react with non-acidic limestone chalk, modeling the effects of acid rain.
When submerged in the acidic liquids, the chalk will begin to disintegrate or completely dissolve.
• Chalk is calcium carbonate (CaCO3)
• Vinegar and lemon juice (LJ) react with chalk
• The acids erode it, forming dissolved calcium salts and CO2
• When in the acidic liquids, the chalk began to disintegrate
• Even in water, some wearing away could be seen
• Vinegar and lemon juice are acids: 1 and 2 on the pH scale
• The process displayed in this experiment models acid rain
This is a bare bones example of TensorFlow, a machine learning package published by Google. You will not find a simpler introduction to it.
In each example, a straight line is fit to some data. Values for the slope and y-intercept of the line that best fit the data are determined using gradient descent. If you do not know about gradient descent, check out the Wikipedia page.
After creating the required variables, the error between the data and the line is defined. The definition of the error is plugged into the optimizer. TensorFlow is then started and the optimizer is repeatedly called. This iteratively fits the line to the data by minimizing the error.
Read the scripts in this order:
The purpose of this script is to illustrate the nuts and bolts of a TensorFlow model. The script makes it easy to understand how the model is put together. The error between the data and the line is defined using a for loop. Because of the way the error is defined, the calculation runs in serial.
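The script itself isn't reproduced here, but the structure it describes — accumulate a squared error point by point in a for loop, then step the slope and intercept downhill by gradient descent — can be sketched in plain Python (variable names are mine, not the script's):

```python
# Plain-Python sketch of the serial approach: the error is built up
# with a for loop, one data point at a time, and gradient descent
# nudges the slope m and y-intercept b downhill.
xs = [0, 1, 2, 3, 4, 5, 6, 7]
ys = [2 * x + 1 for x in xs]      # eight points, exactly on y = 2x + 1

m, b = 0.0, 0.0                   # the two fitted variables
rate = 0.01                       # gradient-descent step size
for _ in range(5000):             # repeatedly "call the optimizer"
    grad_m = grad_b = 0.0
    for x, y in zip(xs, ys):      # serial loop over the data
        err = (m * x + b) - y
        grad_m += 2 * err * x
        grad_b += 2 * err
    m -= rate * grad_m / len(xs)
    b -= rate * grad_b / len(xs)

print(m, b)                       # converges to m = 2, b = 1
```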
This script goes a step farther than serial.py, although it actually requires fewer lines of code. The outline of the code is the same as before, except this time the error is defined using tensor operations. Because tensors are used, the code can run in parallel.
You see, each point of data is treated as independent and identically distributed. Because each point of data is assumed to be independent, the calculations are too. When you use tensors, each point of data can be run on a separate computing core. There are eight points of data, so if you have a computer with eight cores it should run almost eight times faster.
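For contrast, here is the same fit in vectorized form, using NumPy arrays as a stand-in for TensorFlow tensors (a sketch, not the actual script): the per-point loop collapses into whole-array operations.

```python
import numpy as np

xs = np.arange(8.0)               # eight data points
ys = 2.0 * xs + 1.0               # exactly on the line y = 2x + 1

m, b = 0.0, 0.0
rate = 0.01
for _ in range(5000):
    err = (m * xs + b) - ys       # one array operation, no per-point loop
    m -= rate * 2.0 * float(np.mean(err * xs))
    b -= rate * 2.0 * float(np.mean(err))

print(m, b)                       # converges to m = 2, b = 1
```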
You are one buzzword away from being a professional. Instead of fitting a line to just eight datapoints, we will now fit a line to 8 million datapoints. Welcome to big data.
There are two major changes in the code. The first is bookkeeping. Because of all the data, the error must be defined using placeholders instead of actual data. Later in the code, the data is fed through the placeholders. The second change is that because we have so much data, only a sample of data is fed into the model at any given time. Each time an operation of gradient descent is called, a new sample of data is fed into the model. By sampling the dataset, TensorFlow never has to deal with the entire dataset at once. This works surprisingly well and there is theory that says it is okay to do this. There are a few conditions that the theory says are important, like the step size must decrease with each iteration. For now, who cares! It works.
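The sampling idea can be sketched the same way: draw a small random batch each step and compute the gradient on that batch only, which is stochastic gradient descent. (This is a NumPy stand-in for the placeholder-feeding mechanism; the step size is held fixed here even though, as the text notes, the theory prefers a decreasing one.)

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000                         # a large dataset, never used all at once
xs = rng.uniform(0.0, 10.0, n)
ys = 2.0 * xs + 1.0 + rng.normal(0.0, 0.1, n)   # noisy line y = 2x + 1

m, b = 0.0, 0.0
rate, batch = 0.001, 256
for _ in range(20_000):
    idx = rng.integers(0, n, batch)   # feed a fresh random sample each step
    err = (m * xs[idx] + b) - ys[idx]
    m -= rate * 2.0 * float(np.mean(err * xs[idx]))
    b -= rate * 2.0 * float(np.mean(err))

print(m, b)                           # lands near m = 2, b = 1
```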
As you worked through the scripts, you hopefully saw how the error can be anything you wanted it to be. It could be the error between a set of images and a convolutional neural network. It could be the error between classical music and a recurrent neural network. Let your imagination run wild. Once the error is defined, you can use TensorFlow to try and minimize it.
That's it. Hopefully you found this tutorial enlightening.
Carbon dating: how does it work?
by Dr Carl Wieland. An attempt to explain this very important method of dating and the way in which, when fully understood, it supports a ‘short’ timescale.
In fact, the whole method is a giant ‘clock’ which seems to put a very young upper limit on the age of the atmosphere.
Archaeologists use the exponential, radioactive decay of carbon 14 to estimate the death dates of organic material.
The stable form of carbon is carbon 12 and the radioactive isotope carbon 14 decays over time into nitrogen 14 and other particles.
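The exponential decay behind the method can be made concrete. With half-life T of about 5,730 years, the surviving carbon-14 fraction after time t is 2^(-t/T), so a measured fraction f gives t = T·log2(1/f). A small sketch (the standard textbook formula, not something taken from this article):

```python
import math

HALF_LIFE_C14 = 5730.0  # years, approximate

def age_from_c14_fraction(fraction_remaining):
    """Years elapsed, given the fraction of the original carbon-14 left."""
    return HALF_LIFE_C14 * math.log2(1.0 / fraction_remaining)

print(age_from_c14_fraction(0.5))   # one half-life: 5730.0 years
print(age_from_c14_fraction(0.25))  # two half-lives: 11460.0 years
```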
For teaching and sharing purposes, readers are advised to supplement these historic articles with more up-to-date ones suggested in the Related Articles below.
This does not mean that recalibration is bad; indeed, it is necessary. But it should make one more soberly assess any reported dates as being tentative.
The problem is that most people reporting on these issues fail to report the initial number along with the calibrated date. The Jericho controversy is soundly rooted in C-14 calibration.
Carbon is naturally in all living organisms and is replenished in the tissues by eating other organisms or by breathing air that contains carbon.
At any particular time all living organisms have approximately the same ratio of carbon 12 to carbon 14 in their tissues.
When granite rock hardens, it freezes radioactive elements in place.
Nuclear Fission and its Implications
When the neutron and its properties were discovered in 1932, the possibility of new types of nuclear reactions became apparent. The fact that neutrons are extremely small and have no charge makes them ideal nuclear missiles over a large range of energies. We have seen how this led to the production of radioisotopes among light elements and when Fermi in 1934 irradiated the heavier elements, notably uranium, with slow neutrons, many of the products were β−-active, as had been experienced earlier with lighter elements.
Keywords: Fission Product, Fast Neutron, Compound Nucleus, Fission Fragment, Natural Uranium
By relentlessly miniaturizing a pre-World War II computer technology, and combining this with a new and durable material, researchers at Case Western Reserve University have built nanoscale switches and logic gates that operate more energy-efficiently than those now used by the billions in computers, tablets and smart phones.
Researchers may be one step closer to tapping into the full potential of solar cells. The team found a way to create large sheets of nanotextured, silicon micro-cell arrays that hold the promise of making solar cells lightweight, more efficient, bendable and easy to mass produce.
With one stomp of his foot, Zhong Lin Wang illuminates a thousand LED bulbs - with no batteries or power cord. The current comes from essentially the same source as that tiny spark that jumps from a fingertip to a doorknob when you walk across carpet on a cold, dry day. Wang and his research team have learned to harvest this power and put it to work.
Polymers can behave like insulators, semiconductors and metals - as well as semimetals. Twenty researchers, under the leadership of Xavier Crispin, Docent in organic electronics at Linköping University, are behind the breakthrough.
Sandia National Laboratories researchers have devised a novel way to realize electrical conductivity in metal-organic framework (MOF) materials, a development that could have profound implications for the future of electronics, sensors, energy conversion and energy storage.
EPFL researchers have developed a method for accurately determining the toxicity of nanomaterials. By using optical techniques, they are able to measure the concentration of the oxidizing substances produced by a damaged cell. This research also offers a new way to learn more about the mechanisms of oxidative stress.
Researchers have discovered a new, potentially life-saving application for polyethylene terephthalate (PET), which is widely used to make plastic bottles. They have successfully converted PET into a non-toxic biocompatible material with superior fungal killing properties.
Researchers are one step closer to an eye drop-free reality with the development of a drug-eluting contact lens designed for prolonged delivery of latanoprost, a common drug used for the treatment of glaucoma, the leading cause of irreversible blindness worldwide.
Harvard scientists have developed a robotic bee that will hopefully attempt to alleviate the worldwide shortage of honeybees. Colony collapse disorder (CCD) has been widely reported on, yet it has not been established exactly why bee numbers have been falling so dramatically in the last number of years. The falling numbers is a very serious issue since we rely on honeybees for almost one third of the food we consume, and they account for more than $15 billion in value to U.S. agricultural crops each year.
Engineering professor Robert Wood led the team that developed the RoboBee, in the hope that they would be able to artificially pollinate a field of crops in the eventuality that the commercial pollination industry is not able to replenish the dwindling number of bees.
After more than a decade of work in the lab at Harvard’s School of Engineering and Applied Sciences, RoboBee took its maiden flight in May last year. The robotic bee weighs in at less than a tenth of a gram and is only half the size of a paperclip.
While RoboBee is currently a prototype, as soon as 10 years from now these tiny bee-sized robots could be pollinating fields near you.
Install Rails: Your Guide for Installing Ruby on Rails. This guide is designed for beginners who want to get started with a Rails application from scratch. What is it? It is the easier way to install Rails on your computer. How does it work? There's no magic here. We use all of the standard tools.
Learn Web Development with Rails: Michael Hartl's Ruby on Rails Tutorial. It does not assume that you have any prior experience with Rails. The Rails 5 version (4th edition) of the Ruby on Rails Tutorial ebook is available in EPUB, MOBI, and PDF formats, along with the Ruby on Rails Tutorial Solutions Manual for Exercises.
Getting Started with Rails — Ruby on Rails Guides. Rails is a web application framework running on the Ruby programming language. However, to get the most out of it, you need to have some prerequisites installed. On Windows, if you installed Rails through Rails Installer, you already have SQLite installed. Others can find installation instructions at the SQLite3 website.
Ruby on Rails Guides: If you have no prior experience with Ruby, you will find a very steep learning curve diving straight into Rails. Official introduction and general reference to learning and using Rails.
Ruby on Rails Installation - TutorialsPoint There are several curated lists of online resources for learning Ruby: Be aware that some resources, while still excellent, cover versions of Ruby as old as 1.6, and commonly 1.8, and will not include some syntax that you will see in day-to-day development with Rails. Ruby on Rails Installation - Learn Ruby on Rails in simple and easy steps. Please refer to a corresponding Database System Setup manual to set up your.
Learn Ruby on Rails - Updatey. Rails is a web application development framework written in the Ruby language. This tutorial is a first step on your path to learn Ruby on Rails. I'm often asked, “Where's the Rails manual?”
Documentation - Ruby. It is designed to make programming web applications easier by making assumptions about what every developer needs to get started. Here you will find pointers to manuals, tutorials and references that will come in handy. Ruby & Rails Searchable API Docs: Rails and Ruby documentation with smart...
Ruby on Rails Tutorial: The source code in the Ruby on Rails™ Tutorial is released under the MIT License. CSS: The Missing Manual by David Sawyer McFarland.
Install Rails 5.0 - Ruby on Rails Installation Guide (Nov 9, 2016). Install Ruby on Rails 5.0 on macOS, Ubuntu, or Windows. Up-to-date, detailed instructions on how to install the newest Rails release.
DevDocs — Ruby on Rails 5.0 documentation Ruby on Rails 5.0 API documentation with instant search, offline mode, keyboard shortcuts, mobile version, and more.
Extinct kangaroos may have been hopless
Extinct giant kangaroos most likely could not hop and used a more rigid body posture to move their hind limbs one at a time, according to a Brown University study published this month in the journal PLOS ONE.
The short-faced, large-bodied sthenurine kangaroo – a relative of modern-day kangaroos – became extinct in the late Pleistocene, which ended approximately 11,700 years ago. The largest of these animals had an estimated body mass almost three times that of the largest kangaroos alive today. Scientists speculate that a kangaroo of this size may not have been physically able to hop. Comparison of different sthenurine limb bones to those of other kangaroos shows a number of anatomical differences, especially in the larger species.
The physical differences suggest that these ancient kangaroo species lacked many specialized features for rapid hopping but had anatomy suggesting they supported their body with an upright posture and were able to support their weight on one leg at a time using their larger hips, knees and stabilized ankle joints. plos.org
LED breakthrough can mean warmer hues, cheaper cost
The phaseout of traditional incandescent bulbs in the United States, as well as a growing interest in energy efficiency, has given LED lighting a sales boost. But the light from white LED bulbs is generally colder than the warm glow of traditional bulbs. Plus, most of these lights are made with rare earth elements that are increasingly in demand for use in almost all other high-tech devices, adding to the cost of the technology.
But a research team led by Jing Li of Rutgers University has developed a group of lighting materials that don’t include rare earths and are instead made of copper iodide, which is an abundant compound. They tuned the materials to glow a warm white shade or various other colors using a low-cost solution process.
Their findings are reported in the Journal of the American Chemical Society. acs.org
Clemson team explores better storage for nuclear waste
Minerals that endure in nature for millions of years are inspiring a Clemson University-led research team to explore whether new materials could be developed to encase nuclear waste for safe storage.
Glass is now used to isolate nuclear waste, but a team led by Kyle Brinkman, a professor of materials science and engineering at Clemson, is hoping to develop materials that are more stable. Their work could help broaden disposal options and lower storage and disposal costs. The three-year project recently won an $800,000 research grant from the U.S. Department of Energy's Nuclear Energy University Programs.
The Clemson research is focused on crystalline ceramic that will be based on naturally occurring minerals that endure for millions of years. One example is hollandite, a mineral dug out of the Italian Alps that shows promise for housing cesium. clemson.edu
Species Detail - Cochylimorpha straminea - Species information displayed is based on all datasets.
Terrestrial Map - 10km: Distribution of the number of records recorded within each 10km grid square (ITM).
Marine Map - 50km: Distribution of the number of records recorded within each 50km grid square (WGS84).
insect - moth
4 May (recorded in 1952)
22 September (recorded in 2013)
National Biodiversity Data Centre, Ireland, Cochylimorpha straminea, accessed 21 July 2018, <https://maps.biodiversityireland.ie/Species/80277> | <urn:uuid:5341f39e-0274-4e56-b5fd-bb93101f2112> | 2.65625 | 137 | Structured Data | Science & Tech. | 36.329042 | 95,512,549 |
Nobody saw it coming.
The rocky object showed up in telescope images the night of October 19. The Pan-STARRS1 telescope, from its perch atop a Hawaiian volcano, photographed it during its nightly search for near-Earth objects, like comets and asteroids. Rob Weryk, a postdoctoral researcher at the University of Hawaii Institute for Astronomy, was the first to lay eyes on it as he sorted through the telescope’s latest haul. The object was moving “rapidly” across the night sky. Weryk thought it was probably a typical asteroid, drifting along in orbit around the sun.
“It was only when I went back and found it [in the data from] the night before that it became obvious it was something else,” he said. “I’d never expected to find something like this.”
Weryk and his colleagues scrambled to secure more telescope time to study this mysterious, fast-moving object. They called in reinforcements in the astronomy community. Initial observations suggested the space rock was a comet. When new data showed the object lacked some important properties of comets, they decided it did in fact have to be an asteroid. But it wasn’t acting like any asteroid they’d ever seen.
When astronomers examined and measured the object’s movements, they were stunned. The object didn’t originate in our solar system. It had come from somewhere else, and had traveled through interstellar space for who knows how long to get here.
Astronomers announced the discovery of the object October 26, calling it A/2017 U1. The University of Hawaii team eventually gave it a permanent name of Hawaiian origin, ‘Oumuamua, “a messenger from afar arriving first.” After weeks of follow-up observations, they have released more information about the finding in a new paper, published Monday in Nature, that confirms ‘Oumuamua is the first known interstellar object in our solar system.
‘Oumuamua is a cigar-shaped, 800-meter-long asteroid, red in color, with a surface similar to comets and organic-rich asteroids found elsewhere in our solar system, according to the astronomers. Little is known about its composition. But its existence is, for now, exciting enough.
Astronomers have long predicted this event could happen. Our solar system, in its adolescence, was a turbulent place. As the planets swirled into shape, some of the bigger ones jostled nearby material, sending some of it flying toward the edge of the solar system and beyond. Some of the rejected material could even make its way to another star. Since planet formation is quite uniform across the universe, astronomers believe ‘Oumuamua is one of these outcasts, tossed out of its home system. By this logic, there are likely pieces of our own solar system coasting somewhere in interstellar space or past another star.
Astronomers only had about two weeks after the discovery to observe ‘Oumuamua before it disappeared from the view of optical telescopes. “Because the object is moving fast, and the light we get from it is reflected sunlight, the faster it moves away from both the sun and the Earth, the faster it fades in brightness,” said Karen Meech, an astronomer at the University of Hawaii Institute for Astronomy and the lead author of the paper. She and her colleagues condensed weeks or months of work into days and raced to apply for observation time at the world’s most powerful telescopes, which is competitive and tightly scheduled. Observatories squeezed them in and other colleagues donated time out of their own projects.
Astronomers found that the properties of ‘Oumuamua are unlike any of the approximately 750,000 asteroids or comets known to humanity. “In our simulations, you can see that this could not have been from our solar system—it’s simply going too fast,” said Davide Farnocchia, a navigation engineer at NASA’s Jet Propulsion Laboratory who was responsible for figuring out ‘Oumuamua’s trajectory.
Its orbit was completely different, too, Meech said. Scientists can figure out the shape of the orbit of objects that move around our sun, a measurement known as eccentricity. The eccentricity of all objects bound to the gravity of the sun falls between 0 and 1. The highest known eccentricity, 1.058, belongs to a comet that was discovered in 1980, but astronomers interpret this, along with other measurements that stray from the norm, as the result of objects getting jostled as they moved past giant planets like Jupiter. The eccentricity of the interstellar visitor is nearly 1.2. The difference looks small on paper, but it’s big enough to confirm that ‘Oumuamua doesn’t play by our rules.
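The eccentricity thresholds described above can be sketched as a simple classifier. This is purely an illustrative helper, not part of any survey pipeline; the 0.93 value below is an arbitrary example of a long-period comet:

```python
def orbit_type(e: float) -> str:
    """Classify a heliocentric orbit by its eccentricity e.
    e < 1: ellipse (gravitationally bound to the sun)
    e = 1: parabola (marginally unbound)
    e > 1: hyperbola (unbound, e.g. an interstellar visitor)
    """
    if e < 1.0:
        return "elliptical (bound)"
    if e == 1.0:
        return "parabolic (marginally bound)"
    return "hyperbolic (unbound)"

# 'Oumuamua's measured eccentricity of ~1.2 lies clearly past the bound/unbound line
print(orbit_type(1.2))   # hyperbolic (unbound)
print(orbit_type(0.93))  # elliptical (bound)
```

Even the record 1.058 comet mentioned above could be explained by planetary jostling; 1.2 cannot, which is why the measurement settled the question.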
Naming the space rock posed an interesting challenge. Comets are usually named after their discoverers, while asteroids are named only after their orbits have been accurately computed and established. The International Astronomical Union, the organization in charge of naming these objects, didn’t have guidelines for christening an interstellar rock; in its designations, the IAU uses the letter C for comet and A for asteroid, and this thing wasn’t either, not really. “This object is only going by once,” said Paul Chodas, the manager of NASA’s Center For Near-Earth-Object Studies. The IAU eventually came up with a new designation: I, for interstellar.
Ground-based telescopes in Chile and Hawaii have already lost sight of ‘Oumuamua. The Hubble and Spitzer space telescopes are observing the rock this week, and may be able to track it until December. ‘Oumuamua is now outside the orbit of Mars. It will pass the orbit of Jupiter next May, then Neptune in 2022, and Pluto in 2024. By 2025, it will coast beyond the outer edge of the Kuiper Belt, a field of icy and rocky objects. It will take many more years for the object to reach the Oort cloud, another region of floating objects, at the edge of the solar system.
The arrival of ‘Oumuamua has ignited the astronomical community, particularly asteroid researchers like Andy Rivkin, a planetary astronomer at Johns Hopkins University. Its departure feels just as abrupt. Rivkin put his own twist on an old refrain to describe how he felt about ‘Oumuamua fading from view. “Don’t be sad that it’s over. Be happy that you saw it,” he said. “Because it is really amazing that we saw it.”
The Infrared Thermal Mappers aboard the two Viking orbiters obtained solar reflectance and infrared emission measurements of the Martian north and south polar regions during an entire Mars year. The observations were used to determine annual radiation budgets, infer annual carbon dioxide frost budgets, and constrain spring season surface and atmospheric properties with the aid of a polar radiative model. The results provide further confirmation of the presence of permanent CO2 frost deposits near the south pole and show that the stability of these deposits can be explained by their high reflectivities. In the north, the observed absence of solid CO2 during summer was primarily the result of enhanced CO2 sublimation rates due to lower frost reflectivities during spring. The results suggest that the present asymmetric behavior of CO2 frost at the Martian poles is caused by preferential contamination of the north seasonal polar cap by atmospheric dust.
Neutron capture is a nuclear reaction in which an atomic nucleus and one or more neutrons collide and merge to form a heavier nucleus. Since neutrons have no electric charge, they can enter a nucleus more easily than positively charged protons, which are repelled electrostatically.
Neutron capture plays an important role in the cosmic nucleosynthesis of heavy elements. In stars it can proceed in two ways: as a rapid process (r-process) or a slow process (s-process). Nuclei with mass numbers greater than 56 cannot be formed by thermonuclear reactions (i.e., by nuclear fusion), but can be formed by neutron capture. Neutron capture on protons yields a line at 2.223 MeV, predicted and commonly observed in solar flares.
Neutron capture at small neutron flux
At small neutron flux, as in a nuclear reactor, a single neutron is captured by a nucleus. For example, when natural gold (197Au) is irradiated by neutrons, the isotope 198Au is formed in a highly excited state, and quickly decays to the ground state of 198Au by the emission of γ rays. In this process, the mass number increases by one. This is written as a formula in the form 197Au+n → 198Au+γ, or in short form 197Au(n,γ)198Au. If thermal neutrons are used, the process is called thermal capture.
Neutron capture at high neutron flux
The r-process happens inside stars if the neutron flux density is so high that the atomic nucleus has no time to decay via beta emission in between neutron captures. The mass number therefore rises by a large amount while the atomic number (i.e., the element) stays the same. Only afterwards, the highly unstable nuclei decay via many β− decays to stable or unstable nuclei of high atomic number.
Capture cross section
The absorption neutron cross-section of an isotope of a chemical element is the effective cross sectional area that an atom of that isotope presents to absorption, and is a measure of the probability of neutron capture. It is usually measured in barns (b).
Absorption cross section is often highly dependent on neutron energy. As a generality, the likelihood of absorption is proportional to the time the neutron is in the vicinity of the nucleus. The time spent in the vicinity of the nucleus is inversely proportional to the relative velocity between the neutron and nucleus. Other more specific issues modify this general principle. Two of the most commonly specified measures are the cross-section for thermal neutron absorption, and resonance integral which considers the contribution of absorption peaks at certain neutron energies specific to a particular nuclide, usually above the thermal range, but encountered as neutron moderation slows the neutron down from an original high energy.
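The inverse-velocity dependence described above is often summarized as the "1/v law": away from resonances, the capture cross-section scales roughly as σ(E) ≈ σ_th·√(E_th/E), where E_th = 0.0253 eV is the conventional reference energy of a 2200 m/s thermal neutron. A minimal sketch under that approximation; the 100 b thermal value below is an arbitrary illustrative number, not the cross-section of any particular nuclide:

```python
import math

E_THERMAL_EV = 0.0253  # reference energy of a 2200 m/s "thermal" neutron (eV)

def capture_cross_section(sigma_thermal_b: float, energy_ev: float) -> float:
    """Scale a thermal capture cross-section (in barns) to another neutron
    energy using the 1/v approximation. Not valid near resonance peaks,
    where the cross-section can spike far above this smooth trend."""
    return sigma_thermal_b * math.sqrt(E_THERMAL_EV / energy_ev)

# A hypothetical absorber with a 100 b thermal cross-section, evaluated at 1 eV:
# faster neutrons spend less time near the nucleus, so capture is less likely.
print(round(capture_cross_section(100.0, 1.0), 1))  # 15.9 (barns)
```

This smooth 1/v trend is exactly what the resonance integral, mentioned above, corrects for at the specific energies where absorption peaks occur.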
The thermal energy of the nucleus also has an effect; as temperatures rise, Doppler broadening increases the chance of catching a resonance peak. In particular, the increase in uranium-238's ability to absorb neutrons at higher temperatures (and to do so without fissioning) is a negative feedback mechanism that helps keep nuclear reactors under control.
Neutron capture is involved in the formation of isotopes of chemical elements. As a consequence of this fact the energy of neutron capture intervenes in the standard enthalpy of formation of isotopes.
Neutron activation analysis can be used to remotely detect the chemical composition of materials. This is because different elements release different characteristic radiation when they absorb neutrons. This makes it useful in many fields related to mineral exploration and security.
The most important neutron absorber is boron-10 (10B), used as 10B4C in control rods, or as boric acid as a coolant-water additive in PWRs. Other important neutron absorbers used in nuclear reactors are xenon, cadmium, hafnium, gadolinium, cobalt, samarium, titanium, dysprosium, erbium, europium, molybdenum and ytterbium, all of which usually consist of mixtures of various isotopes, some of which are excellent neutron absorbers. These also occur in combinations such as Mo2B5, hafnium diboride, titanium diboride, dysprosium titanate and gadolinium titanate.
Hafnium, one of the last stable elements to be discovered, presents an interesting case. Even though hafnium is a heavier element, its electron configuration makes it practically identical with the element zirconium, and they are always found in the same ores. However, their nuclear properties are different in a profound way. Hafnium absorbs neutrons avidly (Hf absorbs 600 times more than Zr), and it can be used in reactor control rods, whereas natural zirconium is practically transparent to neutrons. So, zirconium is a very desirable construction material for reactor internal parts, including the metallic cladding of the fuel rods which contain either uranium, plutonium, or mixed oxides of the two elements (MOX fuel).
Hence, it is quite important to be able to separate the zirconium from the hafnium in their naturally occurring alloy. This can only be done inexpensively by using modern chemical ion-exchange resins. Similar resins are also used in reprocessing nuclear fuel rods, when it is necessary to separate uranium and plutonium, and sometimes thorium.
Stratification of water-supply reservoirs frequently results in substantial hypolimnetic oxygen depletion with a resulting negative impact on raw water quality. Hypolimnetic oxygenators are used to add oxygen to the hypolimnion without significantly disrupting the thermal density structure. The three most common devices are the airlift aerator, the Speece Cone, and the bubble-plume diffuser. A discrete-bubble model based on fundamental principles has previously been shown to hold considerable promise for predicting the performance of airlift aerators and the Speece Cone. In this paper, we have further verified this model by comparing its predictions to a series of pilot-scale experimental measurements and have also demonstrated its ability, under somewhat idealized conditions, to predict the full-scale performance of a bubble-plume diffuser in a stratified reservoir. The potential for the diffused-bubble aeration system to increase oxygen demand, and the rate at which nitrogen builds up during operation and de-gasses following destratification, are also considered.
Research Article|June 01 2001
Hypolimnetic oxygenation: predicting performance using a discrete-bubble model
Water Science and Technology: Water Supply (2001) 1 (4): 185-191.
J.C. Little, D.F. McGinnis; Hypolimnetic oxygenation: predicting performance using a discrete-bubble model. Water Science and Technology: Water Supply 1 June 2001; 1 (4): 185–191. doi: https://doi.org/10.2166/ws.2001.0083
The Dalton Minimum was a period of low sunspot count, representing low solar activity, named after the English meteorologist John Dalton, lasting from about 1790 to 1830 or 1796 to 1820, corresponding to the period solar cycle 4 to solar cycle 7.
Like the Maunder Minimum and Spörer Minimum, the Dalton Minimum coincided with a period of lower-than-average global temperatures. During that period, there was a variation of temperature of about 1 °C in Germany.
The cause of the lower-than-average temperatures and their possible relation to the low sunspot count are not well understood. Recent papers have suggested that a rise in volcanism was largely responsible for the cooling trend.
While the Year Without a Summer, in 1816, occurred during the Dalton Minimum, the prime reason for that year's cool temperatures was the highly explosive 1815 eruption of Mount Tambora in Indonesia, which was one of the two largest eruptions in the past 2000 years. One must also consider that the rise in volcanism may have been triggered by lower levels of solar output as there is a weak but statistically significant link between decreased solar output and an increase in volcanism.
Jan 2, 2016.
SQL Server uses data types to store a specific kind of value, such as numbers, dates, or text, in table columns and to use in functions such as mathematical expressions. One issue with data types is that they don't usually mix well; conversion functions make them mix better!
In SQL Server, you can use CONVERT function to convert a string with the specified format to a DATETIME value. In MySQL, you can use STR_TO_DATE function if you need a specific format, or CONVERT if you need the default format. Note that the order of parameters in SQL Server and MySQL CONVERT functions is. | <urn:uuid:b0b450bc-cf8c-4b7f-b212-0c0ea5ac3050> | 3.28125 | 140 | Documentation | Software Dev. | 64.214286 | 95,512,641 |
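A hedged illustration of the two dialects. Style 103 below is SQL Server's dd/mm/yyyy conversion style, and the date string is an arbitrary example:

```sql
-- SQL Server: CONVERT(target_type, expression, style)
SELECT CONVERT(DATETIME, '31/12/2015', 103);   -- style 103 = dd/mm/yyyy

-- MySQL: STR_TO_DATE takes the value first, then the format string
SELECT STR_TO_DATE('31/12/2015', '%d/%m/%Y');
```

Note the parameter-order difference mentioned above: SQL Server's CONVERT takes the target type first, while MySQL's CONVERT takes the expression first, so queries cannot be ported between the two by renaming alone.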
Please help with 14, 15 and 16 with step by step solutions showing equations used and answers. answer 14 not 16.1cm, answer 15 not 8.123 m/s, answer 16 not 19.6 m/s^2. (See attached file for full problem description)
A 40kg bucket is being lifted by a rope. The rope is guaranteed not to break if the tension is 500N or less. The bucket, started from rest, after being lifted for 3m, it is moving at 3m/s. Assuming that the acceleration is constant, is the rope in danger of breaking? Justify your answer by including the tension in the rope.
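A worked check of this problem, sketched with constant-acceleration kinematics; all numbers come from the statement above:

```python
m, g = 40.0, 9.8   # bucket mass (kg), gravitational acceleration (m/s^2)
v, d = 3.0, 3.0    # final speed (m/s) after being lifted a distance d (m)

# v^2 = u^2 + 2*a*d with u = 0  ->  a = v^2 / (2d)
a = v**2 / (2 * d)           # 1.5 m/s^2

# Newton's 2nd law on the bucket: T - m*g = m*a  ->  T = m*(g + a)
tension = m * (g + a)        # 452 N

print(f"a = {a} m/s^2, T = {tension:.0f} N")
print("rope safe" if tension <= 500 else "rope in danger of breaking")
```

The tension of about 452 N stays below the 500 N limit, so under the constant-acceleration assumption the rope is not in danger of breaking.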
Two cars are traveling along a straight line in the same direction, the lead car at 25.3 m/s and the other car at 29.3 m/s. At the moment the cars are 38.9 m apart, the lead driver applies the brakes, causing her car to have an acceleration of -1.93 m/s^2. Therefore, it travels 166 meters in this time and takes 13.1 s to stop.
1) A rocket sled running on a straight, level track has been used to study the physiological effects of large accelerations on astronauts. One such sled can attain a speed of 408 m/s in 1.6 s, starting from rest. a) What is the acceleration of the sled, assuming it is constant? Answer in m/s^2. b) How many g's would this acceleration correspond to?
1) If the complex number 5+2i is expressed in the exponential form Ae^(iθ), determine the values of A and θ. a) 3.3754, 32.70° b) 4.6483, 27.50° c) 5.3852, 21.80° d) 6.6722, 15.55° 2) A harmonic motion has an amplitude of 0.05 m and a frequency of 10 Hz. Determine the maximum velocity. a) 1.571 m/s b) 3.142 m/s c) 4.672 m/s d)
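For the exponential form, A is the modulus |z| and the angle is the argument arg(z); a quick numeric check (an illustrative sketch, with degrees chosen to match the multiple-choice options):

```python
import cmath
import math

z = 5 + 2j
A = abs(z)                                 # modulus: sqrt(5^2 + 2^2) = sqrt(29)
theta_deg = math.degrees(cmath.phase(z))   # argument, converted to degrees

print(round(A, 4), round(theta_deg, 2))    # 5.3852 21.8  -> option (c)
```

The same two-line recipe (abs plus phase) converts any complex number to polar/exponential form.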
1. What is the acceleration of a 20 kg sled as it is pulled across the ground with an applied force of 50 N? The force of friction working against the sled's motion is 30 N. 2. A railroad diesel engine coasting at 10 km/h runs into a stationary flatcar. The diesel weighs 4 times as much as the flatcar. Assuming the cars couple together, how fast do they coast after the collision?
A car starts from rest and moves along a straight line with an acceleration of a = (3s^(-1/3)) m/s^2, where s is in meters. Determine the car's acceleration when t = 4s.
1. What is the acceleration of a rock at the top of its trajectory when thrown straight upward? Explain whether or not the answer is zero by using the equation a=F/m as a guide. 2. What is the acceleration of a sky diver when air resistance is half the weight of the sky diver?
A pulley has an initial angular speed of 12.5 rad/s and a constant angular acceleration of 3.41 rad/s squared. Through what angle does the pulley turn in 5.26 s?
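Constant angular acceleration mirrors linear kinematics: θ = ω₀t + ½αt². A sketch with the stated numbers:

```python
omega0 = 12.5   # initial angular speed (rad/s)
alpha = 3.41    # constant angular acceleration (rad/s^2)
t = 5.26        # elapsed time (s)

# rotational analogue of x = v0*t + (1/2)*a*t^2
theta = omega0 * t + 0.5 * alpha * t**2

print(f"theta = {theta:.1f} rad")  # theta = 112.9 rad
```

Dividing by 2π would give the answer in revolutions instead, if the problem asked for it.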
A 587 kg roller coaster car (includes mass of occupants) are passing through a vertical loop. The speed of the car at the top of the loop is 15.4 m/s. What radius of curvature (in meters) must the loop have at its very top in order for the occupants to experience a normal force which is 1/3 their weight? Two students sitting
A solid disk and ring of the same mass and radius start from rest at the top of an incline. both of them start from the same vertical height of 0.5m and roll down an incline. Which of these reach the bottom first? Give reason. Find the speed of the disk when it reaches the bottom of the incline.
A simple 2.05 m long pendulum oscillates. The acceleration of gravity is 9.8 m/s2. How many complete oscillation does this pendulum make in 5.72 min?
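Using the small-angle period T = 2π√(L/g), the count follows directly. A sketch; taking the floor reflects counting only complete oscillations:

```python
import math

L, g = 2.05, 9.8   # pendulum length (m), gravitational acceleration (m/s^2)

period = 2 * math.pi * math.sqrt(L / g)   # ~2.87 s per oscillation

total_time = 5.72 * 60                    # 5.72 minutes in seconds
complete = int(total_time // period)      # whole oscillations only

print(f"T = {period:.3f} s, complete oscillations = {complete}")  # 119
```

The tower problem that follows uses the same formula run in reverse: solve T = 2π√(L/g) for L given the measured period.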
You need to know the height of a tower, but darkness obscures the ceiling. You note that a pendulum extending from the ceiling almost touches the floor and that its period is 12s. The acceleration of gravity is 9.81 m/s2. How tall is the tower in units of m?
A bullet of a mass of 20 grams leaves the barrel of a rifle with a velocity of 700m/s. The length of the barrel is 60 cm. a. What is the average acceleration of the bullet as it travels the length of the barrel. b. Find the average force on the bullet as it travels inside the barrel. c. Suppose the rifle is 1m above the surfa
An 80 kg crate is pulled up a frictionless incline of 30 degrees by a rope with an acceleration of 1.5 m/s^2. If the rope is parallel to the incline, what is the tension in the rope? Assume g = 9.8 m/s^2.
A car traveling at 20 m/s is brought to a halt by applying the brakes for 4 s. What is the magnitude of the acceleration of the car?
On my last test, there was a question: an object with a mass of 5 kg is acted upon by 4 separate forces that each have a magnitude of 10 N. I understand that there could be different resulting accelerations of the object, but I don't understand why or how, which is why I got this answer wrong. In fact, I was totally lost. What are the possible resulting accelerations?
3) A batter swings and hits a fly ball in baseball, giving it an initial velocity of 35 m/s at an angle of 40 degrees from the horizontal. The ball is impacted at 1 meter above the ground. The baseball undergoes a parabolic path that is set to just clear a 3 meter fence in center field. The center fielder, 27 meters away from th
46) A furniture crate of mass 60.8 kg is at rest on a loading ramp that makes an angle of 25.8 degrees with the horizontal. The coefficient of kinetic friction between the ramp and the crate is .272. What force (in Newtons), applied parallel to the ramp, is required to push the crate up the incline at a constant speed?
An earthquake-produced surface wave can be approximated by a sinusoidal transverse wave. Assuming a frequency of 0.476 Hz (typical of earthquakes, which actually include a mixture of frequencies), what minimum amplitude will cause objects to leave contact with the ground?
1. A dockworker is loading 20.2-kg crates onto a ship. He notices that it takes 77.2 Newtons of horizontal force to set them into motion from rest. Once in motion, it takes 58.3 Newtons of horizontal force to keep them moving at a constant speed. Determine the coefficient of static friction. Enter your answer accurate to the third decimal place.
The angular acceleration of 1200 rev/min^2 when expressed in radians/s^2 is what?
Please explain how to find acceleration of the system and tension of the wire in the following case: A mass (m=1.4kg) sits on a frictionless table. Another mass is connected to it by a wire over a frictionless pulley (mass of second object = 1.6kg). See attached file for figure.
Question 69: A student proposes a design for an automobile crash barrier in which a 1700-kg sport utility vehicle moving at 20.0 m/s crashes into a spring of negligible mass that slows it to a stop. To avoid injuring the passengers, the acceleration of the vehicle as it slows can be no more than 5.00g. a) Find the required spring constant.
If a body with a mass of X kg is sliding across a horizontal floor with an initial velocity of Y meters per second and comes to rest after X meters, how do you calculate the force of friction that stopped the object?
(See attached file for full problem description) --- As a vehicle goes from +4m/s to -1 m/s, what is its change in velocity? For 2 and 3 use the following: Vector P: 50 meters @ 110 degrees Vector Q: 35 meters @ 315 degrees Question 2: What is 3P - 3Q? Question 3: What is 2P + 3Q? Question 4: Dick Rutan,
A m1 = 16.0 kg object and a m2 = 12.5 kg object are suspended, joined by a cord that passes over a pulley with a radius of 10.0 cm and a mass of 3.00 kg (Fig. P10.46). The cord has a negligible mass and does not slip on the pulley. The pulley rotates on its axis without friction. The objects start from rest 3.00 m apart. Treating the pulley as a uniform disk, determine the speeds of the two objects as they pass each other.
The sled dog in figure drags sleds A and B across the snow. The coefficient of friction between the sleds and the snow is 0.10. a.) If the tension in rope 1 is 150 N, what is the tension in rope 2? Answer: =? N
I have no clue how to go about getting these equations. Can you help me out and explain how to do these. It is not for a test or anything. It is just a homework problem, but I can't figure it out. (See attached file for full problem description and equations) --- Atwood Machine Special Cases: An Atwood machine consist
A 5.0 kg block is pulled along a horizontal surface by a cord that exerts a force T = 12.0 N at an angle 35 degrees above the horizontal. There is no friction. (A) Draw a free-body diagram of the block, labeling all forces. (B) What must be true of the sum of the vertical components of all forces? Why? (C) Determine the acceleration of the block.
A major upgrade of the world's largest science experiment, the Large Hadron Collider (LHC) at CERN in Geneva, has commenced today, with the eventual aim of increasing the number of collisions in the large experiments by over five times and thus boosting the probability of the discovery of new physics phenomena and expanding our understanding of the Universe.
Liverpool physicists, along with other UK and international researchers, are playing a key role in this work that will, by 2026, have considerably improved the performance of the LHC.
This includes work on the construction of the accelerator, the machine contributions in HL-LHC-UK to the high intensity collimation system, the crab cavity system, the advanced beam diagnostics and cold powering.
While the LHC is able to produce up to 1 billion proton-proton collisions per second, the HL-LHC will increase this number, referred to by physicists as “luminosity”, by a factor of between five and seven, allowing about 10 times more data to be accumulated between 2026 and 2036.
This means that physicists will be able to investigate rare phenomena and make more accurate measurements. Luminosity is a key performance indicator of an accelerator: it tells you the number of particles colliding in a certain amount of time. Since discoveries in particle physics are based on collecting large amounts of data, the greater the number of collisions, the greater the chance physicists have of seeing a new particle.
Professor Carsten Welsch, Head of the Physics Department and Leader of the Liverpool contribution to HL-LHC-UK: “The high luminosity upgrade will make the Large Hadron Collider an even better accelerator in the future. We are making key contributions to the development of dedicated diagnostics for this upgrade, as well as to measurements targeting new technology, such as crab cavities, in close collaboration with other groups in the UK and our partners at CERN. These developments will allow us to fully exploit and further improve the potential of the world’s largest and highest-energy particle accelerator.”
In 2012 the LHC allowed physicists to unearth the Higgs boson, thereby making great progress in understanding how particles acquire their mass. The HL-LHC upgrade will allow the Higgs boson’s properties to be defined more accurately, and to measure with increased precision how it is produced, how it decays and how it interacts with other particles. In addition, scenarios beyond the Standard Model will be investigated, including supersymmetry (SUSY), theories about extra dimensions and quark substructure (compositeness).
The secret to increasing the collision rate is to squeeze the particle beam at the interaction points so that the probability of proton-proton collisions increases. To achieve this, the HL-LHC requires about 130 new magnets, in particular 24 new superconducting focusing quadrupoles to focus the beam and four superconducting dipoles. Both the quadrupoles and dipoles reach a field of about 11.5 tesla, as compared to the 8.3 tesla dipoles currently in use in the LHC.
Sixteen brand-new “crab cavities” will also be installed to maximise the overlap of the proton bunches at the collision points. Their function is to tilt the bunches so that they appear to move sideways – just like a crab. Much of this initial work has been carried out by a UK team and they successfully tested the new ‘crab cavities’ technology and rotated a beam of protons for the first time in May this year.
Professor Monica D’Onofrio, Team Leader of the ATLAS Liverpool group, said: ”The high luminosity LHC will allow the new upgraded experiments to reach an unprecedented sensitivity to new phenomena expected to manifest above the TeV scale.
“The Higgs boson properties will be measured with extremely high precision and we might even unveil the mysterious dark matter. The Liverpool group makes major contributions to the technical developments required for the upgrades to the ATLAS experiment.
“In collaboration with other groups in the UK, at CERN and worldwide, we work on the new silicon tracker, the ITk, which will be capable of delivering physics under the challenging conditions of the HL-LHC. It is exciting to be part of a global effort that will allow us to explore the high-energy frontier and push the limits of human knowledge.”
The HL-LHC project started as an international endeavour involving 29 institutes from 13 countries. It began in November 2011 and two years later was identified as one of the main priorities of the European Strategy for Particle Physics, before the project was formally approved by the CERN Council in June 2016. After successful prototyping, many new hardware elements will be constructed and installed in the years to come. Overall, more than 1.2 km of the current machine will need to be replaced with many new high-technology components such as magnets, collimators and radiofrequency cavities and UK scientists will have a key role to play in contributing to that work.
Professor Themis Bowcock, Head of Particle Physics at the University, added: “This is the next chapter for European Physics. Today’s Large Hadron Collider has not only delivered the discovery of the Higgs, new particles and new states of matter, but has sharpened our understanding of the Standard Model of physics beyond our best hopes and expectations.
“This is down to the amazing work of CERN, its engineers and all the scientists round the world who have worked so hard on the LHC. This new venture will extend our discovery reach (like Galileo building a better telescope!) to enable us to explore beyond the limits of the current machine; the high-luminosity LHC is an exciting new project that we are privileged to be part of.”
Another key ingredient in increasing the overall luminosity in the LHC is to enhance the machine’s availability and efficiency. For this, the HL-LHC project includes the relocation of some equipment to make it more accessible for maintenance. The power converters of the magnets will thus be moved into separate galleries, connected by new innovative superconducting cables capable of carrying up to 100 kA with almost zero energy dissipation.
To allow all these improvements to be carried out, major civil-engineering work at two main sites is needed, in Switzerland and in France. This includes the construction of new buildings, shafts, caverns and underground galleries. Tunnels and underground halls will house new cryogenic equipment, the electrical power supply systems and various plants for electricity, cooling and ventilation.
Professor Tara Shears, Professor of Physics at the University of Liverpool, explained: “HL-LHC isn’t to study what we already know, it’s to discover what we don’t. There are so many mysteries; dark matter, antimatter, gravity. HL-LHC will reveal the universe in intricate detail and, we hope, give us some answers.”
The LHC started colliding particles in 2010. Inside the 27-km LHC ring, bunches of protons travel at almost the speed of light and collide at four interaction points. These collisions generate new particles, which are measured by detectors surrounding the interaction points. By analysing these collisions, physicists from all over the world are deepening our understanding of the laws of nature.
During the civil engineering work, the LHC will continue to operate, with two long technical stop periods that will allow preparations and installations to be made for high luminosity alongside yearly regular maintenance activities. After completion of this major upgrade, the LHC is expected to produce data in high-luminosity mode from 2026 onwards. By pushing the frontiers of accelerator and detector technology, it will also pave the way for future higher-energy accelerators.
HL-LHC website: http://hilumilhc.web.cern.ch
HL-LHC-UK team website: http://www.hl-lhc-uk.org/
Our Vision: Enable NASA to realize the capabilities of assembling and servicing future spacecraft in space to solve the deepest scientific mysteries of the Cosmos.
Welcome to the NASA Exoplanet Exploration Program's in-Space Servicing & Assembly (iSSA) website. We are actively exploring the benefits of assembling future large telescopes in space rather than autonomously deploying them. One day NASA will want to launch a telescope or interferometer whose size and/or mass exceeds the launch capability of our largest rockets. Additionally, the deployment schemes may be very complicated and perhaps carry too much risk of something going wrong. In those cases, assembling these structures in space will be the enabling capability. Today, the largest telescope aperture that can be autonomously deployed is 6.5 m, to be demonstrated by NASA's James Webb Space Telescope.
But will in-space assembly be a reality in 10 years or 40 years? We don’t yet know.
We are examining the following questions:
- At what telescope aperture (or cost) would it be less expensive to assemble the telescope in space rather than deploy autonomously?
- What risks does iSSA mitigate compared to autonomous deployment? What risks are increased?
- How would a large telescope (> 15 m diameter) be assembled in space? And what would that cost?
- In what orbit would that assembly be conducted?
- What technologies are required to enable such an assembly? What is the state-of-art and what are the technology gaps?
- What technology demonstrations would help advance the technology?
- What is the optimal balance between astronauts and robots in assembling large telescopes in space?
- How will future large telescopes be serviced to extend their lives and upgrade their payload instruments?
- Why consider this now?
Stay tuned! If you are interested in joining our announcement email list, which will provide occasional updates in this activity, upcoming events, or news, please contact Brendan Crill.
Upcoming Presentations and Events:
COSPAR 2018, Pasadena, 14 – 22 July: TBD
AIAA Space 2018, Orlando, 17 — 19 September: TBD
iSAT (in-Space Assembled Telescope) Assembly and Testing Workshop, NASA/Langley Research Center (invite-only), October 2-4
Mirror Technology Days, Redondo Beach, 5 — 7 November, 2018: TBD
Recent Presentations and News:
Robotic Assembly of Space Assets: Architectures and Technologies. Dr. Rudranarayan Mukherjee (JPL) Future In-Space Operations (FISO) teleconference presentation June 27, 2018.
SPIE Astronomical Telescopes and Instrumentation: Austin TX June 10-15, 2018
- "Achieving Future Major Astronomical Goals in Space: Promises and Challenges of Servicing and In-Space Assembly of Very Large Apertures." Harley Thronson (NASA/GSFC)
- "Servicing and Assembly: Enabling the Most Ambitious Future Space Observatories" Ron S. Polidan (PSST)
- "In-space assembly application and technology for NASA's future science observatory and platform mission" Lynn Bowman (NASA/LaRC)
Building The Future: in-Space Servicing and Assembly of Large Aperture Space Telescopes JPL-Max Plank Institute for Astronomy direct imaging technology workshop. Pasadena CA April 12, 2018
What Robotics in Space Can Enable: 2025-2035, Goddard Symposium Panel, Greenbelt MD, March 15, 2018
Deep Space Gateway (DSG) Science Workshop, Denver, February 27 — March 1, 2018
- "In-Space Assembly of Large Telescopes for Exoplanet Imaging and Characterization" Nicholas Siegler , NASA/JPL
- "Servicing Large Space Telescopes with the Deep Space Gateway" Bradley Peterson, OSU
- "Starshade Assembly Enabled by the Deep Space Gateway Architecture" John Grunsfeld, NASA (retired)
- "In-Space Assembly: Infrastructure Needs" Harley Thronson, NASA/GSFC
"On-Orbit Assembly of Space Assets: A Path to Affordable and Adaptable Space Infrastructure" Dr. Danielle Piskorz and Dr. Karen L. Jones, The Aerospace Corporation.
Future In-Space Operations (FISO) Working Group presentation: February 21, 2018
- "Findings and Observations from the November 2017 NASA in-Space Servicing and Assembly Technical Interchange Meeting" Nicholas Siegler , NASA/JPL , Bradley Peterson, OSU & STScI , Harley Thronson , NASA/GSFC
"Humanity's Biggest Machines Will Be Built in Space" Popular Mechanics Feb 16, 2018
"Scientists and engineers push for servicing and assembly of future space observatories" Space News Jan 10, 2018
231st American Astronomical Society Splinter Session: Astronomers, Astronauts and Robots: Enabling the Most Ambitious Future Space Observatories
National Harbor MD Jan 9, 2018
- Ron Polidan: In-space Servicing and Assembly of Extremely Large Telescopes: Introduction and Overview
- John Grunsfeld: The Current State of Assembly and Servicing of Space Observatories
- Jim Breckinridge: Space Astronomy Without Barriers | <urn:uuid:8916732b-fb42-43bc-a3ca-0a242e20d169> | 2.53125 | 1,083 | About (Org.) | Science & Tech. | 22.67406 | 95,512,673 |
WASHINGTON (AP) — Don't blame man-made global warming for the devastating California drought, a new federal report says.
A report issued Monday by the National Oceanic and Atmospheric Administration said natural variations — mostly a La Nina weather oscillation — were the primary drivers behind the drought that has now stretched to three years.
Study lead author Richard Seager of Columbia University said the paper has not yet been published in a peer-reviewed scientific journal. He and NOAA's Martin Hoerling said 160 runs of computer models show heat-trapping gases should slightly increase winter rain in parts of California, not decrease.
"The conditions of the last three winters are not the conditions that climate change models say would happen," Hoerling said. But he said the La Nina, which is the cooler flip side of the warming of central Pacific ocean, can only be blamed for about one-third of the drought. The rest of the causes can be from just random variation, he said.
Some outside climate scientists criticized the report, saying it didn't take into effect how record warmth worsened the drought. California is having its hottest year on record, based on the first 11 months of the year and is 4.1 degrees warmer than 20th-century average, according to the National Climatic Data Center.
"This study completely fails to consider what climate change is doing to water in California," wrote Kevin Trenberth, head of climate analysis at the National Center for Atmospheric Research. He said the work "completely misses" how hotter air increases drying by evaporating more it from the ground.
In droughts, extra heat from global warming enhances the drying in a feedback effect, Trenberth and others said. But Hoerling said that is less of a factor in California because it is so near the ocean and its rain comes in storms coming off the Pacific.
Peer-reviewed studies are divided on whether the drought can be blamed on climate change. Others published earlier this year point more directly to changes in pressure of the Pacific that blocked rain from coming into California, but Hoerling and Seager dismissed them as not adequate.
Hoerling, who specializes in the complicated field of studying the cause of climate extremes, in the past has downplayed other scientists' claims that regional droughts are caused by man-made warming. However, Hoerling acknowledges that climate change is happening, will worsen weather in the future and has produced past studies attributing strange weather — such as more frequent Mediterranean droughts — to heat-trapping gases from the burning of fossil fuels.
Scientists can't even agree on how bad the drought is. Hoerling said the drought isn't even in the top five worst for California. But a new peer-reviewed study in the journal Geophysical Research Letters by researchers at the University of Minnesota and Woods Hole Oceanographic calls this "the most severe drought in the last 1,200 years."
Deke Arndt, climate monitoring chief for NOAA's National Climatic Data Center, said by some drought measures, the current California drought "is slightly more intense than, but still comparable to, the late 1970s episode. I'd put them at 1a and 1b on the list of historical multi-year drought episodes affecting California in modern times."
Seth Borenstein can be followed at http://twitter.com/borenbears | <urn:uuid:f25260a1-b366-4e0b-9f2c-fb742ac0cba7> | 2.796875 | 691 | News Article | Science & Tech. | 43.008649 | 95,512,693 |
Changes in species composition in alpine snowbeds with climate change inferred from small-scale spatial patterns
Kammer, Peter M.
- Journal Article
Rights / license: Creative Commons Attribution 3.0 Unported
Alpine snowbeds are characterised by a very short growing season. However, the length of the snow-free period is increasingly prolonged due to climate change, so that snowbeds become susceptible to invasions from neighbouring alpine meadow communities. We hypothesised that spatial distribution of species generated by plant interactions may indicate whether snowbed species will coexist with or will be out-competed by invading alpine species – spatial aggregation or segregation will point to coexistence or competitive exclusion, respectively. We tested this hypothesis in snowbeds of the Swiss Alps using the variance ratio statistics. We focused on the relationships between dominant snowbed species, subordinate snowbed species, and potentially invading alpine grassland species. Subordinate snowbed species were generally spatially aggregated with each other, but were segregated from alpine grassland species. Competition between alpine grassland and subordinate snowbed species may have caused this segregation. Segregation between these species groups increased with earlier snowmelt, suggesting an increasing importance of competition with climate change. Further, a dominant snowbed species (Alchemilla pentaphyllea) was spatially aggregated with subordinate snowbed species, while two other dominants (Gnaphalium supinum and Salix herbacea) showed aggregated patterns with alpine grassland species. These dominant species are known to show distinct microhabitat preferences suggesting the existence of hidden microhabitats with different susceptibility to invaders. These results allow us to suggest that alpine snowbed areas are likely to be reduced as a consequence of climate change and that invading species from nearby alpine grasslands could outcompete subordinate snowbed species. On the other hand, microhabitats dominated by Gnaphalium or Salix seem to be particularly prone to invasions by non-snowbed species.
Journal / series: Web Ecology
Pages / Article No.
Organisational unit02350 - Departement Umweltsystemwissenschaften / Department of Environmental Systems Science
00012 - Lehre und Forschung
02703 - Institut für Agrarwissenschaften / Institute of Agricultural Sciences (IAS)
09618 - Schöb, Christian (SNF-Professur)
Library from Pennsylvania
THOMAS, W.Va. — Towering up to 228 feet above the Appalachian Mountain ridge — far above the treeline — are windmills lined up like marching aliens from War of the Worlds. Up close, they emit a high-pitched hum. From a few hundred yards away, their blades — extending 115 feet from center — cause a steady whooshing sound as they cut through the air at up to 140 mph at the tips.
Jon Boone's response, published in The Caledonian Record in August 2005, to those who challenged the authenticity of his DVD "Life Under a Windplant".
The generation of electricity by wind is a growing industry in Pennsylvania. While wind energy is certainly an attractive alternative to the pollution produced by fossil fuel power plants, all potential environmental impacts must be measured if electricity produced this way is to truly qualify as “green energy.” Surprisingly, only minimal environmental studies need to be done to site a wind farm in Pennsylvania. Improper siting of some wind farms in the U.S. has impacted migratory bird, resident bird, and bat populations. We present bird-impaction data from an industrial facility 30 km south of a proposed wind farm in Luzerne County, Pennsylvania, that suggest caution in the blind embrace of this energy technology. Siting decisions are made at the local government levels and are primarily based on economic incentives. We argue (a) that this energy alternative must incorporate robust site-specific impaction studies at each wind farm to demonstrate effects throughout the Commonwealth, and (b) that local government officials be given the guidance necessary to encourage and provide environmental oversight to wind farms in their areas.
The first glimpse of the turbines from state Route 6 presents a surreal image like something from a Road Warrior movie.
"These projects are very expensive and wouldn't happen without tax subsidies," he [Glenn Schleede] said. "Ordinary taxpayers are getting taken to the cleaners on this."
Capacity Factor by Month: (1) Mountaineer Windplant, WV, (2) Meyersdale Windplant, PA, (3) Mill Run Windplant, PA, and (4) Waymart Windplant, PA. This information, by month, highlights the issue of whether wind is available when electricity is needed. The charts reflect strong winds in the winter months and considerably lighter winds in the summer when demand for electricity is expected to peak.
These levels (noise) are much higher than predicted by the company.
After reviewing data collected during a groundbreaking research effort, the Bats and Wind Energy Cooperative (BWEC), a government-conservation-industry partnership, reported today substantial bat kills at two wind farms in the mid-Atlantic region between August 1 and September 13 of 2004. The report summarizes the first year’s research on potential causes and solutions. The research included the most detailed studies ever performed on bat fatality at wind sites and provides a foundation for further efforts aimed at better understanding why bats are being killed and how to minimize future fatalities.
The BWEC implemented research to improve fatality search protocols for bats and to evaluate interactions between bats and wind turbines from 31 July through 13 September 2004, the period when bat fatalities have most often been reported at wind facilities. The goal was to establish a basis for developing solutions to prevent or minimize threats to bats at wind energy facilities.
Written on behalf of the Friends of the Appalachian Highlands this letter addresses the threat to the Indiana Bat.
Dear Mr. Boone: I am in receipt of the information you sent regarding the Meyersdale wind project and the risk to bats, specifically Indiana bats in that area, and your request for my opinion on this project. I have also done some research on my own concerning wind turbines and their effects on bats, to determine what data are available in the scientific literature in this area. I base this opinion on data and scientific literature, and my 16 years experience studying bat biology and bat ecology.
The story reveals that Radnor officials were misled and don’t understand that commercial wind energy is not an environmentally benign source of electricity. The officials are probably not aware of certain facts such as the following:
Dan Boone takes a close look at the landscape impact of the Mountaineer Wind Energy (WV) and Meyersdale (PA) industrial wind plants. | <urn:uuid:bd6ef508-aa87-4887-be84-57bafbc664c8> | 2.859375 | 887 | Content Listing | Science & Tech. | 36.283654 | 95,512,719 |
To err is human, to forgive, divine.
Alexander Pope, An Essay on Criticism
Exception handling was dealt with briefly at the end of chapter 3, but it's such an important topic that it's worth looking at in more detail. The Ada exception handling mechanism divides error handling into two separate concerns. One of them is detecting errors and the other is dealing with the errors in an appropriate way, and the two should be treated as two completely separate aspects of error handling. It also makes your programs easier to construct as well as more readable; a procedure is written as a section which deals with processing valid data and a separate section which deals with what to do when things go wrong. Thus, when you're writing the main part of the procedure you don't have to worry about how to deal with errors, and when you're reading it you don't have to get bogged down in the complexities of the error handling until after you've read and understood what happens with correct data. One way of writing a program is to do it incrementally: write the program without worrying too much about error handling initially, test it to make sure it works with correct data, and then concentrate on improving the exception handling once everything else is working. Debugging might also reveal exceptions that are raised in situations that you've overlooked, but it is much easier to add extra exception handlers than it is to disturb existing code and then have to go back and test it all again.
This separation of concerns is particularly important when designing packages which could be used by several different programs. It's always tempting to try and deal with errors as soon as you detect them, but one of the basic rules of package design is that you should never try to handle any errors within the package itself. The package may be able to detect errors but it will not usually know how to deal with them. Handling errors is something which is normally dependent on the overall program, and a package never knows anything about the program that is using it. What may be appropriate in one program may be totally inappropriate in another, and building in any assumptions about how an error should be handled will prevent you from reusing the package in more than one program. Displaying an error message on the screen and then halting may be appropriate in some situations, but in other situations there may not be a screen (e.g. your package is used by a program which controls a washing machine) or it may be a bad idea to halt (e.g. the package is used by a program in an aircraft's navigational system). Instead you should define your own exceptions and raise them if an error is detected; this will allow the main program to decide how best to deal with the error.
Ada defines four standard exceptions. You've already met Constraint_Error; this is raised whenever a value goes outside the range allowed by its type. You're very much less likely to meet the others (Storage_Error, Tasking_Error and Program_Error). Storage_Error is raised when you run out of memory. This is only likely to happen when your program is trying to allocate memory dynamically, as explained in chapter 11. Tasking_Error can occur if a program is composed of multiple tasks executing in parallel, as explained in chapter 19, and a task can't be started for some reason or if you try to communicate with a task that has finished executing. Program_Error is raised in a variety of situations where the program is incorrect but the compiler can't detect this at compile time (e.g. run-time accessibility checks on access types as explained in chapter 11, or reaching the end of a function without executing a return statement).
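As a quick illustration (the subtype name Score is invented for this sketch, and Ada.Text_IO is assumed to be visible), a value which goes outside the range of a constrained subtype raises Constraint_Error when the statement is executed:

declare
   subtype Score is Integer range 0 .. 100;
   S : Score := 100;
begin
   S := S + 1;   -- 101 is outside the range 0..100: Constraint_Error
exception
   when Constraint_Error =>
      Put_Line ("Score out of range");
end;

The assignment is perfectly legal as far as the compiler is concerned; the range check is only performed at run time.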
Ada allows you to define your own exceptions in addition to the standard exceptions, like this:
Something_Wrong : exception;
This declares an exception called Something_Wrong. The standard exceptions are of course declared in the package Standard, just as the standard types like Integer are. Other exceptions are defined in other packages such as Ada.Text_IO; Data_Error is an example of this. You may have noticed that Data_Error is not in the list of standard exceptions above. It is actually declared in a package called Ada.IO_Exceptions, and redeclared by renaming inside Ada.Text_IO (and all the other input/output packages) like this:
Data_Error : exception renames Ada.IO_Exceptions.Data_Error;
Although an exception declaration looks like a variable declaration, it isn't; about the only thing you can do with an exception (apart from handling it when it is raised) is to raise it using a raise statement:

raise Something_Wrong;
When you raise an exception, the system looks for a handler for that exception in the current block. If there isn't one, it exits from the block (going to the line after end in the case of a begin ... end block, or returning to where it was called from in the case of a procedure or function body) and looks for a handler in the block it now finds itself in. In the worst case where there is no handler anywhere it will eventually exit from the main program, at which point the program will halt and an error will be reported.
Note that if an exception is raised inside an exception handler, you exit from the block immediately and then look for an exception handler in the block you've returned to. This prevents you getting stuck in an endless exception handling loop. The same thing happens if an exception is raised while elaborating declarations in a declaration section; this avoids the possibility of an exception handler referring to a variable that hasn't been created yet. Until you've got past the begin at the start of the block you're not counted as being inside it and hence not subject to the block's exception handlers; once an exception occurs and you've entered the exception handler section, you're counted as having left the block so once again you're not subject to that block's exception handlers. In other words, the exception handler only applies to the statements in the body of the block between begin and exception.
Sometimes you will want to do some tidying up before exiting a block even if you don't actually want to handle the exception at that point. For example, you may have created a temporary file on disk which needs to be deleted before you exit from the block. Here's how you can deal with this situation:
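The propagation rules can be sketched like this (the procedure name P is arbitrary; Something_Wrong is the exception declared above, and Ada.Text_IO is assumed to be visible):

procedure P is
begin
   begin
      raise Something_Wrong;   -- no handler in this inner block
   end;
   Put_Line ("This line is never reached");
exception
   when Something_Wrong =>     -- found after exiting the inner block
      Put_Line ("Handled in P");
end P;

Raising the exception abandons the rest of the inner block, and since the inner block has no handler the search continues in P itself, skipping the statement after the inner block.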
begin
   -- create a temporary file
   -- do something that might raise a Constraint_Error
   -- delete the temporary file
exception
   when Constraint_Error =>
      -- delete the temporary file
      raise Constraint_Error;
end;
The temporary file will be deleted whether an exception occurs or not, either in the course of normal processing or from within the exception handler. A raise statement is used inside the exception handler to raise the same exception again, so that you will immediately exit from the block and look for another handler to handle the exception properly.
Sometimes you don't know exactly which exception has occurred. If you have an others handler or a single handler for several different exceptions, you won't know which exception to raise after you've done your tidying up. The solution is to use a special form of the raise statement which is only allowed inside an exception handler:
begin
   -- create a temporary file
   -- do something that might raise an exception
   -- delete the temporary file
exception
   when others =>
      -- delete the temporary file
      raise;   -- re-raise the same exception
end;
Raise on its own will re-raise the same exception, whatever it might be.
You may want to print out a message which says what the exception was as part of the handler. There is a standard package called Ada.Exceptions which contains some functions to give you this sort of information. Ada.Exceptions defines a data type called Exception_Occurrence and provides a function called Exception_Name which produces the name of the exception as a string from an Exception_Occurrence. You can get a value of type Exception_Occurrence by specifying a name for it as part of your exception handler:
begin
   ...
exception
   when Error : Constraint_Error | Data_Error =>
      Put ("The exception was ");
      Put_Line (Exception_Name (Error));
end;
The name of the Exception_Occurrence is prefixed to the list of exceptions in the handler (the name chosen was Error in this case). There are some other useful functions like Exception_Name; in particular, Exception_Message produces a string containing a short message giving some details about the exception, and Exception_Information produces a longer and more detailed message. Exception_Occurrence objects can also be useful for passing exception information to subprograms called from within an exception handler.
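As an example of passing an occurrence to a subprogram, this hypothetical procedure reports the name and message of any exception it is given (it assumes Ada.Text_IO is visible):

procedure Report (Error : in Ada.Exceptions.Exception_Occurrence) is
begin
   Put_Line ("Exception: " & Ada.Exceptions.Exception_Name (Error));
   Put_Line ("Message:   " & Ada.Exceptions.Exception_Message (Error));
end Report;

A handler of the form when Error : others => Report (Error); could then display details of whatever exception occurred.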
The standard exceptions will have a standard message associated with them. If you want to supply a message for an exception that you've defined yourself (or supply a different message for an existing exception) you can use the procedure Raise_Exception:
Raise_Exception (Constraint_Error'Identity, "Value out of range");
This has the same effect as raise Constraint_Error except that the message Value out of range will be associated with the exception occurrence. Since an exception is not a data object, you can't use an exception as a parameter to a subprogram. You can get a data object representing an exception using the Identity attribute, which produces a value of type Ada.Exceptions.Exception_Id, and it is this value which is passed as the first parameter to Raise_Exception.
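Putting the pieces together, the message supplied to Raise_Exception can be retrieved in the handler using Exception_Message. This sketch assumes an exception declaration like Something_Wrong above, with Ada.Exceptions and Ada.Text_IO visible:

begin
   Raise_Exception (Something_Wrong'Identity, "Value out of range");
exception
   when Error : Something_Wrong =>
      Put_Line (Exception_Message (Error));   -- displays "Value out of range"
end;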
One of the major sources of exceptions is when dealing with input and output. Users will inevitably supply invalid input from time to time, due to typing errors if nothing else, and your program must be prepared to cope with this. A typical situation arises when a user types in the name of a file that the program is supposed to read some data from or write something to; the filename might be misspelt, the file might be in another directory or on another disk, the disk might be full, the directory might be write protected. In these cases it is often unfair just to terminate the program; the user should generally be given another chance to type a filename in again.
To illustrate this, I'll briefly describe how file input/output works in Ada. File input/output is fundamentally no different to dealing with the keyboard and screen. The Text_IO package provides all the necessary facilities. The main difference is that you need to open a file before you can use it, and you must close it when you've finished using it. To open a file you first of all have to declare an object of type File_Type (defined in Ada.Text_IO):
File : Ada.Text_IO.File_Type;
Now you can open the file using the procedure Open:
Open (File, Mode => Ada.Text_IO.In_File, Name => "diary");
This opens an input file whose name is diary. The Mode parameter is an enumeration with three possible values. In_File means the file is to be opened for input, as in this case. Out_File means the file is to be opened for output; any existing contents of the file will be lost in this case. If you don't want to do this you can use Append_File, which means that whatever output you write to the file will be appended to the end of the file's existing contents (if any). If the file doesn't already exist, Open will generate a Name_Error exception. If you want to create a brand new file for output, you can use the procedure Create:
Create (File, Name => "diary");
This will create a new output file with the given name if it doesnt already exist, or destroys the existing contents of the file if it does exist. You can optionally supply a Mode parameter as with Open; the default is Out_File, but you might want to use Append_File instead so that you will append your output to the file if it already exists. If the name isnt legal for some reason a Name_Error exception will be raised; for example, some systems cant handle filenames containing asterisks or question marks. The other exceptions that can occur when you try to open or create a file are Status_Error, which indicates that the file is already open, and Use_Error, which is raised if you cant open or create the file for any other reason (e.g. if there is no more disk space).
When you've finished using a file you should close it by calling the procedure Close:

Close (File);
While the file is open you can use Get to read it if it's an input file and Put or Put_Line to write to it if it's an output file. The only difference from using the keyboard and the screen is that you have to specify the file you want to read from or write to as the first parameter:
Get (File, C);           -- get a character from File into C
Put (File, "xyz");       -- write a string to File
Put_Line (File, "xyz");  -- same, and then start a new line
New_Line (File);         -- start a new line in File
Skip_Line (File);        -- skip to start of next line of File
If you try to read from a file when you've reached the end of it, an End_Error exception will be raised. To avoid this you can test if you're at the end of the file using the function End_Of_File, which is defined in Ada.Text_IO like this:
function End_Of_File (File : File_Type) return Boolean;
This returns the value True if you're at the end of the file.
To illustrate how exception handling is used with file I/O, here's an example program which counts the number of words in a file:
with Ada.Text_IO, Ada.Integer_Text_IO;
use  Ada.Text_IO, Ada.Integer_Text_IO;
procedure Word_Count is
   File    : File_Type;
   Name    : String (1 .. 80);
   Size    : Natural;
   Count   : Natural := 0;
   In_Word : Boolean := False;
   Char    : Character;
begin
   -- Open input file
   loop
      begin
         Put ("Enter filename: ");
         Get_Line (Name, Size);
         Open (File, Mode => In_File, Name => Name (1 .. Size));
         exit;
      exception
         when Name_Error | Use_Error =>
            Put_Line ("Invalid filename -- please try again.");
      end;
   end loop;

   -- Process file
   while not End_Of_File (File) loop
      -- The end of a line is also the end of a word
      if End_Of_Line (File) then
         In_Word := False;
      end if;

      -- Process next character
      Get (File, Char);
      if In_Word and Char = ' ' then
         In_Word := False;
      elsif not In_Word and Char /= ' ' then
         In_Word := True;
         Count := Count + 1;
      end if;
   end loop;

   -- Close file and display result
   Close (File);
   Put (Count);
   Put_Line (" words.");
end Word_Count;
The program is divided into the three traditional parts: initialisation (open the file), main processing (process the file) and finalisation (close the file and display the results). Opening the file involves getting the name of the input file from the user and then attempting to open it. This is done in a loop, and the loop is exited as soon as the input file is successfully opened. If attempting to open the file raises an exception, the exception handler for the block inside the loop displays the error message; the loop will then be executed again to give the user a chance to type in the filename correctly.
Once the file has been opened, the main processing loop begins. A variable called In_Word is used to keep track of whether we are in the middle of processing a word; initially it's set to False to indicate that we're not processing a word. A non-space character means that the start of a word has been seen, so In_Word is set True and the word count in Count is incremented. Once inside a word, characters are skipped until the end of the current line is reached or a space is read, using the function End_Of_Line to test if the current position is at the end of a line of input. Either of these conditions signals the end of a word, so In_Word gets set back to False. When the end of the file is reached, the loop terminates. The input file is then closed and the value of Count is displayed.
7.1  Modify the guessing game program from exercise 5.1 to provide comprehensive exception handling to guard against input errors of any kind.

7.2  Write a program which asks the user for the name of an input file and an output file, reads up to 10000 integers from the input file, sorts them using the Shuffle_Sort procedure from the previous chapter, and then writes the sorted data to the output file. Check it to make sure it copes with errors arising from non-existent input files, write-protected destinations for output files, illegal filenames, and (one that often gets overlooked) using the same name for both files.

7.3  Modify the package JE.Dates from the end of chapter 4 to define an exception called Date_Error, and get the Day_Of function to raise a Date_Error exception if it is called with an invalid date.

7.4  Modify the playing card package from exercise 6.3 to define an exception which will be raised if you try to deal a card from an empty pack or replace a card in a full pack. Use the package to implement a simple card game called Follow The Leader, in which a human player and the computer player are dealt ten cards each. The object of the game is to get rid of all the cards in your hand. The first player lays down a card, and each player in turn has to play a card which matches either the suit or the value of the previously played card (e.g. the Jack of Clubs could be followed by any Jack or any Club). If a player has no card that can be used to follow on, an extra card must be taken from the pack. If the pack becomes empty, the cards that were previously played (except the last one) must be returned to the pack. The first player to play out a hand and have no cards left is the winner.
This file is part of Ada 95: The Craft
of Object-Oriented Programming by John English.
Copyright © John English 2000. All rights reserved.
Permission is given to redistribute this work for non-profit educational use only, provided that all the constituent files are distributed without change.
$Revision: 1.2 $
$Date: 2001/11/17 12:00:00 $ | <urn:uuid:94550289-3c19-46af-9dbe-f9121a212cdd> | 2.796875 | 3,909 | Documentation | Software Dev. | 44.715515 | 95,512,720 |
I hope we all agree that methods and classes should be small and have only a few dependencies. This point of view is widely accepted, while the interpretation of “small” varies. There is lots of literature out there about this. But what about packages?
Some people consider packages just as namespaces. So packages are just things that allow you to reuse names for classes!?
I thought so about methods a couple of years ago: Just containers for reusable code.
Today I disagree with both statements. Methods are a very important tool to name a thing, to separate it from the rest, even when it is called only once.
Likewise packages have an important role even when we ignore the issue of names. Packages bundle together some functionality. They name that functionality and they should encapsulate it so we might reuse it. And just as with methods, a package should have a single purpose. It is probably a ‘bigger’ purpose than that of a method, but still it should do only stuff directly related to that purpose.
Unfortunately in many projects this doesn’t work out well.
Here is a challenge. Pick a random package out of your current project. For this package, answer the following questions:
- What is the purpose of that package? Please use a single simple sentence. Does this sentence match the name of the package?
- Go through the classes of that package: do they all work together to achieve the purpose of the package? Or are there classes in there that just ended up there by accident?
- Go through all the other classes of your system. Is none of them concerned with the single purpose of your package? Or are there classes that float around somewhere else that really should be in the package you are looking at?
- If your coworker in the next project needs basically the same package, how difficult would it be to extract the package from your code base and build it as a standalone jar?
In my experience it is very likely that the answers to the questions above are rather depressing. They certainly are in most projects I worked with.
Even when the classes and methods in that project are reasonably clean.
Why is that?
I think the reason is that problems with packages are not as obvious as those with methods or classes. If a method spans the complete monitor, you see that every time you work with the method. And since the method is long, there is probably much work to be done in it. The same goes for classes. But with packages it is different. I spend whole days coding without looking at what is inside a package. I open my classes with shortcuts and name-based search, so there is no need to look inside packages.
So you won’t notice that classes concerned with completely different issues are together in one package. You won’t notice that the number of classes in a package exceeds any reasonable threshold.
And when it comes to the last question, the question about dependencies, it becomes really ugly. What other packages does a package depend on? Which class contains that dependency? There is very limited tool support for this kind of question. And the question only gets asked late in a project, maybe when a sister project gets spawned that should reuse some of the code base, so it should move into a common base project.
Since I have been there a couple of times, I recommend implementing a couple of tests right at the beginning of a project using either JDepend or Dependency Finder:
- No cyclic dependencies between packages
- A maximum number of classes per package
- A fixed naming schema like <domain>.<project>.<module>.<layer>.<rest-of-package-name>
- A fixed direction of dependencies between modules (modules are vertical slices, often based on some domain concept)
- A fixed direction of dependencies between layers (gui, presentation, domain, persistence are typical examples)
But be warned: these tests tend to be hard to keep satisfied. If you put in the extra effort to keep your packages clean, though, it has a significant positive impact on your application structure.
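The first rule in the list above, "no cyclic dependencies between packages", is something JDepend can report directly from compiled classes. As a minimal self-contained sketch of what that check boils down to, the class below runs a depth-first search over a hand-declared dependency map; the package names and the class name PackageCycleCheck are made up for illustration, and in a real build the map would come from the tool's bytecode analysis rather than being typed in.

```java
import java.util.*;

// Sketch of the "no cyclic dependencies between packages" test.
// The dependency map is declared by hand here; JDepend or Dependency
// Finder would extract it from the compiled classes in a real project.
public class PackageCycleCheck {

    // depends.get(p) = the packages that package p depends on
    static boolean hasCycle(Map<String, List<String>> depends) {
        Set<String> done = new HashSet<>();
        Set<String> inProgress = new HashSet<>();
        for (String p : depends.keySet()) {
            if (dfs(p, depends, done, inProgress)) return true;
        }
        return false;
    }

    static boolean dfs(String p, Map<String, List<String>> depends,
                       Set<String> done, Set<String> inProgress) {
        if (inProgress.contains(p)) return true;   // back edge: cycle found
        if (done.contains(p)) return false;        // already fully explored
        inProgress.add(p);
        for (String dep : depends.getOrDefault(p, List.of())) {
            if (dfs(dep, depends, done, inProgress)) return true;
        }
        inProgress.remove(p);
        done.add(p);
        return false;
    }

    public static void main(String[] args) {
        // Layered dependencies in one direction only: no cycle
        Map<String, List<String>> clean = Map.of(
            "app.gui", List.of("app.domain"),
            "app.domain", List.of("app.persistence"),
            "app.persistence", List.of());
        // Two packages depending on each other: a cycle
        Map<String, List<String>> cyclic = Map.of(
            "app.gui", List.of("app.domain"),
            "app.domain", List.of("app.gui"));
        System.out.println(hasCycle(clean));   // false
        System.out.println(hasCycle(cyclic));  // true
    }
}
```

Wrapped in a JUnit test that fails the build as soon as a cycle appears, a check like this keeps the problem visible from day one instead of surfacing when a sister project tries to reuse the code.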
Published at DZone with permission of Jens Schauder , DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own. | <urn:uuid:5a1dd0c3-c519-4b45-9858-1aa768545ade> | 2.65625 | 892 | Comment Section | Software Dev. | 51.209699 | 95,512,729 |
Using a newly developed method researchers at the Institute of Molecular Biotechnology of the Austrian Academy of Sciences (IMBA) have been able to shed light on the complexity of genome reorganization occurring during the first hours after fertilization in the single-cell mammalian embryo. Their findings have recently been published in the journal Nature.
Chromatin structures of male and female nuclei are distinct from one another and strikingly different from those of any other interphase cells. This was observed by Ilya Flyamer, Johanna Gassler, Maxim Imakaev and colleagues using an adapted single-nucleus Hi-C protocol.
The team of researchers (from three continents) has discovered that the egg and sperm genomes that co-exist in the single-cell embryo or zygote have a unique structure compared to other interphase cells.
Understanding this specialized chromatin “ground state” has the potential to provide insights into the yet mysterious process of epigenetic reprogramming to totipotency, the ability to give rise to all cell types.
Fusion of the egg and sperm, two highly differentiated cell types, leads to formation of the single-cell embryo or zygote. During the first hours after fertilization, the two separate genomes undergo reprogramming events that presumably function to erase the memory of the differentiated cell type and establish a state of totipotency. The mechanisms underlying totipotency remain poorly understood but are essential for generating a new organism from a fertilized egg.
A major advance in single-cell genomics
After fertilization, maternal and paternal genomes erase some of the epigenetic memory of the previously differentiated states in order to facilitate the beginning of new life as the zygote. In the first cell cycle after fertilization the maternal genome inherited from the oocyte (egg) and the paternal genome provided by sperm exist as separate nuclei in the zygote. The two genomes are marked by distinct epigenetic modifications acquired during reprogramming. Whether the 3D chromatin structure of the maternal and paternal genomes is also distinct was not known.
An international team headed by Kikuë Tachibana-Konwalski from IMBA in collaboration with researchers from the Massachusetts Institute of Technology (MIT) in Boston and the Lomonosov Moscow State University (MSU) aimed to uncover how chromatin structure is reorganized during the mammalian oocyte-to-zygote transition. Using next-generation sequencing, bioinformatics analysis and mathematical modeling performed by Maxim Imakaev in Leonid Mirny’s lab, the researchers identified specific patterns that emerge during genome reorganization in mouse oocytes and zygotes.
The low availability of starting material made it necessary to develop a new single-nucleus Hi-C (snHi-C) method that made it possible to analyze the chromatin architecture in oocytes and single-cell embryos for the first time. Using this method, features of genomic organization including compartments, topologically associating domains (TADs) and chromatin loops were detected in single cells when averaged over the genome.
“Our method allowed us to detect chromatin contacts ten times more efficiently than a previous method. Because of this we were able to find differences in genome folding on the level of single cells: these cell-to-cell variations were missed in conventional Hi-C due to the averaging over millions of cells,” says Ilya Flyamer, former Vienna Biocenter (VBC) summer student and then Master student and one of the first authors of the study.
Contrasting behaviour of maternal and paternal chromatin
“Additionally, we found unique differences in the three-dimensional organization of the zygote’s chromatin compared to other interphase cells. What was even more interesting is that maternal and paternal genomes of the zygote seem to have different organizations within the same cell. It seems like the chromatin architecture is reorganized after fertilization, and that this reorganization happens differentially for the maternal and the paternal genomes,” explained Johanna Gassler, PhD student at IMBA and one of the first authors of the study.
Senior author and IMBA group leader Kikuë Tachibana-Konwalski is fascinated by the secrets of the mammalian oocyte-to-zygote transition and has been studying the miracle of life, and in particular the very first molecular steps, for many years. She also hopes the findings will generate new insights for the emerging field of totipotency.
“To place the power of the zygote into context: Reprogramming to pluripotency by the Yamanaka factors takes several days with limited efficiency, whilst reprogramming to totipotency occurs in the zygote within hours. How this is achieved remains one of the key unknowns in biology. By studying the chromatin state of zygotes, we aim to gain insights into this mechanism, which could also have applications for regenerative medicine,” says Tachibana-Konwalski, underlining her excitement for the potential applications for her favourite research topic.
Original publication: “Single-nucleus Hi-C reveals unique chromatin reorganization at oocyte-to-zygote transition”, Flyamer, Gassler et al., Nature, DOI 10.1038/nature21711
IMBA - Institute of Molecular Biotechnology is one of the leading biomedical research institutes in Europe focusing on cutting-edge functional genomics and stem cell technologies. IMBA is located at the Vienna Biocenter, the vibrant cluster of universities, research institutes and biotech companies in Austria. IMBA is a subsidiary of the Austrian Academy of Sciences, the leading national sponsor of non-university academic research.
Press picture: http://de.imba.oeaw.ac.at/index.php?id=516
Mag. Ines Méhu-Blantar | idw - Informationsdienst Wissenschaft
“The process is simple,” said lead researcher and author Somenath Mitra, PhD, professor and acting chair of NJIT’s Department of Chemistry and Environmental Sciences. “Someday homeowners will even be able to print sheets of these solar cells with inexpensive home-based inkjet printers. Consumers can then slap the finished product on a wall, roof or billboard to create their own power stations.”
“Fullerene single wall carbon nanotube complex for polymer bulk heterojunction photovoltaic cells,” featured as the June 21, 2007 cover story of the Journal of Materials Chemistry published by the Royal Society of Chemistry, details the process. The Society, based at Oxford University, is the British equivalent of the American Chemical Society.
Harvesting energy directly from abundant solar radiation using solar cells is increasingly emerging as a major component of future global energy strategy, said Mitra. Yet, when it comes to harnessing renewable energy, challenges remain. Expensive, large-scale infrastructures such as wind mills or dams are necessary to drive renewable energy sources, such as wind or hydroelectric power plants. Purified silicon, also used for making computer chips, is a core material for fabricating conventional solar cells. However, the processing of a material such as purified silicon is beyond the reach of most consumers.
“Developing organic solar cells from polymers, however, is a cheap and potentially simpler alternative,” said Mitra. “We foresee a great deal of interest in our work because solar cells can be inexpensively printed or simply painted on exterior building walls and/or roof tops. Imagine some day driving in your hybrid car with a solar panel painted on the roof, which is producing electricity to drive the engine. The opportunities are endless.”
The science goes something like this. When sunlight falls on an organic solar cell, the energy generates positive and negative charges. If the charges can be separated and sent to different electrodes, then a current flows. If not, the energy is wasted. Link cells electronically and the cells form what is called a panel, like the ones currently seen on most rooftops. The size of both the cell and panels vary. Cells can range from 1 millimeter to several feet; panels have no size limits.
The solar cell developed at NJIT uses a carbon nanotube complex, which, by the way, is a molecular configuration of carbon in a cylindrical shape. The name is derived from the tube’s minuscule size. Scientists estimate nanotubes to be 50,000 times smaller than a human hair. Nevertheless, just one nanotube can conduct current better than any conventional electrical wire. “Actually, nanotubes are significantly better conductors than copper,” Mitra added.
Mitra and his research team took the carbon nanotubes and combined them with tiny carbon Buckyballs (known as fullerenes) to form snake-like structures. Buckyballs trap electrons, although they can’t make electrons flow. Add sunlight to excite the polymers, and the buckyballs will grab the electrons. Nanotubes, behaving like copper wires, will then be able to make the electrons or current flow.
“Using this unique combination in an organic solar cell recipe can enhance the efficiency of future painted-on solar cells,” said Mitra. “Someday, I hope to see this process become an inexpensive energy alternative for households around the world.”
Sheryl Weinstein | EurekAlert!
This identification guide should enable non-specialists to identify the 57 species of centipede found on the island of Britain, including 7 species known only from greenhouses. It includes dichotomous and tabular keys backed up by concise confirmatory notes.
Centipedes are some of the commonest larger arthropods found in our greenhouses, sheds, gardens, waste ground, woodland, moorland and coastal habitats, but the standard key (Eason, 1964) is now out of date and out of print. This new key includes the additional ten species discovered during the last 40 years.
A small rectangle of pink glass, about the size of a postage stamp, sits on Professor Amy Shen’s desk. Despite its outwardly modest appearance, this little glass slide has the potential to revolutionize a wide range of processes, from monitoring food quality to diagnosing diseases.
The slide is made of a ‘nanoplasmonic’ material — its surface is coated in millions of gold nanostructures, each just a few billionths of a square meter in size. Plasmonic materials absorb and scatter light in interesting ways, giving them unique sensing properties. Nanoplasmonic materials have attracted the attention of biologists, chemists, physicists and material scientists, with possible uses in a diverse array of fields, such as biosensing, data storage, light generation and solar cells.
In several recent papers, Prof. Shen and colleagues at the Micro/Bio/Nanofluidics Unit at the Okinawa Institute of Science and Technology (OIST), described their creation of a new biosensing material that can be used to monitor processes in living cells.
“One of the major goals of nanoplasmonics is to search for better ways to monitor processes in living cells in real time,” says Prof. Shen. Capturing such information can reveal clues about cell behavior, but creating nanomaterials on which cells can survive for long periods of time yet don’t interfere with the cellular processes being measured is a challenge, she explains.
Counting Dividing Cells
One of the team’s new biosensors is made from a nanoplasmonic material that is able to accommodate a large number of cells on a single substrate and to monitor cell proliferation, a fundamental process involving cell growth and division, in real time. Seeing this process in action can reveal important insights into the health and functions of cells and tissues.
Researchers in OIST’s Micro/Bio/Nanofluidics Unit described the sensor in a study recently published in the journal Advanced Biosystems.
The most attractive feature of the material is that it allows cells to survive over long time periods. “Usually, when you put live cells on a nanomaterial, that material is toxic and it kills the cells,” says Dr. Nikhil Bhalla, a postdoctoral researcher at OIST and first author of the paper. “However, using our material, cells survived for over seven days.” The nanoplasmonic material is also highly sensitive: It can detect an increase in cells as small as 16 in 1000 cells.
The material looks just like an ordinary pieces of glass. However, the surface is coated in tiny nanoplasmonic mushroom-like structures, known as nanomushrooms, with stems of silicon dioxide and caps of gold. Together, these form a biosensor capable of detecting interactions at the molecular level.
Mummichogs are a type of oxygen-breathing fish that flourish in the small pools that form as the tide recedes. But sometimes that pool evaporates or becomes toxic for the fish. So mummichogs have figured out how to move short distances across land in search of the next shallow pool via a series of backward tail flips, propping themselves up between jumps to re-orient themselves.
Cornell University undergraduate Noah Bressman became fascinated with mummichogs while taking a summer course on marine vertebrates at Cornell's Shoals Marine Lab. One morning he noticed one of the mummichogs on the ground some 3m away from the fish tank. He thought it was odd — shouldn't a fish that had jumped out of the tank be flopping around just beneath it? — and when he saw another mummichog in the exact same spot the next morning, he decided to investigate the matter as his research project for the course.
Those preliminary results were promising enough to snag a grant so Bressman could return the next summer to complete the research via a series of stranding experiments, recording the fishy behaviour with high-speed video. He just published his results last week in the Journal of Experimental Zoology Part A: Ecological Genetics and Physiology.
"I found out that the fish went to that spot on the floor because it was the spot onto which the first sunlight from the window landed," Bressman told Gizmodo. "The fish thought that the shiny tiles were actually the surface of water, so they moved towards that spot."
First the mummichogs will do a tail-flip jump, usually landing on their side. Then they prop themselves into a standing position (if fish can be said to "stand"), roll back onto their side and do another tail jump, repeating the cycle until they reach the water. Bressman thinks that the interim upright position helps the fish orient itself before the next jump. It's looking for visual cues in the form of reflected light off the surface of water — a hypothesis bolstered by the fact that in dark conditions, the mummichogs showed no preference in moving toward the pool of water.
There were outliers, of course — notably a handful of fish that didn't move at all, just lay there blowing bubbles to cover their tiny bodies. Bressman said he's only seen similar behaviour in round gobies, noting a couple of possible hypotheses to explain it. It could be a way for the fish to ward off drying out, since the bubbles contain liquid and last sufficiently long to buy the mummichog some time to flip its way back to water.
Alternatively, "It could be a byproduct of aerial respiration," Bressman said. "Since fish don't typically breathe air, they could coat their respiratory surfaces in a mucous to keep [them] moist to maintain gas exchange. Bubbles could be formed by moving air through this mucous." Those are merely speculations pending further investigation, however.
Why study the humble mummichog? Certain invasive species, like snakeheads and climbing perch, manage to move from one body of water to another. Studying the mummichogs' locomotion could shed light on how this is accomplished by other fish, thereby preventing the spread of unwanted species in the future. Bressman says it could also be a useful analogue for how the first vertebrates may have moved from water to land millions of years ago.
Bressman, N., Farina, S., and Gibb, A. (2015) "Look before you leap: visual navigation and terrestrial locomotion of the intertidal killifish Fundulus heteroclitus," Journal of Experimental Zoology Part A: Ecological Genetics and Physiology. Published online, November 5, 2015.
Video courtesy of Noah Bressman. Image: NOAA Photo Library. Public domain.
Tuesday, April 17, 2012
There have been numerous means of sending a message from point A to point B over the span of human existence, but only within the past couple of centuries has it become possible to ask someone at point B what the weather is like without sending someone to physically deliver your missive. Naturally, people have come to take the ability to receive an instantaneous response for granted, and most science-fiction (and a few fantasy) authors have incorporated it into their works, sometimes even including a form of "interplanetary internet". But authors don't always think things through, making mistakes such as interstellar wi-fi. To prevent such errors, let's take a quick look at how communications might actually work across interplanetary and interstellar distances.

Electromagnetic Radiation

First off, there's the single most common medium of transmission since the mid-20th century: radio waves. Transmitters translate text, speech, or other forms of data into discrete or continuous pulses of electromagnetic radiation (a.k.a. light) with wavelengths ranging from 1 millimeter to 100 kilometers and frequencies from 300 GHz down to 3 kHz, and a receiver detects and re-translates the information sent. Their low frequencies and long wavelengths mean that radio waves carry very little energy compared to other forms of EM radiation (and most definitely cannot cause cancer) but can potentially carry information for light-years before losing coherence. However, radio waves are limited to the speed of light, so any attempt at calling someone farther out than a light-minute or two (for reference, the Sun is about eight light-minutes from Earth) is going to experience a considerable amount of lag as the signal's travel time becomes noticeable. In addition, radio signals become incoherent with distance, depending on the frequency, with an absolute limit of one or two light-years.
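To make that lag concrete, here is a short Python sketch of one-way signal delay across the solar system. The distances are rounded averages, so the numbers are approximate:

```python
# One-way light-speed delay for a radio signal, delay = distance / c.
C = 299_792_458          # speed of light, m/s
AU = 1.495978707e11      # astronomical unit, m

distances_au = {
    "Moon": 0.00257,
    "Sun": 1.0,             # ~499 s, i.e. the ~8 light-minutes in the text
    "Mars (average)": 1.5,  # actual distance varies roughly 0.38-2.67 AU
    "Neptune": 30.1,
}

for body, d_au in distances_au.items():
    delay_s = d_au * AU / C
    print(f"{body}: one-way lag ~ {delay_s:,.0f} s ({delay_s / 60:.1f} min)")
```

Doubling these numbers gives the minimum round-trip time for any question-and-answer exchange, which is why "instant" messaging from Mars is hopeless.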
Another common means of communication is concentrated pulses of visible light, usually sent along glass fiber-optic cables, which shield the signals from interference by the atmosphere. This method allows far superior data quality compared with radio, but atmospheric gases or particles can block optical signals easily, as can physical objects that radio waves would pass through. In the vacuum of outer space there is considerably less matter of any form to block an optical signal, however, especially if the signal is transmitted as a laser capable of maintaining integrity over great distances. Lasers are also less susceptible to jamming or disruption by solar flares. But there has to be a clear line of sight between the transmitter and receiver, and even lasers spread out and become incoherent over interstellar distances.

The Internet

As for how the internet might cope with space travel: e-mail and social networks would still be possible, and would probably be the primary form of communication between planets, but instant messaging would no longer be "instant", and if you think AOL back in the 1990s took a long time to load webpages, you probably wouldn't have the patience to try surfing the internet from Mars. In all likelihood, deep-space colonies would form their own separate internets, with unique websites inaccessible on Earth or in any other distant region. Websites deemed "important" enough might set up localized servers that receive updates from one another at specified intervals, but you'd have to wait several hours, and most likely need a massive transmitter, to look up any other sites based outside your local region of space.

Neutrinos

Neutrinos, those nearly massless particles that don't interact with most normal matter and instead pass right through it, gained some publicity a few months ago when readings at CERN supposedly indicated that they travel slightly faster than the speed of light.
Those readings were eventually traced to an equipment failure (a disconnected wire), but another group of researchers managed to do something not quite as amazing with neutrinos, yet still significant: they used neutrinos to send a one-word message through 240 meters of solid rock (link: http://news.discovery.com/space/minerva-sends-a-message-in-a-neutrino-beam-120320.html ). Granted, the transmission rate was very slow, only 1 bit per second, and it took a particle accelerator to send the message, but the neutrinos experienced negligible interference from materials that would completely block radio or optical signals. They could be very useful for communicating with people deep underground or underwater, or even on the other side of a planet or star. Neutrino transmissions would need to be very tight beams, like lasers, to compensate for the low transmission rate, but the advantages of a transmission medium that is nearly impossible to block are considerable. Of course, if someone managed to place a neutrino detector between the sender and the receiver, they could read the message without anyone knowing.

Quantum Entanglement

One of the science "buzzwords" of the century is "quantum mechanics", relating to the behavior of subatomic particles. One thing that science-fiction authors have extrapolated from the various "weird" properties covered under quantum mechanics is the use of "entanglement" to send messages instantaneously over any distance. The idea is that when two particles are "entangled" at the quantum level, they can be separated and whatever happens to one particle happens to the other instantaneously. Somewhere along the line, someone decided that this could allow communication faster than the speed of light. In addition to sending messages instantaneously, a quantum-entanglement communiqué would be impossible to intercept, as it would effectively be teleported to the receiver.
The harsh reality is that the act of observing an entangled particle breaks the connection with its paired particle, and attempting to send data with entangled particles would by necessity require observing them. However, quantum entanglement can be used to encrypt messages sent by conventional means (currently only dedicated fiber-optic cables) such that only those who possess one of two "keys" can interpret the data. By encoding a transmission in the quantum states of particles, one ensures that the very act of intercepting it corrupts the data and alerts the holders of the keys to how much of the message was intercepted. And it actually has been done: some governments and companies that consider security worth the expense use quantum cryptography for their most sensitive data transmissions; the Swiss canton of Geneva, for example, used it to send national election ballot results to the capital in 2007. There have also been experiments with sending quantum-encrypted messages over radio, and it seems likely that the technology will become more prevalent over the next few decades. Of course, it only works between two specialized devices that have to be physically transported to their working locations.

The Utterly Fantastic

Of course, even quantum-encrypted FTL neutrinos would take years to travel from one solar system to another, so many authors have turned to the farther fringes of science in order to maintain "instantaneous communication". For example, tachyons: highly hypothetical particles that travel faster than light and which most scientists don't believe exist. Or, if their universe allows physical travel through some sort of "hyperspace", characters might send radio transmissions through that same dimension where the normal laws of physics don't apply. Heck, you might even use mentally "bonded" telepaths; it worked for Heinlein.
In a simple set-up, the scientists used the translation of position information of fluorescent markers into color information. Overcoming the need for scanning the depth of a sample, they were able to generate the precise 3D information at the same speed as it would take to acquire a 2D image. The general principle of this innovative approach can be used for broader applications and is published online in the PNAS Early Edition this week.
Visualization of Paxillin-fluctuations at the adhesion site of a murine melanoma cell, using the new technique. IMP
Illustration of Paxillin-distribution in a murine melanoma cell using the new technique. Red represents closest and blue furthest distances. IMP
For many disciplines in the natural sciences it is desirable to get highly enlarged, precise pictures of specimens such as cells. Depending on the purpose of an experiment and the preparation of the sample, different microscopy-techniques are used to analyze small structures or objects. However, a drawback of most current approaches is the need to scan the depth of a sample in order to get a 3D picture. Especially for optically sensitive or highly dynamic (fast moving) samples this often represents a serious problem. Katrin Heinze and Kareem Elsayad, lead authors of the PNAS publication, managed to circumvent this difficulty during their work at the IMP.
Precise images of sensitive and dynamic samples
Elsayad, who was part of a research team led by Katrin Heinze at the IMP, used fluorescence microscopy for his experimental set-up. The principle of fluorescence microscopy – now a common tool in biomedical research labs – is as follows: Fluorescent dyes, so-called fluorophores, are turned on by light of a certain wavelength and, as a consequence, “spontaneously” emit light of a different wavelength. Elsayad designed a thin biocompatible nanostructure consisting of a quartz microscope slide with a thin silver film and a dielectric layer. The IMP-scientist then labeled the sample – fixed or live cells – with a fluorescent dye and placed it above the coated slide.
Elsayad explains in simple terms how the biological imaging then took place: “The measured emission spectrum of a fluorescent dye above this substrate depends on its distance from the substrate. In other words, the position information of a collection of fluorophores is translated into color information, and this is what we were measuring in the end”. With this elaborate method, only one measurement is needed to determine the fluorophore distribution above the substrate, with a resolution – in the direction away from the substrate – down to 10 nanometers (1/100.000th of a millimeter). “I believe that the beauty of our method is its simplicity. No elaborate set-up or machines are required to achieve this high resolution. Once the sample is placed on the substrate, which can be mass produced, a confocal microscope with spectral detection is all that is needed”, Heinze points out.
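The principle lends itself to a simple numeric sketch: if the measured emission peak shifts monotonically with a fluorophore's height above the substrate, a calibration curve turns color back into distance. The Python below inverts such a curve by piecewise-linear interpolation; the calibration numbers are invented for illustration and are not from the study:

```python
def height_from_peak(peak_nm, calib):
    """Invert a monotonic (emission peak -> height) calibration
    by piecewise-linear interpolation between measured points."""
    pts = sorted(calib)  # (peak_nm, height_nm) pairs, sorted by peak
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if x0 <= peak_nm <= x1:
            return y0 + (peak_nm - x0) * (y1 - y0) / (x1 - x0)
    raise ValueError("peak outside calibration range")

# Hypothetical calibration points: (emission peak in nm, height in nm)
calib = [(520, 0), (524, 20), (529, 40), (535, 60), (542, 80), (550, 100)]

print(height_from_peak(526.5, calib))  # -> 30.0
```

In the real experiment the calibration would come from the measured distance dependence of the emission spectrum above the silver/dielectric substrate; the toy version only shows why a single spectral measurement suffices to recover an axial position.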
Simple method, big potential
The novel technique was already successfully tested by Elsayad and Heinze. Together with collaborators at the Institute of Molecular Biotechnology (IMBA) of the Austrian Academy of Sciences, they used it to study paxillin, a protein important for cell adhesion, in living cells. The scientists also visualized the 3D dynamics of filopodia, small cell protrusions made of bundled actin-filaments that move very quickly and have a high turnover-rate during cell migration.
Originally developed for a single fluorescent marker, the new method can be adapted for others as well. "There are numerous possibilities for further development and additional applications of the technique", Elsayad points out. "From optical readout on chips to make faster computers, to more efficient DNA sequencing methods." The novel technique, patented by the IMP, has already attracted a lot of interest from several big optical companies.
Dr. Heidemarie Hurtl | idw
Computer model predicts how fracturing metallic glass releases energy at the atomic level
20.07.2018 | American Institute of Physics
What happens when we heat the atomic lattice of a magnet all of a sudden?
18.07.2018 | Forschungsverbund Berlin
A new manufacturing technique uses a process similar to newspaper printing to form smoother and more flexible metals for making ultrafast electronic devices.
The low-cost process, developed by Purdue University researchers, combines tools already used in industry for manufacturing metals on a large scale, but uses...
For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth.
To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength...
For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications.
Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar...
Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction.
A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in in the synthesis of new chemical...
Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy.
"Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy....
The physicists performing the study used a small nuclear reactor used exclusively for research purposes at the University of Pavia, applying techniques that were created for the project known as “Cuore” (“Heart”), which is being developed at the INFN’s national laboratories in Gran Sasso.
The research, the results of which will be published in the journal “Il Nuovo Saggiatore”, was performed on hair samples that had been taken during different periods of Napoleon Bonaparte’s life, from when he was a boy in Corsica, during his exile on the Island of Elba, on the day of his death (May 5, 1821) on the Island of Saint Helena, and on the day after his death. Samples taken from the King of Rome (Napoleon’s son) in the years 1812, 1816, 1821, and 1826, and samples from the Empress Josephine, collected upon her death in 1814, were also analysed. The hair samples were provided by the Glauco-Lombardi Museum in Parma (Italy), the Malmaison Museum in Paris, and the Napoleonic Museum in Rome. In addition to these “historical” hair samples, 10 hairs from living persons were examined for comparison purposes.
The hairs were placed in capsules and inserted in the core of the nuclear reactor in Pavia. The technique used is known as “neutron activation”, which has two enormous advantages: it does not destroy the sample and it provides extremely precise results even on samples with an extremely small mass, such as human hair samples. Using this technique, the researchers have established that all of the hair samples contained traces of arsenic. The researchers chose to test for arsenic in particular because for a number of years various historians, scientists, and writers have hypothesized that Napoleon was poisoned by guards during his imprisonment in Saint Helena following the Battle of Waterloo.
The examination produced some surprising results. First of all, the level of arsenic in all of the hair samples from 200 years ago is 100 times greater than the average level detected in samples from persons living today. In fact, the Emperor's hair had an average arsenic level of around ten parts per million, whereas the arsenic level in the hair samples from currently living persons was around one tenth of a part per million. In other words, at the beginning of the 19th century people evidently ingested arsenic that was present in the environment in quantities that are currently considered dangerous.
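The reported concentrations can be sanity-checked in a couple of lines of Python; the values are the approximate figures quoted above (one ppm in hair corresponds to one microgram of arsenic per gram of hair):

```python
# Approximate arsenic concentrations quoted in the article.
historical_ppm = 10.0  # Napoleon-era hair samples, parts per million
modern_ppm = 0.1       # hair from living controls, parts per million

ratio = historical_ppm / modern_ppm
print(f"Historical hair: {historical_ppm} ppm "
      f"= {historical_ppm} ug of arsenic per gram of hair")
print(f"Enrichment over modern samples: {ratio:.0f}x")
```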
The other surprise regards the finding that there were no significant differences in arsenic levels between when Napoleon was a boy and during his final days in Saint Helena. According to the researchers, and in particular the toxicologists who participated in the study, it is evident that this was not a case of poisoning but instead the result of the constant absorption of arsenic.
Eleonora Cossi | alfa
Endothermic process
The term endothermic process describes a process or reaction in which the system absorbs energy from its surroundings, usually in the form of heat. The term was coined by Marcellin Berthelot from the Greek roots endo-, derived from the word "endon" (ἔνδον) meaning "within", and "therm" (θερμ-) meaning "hot" or "warm"; the intended sense is that of a reaction that depends on absorbing heat if it is to proceed. The opposite of an endothermic process is an exothermic process, one that releases, or "gives out", energy in the form of heat. Thus in each term (endothermic and exothermic) the prefix refers to where heat goes as the reaction occurs, though in reality it refers to where the energy goes, without that energy necessarily being in the form of heat.

All chemical reactions involve both the breaking of existing chemical bonds and the making of new ones. Breaking a bond always requires an input of energy, so that step is always endothermic. When atoms come together to form new chemical bonds, the electrostatic forces bringing them together leave the bond with a large excess of energy (usually in the form of vibrations and rotations). If that energy is not dissipated, the new bond quickly breaks apart again. Instead, the new bond can shed its excess energy (by radiation, by transfer to other motions within the molecule, or to other molecules through collisions) and become a stable new bond. Shedding this excess energy is the exothermicity that leaves the molecular system. Whether a given overall reaction is exothermic or endothermic is determined by the relative contributions of these endothermic bond-breaking steps and exothermic bond-forming steps.
Endothermic (and exothermic) analysis only accounts for the enthalpy change (∆H) of a reaction. The full energy analysis of a reaction is the Gibbs free energy (∆G), which includes an entropy (∆S) and temperature term in addition to the enthalpy. A reaction will be a spontaneous process at a certain temperature if the products have a lower Gibbs free energy (an exergonic reaction) even if the enthalpy of the products is higher. Entropy and enthalpy are different terms, so the change in entropic energy can overcome an opposite change in enthalpic energy and make an endothermic reaction favorable.
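The enthalpy-entropy competition can be made concrete with a short Python sketch evaluating ΔG = ΔH − TΔS around the melting point of ice, using the standard textbook values ΔH ≈ +6.01 kJ/mol and ΔS ≈ +22.0 J/(mol·K):

```python
def gibbs(delta_h_kj, delta_s_j, temp_k):
    """Return deltaG in kJ/mol for deltaH [kJ/mol], deltaS [J/(mol*K)].
    A negative result means the process is spontaneous at temp_k."""
    return delta_h_kj - temp_k * delta_s_j / 1000.0

# Melting of ice: endothermic (deltaH > 0) yet spontaneous above 0 C
# because the entropy term T*deltaS overtakes the enthalpy cost.
for T in (263.15, 273.15, 283.15):  # -10 C, 0 C, +10 C
    dG = gibbs(6.01, 22.0, T)
    verdict = "spontaneous" if dG < 0 else "not spontaneous"
    print(f"T = {T:.2f} K: deltaG = {dG:+.2f} kJ/mol ({verdict})")
```

At 273.15 K the two terms nearly cancel (ΔG ≈ 0, the equilibrium melting point), which is exactly the behavior the paragraph above describes: an endothermic process made favorable by entropy.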
- Melting of ice
- Evaporating liquid water
- Sublimation of carbon dioxide (dry ice)
- Cracking of alkanes
- Thermal decomposition reactions
- Electrolytic decomposition of sodium chloride solution into sodium hydroxide, hydrogen, and chlorine
- Dissolving ammonium chloride in water
- Nucleosynthesis of elements heavier than nickel in stellar cores
- High-energy neutrons can produce tritium from lithium-7 in an endothermic reaction, consuming 2.466 MeV. This was discovered when the 1954 Castle Bravo nuclear test produced an unexpectedly high yield.
- Nuclear fusion of elements heavier than iron in supernovae
- Austin, Patrick (January 1996). "Tritium: The environmental, health, budgetary, and strategic effects of the Department of Energy's decision to produce tritium". Institute for Energy and Environmental Research. Retrieved 2010-09-15.
- Qian, Y.-Z.; Vogel, P.; Wasserburg, G. J. (1998). "Diverse Supernova Sources for the r-Process". Astrophysical Journal 494 (1): 285–296. arXiv:astro-ph/9706120. Bibcode: 1998ApJ...494..285Q. doi:10.1086/305198.
- Endothermic Definition – MSDS Hyper-Glossary
Using the Natural Motion of 2D Materials to Create a New Source of Clean Energy
Physics professor Paul Thibado has designed tiny graphene-powered motors that can run on ambient heat.

The research of Paul Thibado, professor of physics at the University of Arkansas, provides strong evidence that the motion of two-dimensional materials could be used as a source of clean, limitless energy. Thibado and his students studied the movements of graphene, which is composed of a single layer of carbon atoms.
Thibado has taken the first steps toward creating a device that can turn this movement into electricity, with the potential for many applications. He recently applied for a patent on this invention, called a Vibration Energy Harvester, or VEH.
Thibado predicts that his generators could transform our environment, allowing any object to send, receive, process and store information, powered only by room temperature heat.
This would have significant implications for the effort to connect physical objects to the digital world, known as the Internet of Things. This self-charging, microscopic power source could make everyday objects into smart devices, as well as powering more sophisticated biomedical devices such as pacemakers, hearing aids and wearable sensors.
"Self-powering enables smart bio-implants," explained Thibado, "which would profoundly impact society."
Read more about this research on the Research Frontier website.
The Department of Physics is part of the J. William Fulbright College of Arts and Sciences.
Camilla Shumaker, director of science and research communication
An international team of astrophysicists has released IllustrisTNG, the most advanced universe model of its kind.
Novel computational methods have helped create the most information-packed universe-scale simulation ever produced. The new tool provides fresh insights into how black holes influence the distribution of dark matter, how heavy elements are produced and distributed throughout the cosmos, and where magnetic fields originate.
Led by principal investigator Volker Springel at the Heidelberg Institute for Theoretical Studies, astrophysicists from the Max Planck Institutes for Astronomy (MPIA, Heidelberg) and Astrophysics (MPA, Garching), Harvard University, the Massachusetts Institute of Technology (MIT), and the Flatiron Institute's Center for Computational Astrophysics (CCA) developed and programmed the new universe simulation model, dubbed Illustris: The Next Generation, or IllustrisTNG.
The model is the most advanced universe simulation of its kind, says Shy Genel, an associate research scientist at CCA who helped develop and hone IllustrisTNG. The simulation's detail and scale enable Genel to study how galaxies form, evolve and grow in tandem with their star-formation activity. "When we observe galaxies using a telescope, we can only measure certain quantities," he says.
"With the simulation, we can track all the properties for all these galaxies. And not just how the galaxy looks now, but its entire formation history." Mapping out the ways galaxies evolve in the simulation offers a glimpse of what our own Milky Way galaxy might have been like when the Earth formed and how our galaxy could change in the future, he says.
Mark Vogelsberger, an assistant professor of physics at MIT and the MIT Kavli Institute for Astrophysics and Space Research, has been working to develop, test and analyze the new IllustrisTNG simulations. Along with postdoctoral researchers Federico Marinacci and Paul Torrey, Vogelsberger has been using IllustrisTNG to study the observable signatures from large-scale magnetic fields that pervade the universe.
"The high resolution of IllustrisTNG combined with its sophisticated galaxy formation model allowed us to explore these questions of magnetic fields in more detail than with any previous cosmological simulations," says Vogelsberger, one of the authors of the three papers published today in the Monthly Notices of the Royal Astronomical Society.
Modeling a (more) realistic universe
IllustrisTNG is a successor model to the original Illustris simulation developed by the same research team, but it has been updated to include some of the physical processes that play crucial roles in the formation and evolution of galaxies.
Like Illustris, the project models a cube-shaped universe smaller than our own. This time, the project followed the formation of millions of galaxies in a representative region of a universe measuring nearly 1 billion light-years per side (up from 350 million light-years per side just four years ago). IllustrisTNG is the largest hydrodynamic simulation project to date for the emergence of cosmic structures, says Springel, also of MPA and Heidelberg University.
The cosmic web of gas and dark matter predicted by IllustrisTNG produces galaxies quite similar to real galaxies in shape and size. For the first time, hydrodynamic simulations could directly compute the detailed clustering pattern of galaxies in space. In comparison with observational data - such as the data provided by the powerful Sloan Digital Sky Survey - the simulations from IllustrisTNG demonstrate a high degree of realism, says Springel.
In addition, the simulations predict how the cosmic web changes over time, especially in relation to the dark matter that underlies the cosmos. "It is particularly fascinating that we can accurately predict the influence of supermassive black holes on the distribution of matter out to large scales," says Springel. "This is crucial for reliably interpreting forthcoming cosmological measurements."
Astrophysics via code and supercomputers
For the project, the researchers developed a particularly powerful version of their highly parallel moving-mesh code AREPO and used it on the Hazel Hen machine, Germany's fastest mainframe computer, at the High Performance Computing Center Stuttgart. To compute one of the two main simulation runs, the team employed more than 24,000 processors over the course of more than two months. "The new simulations produced more than 500 terabytes of simulation data," says Springel. "Analyzing this huge mountain of data will keep us busy for years to come, and it promises many exciting new insights into different astrophysical processes."
Supermassive black holes squelch star formation
In another study, Dylan Nelson, a researcher at MPA, was able to demonstrate the impact of black holes on galaxies. Star-forming galaxies shine brightly in the blue light of their young stars until a sudden evolutionary shift halts the star formation, so that the galaxy becomes dominated by old, red stars and joins a graveyard full of old and dead galaxies.
"The only physical entities capable of extinguishing star formation in our large elliptical galaxies are the supermassive black holes at their centers," explains Nelson. "The ultrafast outflows of these gravity traps reach velocities up to 10 percent of the speed of light and affect giant stellar systems that are billions of times larger than the comparably small black hole itself."
New findings for galaxy structure
IllustrisTNG also improves our understanding of the hierarchical structure of galaxy formation. Theorists argue that small galaxies should form first and then merge into ever-larger objects, driven by the relentless pull of gravity. The numerous galaxy collisions literally tear some galaxies apart and scatter their stars into wide orbits around the newly created large galaxies, which should give the galaxies a faint background glow of stellar light. These predicted pale stellar halos are very difficult to observe due to their low surface brightness, but IllustrisTNG was able to simulate exactly what astronomers should be looking for.
"Our predictions can now be systematically checked by observers," says Annalisa Pillepich, a researcher at MPIA, who led a further IllustrisTNG study. "This yields a critical test for the theoretical model of hierarchical galaxy formation."
ABOUT THE FLATIRON INSTITUTE
The Flatiron Institute is the research division of the Simons Foundation. Its mission is to advance scientific research through computational methods, including data analysis, modeling and simulation. The institute's Center for Computational Astrophysics creates new computational frameworks that allow scientists to analyze big astronomical datasets and to understand complex, multi-scale physics in a cosmological context.
Anastasia Greenebaum | EurekAlert!
First evidence on the source of extragalactic particles
13.07.2018 | Technische Universität München
Simpler interferometer can fine tune even the quickest pulses of light
12.07.2018 | University of Rochester
For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth.
To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength...
For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications.
Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar...
Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction.
A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in the synthesis of new chemical...
Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy.
"Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy....
Ultra-short, high-intensity X-ray flashes open the door to the foundations of chemical reactions. Free-electron lasers generate these kinds of pulses, but there is a catch: the pulses vary in duration and energy. An international research team has now presented a solution: Using a ring of 16 detectors and a circularly polarized laser beam, they can determine both factors with attosecond accuracy.
Free-electron lasers (FELs) generate extremely short and intense X-ray flashes. Researchers can use these flashes to resolve structures with diameters on the...
Posted: Jun 25, 2014
Understanding mussels' stickiness could lead to better surgical and underwater glues
(Nanowerk News) Mussels might be a welcome addition to a hearty seafood stew, but their notorious ability to attach themselves to ships’ hulls, as well as to piers and moorings, makes them an unwelcome sight and smell for boaters and swimmers.
Clingy mussels are inspiring new ways to make nontoxic glues that could be used in surgery. Now, researchers report in ACS’ journal Langmuir ("A Fundamental Understanding of Catechol and Water Adsorption on a Hydrophilic Silica Surface: Exploring the Underwater Adhesion Mechanism of Mussels on an Atomic Scale") a clearer understanding of how mussels stick to surfaces, which could lead to new classes of adhesives that will work underwater and even inside the body.
Shabeer Ahmad Mian and colleagues note that mussels have a remarkable knack for clinging onto solid surfaces underwater. That can make them a real nuisance to recreational boaters and professional fishermen, who have to scrape the hitchhikers off their vessels to help them run more efficiently. Some types of mussels can even plug up drinking water pipes. Mussels also can stick to materials with nonstick coatings. Although researchers have already developed mussel-inspired glues, they still don’t have a full understanding of exactly how these critters stick so well to underwater surfaces. So, Mian’s team set out to investigate this mystery in painstaking detail to improve these adhesives and to develop new ones.
Using complex calculations and simulations, they determined that one part of the mussel “glue” molecule, called catechol, pushes water molecules out of the way to bind directly to a wide variety of surfaces. They say that this study provides a clear picture of the first step of mussel adhesion, which could pave the way for better adhesives for many applications, such as for use in surgeries. The adhesives can be nontoxic and biocompatible, says Mian.
Source: American Chemical Society
The Gravitational Lens G2237 + 0305
The European Space Agency's Faint Object Camera on board NASA's Hubble Space Telescope has provided astronomers with the most detailed image ever taken of the gravitational lens G2237 + 0305 – sometimes referred to as the "Einstein Cross." The photograph shows four images of a very distant quasar which has been multiple-imaged by a relatively nearby galaxy acting as a gravitational lens. The angular separation between the upper and lower images is 1.6 arcseconds.
The quasar seen here is at a distance of approximately 8 billion light-years, whereas the galaxy at a distance of 400 million light years is 20 times closer. The light from the quasar is bent in its path by the gravitational field of the galaxy. This bending has produced the four bright outer images seen in the photograph. The bright central region of the galaxy is seen as the diffuse central object.
Gravitational lensing occurs when the light from a distant source passes through or close to a massive foreground object. Depending on the detailed alignment of the foreground and background objects with the line of sight to Earth, several images of the background object may be seen. In fact, astronomers expect that a faint fifth image of the quasar should be present near the center of the galaxy in G2237 + 0305. Careful image processing will be needed to determine if the fifth image is actually seen in this FOC exposure.
Gravitational lenses, such as G2237 + 0305, are useful probes of many types of phenomena that occur in the cosmos. For example, it is possible to "weigh" the foreground galaxy by measuring the relative positions and the brightnesses of the different images of the quasar. This should be possible to do more accurately given the resolution of images obtained with the Faint Object Camera. Also, gravitational lenses in general offer the possibility of determining the elusive "Hubble Constant" – a fundamental measure of the size and age of the universe – by measuring the time delays in changes of the brightness of the lensed images.
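The "weighing" mentioned above rests on the standard relation between image separation and the mass enclosed by the lens. A simplified sketch follows, treating the galaxy as a point mass, with D_l, D_s and D_ls denoting the angular-diameter distances to the lens, to the source, and between them (not the light-travel distances quoted earlier):

```latex
% Einstein radius of a point-mass lens of mass M:
\theta_E = \sqrt{\frac{4GM}{c^2}\,\frac{D_{ls}}{D_l D_s}}
% Inverting gives the mass enclosed within the Einstein radius,
% which is how measured image positions "weigh" the lensing galaxy:
M = \frac{c^2\,\theta_E^2}{4G}\,\frac{D_l D_s}{D_{ls}}
```

A real galaxy is an extended, non-circular mass distribution, so in practice a parameterized lens model is fit to all four image positions rather than this single point-mass formula.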
Detailed analysis of this fascinating Faint Object Camera image and others to be observed later with the Hubble Space Telescope will provide a wealth of information on the details of lensing galaxies, as well as on the process of gravitational lensing itself. | <urn:uuid:72816cb4-537e-471e-a674-f65ca6b14bbf> | 3.625 | 468 | Knowledge Article | Science & Tech. | 34.859056 | 95,512,851 |
List of unsolved problems in statistics
There are many longstanding unsolved problems in mathematics for which a solution has not yet been found. The unsolved problems in statistics are generally of a different flavor; according to John Tukey, "difficulties in identifying problems have delayed statistics far more than difficulties in solving problems." A list of "one or two open problems" (in fact 22 of them) was given by David Cox.
Inference and testing
- How to detect and correct for systematic errors, especially in sciences where random errors are large (a situation Tukey termed uncomfortable science).
- The Graybill–Deal estimator is often used to estimate the common mean of two normal populations with unknown and possibly unequal variances. Though this estimator is generally unbiased, its admissibility remains to be shown.
- Meta-analysis: Though independent p-values can be combined using Fisher's method, techniques are still being developed to handle the case of dependent p-values.
- Behrens–Fisher problem: Yuri Linnik showed in 1966 that there is no uniformly most powerful test for the difference of two means when the variances are unknown and possibly unequal. That is, there is no exact test (meaning that, if the means are in fact equal, one that rejects the null hypothesis with probability exactly α) that is also the most powerful for all values of the variances (which are thus nuisance parameters). Though there are many approximate solutions (such as Welch's t-test), the problem continues to attract attention as one of the classic problems in statistics.
- Multiple comparisons: There are various ways to adjust p-values to compensate for the simultaneous or sequential testing of hypothesis. Of particular interest is how to simultaneously control the overall error rate, preserve statistical power, and incorporate the dependence between tests into the adjustment. These issues are especially relevant when the number of simultaneous tests can be very large, as is increasingly the case in the analysis of data from DNA microarrays.
- Bayesian statistics: A list of open problems in Bayesian statistics has been proposed.
- As the theory of Latin squares is a cornerstone in the design of experiments, solving the problems in Latin squares could have immediate applicability to experimental design.
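As a concrete illustration of one approximate solution to the Behrens–Fisher problem mentioned above, Welch's t-test can be computed from summary statistics alone. This sketch uses illustrative numbers, not data from any study cited here:

```python
import math

def welch_t(mean1, var1, n1, mean2, var2, n2):
    """Welch's approximate t-statistic and degrees of freedom for
    comparing two means with unknown, possibly unequal variances."""
    se_sq = var1 / n1 + var2 / n2
    t = (mean1 - mean2) / math.sqrt(se_sq)
    # Welch-Satterthwaite approximation to the degrees of freedom
    df = se_sq ** 2 / ((var1 / n1) ** 2 / (n1 - 1)
                       + (var2 / n2) ** 2 / (n2 - 1))
    return t, df

# Illustrative samples: means 10 vs 8, variances 4 vs 9, sizes 10 vs 12
t, df = welch_t(10, 4, 10, 8, 9, 12)  # t ~ 1.865, df ~ 19.19
```

Because the degrees of freedom depend on the unknown variances, the resulting test is only approximate, which is precisely why the Behrens–Fisher problem remains open.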
Problems of a more philosophical nature
- Sampling of species problem: How is a probability updated when there is unanticipated new data?
- Doomsday argument: How valid is the probabilistic argument that claims to predict the future lifetime of the human race given only an estimate of the total number of humans born so far?
- Exchange paradox: Issues arise within the subjectivistic interpretation of probability theory; more specifically within Bayesian decision theory. This is still an open problem among the subjectivists as no consensus has been reached yet. Examples include:
- Tukey, John W. (1954). "Unsolved Problems of Experimental Statistics". Journal of the American Statistical Association, 49 (268): 706–731. doi:10.2307/2281535. JSTOR 2281535.
- Cox, D.R. (1984) "Present position and potential developments: Some personal views — Design of experiments and regression", Journal of the Royal Statistical Society, Series A, 147 (2), 306–315
- Nabendu Pal, Wooi K. Lim (1997) "A note on second-order admissibility of the Graybill–Deal estimator of a common mean of several normal populations", Journal of Statistical Planning and Inference, 63 (1), 71–78. doi:10.1016/S0378-3758(96)00202-9
- Fraser, D.A.S.; Rousseau, J. (2008). "Studentization and deriving accurate p-values". Biometrika, 95 (1), 1–16. doi:10.1093/biomet/asm093
- Jordan, M. I. (2011). "What are the open problems in Bayesian statistics?" The ISBA Bulletin, 18(1).
- Zabell, S. L. (1992). "Predicting the unpredictable". Synthese. 90: 205. doi:10.1007/bf00485351.
- Linnik, Jurii (1968). Statistical Problems with Nuisance Parameters. American Mathematical Society. ISBN 0-8218-1570-9.
- Sawilowsky, Shlomo S. (2002). "Fermat, Schubert, Einstein, and Behrens–Fisher: The Probable Difference Between Two Means When σ1 ≠ σ2", Journal of Modern Applied Statistical Methods, 1(2). | <urn:uuid:8e03460d-f825-4cc4-b60e-b90762647a55> | 2.75 | 980 | Knowledge Article | Science & Tech. | 46.359734 | 95,512,858 |
Authors: George Rajna
So-called Fresnel zone plate spectrometers offer new and more efficient ways of conducting experiments using soft X-rays.

The world's largest X-ray laser opens Friday in Germany, promising to shed new light onto very small things by letting scientists penetrate the inner workings of atoms, viruses and chemical reactions. The sleek, subterranean machine, by far the most powerful of its kind, has scientists in a dozen fields jostling to train its mighty beam on their projects.

Physicists from Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU) and Deutsches Elektronen-Synchrotron (DESY, Hamburg) have developed a method to improve the quality of X-ray images over conventional methods.

A team of researchers with members from several countries in Europe has used a type of X-ray diffraction to reveal defects in the way a superconductor develops. In their paper published in the journal Nature, the team describes the technique they used to study one type of superconductor and what they saw. Erica Carlson of Purdue University offers a News & Views piece on the work done by the team in the same journal issue.

This paper explains the magnetic effect of the superconductive current from the observed effects of the accelerating electrons, causing naturally the experienced changes of the electric field potential along the electric wire. The accelerating electrons explain not only the Maxwell Equations and the Special Relativity, but also the Heisenberg Uncertainty Relation, the wave-particle duality and the electron's spin, building the bridge between the Classical and Quantum Theories. The changing acceleration of the electrons explains the created negative electric field of the magnetic induction, the Higgs Field, the changing Relativistic Mass and the Gravitational Force, giving a Unified Theory of the physical forces.
Taking into account the Planck Distribution Law of the electromagnetic oscillators also, we can explain the electron/proton mass rate and the Weak and Strong Interactions.
Comments: 19 Pages.
[v1] 2017-09-04 12:55:15
Paramagnetic materials have a small, positive susceptibility to magnetic fields. These materials are slightly attracted by a magnetic field and the material does not retain the magnetic properties when the external field is removed. Paramagnetic properties are due to the presence of some unpaired electrons, and from the realignment of the electron paths caused by the external magnetic field. Paramagnetic materials include magnesium, molybdenum, lithium, and tantalum.
Ferromagnetic materials have a large, positive susceptibility to an external magnetic field. They exhibit a strong attraction to magnetic fields and are able to retain their magnetic properties after the external field has been removed. Ferromagnetic materials have some unpaired electrons, so their atoms have a net magnetic moment. They get their strong magnetic properties from the presence of magnetic domains. In these domains, large numbers of atomic moments (on the order of 10^12 to 10^15) are aligned in parallel so that the magnetic force within the domain is strong. When a ferromagnetic material is in the unmagnetized state, the domains are nearly randomly organized and the net magnetic field for the part as a whole is zero. When a magnetizing force is applied, the domains become aligned to produce a strong magnetic field within the part. Iron, nickel, and cobalt are examples of ferromagnetic materials. Components with these materials are commonly inspected using the magnetic particle method.
Diamagnetic materials have a weak, negative susceptibility to magnetic fields. Diamagnetic materials are slightly repelled by a magnetic field and the material does not retain the magnetic properties when the external field is removed. In diamagnetic materials all the electron are paired so there is no permanent net magnetic moment per atom. Diamagnetic properties arise from the realignment of the electron paths under the influence of an external magnetic field. Most elements in the periodic table, including copper, silver, and gold, are diamagnetic.
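The three behaviors described above are often summarized by the sign and magnitude of a material's magnetic susceptibility. The sketch below classifies materials on that basis; the threshold and the sample susceptibility values are approximate, illustrative figures, not authoritative constants:

```python
def classify_by_susceptibility(chi):
    """Rough classification by dimensionless volume susceptibility chi:
    diamagnets are slightly negative, paramagnets slightly positive,
    and ferromagnets far larger (the 0.01 cutoff is illustrative)."""
    if chi < 0:
        return "diamagnetic"
    elif chi < 0.01:
        return "paramagnetic"
    return "ferromagnetic"

# Approximate room-temperature volume susceptibilities (SI).
# The iron figure varies enormously with purity and magnetic history.
samples = {
    "copper": -9.6e-6,    # diamagnetic
    "magnesium": 1.2e-5,  # paramagnetic
    "iron": 2.0e2,        # ferromagnetic
}
```

Applying `classify_by_susceptibility` to each entry in `samples` recovers the three classes discussed above.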
Light with the shortest wavelengths and the highest energies and frequencies in the electromagnetic spectrum; also called gamma radiation. Gamma rays are produced by violent events such as supernova explosions. They are also produced by the decay of radioactive materials. Gamma rays can kill living cells, so it is good that Earth’s atmosphere can stop them. Gamma radiation is used in medicine to kill cancer cells.
A high-energy stream of electromagnetic radiation having a frequency higher than that of ultraviolet light but less than that of a gamma ray (in the range of approximately 10^16 to 10^19 hertz). X-rays are absorbed by many forms of matter, including body tissues, and are used in medicine and industry to produce images of internal structures.
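The frequency ranges quoted for these bands convert directly to wavelengths and photon energies via the standard relations λ = c/f and E = hf. A minimal sketch, using CODATA constants and the X-ray band limits above as examples:

```python
H = 6.62607015e-34    # Planck constant, J*s
C = 2.99792458e8      # speed of light, m/s
EV = 1.602176634e-19  # joules per electronvolt

def photon_props(freq_hz):
    """Return (wavelength in meters, photon energy in eV) for a frequency."""
    return C / freq_hz, H * freq_hz / EV

soft = photon_props(1e16)  # ~30 nm, ~41 eV: soft end of the X-ray band
hard = photon_props(1e19)  # ~0.03 nm, ~41 keV: hard end of the band
```

The same two-line conversion applies to any band in this article, from radio through gamma rays.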
Invisible solar radiation that lies just beyond the violet end of the visible spectrum in the wavelength range from 10 to 400 nanometers (just below the x-ray range) and can harm living tissue. Much of the UV radiation is absorbed by the ozone molecules in the upper atmosphere (stratosphere), but a potentially dangerous amount passes through the ozone hole to cause cataracts, skin cancer (melanoma), suppression of the immune system, leaf damage, and reduced yields in some crops. UV rays are generated also during electric (arc) welding.
The visible light spectrum is the section of the electromagnetic radiation spectrum that is visible to the human eye. It ranges in wavelength from approximately 400 nm (4 × 10^-7 m) to 700 nm (7 × 10^-7 m) and is also known as the optical spectrum of light. The wavelength (which is related to frequency and energy) of the light determines the perceived color. Sources vary the color ranges somewhat, and the boundaries between them are approximate, as the colors blend into one another. The edges of the visible light spectrum likewise blend into the ultraviolet and infrared levels of radiation. Most light that we interact with is in the form of white light, which contains many or all of these wavelength ranges. Shining white light through a prism causes the wavelengths to bend at slightly different angles due to optical refraction. The resulting light is therefore split across the visible color spectrum.
This is what causes a rainbow, with airborne water particles acting as the refractive medium. The colors, in order of decreasing wavelength, can be remembered by the mnemonic "Roy G. Biv": Red, Orange, Yellow, Green, Blue, Indigo (the blue/violet border), and Violet. Cyan also appears fairly distinctly, between green and blue. By using special sources, refractors, and filters, you can get a narrow band of about 10 nm in wavelength that is considered monochromatic light. Lasers are special because they are the most consistent source of narrowly monochromatic light that we can achieve.
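The band boundaries described above can be encoded directly. Since sources vary on the exact cutoffs, the numbers in this sketch are illustrative rather than definitive:

```python
# Approximate lower edges of the visible color bands, in nanometers.
# The final entry marks the visible/infrared border.
BANDS = [
    (400, "violet"), (450, "blue"), (485, "cyan"), (500, "green"),
    (565, "yellow"), (590, "orange"), (625, "red"), (700, None),
]

def color_of(wavelength_nm):
    """Name the color band for a wavelength, or the invisible region."""
    if wavelength_nm < 400:
        return "ultraviolet"
    for (lo, name), (hi, _) in zip(BANDS, BANDS[1:]):
        if lo <= wavelength_nm < hi:
            return name
    return "infrared"
```

For example, `color_of(550)` falls in the green band, while wavelengths outside 400 to 700 nm map to the neighboring ultraviolet and infrared regions.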
Infrared light is invisible radiation with wavelengths ranging from about 750 nanometers, just longer than red in the visible spectrum, to 1 millimeter, on the border of the microwave region.
The super-high frequency (SHF) and extremely high frequency (EHF) of microwaves come after radio waves. Microwaves are waves that are typically short enough to employ tubular metal waveguides of reasonable diameter. Microwave energy is produced with klystron and magnetron tubes, and with solid state diodes such as Gunn and IMPATT devices. Microwaves are absorbed by molecules that have a dipole moment in liquids. In a microwave oven, this effect is used to heat food. Low-intensity microwave radiation is used in Wi-Fi, although this is at intensity levels unable to cause thermal heating.
Radio waves have the longest wavelengths in the electromagnetic spectrum. These waves can be longer than a football field or as short as a football. Radio waves do more than just bring music to your radio. They also carry signals for your television and cellular phones. | <urn:uuid:2b417d20-4a2b-4325-96ca-aff2c1c38e74> | 3.84375 | 1,209 | Knowledge Article | Science & Tech. | 35.537923 | 95,512,895 |
Density is the mass per unit volume of a substance and is simply the ratio of mass to volume. Unlike mass, density is a characteristic property of a material, meaning a property that has the same value for any size sample of a given substance. For example, a small piece of gold has much less mass than a large piece of gold but both pieces have the same density.
In the metric system, the standard unit for density is kg per cubic meter. The imperial system of units uses the weight (or force) unit, pounds, instead of the unfamiliar mass unit, slugs. The weight per unit volume of a substance is called the specific weight and has standard imperial units of lb per cubic foot. Commercial and industrial publications in the U.S. routinely refer to the specific weight of a material as its density.
Density, Mass and Volume
Density tells you how much matter is present in a unit volume of space. As density increases, the amount of matter contained in a unit volume also increases.
Gold has a much greater density than aluminum. A 1-inch cube of gold weighs 11.2 oz., while a 1-inch cube of aluminum only weighs 1.6 oz. In terms of mass, there is much less aluminum in a 1-inch cube than there is gold.
If you have 5 oz. of gold and 5 oz. of aluminum in the form of a cube, the gold cube has sides of length 0.8 inches, while the aluminum cube has sides of 1.5 inches. In terms of mass, aluminum takes up more space than the same amount of gold.
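The cube sizes quoted above follow directly from rearranging density = mass/volume into side = (mass/density)^(1/3). A short sketch, using the densities implied by the 1-inch-cube weights:

```python
def cube_side(mass, density):
    """Side length of a cube with the given mass and density
    (any consistent units; here ounces and ounces per cubic inch)."""
    return (mass / density) ** (1 / 3)

# A 1-inch cube of gold weighs 11.2 oz and of aluminum 1.6 oz,
# so those weights double as densities in oz per cubic inch.
gold_side = cube_side(5, 11.2)      # ~0.76 in, i.e. about 0.8 inches
aluminum_side = cube_side(5, 1.6)   # ~1.46 in, i.e. about 1.5 inches
```

The same 5 oz of matter occupies a visibly larger cube of aluminum than of gold, which is the point of the comparison.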
Science teachers often include questions on their tests to make sure their students know how to use the term "density" properly. For example, is the statement that gold is heavier than aluminum true or false? This is false, since a big enough piece of aluminum can be heavier than a small piece of gold. The statement would be true if it also stated that the gold and aluminum pieces are the same size. A true statement is that gold is denser (not heavier) than aluminum. | <urn:uuid:1120b340-50fa-4f25-b937-2bf76a1d9135> | 4.46875 | 426 | Knowledge Article | Science & Tech. | 63.396911 | 95,512,903 |
New European research has raised the possibility of finding survivors in rubble — such as in the recent Christchurch and Japanese earthquakes — by detecting molecules in their breath, sweat and skin.
It was notable for being the first study of its kind to use human participants. Over five days, at six-hour intervals, eight participants entered a simulator of a collapsed glass-clad reinforced-concrete building, which was designed, built and tested by researchers from Loughborough University, the National Technical University of Athens, Babeș-Bolyai University and the University of Dortmund.
A variety of sensors, positioned throughout the simulator, rapidly detected carbon dioxide and ammonia with high-sensitivity in the plumes of air that travelled through the constructed rubble, highlighting their effectiveness as potential indicators.
In addition to these molecules, a large number of volatile organic compounds were detected; acetone and isoprene being the most prominent potential markers.
When trapped within a void of a collapsed building, casualties release volatile metabolites — products of the body’s natural breakdown mechanisms — through their breath, skin and other bodily fluids, which can have complicated interactions with the building materials. These interactions change with conditions such as humidity, heat, and wind strength and direction, making the detection process much more difficult.
Interestingly, there was a marked decrease in ammonia levels when the participants were asleep; a finding the researchers could not explain and will investigate further, along with the build-up of acetone with increasing food withdrawal and the presence of detectable molecules in urine.
Co-author of the study, Professor Paul Thomas of Loughborough University, said, “This is the first scientific study on sensing systems that could detect trapped people. The development of a portable detection device based on metabolites of breath, sweat and skin could hold several advantages over current techniques.
“A device could be used in the field without laboratory support. It could monitor signs of life for prolonged periods and be deployed in large numbers, as opposed to a handful of dogs working – at risk to themselves and their handlers – for 20 minutes before needing extensive rest.”
An Institute of Physics spokesperson said, “As the first study of its kind, this preliminary work can be built upon to help prepare for future disasters such as those tragedies we’ve seen recently in Japan and New Zealand.”
The paper was part of research for a European Community project “Second Generation Locator for Urban Search and Rescue” Operations (SGL for USaR) aimed at solving critical problems following large scale structural collapses in towns and cities. The project combines chemical and physical sensors integration with the development of an open ICT platform for addressing mobility and time-critical requirements of USaR operations. | <urn:uuid:d6d140c6-cdaa-4b7b-91c9-3257098d7b54> | 3.3125 | 554 | News Article | Science & Tech. | 14.390685 | 95,512,915 |
NASA's Hubble Space Telescope has recorded the never-before-seen break-up of an asteroid into as many as 10 smaller pieces.
Fragile comets, comprised of ice and dust, have been seen falling apart as they near the sun, but nothing like this has ever before been observed in the asteroid belt.
This series of Hubble Space Telescope images reveals the breakup of an asteroid over a period of several months starting in late 2013. The largest fragments are up to 180 meters (200 yards) in radius. Image Credit: NASA, ESA, D. Jewitt (UCLA)
"This is a rock, and seeing it fall apart before our eyes is pretty amazing," said David Jewitt of the University of California at Los Angeles, who led the astronomical forensics investigation.
The crumbling asteroid, designated P/2013 R3, was first noticed as an unusual, fuzzy-looking object by the Catalina and Pan STARRS sky surveys on Sept. 15, 2013. A follow-up observation on October 1 with the W. M. Keck Observatory on the summit of Mauna Kea, a dormant volcano on the island of Hawaii, revealed three bodies moving together in an envelope of dust nearly the diameter of Earth.
"The Keck Observatory showed us this thing was worth looking at with Hubble," Jewitt said. "With its superior resolution, space telescope observations soon showed there were really 10 embedded objects, each with comet-like dust tails. The four largest rocky fragments are up to 400 yards in diameter, about four times the length of a football field."
Hubble data showed the fragments drifting away from each other at a leisurely one mph. The asteroid began coming apart early last year, but new pieces continue to reveal themselves, as the most recent images show.
It is unlikely the asteroid is disintegrating because of a collision with another asteroid, which would have been instantaneous and violent by comparison to what has been observed. Debris from such a high-velocity smashup would also be expected to travel much faster than observed. Nor is the asteroid coming unglued due to the pressure of interior ices warming and vaporizing.
This leaves a scenario in which the asteroid is disintegrating due to a subtle effect of sunlight, which causes the rotation rate of the asteroid to gradually increase. Eventually, its component pieces -- like grapes on a stem -- succumb to centrifugal force and gently pull apart. The possibility of disruption in this manner has been discussed by scientists for several years, but never reliably observed.
For this scenario to occur, P/2013 R3 must have a weak, fractured interior -- probably as the result of numerous non-destructive collisions with other asteroids. Most small asteroids are thought to have been severely damaged in this way. P/2013 R3 is likely the byproduct of just such a collision sometime in the last billion years.
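As a rough back-of-the-envelope check on this rotational-disruption scenario (not taken from the article), one can compute the critical spin period at which equatorial centrifugal acceleration matches self-gravity for a strengthless rubble pile. The density value below is an assumed typical rocky value, not a measured property of P/2013 R3.

```python
import math

# Critical rotation period for a strengthless ("rubble pile") body:
# the equator starts to shed material when omega^2 * R = G*M / R^2,
# which for a uniform sphere of density rho reduces to
# P_crit = sqrt(3*pi / (G * rho)), notably independent of size R.
G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
rho = 2000.0     # assumed bulk density of a rocky rubble pile, kg/m^3

p_crit = math.sqrt(3 * math.pi / (G * rho))   # seconds
print(f"critical spin period ~ {p_crit / 3600:.1f} h")   # ~2.3 h
```

For this assumed density the result is close to the well-known spin barrier of roughly 2.2 hours observed for small asteroids, which is why a slow, sunlight-driven spin-up can plausibly pull a weak body apart.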
With the previous discovery of an active asteroid spouting six tails, named P/2013 P5, astronomers are finding more evidence the pressure of sunlight may be the primary force causing the disintegration of small asteroids (less than a mile across) in our solar system.
The asteroid's remnant debris, weighing about 200,000 tons, will in the future provide a rich source of meteoroids. Most will eventually plunge into the sun, but a small fraction of the debris may one day blaze across our skies as meteors.
The Hubble Space Telescope is a project of international cooperation between NASA and the European Space Agency. NASA's Goddard Space Flight Center in Greenbelt, Md., manages the telescope. The Space Telescope Science Institute (STScI) in Baltimore conducts Hubble science operations. STScI is operated for NASA by the Association of Universities for Research in Astronomy, Inc., in Washington.
For images and more information about Hubble, visit:
Ray Villard | EurekAlert!
The anaerobic oxidation of ammonia (anammox) is an important pathway in the nitrogen cycle that was only discovered in the 1980s. Currently, scientists estimate that about 50 percent of the nitrogen in the atmosphere is forged by this process. A group of specialized bacteria perform the anammox reaction, but so far scientists have been in the dark about how these bacteria could convert ammonia to nitrogen in the complete absence of oxygen. Now, 25 years after its discovery, they finally solved the molecular mechanism of anammox.
Anammox bacteria are very unusual because they contain an organelle which is a typical eukaryotic feature. Inside this organelle, known as the “anammoxosome”, the bacteria perform the anammox reaction. The membrane of the anammoxosome presumably protects the cells from highly reactive intermediates of the anammox reaction. These intermediates could be hydrazine and hydroxylamine, as microbiologists proposed many years ago. This was very exciting news because the turnover of hydrazine, a very powerful reductant also used as rocket fuel, had never been shown in biology. However, these early experiments were provisional and many open questions remained.
To finally unravel the pathway experimentally was a very difficult enterprise. Marc Strous from the Max Planck Institute in Bremen says: “The anammox organisms are difficult to cultivate because they divide only once every two weeks. Therefore we had to develop cultivation approaches suitable for such low growth rates. Even after 20 years of trials, we can still only grow the organisms in bioreactors and not in pure culture.” In the present study, the researchers make use of the latest innovation in bioreactor technology for anammox cultivation: the membrane bioreactor. In such bioreactors the anammox organisms grow as suspended cells rather than in biofilms on surfaces, and relatively few contaminating organisms are present. The study makes use of protein purification and proteins cannot be effectively purified from biofilms because of the large amount of slime associated with these biofilms.
Another important key to the metabolism was the availability of the genome sequence of one of the best known anammox bacteria, Kuenenia stuttgartiensis. With the knowledge of the genome, the authors knew which proteins could be important. Based on the genome sequence, they could predict that nitric oxide, not hydroxylamine, might be the precursor for hydrazine. With a set of state-of-the art molecular methods the scientists could thus completely unravel the anammox pathway, and unequivocally establish the role of hydrazine and nitric oxide (NO) as intermediates.
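The intermediates established above can be summarized as a three-step scheme. This is a sketch consistent with the text; the stoichiometric coefficients are my own balancing, not quoted from the paper:

```latex
% Anammox pathway: nitrite is reduced to nitric oxide, NO is
% condensed with ammonium to hydrazine, and hydrazine is oxidized
% to dinitrogen, returning the electrons used in the first two steps.
\begin{align*}
\mathrm{NO_2^- + 2\,H^+ + e^-} &\rightarrow \mathrm{NO + H_2O}\\
\mathrm{NO + NH_4^+ + 2\,H^+ + 3\,e^-} &\rightarrow \mathrm{N_2H_4 + H_2O}\\
\mathrm{N_2H_4} &\rightarrow \mathrm{N_2 + 4\,H^+ + 4\,e^-}\\[4pt]
\text{net:}\quad \mathrm{NH_4^+ + NO_2^-} &\rightarrow \mathrm{N_2 + 2\,H_2O}
\end{align*}
```

Note that the four electrons released by hydrazine oxidation exactly balance the four consumed in the first two steps, so the net reaction needs no external electron donor.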
“With this significant advance we can finally understand how the nitrogen in the air we breathe is created: from rocket fuel and nitric oxide!” concludes Marc Strous. With the establishment of the prominent role of nitric oxide in both anammox and denitrification, the research also opens a new window on the evolution of the biological nitrogen cycle in the Earth's distant past. Marc Strous explains: “In the early days of Earth's history, nitric oxide, which accumulated in the atmosphere through volcanic activity, was presumably the first ‘deep electron sink’ on Earth and may thus have enabled the evolution of both microbial metabolic pathways, anammox and denitrification.”
Institute for Water and Wetland Research, Department of Microbiology, Radboud University, Nijmegen, The Netherlands
Nijmegen Centre for Mitochondrial Disorders, Nijmegen Proteomics Facility, Department of Laboratory Medicine, Radboud University Nijmegen Medical Centre, Nijmegen, The Netherlands
Radboud University, Department of Molecular Biology, Nijmegen Centre for Molecular Life Sciences, Nijmegen, The Netherlands
Delft University of Technology, Department Biotechnology, Delft, The Netherlands
Max Planck Institute for Marine Microbiology, Bremen, Germany

Contact
Prof. Dr. Ir. Marc Strous | EurekAlert!
Overlap extension polymerase chain reaction

This page assumes familiarity with the terms and components used in the polymerase chain reaction (PCR) process.
The overlap extension polymerase chain reaction (or OE-PCR) is a variant of PCR. It is also referred to as Splicing by overlap extension / Splicing by overhang extension (SOE) PCR. It is used to insert specific mutations at specific points in a sequence or to splice smaller DNA fragments into a larger polynucleotide.
Splicing of DNA Molecules
As in most PCR reactions, two primers (one for each end) are used per sequence. To splice two DNA molecules, special primers are used at the ends that are to be joined. For each molecule, the primer at the end to be joined is constructed so that it has a 5' overhang complementary to the end of the other molecule. When replication occurs after annealing, the DNA is extended by a new sequence that is complementary to the molecule it is to be joined to. Once both DNA molecules have been extended in this manner, they are mixed and a PCR is carried out with only the primers for the far ends. The overlapping complementary sequences introduced will serve as primers, and the two sequences will be fused. This method has an advantage over other gene-splicing techniques in not requiring restriction sites.
To get higher yields, some primers are used in excess as in asymmetric PCR.
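The splicing logic above can be sketched in a few lines of Python. The sequences, tail length, and helper names here are invented for illustration; a real primer design would also account for melting temperatures and secondary structure.

```python
def revcomp(seq: str) -> str:
    """Reverse complement of a DNA sequence (A/C/G/T only)."""
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

# Two hypothetical fragments to be fused (top strands, 5'->3').
frag_a = "ATGGCTACCGGTTCA"
frag_b = "GACCTGGATCCTTAG"
tail = 6  # length of the engineered complementary overhang

# Chimeric inner primers: each carries a 5' tail copied from the
# partner fragment, so the first-round products come to overlap.
fwd_b = frag_a[-tail:] + frag_b[:tail]            # forward primer on B
rev_a = revcomp(frag_a[-tail:] + frag_b[:tail])   # reverse primer on A

# First-round PCR products, each extended by the partner-derived tail.
prod_a = frag_a + frag_b[:tail]
prod_b = frag_a[-tail:] + frag_b

# The shared overlap primes the fusion in the second round, where
# only the outermost primers are supplied.
assert prod_a[-2 * tail:] == prod_b[:2 * tail]
fused = prod_a + prod_b[2 * tail:]
assert fused == frag_a + frag_b
```

The final assertion mirrors the point made in the text: the fusion junction is defined entirely by the primer tails, so no restriction site is needed at the joint.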
Introduction of Mutations
To insert a mutation into a DNA sequence, a specific primer is designed. The primer may contain a single substitution or contain a new sequence at its 5' end. If a deletion is required, a sequence that is 5' of the deletion is added, because the 3' end of the primer must have complementarity to the template strand so that the primer can sufficiently anneal to the template DNA.
Following annealing of the primer to the template, DNA replication proceeds to the end of the template. The duplex is denatured and the second primer anneals to the newly formed DNA strand, containing sequence from the first primer. Replication proceeds to produce a strand of the required sequence, containing the mutation.
The duplex is denatured again and the first primer can now bind to the latest DNA strand. The replication reaction continues to produce a fully dimerised DNA fragment. After further PCR cycles, to amplify the DNA, the sample can be separated by agarose gel electrophoresis, followed by electroelution for collection.
Efficiently generating oligonucleotides beyond ~110 nucleotides in length is very difficult, so to insert a mutation deeper into a sequence than a 110 nt primer can reach, it is necessary to employ overlap extension PCR. In OE-PCR the sequence being modified is used to make two modified strands with the mutation at opposite ends, using the technique described above. After mixing and denaturation, the strands are allowed to anneal to produce three different combinations, as detailed in the diagram. Only the duplex in which the 3' ends anneal over the overlap can be extended by DNA polymerase, which synthesizes in the 5' to 3' direction.
Following separation, the eluted fragments of appropriate size are subject to normal PCR, using the outermost primers used in the initial, mutagenic PCR reactions.
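The fragment bookkeeping for site-directed mutagenesis by OE-PCR can be sketched as follows. The template sequence, mutation site, and overlap length are invented for illustration; only the structure of the calculation reflects the method described above.

```python
# Hypothetical template; mutate position 10 (0-based) from G to A.
template = "ATGGCTACCGGTTCAGACCTG"
pos, new_base = 10, "A"
mutated = template[:pos] + new_base + template[pos + 1:]

ov = 4  # bases of primer-defined overlap flanking the mutation

# First-round products: the left fragment ends just past the
# mutation, the right fragment starts just before it, and both
# carry the new base introduced by the mutagenic primers.
left = mutated[: pos + 1 + ov]
right = mutated[pos - ov:]

# Their shared overlap lets the annealed pair be extended into the
# full-length mutant, then amplified with the outermost primers.
assert left[-(2 * ov + 1):] == right[: 2 * ov + 1]
full = left + right[2 * ov + 1:]
assert full == mutated and full[pos] == "A"
```

As in the text, the mutation sits inside the overlap, so both strands of the final duplex carry it after the fusion round.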
- Higuchi R, Krummel B, Saiki R (1988). "A general method of in vitro preparation and specific mutagenesis of DNA fragments: study of protein and DNA interactions". Nucleic Acids Research 16 (15): 7351–67. doi:10.1093/nar/16.15.7351. PMC 338413. PMID 3045756.
Polymerase chain reaction techniques
Real-time polymerase chain reaction (QRT-PCR) | Reverse transcription polymerase chain reaction (RT-PCR) | Inverse polymerase chain reaction | Nested polymerase chain reaction | Touchdown polymerase chain reaction | Overlap extension polymerase chain reaction | Multiplex polymerase chain reaction | Multiplex ligation-dependent probe amplification
Photo: B. Christensen/Azote
Reforestation in China
China’s efforts to restore forests after decades of destruction have been widely reported in the media. In 2016, researchers Andrés Viña and William J. McConnell with colleagues showed that Chinese policies have led to tree cover increasing over 1.6% of China. Forests are essential for climate regulation, soil and water conservation and enhancing biodiversity.
Centre researcher Tracy van Holt said, “The paper got a lot of media attention, but I noticed something: plantations weren’t mentioned anywhere in the article.”
This matters, argues Van Holt, because if the policy is to encourage tree plantations rather than native forests, then different ecosystem services result and this may affect human wellbeing. Biodiversity is often lower in plantations, the net amount of carbon sequestered is not always straightforward and water flow in plantation watersheds may diminish because plantations need a lot of water, says Van Holt.
“Wondering where in the article the tree plantations were referred to, I dug further and found that Viña and colleagues used the Food and Agriculture Organization definition of forest.” Defining “forests” is an active conversation in academia.
The FAO combines tree plantations and native forests. “The researchers also used low resolution satellite imagery making it very difficult to isolate tree plantations from native forests.”
“By not clarifying what is meant by “forest”, the public was potentially misled,” says Van Holt. But by how much? Van Holt has analysed the media data and published her results in Science Advances.
According to the Altmetric tool, which analyses social media, blogs and media articles relating to research papers, up to 783,000 people were potentially misled. “We analyzed the tweets associated with the article and all 71 tweets referred to native forest returning—not one mentioned tree plantations.” “Out of the 19 news articles analyzed, plantations were mentioned 4 times and native forest terms were reported 41 times.”
“If it turns out that most of the tree cover is tree farms, then we need to know this. This is especially important because large “reforestation” efforts are underway. It is quite possible that instead of native forests returning that commodity plantations will be planted. What this means for biodiversity, carbon sequestration, and human wellbeing is unknown.”
Van Holt argues that if researchers and policymakers stop conflating native forests and tree plantations then “we may be able to identify which areas are most appropriate for plantations and where native forest should be restored”.
Van Holt and Francis Jack Putz classified the content of the 71 tweets available as of Friday, 15 April 2016, that linked to this article, which, according to Science's Altmetric tool, had an upper bound of 783,000 followers. They then recorded the presence of tree cover (a neutral term that can include native forests and/or plantations), native forest (including forest conservation, forest cover, forest, forest recovery, reforestation, and forest regeneration), and plantation (plantation, tree plantation, tree crops, and tree farm) in the 19 news articles linked to Viña et al. (1). They also searched for afforestation and regeneration, although these terms are ambiguous.
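A minimal sketch of this term-matching step might look like the following. The term lists come from the paragraph above, while the example tweets, the function name, and the tie-breaking rule (checking plantation terms first, since "forest" alone matches broadly) are my own assumptions, not the authors' actual coding procedure.

```python
# Term lists as described in the methods paragraph above.
NATIVE = ("forest conservation", "forest cover", "forest recovery",
          "reforestation", "forest regeneration", "forest")
PLANTATION = ("tree plantation", "tree crops", "tree farm", "plantation")

def classify(text: str) -> str:
    """Label a snippet by which forest-related terms it contains."""
    t = text.lower()
    if any(term in t for term in PLANTATION):
        return "plantation"
    if any(term in t for term in NATIVE):
        return "native forest"
    return "tree cover / other"

# Invented example tweets, for illustration only.
tweets = ["Great news: reforestation is bringing China's forests back!",
          "Much of the new tree cover is actually tree farms."]
labels = [classify(t) for t in tweets]
# labels -> ['native forest', 'plantation']
```

Simple substring coding like this is crude (it cannot handle negation or sarcasm), which is presumably why the authors classified the tweets by hand.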
Tracy Van Holt is an affiliated researcher, primarily working within the Global Economic Dynamics and the Biosphere Programme. Her research focuses on the social and behavioral dynamics or interactions of natural resource-dependent communities under different configurations of landscape and seascape change, market dynamics and spatial features.
Photosynthesis (from photo- (light) and synthesis (composition)) is the process by which plants and certain other organisms obtain and convert solar (or light) energy into chemical energy.
All cells need energy, the ability to perform work, in order to maintain their existence. Even we humans need energy! Without energy, we could not do even the most basic things in life, such as walking, standing, sitting, or keeping our hearts beating. All cells require energy for (but not limited to) these five reasons:
- Carrying out active transport.
- Synthesizing proteins and nucleic acids.
- Responding to chemical signals at the cell surface.
- Moving organelles around the cell (using motor proteins).
- Producing light in some organisms, such as fireflies.
Life as we know it depends on chemical energy, energy saved in chemical bonds. But how do certain organisms get this chemical energy? There are two ways in which an organism obtains energy:
- Autotrophs are organisms that make their own chemical energy rather than eating or absorbing other organisms. Most autotrophs, known as photoautotrophs, carry out photosynthesis (plants, some protists, and some bacteria). Autotrophs do not just produce energy for themselves: as producers, they start the food chain (for example, grass provides energy for a rabbit, which in turn provides energy for predators such as snakes and foxes). Without these producers, we would have nothing to eat!
- Heterotrophs are organisms that are not able to make their own energy, so they resort to absorbing or eating energy from other organisms. Heterotrophs are also known as consumers because they consume other organisms for energy in the food chain cycle. Examples of heterotrophs are foxes, cats, snakes, hawks, eagles, crocodiles, tigers, lions, and even us: humans!
Photosynthesis converts light/solar energy into chemical energy, and thus is very important to life. But, how does it work? Let's first take a look at the chemical equation for photosynthesis (reactants on the left, products on the right):
energy from the Sun + 6CO2 + 6H2O → C6H12O6 + 6O2
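As a quick check that the equation above is balanced, one can count the atoms on each side. This is a throwaway sketch whose tiny parser handles only simple formulas like these:

```python
from collections import Counter
import re

def atoms(formula: str, n: int = 1) -> Counter:
    """Count atoms in a simple formula such as 'C6H12O6', times n."""
    counts = Counter()
    for elem, num in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        counts[elem] += int(num or 1) * n
    return counts

reactants = atoms("CO2", 6) + atoms("H2O", 6)
products = atoms("C6H12O6") + atoms("O2", 6)
assert reactants == products   # 6 C, 12 H, 18 O on each side
print(dict(reactants))         # {'C': 6, 'O': 18, 'H': 12}
```

Light energy does not appear in the count; it drives the reaction but contributes no atoms.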
Here, we need 3 important elements in order to kick-start the process. We need sunlight, carbon dioxide and water. How do plants obtain all of these three elements?
Wavelengths of light are absorbed and reflected by molecules called pigments. In plants, the green pigment that absorbs sunlight is known as chlorophyll. Chlorophyll is found in the chloroplast, an organelle that is the site of photosynthesis (in plants).
Chlorophyll absorbs solar energy and transfers it to chemicals involved in the photosynthetic process. Sunlight contains all the colors of the rainbow (Roy G. Biv). All the colors hit the chlorophyll molecules, but only certain colors are absorbed. Chlorophyll absorbs well in the blue-violet and red sections of the visible light spectrum, whereas chlorophyll reflects most of the green light in the visible light spectrum, giving most plants a green color.
- Carbon Dioxide
Pores in the leaves, known as stomata, control the flow of carbon dioxide into a plant and the flow of oxygen out of the plant. The flow of these gases is also regulated by guard cells, cells that open and close the stomata.
In a vascular plant, pipe-like tissues (the xylem) conduct water to different parts of the plant. In a non-vascular plant, water cannot be conducted this way, and so must be absorbed directly from the plant's surroundings (such as the soil).
The 2-step process
Now that we have the necessary "ingredients" to perform photosynthesis, we can get started! Photosynthesis occurs in two steps, the Light Reactions (also: light-dependent reaction) and the Calvin Cycle (also: dark-reactions, light-independent reactions, carbon fixation).
The light reactions occur in the thylakoid membrane of the chloroplast. It is made up of two photosystems:
Photons from the Sun travel 93 million miles to reach Photosystem II in the thylakoid membrane. Their energy excites electrons in the chlorophyll molecules, and the excited electrons are passed along a chain of electron acceptors, dropping to a lower energy state at each step. The electrons lost by Photosystem II are replaced by electrons stripped from H2O (water): water is split, its electrons and hydrogen protons are donated to the system, and oxygen is released as a waste product. As the electrons lose energy along the chain, that energy is used to pump hydrogen protons from the stroma into the lumen.
Then, at Photosystem I, light re-energizes the electrons, and NADP+, the final electron acceptor in the thylakoid, accepts electrons and a hydrogen proton to form NADPH. This is where the NADPH comes from. Meanwhile, the hydrogen protons that have accumulated in the lumen flow back into the stroma through ATP synthase, a process called chemiosmosis. ATP synthase uses this flow to join ADP with a phosphate group, forming ATP (adenosine triphosphate, the cell's energy-storage molecule). The ATP and NADPH formed by these reactions are needed in the Calvin Cycle.
A schematic (unbalanced) equation for the light reactions is as shown:

sunlight + H2O + NADP+ + ADP + Pi → O2 + NADPH + ATP
The two products of the light reactions, ATP and NADPH, are transferred to the stroma, the fluid-filled region of the chloroplast not taken up by the thylakoids, to drive the Calvin Cycle. Six molecules of CO2 react with six molecules of the 5-carbon molecule RuBP (ribulose-1,5-bisphosphate) to form twelve molecules of the 3-carbon molecule phosphoglyceraldehyde (PGAL), also known as glyceraldehyde-3-phosphate. Overall, 36 carbons enter the reaction. The electrons in CO2 and RuBP are not at a high enough energy state to drive this step by themselves, so an energy source is needed: 12 ATP and 12 NADPH.

As these are consumed, 12 ADP, 12 NADP+, and 12 phosphate groups are released. The electrons in NADPH sit at a high energy state; as they drop to lower energy states, they supply the reducing power for the reaction. Likewise, ATP's phosphate bonds store a large amount of energy, and splitting ATP into ADP and phosphate releases that energy to help drive the reaction forward.
As cycles reuse material, the Calvin Cycle recycles most of the PGAL to regenerate RuBP: ten of the twelve PGAL molecules are rebuilt into six RuBP, a step that consumes additional ATP (yielding ADP and phosphate, but requiring no NADPH). The two remaining PGAL molecules are used to make glucose, C6H12O6 (or another carbohydrate, starch, or sugar).
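The carbon bookkeeping described above can be verified in a few lines. The coefficients are the standard textbook values for fixing six CO2, not derived here:

```python
# One "turn" of the cycle as described in the text.
CO2_fixed, RuBP_used = 6, 6
carbons_in = CO2_fixed * 1 + RuBP_used * 5   # 6 + 30 = 36 carbons
PGAL_made = carbons_in // 3                  # 12 three-carbon PGAL
assert PGAL_made == 12

PGAL_recycled = 10                           # rebuilt into RuBP
PGAL_exported = PGAL_made - PGAL_recycled    # 2 left over for sugar
assert PGAL_recycled * 3 == RuBP_used * 5    # regenerates all 6 RuBP
assert PGAL_exported * 3 == 6                # one glucose's six carbons
```

The check makes the regeneration step concrete: thirty of the thirty-six carbons go back into RuBP, leaving exactly six carbons (two PGAL) per turn for glucose synthesis.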
The chemical equation of the Calvin Cycle, with the standard textbook coefficients, is shown as follows:

6CO2 + 12NADPH + 18ATP → C6H12O6 + 12NADP+ + 18ADP + 18Pi
Solar-powered sea slugs
UPPER: The sacoglossan Placida cf. dendritica showing the green network of ducts which contain the green chloroplasts from its algal food.
LOWER: The aeolid nudibranch Pteraeolidia ianthina which "farms" colonies of brown single-celled algae (zooxanthellae) in its body.
PHOTOS: Bill Rudman.
Two quite different groups of sea slugs have evolved ways of using the ability of plants to convert the sun's energy into sugars and other nutrients. In simple terms they have become "solar powered".
The herbivorous sacoglossans are suctorial feeders, removing the cell sap from the algae on which they feed. In most, the cell contents are simply digested by the slug. Some species, however, have evolved branches of their gut which ramify throughout the body wall and contain plastids, the photosynthesising "factories" from the algae, alive and operating. In many cases these plastids are chloroplasts, but sacoglossans that feed on red and brown algae are also reported to keep the plastids from these algae alive. As I show elsewhere in the Forum, one species, Elysia cf. furvacauda, changes diet and plastid at least three times during its life history.
In nudibranchs, which are all carnivorous, many species have evolved similar ways of keeping whole single-celled plants (zooxanthellae) alive in their bodies. In most cases the zooxanthellae are obtained from their food, often cnidarians, which already have symbiotic zooxanthellae in their bodies. This symbiosis has evolved many times within the nudibranchs, with examples in many quite unrelated families and orders. Have a look at the following species for further information: Pteraeolidia ianthina, Phyllodesmium longicirrum, Phyllodesmium briareum, Phyllodesmium crypticum, Berghia verrucicornis, Spurilla australis, Pinufius rebus - zooxanthellae symbiosis, A. ransoni, A. harrietae, Melibe megaceras, Elysia cf. furvacauda, Elysia cf. pilosa, Plakobranchus ocellatus, Elysia crispata, Elysia chlorotica.
For further information:
• Zooxanthellae Symbiosis References.
•Zooxanthellae - what are they?
•Zooxanthellae - in Cnidarians
•Zooxanthellae - in nudibranchs
• Chloroplast Symbiosis References.
• Aspects of Coral feeding
• Chloroplast symbiosis Research
• Sacoglossan Feeding
• Feeding on Palythoa
Rudman, W.B., 1998 (October 11) Solar-powered sea slugs. [In] Sea Slug Forum. Australian Museum, Sydney. Available from http://www.seaslugforum.net/find/solarpow
October 8, 2008
From: Mihir Pathak
I am a PhD student in Mechanical Engineering at the Georgia Institute of Technology. My focus area is studying the thermal science and bio-mechanics of natural phenomena. I have recently learned quite a bit about the "solar powered sea slug" and found this forum to be very helpful in getting some questions answered. Here are my questions:
1. How do they move towards light? What physically/chemically happens during phototaxis? Also, what is the photosynthesis process in the particular algae that they eat?
2. How do these slugs store their energy? What properties enable them to do so?
3. What is an effective way to measure the O2 and CO2 levels when the slug is exposed and not exposed to light?
4. What other measurements will the biologists and chemists in this forum find useful that an engineer can do? Essentially, I would like to create a mimicked device that can do something similar. Any collaborators?
5. I will be getting some slugs into my lab soon. Any advice on how to build their tank? What kinds of apparatus would I need? Lamps, heaters, water filters, bubblers, live coral?, other things? How do I keep them alive? How do I keep the algae alive?
Any sort of assistance on these questions would be great. Thank you for letting me post on this forum. I look forward to hearing back from you. Thanks.
Georgia Institute of Technology
Pathak, M.G., 2008 (Oct 8) Tank/Phototaxis/Energy/Experiments. [Message in] Sea Slug Forum. Australian Museum, Sydney. Available from http://www.seaslugforum.net/find/21929
As you will see on the solar power Page, there are two major groups of solar-powered slugs. One group, the sacoglossans, are essentially herbivores who remove intact plastids from the plants and keep them alive and functioning in their own bodies. The second group are essentially carnivores, or related to carnivores, and they nurture single-celled plants [zooxanthellae] in their bodies. In most cases they have 'stolen' the zooxanthellae from their original cnidarian hosts [such as sea anemones or soft corals].
Quite different procedures are needed to keep these two types of animal - and their food - in aquaria. As far as I know only the first group have been successfully kept and studied in the lab. If you go to the Elysia chlorotica and Elysia clarki Fact Sheets and look at the attached messages you will find addresses of a number of researchers working in this field.
Concerning the second group, Ingo Burghardt may be able to offer some suggestions.
October 28, 2002
From: Sam Hsieh
Even the largest chloroplast genomes account for less than 25% of the gene products needed for plastid function. How can an isolated organelle, normally dependent upon the genes residing in its own nucleus for most of the proteins making up its photosynthetic machinery, remain physically stable and function for months in a foreign cell? Are we seeing tertiary endosymbiosis in action?
2nd Year Student
University of British Columbia
Hsieh, S., 2002 (Oct 28) Solar-powered Sea Slugs - tertiary endosymbiosis?. [Message in] Sea Slug Forum. Australian Museum, Sydney. Available from http://www.seaslugforum.net/find/8303
When I suddenly get 5 or 6 messages all asking much the same question, I think it is fair to suspect that a teacher has asked a class to answer a question. So this answer is for all of your class mates as well.
The ability of some animals, such as solar-powered sea slugs to remove functioning plastids from plants and keep them alive in their own bodies [sacoglossans] or to keep whole plant cells alive in their bodies [nudibranchs], is fascinating for many reasons and is fertile ground for opisthobranch workers, physiologists, botanists, geneticists etc. I think it will be many years before we can say just how the symbiosis works. I think Kerry Clark coined the term kleptoplasty, or at least popularised it, for the phenomenon of 'stealing' plastids. It is from the same Ancient Greek word which gives us the word kleptomania - [compulsive stealing] an affliction which seems to infect American filmstars with monotonous regularity.
And now to your question about whether this is a tertiary symbiosis. I guess we have to define what a plastid is and what its origin is. This is, I am afraid, getting a little outside my field of expertise. What I can say is that you should have a look at some of Lynn Margulis's publications. I have a very thumbed copy of her 1981 book Symbiosis in Cell Evolution but I am sure you can find more up-to-date editions to have a look at. She clearly stated the hypothesis that eucaryotic cells evolved from bacterial ancestors by a series of symbioses. Many cell organelles are considered to be symbiotic organisms which 'invaded' protoorganisms in the early stages of the evolution of life on this planet. Plastids, like mitochondria, have their own genome, and at cell division act as though they are symbionts. I don't know if we gain much in our understanding of their biology by trying to number their 'grade' of symbiosis.
As I said above, there are two types of 'solar-powered' slugs. If we first consider the sacoglossans: in plants, the plastids could be considered primary symbionts. When they are removed by sacoglossans to their own cells, the plastids still occupy the same position in relation to the cell, as a primary symbiont. However if we look at the plastid in the sacoglossan we could say that this is its second primary symbiosis.
If we look at the solar-powered nudibranchs, the situation is a bit more complex. They remove whole single-celled plants [zooxanthellae] from the primary host (usually a cnidarian) and re-use them in their own tissue. In this case the plastid is a symbiont of the zooxanthella which is the symbiont of an animal. I guess you could call this a secondary symbiosis but it all becomes quite confusing if you want to record that it has been moved from its first host to a second host. In fact I once described the removal of zooxanthellae from cnidarians to nudibranchs as a 'secondary symbiosis' but I was describing the transfer of the zooxanthella from its primary host to a secondary host. You, on the other hand, are trying to number how many steps we can go back until we get to the first symbiont. I think its all a bit confusing and doesn't really enlighten us very much.
• Margulis, L. 1981. Symbiosis in Cell Evolution W.H.Freeman & Co.: San Francisco. 1-419.
I hope my answer is not too confusing,
September 4, 2001
From: Caroline R. Cripe
Sea slugs receive food from the algae that live within their skin. What benefits, if any, do you think algae receive from sea slugs?
Caroline R. Cripe
Cripe, C.R., 2001 (Sep 4) 'Solar-powered' sea slugs. [Message in] Sea Slug Forum. Australian Museum, Sydney. Available from http://www.seaslugforum.net/find/5207
I guess you have found the Solar-powered Sea Slug Page where there is a lot of introductory information.
It's a bit hard to answer your question about 'what benefits' the algae receives from its relationship with sea slugs. Asking about benefits suggests that there is an accountant sitting there balancing expenditure against income. I'm afraid nature doesn't really work that way. These systems have evolved over millions of years and don't necessarily utilise the most cost efficient methods of operation. The important thing is that the whole system works as a single unit and the organisms involved survive to reproduce and pass on the system to the next generation of participants.
We are not even sure that the one-celled algae (zooxanthellae) can survive outside the slug. The zooxanthellae are related to free-living algae called dinoflagellates, but I don't think there is any research that has shown that they can complete their life-cycle as free living plants. Asking about the benefits of living in the slugs suggests there is an alternative, when in fact there probably is not. The symbiosis between zooxanthellae and animal is now an integral part of the life of both the plant and the animal. We might feel inclined to say that the plant benefits from a stable protected environment in the animal, but that doesn't mean very much because the plant probably hasn't any alternative.
You must also realise that in the case of the sacoglossan sea slugs it is not a whole plant that is kept alive in the slug's tissues, but just the chloroplasts, which are the organelles found in green plant cells which photosynthesise. In the case of chloroplast symbiosis, the chloroplast is either a 'slave' of the plant or a 'slave' of the sea slug, without one or the other it will die.
March 23, 2001
From: Tom Mackillop
To who it concerns
I am in yr. 11 at school and in our biology class we have some dispute over the kingdom into which sea slugs fall, animal or plant. I would appreciate hearing from anyone who could set us on the right track.
Mackillop, T., 2001 (Mar 23) sea slugs - plants or animals?. [Message in] Sea Slug Forum. Australian Museum, Sydney. Available from http://www.seaslugforum.net/find/4015
One of the main differences between plants and animals is that plants produce their own food by chemical reactions while animals obtain their nutrients by eating existing organic matter. Sea Slugs are a specialised group of snails (Phylum Mollusca), and are definitely animals. Some are herbivores (plant-eaters) while others are carnivores, often eating only a very specialised group of animals.
There are two fascinating groups of sea slugs, which I have called 'Solar Powered' because they have become very plant-like in their behaviour. Have a look at the Solar Powered Slugs Page for some background information on these animals. Perhaps they are what made you wonder whether they are in fact plants?
Have a look also at the messages below yours on this page as there is more information with each message. If you click on any underlined word it will take you to another relevant page.
August 4, 2000
From: Daniel Barshis
To Dr. Rudman,
My name is Daniel Barshis and I am currently a Senior year undergraduate at the Evergreen State College [Washington, USA]. The opportunity for advanced study in the marine sciences here is fairly limited and I am doing some preparatory searching for a possible marine diving volunteer position this coming winter. I am contacting you because I am also extremely interested in the evolutionary biology of photosynthetic marine invertebrates, particularly Elysia sp. I have done much research into the natural history of, and current research being done on, Elysia sp. I was wondering if you had any suggestions of places to go or other people to contact about possibly working with them on a research project during January to March of 2001. Any information or leads you can think of would be incredibly helpful. I would like to take my interests out of the library and into the water. I have an open water certification and have done some recreational reef diving in the tropics.
Thank you for your help,
Barshis, D., 2000 (Aug 4) Elysia and chloroplast symbiosis. [Message in] Sea Slug Forum. Australian Museum, Sydney. Available from http://www.seaslugforum.net/find/2827
Daniel sent this request to me personally and with his permission I have posted it on the Forum. If anyone would like an assistant or could suggest someone he should contact, perhaps they could email Daniel directly.
May 17, 2000
From: Liz Summer
I would like to advertise a review on photosynthetic sea slugs that Mary, Jim, and I just had published in the journal Plant Physiology, titled:
Mary E. Rumpho, Elizabeth J. Summer, and James R. Manhart (2000) Solar-Powered Sea Slugs. Mollusc/Algal Chloroplast Symbiosis. Plant Physiology, 123: 29-38.
We even got the cover picture (which demonstrates the broad thinking of the American Society of Plant Physiologists) which can be seen at http://www.plantphysiol.org/current.shtml - click on the image to enlarge. Unfortunately, you have to have a password to access the paper on-line. Most university libraries carry the journal. The paper is written from a chloroplast's point of view and discusses why these associations defy our normal understanding of chloroplast biology as well as some wild speculations on mechanisms and evolutionary significance.
Summer, E., 2000 (May 17) Review of sacoglossan - plastid symbiosis. [Message in] Sea Slug Forum. Australian Museum, Sydney. Available from http://www.seaslugforum.net/find/2390
Congratulations to you and your colleagues on this publication. It is nice to have a Sea Slug as the 'covergirl' on any journal and it is certainly a bonus to have it on a plant journal!
This is a very useful review of our knowledge of the physiology and evolution of the slug - chloroplast symbiosis. Although, as you say, it is from the chloroplast's point of view, it will definitely be required reading for anyone wishing to understand the significance of the symbiosis.
Bill Rudman.
Rudman, W.B., 2000 (May 17). Comment on Review of sacoglossan - plastid symbiosis by Liz Summer. [Message in] Sea Slug Forum. Australian Museum, Sydney. Available from http://www.seaslugforum.net/find/2390
January 8, 2000
From: Mark Schoenbeck
Do any species of the genus Phyllidia form symbioses with algae? If not, which is the closest relative of P. pustulosa that does form a symbiosis?
Schoenbeck, M., 2000 (Jan 8) Symbiosis among Phyllidia?. [Message in] Sea Slug Forum. Australian Museum, Sydney. Available from http://www.seaslugforum.net/find/1734
To my knowledge there are no dorid nudibranchs with an algal symbiosis. The closest relatives - and they are not very close - would be the aeolids and arminoideans I discuss on the "Solar-powered sea slugs" Page.
If you know of any dorids, or have suspicions about possible symbionts, I would like to hear about them.
Bill Rudman.
Rudman, W.B., 2000 (Jan 8). Comment on Symbiosis among Phyllidia? by Mark Schoenbeck. [Message in] Sea Slug Forum. Australian Museum, Sydney. Available from http://www.seaslugforum.net/find/1734
December 5, 1999
From: Molly E. Hagan
I am looking for information concerning a species referred to as a "ruffled sea slug". I know that it has the unusual ability to apply the byproducts of plants to its body to grow. Is this just one kind of slug, or is it a branch?
Thank you for any information you might have.
Hagan, M.E., 1999 (Dec 5) The solar-powered 'Ruffled Sea Slug'. [Message in] Sea Slug Forum. Australian Museum, Sydney. Available from http://www.seaslugforum.net/find/1625
I guess the 'Ruffled Sea Slug' is Tridachia crispata, which is found in the Caribbean. It is quite similar in shape to species such as Elysia ornata, to which it is closely related. Tridachia and Elysia are sacoglossans, a group of herbivorous sea slugs which suck the cell contents from the algae they feed on. Some have developed the means to keep the photosynthetic plastids from the plant tissue alive in their bodies, where they are able to photosynthesise and provide extra nutrients for the animal. There are a number of families of nudibranch sea slugs which do something similar, keeping whole one-celled plants alive in their bodies. Have a look at the page on Solar-powered sea slugs for further information, and be sure to look at the messages and answers below yours on this page.
PS: Unfortunately I don't have a photo of Tridachia. If anyone out there can oblige I would be grateful.
Rudman, W.B., 1999 (Dec 5). Comment on The solar-powered 'Ruffled Sea Slug' by Molly E. Hagan. [Message in] Sea Slug Forum. Australian Museum, Sydney. Available from http://www.seaslugforum.net/find/1625
August 16, 1999
From: Jennifer Whittington
Sea slugs receive food from the algae that live within their skin. What benefits if any do you think algae receive from the sea slugs?
I am a first year student in high school Biology.
Whittington, J., 1999 (Aug 16) Sea Slugs & symbiotic algae. [Message in] Sea Slug Forum. Australian Museum, Sydney. Available from http://www.seaslugforum.net/find/1184
If you look at the page on 'solar-powered' Sea Slugs you will see that there are two quite different processes at work.
The herbivorous sacoglossan Sea Slugs, suck the cell contents from the sea weeds they feed on. From this cell sap, they keep alive and functioning the plastids (those parts of the plant cell which convert the sun's energy into sugars). The conversion of the sun's light energy into food for the plant is called photosynthesis. In green plants the plastids are green, and are called chloroplasts. Most sacoglossans are coloured by the plant pigments they retain in their bodies.
Amongst the nudibranch Sea Slugs, which are all carnivores, a number of different families have evolved ways of keeping microscopic single-celled plants alive in their bodies. These single-celled plants are called zooxanthellae, and although they have free-living relatives in the plankton, they are adapted to living within the tissues of animals. The most spectacular zooxanthellae are the species which live in the tissues of coral animals. Without their symbiotic zooxanthellae, the tiny coral polyps would be unable to produce the calcium carbonate skeleton, which is the building material for the great coral reefs of the world.
Now to your question about what benefit the plant gets from the association. I guess your question refers to the Sea Slugs with zooxanthellae. I'm afraid applying cost-benefit analysis questions, which are the joy of accountants, probably doesn't have much meaning in the natural world. In the real world the only reward is survival and the ability to produce a new generation to carry on your genes. I guess being a microscopic free-living cell, floating around in the plankton, has its risks. There are many filter feeders waiting to eat you and you are at the mercy of the currents and tides. Despite this, free-living phytoplankton are clearly a very successful life form. From all accounts, living in an animal which has evolved special anatomical features and behaviour patterns, for your comfort, has its benefits, much like a plant being cared for in a greenhouse.
Ove Hoegh-Guldberg's studies on Pteraeolidia ianthina showed that zooxanthellae within its body, breed very rapidly, and at the same time produce nutrients, far in excess of their own requirements. This suggests that the zooxanthellae are living in a very healthy, protected environment.
The zooxanthellae are specially adapted for this symbiotic life and although we are not 100% sure, it seems they do not have the ability to live free. There is therefore not much point in listing the good and bad aspects of this life in some sort of balance sheet. This is the only life possible to them, they do not have the alternative of a free-living existence.
March 10, 1999
From: Derek Carmona
My name is Derek Carmona and I have been studying the idea of animals using chloroplasts for four years now. My question to you is if you know of any scientists who have made any advances on the subject and if so could you send me any information you can?
Derek Warren Carmona
Carmona, D., 1999 (Mar 10) Information on solar animals?. [Message in] Sea Slug Forum. Australian Museum, Sydney. Available from http://www.seaslugforum.net/find/660
I am not sure what level you are at in your studies so it's a little hard to know at what level to answer your question. If you are at university, I would suggest you look at some of the references I have just posted on chloroplast symbiosis. Also look at the information at the top of this page above your message, and at some of the correspondence you will find below your message. You will also find some information on the Flatworm Page about Convoluta roscoffensis, which is a flatworm with symbiotic chloroplasts.
March 9, 1999
From: Jussi Evertsen
Thank you very much for your help. I am also very curious whether any of the Elysia species found here in Norwegian waters might show "solar power" affinities. Do you have any clue at which latitudes these solar-power affinities occur? Is this simply a tropical trait?
Trondhjem Biological Station
Department of Natural History
Evertsen, J., 1999 (Mar 9) Solar-powered sacoglossans in Norway?. [Message in] Sea Slug Forum. Australian Museum, Sydney. Available from http://www.seaslugforum.net/find/658
There are sacoglossans in quite northern latitudes with symbiotic chloroplasts. For example Elysia chlorotica in Nova Scotia and Elysia viridis in England. Also the photosynthetic flatworm Convoluta roscoffensis lives in the North Atlantic.
I have posted a list of publications on chloroplast symbiosis which may be useful. Any updates would be welcome.
One active research worker in the field, who could give you up to date advice is Cynthia Trowbridge whose address is:
Dr. Cynthia D. Trowbridge
Research Assistant Professor,
Oregon State University
Department of Zoology/Hatfield Marine Science Center
Newport, OR 97365
December 16, 1998
From: Michael Rhodin
Dear Dr. Bill Rudman,
My name is Michael Rhodin and I am a freshman at Trinity College in Hartford CT. I am doing research on the possibility of introducing a working chloroplast into an animal cell to make it photosynthetic. I have looked at the sea slug Elysia chlorotica and seen that it eats a type of algae called Vaucheria litorea, and that the slug's cells incorporate the chloroplasts. I have also found that there are several genes in Elysia chlorotica which match the genes of Vaucheria litorea, genes which work in relation to the chloroplasts. I was wondering if it would be possible through recombinant DNA to remove these genes and insert them into another creature's cells, such as a fish, invertebrate, or reptile. Then I was wondering, if that organism ate the Vaucheria litorea, would it take up and maintain the chloroplasts? Has this been attempted? Do you think it could work? Thank you for your time, this really is a big help, and a matter of great interest to me. I am so glad that I have found your site.
Rhodin, M., 1998 (Dec 16) Photosynthetic Animals. [Message in] Sea Slug Forum. Australian Museum, Sydney. Available from http://www.seaslugforum.net/find/391
I am not an expert in plastid physiology so I have sent copies of your message to a few colleagues who hopefully are more able to answer your question intelligently.
There are however a number of general points.
You say there are a number of genes in the Elysia which match the genes in the Vaucheria it feeds on. The obvious question to ask is how did you ensure that the Vaucheria-like genes were not contaminants from its food?
The other point is what got me interested in these symbioses in the first place, and that is the morphological adaptations that the "host" has undergone to become an efficient plastid or zooxanthellae "farmer". All these animals are related to animals which don't have symbiotic relationships and we can see how in certain phylogenetic lines of animals species have modified their anatomy, (finely branched gut, branches into the epithelial layer, flattened cerata, transparent bodies etc) all to better enable them to keep their plant tissues in an optimum environment for photosynthesising. A good example is shown in the aeolid genus Phyllodesmium where in one genus we can see major morphological changes which are correlated with the varying ability between species to grow zooxanthellae in their bodies (Rudman, 1991 - see references at top of page).
All things being equal, if gene transfer is the key, and possible, then I suspect the experiment would only work in animals that had by chance a morphology which would provide a hospitable "nursery" for plant tissue. For example, the skin of most vertebrates, I would think, would block out sunlight and so prevent photosynthesis.
If you have any photos of Elysia chlorotica either on or off its food, or some photos of the plastids, it would be nice to put them up on the Sea Slug Forum.
With luck, one of the people I sent copies of your message to, will reply.
November 29, 1998
I am currently doing a research paper and would love to include these wonderful solar-powered sea slugs. However my question is fairly simple: is the relationship between the sea slug and the photosynthetic algae either:
mutualistic, which I cannot see;
commensal, which may be possible as the algae is being provided a home;
or parasitic, as the sea slug is eating the algae and then harvesting the algae in its gut?
I really need an answer to my question fairly quickly.
If anyone has any thought please feel free to give me a shout:)
I know teachers often like to pigeon-hole nature into convenient categories but I'm afraid the relationships that organisms form with one another form a continuum which defies our best attempts at dividing it into suitable categories. In the Middle Ages, theologians used to discuss in great seriousness how many angels could fit on the head of a pin. I suspect attempts at categorising relationships between organisms are of similar value.
In this case, the plant involved is a dinoflagellate alga belonging to the genus Symbiodinium or Gymnodinium. These dinoflagellates are known as "zooxanthellae" and are commonly found living in the tissues of cnidarians. Without them the polyps of the tropical hard corals would be unable to build the huge coral reefs of tropical waters. Other invertebrates also harbour zooxanthellae, including the Giant Clams (Tridacna spp). The relationship is usually described as an "endosymbiosis" and, as I said above, in the case of reef-forming corals seems to be essential for the coral's well-being. The increased incidence of coral-reef "bleaching", where the coral colonies over large areas eject their zooxanthellae, is causing grave concern amongst reef ecologists.
There is also debate over how many species of zooxanthellae there are. Some say there are only a few species, others that each 'host' has its own species of zooxanthellae especially adapted for that 'host'. The zooxanthellae in nudibranchs are particularly interesting in that debate because they are zooxanthellae which have been removed from their initial host, a cnidarian, and transplanted to a second host, a mollusc, in a completely unrelated phylum. If a zooxanthella can survive in two quite distinct phyla then it would tend to support the argument that there are a few widely distributed species of zooxanthellae.
The other point to realise is that the ability to "house" and "farm" zooxanthellae in nudibranchs has evolved independently a number of times and not all "hosts" are equally proficient in doing so. Some species seem to do no more than temporarily retain zooxanthellae until the zooxanthellae die. In these cases they are probably only useful as colour camouflage as they help to colour the slug and so camouflage it on its similarly coloured cnidarian food. From such simple beginnings we can find all stages to the "ultimate" stage where zooxanthellae are successfully farmed and bred within the nudibranch's body and the nudibranch gains a significant proportion of its nutrient requirement from the zooxanthellae.
If you are thinking of sea slugs as a whole rather than just nudibranchs, then a few words about the sacoglossans. They have a remarkably similar story. Instead of removing whole plants from their food animals, like the aeolid nudibranchs, they are herbivores which remove the plastids intact from the plants they feed on. Again I guess this is a "symbiosis" though the plastids are hardly a potentially "free-living" partner.
Plastids are considered cell organelles. If you have time a fascinating book to look at is:
Margulis, L. (1981). Symbiosis in Cell Evolution. Life and its Environment on the early Earth. W.H.Freeman & Co: San Francisco.
In it she discusses her ideas about the evolution of life and how plastids and other cell organelles such as mitochondria possibly evolved as endosymbiotic organisms in early protozoa.
Hope this of some help,
November 9, 1998
From: J.E. Austin
3 November 1998
Dear Dr. Rudman:
First, I want to thank-you for responding to my last questions and praise the exciting images located on the slug site. I have spent all night reading these pages. As I mentioned, I am an undergrad from Florida State University in the US and am involved in a Research Experience for Undergraduates. On Wednesday, I will join a deep-water boat cruise off of the south coast of Bermuda. I hope to dip net Sargassum and find the Sargassum nudibranch, so I'll keep you posted on my findings. :)
I have been watching Hypselodoris nudibranchs that were gathered outside the Bermuda Biological Station. They sit in my flow-through tank and have laid ribbons with red eggs. Some of them have little purple nodules beneath the side flaps of tissue. They are beautiful and hungry, I think.
I've been thinking of algal symbiosis again as I'm reviewing literature on chemical communication between algal endosymbionts and cnidarians. The mechanisms for mutual responsiveness have yet to be enumerated. It seems, given the widespread independent evolution of symbiosis with algal cells, that we'd have more answers. But instead, I'd like to please ask some more questions. :)
Concerning: Kempf, S.C. 1984. Symbiosis between the zooxanthellae Symbiodinium microadriaticum and four species of nudibranchs. Biol. Bull. 166: 110-126.
1) Do you know whether further Hawaiian species of Melibe have been described and whether a phylogeny exists for the Genus?
2) With the intent of raising nudibranch eggs, can one just keep them in a bowl with an aerator? How would one know if they've hatched, what to feed them?
If I wanted to do a time series to capture structure over an age range, can I just preserve them in SW 10% Formalin, then slide mount samples?
3) Is there a general use stain for sea slug tissue? I'm leaning toward neurons.
4) The paper indicates that the aeolid Berghia major has an oral veil, and I know that Melibe species do as well; has this morphological attribute been derived independently in other nudibranchs? Have any further observations on Melibe pilosa feeding been made since the Kempf paper, such as a food source beyond crustaceans captured in the oral veil, i.e. some algal cell-containing prey?
Concerning: Rudman, W.B. 1991. Further studies on the taxonomy and biology of the octocoral-feeding genus Phyllodesmium, Ehrenberg, 1831 (Nudibranchia: Aeolidoidea). J. Moll. Stud. 57: 167-203.
1) As Phyllodesmium can autotomize ceras that contain antifeedant chemicals, have the actual compounds responsible been isolated or characterized? How would the anti-feedant nature be characterized- fed to fish?
2) The paper mentions that Chromodorids can concentrate anti-feedants from consumed sponges. Has this been assayed in Hypselodoris zebra? This orange, blue-striped dorid found in Bermuda feeds on the purple sponge Dysidea etherea. I have 6 specimens and would like to try something; any suggestions?
3) The paper makes reference to Rudman 1984, but I could not find the reference at the end and was wondering if you have that citation?
Thank-you for your time,
Austin, J.E., 1998 (Nov 9) Nudibranchs in Symbiosis with Zooxanthellae. [Message in] Sea Slug Forum. Australian Museum, Sydney. Available from http://www.seaslugforum.net/find/288
You obviously have been doing some thinking and reading! Sorry I can't answer all your questions at once, but in the hope that someone out there has something to say on any of your points I am posting your questions unanswered ... and will get to them in the next few days.
I hope you find some animals in the Sargassum. If you have the facilities to photograph anything you find, including Hypselodoris zebra, I would love to post them on the Slug Forum. If you can't provide scans that is no problem. Just send photos or slides to me at:
The Australian Museum,
6 College St
Sydney, NSW 2000
I will scan them here and return the photos to you.
October 11, 1998
From: Amanda Lindsey
Dear Dr Bill Rudman:
My name is Amanda Lindsey and I am currently a junior in high school and am enthralled with science. Science fair is coming around the corner and my topic and hypothesis will interest you.
Topic: Can invertebrates undergo photosynthesis?
Hypothesis: By injecting slugs with a chlorophyll solution, they shall meet all the necessary requirements to undergo photosynthesis and sustain life.
While searching on the internet for information on slugs and their anatomy, behavior, environment, etc., I stumbled upon your nudibranchs and their adaptation for ingesting a whole microscopic plant intact, allowing them to have a symbiotic relationship and practically never feed themselves. I wonder if this natural occurrence is similar to my "artificial" method of injection of chlorophyll. Please respond with your thoughts on my project. Also, any information that you have on slugs, land slugs that is, would be greatly appreciated. Some other web sites and research "helps" would also be received with warmth and gratitude. Thanks for your time.
Lindsey, A., 1998 (Oct 11) Your "solar-powered" sea slugs. [Message in] Sea Slug Forum. Australian Museum, Sydney. Available from http://www.seaslugforum.net/find/255
Your question inspired me to put some more examples of "solar-powered" sea slugs on the Slug Forum. Have a look at the examples I have mentioned above on this page.
I'm afraid your proposed experiment is doomed to failure, I would think. The process of photosynthesis involves a series of complicated chemical reactions. Chlorophyll, at least in green plants, is the molecule which traps light. For anything further to happen requires the combined activity of the other molecules and membranes that make up the chloroplast or plastid within the plant cell. Have a look at a textbook on photosynthesis to see the wonderful series of chemical reactions that must occur before the conversion of light to sugars and starches takes place.
That is why the sea slugs can only act like plants by either farming small plants in their bodies (as in the case of the nudibranchs with zooxanthellae), or by keeping the plastids, the little photosynthetic factories in plant cells, alive (in the case of the sacoglossans).
I'm sorry if this is not good news for your experiment but I hope it will give you some ideas.
October 11, 1998
Dear Dr. Bill Rudman:
I have been doing some research on evolutionary relationships between Pacific Northwest nudibranchs (USA). In the context of a neurobiology course at the Friday Harbor Labs (associated with the University of Washington), I have been examining immunohistochemical staining of brain regions as a method for generating phylogenetic characters, for perhaps correlating morphology with function. Would you please recommend a couple of good papers for getting a better handle on the basic taxonomic tree for Opisthobranchs? I have read some work by Schmaekel based on neural and reproductive morphology, but am wondering if there is corroborating ...
On a second note, I am currently studying chemical communication between sea anemones (Aiptasia pallida, specifically) and their endosymbiotic Symbiodinium algae. Some of the work is basically looking for what Host Factor (free amino acid cocktail, etc.) controls or stimulates photosynthate release from host to algae. With your work on nudibranchs with their own algae gardens, do you know if it is known what nudibranchs use to signal photosynthate release by the algal cells? Are the algae concentrated in special vacuoles? Then in your photograph (from the website) of Aeolidiella foulisi, what brown sea anemone species is pictured with it?
Thank-you so much for your time. I am an undergraduate in an NSF-funded research program at the Bermuda Biological Station working with Hank Trapido-Rosenthal.
p.s. I realize that it's a shot in the dark, but do you by chance know Terry Gosliner's email address (California Academy of Sciences)?
Bermuda Biological Station for Research,Inc.
St. George, GE01
firstname.lastname@example.orgAustin, J.E., 1998 (Oct 11) Zooxanthellae in nudibranchs. [Message in] Sea Slug Forum. Australian Museum, Sydney. Available from http://www.seaslugforum.net/find/257
You ask quite a few questions so I'll answer them in order.
1. A good source of information on mollusc phylogeny. The most comprehensive and up to date work would be:
Beesley, P.L., Ross, G.J.B. & Wells, A. (eds) 1998. Mollusca: The Southern Synthesis. Fauna of Australia Vol. 5. CSIRO Publishing: Melbourne. Part A pp. 1-563. Part B pp. 564-1234.
I must declare I was involved in writing parts of it but it is generally accepted as the best around at the moment.
2. Re photosynthate release. Have a look at:
Hoegh-Guldberg,I.O. & Hinde, R.,1986. Proceedings of the Royal Society of London, Series B, 228:493-509.
Hoegh-Guldberg,I.O., Hinde, R. & Muscatine,L., 1986. Proceedings of the Royal Society of London, Series B, 228:511-521.
3. Are the zooxanthellae in special vacuoles? They seem to be in modified ducts of the digestive gland in most, but not all, of the species. Sometimes they seem to be loose in the ducts and in other species they seem to be in subepithelial cells. I have described the position of the zooxanthellae in the various papers I cite at the top of this Solar-powered page.
4. The anemone that Aeolidiella foulisi is feeding on in the photo is Anthothoe albocincta.
5. And lastly, yes I can give you Terry Gosliner's email address. It is
Good luck with your research and please let us know of any interesting discoveries you make. ... Bill Rudman.
Rudman, W.B., 1998 (Oct 11). Comment on Zooxanthellae in nudibranchs by J.E. Austin. [Message in] Sea Slug Forum. Australian Museum, Sydney. Available from http://www.seaslugforum.net/find/257
Drones carry only one type of allele at each chromosomal position, because they are haploid (containing only one set of chromosomes from the mother). During the development of eggs within a queen, a diploid cell with 32 chromosomes divides to generate haploid cells called gametes with 16 chromosomes. The result is a haploid egg, with chromosomes having a new combination of alleles at the various loci. This process is called arrhenotokous parthenogenesis or simply arrhenotoky.
Because the male bee technically has only a mother, and no father, its genealogical tree is unusual. The first generation has one member (the male). One generation back also has one member (the mother). Two generations back are two members (the mother and father of the mother). Three generations back are three members. Four back are five members. That is, the numbers in each generation going back are 1, 1, 2, 3, 5, 8, ... – the Fibonacci sequence.
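The genealogy described above is easy to verify with a short sketch (the function and its name are illustrative, not from any source): a male has one parent (his mother), while a female has two.

```python
def ancestors(sex, generations):
    """Number of ancestors exactly `generations` back for one bee.

    A male (drone) has a single parent (his mother); a female has
    both a mother and a father, per the genealogy described above.
    """
    if generations == 0:
        return 1  # the bee itself
    if sex == "male":
        return ancestors("female", generations - 1)
    # female: count the mother's side plus the father's side
    return ancestors("female", generations - 1) + ancestors("male", generations - 1)

# Counting back from a single drone reproduces the Fibonacci sequence:
print([ancestors("male", g) for g in range(6)])  # [1, 1, 2, 3, 5, 8]
```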
Much debate and controversy exist in the scientific literature about the dynamics and apparent benefit of the combined forms of reproduction in honey bees and other social insects, known as the haplodiploid sex-determination system. The drones have two reproductive functions: Each drone grows from the queen's unfertilized haploid egg and produces some 10 million male sperm cells, each genetically identical to the egg. Drones also serve as a vehicle to mate with a new queen to fertilize her eggs. Female worker bees develop from fertilized eggs and are diploid in origin, which means that the sperm from a father provides a second set of 16 chromosomes for a total of 32: one set from each parent. Since all the sperm cells produced by a particular drone are genetically identical, full sisters are more closely related than full sisters of other animals where the sperm is not genetically identical.
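One way to see why identical sperm makes honey bee full sisters unusually close: half a daughter's genome is the paternal set, shared between sisters with certainty when every sperm cell is identical, and half is maternal, with each allele shared with probability 1/2. A hedged sketch of that expected-relatedness arithmetic (names and structure are mine, for illustration only):

```python
def full_sister_relatedness(identical_sperm):
    """Expected fraction of genome shared between full sisters.

    Half the genome comes from each parent; sisters share the paternal
    half with probability 1 when all sperm is genetically identical
    (as for a haploid drone father), otherwise with probability 1/2.
    """
    paternal_share = 1.0 if identical_sperm else 0.5
    maternal_share = 0.5  # each maternal allele is shared half the time
    return 0.5 * paternal_share + 0.5 * maternal_share

print(full_sister_relatedness(True))   # honey bee full sisters: 0.75
print(full_sister_relatedness(False))  # ordinary diploid full sisters: 0.5
```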
A laying worker bee exclusively produces totally unfertilized eggs, which develop into drones. As an exception to this rule, laying worker bees in some subspecies of honey bees may also produce diploid (and therefore female) fertile offspring in a process called thelytoky, in which the second set of chromosomes comes not from sperm, but from one of the three polar bodies during anaphase II of meiosis.
In honey bees, the genetics of offspring can best be controlled by artificially inseminating a queen with drones collected from a single hive, where the drones' mother is known. In the natural mating process, a queen mates with multiple drones, which may not come from the same hive. Therefore, batches of female offspring have fathers of a completely different genetic origin.
A drone is characterized by eyes that are twice the size of those of worker bees and queens, and a body size greater than that of worker bees, though usually smaller than the queen bee. His abdomen is stouter than that of workers or the queen. Although heavy-bodied, the drone must be able to fly fast enough to accompany the queen in flight.
An Apis cerana colony has about 200 drones during high summer peak time.
Drones die off or are ejected from the hive by the worker bees in late autumn, and do not reappear in the bee hive until late spring.
The drones' main function is to be ready to fertilize a receptive queen. Drones in a hive do not usually mate with a virgin queen of the same hive because they drift from hive to hive. Mating generally takes place in or near drone congregation areas. How these areas are selected is poorly understood, but they do exist. When a drone mates with a queen of the same hive, the resultant queen will have a spotty brood pattern (numerous empty cells on a brood frame) due to the removal of diploid drone larvae by nurse bees (i.e., a fertilized egg with two identical sex genes will develop into a drone instead of a worker).
Mating occurs in flight, which accounts for drones needing better vision, which is provided by their large eyes. Should a drone succeed in mating, he soon dies because the penis and associated abdominal tissues are ripped from the drone's body after sexual intercourse.
In areas with severe winters, all drones are driven out of the hive in the autumn. A colony begins to rear drones in spring and drone population reaches its peak coinciding with the swarm season in late spring and early summer. The life expectancy of a drone is about 90 days.
Although the drone is highly specialized to perform one function, mating and continuing the propagation of the hive, it is not completely without side benefits to the colony. All bees, when they sense the hive's temperature deviating from proper limits, either generate heat by shivering or exhaust heat by moving air with their wings, behaviours which drones share with worker bees.
Drones do not exhibit typical worker bee behaviours such as nectar and pollen gathering, nursing, or hive construction. While drones are unable to sting, if picked up, they may swing their tails in an attempt to frighten the disturber. In some species, drones buzz around intruders in an attempt to disorient them if the nest is disturbed.
Drones fly in abundance in the early afternoon and are known to congregate in drone congregation areas a good distance away from the hive.
Mating and the drone reproductive organ
The drone penis is designed to disperse a large quantity of seminal fluid and spermatozoa with great speed and force. The penis is held internally in the drone (an endophallus). During mating, the organ is everted (turned inside out), into the queen. The eversion of the penis is achieved by contracting abdominal muscles, which increases hemolymph pressure, effectively "inflating" the penis. Cornua claspers at the base of the penis help to grip the queen.
Mating between a single drone and the queen lasts less than 5 seconds, and it is often completed within 1–2 seconds. Mating occurs mid-flight, and 10–40 m above ground. Since the queen mates with 5-19 drones, and drones die after mating, each drone must make the most of his single shot. The drone makes first contact from above the queen, his thorax above her abdomen, straddling her. He then grasps her with all six legs, and everts the endophallus into her opened sting chamber. If the queen’s sting chamber is not fully opened, mating is unsuccessful, so some males that mount the queen do not transfer semen. Once the endophallus has been everted, the drone is paralyzed, flipping backwards as he ejaculates. The process of ejaculation is explosive—semen is blasted through the queen’s sting chamber and into the oviduct. The process is sometimes audible to the human ear, akin to a "popping" sound. The ejaculation is so powerful that it ruptures the endophallus, disconnecting the drone from the queen. The bulb of the endophallus is broken off inside of the queen during mating—so drones mate only once, and die shortly after. The leftover penis remaining in the queen’s vagina is referred to as the “mating sign”. The plug will not prevent the next drone from mating with the same queen, but may prevent semen from flowing out of the vagina.
Drone congregation areas
Mating between the drones and a virgin queen takes place away from the colony, in mid-air mating sites. These mating sites, called ‘congregation areas’, are specific locations, where drones wait for the arrival of virgin queens. A congregation area is typically 10–40 m above ground, and can have a diameter of 30–200 m. The boundaries of a congregation area are distinct; queens flying a few meters outside the boundaries are mostly ignored by the drones. Congregation areas are typically used year after year, with some spots showing little change over 12 years. Since drones are expelled from a colony during the winter, and new drones are raised each spring, inexperienced drones must find these congregation areas anew. This suggests some environmental cues define a congregation area, although the actual cues are unknown.
Congregation areas are typically located above open ground, away from trees or hills, where flight is somewhat protected from the wind (calm winds may be helpful during mating flight). At the same time, many congregation areas do not show such characteristics, such as those located above water or the forest canopy. Some studies have suggested that magnetic orientation could play a role, since drones older than 6 days contain cells in the abdomen that are rich in magnetite.
Congregation areas can be located by attaching a virgin queen (in a cage) to a balloon floating above ground. The person then moves around, taking note of where drones are attracted to the caged queen. Congregation areas are not found closer than 90 m from an apiary, and congregation areas located farther away from apiaries receive more drones. In a congregation area, drones accumulate from as many as 200 colonies, with estimates of up to 25,000 individual drones. This broad mixing of drones is how a virgin queen can ensure she will receive the genetic diversity needed for her colony. By flying to congregation areas further away from her colony, she further increases the probability of outbreeding.
A single drone visits multiple congregation areas during his lifetime, often taking multiple trips per afternoon. A drone’s mating flight averages 25–32 minutes, but can last up to 60 minutes, before he must return to the colony to refuel with honey. While at the site, the drones fly around passively, waiting for the arrival of a virgin. When the virgin queen arrives to the congregation area, the drones locate her by visual and olfactory cues. At this point, it is a race to mate with the virgin queen, to be genetically represented in the newly founded colony. The swarming drones, as they actively follow the queen, reportedly resemble a “drone comet”, dissolving and reforming as the drones chase the virgin queen. Drones greatly outnumber the quantity of virgin queens produced per season, so even with multiple mating by the queen, very few drones mate successfully (estimated at less than one in 1000). If needed, a virgin queen can embark on multiple ‘nuptial flights’, to be sure to receive enough semen from enough drones.
Varroa destructor, a parasitic mite, propagates within the brood cell of bees. The Varroa mite prefers drone brood as it guarantees a longer development period, which is important for its own propagation success. The number of Varroa mites can be kept in check by removing the capped drone brood and either freezing the brood comb or heating it.
Mean-flow and topographic control on surface eddy-mixing in the Southern Ocean
Surface cross-stream eddy diffusion in the Southern Ocean is estimated by monitoring the dispersion of particles numerically advected with observed satellite altimetry velocity fields. To gain statistical significance and accuracy in the resolution of the jets, more than 1.5 million particles are released every 6 months over 16 years and advected for one year. Results are analyzed in a dynamic-height coordinate system. Cross-stream eddy diffusion is highly inhomogeneous. Diffusivity is larger on the equatorward flank of the Antarctic Circumpolar Current (ACC) along eddy stagnation bands, where eddy displacement speed approaches zero. Along such bands, diffusivities reach typical values of 3500 m² s⁻¹. Local maxima of about 8–12 × 10³ m² s⁻¹ occur in the energetic western boundary current systems. In contrast, diffusivity is lower in the core of the Antarctic Circumpolar Current, with values of 1500–3000 m² s⁻¹, and continues to decrease south of the main ACC system. The distribution of eddy diffusion is set at three scales: at circumpolar scale, the mean flow reduces diffusion in the ACC and enhances it on the equatorward side of the current; at basin scale, diffusion is enhanced in the energetic western boundary current extension regions; at regional scale, diffusion is enhanced in the wake of large topographic obstacles. We find that the zonally averaged structure of eddy diffusion can be explained by theory which takes the mean flow into account; however, local values depend on eddy propagation, not simply described by a single wave speed, and on topography.
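A minimal sketch of the dispersion-based estimate described in the abstract, with a random walk standing in for advection by the altimetry velocity fields. All parameters (particle count, daily step size, time step) are illustrative assumptions, chosen so the recovered diffusivity lands near the quoted ACC-core range; the real calculation integrates observed velocities instead.

```python
import random

def eddy_diffusivity(n_particles=10_000, n_steps=100, dt=86_400.0,
                     step_std=22_768.0):
    """Estimate cross-stream eddy diffusivity K ~ <(y - y0)^2> / (2 t).

    Each particle takes `n_steps` daily cross-stream displacements drawn
    from a Gaussian with standard deviation `step_std` metres (a random
    walk used here purely as a stand-in for the real velocity fields).
    """
    random.seed(1)
    total_sq = 0.0
    for _ in range(n_particles):
        y = 0.0
        for _ in range(n_steps):
            y += random.gauss(0.0, step_std)
        total_sq += y * y
    t = n_steps * dt
    return total_sq / n_particles / (2.0 * t)  # m^2 s^-1

print(eddy_diffusivity())  # ~3000 m^2 s^-1 for these assumed parameters
```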
Document Type: Research Article
Publication date: 01 July 2011
Magnetic fields in barred galaxies - III. The southern peculiar galaxy NGC 2442
- E D P Sciences
- Publication Type:
- Journal Article
- Astronomy & Astrophysics, 2004, 421 (NA), pp. 571 - 581
- Issue Date:
Observations of the southern peculiar galaxy NGC 2442 with the Australia Telescope Compact Array in total and linearly polarized radio continuum at λ6 cm are presented and compared with previously obtained Hα data. The distribution of polarized emission, a signature of regular magnetic fields, reveals some physical phenomena which are unusual among spiral galaxies. We find evidence for tidal interaction and/or ram pressure from the intergalactic medium compressing the magnetic field at the northern and western edges of the galaxy. The radial component of the regular magnetic field in the northern arm is directed away from the centre of the galaxy, a finding which is in contrast to the majority of galaxies studied to date. The oval distortion caused by the interaction generates a sudden jump of the magnetic field pattern upstream of the inner northern spiral arm, similar to galaxies with long bars. An unusual "island" of strong regular magnetic field east of the galaxy is probably the brightest part of a magnetic arm similar to those seen in some normal spiral galaxies, which appear to be phase-shifted images of the preceding optical arm. The strong magnetic field of the "island" may indicate a past phase of active star formation when the preceding optical arm was exposed to ram pressure.
In some gley-soils (Haplaquepts) in the pleistocene part of the Netherlands high concentrations of arsenic are found. In these gley-soils iron and arsenic have accumulated, presumably by weathering and mobilization in higher grounds, e.g. ice pushed sands, and the subsequent transportation by groundwater (reducing conditions) to lower areas. By oxidation of ferrous iron and arsenite near the surface of the gley-soils a coprecipitate has been formed, also containing some manganese.
- Arsenic in Gley-Soils, Occurrence and Human Exposure
Ir. H. Hidding
Ir. K. van Malderen
- Springer Netherlands
If I have an infinitely large grid made of perfect squares, aligned in exactly the same way as normal squared paper, and I start at any arbitrary point on the corner of a square and draw concentric circles through 360 degrees, will each circle, no matter how large, only ever pass through exactly (not just to good precision, but exactly) 4 corners of squares? I appreciate I probably haven't explained it well, so I've attached a diagram. It takes ages to do though, so only the first few circles are shown; you should get the idea, I think.
If you're interested, I'm doing this to work out the RDF of a uniform lattice.
Non-urgent, non-exam-related geometry question
- Thread Starter
- 15-01-2010 02:03
- 15-01-2010 09:20
If your circle has radius 5, won't it pass through (3,4)?
- 15-01-2010 09:50
- 15-01-2010 10:08
Taking the centre of the circles as the origin, doesn't the circle passing through (1,3) pass eight corners? In fact, if you imagine the radius as the hypotenuse of a right-angled triangle, as tgodkin suggested, then any time the circle passes through a corner where the shorter sides of the triangle are different lengths it will pass through eight points.
E.g.: a circle drawn through (1,3) would also pass (3,1), (3,-1), (1,-3), (-1,-3), (-3,-1), (-3,1), (-1,3).
However, a circle passing through a point where one of the coordinates is 0 [e.g. (2,0)] or where the absolute value of the x and y coordinates is the same [e.g. (2,-2)] will only pass through four corners.
I might have misunderstood your question, but hopefully that helps.
Last edited by Meridian_Star; 15-01-2010 at 10:12.
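A quick brute-force check of the rule above (a throwaway helper, not from the thread): count the grid corners (x, y) whose squared distance from the centre equals the circle's squared radius.

```python
def lattice_points(r_squared):
    """Count integer grid corners (x, y) on the circle x^2 + y^2 = r_squared."""
    r = int(r_squared ** 0.5) + 1
    return sum(1 for x in range(-r, r + 1)
                 for y in range(-r, r + 1)
                 if x * x + y * y == r_squared)

print(lattice_points(8))   # circle through (2, 2): 4 corners
print(lattice_points(10))  # circle through (1, 3): 8 corners
print(lattice_points(25))  # radius 5: (5, 0)-type and (3, 4)-type corners -> 12
```

So circles through a corner of the form (a, 0) or (a, a) hit exactly 4 corners, circles through (a, b) with distinct nonzero |a| and |b| hit at least 8, and radii like 5 that admit both kinds hit more.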
Energy Generation
See also: Stellar nucleosynthesis
All main-sequence stars have a core region where energy is generated by nuclear fusion. The temperature and density of this core are at the levels necessary to sustain the energy production that will support the remainder of the star. A reduction of energy production would cause the overlaying mass to compress the core, resulting in an increase in the fusion rate because of higher temperature and pressure. Likewise an increase in energy production would cause the star to expand, lowering the pressure at the core. Thus the star forms a self-regulating system in hydrostatic equilibrium that is stable over the course of its main sequence lifetime.
Main-sequence stars employ two types of hydrogen fusion processes, and the rate of energy generation from each type depends on the temperature in the core region. Astronomers divide the main sequence into upper and lower parts, based on which of the two is the dominant fusion process. In the lower main sequence, energy is primarily generated as the result of the proton-proton chain, which directly fuses hydrogen together in a series of stages to produce helium. Stars in the upper main sequence have sufficiently high core temperatures to efficiently use the CNO cycle. (See the chart.) This process uses atoms of carbon, nitrogen and oxygen as intermediaries in the process of fusing hydrogen into helium.
At a stellar core temperature of 18 million kelvins, the PP process and CNO cycle are equally efficient, and each type generates half of the star's net luminosity. As this is the core temperature of a star with about 1.5 solar masses, the upper main sequence consists of stars above this mass. Thus, roughly speaking, stars of spectral class F or cooler belong to the lower main sequence, while class A stars or hotter are upper main-sequence stars. The transition in primary energy production from one form to the other spans a range difference of less than a single solar mass. In the Sun, a one solar mass star, only 1.5% of the energy is generated by the CNO cycle. By contrast, stars with 1.8 solar masses or above generate almost their entire energy output through the CNO cycle.
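The crossover at 18 million kelvins can be illustrated with textbook power-law approximations for the two rates near solar core temperatures, roughly T⁴ for the proton-proton chain and about T¹⁷ for the CNO cycle. The exact exponents vary with temperature, so treat this purely as a sketch; both curves are normalized to be equal at 18 MK, as stated above.

```python
def relative_rates(t_millions_k):
    """Energy-generation rates relative to their common value at 18 MK.

    Uses the rough scalings pp ~ T^4 and CNO ~ T^17; the exponents are
    illustrative textbook approximations, not fitted reaction rates.
    """
    t = t_millions_k / 18.0
    return t ** 4, t ** 17  # (pp chain, CNO cycle)

for t_mk in (10, 15, 18, 25):
    pp, cno = relative_rates(t_mk)
    dominant = "CNO" if cno > pp else "pp" if pp > cno else "equal"
    print(f"{t_mk} MK: pp={pp:.3f}  CNO={cno:.5f}  -> {dominant}")
```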
The observed upper limit for a main-sequence star is 120–200 solar masses. The theoretical explanation for this limit is that stars above this mass cannot radiate energy fast enough to remain stable, so any additional mass will be ejected in a series of pulsations until the star reaches a stable limit. The lower limit for sustained proton-proton nuclear fusion is about 0.08 solar masses. Below this threshold are sub-stellar objects that cannot sustain hydrogen fusion, known as brown dwarfs.
In what could prove to be a major breakthrough in quantum memory storage and information processing, German researchers have frozen the fastest thing in the universe: light. And they did so for a record-breaking one minute.
It sounds weird and it is. The reason for wanting to hold light in its place (aside from the sheer awesomeness of it) is to ensure that it retains its quantum coherence properties (i.e. its information state), thus making it possible to build light-based quantum memory. And the longer that light can be held, the better as far as computation is concerned. Accordingly, it could allow for more secure quantum communications over longer distances.
Needless to say, halting light is not easy — you can't just put it in the freezer. Light is electromagnetic radiation that moves at 300 million meters per second. Over the course of a one-minute span, it can travel about 11 million miles (18 million km), or roughly 20 round trips to the moon. So it's a rather wily and slippery medium, to say the least.
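The distances quoted are easy to verify using standard values for the speed of light and the mean Earth-Moon distance (this back-of-envelope check is mine, not from the article):

```python
c = 299_792_458        # speed of light, m/s
moon = 384_400e3       # mean Earth-Moon distance, m

d = c * 60             # metres light travels in one minute
print(f"{d / 1e9:.1f} million km")                 # ~18.0 million km
print(f"{d / 1609.344 / 1e6:.1f} million miles")   # ~11.2 million miles
print(f"{d / (2 * moon):.1f} Moon round trips")    # a bit over 20
```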
But light can be slowed down and even halted altogether. And in fact, researchers once kept it still for 16 seconds by using cold atoms.
For this particular experiment, researcher Georg Heinze and his team converted light coherence into atomic coherence. They did so by using a quantum interference effect that makes an opaque medium — in this case a crystal — transparent over a narrow range of the light spectrum, a process called electromagnetically induced transparency (EIT). The researchers shot a laser through this crystal, which sent its atoms into a quantum superposition of two states. A second beam then switched off the first laser — and, as a consequence, the transparency. The researchers thus collapsed the superposition and trapped the second laser beam inside.
And they proved the accomplishment by storing — and then successfully retrieving — information in the form of a 100-micrometer-long picture with three horizontal stripes on it.
“The result outperforms earlier demonstrations in atomic gases by about six orders of magnitude and offers exciting possibilities of long-storage-time quantum memories that are spatially multiplexed, i.e., can store different quantum bits as different pixels,” notes physicist Hugues de Riedmatten in an associated Physics Review article.
In future, the researchers will try to use different substances to increase the duration of information storage even further.
Read the entire study at Physical Review Letters: “Stopped Light and Image Storage by Electromagnetically Induced Transparency up to the Regime of One Minute.”
Source of this article: IO9.com
Published by: http://consciousnewsmedia.blogspot.com | <urn:uuid:3861a4df-bb4b-455f-bff5-38f220db86b0> | 3.6875 | 565 | News Article | Science & Tech. | 41.621743 | 95,513,003 |
Large mammals helped create park-like Europe
Danish researchers have demonstrated that large grazers and browsers of the past – wild cattle, bison and even straight-tusked elephants – created a mosaic of varied landscapes consisting of closed and semi-closed forests and parkland in prehistoric Europe. Their study appears in the current Proceedings of the National Academy of Sciences.
They found that beetles associated with the dung of large animals were much more common in Europe 110,000 to 132,000 years ago than in later prehistoric times.
“Large animals in high numbers were an integral part of nature in prehistoric times. ... The proportion and number of the wild large animals declined after the appearance of modern man. As a result of this, the countryside developed into predominantly dense forest that was first cleared when humans began to use the land for agriculture,” said Jens-Christian Svenning of Denmark’s Aarhus University.
“An important way to create more self-managing ecosystems with a high level of biodiversity is to make room for large herbivores in the European landscape. They would create and maintain a varied vegetation in temperate ecosystems, and thereby ensure the basis for a high level of biodiversity,” senior scientist Rasmus Ejrnæs said. au.dk
Bright lights, better space veggies
Exposing leafy vegetables grown during spaceflight to a few bright pulses of light daily could increase the amount of eye-protecting nutrients produced by the plants, according to a study by researchers at the University of Colorado at Boulder.
One of the concerns for astronauts during future extended spaceflights will be the onslaught of eye-damaging radiation they’ll be exposed to. But astronauts should be able to mitigate that by eating plants that contain carotenoids, especially zeaxanthin, which is known to promote eye health.
Zeaxanthin could be ingested as a supplement, but there is evidence that human bodies are better at absorbing carotenoids from whole foods, such as green leafy vegetables.
Using Arabidopsis – also called rockcress, a flowering plant related to cabbage and mustard – the team demonstrated that a few pulses of bright light on a daily basis spurred the plants to begin making zeaxanthin in preparation for an expected excess of sunlight. colorado.edu/news
More ice-free days in Arctic
The ice-free season across the Arctic is getting longer by five days per decade, according to new research. New analysis of satellite data shows the Arctic Ocean absorbing ever more of the sun’s energy in summer, leading to an ever later appearance of sea ice in the autumn. In some regions, autumn freeze-up is occurring up to 11 days per decade later than it used to.
The research, published in a forthcoming issue of the journal Geophysical Research Letters, has implications for tracking climate change, as well as having practical applications for shipping and the resource industry in the Arctic regions.
“The extent of sea ice in the Arctic has been declining for the last four decades,” said Julienne Stroeve, a member of the research team and a climatologist and professor of polar observation at Britain’s University College London, “and the timing of when melt begins and ends has a large impact on the amount of ice lost each summer. With the Arctic region becoming more accessible for long periods of time, there is a growing need for improved prediction of when the ice retreats and reforms in winter.” ucl.ac.uk
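The per-decade trends quoted above lend themselves to a quick linear extrapolation (a rough sketch only; the real trend need not stay linear):

```python
def extrapolate_days(rate_per_decade: float, years: float) -> float:
    """Linearly extrapolate a per-decade trend over a span of years."""
    return rate_per_decade * years / 10

# Ice-free season lengthening at 5 days per decade (Arctic-wide)
print(extrapolate_days(5, 40))    # 20.0 extra ice-free days over 40 years
# Regional autumn freeze-up delayed by up to 11 days per decade
print(extrapolate_days(11, 40))   # up to 44.0 days later over 40 years
```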
Green Chemistry. By Rebecca Gill. What is Green Chemistry?.
Green Chemistry is the design of processes and products that reduce or eliminate hazardous substances and chemicals. It is also intended to prevent pollution and environmental impact. Green Chemistry relies on its twelve principles to guide the search for ways to prevent pollution and environmental harm.
PYROCOOL Technologies developed a “fire extinguishing foam that is nontoxic and highly biodegradable”. They created a product that isn’t toxic to the environment or any living creature, and it breaks down after it has been used. Unlike other fire-extinguisher products, it doesn’t affect the ozone layer. It is a universal fire extinguisher and cooling agent, and the company is attentive to the environmental demands of the public it aims to serve.
The benefits to the company from the Green Chemistry initiative are that it can fill a gap in the market and give customers a product that doesn’t harm the environment, offering them not only what they want but what they need. The benefit to the environment is that businesses are now actively looking for better ways to protect it; environmental protection has become a priority, so companies want both to be seen to care and to actually help.
Warner, J. C. "12 Principles of Green Chemistry." American Chemical Society - The World's Largest Scientific Society. N.p., 1998. Web. 02 Aug. 2012. <http://portal.acs.org/portal/acs/corg/content?_nfpb=true>.
"The Twelve Principles of Green Chemistry." Green Chemistry Glossary. N.p., n.d. Web. 02 Aug. 2012. <http://greenchem.uoregon.edu/Pages/GreenChemGlossary.php>. | <urn:uuid:12a62614-6180-45e6-9348-f8e8fa5c9a1c> | 2.875 | 483 | Truncated | Science & Tech. | 55.270838 | 95,513,028 |
1) Sketch the graph of y = -k/x, where k is a positive constant.
2) On the same diagram, sketch the graph of y = a - (k/x).
I understand that part two is just the first graph translated 'a' units up; I just don't understand how to sketch the first part. Many thanks in advance.
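In case it helps, here is a quick numerical check of the shape (my own sketch, not from the thread): for positive k, y = -k/x is a rectangular hyperbola whose branches sit in the second and fourth quadrants, because y and x always have opposite signs. It is simply y = k/x reflected in the x-axis.

```python
k = 2.0  # stands in for any positive constant

def f(x):
    return -k / x

# x > 0 gives y < 0 (branch in the fourth quadrant)...
assert all(f(x) < 0 for x in (0.1, 1, 10))
# ...and x < 0 gives y > 0 (branch in the second quadrant).
assert all(f(x) > 0 for x in (-0.1, -1, -10))
# Both axes are asymptotes: |y| shrinks as |x| grows.
assert abs(f(1e9)) < 1e-8 and abs(f(-1e9)) < 1e-8
```

So for part one: draw the usual reciprocal curve and flip it in the x-axis; part two then shifts that whole picture up by a.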
This is a quiz on some basic astronomy facts and definitions.
What is the nearest star to the Sun?
In 1928, the International Astronomical Union divided the sky into how many official constellations?
What is the apparent path of the sun around the sky called?
Approximately how many degrees from the Sun can Venus appear?
Name the point where an object's orbit passes through the plane of Earth's orbit.
The moon phases are the result of the moon's rotation around the Earth, causing us to see different parts of the moon's surface lit by the sun.
What is the name for a band of the celestial sphere centered on the ecliptic and encircling the sky?
Summer Solstice is the point on the celestial sphere where the sun is at its most northerly point.
What is perihelion?
the darkening of the moon when it moves through the Earth's shadow
any planet visible in the sky just after sunset
the orbital point of the closest approach to the sun
the orbital point of greatest distance from the sun
How long can a total lunar eclipse last?
3 hours & 20 minutes
4 hours & 30 minutes
2 hours & 30 minutes
1 hour & 40 minutes
Epoch J2000.0, Equinox J2000.0
Right ascension: 07h 20m 03.254s
Declination: −08° 46′ 49.90″
Apparent magnitude (V): 18.3
Spectral type: M9 ± 1
Radial velocity (Rv): 83.1 km/s
Proper motion (μ): RA −40.3 ± 0.2 mas/yr, Dec −114.8 ± 0.4 mas/yr
Parallax (π): 166 ± 28 mas
Distance: approx. 20 ly (approx. 6 pc)
Absolute magnitude (MV): 19.4
Scholz's Star (WISE designation WISE 0720−0846 or fully WISE J072003.20−084651.2) is a dim binary stellar system about 17–23 light-years (5.1–7.2 parsecs) from the Sun in the southern constellation Monoceros near the galactic plane. It was discovered in 2013 by astronomer Ralf-Dieter Scholz. In 2015, Eric Mamajek and collaborators reported the system passed through the solar system's Oort cloud roughly 70,000 years ago, and dubbed it Scholz's Star.
The primary is a red dwarf with a stellar classification of M9 ± 1 and a mass of 86 ± 2 Jupiter masses. The secondary is probably a T5 brown dwarf of 65 ± 12 Jupiter masses. Together the system has about 0.15 solar masses. The pair orbit each other at a distance of about 0.8 astronomical units (120,000,000 kilometers; 74,000,000 miles) with a period of roughly 4 years. The system has an apparent magnitude of 18.3, and is estimated to be between 3 and 10 billion years old. With a parallax of 166 mas (0.166 arcseconds), about 80 star systems are known to be closer to the Sun. It is a late discovery, as far as nearby stars go, because past search efforts concentrated on high-proper-motion objects.
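The quoted distance follows directly from the parallax via d(pc) = 1/π(arcsec). A quick check (my own arithmetic, using the parallax and its uncertainty given above):

```python
parallax_arcsec = 0.166            # 166 mas
pc_to_ly = 3.26156                 # 1 parsec in light-years

d_parsec = 1 / parallax_arcsec     # ~6.02 pc
d_lightyears = d_parsec * pc_to_ly # ~19.6 ly

# The ±28 mas uncertainty brackets the 17-23 ly range quoted earlier:
d_min = 1 / (0.166 + 0.028) * pc_to_ly   # ~16.8 ly
d_max = 1 / (0.166 - 0.028) * pc_to_ly   # ~23.6 ly

print(round(d_parsec, 2), round(d_lightyears, 1))
```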
Solar System flyby
Estimates indicate that the WISE 0720−0846 system passed about 52,000 astronomical units (0.25 parsecs; 0.82 light-years) from the Sun about 70,000 years ago. 98% of mathematical simulations of the star system's trajectory indicated it passed through the Solar System's Oort cloud, or within 120,000 AU (0.58 pc; 1.9 ly) of the Sun. Comets perturbed from the Oort cloud would require roughly 2 million years to reach the inner Solar System. At closest approach the system would have had an apparent magnitude of about 11.4, and would have been best viewed from high latitudes in the northern hemisphere, mostly in autumn. A star is expected to pass through the Oort Cloud every 100,000 years or so. An approach as close or closer than 52,000 AU is expected to occur about every 9 million years. In about 1.4 million years, Gliese 710 will pass somewhere between 8,800 and 13,700 AU from the Sun.
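The magnitude at closest approach can be reproduced from the distance modulus m = M + 5·log10(d / 10 pc), using the absolute magnitude MV = 19.4 from the infobox (a sketch of the standard formula, not the authors' own calculation):

```python
import math

M_V = 19.4                 # absolute magnitude of the system
d_au = 52_000              # closest-approach distance in AU
d_pc = d_au / 206_265      # 1 pc = 206,265 AU  ->  ~0.252 pc

m = M_V + 5 * math.log10(d_pc / 10)   # distance modulus
print(round(m, 1))  # ~11.4, matching the value quoted above
```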
The star was first discovered to be a nearby one by astronomer Ralf-Dieter Scholz, announced on arXiv in November 2013. Given the importance of the system having passed so close to the solar system in prehistorical times, Eric Mamajek and collaborators dubbed the system Scholz's star in their paper discussing the star's velocity and past trajectory.
- "2MASS J07200325-0846499". SIMBAD. Centre de données astronomiques de Strasbourg. Retrieved 2015-02-18.
- Mamajek, Eric E.; Barenfeld, Scott A.; Ivanov, Valentin D. (2015). "The Closest Known Flyby of a Star to the Solar System". The Astrophysical Journal. 800 (1). arXiv: . Bibcode:2015ApJ...800L..17M. doi:10.1088/2041-8205/800/1/L17.
- Burgasser, Adam J.; et al. (2015). "WISE J072003.20-084651.2: an Old and Active M9.5 + T5 Spectral Binary 6 pc from the Sun". The Astronomical Journal. 149 (3). 104. arXiv: . Bibcode:2015AJ....149..104B. doi:10.1088/0004-6256/149/3/104.
- Mamajek, Eric. "FAQ". Retrieved 2015-02-18.
- "Featured Research: Closest known flyby of star to our solar system: Dim star passed through Oort Cloud 70,000 years ago". Science Daily. 17 February 2015. Retrieved 2015-02-21.
- Burgasser, Adam J.; et al. (2015). "Radio Emission and Orbital Motion from the Close-encounter Star–Brown Dwarf Binary WISE J072003.20–084651.2". The Astronomical Journal. 150 (6). 180. arXiv: . Bibcode:2015AJ....150..180B. doi:10.1088/0004-6256/150/6/180.
- "THE ONE HUNDRED NEAREST STAR SYSTEMS". RECONS (Research Consortium On Nearby Stars). Retrieved 2015-02-18.
- de la Fuente Marcos, Carlos; de la Fuente Marcos, Raúl; Aarseth, Sverre J. (2018). "Where the Solar system meets the solar neighbourhood: patterns in the distribution of radiants of observed hyperbolic minor bodies". Monthly Notices of the Royal Astronomical Society: Letters. 476 (1): L1–L5. doi:10.1093/mnrasl/sly019.
- Warren, Matt (22 March 2018). "Prehistoric visit from nearby star disturbed comets in our solar system". Science. Retrieved from http://www.sciencemag.org/news/2018/03/prehistoric-visit-nearby-star-disturbed-comets-our-solar-system.
- Dvorsky, George (21 March 2018). "A Visiting Star Jostled Our Solar System 70,000 Years Ago". Gizmodo. Retrieved from https://gizmodo.com/a-visiting-star-jostled-our-solar-system-70-000-years-a-1823954398.
Activation of Silent Transposable Elements
It is well known among maize geneticists that agents that cause chromosome breakage can activate quiescent transposable elements. However, other than temporarily relieving position effect, it is difficult to understand how these events can lead directly to activation. One possibility is that chromosome breakage can initiate a process in the cell resulting in a higher rate of spontaneous mutation. Such a system could be analogous to the SOS response of Escherichia coli in which an error-prone repair system is induced. Chemical mutagens that cause little chromosome breakage but add bulky adducts to the DNA can induce the SOS response. In seed homozygous for a1-m2(8004), wx-m8, no active Spm, that had been treated with ethyl methanesulfonate, we observed activation of Spm at the rate of 1.1 × 10−4. The spontaneous rate of activation in this material was 1.2 × 10−5. Most of the activation events occurred as single kernels. This result contrasts with sectors covering at least one-eighth of the ear that would have been expected if activation had occurred as a direct result of mutagenesis in the mature kernel. The late timing of these events suggests that the activation, in most instances, may not be the direct result of chemical mutagenesis.
KeywordsTransposable Element Position Effect Chromosome Break Chromosome Breakage Ethyl Methanesulfonate
Unable to display preview. Download preview PDF.
- 4.Bianchi, A., F. Salamini, and R. Parlavecchio (1969) On the origin of controlling elements in maize. Genetica Agraria 22:335–344.Google Scholar
- 11.Coe, E.H., and M.G. Neuffer (1978) Embryo cells and their destinies in the corn plant. In The Clonal Basis of Development, S. Subtelny and I. Sussex, eds. Academic Press, New York, pp. 113–129.Google Scholar
- 14.Elespuru, R.K. (1984) Induction of bacteriophage lambda by DNA-interacting chemicals. Chem. Mutagens 9:213–231.Google Scholar
- 19.McClintock, B. (1951) Mutable loci in maize. Carnegie Institution Washington Yearbook 50:174–181.Google Scholar
- 20.McClintock, B. (1965) The control of gene action in maize. Brookhaven Symposia on Quantitative Biology 18:162–182.Google Scholar
- 21.McClintock, B. (1967) Genetic systems regulating gene expression during development. Develop. Biol. Suppl. 1:84–112.Google Scholar
- 22.McClintock, B. (1968) The states of a gene locus in maize. Carnegie Institution Washington Yearbook 66:20–28.Google Scholar
- 27.Neuffer, M.G., and E.H. Coe (1977) Paraffin oil technique for treating corn pollen with chemical mutagens. Maydica 22:21–28.Google Scholar
- 36.Spofford, J.B. (1976) Position-effect variegation in Drosophila. In The Genetics and Biology of Drosophila, M. Ashburner and E. Novitski, eds. Academic Press, New York, 1c:955–1018.Google Scholar | <urn:uuid:837a32c3-59fd-41d2-ada1-0455baef18ae> | 2.71875 | 746 | Academic Writing | Science & Tech. | 59.178707 | 95,513,128 |
Imagining the future of humanity, our planet, and everything we hold dear in our corner of the cold dark Universe is typically the domain of science fiction, and we’re usually only worried about the next few hundred years at best.
But what about thousands and thousands of years from now? What will happen then? It turns out that thanks to various tools from science, a few things in the distant future can be predicted with surprising accuracy.
Based on what we know about life, the Universe and everything, some scientific predictions in fields like astrophysics and evolution can actually reach hundreds of thousands of years ahead of our time.
You can find several riveting far future timelines on Wikipedia, including one that draws heavily on sci-fi and popular fiction.
But let’s have a look at what science says will happen in the nearest of these far futures – roughly 10,000 years from now.
For starters, at that point the East Antarctic ice sheet may be no more. It's the largest continuous ice sheet on our planet, and modelling predicts that if the Wilkes subglacial basin collapses, it will take between 5,000 and 10,000 years for that gigantic ice block to dissipate into the sea, raising water levels by 3-4 metres (10-13 feet).
There’s a chance we won’t have any humans left around to have to deal with all that rising seawater, though.
According to one estimate called the Doomsday argument, as proposed by Australian theoretical physicist Brandon Carter, there’s a 95 percent chance that humans will have died out in 10,000 years.
That argument has been heavily debated, so we’re not entirely sure if people will be around or not. But if they are, in 10,000 years there will be no regional genetic variation between humans. That’s not to say people will all look the same, but whatever genetic differences there are – such as blue eyes versus brown – will be evenly distributed across the planet.
And those evenly mixed people, with vastly different shorelines from ones we know today, and with a Gregorian calendar 10 days out of sync with the Sun’s position, may also be treated to a spectacular stellar explosion.
It is predicted that within the next 10,000 years the red supergiant star Antares is expected to burst into a supernova so bright it will be visible in broad daylight.
(Antares could actually burst at any moment, so we’re kinda hoping it will happen sooner rather than later, so we get to see it in our skies instead of our hypothetical and possibly extinct descendants.)
Oh, and by the way, if we stretch that time window to just 13,000 years, axial precession will have swung Earth's spin axis halfway through its roughly 26,000-year cycle, reversing the timing of the seasons relative to today's calendar. Now that would be confusing to live through.
But regardless of whether humans make it to the 10,000 year mark or not, the space probes Pioneer 10 and 11, Voyager 1 and 2, and New Horizons are likely to still be cruising out there among the stars not just for thousands, but millions of years.
In fact, if we squint and look just a little further into the future, 296,000 years from now Voyager 2 will actually pass within spitting distance, in stellar terms, of Sirius, the brightest star in our sky.
All of these predictions only deal with the closest of time points in what’s known as the far future, and we’re already feeling pretty dizzy.
But if you want to probe these timelines even further, you can head over to the full Wikipedia timeline here. A fun spoiler before you go: it will take 1 million years until Neil Armstrong’s footprint on the Moon has eroded. | <urn:uuid:1ca50d57-d255-4b36-ae0f-7e9d69eb76c7> | 3.71875 | 781 | Personal Blog | Science & Tech. | 52.820315 | 95,513,142 |
Design call for 'solar sentinel' mission
UK scientists and engineers will play a leading role in developing a satellite that can warn if Earth is about to be hit by damaging solar storms.
The European Space Agency has requested studies be undertaken to design the mission that would launch in the 2020s.
Explosive eruptions from the Sun can lead to widespread disruption on our planet - degrading communications, even knocking over power grids.
The satellite's observations would increase the time available to prepare.
Esa has a working name for the new mission - "Lagrange", which reflects the position the satellite would take up in space.
The plan is to go to a gravitational "sweetspot" just behind the Earth in its orbit around the Sun known as "Lagrangian Point 5".
Spacecraft that are sited there do not have to use so much fuel to maintain station - but there is an even bigger operational rationale to use this location: it is the perfect spot to see that part of the Sun which is about to rotate into view of the Earth.
"So, not only do you get a preview of the active regions and how complicated they are, but if the Sun throws something out you also get to track it from the side," explained British solar physicist Prof Richard Harrison.
"Imagine a fist coming directly at your face - it's difficult to say how far away it is; but if you see that fist from the side, it's much easier," he told BBC News.
Esa signed four so-called Phase AB1 contracts on Friday at its mission control centre in Darmstadt, Germany.
These include two parallel industrial studies - to be led by Airbus UK and OHB System of Germany - to spec the spacecraft bus, or chassis, and the process for integrating all the satellite's instruments.
The aerospace companies will also work out how the entire mission would be managed, from launch to the end of service life.
The actual design of the onboard instruments is the subject of the other two contracts. Both of these will be directed by British-led consortia.
RAL (Rutherford Appleton Laboratory) Space will assess the requirements of the mission's "remote sensing package" - that is, the instruments that discern what the Sun is doing by looking at it.
The UK's Mullard Space Science Laboratory (MSSL) will scope the "in-situ package" - those instruments that investigate the Sun's activity by directly sensing emitted particles and magnetic fields.
Although led from Britain, these efforts will of course draw on talents from across European member states.
The different Lagrangian Points
- These are the sweetspots in the Sun-Earth-Moon system
- They are places where gravitational forces balance out
- Satellites at these locations use less fuel to maintain station
- L5 is at a 60-degree offset, and follows Earth in its orbit
- A complementary US mission would very likely go to L1
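Because L5 trails Earth by 60 degrees on the same 1 AU orbit, the Sun, Earth and L5 form an equilateral triangle, so a spacecraft there is about 1 AU from Earth as well as from the Sun. A two-line check (illustrative only, circular-orbit approximation):

```python
import math

# Sun at the origin; Earth and L5 on a circular 1 AU orbit.
earth = (1.0, 0.0)
theta = math.radians(-60)                 # L5 trails Earth by 60 degrees
l5 = (math.cos(theta), math.sin(theta))

dist = math.dist(earth, l5)               # Earth-to-L5 separation
print(round(dist, 6))  # 1.0 AU: the triangle is equilateral
```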
Solar storms are a common occurrence. Our star will sometimes despatch big bursts of shortwave and longwave radiation, superfast particles and colossal volumes of charged gas (plasma) in our direction. This material is also threaded with strong magnetic fields.
When these emissions encounter Earth, they can kick off a number of effects in modern infrastructure, from glitching electronics in aircraft avionics and in orbiting spacecraft to increasing the interference heard on radio broadcasts, such as those from the BBC.
Numerous studies have warned of the possible consequences of a major solar storm impacting Earth.
Just last year, a government report said the UK economy would lose £1bn for every day the GPS satellite-navigation service was unavailable.
"What we need is a 'solar sentinel', watching the Sun to tell us what is going to happen in advance," said Dr Ralph Cordey from Airbus UK.
"This is an area where the UK's expertise is well established. It's also the case that the impacts of 'space weather' are regarded as a priority in the UK with the issue recognised in the register of civil hazards, along with pandemic flu, severe flooding and volcanic eruptions."
The Lagrange mission concept is being overseen by the Space Situational Awareness programme at Esa to which the UK committed €22m, over 4 years, at the last gathering of Europe's space ministers in December 2016.
When the ministers next meet, in December 2019, they will have the results of the new studies and should hopefully be in a position then to sanction the mission's full development.
One key instrument that will have to be carried is a coronagraph.
This is a device that blocks the full glare of the Sun's disc so that the beginnings of an eruption are more easily seen.
At the moment, space weather forecasters are relying on a coronagraph on a 20-year-old spacecraft called Soho.
"A coronagraph gives us the first warning that something really is happening," said Prof Harrison, who is the chief scientist at RAL Space.
"A coronal mass ejection is a million times weaker in intensity than the Sun itself. It's a contrast problem: if you didn't block off the Sun, you wouldn't see it."
It is likely the Americans will launch a similar mission in the coming years that will sit directly in front of Earth in line with the Sun. Taking the two perspectives together will give solar storm forecasters the best assessment of potential impacts.
You can follow me on Twitter: @BBCAmos
The team used nano-particles of gold instead of bulk gold. The catalyst structure looks as if someone had pulverized a piece of gold and spread the tiny nano-sized pieces over an aluminum oxide support. The properties of the nano-particles are very different from those of bulk gold. Only when the gold atoms are confined to the size of just a few millionth of a millimetre they start showing the desired catalytic behaviour.
Mechanism for the catalytic reaction: 2 CO + O2 → 2 CO2
Scientists already knew that gold nano-particles in this kind of setup catalyse the reaction of CO with oxygen (O2) into CO2. What they did not know was how the oxygen is activated on the catalyst. To find out, they set up a cell where they could carry out the reaction and, in situ, perform an X-ray experiment with the ESRF beam.
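As a quick sanity check on the stoichiometry of that reaction (illustrative only, not from the article), the equation balances with two carbon and four oxygen atoms on each side:

```python
from collections import Counter

def atoms(terms):
    """Total atom counts for a list of (coefficient, composition) pairs."""
    total = Counter()
    for coeff, comp in terms:
        for element, n in comp.items():
            total[element] += coeff * n
    return total

reactants = atoms([(2, {"C": 1, "O": 1}), (1, {"O": 2})])  # 2 CO + O2
products = atoms([(2, {"C": 1, "O": 2})])                   # 2 CO2

assert reactants == products  # mass balance holds
print(dict(products))  # {'C': 2, 'O': 4}
```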
The researchers first applied a flow of oxygen over the gold nano-particles and observed how the oxygen becomes chemically active when bound on the gold nano-particles using high-energy resolution X-ray absorption spectroscopy. While constantly monitoring the samples, they switched to a flow of toxic carbon monoxide and found that the oxygen bound to the gold reacted with the carbon monoxide to form carbon dioxide. Without the gold nano-particles, this reaction does not take place. “We knew beforehand that the small gold particles were active, but not how they did the reaction. The nice thing is that we have been able to observe, for the first time, the steps and path of the reaction. The results followed almost perfectly our original hypotheses. Isn’t it beautiful that the most inert bulk metal is so reactive when finely dispersed?” comments Jeroen A. van Bokhoven, the corresponding author of the paper.
The possible applications of this research could involve pollution control such as air cleaning, or purification of hydrogen streams used for fuel cells. “Regarding the technique we used, the exceptionally high structural detail that can be obtained with it could be used to study other catalytic systems, with the aim of making them more stable and perform better”, says van Bokhoven.
One of the great advantages of this experiment is the nature of catalysis. The fact that once the material has reacted, it goes back to its initial state, has made the experiments easier. Nevertheless, in technological terms, it has been very demanding: “We combined the unique properties of our beamline with an interesting and strongly debated question in catalysis. Some extra time was needed to adapt the beamline, to the special requirements of this experiment,” explains Pieter Glatzel, scientist in charge of ID26 beamline, where the experiments were carried out. At the end, it only took the team a bit over half a year to prepare and carry out the experiments and publish the paper. “This is a very nice recognition of our work,” says Glatzel.
The article appears in this week’s international edition of Angewandte Chemie with a very high impact among the chemistry audience. In addition to this, the paper has been attributed the status of Very Important Paper, which is given to only 5% of all the publications in this journal.
Montserrat Capellas | alfa
Computer model predicts how fracturing metallic glass releases energy at the atomic level
20.07.2018 | American Institute of Physics
What happens when we heat the atomic lattice of a magnet all of a sudden?
18.07.2018 | Forschungsverbund Berlin
A new manufacturing technique uses a process similar to newspaper printing to form smoother and more flexible metals for making ultrafast electronic devices.
The low-cost process, developed by Purdue University researchers, combines tools already used in industry for manufacturing metals on a large scale, but uses...
For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth.
To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength...
For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications.
Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar...
Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction.
A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in the synthesis of new chemical...
Optical spectroscopy allows researchers to investigate the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches to coherent two-dimensional spectroscopy.
"Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy....
13.07.2018 | Event News
12.07.2018 | Event News
03.07.2018 | Event News
20.07.2018 | Power and Electrical Engineering
20.07.2018 | Information Technology
20.07.2018 | Materials Sciences
Engineers at the University of Washington have developed a new HD video streaming method that doesn't need to be plugged in. Their prototype skips the power-hungry components and has something else, like a smartphone, process the video instead.
For the first time, Lawrence Livermore National Laboratory (LLNL) has issued state-by-state energy and water flow charts in one location so that analysts and policymakers can find all the information they need in one place.
An international team led by Argonne National Laboratory makes breakthrough in understanding the chemistry of the microscopically thin layer that forms between the liquid electrolyte and solid electrode in lithium-ion batteries. The results are being used in improving the layer and better predicting battery lifetime.
A team of researchers from Lawrence Livermore National Laboratory (LLNL), Princeton University, Johns Hopkins University and the University of Rochester has provided the first experimentally based mass-radius relationship for a hypothetical pure iron planet at super-Earth core conditions. This discovery can be used to evaluate plausible compositional space for large, rocky exoplanets, forming the basis of future planetary interior models, which in turn can be used to more accurately interpret observation data from the Kepler space mission and aid in identifying planets suitable for habitability.
Magnesium ions move very fast to enable a new class of battery materials.
Research appearing today in Nature Communications finds useful new information-handling potential in samples of tin(II) sulfide (SnS), a candidate "valleytronics" transistor material that might one day enable chipmakers to pack more computing power onto microchips.
SLAC and its collaborators are transforming the way new materials are discovered. In a new report, they combine artificial intelligence and accelerated experiments to discover potential alternatives to steel in a fraction of the time.
Scientists directly see how the atoms in a magnesium-based battery fit into the structure of electrodes.
Researchers at Pacific Northwest National Laboratory have developed and successfully tested a novel process - called Friction Stir Dovetailing - that joins thick plates of aluminum to steel. The new process will be used to make lighter-weight military vehicles that are more agile and fuel efficient.
Converting laser light into nuclear vibrations is key to switching a material's properties on and off for future electronics.
Electronics miniaturization has put high-powered computing capability into the hands of ordinary people, but the ongoing downsizing of integrated circuits is challenging engineers to come up with new ways to thwart component overheating.
Scientists demonstrated that powerful acids heal certain structural defects in synthetic films.
It may sound like a futuristic device out of a spy novel, a computer the size of a pinhead, but according to new research from the University of New Hampshire, it might be a reality sooner than once thought. Researchers have discovered that using an easily made combination of materials might be the way to offer a more stable environment for smaller and safer data storage, ultimately leading to miniature computers.
A multi-institutional project to understand one of the major targets of human drug design has produced new insights into how structural communication works in a cell component called a G protein-coupled receptor (GPCR), basically a "doorbell" structure that alerts the cell to important molecules nearby.
CMI Expands Research in Tech Metals as Rapid Growth in Electric Vehicles Drives Demand for Lithium, Cobalt
As increasing consumer interest in electric vehicles drives the demand for supplies of lithium and cobalt (ingredients in lithium-ion batteries), the Critical Materials Institute will begin new efforts this July to maximize the efficient processing, use, and recycling of those elements.
Novel engineered polymers assemble buckyballs into columns using a conventional coating process.
A biologically inspired membrane intended to cleanse carbon dioxide almost completely from the smoke of coal-fired power plants has been developed by scientists at Sandia National Laboratories and the University of New Mexico.
Lasting just a few hundred billionths of a billionth of a second, these bursts offer new tool to study chemistry and magnetism.
Scientists have decoded faint distortions in the patterns of the universe's earliest light to map huge tubelike structures invisible to our eyes - known as filaments - that serve as superhighways for delivering matter to dense hubs such as galaxy clusters.
When power generators transfer electricity to homes, businesses and the power grid, they lose almost 10 percent of the generated power. To address this problem, scientists are researching new diamond semiconductor circuits to make power conversion systems more efficient. Researchers in Japan successfully fabricated a key circuit in power conversion systems using hydrogenated diamond. These circuits can be used in diamond-based electronic devices that are smaller, lighter and more efficient than silicon-based devices. They report their findings in this week's Applied Physics Letters.
This week, the Axion Dark Matter Experiment (ADMX) unveiled a new result, published in Physical Review Letters, that places it in a category of one: It is the world's first and only experiment to have achieved the necessary sensitivity to "hear" the telltale signs of dark matter axions. This technological breakthrough is the result of more than 30 years of research and development, with the latest piece of the puzzle coming in the form of a quantum-enabled device that allows ADMX to listen for axions more closely than any experiment ever built.
UPTON, NY--Scientists studying plant biochemistry at the U.S. Department of Energy's Brookhaven National Laboratory have discovered new details about biomolecules that put the brakes on oil production. The findings suggest that disabling these biomolecular brakes could push oil production into high gear--a possible pathway toward generating abundant biofuels and plant-derived bioproducts.
An international team of researchers is laying the foundation for more widespread use of lithium metal batteries. They developed a method to mitigate the formation of dendrites - crystal-like masses - that damage the batteries' performance.
The mirror-like physics of the superconductor-insulator transition operates exactly as expected. Scientists know this to be true following the observation of a remarkable phenomenon, the existence of which was predicted three decades ago but that had eluded experimental detection until now. The observation confirms that two fundamental quantum states, superconductivity and superinsulation, both arise in mirror-like images of each other.
A group of scientists working on the MiniBooNE experiment at the Department of Energy's Fermilab has reported a breakthrough: they were able to identify muon neutrinos of precisely known energy hitting the atoms at the heart of their particle detector. The result eliminates a major source of uncertainty when testing theoretical models of neutrino interactions and neutrino oscillations.
A novel approach for finding ring species: look for barriers rather than rings
© Irwin; licensee BioMed Central Ltd. 2012
Received: 23 February 2012
Accepted: 12 March 2012
Published: 12 March 2012
Ring species, in which two different forms coexist in one region while being connected by a long chain of interbreeding populations encircling a geographic barrier, provide clear demonstrations of the evolution of one species into two. Known ring species are rare, but now Monahan et al. propose an intriguing new approach to discovering them: focus first on geography to find potential barriers.
See research article http://www.biomedcentral.com/1741-7007/10/20
Until now, our knowledge of the diversity of ring species has arisen primarily from the field of taxonomy, with experts on the taxonomy of particular groups occasionally noticing a pattern of gradual variation between quite divergent forms. This somewhat haphazard approach has led to a variety of ring species being proposed [2, 4], only some of which have held up to further scrutiny [4, 5]. Only two well-studied cases are generally accepted as solid examples of ring species: these are the Ensatina eschscholtzii salamander complex in California and the Phylloscopus trochiloides greenish warbler complex in Asia . One challenge in relying on taxonomists to discover ring species is that the naming rules of taxonomy generally conceal their existence: taxonomists have to decide whether a group of specimens is two species or one species; the taxonomic naming system does not lend itself toward describing gradients between two species .
The study by Monahan et al. proposes a novel approach to the discovery of ring species, focusing on geography rather than taxonomy as the starting point. They ask an intriguing question: where in the world are there barriers that might promote ring speciation? A topographic model, based on slope of the landscape, is used to identify potential geographic barriers worldwide. In the model, barriers are regions that have either more or less slope than the regions around them. The characteristics of the potential barriers, such as size and shape, are then compared with those of known barriers in two ring species (E. eschscholtzii salamanders and P. trochiloides greenish warblers) and two groups that have been proposed as ring species and share many of their characteristics (Acacia karoo trees and Larus gulls). Known barriers are similar to only a small proportion of all potential barriers, suggesting that ring species barriers have common characteristics. The authors also show maps of a small subset of the potential barriers that are similar to the real ring species barriers, suggesting that these may be good locations to look for ring species.
Though the current model is based solely on slope, other geographic and environmental variables could eventually be incorporated to enhance the effectiveness of the model in identifying some barriers in species distributions. In particular, it may be advantageous to introduce elevation as a geographic variable in the model. The current use of slope results in two sorts of 'barriers' being identified: 1) areas of high slope, such as mountain ranges, escarpments, or ocean trenches, surrounded by areas of low slope such as plains, plateaus, or ocean basins; and 2) areas of low slope surrounded by those of high slope. As a result, some of the barriers identified by this model are peculiar: for example, in the first case, an area of flat land bordered on one side by a steep climb toward higher elevations and on the other side by a steep drop toward lower elevations; in the second case, a steep escarpment between a high plateau and a low plain. In both of these, it seems unlikely that a species could live in all areas encircling the 'barrier' without also inhabiting the 'barrier' itself. Rather, it seems that the optimal topographic model would use some combination of both slope and elevation to identify barriers. Elevation is also likely to work better than slope in describing the Arctic Ocean barrier in the case of the Larus gull ring; the slope-based model results in three separate barriers corresponding to deep ocean basins, which the authors then joined as a composite barrier (see , their Figure 2D). It seems that slope on the deep ocean floor is of little relevance to describing the distribution of a bird species, whereas elevation (for example, above or below sea level) is of substantial importance.
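The slope-based criterion discussed above can be illustrated with a toy sketch (a hypothetical elevation grid and threshold chosen for illustration, not the actual model of Monahan et al.): cells whose local slope is much higher than the grid average are flagged as potential barrier cells.

```python
# Toy illustration of a slope-based barrier criterion (NOT Monahan et al.'s
# actual model): flag cells whose slope stands out against the grid average.

def slope(elev, i, j):
    """Maximum absolute elevation difference to the four neighbours."""
    rows, cols = len(elev), len(elev[0])
    diffs = []
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        ni, nj = i + di, j + dj
        if 0 <= ni < rows and 0 <= nj < cols:
            diffs.append(abs(elev[ni][nj] - elev[i][j]))
    return max(diffs)

def barrier_cells(elev, factor=1.5):
    """Flag cells whose slope exceeds `factor` times the grid-average slope."""
    rows, cols = len(elev), len(elev[0])
    slopes = [[slope(elev, i, j) for j in range(cols)] for i in range(rows)]
    mean = sum(sum(row) for row in slopes) / (rows * cols)
    return {(i, j) for i in range(rows) for j in range(cols)
            if slopes[i][j] > factor * mean}

# A flat plain crossed by a steep ridge (column 2):
elev = [[0, 0, 500, 0, 0],
        [0, 0, 500, 0, 0],
        [0, 0, 500, 0, 0]]
print(sorted(barrier_cells(elev)))  # the ridge column and its flanks
```

A combined slope-and-elevation criterion, as suggested in the text, would simply add a second condition on `elev[i][j]` itself before flagging a cell.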
Environmental variables such as climate or vegetation could also be incorporated into the model. For instance, with respect to the central Asian barrier that the greenish warbler encircles, Monahan et al. find that their model did not identify a single barrier - rather, they construct a composite barrier out of two separate barriers identified by the model. They remark that, in cases such as this, 'it is difficult to imagine any univariate or multivariate environmental approximation of a single barrier (for example, Central Asia, which is comprised of the Takla Maka-Gobi deserts and the Tibetan Plateau - large geographic regions that differ dramatically in terms of climate and vegetation).' However, a good explanatory variable has been identified in this case: greenish warblers inhabit forests , and maps of forests in Asia (for example, ) show a large gap that includes the Tibetan Plateau as well as the Taklamakan and Gobi deserts. Other examples of large potential barriers that show up clearly when considering a basic environmental variable (wet versus dry) are Antarctica, Australia, and Greenland (for marine and/or terrestrial coastal organisms), which were missed by the current topographic model. It is clear that the addition of other topographic and environmental variables could greatly enhance the precision of the model, and Monahan et al. emphasize that their general approach can be modified to work with any kind of continuously distributed environmental variable, making it of wide applicability to many different types of investigations into barriers to dispersal that may contribute to speciation.
Finally, the very large number of potential barriers identified by the topographic model (952,147, about 10,000 of which are 'topographically similar' to those associated with known ring taxa ) raises another issue. Given the very large number of identified candidate barriers, it is almost inevitable that at least one will be associated with any interesting species complex that we might point to as a candidate for ring speciation, and this means that the predictive value of the model will depend on further refinement. Despite these issues, it is likely that the present model represents an important first step in this geography-oriented approach to the analysis of barriers involved in both ring speciation and speciation more generally. The approach proposed by Monahan et al. will likely be adapted to incorporate multiple variables (in addition to slopes), and this will allow more refined identifications of a smaller number of potential barriers, resulting in more useful predictions. The discovery and inclusion of more ring species (for example, the willow warblers Phylloscopus trochilus, which display a form of incipient ring speciation around the Baltic Sea [5, 10]) will likewise allow further refinement of the model, perhaps eventually allowing an analysis of what types of barriers are associated with ring species from different taxonomic groups. By applying an explicit geographic framework to the analysis of ring species, Monahan et al. have pioneered an interesting new approach to the study of the relationship between geography and speciation. In the years ahead, it will be exciting to see whether additional ring species are identified using this geography-oriented approach.
This work was funded by grants from the Natural Sciences and Engineering Research Council of Canada.
- Jordan DS: The law of geminate species. Am Nat. 1908, 42: 73-80. doi:10.1086/278905.
- Mayr E: Systematics and the Origin of Species. 1942, New York: Dover Publications.
- Cain AJ: Animal Species and their Evolution. 1954, London: Hutchinson House.
- Irwin DE, Irwin JH, Price TD: Ring species as bridges between microevolution and speciation. Genetica. 2001, 112-113: 223-243.
- Irwin DE: Incipient ring speciation revealed by a migratory divide. Mol Ecol. 2009, 18: 2923-2925. doi:10.1111/j.1365-294X.2009.04211.x.
- Wake DB: Incipient species formation in salamanders of the Ensatina complex. Proc Natl Acad Sci USA. 1997, 94: 7761-7767. doi:10.1073/pnas.94.15.7761.
- Irwin DE, Bensch S, Price TD: Speciation in a ring. Nature. 2001, 409: 333-337. doi:10.1038/35053059.
- Monahan WB, Pereira RJ, Wake DB: Ring distributions leading to species formation: a global topographic analysis of geographic barriers associated with ring species. BMC Biol. 2012, 10: 20. doi:10.1186/1741-7007-10-20.
- Global Forest Watch: Global Forest Map. [http://www.globalforestwatch.org/english/interactive.maps/global.htm]
- Bensch S, Grahn M, Müller N, Gay L, Åkesson S: Genetic, morphological, and feather isotope variation of migratory willow warblers show gradual divergence in a ring. Mol Ecol. 2009, 18: 3087-3096. doi:10.1111/j.1365-294X.2009.04210.x.
This article is published under license to BioMed Central Ltd. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Python is an interpreted, interactive, object-oriented programming language. It is often compared to Tcl, Perl, Scheme or Java. Python combines remarkable power with very clear syntax. It has modules, classes, exceptions, very high level dynamic data types, and dynamic typing. There are interfaces to many system calls and libraries, as well as to various windowing systems (X11, Motif, Tk, Mac, MFC). New built-in modules are easily written in C or C++. Python is also usable as an extension language for applications that need a programmable interface.The Python implementation is portable: it runs on many brands of UNIX, on Windows, DOS, OS/2, Mac, Amiga. If your favorite system isn't listed here, it may still be supported, if there's a C compiler for it. Ask around on comp.lang.python -- or just try compiling Python yourself.
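The features listed above — classes, exceptions, and dynamic typing — can be seen in a few lines (a minimal illustrative sketch, not part of the distribution):

```python
# A tiny class demonstrating Python's dynamic typing and exception handling.

class Stack:
    """A minimal stack; elements may be of any type (dynamic typing)."""
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        if not self._items:
            raise IndexError("pop from empty stack")
        return self._items.pop()

s = Stack()
s.push(42)          # an int...
s.push("hello")     # ...and a string on the same stack
print(s.pop())      # hello
print(s.pop())      # 42

try:
    s.pop()
except IndexError as exc:
    print("caught:", exc)   # caught: pop from empty stack
```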
| File Size | 162.01 MB |
| System Requirements | macOS High Sierra; macOS Sierra; OS X El Capitan; OS X Yosemite; OS X Mavericks; OS X Mountain Lion; OS X Lion; OS X Snow Leopard |
Both air pollution and global warming could be reduced by controlling emissions of methane gas, according to a new study by scientists at Harvard University, the Argonne National Laboratory, and the Environmental Protection Agency. The reason, they say, is that methane is directly linked to the production of ozone in the troposphere, the lowest part of Earths atmosphere, extending from the surface to around 12 kilometers [7 miles] altitude. Ozone is the primary constituent of smog and both methane and ozone are significant greenhouse gases.
A simulation based upon emissions projections by the Intergovernmental Panel on Climate Change (IPCC) predicts a longer and more intense ozone season in the United States by 2030, despite domestic emission reductions, the researchers note. Mitigation should therefore be considered on a global scale, the researchers say, and must take into account a rising global background level of ozone. Currently, the U.S. standard is based upon 84 parts per billion by volume of ozone, not to be exceeded more than three times per year, a standard that is not currently met nationwide. In Europe, the standard is much stricter, 55-65 parts of ozone per billion by volume, but these targets are also exceeded in many European countries.
Writing this month in the journal Geophysical Research Letters, Arlene M. Fiore and her colleagues say that one way to simultaneously decrease ozone pollution and greenhouse warming is to reduce methane emissions. Ozone is formed in the troposphere by chemical reactions involving methane, other organic compounds, and carbon monoxide, in the presence of nitrogen oxides and sunlight. Methane is known to be a major source of ozone throughout the troposphere, but is not usually considered to play a key role in the production of ozone smog in surface air, because of its long lifetime.
Harvey Leifert | EurekAlert!
Upcycling of PET Bottles: New Ideas for Resource Cycles in Germany
25.06.2018 | Fraunhofer-Institut für Betriebsfestigkeit und Systemzuverlässigkeit LBF
Dry landscapes can increase disease transmission
20.06.2018 | Forschungsverbund Berlin e.V.
Astronomers using NASA's Hubble Space Telescope have measured the rotation rate of an extreme exoplanet by observing the varied brightness in its atmosphere. This is the first measurement of the rotation of a massive exoplanet using direct imaging.
"The result is very exciting," said Daniel Apai of the University of Arizona in Tucson, leader of the Hubble investigation. "It gives us a unique technique to explore the atmospheres of exoplanets and to measure their rotation rates."
This is an illustration of a planet that is four times the mass of Jupiter and orbits 5 billion miles from a brown dwarf companion object (the bright red star seen in the background). The planet is only 170 light-years away. Our sun is a faint star in the background.
Credits: NASA, ESA, and G. Bacon/STScI
The planet, called 2M1207b, is about four times more massive than Jupiter and is dubbed a "super-Jupiter." It is a companion to a failed star known as a brown dwarf, orbiting the object at a distance of 5 billion miles. By contrast, Jupiter is approximately 500 million miles from the sun. The brown dwarf is known as 2M1207. The system resides 170 light-years away from Earth.
Hubble's image stability, high resolution, and high-contrast imaging capabilities allowed astronomers to precisely measure the planet's brightness changes as it spins. The researchers attribute the brightness variation to complex cloud patterns in the planet's atmosphere. The new Hubble measurements not only verify the presence of these clouds, but also show that the cloud layers are patchy and colorless.
Astronomers first observed the massive exoplanet 10 years ago with Hubble. The observations revealed that the exoplanet's atmosphere is hot enough to have "rain" clouds made of silicates: vaporized rock that cools down to form tiny particles with sizes similar to those in cigarette smoke. Deeper into the atmosphere, iron droplets are forming and falling like rain, eventually evaporating as they enter the lower levels of the atmosphere.
"So at higher altitudes it rains glass, and at lower altitudes it rains iron," said Yifan Zhou of the University of Arizona, lead author on the research paper. "The atmospheric temperatures are between about 2,200 to 2,600 degrees Fahrenheit."
The super-Jupiter is so hot that it appears brightest in infrared light. Astronomers used Hubble's Wide Field Camera 3 to analyze the exoplanet in infrared light to explore the object's cloud cover and measure its rotation rate. The planet is hot because it is only about 10 million years old and is still contracting and cooling. For comparison, Jupiter in our solar system is about 4.5 billion years old.
The planet, however, will not maintain these sizzling temperatures. Over the next few billion years, the object will cool and fade dramatically. As its temperature decreases, the iron and silicate clouds will also form lower and lower in the atmosphere and will eventually disappear from view.
Zhou and his team have also determined that the super-Jupiter completes one rotation approximately every 10 hours, spinning at about the same fast rate as Jupiter.
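The idea behind the measurement — brightness rises and falls as cloud patterns rotate in and out of view, so the period of the modulation is the rotation period — can be sketched with synthetic data (all numbers here are illustrative, not the team's actual light curve or method):

```python
import math

# Synthetic light curve: brightness modulated with a 10-hour rotation period.
TRUE_PERIOD = 10.0                       # hours
times = [0.5 * k for k in range(100)]    # one observation every 30 minutes
flux = [1.0 + 0.02 * math.sin(2 * math.pi * t / TRUE_PERIOD) for t in times]

def periodogram_power(times, flux, period):
    """Least-squares power of a sinusoid at the trial period."""
    mean = sum(flux) / len(flux)
    a = sum((f - mean) * math.cos(2 * math.pi * t / period)
            for t, f in zip(times, flux))
    b = sum((f - mean) * math.sin(2 * math.pi * t / period)
            for t, f in zip(times, flux))
    return a * a + b * b

# Scan trial periods from 5 h to 15 h and pick the one with the highest power.
trials = [5.0 + 0.1 * k for k in range(101)]
best = max(trials, key=lambda p: periodogram_power(times, flux, p))
print(f"estimated rotation period: {best:.1f} h")  # close to 10.0 h
```

In practice astronomers use more robust tools (e.g. a Lomb-Scargle periodogram) that handle uneven sampling and noise, but the principle is the same.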
This super-Jupiter is only about five to seven times less massive than its brown-dwarf host. By contrast, our sun is about 1,000 times more massive than Jupiter. "So this is a very good clue that the 2M1207 system we studied formed differently than our own solar system," Zhou explained. The planets orbiting our sun formed inside a circumstellar disk through accretion. But the super-Jupiter and its companion may have formed through the gravitational collapse of a pair of separate disks.
"Our study demonstrates that Hubble and its successor, NASA's James Webb Space Telescope, will be able to derive cloud maps for exoplanets, based on the light we receive from them," Apai said. Indeed, this super-Jupiter is an ideal target for the Webb telescope, an infrared space observatory scheduled to launch in 2018. Webb will help astronomers better determine the exoplanet's atmospheric composition and derive detailed maps from brightness changes with the new technique demonstrated with the Hubble observations.
Results from this study will appear in the Feb. 11, 2016, edition of The Astrophysical Journal.
For more information about NASA's Hubble Space Telescope, visit:
Ray Villard | EurekAlert!
1. Coastal grazing marshes comprise an important habitat for wetland biota but are threatened by agricultural intensi®cation and conversion to arable farmland. In Britain, the Environmentally Sensitive Area (ESA) scheme addresses these pro-blems by providing ®nancial incentives to farmers to retain their grazing marshes, and to follow conservation management prescriptions. 2. A modelling approach was used to aid the development of management pre-scriptions for ground-nesting birds in the North Kent Marshes ESA. This ESA contains the largest area of coastal grazing marsh remaining in England and Wales (c. 6500 ha) and supports nationally important breeding populations of lapwing Vanellus vanellus and redshank Tringa totanus. 3. Counts of ground-nesting birds, and assessments of sward structure, surface topography and wetness, landscape structure and sources of human disturbance were made in 1995 and again in 1996, on 19 land-holdings with a combined area of c. 3000 ha. The land-holdings varied from nature reserves at one extreme to an intensive dairy farm at the other. 4. Models of relationship between the presence or absence of ground-nesting birds and the grazing marsh habitat in each of c. 430 marshes were constructed using a generalized linear mixed modelling (GLMM) method. This is an extension to the conventional logistic regression approach, in which a random term is used to model dierences in the proportion of marshes occupied on dierent land-holdings. 5. The combined species models predicted that the probability of marshes being occupied by at least one ground-nesting species increased concomitantly with the complexity of the grass sward and surface topography but decreased in the pre-sence of hedgerows, roads and power lines. 6. Models were also prepared for each of the 10 most widespread species, including lapwing and redshank. Their composition diered between species. 
Variables describing the sward were included in models for five species: heterogeneity of sward height tended to be more important than mean sward height. Surface topography and wetness were important for waders and wildfowl but not for other species. Effects of boundaries, proximity to roads and power lines were included in some models and were negative in all cases. 7. Binomial GLMMs are useful for investigating habitat factors that affect the distribution of birds at two nested spatial scales, in this case fields (marshes) grouped within farms. Models of the type presented in this paper provide a framework for targeting of conservation management prescriptions for ground-nesting birds at the field scale on the North Kent Marshes ESA and on lowland wet grassland elsewhere in Europe.
To efficiently analyze a firehose of data, scientists first have to break big numbers into bits.
Voevodsky’s friends remember him as constitutionally unable to compromise on the truth — a quality that led him to produce some of the most important mathematics of the 20th century.
A type of symmetry so unusual that it was called a “pariah” turns out to have deep connections to number theory.
Two mathematicians have proved that two different infinities are equal in size, settling a long-standing question. Their proof rests on a surprising link between the sizes of infinities and the complexity of mathematical theories.
To tell truth from fiction, start with quantitative thinking, argues the mathematician Rebecca Goldin.
By: Harry McSween
432 pages, illustrated
This book was written expressly for geologists and focuses on how geochemical principles can be used to solve practical problems. It incorporates new geochemical discoveries as examples of processes and pathways, and contains chapters on mineral structure and bonding, on organic matter and biomarkers.
I would happily recommend this book as a wide-ranging introduction to the subject. -- Mike Fowler Geological Magazine v. 142 2005
I have worked this problem many different ways and cannot seem to set it up correctly. Na2CO3 + 2HCl => 2NaCl + CO2 + H2O I have 36.53 g of sodium carbonate reacting with 32.94 g of hydrochloric acid. I am trying to find how many mL of water are produced from this reaction.
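One way to set up the stoichiometry is a limiting-reagent check; a Python sketch (the molar masses below are standard values I've assumed, not given in the question):

```python
# Limiting-reagent setup for Na2CO3 + 2 HCl -> 2 NaCl + CO2 + H2O
M_NA2CO3, M_HCL, M_H2O = 105.99, 36.46, 18.02   # g/mol (assumed standard values)

n_na2co3 = 36.53 / M_NA2CO3      # ~0.345 mol
n_hcl = 32.94 / M_HCL            # ~0.903 mol

# 2 mol HCl are needed per mol Na2CO3; here HCl is in excess,
# so Na2CO3 limits and moles of H2O equal moles of Na2CO3.
n_h2o = min(n_na2co3, n_hcl / 2)
mass_h2o = n_h2o * M_H2O         # grams
print(round(mass_h2o, 2))
```

Since Na2CO3 runs out first, it fixes the moles of water; at a density of roughly 1.00 g/mL the mass in grams is numerically the volume in mL.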
1. a) A copper refining cell is operating with an electrolyte which contains 0.05 m Sn2+. At what concentration of Cu2+ will the tin start to plate out in preference to the copper? The cell temperature is 40°C. b) A polarization potential arises from a concentration difference between ions in the neighborhood of the el
A gas was found to have a density of 0.08747 mg/mL at 17.0 degrees C and a pressure of 760 torr. What is its molecular mass? What type of gas is this most likely?
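The ideal-gas route M = dRT/P can be checked quickly; note that mg/mL is numerically the same as g/L (a sketch):

```python
d = 0.08747          # mg/mL, i.e. g/L
R = 0.08206          # L·atm/(mol·K)
T = 17.0 + 273.15    # K
P = 760 / 760.0      # torr converted to atm
M = d * R * T / P    # molar mass, g/mol
print(round(M, 2))
```

A molar mass near 2 g/mol points to hydrogen (H2).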
A certain laser uses a gas mixture made up of 9.00 g HCl, 2.00 g H2, and 165.0 g Ne. What pressure is exerted by the mixture in a 75.0 L tank at 22 degrees C? Which gas has the smallest partial pressure?
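A sketch of the mole bookkeeping (Python; the molar masses are assumed standard values):

```python
masses = {"HCl": 9.00, "H2": 2.00, "Ne": 165.0}            # g
molar_mass = {"HCl": 36.46, "H2": 2.016, "Ne": 20.18}      # g/mol (assumed)
moles = {gas: m / molar_mass[gas] for gas, m in masses.items()}

R, T, V = 0.08206, 22 + 273.15, 75.0
P_total = sum(moles.values()) * R * T / V                  # atm

# fewest moles -> smallest mole fraction -> smallest partial pressure
smallest = min(moles, key=moles.get)
print(round(P_total, 2), smallest)
```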
A 100 ml sample of 0.10 molar NaOH solution was added to 100 ml of 0.10 molar H3C6H5O7. After equilibrium was established, which of the ions listed below was present in the greatest concentration? A) H2C6H5O7^- B) HC6H5O7^2- C) C6H5O7^3- D) OH^- E) H^+ Please provide a thorough explanation for this question.
How can you use refractive indices to calculate weight % composition of a distillate and its residue? How can you convert the weight % composition to mole fractions? Could you please show me how they relate in mathematical form?
1. Step 1: Cl2 → 2 Cl (fast) Step 2: CHCl3 + Cl → CCl3 + HCl (slow) Step 3: CCl3 + Cl → CCl4 (fast) Identify the rate determining step, reaction intermediates and the experimental rate law (intermediates can not be in the rate law) 2. For the half-reaction XeF2(aq) +
The numerical value of the rate constant for the gaseous reaction 2 N2O5 → 4 NO2 + O2 was found to be 5.8 x 10^-4. The initial concentration of N2O5 was 1.00 mol/L. Assuming all measurements are recorded in seconds, determine the time required for the reaction to be 60% complete if the reaction is second order.
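Assuming the integrated second-order law 1/[A] = 1/[A]0 + kt applies to N2O5 here (with k in L/(mol·s)), the time works out as:

```python
k = 5.8e-4            # assumed units: L/(mol·s)
A0 = 1.00             # mol/L
A = 0.40 * A0         # 60% complete leaves 40% of the N2O5
t = (1 / A - 1 / A0) / k   # seconds
print(round(t))
```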
1. 5.30 moles of gas, initially at 25 degrees Celsius and 5.00 atm pressure, was allowed to expand adiabatically (with no heat exchange between system and surroundings) against a constant external pressure of 1.50 atm until the initial volume had trebled. The heat capacity of the gas was known to be 37.1 J/(°C mol)
Estimate the critical constants of a gas with van der Waals parameters a = 1.32 atm L^2/mol^2 and b = 0.0436 L/mol.
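Using the standard van der Waals critical-point relations (Vc = 3b, Tc = 8a/27Rb, Pc = a/27b^2), a quick numeric check:

```python
a, b = 1.32, 0.0436        # atm·L^2/mol^2, L/mol
R = 0.08206                # L·atm/(mol·K)

Vc = 3 * b                 # critical molar volume, L/mol
Tc = 8 * a / (27 * R * b)  # critical temperature, K
Pc = a / (27 * b ** 2)     # critical pressure, atm
print(round(Vc, 4), round(Tc, 1), round(Pc, 1))
```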
Derive an expression and the numerical value for the compression factor of a gas that obeys the given equation of state.
Derive an expression for the compression factor of a gas that obeys the equation of state P(V - nb) = nRT where b and R are constants. If the pressure and temperature are such that Vm = 10b, what is the numerical value of the compression factor? Extra Info. As a measure of the deviation from ideality of the behavi
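For this equation of state the compression factor follows directly: Z = PVm/RT = Vm/(Vm − b), so at Vm = 10b the result is independent of the actual value of b (sketch):

```python
b = 1.0               # arbitrary; Z depends only on the ratio Vm/b
Vm = 10 * b
Z = Vm / (Vm - b)     # = 10/9
print(round(Z, 3))
```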
The two figures show Cp vs. T for various transitions. I need to know what the graph would look like for: 1. U (internal energy) vs. T 2. S (entropy) vs. T 3. and V vs. T for a first order (discontinuous) transition. Extra info: For the equilibrium phase transitions at constant T and P (temperature and pressure) t
An ammonium ion selective electrode responds to both NH4+ and H+, but not to NH3. (pKa = 9.244 for NH4+) A) What pH conditions must be imposed to ensure the electrode properly represents the activity of NH4+? B) Why is the ionic strength an important factor when using this or any ISE? How can you compensate for this effect?
In the study of a first-order reaction, A--->B, it is found that A/Ao=0.125 after one hour. The system initially consisted of 0.20 mole of a gaseous A at 25 degrees C and 1 atm. Calculate the initial rate of reaction in moles of A reacting per minute.
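One way to set this up: from A/A0 = 0.125 at t = 60 min, k = −ln(0.125)/60, and the initial rate is k·n0 if we work directly in moles (constant volume assumed):

```python
import math

k = -math.log(0.125) / 60.0   # min^-1 (12.5% of A remains after 1 h)
n0 = 0.20                     # initial moles of A
rate0 = k * n0                # mol of A reacting per minute, at t = 0
print(round(rate0, 5))
```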
For the following mechanism: N2O5 ⇌ NO2 + NO3 (forward rate constant k1, reverse rate constant k2); NO + NO3 → 2 NO2 (rate constant k3). Using the steady-state assumption, find the expression for dP(NO2)/dt.
Determine the percent by mass of all the elements in the compound cobalt(III) acetate.
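A sketch of the calculation, taking the formula as Co(C2H3O2)3 and standard atomic masses (both are assumptions worth double-checking):

```python
atomic = {"Co": 58.933, "C": 12.011, "H": 1.008, "O": 15.999}  # g/mol, assumed
counts = {"Co": 1, "C": 6, "H": 9, "O": 6}                     # Co(C2H3O2)3

M = sum(atomic[e] * n for e, n in counts.items())              # formula mass
pct = {e: 100 * atomic[e] * n / M for e, n in counts.items()}  # mass percent
print({e: round(p, 2) for e, p in pct.items()})
```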
Please solve the following question: How many atoms of magnesium are there in 52.65 moles of Mg3(PO4)2?
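The setup is just moles x (atoms per formula unit) x Avogadro's number; for magnesium that is 3 atoms per formula unit of Mg3(PO4)2:

```python
N_A = 6.022e23               # Avogadro's number, mol^-1
atoms_mg = 52.65 * 3 * N_A   # 3 Mg atoms per formula unit
print(f"{atoms_mg:.3e}")
```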
How would you prepare a liter of "carbonate buffer" at a pH of 10.10? Ka = 4.2 x 10^-7 (carbonic acid); Ka = 4.8 x 10^-11 (bicarbonate ion). It says the key to solving the problem is picking the correct Ka value. How do I know which one to work with?
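A hint on picking the Ka: Henderson-Hasselbalch buffering works with the pKa closest to the target pH, which here is the bicarbonate/carbonate pair (a sketch):

```python
import math

pKa1 = -math.log10(4.2e-7)    # H2CO3 / HCO3^-  (~6.38, far from pH 10.10)
pKa2 = -math.log10(4.8e-11)   # HCO3^- / CO3^2- (~10.32, the relevant one)
ratio = 10 ** (10.10 - pKa2)  # required [CO3^2-] / [HCO3^-]
print(round(pKa2, 2), round(ratio, 2))
```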
Find delta Smix, delta Gmix, delta Hmix, and delta Vmix if 125 g of benzene and 25 g of naphthalene are mixed at 60 degrees C. Assume the solution to be ideal. The molecular weights of benzene and naphthalene are 78.12 g/mol and 128.19 g/mol respectively.
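For an ideal solution ΔHmix = ΔVmix = 0 and the entropy of mixing carries everything; a numeric sketch using R = 8.314 J/(mol·K):

```python
import math

R = 8.314                    # J/(mol·K)
T = 60 + 273.15              # K
n1 = 125 / 78.12             # mol benzene
n2 = 25 / 128.19             # mol naphthalene
n = n1 + n2
x1, x2 = n1 / n, n2 / n      # mole fractions

dS_mix = -n * R * (x1 * math.log(x1) + x2 * math.log(x2))  # J/K
dG_mix = -T * dS_mix                                       # J
dH_mix = dV_mix = 0.0                                      # ideal solution
print(round(dS_mix, 2), round(dG_mix))
```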
Calculate the freezing point of 250 ml of water containing 7.5 g of sucrose. For water, Kf=1.86 K kg/mol. The molecular weight of sucrose is 342.3 g/mol.
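A sketch via ΔTf = Kf·m, assuming 250 mL of water weighs 0.250 kg:

```python
molality = (7.5 / 342.3) / 0.250   # mol sucrose per kg water
dTf = 1.86 * molality              # freezing-point depression, K
Tf = 0.0 - dTf                     # freezing point, °C
print(round(Tf, 2))
```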
(See attached file for full problem description) --- 31. A sample of H2 gas occupies 615 mL at C and 575 mm Hg. When the gas is cooled, its volume is reduced to 455 mL and its pressure is reduced to 385 mm Hg. What is the new temperature of the gas? 54. Hydrogen can be made in the "water gas reaction." If you
I need some help with this question: Two wines are available for blending: one tank of 1000 L has a Titratable Acidity of 9.0 g/L and another tank containing 2000 L has a Titratable Acidity of 0.6 g/L. How much volume do you need to blend to make the 9.0 g/L Titratable Acid wine equivalent to 7.2 g/L? What is the final volume?
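The blend is a straight mass balance on titratable acidity; solving V1·TA1 + V2·TA2 = (V1 + V2)·TA_target for V2:

```python
V1, TA1 = 1000.0, 9.0      # L, g/L (high-acid tank)
TA2, target = 0.6, 7.2     # g/L (low-acid wine, target acidity)

V2 = V1 * (TA1 - target) / (target - TA2)   # L of low-acid wine to add
final_volume = V1 + V2
print(round(V2, 1), round(final_volume, 1))
```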
Relationship of the mole to the amu. Is a mole the same thing as an amu?
I need to find a simple way to convert gases expressed as mol % to volume percent. Pressure and temperature are constant. For example, what volume percent would water vapour be if given as 1.52 mol %?
Consider the combustion reaction for propane below. CH3CH2CH3(g) + 5 O2(g) → 3 CO2(g) + 4 H2O(g) a) Determine the standard-state delta G at 298 K from the following standard-state values at T = 298: S(CH3CH2CH3) = 269.9 J/K mol S(CO2) = 213.7 J/K mol S(H2O) = 188.8 J/K mol S(O2) = 205.1 J/K mol delta H(CH3CH2CH3) = -103.9 kJ/mol
Show that the partial derivative of H with respect to P at constant T is 0 for an ideal gas. Hint: start with dH = V dP + T dS. Divide by dP and impose constant T. Use a Maxwell relation from dG.
The heat capacity of a gas is given by Cp = a + bT, where a and b are constants: a) Determine delta S for heating the gas from T1 to T2 at constant P. b) Determine delta G for heating the gas from T1 to T2 at constant P. HINT: the integral of ln x dx = x ln x - x
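Part (a) comes from ΔS = ∫ Cp/T dT = a ln(T2/T1) + b(T2 − T1); a quick numeric check of that closed form (the values of a, b, T1, T2 below are illustrative, not from the question):

```python
import math

a, b = 20.0, 0.01            # hypothetical Cp = a + b*T coefficients
T1, T2 = 300.0, 500.0        # K

# midpoint-rule numerical integral of Cp/T dT from T1 to T2
n = 100_000
h = (T2 - T1) / n
numeric = sum((a + b * (T1 + (i + 0.5) * h)) / (T1 + (i + 0.5) * h) * h
              for i in range(n))

closed = a * math.log(T2 / T1) + b * (T2 - T1)   # the closed-form delta S
print(round(numeric, 4), round(closed, 4))
```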
Find the entropy change delta S for argon gas undergoing the following temperature and pressure changes: Ar(g), P = 1 atm, T = 300 K → Ar(g), P = 10 atm, T = 500 K.
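Treating argon as a monatomic ideal gas (Cp = 5R/2), ΔS = Cp ln(T2/T1) − R ln(P2/P1):

```python
import math

R = 8.314                     # J/(mol·K)
Cp = 2.5 * R                  # monatomic ideal gas assumption
dS = Cp * math.log(500 / 300) - R * math.log(10 / 1)   # J/(mol·K)
print(round(dS, 2))
```

The pressure term outweighs the heating term, so ΔS is negative.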
Consider the reaction: 2 H2(g) + O2(g) → 2 H2O(g). Determine delta G for this reaction at 1000 K.
*** Please see file for full description*** 1. Into each of three potatoes insert 1 copper and 1 zinc probe. The probes should be inserted to a medium depth inside the potatoes and should be spaced within about an inch of each other. The probes should NOT touch inside or outside of the potato. IMPORTANT: To illuminate an
A distinct decline in horseshoe crab numbers has occurred that parallels climate change associated with the end of the last Ice Age, according to a study that used genomics to assess historical trends in population sizes.
The new research also indicates that horseshoe crabs numbers may continue to decline in the future because of predicted climate change, said Tim King, a scientist with the U.S. Geological Survey and a lead author on the new study published in Molecular Ecology.
While the current decline in horseshoe crabs is attributed in great part to overharvest for fishing bait and for the pharmaceutical industry, the new research indicates that climate change also appears to have historically played a role in altering the numbers of successfully reproducing horseshoe crabs. More importantly, said King, predicted future climate change, with its accompanying sea-level rise and water temperature fluctuations, may well limit horseshoe crab distribution and interbreeding, resulting in distributional changes and localized and regional population declines, such as happened after the last Ice Age.
“Using genetic variation, we determined the trends between past and present population sizes of horseshoe crabs and found that a clear decline in the number of horseshoe crabs has occurred that parallels climate change associated with the end of the last Ice Age,” said King.
The research substantiated recent significant declines in all areas where horseshoe crabs occur along the West Atlantic Coast from Maine to Florida and the eastern Gulf of Mexico, with the possible exception of a distinct population along the Yucatan Peninsula of Mexico.
These findings, combined with the results of a 2005 study by King and colleagues, have important implications for the welfare of wildlife that rely on nutrient-rich horseshoe crab eggs for food each spring.
For example, Atlantic loggerhead sea turtles, which used to feed mainly on adult horseshoe crabs and blue crabs in Chesapeake Bay, already have been forced to find other less suitable sources of food, perhaps contributing to declines in Virginia’s sea turtle abundance. Additionally, horseshoe crab eggs are an important source of food for millions of migrating shorebirds. This is particularly true for the red knot, an at-risk shorebird that uses horseshoe crab eggs at Delaware Bay to refuel during its marathon migration of some 10,000 miles. Since the late 1990s, both horseshoe crabs and red knot populations in the Delaware Bay area have declined, although census numbers for horseshoe crabs have increased incrementally recently.
“Population size decreases of these ancient mariners have implications beyond the obvious,” King said. “Genetic diversity is the most fundamental level of biodiversity, providing the raw material for evolutionary processes to act upon and affording populations the opportunity to adapt to their surroundings. For this reason, the low effective population sizes indicated in the new study give one pause.”
These studies should help conservation managers make better-informed decisions about protecting horseshoe crabs and other species with a similar evolutionary history. For example, the 2005 study indicated males moved between bays but females did not, suggesting management efforts may best be targeted at local populations instead of regional ones since an absence of enough females may result in local extinctions.
“Consequently, harvest limitations on females in populations with low numbers may be a useful management strategy, as well as relocating females from adjacent bays to help restore certain populations,” King said. “Both studies highlight the importance of considering both climatic change and other human-caused factors such as overharvest in understanding the population dynamics of this and other species.”
Background on Horseshoe Crabs
Horseshoe crabs are not crabs at all – in fact, they are more closely related to spiders, ticks and scorpions. While historically horseshoe crabs have been used in fertilizer, most horseshoe crab harvest today comes from the fishing industry, which uses the crab as bait, and the pharmaceutical industry, which collects their blood for its clotting properties. While the crabs are returned after their blood is taken, the estimated mortality rate for bled horseshoe crabs can be as high as 30 percent.
The research, Population dynamics of American horseshoe crabs—historic climatic events and recent anthropogenic pressures, was published in the June issue of Molecular Ecology and was authored by Søren Faurby (Aarhus University, Denmark), Tim King, Matthias Obst (University of Gothenburg, Sweden) and others.
The 2005 study, Regional differentiation and sex-biased dispersal among populations of the horseshoe crab (Limulus polyphemus), was published in the Transactions of the American Fisheries Society and authored by Tim King, Mike Eackles, Adrian Spidle (USGS) and Jane Brockman (University of Florida).
Catherine Puckett | EurekAlert!
Fresh water resources, human societies, and ecosystems are expected to be strongly impacted by climate change, with precipitation trends being one of the most important elements that will be closely monitored. However, the natural variability of precipitation data can often mask existing trends such that the results appear statistically insignificant. Information on the limitations of trend detection is important for risk assessment and for decision making related to adaptation strategies under inherent uncertainties. This paper reports on an effort to quantify and map minimal detectable absolute trends in annual precipitation data series on a global scale. Monte Carlo simulations were conducted to generate realizations of trended precipitation data for different precipitation means and coefficients of variance, and the Mann-Kendall method was applied for detecting the trend significance. Global Precipitation Climatology Centre (GPCC) VASClimO data was used to compute the mean and coefficient of variance of annual precipitation over land and to map minimal detectable absolute trends. It was found that relatively high magnitude trends (positive or negative) have a low chance of being detected as a result of the high natural variance of the precipitation data. The largest undetectable trends were found for the tropics. Arid and semiarid regions also present high relative values in terms of percent change from the mean annual precipitation. Although the present analysis is based on several simplified assumptions, the goal was to point out an inherent problem of potentially undetectable high absolute trends that must be considered in analyzing precipitation data series and assessing risks in adaptation strategies to climate change.
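The Mann-Kendall statistic the abstract relies on is simple to compute; a minimal sketch (no correction for ties, which real precipitation series would need):

```python
import math

def mann_kendall(x):
    """Mann-Kendall trend test: returns (S, Z). No tie correction."""
    n = len(x)
    # S counts concordant minus discordant pairs over all i < j
    s = sum((x[j] > x[i]) - (x[j] < x[i])
            for i in range(n - 1) for j in range(i + 1, n))
    var = n * (n - 1) * (2 * n + 5) / 18
    if s > 0:
        z = (s - 1) / math.sqrt(var)
    elif s < 0:
        z = (s + 1) / math.sqrt(var)
    else:
        z = 0.0
    return s, z

S, Z = mann_kendall([3, 5, 4, 6, 8, 7, 9])   # weakly increasing toy series
print(S, round(Z, 2))
```

At the usual 5% level, |Z| > 1.96 flags a significant trend; high natural variance inflates var and so shrinks Z for a given trend magnitude, which is the detection limit the paper maps.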
The FLASH project was implemented from 2006 to 2010 under the EU FP6 framework. The project focused on using lightning observations to better understand and predict convective storms that result in flash floods. As part of the project 23 case studies of flash floods in the Mediterranean region were examined. For the analysis of these storms lightning data from the ZEUS network were used together with satellite derived rainfall estimates in order to understand the storm development and electrification. In addition, these case studies were simulated using mesoscale meteorological models to better understand the meteorological and synoptic conditions leading up to these intense storms. As part of this project tools for short term predictions (nowcasts) of intense convection across the Mediterranean and Europe, and long term forecasts (a few days) of the likelihood of intense convection were developed. The project also focused on educational outreach through our website http://flashproject.org supplying real time lightning observations, real time experimental nowcasts, forecasts and educational materials. While flash floods and intense thunderstorms cannot be prevented as the climate changes, long-range regional lightning networks can supply valuable data, in real time, for warning end-users and stakeholders of imminent intense rainfall and possible flash floods. © 2011 Elsevier Ltd.
While intense scientific efforts have focused on radar precipitation estimation in temperate climatic regimes, relatively few studies have examined dry climatic regions. This paper examines rain depth estimation for a 19 day rainfall period in Israel, where the gauge spatial distribution is particularly nonhomogeneous. This fact exacerbates the main drawback of rain gauge observations, which is undersampling. Meteorological ground-based radar (GR) can supplement the desired information on precipitation distribution. However, especially in a complex orographic region, radar scientists are faced with beam broadening with distance, nonhomogeneous beam filling, and partial-beam occultation, together with changes in the vertical reflectivity profile. This paper presents an improvement of GR precipitation estimates thanks to a range adjustment based on spaceborne meteorological radar. In the past, the Tropical Rainfall Measuring Mission (TRMM) satellite radar was used for checking the GR mean field bias around the world. To our knowledge, however, it is the first time that GR-derived cumulative rainfall amounts show a better agreement with gauges, thanks to the mean field bias and range-dependent compensation derived using the well-calibrated Ku band TRMM radar as a reference. The average bias improves from +1.0 dB to −0.3 dB; more interesting and difficult to obtain is a reduction of the dispersion of the error. Using TRMM-based range compensation, the scatter decreases from 2.21 dB to 1.93 dB. We conclude that it is well worth trying to compensate for the GR range degradation.
The climate of the eastern Mediterranean (EM), at the transition zone between the Mediterranean climate and the semi‐arid/arid climate, has been studied for a 39‐year period to determine whether climate changes have taken place. A thorough trend analysis using the nonparametric Mann‐Kendall test with Sen's slope estimator has been applied to ground station measurements, atmospheric reanalysis data, synoptic classification data and global data sets for the years 1964–2003. In addition, changes in atmospheric regional patterns between the first and last twenty years were determined by visual comparisons of their composite mean. The main findings of the analysis are: 1) changes of atmospheric conditions during summer and the transitional seasons (mainly autumn) support a warmer climate over the EM and this change is already statistically evident in surface temperatures having exhibited positive trends of 0.2–1°C/decade; 2) changes of atmospheric conditions during winter and the transitional seasons support drier conditions due to reduction in cyclogenesis and specific humidity over the EM, but this change is not yet statistically evident in surface station rain data, presumably because of the high natural precipitation variance masking such a change. The overall conclusion of this study is that the EM region is under climate change leading to warmer and drier conditions.
A new parameter is introduced: the lightning potential index (LPI), which is a measure of the potential for charge generation and separation that leads to lightning flashes in convective thunderstorms. The LPI is calculated within the charge separation region of clouds between 0°C and −20°C, where the noninductive mechanism involving collisions of ice and graupel particles in the presence of supercooled water is most effective. As shown in several case studies using the Weather Research and Forecasting (WRF) model with explicit microphysics, the LPI is highly correlated with observed lightning. It is suggested that the LPI may be a useful parameter for predicting lightning as well as a tool for improving weather forecasting of convective storms and heavy rainfall.
Flash floods cause some of the most severe natural disasters in Europe but Mediterranean areas are especially vulnerable. They can cause devastating damage to property, infrastructures and loss of human life. The complexity of flash flood generation processes and their dependency on different factors related to watershed properties and rainfall characteristics make flash flood prediction a difficult task. In this study, as part of the EU-FLASH project, we used an uncalibrated hydrological model to simulate flow events in a 27 km² Mediterranean watershed in Israel to analyze and better understand the various factors influencing flows. The model is based on the well-known SCS curve number method for rainfall-runoff calculations and on the kinematic wave method for flow routing. Existing data available from maps, GIS and field studies were used to define model parameters, and no further calibration was conducted to obtain a better fit between computed and observed flow data. The model rainfall input was obtained from the high temporal and spatial resolution radar data adjusted to rain gauges. Twenty flow events that occurred within the study area over a 15-year period were analyzed. The model shows a generally good capability in predicting flash flood peak discharge in terms of their general level, classified as low, medium or high (all high level events were correctly predicted). It was found that the model predicts flash floods generated by intense, short-lived convective storm events well, while model performance for low and moderate flows generated by more widespread winter storms was quite poor. The degree of urban development was found to have a large impact on runoff amount and peak discharge, with higher sensitivity of moderate and low flow events relative to high flows. Flash flood generation was also found to be very sensitive to the temporal distribution of rain intensity within a specific storm event. © 2010 Elsevier B.V.
Recharge is a critical issue for water management. Recharge assessment and the factors affecting recharge are of scientific and practical importance. The purpose of this study was to develop a daily recharge assessment model (DREAM) on the basis of a water balance principle with input from conventional and generally available precipitation and evaporation data and demonstrate the application of this model to recharge estimation in the Western Mountain Aquifer (WMA) in Israel. The WMA (area 13,000 km²) is a karst aquifer that supplies 360–400 Mm³ yr⁻¹ of freshwater, which constitutes 20% of Israel's freshwater and is highly vulnerable to climate variability and change. DREAM was linked to a groundwater flow model (FEFLOW) to simulate monthly hydraulic heads and spring flows. The models were calibrated for 1987–2002 and validated for 2003–2007, yielding high agreement between calculated and measured values (R² = 0.95; relative root-mean-square error = 4.8%; relative bias = 1.04). DREAM allows insights into the effect of intra-annual precipitation distribution factors on recharge. Although annual precipitation amount explains ∼70% of the variability in simulated recharge, analyses with DREAM indicate that the rainy season length is an important factor controlling recharge. Years with similar annual precipitation produce different recharge values as a result of temporal distribution throughout the rainy season. An experiment with a synthetic data set exhibits similar results, explaining ∼90% of the recharge variability. DREAM represents a significant improvement over previous recharge estimation techniques in this region by providing near-real-time recharge estimates that can be used to predict the impact of climate variability on groundwater resources at high temporal and spatial resolution.
This paper summarises innovative research into the assessment of long-term groundwater recharge from flood events in dryland environments of the Kuiseb (Namibia) and the Buffels (South Africa) rivers. The integrated water resource management (IWRM) policies and institutions affecting the exploitation of groundwater resources in each of these developing countries are compared. The relatively large alluvial aquifer of the Kuiseb River (~240 Mm³) is recharged from irregular floods originating in the upper catchment. Reported abstraction of 4.6 Mm³ per year is primarily consumed in the town of Walvis Bay, although the groundwater decay (pumping and natural losses over the period 1983–2005) was estimated at 14.8 Mm³ per year. Recharge is variable, occurring in 11 out of 13 years in the middle Kuiseb River, but only in 11 out of 28 years in the middle-lower reaches. In contrast, the Buffels River has relatively minor alluvial aquifers (~11 Mm³) and recharge sources derive from both lateral subsurface flow and floodwater infiltration, the latter limited to a recharge maximum of 1.3 Mm³ during floods occurring once every four years. Current abstractions to supply the adjacent rural population and a few small-scale, irrigated commercial farms are 0.15 Mm³ yr⁻¹, well within the long-term sustainable yield estimated to be 0.7 Mm³ yr⁻¹. Since independence in 1990, Namibia's water resource management approach has focussed on ephemeral river basin management of which the Kuiseb Basin Management Committee (KBMC) is a model. Here, some water points are managed independently by rural communities through committees while the national bulk water supplier provides for Walvis Bay Municipality from the lower aquifers. This provides a sense of local ownership through local participation between government, NGOs and CBOs (community-based organisations) in the planning and implementation of IWRM.
Despite the potential for water resource development in the lower Buffels River, the scope for implementing IWRM is limited not only by the small aquifer size, but also because basin management in South Africa is considered only in the context of perennial rivers. Since 2001, water service delivery in the Buffels River catchment has become the responsibility of two newly created local municipalities. As municipal government gains experience, skills and capacity, its ability to respond to local needs related to water service delivery will be accomplished through local participation in the design and implementation of annual 'integrated development plans'. These two case studies demonstrate that a variety of IWRM strategies in the drylands of developing countries are appropriate depending on scales of governance, evolving policy frameworks, scales of need and limitations inherent in the hydrological processes of groundwater resources.
Flood water infiltrates ephemeral channels, recharging local and regional aquifers, and it is the main water source in hyperarid regions. Quantitative estimations of these resources are limited by the scarcity of data from such regions. The floods of the Kuiseb River in the Namib Desert have been monitored for 46 years, providing a unique data set of flow hydrographs from one of the world's hyperarid regions. The study objectives were to: (1) subject the records to quality control; (2) model flood routing and transmission losses; and (3) study the relationships between flood characteristics, river characteristics and recharge into the aquifers. After rigorous quality-testing of the original gauge-station data, a flood-routing model based on kinematic flow with components accounting for channel-bed infiltration was constructed and applied to the data. A simplified module added to this routing model estimates aquifer recharge from the infiltrating flood water. Most of the model parameters were obtained from field surveys and GIS analyses. Two of the model parameters, Manning's roughness coefficient and the constant infiltration rate, were calibrated based on the high-quality measured flow data set, providing values of 0.025 and 8.5 mm/h, respectively. This infiltration rate is in agreement with that estimated from extensive direct TDR-based moisture measurements in the vadose zone under the Kuiseb River channel, and is low relative to those reported for other sites. The model was later verified with additional flood data and observed groundwater levels in boreholes. Sensitivity analysis showed the important role of large and medium floods in aquifer recharge.
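As a rough illustration of how a constant channel-bed infiltration rate translates into transmission losses, the sketch below multiplies the calibrated rate (8.5 mm/h) by an assumed wetted channel area and flow duration; the reach width, length and duration are hypothetical, not values from the study.

```python
def infiltration_volume_m3(rate_mm_per_h, width_m, length_m, duration_h):
    """Volume lost to channel-bed infiltration over a wetted reach,
    assuming a constant infiltration rate acting on the full area."""
    return rate_mm_per_h / 1000.0 * width_m * length_m * duration_h

# Hypothetical reach: 50 m active width, 10 km long, 24 h of flow
v = infiltration_volume_m3(8.5, 50.0, 10_000.0, 24.0)
print(f"{v:.0f} m^3")  # 102000 m^3, i.e. about 0.1 Mm^3
```

With these assumed dimensions, a single day-long flood loses on the order of 0.1 Mm³ to the bed, which is why long reaches absorb small and medium floods entirely.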
To generalize from the studied river to other streams with diverse conditions, we demonstrate that with increasing infiltration rate, channel length or active channel width, the relative contribution of high-magnitude floods to recharge also increases, whereas medium and small floods contribute less, often not reaching the downstream parts of the arid ephemeral river at all. For example, more than three-quarters of the floods reaching the downstream Kuiseb River (with an infiltration rate of 8.5 mm/h) would not have reached similar distances in rivers with all other properties similar but with infiltration rates of 50 mm/h. The recharge volume in the downstream segment in the case of higher infiltration is mainly contributed by floods with magnitude ≥93rd percentile, compared to floods in the 63rd percentile at an infiltration rate of 8.5 mm/h. © 2009 Elsevier B.V. All rights reserved.
Detailed hydrologic models require high-resolution spatial and temporal data. This study aims at improving the spatial interpolation of daily precipitation for hydrologic models. Different parameterizations of (1) inverse distance weighted (IDW) interpolation and (2) a local weighted regression (LWR) method, in which elevation is the explanatory variable and distance, elevation difference and aspect difference are weighting factors, were tested at a hilly setting in the eastern Mediterranean, using 16 years of daily data. The preferred IDW interpolation was better than the preferred LWR scheme in 27 out of 31 validation gauges (VGs) according to criteria aimed at minimizing the absolute bias and the mean absolute error (MAE) of estimations. The choice of the IDW exponent was found to be more important than the choice of whether or not to use elevation as explanatory data in most cases. The rank of preferred interpolators in a specific VG was found to be a stable local characteristic if a sufficient number of rainy days are averaged. A spatial pattern of the preferred IDW exponents was revealed. Large exponents (3) were more effective closer to the coastline whereas small exponents (1) were more effective closer to the mountain crest. This spatial variability is consistent with previous studies that showed smaller correlation distances of daily precipitation closer to the Mediterranean coast than at the hills, attributed mainly to relatively warm sea-surface temperature resulting in more cellular convection coastward. These results suggest that spatially variable, physically based parameterization of the distance weighting function can improve the spatial interpolation of daily precipitation.
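A minimal sketch of IDW interpolation as described above, where the exponent `p` plays the role of the IDW exponent the study tunes; the gauge coordinates and rain depths are hypothetical.

```python
import math

def idw(points, target, p=2.0):
    """Inverse distance weighted estimate at `target`.
    points: list of ((x, y), value); p: the IDW distance exponent."""
    num = den = 0.0
    for (x, y), value in points:
        d = math.hypot(x - target[0], y - target[1])
        if d == 0.0:
            return value  # target coincides with a gauge
        w = 1.0 / d ** p
        num += w * value
        den += w
    return num / den

# Hypothetical daily rain depths (mm) at three gauges
gauges = [((0, 0), 10.0), ((10, 0), 20.0), ((0, 10), 30.0)]
# A large exponent hugs the nearest gauge (the coastal behaviour found
# in the study); a small exponent smooths across distant gauges (crest).
print(idw(gauges, (2, 2), p=1.0))  # ~16.1
print(idw(gauges, (2, 2), p=3.0))  # ~11.1
```

The two calls show why the exponent choice matters more than the choice of covariates: it directly controls how local the estimate is.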
Quantitatively estimating rainfall-runoff relations in extremely arid regions is a challenging task, mainly because of lack of in situ data. For the past 40 years, rain and floods have been monitored in the Nahal Yael catchment (0.5 km²) in southern Israel, providing a unique data set of runoff hydrographs and rainfall in a hyper-arid region. Here we present an exploratory study focusing on rainfall-runoff modeling issues for a small (0.05 km²) sub-catchment of Nahal Yael. The event-based model includes the computation of rainfall excess, hillslope and channel routing. Two model parameters of the infiltration process were found by calibration. A resampling methodology of calibration group composition is suggested to derive optimal model parameters and their uncertainty range. Log-based objective functions were found to be more robust and less sensitive than non-log functions to calibration group composition. The fit achieved between observed and computed runoff hydrographs for the calibration and validation events is considered good relative to other modeling studies in arid and semi-arid regions. The study indicates that, under the calibration scheme used, a lumped model performs better than a model representing the catchment division into three sub-catchments. In addition, the use of rain data from several gauges improves runoff prediction as compared to input from a single gauge. It was found that rainfall uncertainty dominates uncertainties in runoff prediction while parameter uncertainties have only a minor effect. © 2009 Elsevier B.V. All rights reserved.
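The contrast between log-based and non-log objective functions can be sketched with a plain mean squared error; the runoff values below are invented for illustration, and `eps` is an assumed guard against zero flows.

```python
import math

def mse(obs, sim, use_log=False, eps=1e-6):
    """Mean squared error between observed and simulated runoff.
    use_log=True compares log-transformed flows, damping the dominance
    of the largest peaks (eps avoids log(0) on zero-flow records)."""
    if use_log:
        obs = [math.log(q + eps) for q in obs]
        sim = [math.log(q + eps) for q in sim]
    return sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs)

obs = [0.1, 0.5, 12.0]   # hypothetical runoff values, m^3/s
sim = [0.2, 0.4, 10.0]
print(mse(obs, sim))                 # dominated by the peak-flow error
print(mse(obs, sim, use_log=True))   # errors weighted more evenly
```

Because the log transform flattens the influence of rare large events, a calibration driven by it depends less on which events happen to fall in the calibration group, which is one plausible reading of the robustness result above.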
Flash-flood warning models can save lives and protect various kinds of infrastructure. In dry climate regions, rainfall is highly variable and can be of high intensity. Since rain gauge networks in such areas are sparse, rainfall information derived from weather radar systems can provide useful input for flash-flood models. This paper presents a flash-flood warning model which utilizes radar rainfall data and applies it to two catchments that drain into the dry Dead Sea region. Radar-based quantitative precipitation estimates (QPEs) were derived using a rain gauge adjustment approach, either on a daily basis (allowing the adjustment factor to change over time, assuming available real-time gauge data) or using a constant factor value (derived from rain gauge data) over the entire period of the analysis. The QPEs served as input for a continuous hydrological model that represents the main hydrological processes in the region, namely infiltration, flow routing and transmission losses. The infiltration function is applied in a distributed mode while the routing and transmission loss functions are applied in a lumped mode. Model parameters were found by calibration based on 5 years of data for one of the catchments. Validation was performed for a subsequent 5-year period for the same catchment and then for an entire 10-year record for the second catchment. The probability of detection and false alarm rates for the validation cases were reasonable. Probabilistic flash-flood prediction is presented applying Monte Carlo simulations with an uncertainty range for the QPEs and model parameters. With low probability thresholds, one can maintain more than 70% detection with no more than 30% false alarms. The study demonstrates that a flash-flood warning model is feasible for catchments in the area studied.
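The detection and false-alarm figures quoted above are standard contingency-table scores; a minimal sketch follows, with an invented validation record chosen to reproduce the 70%/30% example.

```python
def pod_far(events):
    """Probability of detection and false alarm ratio for warnings.
    events: list of (warned: bool, flood_occurred: bool) pairs."""
    hits = sum(1 for w, o in events if w and o)
    misses = sum(1 for w, o in events if not w and o)
    false_alarms = sum(1 for w, o in events if w and not o)
    pod = hits / (hits + misses)
    far = false_alarms / (hits + false_alarms)
    return pod, far

# Hypothetical record: 7 hits, 3 missed floods, 3 false alarms
events = [(True, True)] * 7 + [(False, True)] * 3 + [(True, False)] * 3
print(pod_far(events))  # (0.7, 0.3)
```

Lowering the probability threshold in a Monte Carlo scheme issues more warnings, trading a higher POD against a higher FAR, which is the trade-off the study reports.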
…previously applied in the Alps of Europe. Adjustment coefficients have been derived for 28 rainfall periods using 59 independent gauges of a quality-checked training data set. The validation was based on an independent data set composed of gauges located in eleven 20 × 20 km² validation areas, which are representative of different climate, topography and radar distance conditions. The WR and WMR methods were found preferable, with slightly better performance from the latter. Furthermore, a novel approach has been adopted in this study, whereby radar estimates are considered useable if they provide information that is better than gauge-only estimates. The latter were derived by spatial interpolation of the gauges belonging to the training data set. Note that these gauges are outside the validation areas. As for the radar-adjusted estimates, gauge-derived estimates were assessed against gauge data in the validation areas. It was found that radar-based estimates are better for the validation areas in the dry climate regime. At distances larger than 100 km, the radar underestimation becomes too large in the two northern validation areas, while in the southern one radar data are still better than gauge interpolation. It is concluded that in ungauged areas of Israel it is preferable to use WMR-adjusted (or alternatively, simply WR-adjusted) radar echoes rather than the standard bulk adjustment method, and for dry ungauged areas it is preferable over the conventional gauge-interpolated values derived from point measurements outside the areas themselves. The WR and WMR adjustment methods provide useful rain depth estimates for rainfall periods in the examined areas, but within the limitation stated above.
Analysis of extreme hydrometeorological events is important for characterizing and better understanding the meteorological conditions that can generate severe rainstorms and the consequent catastrophic flooding. According to several studies (e.g., Alpert et al., 2004; Wittenberg et al., 2007), the occurrence of such extreme events is increasing over the eastern Mediterranean although total rain amounts are generally decreasing. The current study presents an analysis of an extreme event utilizing different methodologies: (a) synoptic maps and high resolution satellite imagery for atmospheric condition analysis; (b) rainfall analysis by rain gauge data; (c) meteorological radar rainfall calibration and analysis; (d) field measurements for estimating maximum peak discharges; and, (e) high resolution aerial photographs together with field surveying for quantifying the geomorphic impacts. The unusual storm occurred over Israel between 30 March and 2 April, 2006. Heavy rainfall produced more than 100 mm in some locations in only a few hours and more than 200 mm in the major core area. Extreme rain intensities with recurrence intervals of more than 100 years were found for durations of 1 h and more as well as for the daily rain depth values. In the most severely affected area, Wadi Ara, extreme flash floods caused damages and casualties. Specific peak discharges were as high as 10–30 m³/s/km² for catchments of 1–10 km², values larger than any recorded floods in similar climatic regions in Israel.
Weather radar data contain detailed information about the spatial structures of rain fields previously unavailable from conventional rain gauge networks. This information is of major importance for enhancing our understanding of precipitation and hydrometeorological systems. This study focuses on spatial features of convective rain cells in southern Israel, where the climate ranges from Mediterranean to hyper-arid. Extensive databases from two study areas covered by radar systems were analyzed. Rain cell features were extracted such as center location, area, maximal rain intensity, spatial integral of rain intensity, major radius length, minor radius length, ellipticity, and orientation. Rain cells in the two study areas were compared in terms of feature distributions and the functional relationships between cell area and cell magnitude, represented by maximal rain intensity and spatial integral of rain intensity. Analytical distribution functions were fitted to the empirical distributions and the log-normal function was found to fit well the distributions of cell area, maximal rain intensity and major and minor radius lengths. The normal distribution fits the empirical ellipticity distribution well, and the orientation distribution was well-represented by the normal or uniform distribution functions. The effect of distance from the Mediterranean coastline on cell features was assessed. A maximum of cell rain intensity at the coastline and maximum cell density 15 km inland from the coastline were found. In addition, a gradual change of cell orientation was observed, from a northwest-southeast orientation 30 km offshore over the Mediterranean Sea to an almost west-east orientation 30 km inland from the coastline.
Weather radar systems provide detailed information on spatial rainfall patterns known to play a significant role in runoff generation processes. In the current study, we present an innovative approach to exploit spatial rainfall information of air mass thunderstorms and link it with a watershed hydrological model. Observed radar data are decomposed into sets of rain cells conceptualized as circular Gaussian elements and the associated rain cell parameters, namely, location, maximal intensity and decay factor, are input into a hydrological model. Rain cells were retrieved from radar data for several thunderstorms over southern Arizona. Spatial characteristics of the resulting rain fields were evaluated using data from a dense rain gauge network. For an extreme case study in a semi-arid watershed, rain cells were derived and fed as input into a hydrological model to compute runoff response. A major factor in this event was found to be a single intense rain cell (out of the five cells decomposed from the storm). The path of this cell near watershed tributaries and toward the outlet enhanced generation of high flow. Furthermore, sensitivity analysis to cell characteristics indicated that peak discharge could be a factor of two higher if the cell had been initiated just a few kilometers away.
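A circular Gaussian rain cell of the kind described above can be sketched directly from its three parameters (location, maximal intensity, decay factor); all numbers below are hypothetical, and the total field is taken as the superposition of the cells.

```python
import math

def cell_intensity(x, y, cx, cy, rmax, decay):
    """Rain rate (mm/h) at (x, y) from a circular Gaussian cell centred
    at (cx, cy) with peak intensity rmax and spatial decay factor."""
    d2 = (x - cx) ** 2 + (y - cy) ** 2
    return rmax * math.exp(-decay * d2)

# Hypothetical storm decomposed into two cells: (cx, cy, rmax, decay)
cells = [(0.0, 0.0, 60.0, 0.05), (4.0, 1.0, 30.0, 0.10)]
rate = sum(cell_intensity(2.0, 1.0, *c) for c in cells)
print(rate)  # ~66.8 mm/h at the sampled point
```

Shifting a cell's centre `(cx, cy)` relative to the watershed tributaries is exactly the kind of sensitivity experiment the study performs on peak discharge.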
A spatial rainfall model was applied to radar data of air mass thunderstorms to yield a rainstorm representation as a set of convective rain cells. The modeled rainfall was used as input into a hydrological model, instead of the standard radar-grid data. This approach allows a comprehensive linkage between runoff responses and rainfall structures.
Radar-based estimates of rainfall rates and accumulations are one of the principal tools used by the National Weather Service (NWS) to identify areas of extreme precipitation that could lead to flooding. Radar-based rainfall estimates have been compared to gauge observations for 13 convective storm events over a densely instrumented, experimental watershed to derive an accurate reflectivity–rainfall rate (i.e., Z–R) relationship for these events. The resultant Z–R relationship, which is much different than the NWS operational Z–R, has been examined for a separate, independent event that occurred over a different location. For all events studied, the NWS operational Z–R significantly overestimates rainfall compared to gauge measurements. The gauge data from the experimental network, the NWS operational rain estimates, and the improved estimates resulting from this study have been input into a hydrologic model to “predict” watershed runoff for an intense event. Rainfall data from the gauges and from the derived Z–R relation produce predictions in relatively good agreement with observed streamflows. The NWS Z–R estimates lead to predicted peak discharge rates that are more than twice as large as the observed discharges. These results were consistent over a relatively wide range of subwatershed areas (4–148 km2). The experimentally derived Z–R relationship may provide more accurate radar estimates for convective storms over the southwest United States than does the operational convective Z–R used by the NWS. These initial results suggest that the generic NWS Z–R relation, used nationally for convective storms, might be substantially improved for regional application. | <urn:uuid:a4347201-aee9-4250-8647-f8489888dcdf> | 2.90625 | 5,665 | Truncated | Science & Tech. | 24.160019 | 95,513,353 |
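Radar reflectivity is converted to rain rate by inverting a Z–R power law. The sketch below uses the widely cited NWS operational convective coefficients, Z = 300·R^1.4, purely for illustration; the experimentally derived coefficients from the study are not quoted in the excerpt above.

```python
def rain_rate(dbz, a=300.0, b=1.4):
    """Invert the power law Z = a * R**b to get rain rate R in mm/h.
    Defaults are the NWS operational convective Z-R coefficients."""
    z = 10.0 ** (dbz / 10.0)   # reflectivity factor Z from dBZ
    return (z / a) ** (1.0 / b)

for dbz in (30, 40, 50):
    print(dbz, "dBZ ->", round(rain_rate(dbz), 1), "mm/h")
```

Because the relation is a power law, a modest change in `a` or `b` compounds into large rainfall (and hence runoff) differences, which is why the operational Z–R overestimated the gauge-measured rainfall so strongly in the events studied.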
Pendulum rocket fallacy is a common fundamental misunderstanding of the mechanics of rocket flight and how rockets remain on a stable trajectory. The first liquid-fuel rocket, constructed by Robert Goddard in 1926, differed significantly from modern rockets in that the rocket engine was at the top and the fuel tank at the bottom of the rocket. It was believed that, in flight, the rocket would "hang" from the engine like a pendulum from a pivot, and the weight of the fuel tank would be all that was needed to keep the rocket flying straight up. This belief is incorrect: such a rocket will turn and crash into the ground soon after launch, which is what happened to Goddard's rocket.
Can absolute zero ever be achieved? Is this a theoretical kinetic energy?
It's more a theoretical lack of energy. Although we've come very close to it with liquid helium, I don't think that it's practically attainable. You would need, to start with, a perfect insulator, which doesn't exist.
Isn't the point of the third law of thermodynamics that it is impossible to reach absolute zero?
Correct, at least the way that I learned the 4 laws.
1) No matter how hard you try, the best that you can do is break even.
2) You can only break even at absolute zero.
3) Absolute zero is impossible to attain.
4) No matter how hard you shake it, the last drop always goes down your pants.
Number 4... ROFL
I hate that SO much. :grumpy:
If absolute zero is impossible to attain, then is it not found anywhere in the universe? Is it something that just cannot be broken, in terms of going any lower?
if absolute zero is obtained the universe will collapse into itself...
I'm not so sure about the collapsing part, but the universe as a whole is still permeated by the cosmic microwave background 'noise' from the Big Bang. That's something like 3 degrees K. To attain absolute zero, you'd have to isolate a container of some type, and then pump out those 3 degrees. I'm not saying categorically that it's impossible, because technology continues to take me by surprise, but our current methods aren't up to it.
You would also need to have perfect insulation would you not?
As mentioned in post #2, yes. I don't believe (just my opinion) that there can be such a thing, given quantum fluctuations and whatnot.
I don't remember the Laws of Thermodynamics forbidding absolute zero, but either way it's still impossible to attain. To do so would violate the Heisenberg Uncertainty Principle, since if a particle were at absolute zero you would be able to learn its precise position and velocity.
I never thought of it that way, but it makes sense. Its energy and movement would both be '0'.
and if you get particles to stop moving, what do you think will happen to everything around it...
Really in order for absolute zero to be achieved, which it isn't but hypothetically here, the matter being put at absolute zero would have to be secluded from all other matter and shielded from radiation. Even if it wasn't it just means that you'll start slowly cooling other stuff too.
Bingo! And by cooling other stuff, you're gaining heat from it.
Therefore it's impossible to reach absolute zero until all atomic motion in the system (universe) has been stopped.
As has been said before, perfect insulators don't exist. If one atom isn't moving and there's an atom that is moving near it, the one that isn't moving will steal some of the energy from the moving atom and start to move.
It's fundamentally impossible to get any part of the universe to reach absolute zero, the part about needing the insulation was just a matter of hypothetical thinking.
Theoretically? Yes you can reach it if you have a perfect insulator (which is impossible)
and because there does not exist a perfect insulator in the universe, as long as there is heat somewhere in the universe, it's not possible. Besides, the moment you try to measure it to make sure all motion has stopped, you will have inadvertently heated it back up again. You'd be stuck with a container that you cannot touch, pointing at it and screaming eureka with absolutely no way to prove there's anything inside to begin with
Photons have a spin of one, as electrons and protons have a spin of one-half; these quantum phenomena no amount of cooling can extinguish. I think it safe to assume that these residual spins involve a minimum energy (and thus temperature) greater than zero. My guess is that although absolute zero may exist at certain singularities, the very attempt to measure it would cause heating.
Having used vortex tube coolers, and having found a refrigeration manual that explained in detail how they work, my question is: would the very center of a high-velocity vortex produce a very small center point of zero condition?
My chemistry teacher said that some institute came very close to reaching absolute zero...but all the same, it will be impossible to ever attain absolute zero.
I really think that it's time for one of the 'gurus' to get involved. Someone with professional knowledge such as Zapper Z, Arildno, or Astronuc can probably put this thing to bed without us having to speculate further.
Post edited appropriately.
Bodies of radiation are also covered by the same kind of reasoning. More recently, it has been recognized that the quantity 'entropy' can be derived by considering the actually possible thermodynamic processes simply from the point of view of their irreversibility, not relying on temperature for the reasoning. For example, consider a room containing a glass of melting ice as one system. The difference in temperature between the warm room and the cold glass of ice and water is equalized as heat from the room is transferred to the cooler ice and water mixture.
Over time the temperature of the glass and its contents and the temperature of the room achieve balance. The entropy of the room has decreased. However, the entropy of the glass of ice and water has increased more than the entropy of the room has decreased. Thus, when the system of the room and ice water system has reached temperature equilibrium, the entropy change from the initial state is at its maximum. There are many irreversible processes that result in an increase of the entropy.
One of them is mixing of two or more different substances, occasioned by bringing them together by removing a wall that separates them, keeping the temperature and pressure constant. The entropy increment for a reversible process is dS = δQ/T, with T being the uniform temperature of the closed system and δQ the incremental reversible transfer of heat energy into that system. In classical thermodynamics the entropy of the reference state can be put equal to zero at any convenient temperature and pressure. For example, for pure substances, one can take the entropy of the solid at the melting point at 1 bar equal to zero.
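The melting-ice example can be made quantitative with dS = δQ/T, treating the room and the ice-water mixture as reservoirs at fixed temperatures; the mass of ice and the room temperature below are assumed figures for illustration.

```python
M_ICE = 0.1                      # kg of melting ice (assumed)
L_FUSION = 334_000.0             # latent heat of fusion of water, J/kg
T_ICE, T_ROOM = 273.15, 293.15   # ice-water mixture and room, in kelvin

q = M_ICE * L_FUSION             # heat flowing from the room into the ice
ds_ice = q / T_ICE               # entropy gained by the glass of ice water
ds_room = -q / T_ROOM            # entropy lost by the warmer room
ds_total = ds_ice + ds_room
print(ds_total)                  # positive: the glass gains more than the room loses
```

Because the same heat q is divided by a smaller temperature for the ice than for the room, ds_ice exceeds |ds_room| and the total entropy rises, exactly as the text describes.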
London: China and Europe are collaborating on the launch of a low-cost space mission.
The proposed project, called Discovering the Sky at the Longest Wavelengths (DSL), is one of around 15 submitted for a call by the European Space Agency (ESA) and the Chinese Academy of Sciences (CAS) that concluded recently, Nature reported.
The DSL project would use the satellite array to detect radio signals with wavelengths a few hundred metres in length.
"The call is a win-win situation for China and the EU," Taotao Fang, an astronomer at the Xiamen University in China, who is part of the team that proposed the MESSIER orbiter, was quoted as saying.
The MESSIER is a mini satellite aimed at studying galaxy formation.
When passing around the Moon's dark side, the telescope might be able to spot signatures of hydrogen thought to have existed from 370,000 years to 550 million years after the Big Bang.
The final mission will be led by principal investigators affiliated with both European and Chinese institutions, with an aim to launch in 2021.
Other proposals include an X-ray imager called SMILE that would study Earth's magnetosphere, and SIRIUS, an extreme-ultraviolet telescope that would look at 'hot objects', such as stellar coronae, in the Galaxy.
EU scientists had contributed to the payload of China's Double Star mission, which launched in 2003 to study the near-Earth environment.
But it will be the first jointly-run project set up in collaboration from the start. | <urn:uuid:ba374ad3-10c4-4623-abc7-6ca01ce0d706> | 3.203125 | 325 | News Article | Science & Tech. | 43.387911 | 95,513,395 |
"The energy crisis is really nothing new," says Dr. E.A. Farber, solar expert and Director of the Solar Energy & Energy Conversion Laboratory at the University of Florida in Gainesville. "We were already running short of fossil fuels, our so-called 'conventional' sources of power, 40 years ago. It just hadn't come to the public's attention at that time."
Maybe not, but the world's supply of energy and its relationship to the development of nations most certainly had already come to the attention of a few farsighted individuals back there in the early 1930's, and Austrian-born Erich Farber was one of them.
Four decades ago, while still a high school student, Farber observed that the countries and civilizations which controlled the most energy—and used it—were the nations and cultures that also advanced most rapidly. Young Erich further noted that the power providing this advantage came mainly from the fossil fuels—gas, coal, and oil—which obviously (to him, at least) would one day be exhausted.
"This led me directly to solar energy," Dr. Farber says. "I thought of the planet's human population as a family trying to live off its savings (fossil fuels) which were stored in a bank (underground) and which were being steadily depleted. This, of course, cannot go on indefinitely. Sooner or later that family has to begin living on its income, sooner or later we have to make do with the amount of renewable, incoming energy we receive. After mulling over the possibilities of wind, geothermal, tidal, and other sources of power—all very good when the conditions are right for their use—I realized that the sun alone offered the resource I was seeking. Solar energy is readily available, well distributed, inexhaustible for all practical purposes, and does not pollute the environment when converted and utilized."
Farber developed his ideas as he received the major part of his education in Europe and during the time he studied at the Universities of Missouri and Iowa. He further honed his keen interest in solar power while teaching at the Universities of Missouri, Iowa, and Wisconsin. By the time he moved to Gainesville—20 years ago—to instruct at the University of Florida, Erich was quite possibly the planet's most enthusiastic and knowledgeable authority on the subject. Little wonder that the University of Florida's Solar Energy Lab is one of the largest and most advanced facilities of its kind in the world.
The UF solar energy installation is especially interesting because of its emphasis on working hardware. Ever think of building a solar energy collector or sun-operated water heater, stove or still? How about a solar turbine, steam engine, refrigerator, or air conditioner? An electric car which has its batteries recharged by the sun? Or a "solar gravity" motor or a sewage treatment plant that uses Ole Sol's rays to double its processing capacity? All old hat to Farber, his staff of ten and the students who attend the three classes conducted by the Solar Energy Lab.
And don't think you can't duplicate UF's success just because you live in Minnesota or British Columbia. Farber believes that, "Florida isn't any better than many other areas of the earth for solar energy collection. Look at the Weather Bureau's data and you'll be surprised at how evenly this source of power is distributed. Pick practically any point on the face of the planet and, if people live there, the chances are very good that the surrounding region receives meaningful amounts of sunshine."
Yes, but is it practical to try to utilize the sun's rays to heat our houses, drive our engines, cook our food, and otherwise power the industrialized society in which we live? "It depends on what you mean by 'practical'," says Dr. Farber. "What's practical to one man is not practical to another."
"We now know how to use the sun to provide all the forms of energy which we need in our daily lives. We can warm a house, heat water, air-condition buildings, produce electricity and so on. We've already done these things. We've even converted a Corvair automobile to run on batteries which can be charged by solar cells. Theoretically, at least, we could replace our present fossil-fueled transportation system with a sun-powered electrical system. Instead of gasoline stations, you'd drive your car into solar battery-charging stations. The attendants there would lift out your discharged batteries, give you a freshly charged set, and you'd drive right on.
"Now this would be a very 'practical' way of doing business if you started it from scratch. It's pollution-free and it makes use of a virtually inexhaustible energy resource. But the point is that we're not starting from scratch. We're already set up to power our personal transportation with gasoline, and as long as we have gasoline and the government doesn't ration it, I'm sure we'll find it more 'practical' to keep right on using gasoline until we run out."
Maybe so. But in the meantime, Dr. Farber and his staff and the students in the classes conducted at the Solar Energy Lab are going to continue right on developing and operating sun-powered hardware of many kinds.
This is really a huge subject; only a short introduction will be given here. Essential quantities and notions will be introduced and explained. We will be mainly concerned with paramagnetism (induced magnetic moment is directed along the applied magnetic field), diamagnetism (opposite to the field) and also ferro- and antiferromagnetism, where spontaneous magnetisation (without any applied field) and other interesting properties are observed. We shall start by recalling several important points from classical electrodynamics and quantum mechanics related to magnetism (see e.g. [55, 14]).
Keywords: Magnetic Field; Partition Function; Applied Magnetic Field; Free Energy Density; Spontaneous Magnetisation
Professor Carla P. Gomes, of the faculty of Computing and Information Science and director of the Institute for Computational Sustainability, is a pioneer in the field of computational sustainability.
In 1987, a UN report first raised concerns about human impact on the planet. A follow-up report showed, for example, that the biomass of fish is 10% of what it was 50 years ago. We're overharvesting our planet and overusing our resources. A 2009 report looked at whether or not we've crossed the tipping point, and the picture was grim. All of this inspired Professor Gomes to do further research in this area, to see how computer science could help reverse the tide. She strongly believes that computer scientists can, and should, play a key role in increasing the efficiency with which we manage natural resources.
Computational sustainability encompasses many disciplines: economics, sociology, environmental sciences and engineering, biology, crop and soil science, meteorology and atmospheric science. There is a need to develop computational methods to model problems in these fields, which will help resolve them. This cross-disciplinary approach lets all the fields learn new research models from each other, which is helping the area progress.
One problem this field is addressing is wildlife corridors, which link biological areas and allow animals to move between them. One of the issues here is that, while the corridors are important for the animals, there isn't usually much money available to buy land to set them up so that animals in different national preserves can cross-populate. This is a computational problem: find the cheapest path through a graph of land parcels connecting the two places. While this is an NP-hard problem, the computer scientists can simplify it by using the Min Cost Steiner Tree. Models are critically important in solving these problems and in addressing issues of scale.
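The cheapest-path piece of the corridor problem can be sketched with Dijkstra's algorithm. This is only a toy illustration — the parcel graph and acquisition costs below are invented, and the real work she described uses a Min Cost Steiner Tree formulation to connect more than two reserves:

```python
import heapq

def cheapest_corridor(graph, start, goal):
    """Dijkstra's algorithm: minimum-cost chain of parcels linking
    two reserves in a weighted parcel-adjacency graph."""
    dist, prev = {start: 0}, {}
    pq = [(0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for nbr, cost in graph[node].items():
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr], prev[nbr] = nd, node
                heapq.heappush(pq, (nd, nbr))
    path, node = [goal], goal
    while node != start:  # walk back to recover the corridor
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[goal]

# Hypothetical reserves A and E, intermediate parcels B, C, D;
# the weights are invented acquisition costs for each link.
parcels = {
    "A": {"B": 4, "C": 1},
    "B": {"A": 4, "E": 2},
    "C": {"A": 1, "D": 1},
    "D": {"C": 1, "E": 3},
    "E": {"B": 2, "D": 3},
}
path, cost = cheapest_corridor(parcels, "A", "E")
print(path, cost)  # ['A', 'C', 'D', 'E'] 5
```

Here the direct-looking route via B costs 6, while the longer chain through C and D costs only 5 — exactly the kind of cheaper corridor the optimization is after.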
This approach allows them to handle large problems and reduce corridor cost dramatically, allowing the projects to actually proceed as opposed to being ignored or done with too much expense or in a sub-par fashion that won't help the animals as much as possible. Her work has been done for grizzly bears and wolverines.
Now she is working on assisting the recovery of a subspecies of woodpecker by analyzing network cascades. They are buying up the land where the birds fly, then looking at the birds' flight patterns and buying nearby land, which will help the birds spread their territory and lead to increased population. The complicated issue is figuring out which land the birds will choose to spread to.
Further consideration is necessary for species interaction, as not all species interact in a cooperative manner.
They are getting help from the eBird project, at Cornell, which allows average folks to submit data about bird sightings. This helps them to learn where the birds are migrating and how long they spend in various areas.
Many of these concepts can also be applied to analyzing solutions to problems faced by very impoverished communities. For example, what will be more valuable to the impoverished: a chicken, improved roadways, or cell phones?
Back to the problem of overfishing: it seems to be caused by mismanagement. Professor Gomes is looking at models to help correct this mismanagement without causing any additional problems. Even after they figure out recommendations, they need to get the fisheries to implement them. It is difficult to convince fishery owners that periodically closing the fisheries will actually lead to more fish when they reopen - you gotta give them time to reproduce and reach reproductive age!
Another thing her team is studying is the impact of fertilizers. While they do greatly increase the amount of food that can be harvested, they end up creating dead zones. On top of all that, they are also studying how to discover materials for fuel cell technology! These, again, Professor Gomes claims are problems for computer scientists.
Professor Gomes's research area is so incredibly broad! She shared with us, more quickly than I could capture, many of the different algorithms and approaches they are using to solve these problems. I got a great mini-introduction to all sorts of algorithms and data structures I'd never heard of before, like spatially balanced Latin squares! She is an amazingly energetic, intelligent and passionate technical speaker and I think I could spend an entire day listening to her!
Global carbon dioxide emissions from burning fossil fuels have increased by 49 per cent in the last two decades, according to the latest figures by an international team, including researchers at the Tyndall Centre for Climate Change Research, University of East Anglia (UEA).
Published today in the journal Nature Climate Change, the new analysis by the Global Carbon Project shows fossil fuel emissions increased by 5.9 per cent in 2010 and by 49 per cent since 1990, the reference year for the Kyoto Protocol.
On average, fossil fuel emissions rose by 3.1 per cent each year between 2000 and 2010, three times the rate of increase during the 1990s. They are projected to increase by a further 3.1 per cent in 2011.
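These figures are mutually consistent, which a quick compounding check shows (the ~1 per cent per year for the 1990s is inferred from the "three times the rate" statement, not a quoted figure):

```python
# Plausibility check on the reported growth rates. The article gives
# 3.1%/yr for 2000-2010 and says that is three times the 1990s rate,
# implying roughly 1%/yr for 1990-2000 (an inference, not a quoted figure).
growth_1990s = 1.010 ** 10   # one decade at ~1.0% per year
growth_2000s = 1.031 ** 10   # one decade at 3.1% per year
total_rise = growth_1990s * growth_2000s - 1
print(f"implied rise since 1990: {total_rise:.0%}")  # close to the reported 49%
```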
Total emissions - which combine fossil fuel combustion, cement production, deforestation and other land use emissions - reached 10 billion tonnes of carbon in 2010 for the first time. Half of the emissions remained in the atmosphere, where the CO2 concentration reached 389.6 parts per million. The remaining emissions were taken up by the ocean and land reservoirs, in approximately equal proportions.
Rebounding from the global financial crisis of 2008-09 when emissions temporarily decreased, last year's high growth was caused by both emerging and developed economies. Rich countries continued to outsource part of their emissions to emerging economies through international trade.
Contributions to global emissions growth in 2010 were largest from China, the United States, India, the Russian Federation and the European Union. Emissions from the trade of goods and services produced in emerging economies but consumed in the West increased from 2.5 per cent of the share of rich countries in 1990 to 16 per cent in 2010.
In the UK, fossil fuel CO2 emissions grew 3.8 per cent in 2010 but were 14 per cent below their 1990 levels. However, emissions from the trade of goods and services grew from 5 per cent of locally produced emissions in 1990 to 46 per cent in 2010 - more than offsetting the reductions in local emissions. When emissions from trade are taken into account, UK emissions were 20 per cent above their 1990 levels.
"Global CO2 emissions since 2000 are tracking the high end of the projections used by the Intergovernmental Panel on Climate Change, which far exceed two degrees warming by 2100," said co-author Prof Corinne Le Quéré, director of the Tyndall Centre for Climate Change Research and professor at the University of East Anglia. "Yet governments have pledged to keep warming below two degrees to avoid the most dangerous aspects of climate change such as widespread water stress and sea level rise, and increases in extreme climatic events.
"Taking action to reverse current trends is urgent."
Lead author Dr Glen Peters, of the Centre for International Climate and Environmental Research in Norway, said: "Many saw the global financial crisis as an opportunity to move the global economy away from persistent and high emissions growth, but the return to emissions growth in 2010 suggests the opportunity was not exploited."
Co-author Dr Pep Canadell, executive director of the Global Carbon Project, added: "The global financial crisis has helped developed countries meet their production emission commitments as promised in the Kyoto Protocol and Copenhagen Accord, but its impact has been short-lived and pre-existing challenges remain."
'Rapid growth in CO2 emissions after the 2008-2009 global financial crisis', Nature Climate Change, 4 December 2011
Iron in Minerals and the Formation of Rust in Stone
The mean iron content of the earth's crust is 5%. Iron is locked in ferromagnesian silicates in rocks at the earth's surface, mostly as green or black ferrous-ferric iron. The black ferrous-ferric oxide is magnetite and the red ferric oxide is hematite, while the brass-yellow ferrous sulfides are commonly cubic pyrite and orthorhombic, spearhead-shaped marcasite. Iron also appears as white to dark brown ferrous carbonate (siderite) and as the green iron silicate glauconite, which adds a greenish color to sedimentary rocks (Sect. 4.4).
Keywords: Metallic Iron, Iron Mineral, Pyrite Oxidation, Ferric Hydroxide, Stone Surface
in physics, the effect produced by the combination or superposition of two systems of waves, in which these waves reinforce, neutralize, or in other ways interfere with each other. Interference is observed in both sound waves and electromagnetic waves, especially those of visible light and radio.
When two sound waves occur at the same time and are in the same phase, i.e., when the condensations of the two coincide and hence their rarefactions also, the waves reinforce each other and the sound becomes louder. This is known as constructive interference. On the other hand, two sound waves occurring simultaneously and having the same intensity neutralize each other if the rarefactions of the one coincide with the condensations of the other, i.e., if they are of opposite phase. This canceling is known as destructive interference. In this case, the result is silence.
Alternate reinforcement and neutralization (or weakening) take place when two sound waves differing slightly in frequency are superimposed. The audible result is a series of pulsations or, as these pulsations are called commonly, beats, caused by the alternate coincidence of first a condensation of the one wave with a condensation of the other and then a condensation with a rarefaction. The beat frequency is equal to the difference between the frequencies of the interfering sound waves.
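A minimal numerical sketch of beats (the 440/444 Hz tone pair is a made-up example; any two nearby frequencies behave the same way):

```python
import numpy as np

# Two tones close in frequency: their sum pulses at the difference
# frequency, as described above.
f1, f2 = 440.0, 444.0
beat_frequency = abs(f1 - f2)   # 4 beats per second

t = np.linspace(0.0, 1.0, 44100, endpoint=False)   # one second of samples
combined = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# The identity sin(a) + sin(b) = 2*cos((a-b)/2)*sin((a+b)/2) shows the
# sum is a 442 Hz tone with a slowly varying amplitude envelope that
# rises and falls beat_frequency times per second:
envelope = 2 * np.cos(np.pi * (f1 - f2) * t)
```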
Light waves reinforce or neutralize each other in very much the same way as sound waves. If, for example, two light waves each of one color (monochromatic waves), of the same amplitude, and of the same frequency are combined, the interference they exhibit is characterized by so-called fringes—a series of light bands (resulting from reinforcement) alternating with dark bands (caused by neutralization). Such a pattern is formed either by light passing through two narrow slits and being diffracted (see diffraction), or by light passing through a single slit. In the case of two slits, each slit acts as a light source, producing two sets of waves that may combine or cancel depending upon their phase relationship. In the case of a single slit, each point within the slit acts as a light source. In all cases, for light waves to demonstrate such behavior, they must emanate from the same source; light from distinct sources has too many random differences to permit interference patterns.
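The positions of the bright bands for two slits follow from the path-difference condition d*sin(theta) = m*lambda; in the small-angle case the bands are evenly spaced on the screen. The numbers below are hypothetical, chosen only to give a visible fringe spacing:

```python
# Bright bands (reinforcement) occur where the path difference from
# the two slits is a whole number of wavelengths: d*sin(theta) = m*lambda.
# For small angles the m-th bright band sits at y_m ~= m*lambda*L/d.
# Hypothetical numbers: green light, classroom-scale geometry.
wavelength = 550e-9    # m
slit_sep   = 0.10e-3   # m, centre-to-centre slit separation
screen     = 2.0       # m, slit-to-screen distance

fringe_spacing = wavelength * screen / slit_sep
bright_bands = [m * fringe_spacing for m in range(4)]  # y_0 .. y_3
print(f"spacing between bright bands: {fringe_spacing * 1e3:.1f} mm")  # 11.0 mm
```

Because the spacing scales with wavelength, white light spreads each band into a small spectrum, which is the colour-fringe effect described below.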
The relative positions of light and dark lines depend upon the wavelength of the light, among other factors. Thus, if white light, which is made up of all colors, is used instead of monochromatic light, bands of color are formed because each color, or wavelength, is reinforced at a different position. This fact is utilized in the diffraction grating, which forms a spectrum by diffraction and interference of a beam of light incident on it. Newton's rings also are the result of the interference of light. They are formed concentrically around the point of contact between a glass plate and a slightly convex lens set upon it or between two lenses pressed together; they consist of bright rings separated by dark ones when monochromatic light is used, or of alternate spectrum-colored and black rings when white light is used. Various natural phenomena are the result of interference, e.g., the colors appearing in soap bubbles and the iridescence of mother-of-pearl and other substances.
The experiments of Thomas Young first illustrated interference and definitely pointed the way to a wave theory of light. A. J. Fresnel's experiments clearly demonstrated that the interference phenomena could be explained adequately only upon the basis of a wave theory. The thickness of a very thin film such as the soap-bubble wall can be measured by an instrument called the interferometer. When the wavelength of the light is known, the interferometer indicates the thickness of the film by the interference patterns it forms. The reverse process, i.e., the measurement of the length of an unknown light wave, can also be carried out by the interferometer.
The Michelson interferometer used in the Michelson-Morley experiment of 1887 to detect differences in the velocity of light along different directions had a half-silvered mirror to split an incident beam of light into two parts at right angles to one another. The two halves of the beam were then reflected off mirrors and rejoined. Any difference in the speed of light along the paths could be detected by the interference pattern. The failure of the experiment to detect any such difference threw doubt on the existence of the ether and thus paved the way for the special theory of relativity.
Another type of interferometer devised by Michelson has been applied in measuring the diameters of certain stars. The radio interferometer consists of two or more radio telescopes separated by fairly large distances (necessary because radio waves are much longer than light waves) and is used to pinpoint and study various celestial sources of radiation in the radio range. Astronomical interferometers consisting of two or more optical telescopes are used to enhance visible images of distant celestial objects. See radio astronomy; virtual telescope.
The unstretched length of the cord is 0.500 m and its mass is 5.00 g. The "spring constant" for the cord is 100 N/m. The block is released and stops at the lowest point.
a) determine the tension in cord when block is at lowest point.
I'm not sure but I do know that
Sum F= T-mg= 0 at lowest point.
but is the T= mg and that's it?
b) what is length of cord in stretched position?
I think but I'm not sure that I can find it by using
v = sqrt(T/mu) = sqrt(mgL/m_block)
However I don't think I can use this since I don't have v or L.
c) find the speed of a transverse wave in the cord when the block is at the lowest position.
What I thought the transverse speed came from was the equation:
vy = -omega*A cos(kx - omega*t), but I don't think I can use the k that was given since it's not the same k, and I don't think I have omega either, so how would I find the transverse wave speed?
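One consistent way through parts (a)-(c) is energy conservation plus v = sqrt(T/mu). Note the block's mass is not given in the excerpt above, so the value below is an assumption made only to produce concrete numbers:

```python
import math

# A sketch of parts (a)-(c). The block's mass is not given in the
# excerpt, so m_block below is an ASSUMED value.
k       = 100.0     # N/m, cord "spring constant"
L0      = 0.500     # m, unstretched length of the cord
m_cord  = 5.00e-3   # kg, mass of the cord
m_block = 2.00      # kg  <-- assumption, not from the problem statement
g       = 9.80      # m/s^2

# (a) Released from the unstretched position, the block is momentarily
# at rest at the lowest point, so energy conservation gives
#     m*g*x = (1/2)*k*x**2   =>   x = 2*m*g/k
# and the tension there is T = k*x = 2*m*g. Note T != m*g: the block
# is instantaneously at rest but accelerating upward, so the net
# force on it is not zero.
x = 2 * m_block * g / k
T = k * x

# (b) stretched length of the cord at the lowest point
L = L0 + x

# (c) transverse wave speed v = sqrt(T/mu), with mu the mass per unit
# length of the *stretched* cord
mu = m_cord / L
v = math.sqrt(T / mu)
print(f"T = {T:.1f} N, L = {L:.3f} m, v = {v:.1f} m/s")
# -> T = 39.2 N, L = 0.892 m, v = 83.6 m/s (for the assumed 2.00 kg block)
```

So the wave-on-a-string relation v = sqrt(T/mu) is the right tool for part (c); the vy = -omega*A*cos(kx - omega*t) expression is the transverse *particle* velocity, which is a different quantity.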
A University of Michigan researcher is developing a unique way to reconcile these crucial data.
"If we're going to adapt to climate change, we need to be able to predict what the climate will be," said Anna Michalak, assistant professor in the Department of Civil and Environmental Engineering and the Department of Atmospheric, Oceanic and Space Sciences. "We want to know how the sources and sinks of carbon will evolve in the future, and the only way we can manage climate change is with scientific information."
Michalak is discussing the work at the symposium "Improving Understanding of Carbon Flux Variability Using Atmospheric Inverse Modeling" Sunday at the American Association for the Advancement of Science annual meeting here. She co-organized the session, "The Carbon Budget: Can We Reconcile Flux Estimates?" with Joyce Penner, a professor in the Department of Atmospheric, Oceanic and Space Sciences.
For some 50 years, scientists have measured the amount of carbon dioxide in the air on a large scale, at an increasing number of locations sprinkled across the globe, and by sampling very small areas. Together with inventories of fossil fuel use, that's given good data about how much carbon is being pumped into the atmosphere---currently approximately 8 billion tons a year.
It's also known that half of that stays in the atmosphere. The rest comes to rest in the oceans, the earth, or is gobbled up by plants during photosynthesis.
But then the data gets harder to come by, and scientists have had to make some assumptions. Flux towers, which measure carbon exchange at fixed sites, cover only a few places on Earth, and it's too cumbersome to collect data on small areas everywhere else. Even a powerful new tool Michalak will be using---NASA's Orbiting Carbon Observatory (OCO), a satellite designed to monitor atmospheric carbon---does not paint a perfect picture. She compares the thin data strips it harvests to wrapping a basketball with floss.
The problem: Michalak said the data takes such a big-picture approach that it is difficult to isolate carbon being emitted or taken up in specific regions, or even countries. Scientists are left with an understanding of carbon sources that isn't nimble enough to understand the variability, or to be confident about predicting the future.
Michalak has developed a robust way to use available data to understand this variability called "geostatistical inverse modeling." This method breaks the globe into small regions and examines how much CO2 must have been emitted in each region to achieve the concentrations measured at atmospheric sample points. This method also allows her and her collaborators to use information from other existing satellites that measure the Earth's surface to supplement the information from the atmospheric monitoring network. Eventually, this method aims to trace the carbon levels at each sample point to a particular source or sink on the surface.
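The core idea — working backwards from mixed atmospheric concentrations to regional sources — can be shown with a toy linear inversion. All the numbers below are invented, and real geostatistical inverse modeling also brings in spatial covariance information that this sketch omits:

```python
import numpy as np

# Toy linear inversion: atmospheric mixing (H) smears regional fluxes s
# into observations y = H @ s + noise; least squares recovers s.
rng = np.random.default_rng(0)
n_obs, n_regions = 12, 4
H = rng.normal(size=(n_obs, n_regions))          # toy transport operator
true_fluxes = np.array([8.0, -2.0, 3.0, -1.5])   # sources (+) and sinks (-)
y = H @ true_fluxes + rng.normal(0.0, 0.01, n_obs)  # noisy observations

estimated, *_ = np.linalg.lstsq(H, y, rcond=None)
print(np.round(estimated, 1))  # close to the true fluxes
```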
The technique, Michalak says, is like figuring out where the cream was originally poured in a cup of half-stirred coffee.
"Winds and weather patterns mix CO2 in the atmosphere just like stirring mixes cream in a cup of coffee," she said. "As soon as you start stirring, you lose some information about where and when the cream was originally added to the cup. With careful measurements and models, however, much of this information can be recovered."
"One of our big questions is how carbon sources and sinks evolve," Michalak said. "This is all with an eye on prediction and management."
Sue Nichols | EurekAlert!
Topological insulators appeared to be rather well-understood from theory until now. The electrons that can only occupy "allowed" quantum states in the crystal lattice are free to move in only two dimensions, namely along the surface, behaving like massless particles.
Topological insulators are therefore highly conductive at their surfaces and electrically insulating within. Only magnetic fields should destroy this mobility, according to theory.
Now physicists headed by Oliver Rader and Jaime Sánchez-Barriga from HZB along with teams from other HZB departments, groups from Austria, the Czech Republic, Russia, and theoreticians in Munich have disproved this hypothesis.
They investigated samples for this purpose made of bismuth selenide - a classic topological insulator - built up from enormous numbers of extremely thin layers, like puff pastry. These samples were doped with the magnetic element manganese (Mn), forming (Bi1-xMnx)2Se3 with various concentrations of Mn.
Theoretically, what is known as a band gap should have opened between the allowed electron states as a result of doping with magnetic impurities so that the previously conductive surface becomes insulating. As a result of the appearance of the band gap, the electrons also regain part of their mass. The magnetism of the impurities should be the critical influence in this process.
Theory disproved: Magnetism is not influencing the mobility of electrons
The physicists were able to actually detect the formation of a band gap in the doped samples. The mass of the electrons climbed from zero to one-sixth the mass of free electrons. They showed, however, that this band gap is not the result of ferromagnetic ordering in the interior or at the surface of the material, nor of the local magnetic moments of the manganese. The band gap formed independent of the strength of the magnetisation and even when the sample was doped with nonmagnetic impurities.
"We even measured surface band gaps that are ten times larger than the theoretically predicted magnetic band gaps, and actually independent of whether we had incorporated magnetic or nonmagnetic impurities", says Jaime Sánchez-Barriga.
Instead, they suggest an entirely different process in these samples that causes the band gap at the Dirac point: with the help of what is known as resonant photoemission spectroscopy, they were able to observe scattering processes that might be responsible for opening a band gap. The fundamental properties of topological insulators do not offer many possibilities for these kinds of scattering processes. The researchers think it is conceivable that the presence of the impurities enables the electrons to leave the surface and disappear into the bulk.
"It is always more interesting for experimentalists like us, of course, when the experiment does not confirm the theoretical expectation. This band gap is considerably larger than predicted by theory and additionally involves a different causal mechanism. In order be sure that we are not mistaken, we used the entire arsenal at BESSY II, such as photoelectron microscopy and magnetic fields up to seven tesla. This enabled us to really preclude magnetism occurring as a possible cause down to roughly the nanometre scale", explains Oliver Rader.
Two conclusions can already be drawn from this work: on one hand, topologically shielded states are still far from being completely understood. On the other, problems previously overlooked are now in the spotlight. How can scattering processes be minimised by the choice of magnetic impurities? And what is the role of the lattice location of the impurities in the host? Since topological insulators are promising candidates for new information technologies, those questions should be explored in depth.
Antonia Roetger | Helmholtz-Zentrum Berlin für Materialien und Energie
Time spans of millions and billions of years are difficult even for adults to comprehend.
Because atmospheric carbon 14 arises at about the same rate that the atom decays, Earth's levels of carbon 14 have remained fairly constant.
Once an organism is dead, however, no new carbon is actively absorbed by its tissues, and its carbon 14 gradually decays.
A very small percentage of carbon, however, consists of the isotope carbon 14, or radiocarbon, which is unstable.
Carbon 14 has a half-life of 5,730 years, and is continuously created in Earth's atmosphere through the interaction of nitrogen with cosmic rays from outer space.
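The decay described above follows the standard half-life law, sketched here with the commonly quoted half-life of about 5,730 years:

```python
# Exponential decay of carbon 14 after an organism dies:
#   N(t) = N0 * (1/2) ** (t / half_life)
half_life = 5730.0  # years, commonly quoted value

def fraction_remaining(years):
    """Fraction of the original carbon 14 left after `years`."""
    return 0.5 ** (years / half_life)

print(f"{fraction_remaining(5730):.2f}")    # one half-life  -> 0.50
print(f"{fraction_remaining(11460):.2f}")   # two half-lives -> 0.25
```

Comparing the measured fraction of carbon 14 in a sample to this curve is what lets radiocarbon dating estimate when the organism died.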
Winds - Coriolis Effect, 3 Planetary and Local Winds in Climatology
- published: 30 Jun 2016
Dr. Manishika Jain explains the concept of planetary or permanent winds and local winds by understanding the Coriolis effect and pressure gradient force. Here is the list of major local winds globally:

Cold winds
1. Mistral - Blows in Spain & France from N-W to S-E. Common during winter.
2. Bora - Blows along the shores of the Adriatic Sea.
3. Blizzard - Snow-laden wind in Canada.
4. Purga - Snow-laden wind in the Russian tundra. Much like the Buran.
5. Bise - An extremely cold wind in France.
6. Levanter - Blows in the Strait of Gibraltar between Spain & Morocco.
7. Pampero - Pampas of S. America.
8. Papagayo - Costa Rica, Mexico, Nicaragua.
9. Haboob - Sudan.
10. Friagem - Amazon Valley.
11. Buran - Eastern Russia & central Siberia.
12. Norther - Texas, Gulf of Mexico & western Caribbean.
13. Etesian - Eastern Mediterranean.
14. Surazo - Cold wind blowing from the Argentinean pampas & Patagonia.
15. Norte - A strong cold northeasterly wind which blows in Mexico.
16. Tehuantepecer - A violent, squally wind from the north or north-east in S. Mexico.

Hot winds
1. Fohn - Warm & dry local wind blowing on the leeward side of the Alps in Switzerland.
2. Chinook - Warm & dry local wind blowing on the leeward side of the Rockies in the USA.
3. Harmattan - Blows from the east & northeast towards the west in the Sahara.
4. Brickfielder - Victoria province of Australia.
5. Black Roller - Great Plains of the USA.
6. Shamal - Mesopotamia & the Persian Gulf.
7. Norwester - New Zealand.
8. Sirocco - From the Sahara over the Mediterranean. Known as khamsin in Egypt, chili in Tunisia, gibli in Libya, levech in Spain & leste in Madeira & Morocco.
9. Sim...
The Leached Layer Formed On Wollastonite In An Acid Environment
Weissbart, Erich J
THE LEACHED LAYER FORMED ON WOLLASTONITE IN AN ACID ENVIRONMENT. Erich J. Weissbart. (ABSTRACT) Experiments were carried out in a fixed-bed external recycle mixed flow reactor to measure the rate of dissolution and the development of a leached layer on wollastonite. Each experiment ran for approximately 24 hours, and the release rates of Si and Ca in the interval from 14 to 24 hours were analyzed. Each experiment began with an incongruent stage in which Ca was released faster than the silica, which remained on the surface to form the leached layer. The silica release rate after 14 hours was 2.13 x 10^-9 (±1.03 x 10^-9, 1σ, n = 67) mol/m^2/sec, and this rate appeared to be independent of pH from pH 2 to 6 at 25 degrees C. BET surface area measurements of reacted wollastonite showed large increases in BET Asp over the course of experiments even though both the Ca and Si release rates decreased. These large increases in measured Asp were the result of the growing internal porosity of the leached layer, and much of this surface does not seem to contribute to Si release rates. From these data, we infer that the overall reaction for the hydrolysis of wollastonite in an acid environment is best explained by two relatively independent reactions. First, Ca is removed from the crystal, leaving behind linear silica polymers; then the silica polymers are released into solution, where they hydrolyze to form H4SiO4:

nCaSiO3 + 2nH+ → nCa2+ + (H2SiO3)n
(H2SiO3)n + nH2O → n(H4SiO4)

As the leached layer grows in thickness, the Ca release rate slows because it is controlled by transport through the leached layer. A model of Ca diffusion through the leached layer shows that the leached layer grows thicker at lower pH and presents a longer diffusion path for Ca transport into the solution. This diffusion-limited reaction offsets the faster rate of the Ca hydrolysis reaction so that at steady state the Ca rate should also become equal to the Si release rate.
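A surface-normalized rate like the one reported can be turned into a total release by multiplying by area and time. This is a back-of-the-envelope sketch: the rate is from the abstract, but the surface area and duration below are hypothetical values chosen only for illustration.

```python
# Reported steady-state Si release rate from the abstract.
SI_RATE = 2.13e-9          # mol Si per m^2 per second

# Hypothetical values for this example (not from the thesis):
AREA_M2 = 0.5              # reactive surface area in m^2
DURATION_S = 10 * 3600     # a 10-hour interval, in seconds

# Total Si released = rate * area * time
moles_si = SI_RATE * AREA_M2 * DURATION_S
print(f"{moles_si:.3e} mol Si")  # 3.834e-05 mol Si
```

The same arithmetic applied to the Ca rate would not hold early in an experiment, since the abstract notes the initial stage is incongruent (Ca released faster than Si).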
- Masters Theses
The present volume contains selected papers of the International Symposium on Adaptations to Terrestrial Environment, held in Halkidiki, Greece from Sept 26th to Oct 2nd, 1982. The meeting was designed to consider the means as well as the mechanisms whereby organisms adapt to their environment. The papers presented dealt with a large variety of species, from insects up to and including mammals. What became apparent during the course of the meeting was the incredible variety of means that organisms use to survive in their particular environmental niche. The ploys utilized are almost as numerous as the number of species investigated. This will become clearly apparent in the accompanying manuscripts which are published in this book. The Editors allowed the authors of the accepted papers great leeway in terms of the thoroughness of their contributions. Some of the presentations contain exclusively new findings, whereas others extensively review the existing literature. The Volume is divided into two parts: Invertebrates and Vertebrates. The first provides information on adaptations of invertebrates to environmental stresses (such as low or high temperatures and water deficits) from the physiological and/or biochemical points of view, as well as behavioral responses resulting from their life strategies and interactions with other organisms. In the second part, the papers selected deal with vertebrates. Adaptations to special environmental factors such as light and temperature are discussed, as well as behavioral, physiological and biochemical solutions to problems imposed.
Lucio Mayer, Professor for Theoretical Physics at the University of Zurich, and his team are convinced that they have discovered the origin of the first supermassive black holes, which came into being about 13 billion years ago, at the very beginning of the universe. In their article which has appeared in "Nature" magazine, Lucio Mayer and his colleagues describe their computer simulations with which they modelled the formation of galaxies and black holes during the first billion years after the "Big Bang".
According to the current status of knowledge, the universe is approximately 14 billion years old. Recently, research groups discovered that galaxies formed much earlier than assumed until then - namely within the first billion years. The computer simulations from Mayer's team now show that the very first supermassive black holes came into existence when those early galaxies collided with each other and merged.

Galaxies and massive black holes formed very quickly
Huge galaxies and supermassive black holes form quickly. Small galaxies - on the other hand, such as our own, the Milky Way and its comparatively small black hole in the centre weighting only 1 million solar masses instead of the 1 billion solar masses of the black holes simulated by Mayer and colleagues - have formed more slowly. As Lucio Mayer explained, the galaxies in their simulation would count among the biggest known today in reality - they were around a hundred times larger than the Milky Way. A galaxy that probably arose from a collision in that way is our neighbouring galaxy M87 in the Virgo cluster, located at 54 million light years from us.
The scientists began their simulation with two large, primary galaxies comprised of stars and characteristic for the beginning of the universe. They then simulated the collision and the merging of galaxies. Thanks to the super-computer "Zbox3" at the University of Zurich and the "Brutus Cluster" from the ETHZ, the researchers were able to observe, at a resolution higher than ever before, what happened next: Initially, dust and condensed gases collected in the centre of the new galaxy and formed a dense disk there. The disk became unstable, so that the gases and the dust contracted again and formed an even more dense region. From that, a supermassive black hole eventually came into existence without forming a star first.
The new findings have consequences for cosmology: The assumption that the characteristics of galaxies and the mass of the black hole are related to each other because they grow in parallel will have to be revised. In Mayer's model, the black hole grows much more quickly than the galaxy. It is therefore possible that the black hole is not regulated by the growth of the galaxy. It is more likely that the galaxy is regulated by the growth of the black hole. Mayer and his colleagues believe that their research will also be useful for physicists who search for gravitational waves and thus want to supply direct proof of Einstein's theory of relativity. According to Einstein, who received his doctorate in 1906 at the University of Zurich, the merging of supermassive black holes must have caused massive gravitational waves - waves in the space-time continuum whose remains should still be measurable today. The LISA and LISA Pathfinder projects at the ESA and NASA, in which physicists from the University of Zurich are also participants, aim to find gravitational waves of that kind. In order to be able to interpret future measurement results correctly, it is important to understand the formation of supermassive black holes in the early time of the universe.
Beat Müller | idw
For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth.
To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength...
For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications.
Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar...
Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction.
A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in the synthesis of new chemical...
Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy.
"Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy....
Ultra-short, high-intensity X-ray flashes open the door to the foundations of chemical reactions. Free-electron lasers generate these kinds of pulses, but there is a catch: the pulses vary in duration and energy. An international research team has now presented a solution: Using a ring of 16 detectors and a circularly polarized laser beam, they can determine both factors with attosecond accuracy.
Free-electron lasers (FELs) generate extremely short and intense X-ray flashes. Researchers can use these flashes to resolve structures with diameters on the...
13.07.2018 | Event News
12.07.2018 | Event News
03.07.2018 | Event News
18.07.2018 | Life Sciences
18.07.2018 | Materials Sciences
18.07.2018 | Health and Medicine | <urn:uuid:11110706-5fcf-44a3-b6bf-624ae025c79d> | 4.1875 | 1,334 | Content Listing | Science & Tech. | 39.119445 | 95,513,606 |
China's second orbiting space lab Tiangong-2, which may enable two astronauts to live in space for up to 30 days, has been delivered to Jiuquan Satellite Launch Center.
The lab was sent from Beijing Thursday by railway and reached the launch center Saturday, marking the start of the Tiangong-2 and Shenzhou-11 manned spacecraft missions, said a statement issued by China's manned space engineering office.
Assembly and tests will begin at the center ahead of the lab's launch scheduled for mid-September, the statement said.
According to the statement, Tiangong-2 will be capable of receiving manned and cargo spaceships, and will be a testing place for systems and processes for mid-term space stays and refueling in space.
It will also be involved in experiments on aerospace medicine, space sciences, on-orbit maintenance and space station technologies.
China's first space lab Tiangong-1, which was launched in September 2011 with a designed life of two years, ended its data service earlier this year. It had docked with Shenzhou-8, Shenzhou-9 and Shenzhou-10 spacecraft and undertook a series of experiments.
Researchers have discovered 10 new molecular structures with pharmaceutical potential in a species of red seaweed that lives in the shallow coral reef along the coastline of Fiji in the south Pacific Ocean.
Some of these natural compounds showed the potential to kill cancer cells, bacteria and the HIV virus, according to research at the Georgia Institute of Technology. In fact, two of them exhibit anti-bacterial activity towards antibiotic-resistant Staphylococcus aureus at concentrations worth pursuing, though researchers don't know yet whether the concentrations of the compounds required to kill the bacterium would be harmful to humans.
The compound that was isolated in the greatest abundance -- named bromophycolide A by the researchers -- killed human tumor cells by inducing programmed cell death (called apoptosis), a mechanism that is promising for development of new anti-cancer drugs, researchers noted.
Jane M. Sanders | EurekAlert!
A new manufacturing technique uses a process similar to newspaper printing to form smoother and more flexible metals for making ultrafast electronic devices.
The low-cost process, developed by Purdue University researchers, combines tools already used in industry for manufacturing metals on a large scale, but uses...
System developers have used modeling languages for decades to specify, visualize, construct, and document systems. The Unified Modeling Language (UML) is one of those languages. UML makes it possible for team members to collaborate by providing a common language that applies to a multitude of different systems. Essentially, it enables you to communicate solutions in a consistent, tool-supported language.

Today, UML has become the standard method for modeling software systems, which means you're probably confronting this rich and expressive language more than ever before. And even though you may not write UML diagrams yourself, you'll still need to interpret diagrams written by others.

UML 2.0 in a Nutshell from O'Reilly feels your pain. It's been crafted for professionals like you who must read, create, and understand system artifacts expressed using UML. Furthermore, it's been fully revised to cover version 2.0 of the language.

This comprehensive new edition not only provides a quick reference to all UML 2.0 diagram types, it also explains key concepts in a way that appeals to readers already familiar with UML or object-oriented programming concepts. Topics include:
- The role and value of UML in projects
- The object-oriented paradigm and its relation to the UML
- An integrated approach to UML diagrams
- Class and Object, Use Case, Sequence, Collaboration, Statechart, Activity, Component, and Deployment Diagrams
- Extension Mechanisms
- The Object Constraint Language (OCL)

If you're new to UML, a tutorial with realistic examples has even been included to help you quickly familiarize yourself with the system.
Found most commonly in these habitats: 10 times found in montane wet forest, 5 times found in mature wet forest, 3 times found in wet forest, 3 times found in cloud forest, 2 times found in Sura, 2 times found in tropical wet forest, 1 times found in Bamboo forest, 1 times found in Primary wet forest, 1 times found in STR.
Found most commonly in these microhabitats: 18 times ex sifted leaf litter, 3 times Hojarasca, 1 times Sobre Vegetacion, 1 times sifted leaf litter, 1 times date is 25 Feb-9 Mar, 1 times bajo de M/27, 1 times bajo de M/24, 1 times bajo de M/23.
Collected most commonly using these methods: 9 times maxiWinkler, 7 times miniWinkler, 3 times Malaise, 3 times winkler, 3 times flight intercept trap, 3 times Mini Winkler, 1 times YPT, 1 times Sweeping.
Elevations: collected from 30 - 1100 meters, 530 meters average
AntWeb content is licensed under a Creative Commons Attribution License. We encourage use of AntWeb images. In print, each image must include attribution to its photographer and "from www.AntWeb.org" in the figure caption. For websites, images must be clearly identified as coming from www.AntWeb.org, with a backward link to the respective source page. See How to Cite AntWeb.
Antweb is funded from private donations and from grants from the National Science Foundation, DEB-0344731, EF-0431330 and DEB-0842395.
This section describes lock types used by InnoDB.

InnoDB implements standard row-level locking with two types of locks: shared (S) locks and exclusive (X) locks. If transaction T1 holds a shared (S) lock on row r, then requests from some distinct transaction T2 for a lock on row r are handled as follows:

A request by T2 for an S lock can be granted immediately. As a result, both T1 and T2 hold an S lock on r.

A request by T2 for an X lock cannot be granted immediately.

If a transaction T1 holds an exclusive (X) lock on row r, a request from some distinct transaction T2 for a lock of either type on r cannot be granted immediately. Instead, transaction T2 has to wait for transaction T1 to release its lock on row r.
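The row-level grant rules above reduce to a tiny compatibility table. The following is a minimal sketch of that logic (not InnoDB source code): two shared locks coexist, and any pairing that involves an exclusive lock must wait.

```python
# Row-level S/X lock compatibility, as described in the text above.
ROW_COMPAT = {
    ("S", "S"): True,   # both transactions may hold an S lock on the row
    ("S", "X"): False,  # X request waits while an S lock is held
    ("X", "S"): False,  # S request waits while an X lock is held
    ("X", "X"): False,  # X request waits while an X lock is held
}

def can_grant(held: str, requested: str) -> bool:
    """True if `requested` can be granted immediately while `held` is in place."""
    return ROW_COMPAT[(held, requested)]

print(can_grant("S", "S"))  # True
print(can_grant("X", "S"))  # False
```

Only the S/S pairing is granted immediately; in every other case the requesting transaction waits for the holder to release its lock.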
InnoDB supports multiple granularity locking, which permits coexistence of row locks and table locks. For example, a statement such as LOCK TABLES ... WRITE takes an exclusive lock (an X lock) on the specified table. To make locking at multiple granularity levels practical, InnoDB uses intention locks. Intention locks are table-level locks that indicate which type of lock (shared or exclusive) a transaction requires later for a row in a table. There are two types of intention locks:

An intention shared lock (IS) indicates that a transaction intends to set a shared lock on individual rows in a table.

An intention exclusive lock (IX) indicates that a transaction intends to set an exclusive lock on individual rows in a table.

The intention locking protocol is as follows:

Before a transaction can acquire a shared lock on a row in a table, it must first acquire an IS lock or stronger on the table.

Before a transaction can acquire an exclusive lock on a row in a table, it must first acquire an IX lock on the table.

Table-level lock type compatibility is summarized in the following matrix:

        X           IX          S           IS
X       Conflict    Conflict    Conflict    Conflict
IX      Conflict    Compatible  Conflict    Compatible
S       Conflict    Conflict    Compatible  Compatible
IS      Conflict    Compatible  Compatible  Compatible

A lock is granted to a requesting transaction if it is compatible with existing locks, but not if it conflicts with existing locks. A transaction waits until the conflicting existing lock is released. If a lock request conflicts with an existing lock and cannot be granted because it would cause deadlock, an error occurs.

Intention locks do not block anything except full table requests (for example, LOCK TABLES ... WRITE). The main purpose of intention locks is to show that someone is locking a row, or going to lock a row in the table.
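The table-level compatibility rules can be encoded as a small lookup. This is a sketch of the rules as stated in the documentation, not InnoDB internals; note that the intention locks (IS, IX) conflict only with full table locks.

```python
# Table-level lock compatibility: for each lock type, the set of lock
# types it conflicts with. The relation is symmetric.
CONFLICTS = {
    "X":  {"X", "IX", "S", "IS"},  # a full table write lock conflicts with everything
    "IX": {"X", "S"},
    "S":  {"X", "IX"},
    "IS": {"X"},
}

def compatible(existing: str, requested: str) -> bool:
    """True if `requested` can be granted alongside an `existing` table lock."""
    return requested not in CONFLICTS[existing]

# IX locks are compatible with each other, so many transactions can
# intend to write different rows of the same table at once.
print(compatible("IX", "IX"))  # True
print(compatible("X", "IS"))   # False
```

This is why intention locks barely restrict concurrency: they exist to make a later `LOCK TABLES ... WRITE` (an X lock) cheap to check against, not to block row operations.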
Transaction data for an intention lock appears similar to the following in SHOW ENGINE INNODB STATUS output:

TABLE LOCK table `test`.`t` trx id 10080 lock mode IX
A record lock is a lock on an index record. For example,

SELECT c1 FROM t WHERE c1 = 10 FOR UPDATE;

prevents any other transaction from inserting, updating, or deleting rows where the value of t.c1 is 10.

Record locks always lock index records, even if a table is defined with no indexes. For such cases, InnoDB creates a hidden clustered index and uses this index for record locking. See the section "Clustered and Secondary Indexes".
Transaction data for a record lock appears similar to the following in SHOW ENGINE INNODB STATUS output:

RECORD LOCKS space id 58 page no 3 n bits 72 index `PRIMARY` of table `test`.`t` trx id 10078 lock_mode X locks rec but not gap
Record lock, heap no 2 PHYSICAL RECORD: n_fields 3; compact format; info bits 0
 0: len 4; hex 8000000a; asc ;;
 1: len 6; hex 00000000274f; asc 'O;;
 2: len 7; hex b60000019d0110; asc ;;
A gap lock is a lock on a gap between index records, or a lock on the gap before the first or after the last index record. For example,

SELECT c1 FROM t WHERE c1 BETWEEN 10 and 20 FOR UPDATE;

prevents other transactions from inserting a value of 15 into column t.c1, whether or not there was already any such value in the column, because the gaps between all existing values in the range are locked.

A gap might span a single index value, multiple index values, or even be empty.

Gap locks are part of the tradeoff between performance and concurrency, and are used in some transaction isolation levels and not others.

Gap locking is not needed for statements that lock rows using a unique index to search for a unique row. (This does not include the case that the search condition includes only some columns of a multiple-column unique index; in that case, gap locking does occur.) For example, if the id column has a unique index, the following statement uses only an index-record lock for the row having id value 100, and it does not matter whether other sessions insert rows in the preceding gap:

SELECT * FROM child WHERE id = 100;

If id is not indexed or has a nonunique index, the statement does lock the preceding gap.
It is also worth noting here that conflicting locks can be held on a gap by different transactions. For example, transaction A can hold a shared gap lock (gap S-lock) on a gap while transaction B holds an exclusive gap lock (gap X-lock) on the same gap. The reason conflicting gap locks are allowed is that if a record is purged from an index, the gap locks held on the record by different transactions must be merged.
Gap locks in InnoDB are "purely inhibitive", which means that their only purpose is to prevent other transactions from inserting to the gap. Gap locks can co-exist. A gap lock taken by one transaction does not prevent another transaction from taking a gap lock on the same gap. There is no difference between shared and exclusive gap locks. They do not conflict with each other, and they perform the same function.
Gap locking can be disabled explicitly. This occurs if you change the transaction isolation level to READ COMMITTED or enable the innodb_locks_unsafe_for_binlog system variable (which is now deprecated). Under these circumstances, gap locking is disabled for searches and index scans and is used only for foreign-key constraint checking and duplicate-key checking.

There are also other effects of using the READ COMMITTED isolation level or enabling innodb_locks_unsafe_for_binlog. Record locks for nonmatching rows are released after MySQL has evaluated the WHERE condition. For UPDATE statements, InnoDB does a "semi-consistent" read, such that it returns the latest committed version to MySQL so that MySQL can determine whether the row matches the WHERE condition of the UPDATE.
A next-key lock is a combination of a record lock on the index record and a gap lock on the gap before the index record.
InnoDB performs row-level locking in such a way that when it searches or scans a table index, it sets shared or exclusive locks on the index records it encounters. Thus, the row-level locks are actually index-record locks. A next-key lock on an index record also affects the "gap" before that index record. That is, a next-key lock is an index-record lock plus a gap lock on the gap preceding the index record. If one session has a shared or exclusive lock on record R in an index, another session cannot insert a new index record in the gap immediately before R in the index order.
Suppose that an index contains the values 10, 11, 13, and 20. The possible next-key locks for this index cover the following intervals, where a round bracket denotes exclusion of the interval endpoint and a square bracket denotes inclusion of the endpoint:
(negative infinity, 10] (10, 11] (11, 13] (13, 20] (20, positive infinity)
For the last interval, the next-key lock locks the gap above the largest value in the index and the “supremum” pseudo-record having a value higher than any value actually in the index. The supremum is not a real index record, so, in effect, this next-key lock locks only the gap following the largest index value.
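The interval list above can be generated mechanically from the index values. The sketch below models each next-key lock as a pair (lo, hi) meaning the half-open interval (lo, hi] from the text, with the final pair standing for the gap up to the "supremum" pseudo-record.

```python
import math

def next_key_intervals(index_values):
    """Intervals (lo, hi), each meaning (lo, hi], covered by next-key locks."""
    bounds = [-math.inf] + sorted(index_values)
    # One next-key lock per index record: the record plus the gap before it.
    intervals = [(lo, hi) for lo, hi in zip(bounds, bounds[1:])]
    # The last lock covers the gap above the largest value (the supremum).
    intervals.append((bounds[-1], math.inf))
    return intervals

print(next_key_intervals([10, 11, 13, 20]))
# [(-inf, 10), (10, 11), (11, 13), (13, 20), (20, inf)]
```

Applied to the example index values 10, 11, 13, and 20, this reproduces the five intervals listed above, matching one next-key lock per record plus the supremum gap.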
By default, InnoDB operates in REPEATABLE READ transaction isolation level. In this case, InnoDB uses next-key locks for searches and index scans, which prevents phantom rows (see Section 14.5.4, "Phantom Rows").

Transaction data for a next-key lock appears similar to the following in SHOW ENGINE INNODB STATUS output:

RECORD LOCKS space id 58 page no 3 n bits 72 index `PRIMARY` of table `test`.`t` trx id 10080 lock_mode X
Record lock, heap no 1 PHYSICAL RECORD: n_fields 1; compact format; info bits 0
 0: len 8; hex 73757072656d756d; asc supremum;;
Record lock, heap no 2 PHYSICAL RECORD: n_fields 3; compact format; info bits 0
 0: len 4; hex 8000000a; asc ;;
 1: len 6; hex 00000000274f; asc 'O;;
 2: len 7; hex b60000019d0110; asc ;;
An insert intention lock is a type of gap lock set by INSERT operations prior to row insertion. This lock signals the intent to insert in such a way that multiple transactions inserting into the same index gap need not wait for each other if they are not inserting at the same position within the gap. Suppose that there are index records with values of 4 and 7. Separate transactions that attempt to insert values of 5 and 6, respectively, each lock the gap between 4 and 7 with insert intention locks prior to obtaining the exclusive lock on the inserted row, but do not block each other because the rows are nonconflicting.
The following example demonstrates a transaction taking an insert intention lock prior to obtaining an exclusive lock on the inserted record. The example involves two clients, A and B.
Client A creates a table containing two index records (90 and 102) and then starts a transaction that places an exclusive lock on index records with an ID greater than 100. The exclusive lock includes a gap lock before record 102:
mysql> CREATE TABLE child (id int(11) NOT NULL, PRIMARY KEY(id)) ENGINE=InnoDB; mysql> INSERT INTO child (id) values (90),(102); mysql> START TRANSACTION; mysql> SELECT * FROM child WHERE id > 100 FOR UPDATE; +-----+ | id | +-----+ | 102 | +-----+
Client B begins a transaction to insert a record into the gap. The transaction takes an insert intention lock while it waits to obtain an exclusive lock.
mysql> START TRANSACTION; mysql> INSERT INTO child (id) VALUES (101);
Transaction data for the insert intention lock appears similar to the following in SHOW ENGINE INNODB STATUS output:

RECORD LOCKS space id 31 page no 3 n bits 72 index `PRIMARY` of table `test`.`child` trx id 8731 lock_mode X locks gap before rec insert intention waiting
Record lock, heap no 3 PHYSICAL RECORD: n_fields 3; compact format; info bits 0
 0: len 4; hex 80000066; asc f;;
 1: len 6; hex 000000002215; asc " ;;
 2: len 7; hex 9000000172011c; asc r ;;...
An AUTO-INC lock is a special table-level lock taken by transactions inserting into tables with AUTO_INCREMENT columns. In the simplest case, if one transaction is inserting values into the table, any other transactions must wait to do their own inserts into that table, so that rows inserted by the first transaction receive consecutive primary key values.

The innodb_autoinc_lock_mode configuration option controls the algorithm used for auto-increment locking. It allows you to choose how to trade off between predictable sequences of auto-increment values and maximum concurrency for insert operations.

For more information, see the section "AUTO_INCREMENT Handling in InnoDB".
InnoDB supports SPATIAL indexing of columns containing spatial data (see Section 11.5.8, "Optimizing Spatial Analysis").

To handle locking for operations involving SPATIAL indexes, next-key locking does not work well to support REPEATABLE READ or SERIALIZABLE transaction isolation levels. There is no absolute ordering concept in multidimensional data, so it is not clear which is the "next" key.

To enable support of isolation levels for tables with SPATIAL indexes, InnoDB uses predicate locks. A SPATIAL index contains minimum bounding rectangle (MBR) values, so InnoDB enforces consistent read on the index by setting a predicate lock on the MBR value used for a query. Other transactions cannot insert or modify a row that would match the query condition.
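The core test behind an MBR predicate lock is rectangle overlap: a candidate row conflicts with the lock if its MBR intersects the locked MBR. The sketch below illustrates that check only; the (xmin, ymin, xmax, ymax) rectangle format and the function name are assumptions for this example, not InnoDB's representation.

```python
# Does a candidate row's minimum bounding rectangle (MBR) overlap the MBR
# locked by a query? Touching edges count as overlap here (<=).
def mbrs_overlap(a, b) -> bool:
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    return ax1 <= bx2 and bx1 <= ax2 and ay1 <= by2 and by1 <= ay2

locked = (0, 0, 10, 10)                        # MBR predicate-locked by a query
print(mbrs_overlap(locked, (5, 5, 15, 15)))    # True  -> insert would conflict
print(mbrs_overlap(locked, (20, 20, 30, 30)))  # False -> insert proceeds
```

In predicate-lock terms, a transaction inserting a row whose MBR overlaps the locked MBR would be blocked until the locking query's transaction finishes, while non-overlapping inserts are unaffected.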