| text (string, 174–655k chars) | id (string, 47 chars) | score (float64, 2.52–5.25) | tokens (int64, 39–148k) | format (24 classes) | topic (2 classes) | fr_ease (float64, -483.68–157) | __index__ (int64, 0–1.48M) |
|---|---|---|---|---|---|---|---|
From Environmental Leader, Published 31 March 2014
A report by the engineering and design consultancy Arup projects how cities around the world will change in response to climate change, resource scarcity and increasing urban flooding.
Cities Alive predicts that technology and nature will work together seamlessly to create cities that are an integrated network of intelligent green spaces. Buildings may be transformed into vertical urban farms, while solar-powered pathways, urban wetlands and forests become common features.
The report also looks at the importance of creating higher-quality public places and greener urban environments through good landscape design, in order to protect our cities from flooding and increase the health and wellbeing of citizens.
It also highlights the benefits that additional green spaces will provide in terms of health and wellbeing, including:
| <urn:uuid:6d7b6f72-19a8-4c7f-bd63-f9948a0adf05> | 3.265625 | 241 | News (Org.) | Science & Tech. | -9.035119 | 95,552,541 |
Scientists have been left baffled after a massive hole the size of the state of Maine opened up in Antarctica.
The mysterious hole has been described as “quite remarkable” by atmospheric physicist Kent Moore, a professor at the University of Toronto, who says “It looks like you just punched a hole in the ice.”
Motherboard reports: Areas of open water surrounded by sea ice, such as this one, are known as polynias. They form in coastal regions of Antarctica, Moore told me. What’s strange here, though, is that this polynia is “deep in the ice pack,” he said, and must have formed through other processes that aren’t understood.
“This is hundreds of kilometres from the ice edge. If we didn’t have a satellite, we wouldn’t know it was there.” (It measured 80,000 km² at its peak.)
A polynia was observed in the same location, in Antarctica’s Weddell Sea, in the 1970s, according to Moore, who’s been working with the Southern Ocean Carbon and Climate Observations and Modelling (SOCCOM) group, based at Princeton University, to analyze what’s going on. Back then, scientists’ observation tools weren’t nearly as good, so that hole remained largely unstudied. Then it went away for four decades, until last year, when it reopened for a few weeks. Now it’s back again.
“This is now the second year in a row it’s opened after 40 years of not being there,” Moore said. (It opened around September 9.) “We’re still trying to figure out what’s going on.”
It’s tempting to blame this strange hole on climate change, which is reshaping so much of the world, including Antarctica. But Moore said that’s “premature.” Scientists can say with certainty, though, that the polynia will have a wider impact on the oceans.
“Once the sea ice melts back, you have this huge temperature contrast between the ocean and the atmosphere,” Moore explained. “It can start driving convection.” Denser, colder water sinks to the bottom of the ocean, while warmer water comes to the surface, “which can keep the polynia open once it starts,” he said.
Using observations from satellites and deep sea robots, Moore and his collaborators are working on as-yet-unpublished research that aims to answer some of these questions. “Compared to 40 years ago, the amount of data we have is amazing,” he said.
Antarctica is undergoing massive changes right now, and figuring out why a gaping hole could suddenly open up will be key to understanding larger systems at play.
| <urn:uuid:d1e8963a-0430-4ea1-9ed3-70bf336e2b8c> | 3.578125 | 687 | News Article | Science & Tech. | 54.297004 | 95,552,563 |
In celebration of Star Trek's 50th anniversary, NASA explains the science behind Star Trek to reveal how much of it is scientific in nature.
There's a giant hole growing on the surface of the Sun, but despite its massive size, scientists confirmed that there is nothing to worry about.
NASA revealed last Friday that the agency will sail at full speed in terms of space exploration while waiting for Juno's close encounter with Jupiter on Aug. 27.
A new distant dwarf planet was discovered beyond Neptune.
Evidence of first water clouds was discovered in a brown dwarf or a failed star outside the Solar System.
If conditions eons ago were a little bit different, the Earth could have been searing and dead while its nearest neighbor Venus could be the one lush and teeming with life.
The New Horizons spacecraft will perform a flyby of another object within the Kuiper belt that orbits the sun one billion miles farther out than Pluto.
Juno will enter Jupiter's orbit on the Fourth of July. To fulfill its mission, the spacecraft should be able to withstand "hellish" radiation levels as powerful as 100 million dental X-rays.
Geologists have discovered that Mercury’s composition a few billion years ago was that of an extremely rare meteorite found on Earth.
A new study from the University of Cambridge suggests that the existence of Planet Nine would most likely affect the orbits of extreme trans-Neptunian objects, making them lengthy and unstable.
NASA is expecting the mission to deliver tons of information from a planet believed to have formed as early as the Sun.
Titan and Europa both have Earth-like properties, which is why scientists are closely monitoring the moons to know whether life forms exist on these celestial bodies.
Researchers at Cornell University said that in order to find habitable zones in the galaxy, scientists should look at expanding red and old stars, because they are capable of thawing icy, frozen moons and planets. Once thawed, these bodies could retain water and eventually enable life to thrive.
NASA is developing the Heliopause Electrostatic Rapid Transit System (HERTS) E-sail, which uses protons from the solar wind to propel the spacecraft. | <urn:uuid:b6328323-e22f-4854-8640-0d6643463ea2> | 3.28125 | 451 | Content Listing | Science & Tech. | 41.544572 | 95,552,578 |
|There's more than one way to do things|
Well, you are not "knowing" you are "guessing"... ... and that's not a good base for designing code.
This is unfair, Rolf. I am certainly not claiming to be an expert on the subject, which is why I am saying I may be wrong. But please don't say that I am just guessing; I have done quite a bit more work than just guessing. I am just admitting that I might be completely wrong. If such is the case, please show me why, and I will be happy to recognize it.
What's worse, I also guessed that and and or have the same precedence ...
Well, I knew they did not have the same precedence. But my point is that this is probably essentially irrelevant.
... but now take a look what B::Deparse says: ...
$a and $b or $c
What I am saying: true, i.e. $a and $b, if they are both true, otherwise, $c. Exactly equivalent to $c unless $a and $b;
$a or $b and $c
True if $a is true. Otherwise, true only if $b and $c are both true. Again, it seems to me that this is exactly equivalent to $b and $c unless $a;
In brief, the two deparse examples that you have shown confirm exactly and precisely the left-to-right Boolean interpretation with short-circuiting that I made.
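The grouping under discussion can also be illustrated outside Perl: C++'s && binds tighter than || and both short-circuit, which mirrors Perl's low-precedence and/or. A minimal sketch (not from the original thread):

```cpp
#include <iostream>

int main() {
    bool a = false, b = false, c = true;

    // Grouped as (a && b) || c -- analogous to ($a and $b) or $c.
    std::cout << ((a && b) || c) << '\n';   // prints 1: falls through to c

    // Grouped as a || (b && c) -- analogous to $a or ($b and $c).
    std::cout << (a || (b && c)) << '\n';   // prints 0: a is false, b && c is false

    return 0;
}
```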
Again, I am being very careful on this, I may be completely wrong. But, *please*, don't use this uncertainty to claim that I am just guessing. | <urn:uuid:a4754cb5-af3e-4fb3-9193-4b82f221389b> | 2.578125 | 356 | Comment Section | Software Dev. | 82.940865 | 95,552,580 |
The watershed assessment services that we offer include several steps, outlined below. All assessments that we complete result in the compilation and publication of a restoration plan.
- Determine pollution sources
- Conduct water analysis of the pollutant source and surrounding stream
- Conduct biological surveys and determine effects of pollution
- Determine pollutant loadings entering the watershed
- Make assessments as to the overall impact of pollution on the stream
Watershed assessments begin with walking all stream segments and identifying the pollutant sources. This is a discharge in the Shamokin Creek watershed that has been identified as a major pollutant source. | <urn:uuid:9047eb3a-f7c8-4523-8ed7-f0e3d75eb78d> | 2.734375 | 127 | Product Page | Science & Tech. | 15.828909 | 95,552,594 |
Racing cars are high-performance cars. In order for them to be fast, the car's body (and interior) must be very light.
By re-arranging Newton's second law it can be seen that the larger the mass of the car, the smaller the acceleration it will have for a given force.
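In symbols:
$$F = ma \;\Longrightarrow\; a = \frac{F}{m},$$
so for a fixed driving force F, a car with larger mass m has a smaller acceleration a.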
For the car to accelerate (or decelerate) there must be some friction between the car's tyres and the road surface. This traction lets the car move at very high speed; if the traction isn't there the car won't move even though the tyres are rotating. This can be seen when the road surface is icy and the cars lose grip: the wheels are rotating but the car doesn't move very fast. The force required to slide a tyre is called the adhesive limit of the tyre, or sometimes the stiction.
The formula F ≤ µN describes the relationship between the frictional force and the surface the tyre is moving on: F is the frictional force, N is the normal reaction on the tyre, and µ is the coefficient of friction (the larger the µ value, the rougher the surface). The maximum frictional force the tyre can provide is µN; beyond this value slipping starts to occur. The equation implies that the frictional force made by the tyres is independent of the width of the tyre: a wider tyre creates the same grip as a thinner one, because the thinner tyre presses with greater pressure on a smaller contact patch, whereas the wider tyre spreads the same load over a larger contact area.
Newton's second law (F = ma) shows that when the traction force is generated the car accelerates forward. Newton's third law (every action has an equal and opposite reaction) shows that when the car accelerates the driver experiences an opposite force and is pushed back into the seat. When the steering wheel is turned the driver makes the front tyres push a little sideways on the ground, and by Newton's third law the ground pushes back, which causes a small sideways acceleration.
This changes the sideways velocity. The acceleration is proportional to the sideways force and inversely proportional to the mass of the car. The sideways acceleration makes the car move sideways, which is what the driver requires when turning the wheel. When the car decelerates, the resistive braking force acts in the direction opposite to the car's motion and the driver is pushed forward. The car experiences torque when traction and braking forces are generated; this transfers weight backward when accelerating sharply and forward when braking sharply. Weight transfer can be controlled by using throttle, brakes and steering.
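The centripetal-force relation referred to in the next paragraph is presumably the standard one:
$$F_c = \frac{m v^{2}}{r}$$
where m is the car's mass, v its speed and r the radius of the bend.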
The equation shows that if the speed of the car doubles, the centripetal force must quadruple for the car to go around the same bend, and the reaction the driver feels (the "centrifugal" force) quadruples with it. Racing cars are therefore subject to forward and backward forces due to linear acceleration and deceleration, and also to large side forces during cornering at high speed, which are called G-forces. The equation also shows that if the radius of the bend is doubled, the centripetal force halves. It is therefore useful for the driver to take bends along the path with the largest possible radius, which gives the fastest route.
Air resistance can be modelled by the drag equation F = ½ C ρ A v², where:
C - coefficient of drag (0.25-0.45 for cars)
ρ - density of air
A - reference area (area of the car perpendicular to the direction of motion)
v - speed/velocity of the car
If the speed of the vehicle doubles, the drag force quadruples. The drag force can be minimised by decreasing the reference area. This is achieved by making the car flatter, giving it a streamlined shape so that it cuts through the air easily. The shape is also usually like an aeroplane wing turned upside down: on an aeroplane the wings provide lift, whereas the inverted shape on the car provides downforce, which helps prevent it from lifting at high speeds.
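A small numerical sketch of the drag equation; the drag coefficient, air density and frontal area used below are illustrative assumptions, not values from the text:

```cpp
#include <cstdio>

// Rough drag-force estimate using F = 0.5 * C * rho * A * v^2.
int main() {
    const double C   = 0.35;    // drag coefficient, mid-range for cars (assumed)
    const double rho = 1.225;   // air density in kg/m^3 at sea level
    const double A   = 1.8;     // frontal area in m^2 (assumed)

    for (double v : {20.0, 40.0, 80.0}) {            // speeds in m/s
        double F = 0.5 * C * rho * A * v * v;
        std::printf("v = %5.1f m/s  ->  drag = %8.1f N\n", v, F);
    }
    // Doubling v from 20 to 40 m/s quadruples the drag, as stated above.
}
```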
According to Newton's first law of motion, a car in straight-line motion at a constant speed will keep that motion until it is acted upon by an external force. The reason the car does not keep such motion forever is air resistance and friction.
http://www.dur.ac.uk/r.g.bower/PoM/pom/node16.html#eqweight, Richard Bower, 8 16:09:30 BST 1998 | <urn:uuid:fe8b49a0-a15d-41f9-8339-52c45a7f3ad1> | 4 | 947 | Knowledge Article | Science & Tech. | 56.107661 | 95,552,595 |
Some Biological Controls on the Distribution of Shallow Water Sea Stars (Asteroidea; Echinodermata)
Tropical shallow water sea star faunas, especially those of the Indo-West Pacific, are dominated by the order Valvatida. Among sea stars, valvatidans have the best-developed antipredatory devices. Vermeij (1978) found increases in antipredatory structures from high to low latitudes in various invertebrate groups (e.g., gastropods). The valvatidan occurrences suggest the presence of controls in sea stars similar to those affecting other groups. The Valvatida includes few genera that prey on active, solitary invertebrates, but such habits are common in other orders, and in cooler waters. Protective structures appear to restrict predatory abilities. The importance of sea stars as predators on solitary organisms declines in tropical latitudes, yet sea stars have evolved only limited basic structural variation since their appearance in the Ordovician. Phylogenetic constraints on adaptability appear strong in sea stars because of their evolutionary failure to maintain predatory life habits in shallow tropical waters.
Document Type: Research Article
Publication date: 01 July 1983
The Bulletin of Marine Science is dedicated to the dissemination of high quality research from the world's oceans. All aspects of marine science are treated by the Bulletin of Marine Science, including papers in marine biology, biological oceanography, fisheries, marine affairs, applied marine physics, marine geology and geophysics, marine and atmospheric chemistry, and meteorology and physical oceanography.
| <urn:uuid:465c381e-63fe-4c30-a0d2-aae182576033> | 3.0625 | 375 | Truncated | Science & Tech. | 3.140763 | 95,552,612 |
The secret of how salamanders successfully regrow body parts is being unravelled by UCL researchers in a bid to apply it to humans.
For the first time, researchers have found that the 'ERK pathway' must be constantly active for salamander cells to be reprogrammed, and hence able to contribute to the regeneration of different body parts.
The team identified a key difference between the activity of this pathway in salamanders and mammals, which helps us to understand why humans can't regrow limbs and sheds light on how regeneration of human cells can be improved.
The study published in Stem Cell Reports today, demonstrates that the ERK pathway is not fully active in mammalian cells, but when forced to be constantly active, gives the cells more potential for reprogramming and regeneration. This could help researchers better understand diseases and design new therapies.
Lead researcher on the study, Dr Max Yun (UCL Institute of Structural and Molecular Biology) said: "While humans have limited regenerative abilities, other organisms, such as the salamander, are able to regenerate an impressive repertoire of complex structures including parts of their hearts, eyes, spinal cord, tails, and they are the only adult vertebrates able to regenerate full limbs.
We're thrilled to have found a critical molecular pathway, the ERK pathway, that determines whether an adult cell is able to be reprogrammed and help the regeneration processes. Manipulating this mechanism could contribute to therapies directed at enhancing regenerative potential of human cells."
The ERK pathway is a way for proteins to communicate a signal from the surface of a cell to the nucleus which contains the cell's genetic material. Further research will focus on understanding how this important pathway is regulated during limb regeneration, and which other molecules are involved in the process.
Dr. Rebecca Caygill | EurekAlert!
Scientists uncover the role of a protein in production & survival of myelin-forming cells
19.07.2018 | Advanced Science Research Center, GC/CUNY
NYSCF researchers develop novel bioengineering technique for personalized bone grafts
18.07.2018 | New York Stem Cell Foundation
A new manufacturing technique uses a process similar to newspaper printing to form smoother and more flexible metals for making ultrafast electronic devices.
The low-cost process, developed by Purdue University researchers, combines tools already used in industry for manufacturing metals on a large scale, but uses...
For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth.
To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength...
For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications.
Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar...
Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction.
A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in in the synthesis of new chemical...
Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy.
"Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy....
| <urn:uuid:4af4bf18-777f-4ddb-adc0-39406b454274> | 3.5 | 959 | Content Listing | Science & Tech. | 34.937946 | 95,552,615 |
Please see attached file for full problem description.
1. The thickness h of a puddle of water on a waxy surface depends on the density ρ of the liquid, the surface tension γ (SI units: N/m) and one other physically relevant quantity, gravity g. Use dimensional analysis to find a relationship between the thickness and the other 3 variables (i.e., make a proportionality relationship that makes sense dimensionally), e.g.,
h ∝ [a combination of the physically relevant parameters that has dimensions of length]
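A sketch of the standard dimensional argument (the dimensionless prefactor cannot be fixed this way; the combination below is the usual capillary length):
$$[\gamma] = \mathrm{kg\,s^{-2}},\qquad [\rho g] = \mathrm{kg\,m^{-2}\,s^{-2}} \;\Longrightarrow\; \left[\frac{\gamma}{\rho g}\right] = \mathrm{m^{2}} \;\Longrightarrow\; h \propto \sqrt{\frac{\gamma}{\rho g}}$$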
2. The Canadian Norman Wells Oil Pipeline extends from Norman Wells, Northwest Territories, to Zama, Alberta. The 8.68 x 10^5 m-long pipeline has an inside diameter of 12 in, and can be supplied with 35 L/s.
(a) What is the volume of oil in the pipeline if it is full at some instant in time?
(b) How long would it take to fill the pipeline with oil if it is initially empty?
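A small computational sketch of part 2; the inputs follow directly from the problem statement, but the program itself is illustrative and not part of the original posting:

```cpp
#include <cstdio>

// Volume of a full pipeline and the time to fill it.
// 12 in = 0.3048 m inside diameter; 35 L/s = 0.035 m^3/s.
int main() {
    const double pi     = 3.141592653589793;
    const double length = 8.68e5;            // m
    const double radius = 0.3048 / 2.0;      // m
    const double flow   = 0.035;             // m^3/s

    double volume  = pi * radius * radius * length;   // roughly 6.3e4 m^3
    double seconds = volume / flow;                    // roughly 1.8e6 s

    std::printf("volume    = %.3g m^3\n", volume);
    std::printf("fill time = %.3g s (about %.1f days)\n",
                seconds, seconds / 86400.0);
}
```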
The formula for the thickness of a water puddle is derived using dimensional analysis, and the volume of oil in the pipeline and the time to fill it are calculated from the flow rate. | <urn:uuid:1490256d-bd21-4ac3-ac92-ae54c640b7bf> | 3.4375 | 262 | Q&A Forum | Science & Tech. | 61.02676 | 95,552,661 |
MINKE WHALE
The minke whale is the smallest baleen whale. It is a rorqual (a baleen whale with throat grooves) and has 50-70 throat grooves. Minke whales are the most abundant baleen whale. They have a characteristic white band on each flipper, contrasting with their very dark gray top color, and 2 blowholes, like all baleen whales.
SKIN, SHAPE AND FINS
The minke whale's skin is very dark gray above and lighter below, sometimes with pale trapezoidal stripes behind the flippers on the top. Minke whales have a characteristic white band on each flipper (this is absent on the southern minke whales).
Minke whales are stocky, having a layer of blubber several inches thick. They have 50-70 throat grooves, running from the chin to the mid-section. The minke whale has two long flippers (up to 1/8 of the body size), a small dorsal fin, and a series of small ridges along the its back near the flukes (tail).
DIET AND BALEEN
Minke whales (like all baleen whales) are seasonal feeders and carnivores. They sieve the ocean water with their baleen, filtering out small polar plankton, krill, and small fish, even chasing schools of sardines, anchovies, cod, herring, and capelin. They have the same diet as blue whales.
The minke whale's jaws hold about 300 pairs of short, smooth baleen plates. The largest plates are less than 12 inches (30 cm) long and 5 inches (13 cm) wide. The fine-textured baleen is fringed and creamy-white, with pure white bristles.
Minke whales either travel singly or congregate in small pods of about 2-3 whales.
Minke whales can dive for up to 20-25 minutes, but usually make shorter dives, lasting about 10-12 minutes. Just before diving, minke whales arch their back to a great degree, but the flukes do not rise out of the water.
Minke whales breathe air at the surface of the water through 2 blowholes located near the top of the head. At rest, minke whales spout (breathe) about 5-6 times per minute. The spout of the minke whale is a very low, almost inconspicuous stream that rises up to 6.5 feet (2 m) above the water. Minke whales start exhaling before they reach the surface, which minimizes the blow.
Minke whales normally swim 3-16 mph (4.8-25 kph), but can go up to 18-21 mph (29-34 kph) in bursts when in danger. Feeding speeds are slower, about 1-6 mph (1.6-9.8 kph).
Minke whales makes very loud sounds, up to 152 decibels (as loud as a jet taking off). They make series (trains) of grunts, thuds, and raspy sounds, usually in the 100-200 Hertz range. These sounds may be used in communication with other minke whales and in echolocation.
HABITAT AND RANGE
Minke whales live at the surface of the ocean in all but polar seas.
Minke whales have a life expectancy of over 20 years.
Minke whales are the most abundant baleen whale. It is estimated that there are almost 800,000 minke whales world-wide.
Minke whales (Balaenoptera acutorostrata) are baleen whales (Suborder Mysticeti). They are one of 76 cetacean species, and are marine mammals.
Kingdom Animalia (animals)
Phylum Chordata (vertebrates)
Class Mammalia (mammals)
Order Cetacea (whales and dolphins)
Suborder Mysticeti (baleen whales)
Minke Whale Connect-the-Dots
A first grade addition activity. Solve the 1-digit addition problems, then do letter substitutions to answer a whale question.
A Minke whale word hunt activity - For second and third graders.
| <urn:uuid:a5ef687c-3adc-4b5b-90be-b93bfacc96cf> | 3.28125 | 1,142 | Knowledge Article | Science & Tech. | 57.68512 | 95,552,671 |
One of the fundamental particles that makes up the universe is also one of the most mysterious.
Neutrinos, Italian for “little neutral one,” are everywhere. They emerged soon after the Big Bang, and, later on, from black holes, exploding stars, the nuclear reaction that fuels our sun, even from the interaction between cosmic radiation and Earth’s atmosphere. The tiny particles have very, very little mass—how much exactly, no one knows—and don’t abide by the same rules as other particles with which we’re more familiar. Unlike electrons, for example, neutrinos lack an electric charge, so the usual electromagnetic forces in space that jostle other particles have no effect on them. Neutrinos roam freely in space, zipping across great distances at nearly the speed of light without slowing down or changing direction. They pass through planets, stars, and whole galaxies, imperceptible to all.
Frederick Reines, the physicist who co-discovered neutrinos in 1956, described them as “the most tiny quantity of reality ever imagined by a human being.”
Scientists have spent decades designing and testing experiments to detect the elusive particles, particularly the high-energy kind that originates from mysterious sources in the depths of the cosmos. Five years ago, an observatory near the South Pole in Antarctica managed to detect, for the first time, neutrinos from beyond our solar system. They couldn’t figure out where they were coming from, though; the particles appeared to be bombarding Earth from random directions across the sky. Which was perfectly fine, for the most part. After all, just detecting these things was a tremendous scientific feat. Scientists buckled down and waited to find more.
The observatory, known as IceCube, detects several hundred neutrinos every day, but these are produced near Earth. On September 22, 2017, the instrument caught something unusual: a single neutrino unlike all the rest. It had far more energy than the others, which suggested it came from somewhere beyond the solar system.
This time, astronomers have figured out exactly where it came from.
An international team of astronomers have traced this single, high-energy cosmic neutrino to a supermassive black hole at the center of a galaxy nearly 4 billion light-years away. The discovery, announced Thursday and published in Science, marks the first-ever detection of the origin of cosmic neutrinos.
The story of this discovery begins billions of years ago, when the Earth was just beginning to take shape. In another corner of the universe, a gargantuan black hole churned at the heart of a sparkling galaxy. Black holes love to gobble up everything that comes near them, but they’re also quite good at spitting matter back out, creating high-energy streams that light up the darkness. The black hole spewed a jet of particles, including neutrinos, and sent them flying through space.
Skip ahead a little bit—to last year—and one of these neutrinos finally made it to Earth. It passed through the planet as it would anything else, but it didn’t go unnoticed. Scientists were ready for it.
They had drilled more than 80 narrow holes into the thick Antarctic ice, each stretching about 8,200 feet into the depths. Into the holes went more than 5,000 light sensors. Neutrinos, the scientists knew, are not governed by electromagnetic forces, but they can, on rare occasions, interact with the nucleus of an atom. This interaction generates a new kind of particle, and that particle produces a tiny flash of light the underground sensors can detect.
This is what happened in Antarctica. When the neutrino struck the ice and continued on its merry way, IceCube recorded the glint of its passing.
The IceCube astronomers sounded the alarm within minutes, galvanizing their colleagues around the world to help. The flash of light they detected gave them a sense of the direction the neutrino had come from. They broadcast these coordinates to their peers, who aimed telescopes at the relevant slice of the sky, scanning it in virtually every wave on the electromagnetic spectrum. Meanwhile, the IceCube team pored over archival telescope observations of the region. They found more than 600 potential candidates for the source of the neutrino.
After countless analyses were run, only one candidate was left standing: a special kind of galaxy called a blazar. Blazars house supermassive black holes that spray jets of high-energy particles pointed directly toward Earth. This one, known as TXS 0506+056, is located in the constellation Orion, visible in the night sky throughout the world.
When the IceCube team sorted through their old data, they found that at least a dozen other, less energetic neutrinos had originated from this distant region from late 2014 to early 2015. Data from other telescopes further backed up their discovery. One of NASA’s space telescopes, Fermi, detected a flare of high-energy gamma rays that seemed to come from the blazar and followed a similar path to the neutrino. So did another observatory, known as MAGIC, based in the Canary Islands.
The discovery comes nearly a century after the Austrian American physicist Victor Hess described cosmic radiation, and posited that high-energy particles can originate from beyond the solar system.
Astronomers have heralded this discovery as the beginning of a new chapter in astronomy, one that will allow them to better understand neutrinos, the tiny messengers from across the universe. If the concept of neutrinos—and of blazars, and other astrophysical creatures with funky names—feels difficult to grasp, consider this: Neutrinos are everywhere, and they pass through everything, all the time. This includes your little place on Earth: your home, your body, the membranes that enclose the cells that make you you. About 100 trillion neutrinos pass through your body every second. You can’t feel them, but perhaps you can feel some wonder in knowing they’re there, just passing through, a toll booth on an invisible journey without an end.
| <urn:uuid:3dbbabd1-79b9-4f6f-a870-47db36d794f5> | 3.96875 | 1,301 | Truncated | Science & Tech. | 41.988001 | 95,552,683 |
Since 2014, CCFC has been collaborating with Indigo Expeditions. Their work has focussed on assessing levels of amphibian and reptile diversity on our campus. To date they have identified 17 species of amphibian and 34 species of reptiles. Around 50% of the amphibian species are endangered and endemic to the region, as are many of the reptiles.
Over the next few years of Indigo’s continued work we expect the number of species encountered here to increase, for example in the first half of 2018 alone, five species of snake have been encountered here for the first time.
The future work of Indigo Expeditions will focus on assessing how amphibians and reptiles are able to recolonize our agroecology and reforestry parcels (as seen above: this Keeled Helmeted Iguana was found in our Amaranth and Dalia garden). We are thrilled to maintain this long-term collaboration with Indigo, their work helps to inform our conservation work and always keeps us inspired. | <urn:uuid:4e3f55a4-469f-43ab-a74b-2d3f1bb3719c> | 2.609375 | 204 | About (Org.) | Science & Tech. | 33.47 | 95,552,684 |
Semiarid grassland responses to short-term variation in water availability
Standing crop and species composition in semiarid grassland are linked to long-term patterns of water availability, but grasslands are characterized by large single-season variability in rainfall. We tested whether a single season of altered water availability influenced the proportions of grasses and shrubs in a semiarid grassland near the northern edge of the North American Great Plains. We studied stands of the clonal shrub snowberry (Symphoricarpos occidentalis) and adjacent grassland dominated by the native grasses Stipa spartea and Bouteloua gracilis. Rain was excluded and water supplied in amounts corresponding to years of low, medium, and high rainfall, producing a 2 − 4-fold range in monthly precipitation among water supply treatments. There were ten replicate plots of each water treatment in both snowberry stands and grassland. Grass standing crop increased significantly with water availability in grassland but not inside snowberry stands. Total standing crop and shrub stem density increased significantly with water supply, averaged across both communities. In contrast, water had no effect on shrub standing crop or light penetration. In summary, our finding that water has significant effects on a subset of components of grassland vegetation is consistent with long-term, correlational studies, but we also found that a single season of altered water supply had no effect on other important aspects of the ecosystem.
| <urn:uuid:bc5d00f2-e5be-448c-af72-a186042303ac> | 2.734375 | 532 | Truncated | Science & Tech. | 46.934367 | 95,552,690 |
NASA's Plans Lack the Cash
Can the commercial sector rescue the U.S. human spaceflight program?
On Tuesday, after months of deliberation, the independent committee charged with reviewing the future of the U.S. human space program released a summary report of its findings, a document that will guide key decisions that lie ahead for the Obama administration.
According to the report, the current crisis facing NASA lies with its budget, and not with technical or programmatic issues. “The report clearly stated that the current program is not executable or sustainable with the budget that we have,” says Scott Pace, director of the Space Policy Institute at George Washington University, in DC.
The report was issued by the Augustine panel, named after its chair Norman Augustine, a retired chairman and CEO of Lockheed Martin. It recommends extending the Space Shuttle to 2011 to complete its remaining flights and extending the life of the International Space Station (ISS) to 2020 to ensure that the U.S. and international partners get a return on their investment. Crucially, the report also suggests utilizing the commercial sector for unmanned and potentially manned missions to reduce government costs.
NASA’s current program, called Constellation, calls for sending humans to the ISS, the moon, Mars, and beyond. The plan includes developing a new launch system and a new crew vehicle, called Ares and Orion respectively, to replace the aging Space Shuttle.
The committee’s report puts forth five alternatives for human exploration of the solar system: continue with the current Constellation program; slow down and stretch out the Constellation program; focus on extending the life of the ISS to 2020, and develop a smaller version of the Ares V heavy-lift rocket for moon missions; extend the Space Shuttle to 2015 and the ISS to 2020 using either commercial services, a lighter version of the Ares V, or a shuttle-derived concept; and sending astronauts on deep-fly-bys of the moon, asteroids, and Mars.
The committee has stated that Mars is “unquestionably the most scientifically interesting destination in the inner solar system…but it is not an easy first place to visit with existing technologies and without substantial investment of resources.” Therefore, it recommends that the U.S. travel to the moon first or follow a “flexible path” option–in other words, embark on a series of deep-space rendezvous and fly-by missions before attempting to land astronauts on Mars.
James Oberg, a space expert and former NASA engineer, says that the report’s recommendation to develop commercial orbital access is central to some of the options. “There are some remarkable orbital vehicles that are being designed by the private sector,” he says.
“If we are to have a spaceflight program with the purpose of sending humans beyond low earth orbit, we need more money,” says John Logsdon, who served on the Columbia Accident Investigation Board and is the founder and former director of the Space Policy Institute. “If the budget is not increased, then we need to lower our goals, which [the committee] would call disappointing,” says Logsdon.
But Logsdon agrees with Oberg, adding that “the panel members looked at the commercial competitors and said yes, we think they can do the job. [The report] is an endorsement of commercial options.”
NASA's current budget for fiscal 2010 is approximately $18.6 billion, an increase from fiscal 2009, but the human space exploration program has received $3.4 billion less than was suggested by the previous administration. In addition, the budget's profile through 2020 is around $80 billion, which is $28 billion less than what the agency was told it could expect four years ago, when it devised the Constellation program.
“If you add in the $3 billion for the years 2011 to 2013 and put back in the projected inflation of 2.4 percent instead of 1.36 percent, then all the options that the Augustine Committee came up with are affordable,” says Pace, who was assistant director for space and aeronautics in the White House Office of Science and Technology Policy under former President George W. Bush.
Pace says he does not see any alternatives that are more attractive than the current Constellation program. “If the technical program is not broken, then do you change the policy, or do you change the budget? My opinion is you change the budget.” He adds that the current policy has been endorsed by two different congresses, under the NASA authorization bill in fiscal 2005 and 2008, and “is as solid of a policy as you are going to get.”
However, the Constellation program, which calls for developing the Ares I rocket for flights to the ISS by 2016 and building the Orion crew capsule to return humans to the moon by 2020, has attracted criticism. Logsdon says it’s clear that the committee does not think the Ares I is a good idea and that the most feasible date for moon landings would be mid-2020s.
Pace argues that the criticism of Ares I obscures deeper questions. “Are we willing to be dependent on the Russians for a longer period of time? Or are we willing to bet that commercial capabilities will arrive on time?”
Among the other options put forth by the panel, Oberg says that the flexible option is particularly interesting. “This could be the breakthrough path to develop new technologies for human exploration, as opposed to the favored ‘Apollo on steroids’ approach,” he says.
The panel also mentioned using a shuttle-derived launch vehicle, although most experts agree that this option would, in the end, be more expensive and leave the U.S. without an adequate heavy launch vehicle. It would only be viable if the Obama administration decided not to increase NASA’s budget.
The committee concludes that “no plan compatible with the FY 2010 budget profile permits human exploration to continue in any meaningful way.” The question the Obama administration will have to answer, says Pace, is “what sort of space program do we want to have, and what are we willing to pay?”
| <urn:uuid:cde511f4-639a-416f-a16b-26d6ee0c7a53> | 2.65625 | 1,311 | Truncated | Science & Tech. | 47.107692 | 95,552,708 |
Several algorithms for enumerating points in an unbounded plane
This is not a library or a tool. I am not even providing a Makefile or build instructions.
Four different ways of mapping a single integer to a point in a plane.
- zigzag ( only block 0 <= x,y <= w )
- triangle shape ( includes only the x,y >=0 quadrant )
- diamond shape ( full plane )
- spiral ( full plane )
zigzag and spiral always make steps of distance 1;
diamond and triangle make steps of either of two different lengths.
Each enumeration algorithm implements two functions:
one converts an integer to a point, the other (pt2n) converts a point back to an integer.
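For illustration, here is a sketch (assumed, not the repository's code) of a triangle-style enumeration of the quadrant x,y >= 0 using the Cantor diagonal order; the function name n2pt is assumed here as the inverse of pt2n:

```cpp
#include <cstdint>
#include <cstdio>
#include <utility>

// Map an index n to a point on the anti-diagonals of the quadrant x,y >= 0.
std::pair<int64_t, int64_t> n2pt(int64_t n) {
    // Find the anti-diagonal d containing index n: d(d+1)/2 <= n < (d+1)(d+2)/2.
    int64_t d = 0;
    while ((d + 1) * (d + 2) / 2 <= n) ++d;
    int64_t r = n - d * (d + 1) / 2;   // offset along the diagonal
    return {r, d - r};                 // walk from (0,d) towards (d,0)
}

// Inverse mapping: point back to its index.
int64_t pt2n(int64_t x, int64_t y) {
    int64_t d = x + y;                 // which anti-diagonal
    return d * (d + 1) / 2 + x;        // start of diagonal + offset
}

int main() {
    for (int64_t n = 0; n < 10; ++n) {
        auto [x, y] = n2pt(n);
        std::printf("%lld -> (%lld,%lld) -> %lld\n",
                    (long long)n, (long long)x, (long long)y,
                    (long long)pt2n(x, y));   // round-trips back to n
    }
}
```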
- add block for enumerating entire plane in block shape.
- add half-plane enumerator
- add volume enumerators:
- octant in tetrahedron shape
- space in cube shape
Willem Hengeveld email@example.com | <urn:uuid:391e5eda-0964-4576-842b-58de8836fa76> | 2.875 | 209 | Product Page | Software Dev. | 39.368823 | 95,552,718 |
Operators in C++
An operator is a symbol that tells the compiler to perform specific mathematical or logical manipulations. C++ is rich in built-in operators and provides the following types of operators −
- Arithmetic Operators
- Relational Operators
- Logical Operators
- Bitwise Operators
- Assignment Operators
- Misc Operators
This chapter will examine the arithmetic, relational, logical, bitwise, assignment and other operators one by one.
The following arithmetic operators are supported by the C++ language −
Assume variable A holds 10 and variable B holds 20, then −
|Operator||Description||Example|
|+||Adds two operands||A + B will give 30|
|-||Subtracts second operand from the first||A - B will give -10|
|*||Multiplies both operands||A * B will give 200|
|/||Divides numerator by denominator||B / A will give 2|
|%||Modulus operator; gives the remainder after an integer division||B % A will give 0|
|++||Increment operator, increases integer value by one||A++ will give 11|
|--||Decrement operator, decreases integer value by one||A-- will give 9|
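A short runnable example consistent with the table above (increment and decrement shown in post- and pre- forms):

```cpp
#include <iostream>

int main() {
    int A = 10, B = 20;

    std::cout << "A + B = " << A + B << '\n';   // 30
    std::cout << "A - B = " << A - B << '\n';   // -10
    std::cout << "A * B = " << A * B << '\n';   // 200
    std::cout << "B / A = " << B / A << '\n';   // 2
    std::cout << "B % A = " << B % A << '\n';   // 0

    A++;                                        // post-increment: A becomes 11
    --B;                                        // pre-decrement:  B becomes 19
    std::cout << "A = " << A << ", B = " << B << '\n';
}
```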
The following relational operators are supported by the C++ language −
Assume variable A holds 10 and variable B holds 20, then −
|Operator||Description||Example|
|==||Checks if the values of two operands are equal or not, if yes then condition becomes true.||(A == B) is not true.|
|!=||Checks if the values of two operands are equal or not, if values are not equal then condition becomes true.||(A != B) is true.|
|>||Checks if the value of left operand is greater than the value of right operand, if yes then condition becomes true.||(A > B) is not true.|
|<||Checks if the value of left operand is less than the value of right operand, if yes then condition becomes true.||(A < B) is true.|
|>=||Checks if the value of left operand is greater than or equal to the value of right operand, if yes then condition becomes true.||(A >= B) is not true.|
|<=||Checks if the value of left operand is less than or equal to the value of right operand, if yes then condition becomes true.||(A <= B) is true.|
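A short example printing the results from the table; the parentheses are needed because << binds tighter than the comparison operators:

```cpp
#include <iostream>

int main() {
    int A = 10, B = 20;

    std::cout << std::boolalpha;        // print true/false instead of 1/0
    std::cout << (A == B) << '\n';      // false
    std::cout << (A != B) << '\n';      // true
    std::cout << (A > B)  << '\n';      // false
    std::cout << (A < B)  << '\n';      // true
    std::cout << (A >= B) << '\n';      // false
    std::cout << (A <= B) << '\n';      // true
}
```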
The following logical operators are supported by the C++ language.
Assume variable A holds 1 and variable B holds 0, then −
|Operator||Description||Example|
|&&||Called Logical AND operator. If both the operands are non-zero, then condition becomes true.||(A && B) is false.|
|||||Called Logical OR Operator. If any of the two operands is non-zero, then condition becomes true.||(A || B) is true.|
|!||Called Logical NOT Operator. Used to reverse the logical state of its operand. If a condition is true, the Logical NOT operator will make it false.||!(A && B) is true.|
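A short example confirming the table, and showing that the logical operators short-circuit (the right-hand operand is skipped when the left-hand one already decides the result):

```cpp
#include <iostream>

int counter = 0;
bool touch() { ++counter; return true; }   // has a visible side effect

int main() {
    int A = 1, B = 0;

    std::cout << std::boolalpha;
    std::cout << (A && B) << '\n';     // false: B is zero
    std::cout << (A || B) << '\n';     // true:  A is non-zero
    std::cout << !(A && B) << '\n';    // true:  NOT of a false condition

    // Short-circuit: touch() is never called, because A alone decides A || ...
    bool r = A || touch();
    std::cout << r << ", counter = " << counter << '\n';   // true, counter = 0
}
```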
Bitwise operators work on bits and perform bit-by-bit operations. The truth tables for &, |, and ^ are as follows −
|p||q||p & q||p | q||p ^ q|
|0||0||0||0||0|
|0||1||0||1||1|
|1||0||0||1||1|
|1||1||1||1||0|
Assume if A = 60; and B = 13; now in binary format they will be as follows −
A = 0011 1100
B = 0000 1101
A&B = 0000 1100
A|B = 0011 1101
A^B = 0011 0001
~A = 1100 0011
The Bitwise operators supported by C++ language are listed in the following table. Assume variable A holds 60 and variable B holds 13, then −
|Operator||Description||Example|
|&||Binary AND Operator copies a bit to the result if it exists in both operands.||(A & B) will give 12 which is 0000 1100|
||||Binary OR Operator copies a bit if it exists in either operand.||(A | B) will give 61 which is 0011 1101|
|^||Binary XOR Operator copies the bit if it is set in one operand but not both.||(A ^ B) will give 49 which is 0011 0001|
|~||Binary Ones Complement Operator is unary and has the effect of 'flipping' bits.||(~A ) will give -61 which is 1100 0011 in 2's complement form due to a signed binary number.|
|<<||Binary Left Shift Operator. The left operands value is moved left by the number of bits specified by the right operand.||A << 2 will give 240 which is 1111 0000|
|>>||Binary Right Shift Operator. The left operands value is moved right by the number of bits specified by the right operand.||A >> 2 will give 15 which is 0000 1111|
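A short example reproducing the worked values for A = 60 and B = 13:

```cpp
#include <iostream>

int main() {
    unsigned int A = 60;    // 0011 1100
    unsigned int B = 13;    // 0000 1101

    std::cout << (A & B)  << '\n';   // 12  (0000 1100)
    std::cout << (A | B)  << '\n';   // 61  (0011 1101)
    std::cout << (A ^ B)  << '\n';   // 49  (0011 0001)
    std::cout << (A << 2) << '\n';   // 240 (1111 0000)
    std::cout << (A >> 2) << '\n';   // 15  (0000 1111)

    int signedA = 60;
    std::cout << ~signedA << '\n';   // -61 (1100 0011 in two's complement)
}
```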
The following assignment operators are supported by the C++ language −
|Operator||Description||Example|
|=||Simple assignment operator, Assigns values from right side operands to left side operand.||C = A + B will assign value of A + B into C|
|+=||Add AND assignment operator, It adds right operand to the left operand and assign the result to left operand.||C += A is equivalent to C = C + A|
|-=||Subtract AND assignment operator, It subtracts right operand from the left operand and assign the result to left operand.||C -= A is equivalent to C = C - A|
|*=||Multiply AND assignment operator, It multiplies right operand with the left operand and assign the result to left operand.||C *= A is equivalent to C = C * A|
|/=||Divide AND assignment operator, It divides left operand with the right operand and assign the result to left operand.||C /= A is equivalent to C = C / A|
|%=||Modulus AND assignment operator, It takes modulus using two operands and assign the result to left operand.||C %= A is equivalent to C = C % A|
|<<=||Left shift AND assignment operator.||C <<= 2 is same as C = C << 2|
|>>=||Right shift AND assignment operator.||C >>= 2 is same as C = C >> 2|
|&=||Bitwise AND assignment operator.||C &= 2 is same as C = C & 2|
|^=||Bitwise exclusive OR and assignment operator.||C ^= 2 is same as C = C ^ 2|
||=||Bitwise inclusive OR and assignment operator.||C |= 2 is same as C = C | 2|
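A short example stepping through the compound assignment operators:

```cpp
#include <iostream>

int main() {
    int A = 10;
    int C = 0;

    C = A;      std::cout << C << '\n';   // 10
    C += A;     std::cout << C << '\n';   // 20
    C -= A;     std::cout << C << '\n';   // 10
    C *= A;     std::cout << C << '\n';   // 100
    C /= A;     std::cout << C << '\n';   // 10
    C %= A;     std::cout << C << '\n';   // 0

    C = 3;
    C <<= 2;    std::cout << C << '\n';   // 12
    C >>= 1;    std::cout << C << '\n';   // 6
    C &= 2;     std::cout << C << '\n';   // 2
    C ^= 2;     std::cout << C << '\n';   // 0
    C |= 2;     std::cout << C << '\n';   // 2
}
```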
The following table lists some other operators that C++ supports.
|Sr.No||Operator & Description|
sizeof operator - returns the size of a variable. For example, sizeof(a), where 'a' is an integer, will return 4.
Condition ? X : Y
Conditional operator (?) - If Condition is true then it returns value of X otherwise returns value of Y.
Comma operator - causes a sequence of operations to be performed. The value of the entire comma expression is the value of the last expression of the comma-separated list.
. (dot) and -> (arrow)
Member operators - are used to reference individual members of classes, structures, and unions.
Casting operators - convert one data type to another. For example, int(2.2000) would return 2.
Pointer operator * - dereferences a pointer. For example, *var gives the value of the variable that the pointer var points to.
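A short example touching each of the miscellaneous operators above; the exact result of sizeof depends on the platform:

```cpp
#include <iostream>

int main() {
    int a = 4;
    double d = 2.2;

    std::cout << sizeof(a) << '\n';                    // commonly 4 (bytes in an int)
    std::cout << (a > 3 ? "big" : "small") << '\n';    // conditional operator: "big"

    int x = (a += 1, a * 2);   // comma operator: value of the last expression
    std::cout << x << '\n';    // a is now 5, so x is 10

    std::cout << int(d) << '\n';   // cast: 2

    int* p = &a;                   // & gives the address of a
    std::cout << *p << '\n';       // * dereferences the pointer: 5
}
```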
Operator Precedence in C++
Operator precedence determines the grouping of terms in an expression. This affects how an expression is evaluated. Certain operators have higher precedence than others; for example, the multiplication operator has higher precedence than the addition operator −
For example, x = 7 + 3 * 2; here, x is assigned 13, not 20, because operator * has higher precedence than +, so 3 * 2 is evaluated first and the result is then added to 7.
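The same example as runnable code, with parentheses shown as the way to override the default grouping:

```cpp
#include <iostream>

int main() {
    int x = 7 + 3 * 2;      // * binds tighter: 7 + 6
    int y = (7 + 3) * 2;    // parentheses override the default grouping

    std::cout << x << '\n';   // 13
    std::cout << y << '\n';   // 20
}
```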
Here, operators with the highest precedence appear at the top of the table, those with the lowest appear at the bottom. Within an expression, higher precedence operators will be evaluated first.
|Category||Operator||Associativity|
|Postfix||() -> . ++ --||Left to right|
|Unary||+ - ! ~ ++ -- (type) * & sizeof||Right to left|
|Multiplicative||* / %||Left to right|
|Additive||+ -||Left to right|
|Shift||<< >>||Left to right|
|Relational||< <= > >=||Left to right|
|Equality||== !=||Left to right|
|Bitwise AND||&||Left to right|
|Bitwise XOR||^||Left to right|
|Bitwise OR|||||Left to right|
|Logical AND||&&||Left to right|
|Logical OR||||||Left to right|
|Conditional||?:||Right to left|
|Assignment||= += -= *= /= %= >>= <<= &= ^= |=||Right to left|
|Comma||,||Left to right| | <urn:uuid:cde3bd5d-c468-4f01-8b85-d6376647dcaa> | 4.375 | 2,030 | Documentation | Software Dev. | 57.094224 | 95,552,740 |
ALMA gives scientists unique view of the sun
Far up in the Chilean highlands, surrounded by guanaco herds and cacti, a huddle of 66 antennas much like satellite dishes is now facing the sun.
Atacama Large Millimeter/submillimeter Array, more commonly known as the ALMA telescope, is able to register radiowaves of about one millimeter in length.
Radiowaves of this range will illuminate details of the solar atmosphere that cannot be seen with regular telescopes, and will give researchers a completely new view of the sun's physical processes.
– From images of this millimeter-wavelength radiation we will get detailed maps of both temperature and density that will help us better understand the properties of the solar atmosphere, says researcher Sven Wedemeyer at the Institute of Theoretical Astrophysics (ITA) at the University of Oslo to Titan.uio.no.
Accompanied with his colleagues at the ITA Solar Physics Group, he will now analyze six hours of observational data collected by the ALMA observatory in december.
Integrated understanding of the sun
You would maybe think that our own star is already well understood. But there are in fact a range of questions on fundamental processes of the sun that remain unanswered. One such question relates to the general dissipation of energy. By generating more direct measures of temperatures, ALMA will help the physicists to understand such mechanisms.
Another example relates to solar flares.
– Solar flares are caused by unstable magnetic fields that in sudden bursts release a lot of energy in the form of roentgen, gamma, UV, visible light, particles etc., that is flung out at great speeds. This may in turn cause northern lights if they reach the earth. This will be a big topic for ALMA, Wedemeyer explains.
For the Solar Physics Group processes involved in the chromosphere, the solar atmosphere's middle layer, are of particular interest.
– The majority of the solar radiation in the millimeter wavelenght range, comes from the chromosphere. However, the resolution of telescopes that are able to register these, has been limited. The chromosphere is poorly understood, and with the new telescope data we will get a more integrated picture of how the sun works.
– A giant technological leap
The Atacama desert was not a coincidental location for the billion-dollar telescope. At 5000 meters elevation, the highland plateau has both thin and dry air, which minimizes the risk of the radiowaves getting distorted by the atmosphere.
– But why are we referring to this as one telescope, when it in reality comprises 66 antennas?
– Every antenna actually works in a similar way to a satellite dish. On every dish there is a receiver that transforms radiowaves into electrical signals, Wedemeyer explains.
However, radio telescopes have limitations, he continues.
Telescope image resolution is directly affected by both the wavelength of the radiation being registered and the physical size of the lens or dish (the diameter). The ratio dictates that the larger the wavelengths you want to register, the larger the telescope needs to be in order to get an equivalent resolution.
Hence, telescopes using visible light are in general much better in resolution compared to radio telescopes. (1 millimeter radiowaves are about 1000 times larger than the wavelengths for visible light.)
In order to bypass this physical obstacle, ALMA utilizes a trick called interferometry.
– By spreading single antennas over a large surface area, you can simulate a very large telescope. The solar observations are done in a configuration corresponding to an approximately 500 meter wide telescope lens.
– ALMA also comprises more and better antennas than previous radio telescopes, yielding increased sensitivity. This is a giant technological leap, says the solar physicist.
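As a rough, assumed diffraction-limit estimate (not a figure quoted in the article), the angular resolution of an aperture of diameter D at wavelength λ is
$$\theta \approx 1.22\,\frac{\lambda}{D} \approx 1.22 \times \frac{10^{-3}\,\mathrm{m}}{500\,\mathrm{m}} \approx 2.4\times10^{-6}\ \mathrm{rad} \approx 0.5'',$$
which is why spreading the antennas over roughly 500 meters recovers sub-arcsecond detail at millimeter wavelengths.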
Publication within the year
Although ALMA previously has been mostly directed towards distant structures of the universe, it was constructed for observation of closer objects as well.
The ALMA "Solar Campaign" started with regular scientific observations December 2016. At the same time, the European Southern Observatory (ESO) has made the observational test data publicly available.
Wedemeyer and his colleagues are also generating simulations in their ALMA research, which they will compare to the new observational data. The main aim is to build models that can explain the general processes operating on the sun.
– Some components are very hard to model, but are crucial to include if you want to understand what is going on there, explains Wedemeyer.
What will emerge on the images will not only be relevant to our knowledge about our own sun, but also for our understanding of general processes operating on other stars.
The solar physicists anticipate publishing their first ALMA results within the year.
| <urn:uuid:b1c1dc8d-d01d-4469-b6ec-8edf88f1f6f0> | 3.921875 | 1,280 | News Article | Science & Tech. | 32.676083 | 95,552,747 |
What is ambient noise?
Ambient noise means acoustic signals originating from a variety of underwater sources, such as propeller cavitation, engine noises, animal sounds, wind, waves, and rain.
The sounds produced by the spotted boxfish, Ostracion meleagris, contribute to the ambient noise on Pacific reefs. (Photo: Hawai'i Coral Reef Network)
Reference: Coral Reef Information System – Glossary | <urn:uuid:cfc67edb-2c15-4f12-9634-fb5d364358cc> | 3.296875 | 90 | Knowledge Article | Science & Tech. | 6.741379 | 95,552,767 |
Utqiagvik, Where the Climate Has Changed
Utqiagvik (formerly Barrow), where climate change has already happened.
Photo by Ned Rozell
Two things happened on top of the world this week. In Utqiagvik (formerly Barrow), on January 22 the sun topped the horizon for the first time since mid-November.
The day before that, January 21, was the first time since Halloween the town’s thermometers recorded a below-normal daily average air temperature.
The returning daylight for the continent’s farthest north community is due to a predictable nod of the Earth back toward the Sun. Utqiagvik’s second day of direct sunlight, January 23, featured almost an hour’s increase from the day before. The town will have four hours of daylight by the end of January. By May 11, there will be no night.
Just as dramatic are the recent warm autumns and winters in Utqiagvik. While many people worldwide sense their favorite places are changing, residents of Utqiagvik use the past tense.
Photo by Ned Rozell
Craig George looks for migrating bowhead whales north of Utqiagvik in May 2010.
“The term is no longer ‘climate change’ at Utqiagvik. It is ‘climate changed.’ No doubt about it, based on my 40 years,” said biologist Craig George, who studies bowhead whales and other animals from his home in Utqiagvik.
George remembered back to October 1988, when three gray whales became trapped in Beaufort Sea ice just north of Point Barrow. The whales became a worldwide news story, as local rescuers used chainsaws to cut circular breathing holes in the sea ice, trying to lead the whales to open ocean.
“This year, we had crashing waves onshore and 34 degrees F on winter solstice,” he said. “It’s almost like a different planet.”
In December, NOAA scientists looking for the latest temperatures from Utqiagvik sensors found computer algorithms had flagged and removed November readings because they seemed so far off.
The average temperature for October through December 2017 was 15.6 degrees F, 12.2 degrees above normal and highest for that span in the last 98 years, according to NOAA climatologist Rick Thoman.
Since 2000, the average October temperature in Utqiagvik has increased 7.8 degrees F. November’s average temperature has increased 6.9 degrees and December’s 4.7.
Utqiagvik residents experienced above-normal average daily temperatures 77 percent of the year 2017, Thoman figured. It seems a different place.
“Barrow (Utqiagvik) presents a great example of a changed climate,” said climatologist John Walsh, chief scientist at the International Arctic Research Center.
Utqiagvik also had its latest ground freezeup ever this winter.
“Permafrost temperatures there at (four-foot) depth are 3 to 4 degrees C higher than at the same time last year, even though last year was also warmer than normal,” said Vladimir Romanovsky of UAF’s Geophysical Institute.
Sea ice that is forming later in the fall and covering less ocean is driving the warmth. Open ocean has a warming effect on the land around it. Utqiagvik is seeming more like Helsinki than a town on the Arctic Ocean.
The Chukchi Sea to the west of Utqiagvik did not ice over until about Jan. 1, 2018, the latest in the satellite record that goes back to the late 1970s. An average date the Chukchi Basin was ice-covered in the late 1980s was about November 20.
Where does Utqiagvik go from here? Residents say they hoped for a return to conditions before the 1990s, when the extreme warming began.
“I’ve been waiting, almost wishing, for the climate models to be wrong,” said George, who works for the North Slope Borough. “No question now. They’re right. Argue about the reason, but the fact is, our world has changed.” | <urn:uuid:ec6aacbe-f008-49a0-bd12-c430f193b402> | 2.84375 | 880 | News Article | Science & Tech. | 55.446829 | 95,552,768 |
Identified for the first time a mineral that until now was only present in meteorites
- News 31 July 2017 2394 hits
The study, published in European Journal of Mineralogy, affirms that the mineral is chladniite, a complex phosphate belonging to the fillowite group, which contains sodium, calcium, magnesium and iron, and has a trigonal structure. It has been found in a pegmatite, an igneous (magmatic) rock, formed from the slow cooling and solidification of magma.
Chladniite was first encountered in 1993, in the iron meteorite Carlton IIICD. So far, it had only been observed in this type of meteorites, characterized by having undergone fusion and differentiation processes such as igneous rocks. The name of the mineral is in honor of the German physicist and musician Ernst Florens Friedrich Chladni (1756-1827), pioneer in the study of meteorites, who defended its extraterrestrial origin.
The crystalline structure of the mineral has been determined by fixing a very thin section of the rock onto a glass substrate, and by passing synchrotron light through the set, with the technique called through-the-substrate X-ray-microdiffraction. This technique, driven by researchers from ICMAB, began in collaboration with the Spanish-Catalan line of the European Synchrotron Radiation Facility (ESRF) in Grenoble (France) and has subsequently been developed and refined together with researchers from the MSPD line of the ALBA Synchrotron.
Two of the innovative features of this characterization technique are both the use of a very fine glass substrate, which allows the easy location of the area to be irradiated, and the use of focalized synchrotron light, which allows enlightening a very small area of the rock and, therefore, the diffraction of individual crystals.
First terrestrial occurrence of the complex phosphate chladniite: crystal-structure refinement by synchrotron through-the-substrate microdiffraction
Vallcorba, Oriol; Casas, Lluís; Colombo, Fernando; Frontera, Carlos; Rius, Jordi
European Journal of Mineralogy Volume 29 Number 2 (2017), p. 287 - 293
MSPD beamline at ALBA Synchrotron
In the press: | <urn:uuid:13c9056e-4217-4b20-9287-370f29282325> | 3.53125 | 490 | News (Org.) | Science & Tech. | 14.623432 | 95,552,784 |
The speech by former US Vice-President Al Gore was apocalyptic. ‘The North Polar ice cap is falling off a cliff,’ he said. ‘It could be completely gone in summer in as little as seven years. Seven years from now.’
Those comments came in 2007 as Mr Gore accepted the Nobel Peace Prize for his campaigning on climate change.
But seven years after his warning, The Mail can reveal that, far from vanishing, the Arctic ice cap has expanded for the second year in succession – with a surge, depending on how you measure it, of between 43 and 63 per cent since 2012.
To put it another way, an area the size of Alaska, America’s biggest state, was open water two years ago, but is again now covered by ice.
The most widely used measurements of Arctic ice extent are the daily satellite readings issued by the US National Snow and Ice Data Center, which is co-funded by Nasa. These reveal that – while the long-term trend still shows a decline – last Monday, August 25, the area of the Arctic Ocean with at least 15 per cent ice cover was 5.62 million square kilometers.
This was the highest level recorded on that date since 2006 (see graph, right), and represents an increase of 1.71 million square kilometres over the past two years – an impressive 43 per cent.
Other figures from the Danish Meteorological Institute suggest that the growth has been even more dramatic. Using a different measure, the area with at least 30 per cent ice cover, these reveal a 63 per cent rise – from 2.7 million to 4.4 million square kilometers.
The satellite images published here are taken from a further authoritative source, the University of Illinois’s Cryosphere project.
They show that as well as becoming more extensive, the ice has grown more concentrated, with the purple areas – denoting regions where the ice pack is most dense – increasing markedly.
Crucially, the ice is also thicker, and therefore more resilient to future melting. Professor Andrew Shepherd, of Leeds University, an expert in climate satellite monitoring, said: ‘It is clear from the measurements we have collected that the Arctic sea ice has experienced a significant recovery in thickness over the past year.
‘It seems that an unusually cool summer in 2013 allowed more ice to survive through to last winter. This means that the Arctic sea ice pack is thicker and stronger than usual, and this should be taken into account when making predictions of its future extent.’
Read More: The Daily Mail | <urn:uuid:a80d3d1d-c323-4d73-86af-a35b049447de> | 3.359375 | 523 | Personal Blog | Science & Tech. | 49.394948 | 95,552,785 |
5 Benefits of Immutable Objects Worth Considering for Your Next Project
Roger Jin Jul 17 '17
When first learning object oriented programming (OOP), you typically create a very basic object and implement getters and setters. From that point forward, objects are this magical world of malleable data. However, you’d be surprised to find that sometimes removing the ability to alter the data in an object can lead to more straightforward and easier to understand code. This is the case with immutable objects.
In programming, an immutable object is an object whose state cannot be modified after it is created. While at first this may not seem very useful as often times the getters and setters are the first functions created for an object, there are many clear benefits to immutable objects.
Given that by definition an immutable object cannot be changed, you will not have any synchronization issues when using them. No matter which thread is accessing the version of an object, it is guaranteed to have the same state it originally had. If one thread needs a new version or altered version of that object, it must create a new one therefore any other threads will still have the original object. This leads to simpler, more thread safe code.
PHP specifically does not support threads out of the box, so this benefit isn’t as immediately obvious when using that language.
No invalid state
Once you are given an immutable object and verify its state, you know it will always remain safe. No other thread or background process in your program will be able to change that object without your direct knowledge. Furthermore, all of the data needed to have a complete object should be provided. One situation in which this may be extremely useful it programs that need to have high security. For instance, if given an object with a filename to write to you know nothing can change that location on you.
For example, the code below outlines how constructing an immutable object may look. Since the
SimplePerson class requires all of it’s data in the constructor, we can be sure all the necessary data to have a valid object will always be present. If instead we had used setters for this data, there is no guarantee that a function like
setAge($age) would have been called by the time we received the object.
create(new SimplePerson(‘Joe’, 42));
When passing immutable objects around your code base, you can better encapsulate your methods. Since you know the object won’t change, anytime you pass this object to another method you can be positive that doing so will not alter the original state of that object in the calling code.
This also means these objects can always be passed by reference, and there is no need to have to worry about solutions like defensive copying.
Furthermore, debugging will be easier when any issues arise. You can be certain that the state the object is given to you in doesn’t change and therefore more easily track down where bugs started from.
The code below helps outline this benefit. In this code, both auto body shops we have created will have a Mercedes car associated with it since our object was not immutable. This was not the intended result, it was simple a bad side effect of using mutable objects.
setMake(‘BMW’); $autoShopOne = new AutoBodyShop($car); $car->setMake(‘Mercedes’); $autoShopTwo = new AutoBodyShop($car);
Simpler to test
Going hand in hand with better encapsulation is code that is simpler to test. The benefits of testable code are obvious and lead to a more robust and error free code base. When your code is designed in a way that lead to less side effects, there are less confusing code paths to track down.
More readable and maintainable code
Any piece of code working against an immutable object does not need to be concerned with affecting other areas of your code using that object. This means there are less intermingled and moving parts in your code. Therefore, your code will be cleaner and easier to understand.
In conclusion, making use of immutability has clear benefits. If done correctly it can lead to more understandable and testable code which are large wins in any codebase. Depending on the language you are using these benefits may be larger than others. In multi-threaded programs for instance immutability will leave much less room for confusion and side effects in your code. However, regardless of the language you’re sure to get some advantages from this functionality.
This was originally posted here. | <urn:uuid:af009e11-bcd1-4376-9eb6-96432e130009> | 2.71875 | 942 | Listicle | Software Dev. | 41.881683 | 95,552,789 |
In biology, a phylum (//; plural: phyla) is a level of classification or taxonomic rank below Kingdom and above Class. Traditionally, in botany the term division has been used instead of phylum, although the International Code of Nomenclature for algae, fungi, and plants accepts the terms as equivalent. Depending on definitions, the animal kingdom Animalia or Metazoa contains approximately 32 phyla, the plant kingdom Plantae contains about 14, and the fungus kingdom Fungi contains about 8 phyla. Current research in phylogenetics is uncovering the relationships between phyla, which are contained in larger clades, like Ecdysozoa and Embryophyta.
The term phylum was coined in 1866 by Ernst Haeckel from the Greek phylon (φῦλον, "race, stock"), related to phyle (φυλή, "tribe, clan"). In plant taxonomy, August W. Eichler (1883) classified plants into five groups named divisions, a term that remains in use today for groups of plants, algae and fungi. The definitions of zoological phyla have changed from their origins in the six Linnaean classes and the four embranchements of Georges Cuvier.
Informally, phyla can be thought of as groupings of organisms based on general specialization of body plan. At its most basic, a phylum can be defined in two ways: as a group of organisms with a certain degree of morphological or developmental similarity (the phenetic definition), or a group of organisms with a certain degree of evolutionary relatedness (the phylogenetic definition). Attempting to define a level of the Linnean hierarchy without referring to (evolutionary) relatedness is unsatisfactory, but a phenetic definition is useful when addressing questions of a morphological nature—such as how successful different body plans were.
Definition based on genetic relation
The most important objective measure in the above definitions is the "certain degree" that defines how different organisms need to be to be members of different phyla. The minimal requirement is that all organisms in a phylum should be clearly more closely related to one another than to any other group. Even this is problematic because the requirement depends on knowledge of organisms' relationships: as more data become available, particularly from molecular studies, we are better able to determine the relationships between groups. So phyla can be merged or split if it becomes apparent that they are related to one another or not. For example, the bearded worms were described as a new phylum (the Pogonophora) in the middle of the 20th century, but molecular work almost half a century later found them to be a group of annelids, so the phyla were merged (the bearded worms are now an annelid family). On the other hand, the highly parasitic phylum Mesozoa was divided into two phyla (Orthonectida and Rhombozoa) when it was discovered the Orthonectida are probably deuterostomes and the Rhombozoa protostomes.
This changeability of phyla has led some biologists to call for the concept of a phylum to be abandoned in favour of cladistics, a method in which groups are placed on a "family tree" without any formal ranking of group size.
Definition based on body plan
A definition of a phylum based on body plan has been proposed by paleontologists Graham Budd and Sören Jensen (as Haeckel had done a century earlier). The definition was posited because extinct organisms are hardest to classify: they can be offshoots that diverged from a phylum's line before the characters that define the modern phylum were all acquired. By Budd and Jensen's definition, a phylum is defined by a set of characters shared by all its living representatives.
This approach brings some small problems—for instance, ancestral characters common to most members of a phylum may have been lost by some members. Also, this definition is based on an arbitrary point of time: the present. However, as it is character based, it is easy to apply to the fossil record. A greater problem is that it relies on a subjective decision about which groups of organisms should be considered as phyla.
The approach is useful because it makes it easy to classify extinct organisms as "stem groups" to the phyla with which they bear the most resemblance, based only on the taxonomically important similarities. However, proving that a fossil belongs to the crown group of a phylum is difficult, as it must display a character unique to a sub-set of the crown group. Furthermore, organisms in the stem group of a phylum can possess the "body plan" of the phylum without all the characteristics necessary to fall within it. This weakens the idea that each of the phyla represents a distinct body plan.
A classification using this definition may be strongly affected by the chance survival of rare groups, which can make a phylum much more diverse than it would be otherwise.
This section needs additional citations for verification. (February 2013) (Learn how and when to remove this template message)
Total numbers are estimates; figures from different authors vary wildly, not least because some are based on described species, some on extrapolations to numbers of undescribed species. For instance, around 25,000–27,000 species of nematodes have been described, while published estimates of the total number of nematode species include 10,000–20,000; 500,000; 10 million; and 100 million.
|Phylum||Meaning||Common name||Distinguishing characteristic||Species described|
|Acanthocephala||Thorny head||Thorny-headed worms:278||Reversible spiny proboscis that bears many rows of hooked spines||approx. 1,100|
|Annelida||Little ring:306||Annelids||Multiple circular segment||17,000+ extant|
|Arthropoda||Jointed foot||Arthropods||Segmented bodies and jointed limbs, with Chitin exoskeleton|| 20,000+ extinct1,250,000+ extant;|
|Brachiopoda||Arm foot:336||Lampshells:336||Lophophore and pedicle||300-500 extant; 12,000+ extinct|
|Bryozoa||Moss animals||Moss animals, sea mats, ectoprocts:332||Lophophore, no pedicle, ciliated tentacles, anus outside ring of cilia||6,000 extant|
|Chaetognatha||Longhair jaw||Arrow worms:342||Chitinous spines either side of head, fins||approx. 100 extant|
|Chordata||With a cord||Chordates||Hollow dorsal nerve cord, notochord, pharyngeal slits, endostyle, post-anal tail||approx. 55,000+|
|Cnidaria||Stinging nettle||Cnidarians||Nematocysts (stinging cells)||approx. 16,000|
|Ctenophora||Comb bearer||Comb jellies:256||Eight "comb rows" of fused cilia||approx. 100-150 extant|
|Cycliophora||Wheel carrying||Symbion||Circular mouth surrounded by small cilia, sac-like bodies||3+|
|Echinodermata||Spiny skin||Echinoderms:348||Fivefold radial symmetry in living forms, mesodermal calcified spines|| approx. 13,000 extinctapprox. 7,500 extant;|
|Entoprocta||Inside anus:292||Goblet worms||Anus inside ring of cilia||approx. 150|
|Gastrotricha||Hairy stomach:288||Gastrotrich worms||Two terminal adhesive tubes||approx. 690|
|Gnathostomulida||Jaw orifice||Jaw worms:260||approx. 100|
|Hemichordata||Half cord:344||Acorn worms, hemichordates||Stomochord in collar, pharyngeal slits||approx. 130 extant|
|Kinorhyncha||Motion snout||Mud dragons||Eleven segments, each with a dorsal plate||approx. 150|
|Loricifera||Corset bearer||Brush heads||Umbrella-like scales at each end||approx. 122|
|Micrognathozoa||Tiny jaw animals||Limnognathia||Accordion-like extensible thorax||1|
|Mollusca||Soft:320||Mollusks / molluscs||Muscular foot and mantle round shell|| 80,000+ extinct85,000+ extant;|
|Nematoda||Thread like||Round worms, thread worms:274||Round cross section, keratin cuticle||25,000|
|Nematomorpha||Thread form:276||Horsehair worms, Gordian worms:276||approx. 320|
|Nemertea||A sea nymph:270||Ribbon worms, Rhynchocoela:270||approx. 1,200|
|Onychophora||Claw bearer||Velvet worms:328||Legs tipped by chitinous claws||approx. 200 extant|
|Orthonectida||Straight swimming:268||Orthonectids:268||Single layer of ciliated cells surrounding a mass of sex cells||approx. 26|
|Phoronida||Zeus's mistress||Horseshoe worms||U-shaped gut||11|
|Placozoa||Plate animals||Trichoplaxes:242||Differentiated top and bottom surfaces, two ciliated cell layers, amoeboid fiber cells in between||1|
|Platyhelminthes||Flat worm:262||Flatworms:262||approx. 29,500|
|Porifera [a]||Pore bearer||Sponges:246||Perforated interior wall||10,800 extant|
|Priapulida||Little Priapus||Penis worms||approx. 20|
|Rhombozoa||Lozenge animal||Rhombozoans:264||Single anteroposterior axial cell surrounded by ciliated cells||100+|
|Rotifera||Wheel bearer||Rotifers:282||Anterior crown of cilia||approx. 2,000|
|Tardigrada||Slow step||Water bears, moss piglets:324||Four-segmented body and head||1,000+|
|Xenacoelomorpha||Strange form without gut||—||Ciliated deuterostome||400+|
The kingdom Plantae is defined in various ways by different biologists (see Current definitions of Plantae). All definitions include the living embryophytes (land plants), to which may be added the two green algae divisions, Chlorophyta and Charophyta, to form the clade Viridiplantae. The table below follows the influential (though contentious) Cavalier-Smith system in equating "Plantae" with Archaeplastida, a group containing Viridiplantae and the algal Rhodophyta and Glaucophyta divisions.
The definition and classification of plants at the division level also varies from source to source, and has changed progressively in recent years. Thus some sources place horsetails in division Arthrophyta and ferns in division Pteridophyta, while others place them both in Pteridophyta, as shown below. The division Pinophyta may be used for all gymnosperms (i.e. including cycads, ginkgos and gnetophytes), or for conifers alone as below.
Since the first publication of the APG system in 1998, which proposed a classification of angiosperms up to the level of orders, many sources have preferred to treat ranks higher than orders as informal clades. Where formal ranks have been provided, the traditional divisions listed below have been reduced to a very much lower level, e.g. subclasses.
|Other algae (Biliphyta)|
|Division||Meaning||Common name||Distinguishing characteristics||Species described|
|Anthocerotophyta||Anthoceros-like plant||Hornworts||Horn-shaped sporophytes, no vascular system||100-300+|
|Bryophyta||Bryum-like plant, moss plant||Mosses||Persistent unbranched sporophytes, no vascular system||approx. 12,000|
|Charophyta||Chara-like plant||Charophytes||approx. 1,000|
|Chlorophyta||Yellow-green plant:200||Chlorophytes||approx. 7,000|
|Cycadophyta||Cycas-like plant, palm-like plant||Cycads||Seeds, crown of compound leaves||approx. 100-200|
|Ginkgophyta||Ginkgo-like plant||Ginkgo, maidenhair tree||Seeds not protected by fruit (single living species)||only 1 extant; 50+ extinct|
|Gnetophyta||Gnetum-like plant||Gnetophytes||Seeds and woody vascular system with vessels||approx. 70|
|Clubmosses & spikemosses||Microphyll leaves, vascular system||1,290 extant|
|Magnoliophyta||Magnolia-like plant||Flowering plants, angiosperms||Flowers and fruit, vascular system with vessels||300,000|
|Liverworts||Ephemeral unbranched sporophytes, no vascular system||approx. 9,000|
|Conifers||Cones containing seeds and wood composed of tracheids||629 extant|
|Pteridophyta||Pteris-like plant, fern plant||Ferns & horsetails||Prothallus gametophytes, vascular system||approx. 9,000 (not including lycophytes)|
|Rhodophyta||Rose plant||Red algae||approx. 7,000|
|Division||Meaning||Common name||Distinguishing characteristics|
|Ascomycota||Bladder fungus:396||Ascomycetes,:396 sac fungi|
|Basidiomycota||Small base fungus:402||Basidiomycetes:402|
|Blastocladiomycota||Offshoot branch fungus||Blastoclads|
|Chytridiomycota||Little cooking pot fungus||Chytrids|
|Glomeromycota||Ball of yarn fungus:394||Glomeromycetes, AM fungi:394|
|Neocallimastigomycota||New beautiful whip fungus||Neocallimastigomycetes|
Phylum Microsporidia is generally included in kingdom Fungi, though its exact relations remain uncertain, and it is considered a protozoan by the International Society of Protistologists (see Protista, below). Molecular analysis of Zygomycota has found it to be polyphyletic (its members do not share an immediate ancestor), which is considered undesirable by many biologists. Accordingly, there is a proposal to abolish the Zygomycota phylum. Its members would be divided between phylum Glomeromycota and four new subphyla incertae sedis (of uncertain placement): Entomophthoromycotina, Kickxellomycotina, Mucoromycotina, and Zoopagomycotina.
Kingdom Protista (or Protoctista) is included in the traditional five- or six-kingdom model, where it can be defined as containing all eukaryotes that are not plants, animals, or fungi.:120 Protista is a polyphyletic taxon (it includes groups not directly related to one another), which is less acceptable to present-day biologists than in the past. Proposals have been made to divide it among several new kingdoms, such as Protozoa and Chromista in the Cavalier-Smith system.
Protist taxonomy has long been unstable, with different approaches and definitions resulting in many competing classification schemes. The phyla listed here are used for Chromista and Protozoa by the Catalogue of Life, adapted from the system used by the International Society of Protistologists.
|Phylum/Division||Meaning||Common name||Distinguishing characteristics||Example|
|Euglenozoa||True eye animal||Euglena|
|Foraminifera||Hole bearers||Forams||Complex shells with one or more chambers||Forams|
Currently there are 29 phyla accepted by List of Prokaryotic names with Standing in Nomenclature (LPSN)
- Acidobacteria, phenotipically diverse and mostly uncultured
- Actinobacteria, High-G+C Gram positive species
- Aquificae, only 14 thermophilic genera, deep branching
- Caldiserica, formerly candidate division OP5, Caldisericum exile is the sole representative
- Chlamydiae, only 6 genera
- Chlorobi, only 7 genera, green sulphur bacteria
- Chloroflexi, green non-sulphur bacteria
- Chrysiogenetes, only 3 genera (Chrysiogenes arsenatis, Desulfurispira natronophila, Desulfurispirillum alkaliphilum)
- Cyanobacteria, also known as the blue-green algae
- Deinococcus-Thermus, Deinococcus radiodurans and Thermus aquaticus are "commonly known" species of this phyla
- Elusimicrobia, formerly candidate division Thermite Group 1
- Firmicutes, Low-G+C Gram positive species, such as the spore-formers Bacilli (aerobic) and Clostridia (anaerobic)
- Lentisphaerae, formerly clade VadinBE97
- Proteobacteria, the most known phyla, containing species such as Escherichia coli or Pseudomonas aeruginosa
- Spirochaetes, species include Borrelia burgdorferi, which causes Lyme disease
- Tenericutes, alternatively class Mollicutes in phylum Firmicutes (notable genus: Mycoplasma)
- Thermotogae, deep branching
Currently there are 5 phyla accepted by List of Prokaryotic names with Standing in Nomenclature (LPSN).
- Crenarchaeota, second most common archaeal phylum
- Euryarchaeota, most common archaeal phylum
- Nanoarchaeota, ultra-small symbiotes, single known species
- McNeill, J.; et al., eds. (2012). International Code of Nomenclature for algae, fungi, and plants (Melbourne Code), Adopted by the Eighteenth International Botanical Congress Melbourne, Australia, July 2011 (electronic ed.). International Association for Plant Taxonomy. Retrieved 2017-05-14.
- "Life sciences". The American Heritage New Dictionary of Cultural Literacy (third ed.). Houghton Mifflin Company. 2005. Retrieved 2008-10-04.
Phyla in the plant kingdom are frequently called divisions.
- Berg, Linda R. (2 March 2007). Introductory Botany: Plants, People, and the Environment (2 ed.). Cengage Learning. p. 15. ISBN 9780534466695. Retrieved 2012-07-23.
- Valentine 2004, p. 8.
- Naik, V.N. (1984). Taxonomy of Angiosperms. Tata McGraw-Hill. p. 27. ISBN 9780074517888.
- Collins AG, Valentine JW (2001). "Defining phyla: evolutionary pathways to metazoan body plans." Evol. Dev. 3: 432-442.
- Valentine, James W. (2004). On the Origin of Phyla. Chicago: University Of Chicago Press. p. 7. ISBN 0-226-84548-6.
Classifications of organisms in hierarchical systems were in use by the seventeenth and eighteenth centuries. Usually organisms were grouped according to their morphological similarities as perceived by those early workers, and those groups were then grouped according to their similarities, and so on, to form a hierarchy.
- Budd, G.E.; Jensen, S. (May 2000). SCIOS "A critical reappraisal of the fossil record of the bilaterian phyla" Check
|url=value (help). Biological Reviews. 75 (2): 253–295. doi:10.1111/j.1469-185X.1999.tb00046.x. PMID 10881389. Retrieved 2007-05-26.
- Rouse G.W. (2001). "A cladistic analysis of Siboglinidae Caullery, 1914 (Polychaeta, Annelida): formerly the phyla Pogonophora and Vestimentifera". Zoological Journal of the Linnean Society. 132 (1): 55–80. doi:10.1006/zjls.2000.0263.
- Pawlowski J, Montoya-Burgos JI, Fahrni JF, Wüest J, Zaninetti L (October 1996). "Origin of the Mesozoa inferred from 18S rRNA gene sequences". Mol. Biol. Evol. 13 (8): 1128–32. doi:10.1093/oxfordjournals.molbev.a025675. PMID 8865666.
- Budd, G. E. (September 1998). "Arthropod body-plan evolution in the Cambrian with an example from anomalocaridid muscle". Lethaia. Blackwell Synergy. 31 (3): 197–210. doi:10.1111/j.1502-3931.1998.tb00508.x.
- Briggs, D. E. G.; Fortey, R. A. (2005). "Wonderful strife: systematics, stem groups, and the phylogenetic signal of the Cambrian radiation". Paleobiology. 31 (2 (Suppl)): 94–112. doi:10.1666/0094-8373(2005)031[0094:WSSSGA]2.0.CO;2.
- Zhang, Zhi-Qiang (2013-08-30). "Animal biodiversity: An update of classification and diversity in 2013. In: Zhang, Z.-Q. (Ed.) Animal Biodiversity: An Outline of Higher-level Classification and Survey of Taxonomic Richness (Addenda 2013)". Zootaxa. 3703 (1): 5. doi:10.11646/zootaxa.3703.1.3.
- Felder, Darryl L.; Camp, David K. (2009). Gulf of Mexico Origin, Waters, and Biota: Biodiversity. Texas A&M University Press. p. 1111. ISBN 978-1-60344-269-5.
- Margulis, Lynn; Chapman, Michael J. (2009). Kingdoms and Domains (4th corrected ed.). London: Academic Press. ISBN 9780123736215.
- Feldkamp, S. (2002) Modern Biology. Holt, Rinehart, and Winston, USA. (pp. 725)
- Cavalier-Smith, Thomas (22 June 2004). "Only Six Kingdoms of Life". Proceedings: Biological Sciences. London: Royal Society. 271 (1545): 1251–1262. doi:10.1098/rspb.2004.2705. PMC .
- Mauseth 2012, pp. 514, 517.
- Cronquist, A.; A. Takhtajan; W. Zimmermann (April 1966). "On the higher taxa of Embryobionta". Taxon. International Association for Plant Taxonomy (IAPT). 15 (4): 129–134. doi:10.2307/1217531. JSTOR 1217531.
- Chase, Mark W. & Reveal, James L. (October 2009), "A phylogenetic classification of the land plants to accompany APG III", Botanical Journal of the Linnean Society, 161 (2): 122–127, doi:10.1111/j.1095-8339.2009.01002.x
- Mauseth, James D. (2012). Botany : An Introduction to Plant Biology (5th ed.). Sudbury, MA: Jones and Bartlett Learning. ISBN 978-1-4496-6580-7. p. 489
- Mauseth 2012, p. 489.
- Mauseth 2012, p. 540.
- Mauseth 2012, p. 542.
- Mauseth 2012, p. 543.
- Mauseth 2012, p. 509.
- Crandall-Stotler, Barbara; Stotler, Raymond E. (2000). "Morphology and classification of the Marchantiophyta". In A. Jonathan Shaw & Bernard Goffinet (Eds.). Bryophyte Biology. Cambridge: Cambridge University Press. p. 21. ISBN 0-521-66097-1.
- Mauseth 2012, p. 535.
- Holt, Jack R.; Iudica, Carlos A. (1 October 2016). "Blastocladiomycota". Diversity of Life. Susquehanna University. Retrieved 29 December 2016.
- Holt, Jack R.; Iudica, Carlos A. (9 January 2014). "Chytridiomycota". Diversity of Life. Susquehanna University. Retrieved 29 December 2016.
- Holt, Jack R.; Iudica, Carlos A. (12 March 2013). "Microsporidia". Diversity of Life. Susquehanna University. Retrieved 29 December 2016.
- Holt, Jack R.; Iudica, Carlos A. (23 April 2013). "Neocallimastigomycota". Diversity of Life. Susquehanna University. Retrieved 29 December 2016.
- Hibbett DS, Binder M, Bischoff JF, Blackwell M, Cannon PF, Eriksson OE, et al. (May 2007). "A higher-level phylogenetic classification of the Fungi" (PDF). Mycological Research. 111 (Pt 5): 509–47. doi:10.1016/j.mycres.2007.03.004. PMID 17572334. Archived from the original (PDF) on 26 March 2009.
- Ruggiero, Michael A.; Gordon, Dennis P.; Orrell, Thomas M.; et al. (29 April 2015). "A Higher Level Classification of All Living Organisms". PLOS One. 10 (6): e0119248. doi:10.1371/journal.pone.0119248.
- White, Merlin M.; James, Timothy Y.; O'Donnell, Kerry; et al. (Nov–Dec 2006). "Phylogeny of the Zygomycota Based on Nuclear Ribosomal Sequence Data". Mycologia. Lawrence, KS: Mycological Society of America. 98 (6): 872–884. doi:10.1080/15572536.2006.11832617.
- Hagen, Joel B. (January 2012). "Five Kingdoms, More or Less: Robert Whittaker and the Broad Classification of Organisms". BioScience. 62 (1): 67–74. doi:10.1525/bio.2012.62.1.11.
- Blackwell, Will H.; Powell, Martha J. (June 1999). "Reconciling Kingdoms with Codes of Nomenclature: Is It Necessary?". Systematic Biology. 48 (2): 406–412. doi:10.1080/106351599260382.
- Davis, R. A. (19 March 2012). "Kingdom PROTISTA". College of Mount St. Joseph. Retrieved 28 December 2016.
- "Taxonomic tree". Catalogue of Life. 23 December 2016. Retrieved 28 December 2016.
- Corliss, John O. (1984). "The Kingdom Protista and its 45 Phyla". BioSystems. 17: 87–176. doi:10.1016/0303-2647(84)90003-0.
- J.P. Euzéby. "List of Prokaryotic names with Standing in Nomenclature: Phyla". Retrieved 2016-12-28.
|Look up Phylum in Wiktionary, the free dictionary.|
- Are phyla "real"? Is there really a well-defined "number of animal phyla" extant and in the fossil record?
- Major Phyla Of Animals | <urn:uuid:2450936f-cb3e-4397-9eaf-59874b73921b> | 3.609375 | 6,289 | Knowledge Article | Science & Tech. | 49.021661 | 95,552,791 |
First discovered 150 years ago, Neanderthals have been studied more widely than any other form of human. Thanks to a new interactive inventory and online catalogue developed in Europe, scientists worldwide can now probe the secrets of this primitive relative from the comfort of their computer.
Neanderthal humans (Homo neanderthalensis) were once common throughout Europe but died out some 30,000 years ago. Since the discovery of Neanderthal remains in Düsseldorf, Germany in 1856, archaeologists have unearthed its fossils at dozens of different excavation sites, including those in Croatia, Belgium, France and Germany.
“These extensive finds explain why most of the scientific analysis of human evolution has been done on Neanderthals,” says Heinz Cordes, coordinator of the IST project TNT, which stands for The Neanderthal Tools.
Tara Morris | alfa
Study suggests buried Internet infrastructure at risk as sea levels rise
18.07.2018 | University of Wisconsin-Madison
Microscopic trampoline may help create networks of quantum computers
17.07.2018 | University of Colorado at Boulder
A new manufacturing technique uses a process similar to newspaper printing to form smoother and more flexible metals for making ultrafast electronic devices.
The low-cost process, developed by Purdue University researchers, combines tools already used in industry for manufacturing metals on a large scale, but uses...
For the first time ever, scientists have determined the cosmic origin of the highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth.
To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength...
For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications.
Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar...
Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction.
A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in in the synthesis of new chemical...
Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy.
"Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy....
The Project: Measure wind characteristics along Trail Ridge and atop Longs Peak during the winter and summer of 1980.
A team led by D.E. Glidden and sponsored by the Rocky Mountain Nature Association established a series of wind-monitoring sites in alpine and subalpine areas. In winter they serviced these sites with some difficulty due to the danger of attempting to stand in powerful and turbulent winds. Sustained subzero temperatures and ice created an extra challenge for instrument operation. The scientists analyzed the data and compared them with similar data from locations around Colorado and the world.
The Results: The winds in Rocky Mountain National Park are exceptionally turbulent and among the world’s most severe. Wind turbines would be impractical.
With gusts reaching 201 mph on Longs Peak and 155 mph on Trail Ridge, the park hosts some of the strongest winds in the world. (For comparison, the highest surface gust ever recorded was 231 mph on Mount Washington, New Hampshire, also the site of the highest annual average wind speed in the U.S.) Wind patterns are influenced not just by elevation but also by steep slopes and narrow valleys, and the park's complex landforms result in particularly gusty winds. To quantify this type of turbulence, researchers divided the maximum hourly wind speed by the average hourly wind speed. For example, a maximum wind speed of 60 mph divided by an average hourly wind speed of 14 mph yields a gust factor of about 4.29.
In winter, maximum turbulence occurred near sunrise and minimum turbulence after sunset; during summer, winds are generally most turbulent at midday and least turbulent at sunrise. Scientists frequently recorded gusts of 74 mph or higher, a hurricane-force wind, at Alpine Visitor Center in both winter and summer, and average summer gust factors at the visitor center exceeded those calculated in a separate study for Mount Washington. Since wind turbines are typically shut down when wind speeds exceed 40 mph, conditions above treeline are actually too windy to rely on wind turbines for power generation. Further studies are needed to reveal the relationship of severe windstorms to topographical and climatological patterns. In the meantime, alpine visitors have a unique opportunity to be standing in a breeze one moment and a hurricane-force wind the next.
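To make the gust-factor arithmetic above concrete, here is a minimal sketch (in Python, not code from the study) that computes the ratio for the 60 mph / 14 mph example and for a few hypothetical hourly readings; the function name and the sample values other than that example are illustrative assumptions.

```python
def gust_factor(max_speed_mph: float, mean_speed_mph: float) -> float:
    """Gust factor = maximum hourly wind speed / average hourly wind speed."""
    if mean_speed_mph <= 0:
        raise ValueError("mean wind speed must be positive")
    return max_speed_mph / mean_speed_mph

# The example from the text: a 60 mph peak over a 14 mph hourly average.
print(round(gust_factor(60, 14), 2))  # -> 4.29

# Hypothetical (max, mean) hourly readings in mph, to show how turbulence
# can be compared hour by hour or site by site.
readings = [(74, 30), (110, 45), (40, 28)]
print([round(gust_factor(mx, mn), 2) for mx, mn in readings])  # -> [2.47, 2.44, 1.43]
```

Higher values indicate gustier, more turbulent conditions relative to the mean wind.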
This summary is based on published, peer-reviewed and/or unpublished reports available at the time of writing. It is not intended as a statement of park policy or as a definitive account of research results. For more information on the park's research program, see www.nps.gov/romo.
Written by: Judy Visty. Date: November 2004. Updated: January 2008. Photo credit: top, NPS-RMNP; bottom, Dave Glidden.
Standard: ISO/IEC 8652
INFORMATION TECHNOLOGY - PROGRAMMING LANGUAGES - ADA
This International Standard specifies the form and meaning of programs written in Ada. Its purpose is to promote the portability of Ada programs to a variety of computing systems.
Ada is a programming language designed to support the construction of long-lived, highly reliable software systems. The language includes facilities to define packages of related types, objects, and operations. The packages may be parameterized and the types may be extended to support the construction of libraries of reusable, adaptable software components. The operations may be implemented as subprograms using conventional sequential control structures, or as entries that include synchronization of concurrent threads of control as part of their invocation. Ada supports object-oriented programming by providing classes and interfaces, inheritance, polymorphism of variables and methods, and generic units. The language treats modularity in the physical sense as well, with a facility to support separate compilation.
The language provides rich support for real-time, concurrent programming, and includes facilities for multicore and multiprocessor programming. Errors can be signaled as exceptions and handled explicitly. The language also covers systems programming; this requires precise control over the representation of data and access to system-dependent properties. Finally, a predefined environment of standard packages is provided, including facilities for, among others, input-output, string manipulation, numeric elementary functions, and random number generation, and definition and use of containers.
| Organization | International Organization for Standardization |
|---|---|
| Document Number | iso/iec 8652 |
| Change Type | COMPLETE REVISION |
| Most Recent Revision | NO |
| Document # | Change Type | Update Date | Revision | Status |
|---|---|---|---|---|
| ISO/IEC 8652 | COR1 | | 3RD | ACTV |
| ISO/IEC 8652 | COR1 | 2016-02-01 | 3RD | ACTV |
| ISO/IEC 8652 | A1 | | 2ND | INAC |
| ISO/IEC 8652 | A1 | 2007-03-15 | 2ND | INAC |
| ISO/IEC 8652 | COR1 | | 2ND | INAC |
| ISO/IEC 8652 | STCH | 1995-01-01 | 2ND | INAC |
| ISO/IEC 8652 | | | 1ST | INAC |
Report: U.S. carbon dioxide emissions up 18% since 1990
By David Gram, Associated Press Writer
MONTPELIER, Vt. - Emissions of the greenhouse gas carbon dioxide rose 18% in the USA from 1990 to 2004, with Texas and Nevada leading the way, an environmental group reported Thursday.
Using data from the U.S. Department of Energy, the U.S. Public Interest Research Group analyzed carbon emissions in 48 states and rank-ordered them, finding that only Delaware, Massachusetts and the District of Columbia cut back on those emissions.
Among the findings:
- Texas' carbon emissions grew by 95.8 million metric tons during the period, the largest increase of any state, followed by Florida, Illinois, North Carolina and Georgia.
- Fast-growing Nevada ranked first for percentage growth in carbon emissions, at 55%, followed by Arizona (54%), New Hampshire (50%) and South Carolina (45%).
- Carbon emissions grew by 28% in the power generation sector and by 23% in the transportation sector, with vehicle miles traveled growing fastest in Florida, up 79% during the period.
"Global warming pollution is skyrocketing in the United States just as scientists are sounding alarms that we must rapidly reduce pollution to protect future generations," said Emily Figdor, director of the Washington, D.C.,-based U.S. Public Interest Research Group. "This report is a wake-up call to cap pollution levels now before it is too late."
The report comes on the heels of a United Nations Intergovernmental Panel on Climate Change report pointing to what scientists have concluded would be the dire effects of unchecked carbon emissions and resulting global warming.
In a supplemental report released Tuesday that focused on North America, the U.N. panel said cities like Chicago and Los Angeles could see heat waves much more often; New York and Boston could be flooded by ocean storm surges and cities in the West that use melting snow for water could face severe shortages.
U.S. Public Interest Research Group is supporting legislation, for which U.S. Sen. Bernie Sanders, I-Vt., is the lead sponsor in the Senate, that would cut carbon emissions to 80% below 1990 levels by 2050.
HISTORICAL WEATHER EVENTS - 18 July
From the files of the Aviation Weather Center, Kansas City, MO
- ...1889...A cloudburst in West Virginia along the small creeks in Wirt County, Jackson County and Wood County claimed twenty lives. Rockport, WV reported 19.00 inches of rain in two hours and ten minutes that Thursday evening, setting a 24-hour precipitation record for the Mountain State. Tygart Creek rose 22 feet in one hour, and villages were swept away on Tygart, Slate, Tucker, and Sandy Creeks. (The Weather Channel)
- ...1942...A record deluge occurred at Smethport in northern Pennsylvania, with 30.70 inches in just six hours. Several additional national records were set, including 3-hour rainfall (28.50"), 4.5-hour rainfall (30.70"), and 12-hour rainfall (34.30").
The 24-hour rainfall total for the day was 34.50 inches, which set a maximum 24-hour precipitation for the Keystone State. The downpours and resultant flooding in Pennsylvania were devastating. (David Ludlum) (Intellicast) (NCDC)
- ...1986...One of the most "photogenic" tornadoes touched down in the northern suburbs of Minneapolis, MN during the late afternoon. The very slow moving F2 tornado actually appeared live on the evening news by way of an aerial video taken by the KARE-TV helicopter crew. The tornado, unlike most, was quite the prima donna, staying visible to tens of thousands of persons for thirty minutes. It was moderate in intensity, with winds of 113-157 mph, and caused 650 thousand dollars damage. (Storm Data)
- ...1987...Cool weather prevailed in the western U.S. Seven cities reported record low temperatures for the date, including Alamosa, CO with a reading of 38 degrees. The low of 52 degrees at Bakersfield, CA was a record for July. Up to eight inches of snow covered the Northern Sierra Nevada Range of California from a storm the previous day. During that storm, winds gusting to 52 mph at Slide Mountain, NV produced a wind chill reading of 20 degrees below zero. Susanville, CA reached 17 degrees that previous day, Blue Canyon, CA dipped to a July record of 36 degrees, and the high of 44 degrees at Klamath Falls, OR smashed their previous record for July by ten degrees. (The National Weather Summary)
- ...1988...Sweltering heat continued in California, with record high temperatures of 111 degrees at Redding and 112 degrees at Sacramento. Death Valley, CA hit 127 degrees. Late afternoon and evening thunderstorms in the Central Plains Region produced baseball size hail at Kimball, NE, wind gusts to 79 mph at Colby, KS, and six inches of rain near Lexington, NE. (The National Weather Summary) (Storm Data)
- ...1989...Thunderstorms produced severe weather in Oklahoma, northern Texas and Arkansas during the afternoon, and into the night. Thunderstorms produced baseball size hail at Stamford, TX, and wind gusts to 92 mph near Throckmorton, TX. Record heat continued in the southwestern U.S. Phoenix, AZ reported a record high of 115 degrees, and a 111-degree reading at Midland, TX was second only to their all-time record high of 112 degrees established sixteen days earlier. (The National Weather Summary)
- ...1996...An F5 tornado moved to the east-southeast, then to the east, and then to the northeast as it destroyed much of Oakfield, WI. Homes were swept clean of their foundations east of town. Canceled checks were later found 125 miles away in Lower Michigan.
A massive rainstorm in north central and northeast Illinois led to widespread flooding. Aurora reported 16.94 inches of rain, establishing a state record for the most rain in a single day. Other heavy totals included 13.60 inches at Joliet, 9.24 inches in Wheaton, 8.09 inches in DeKalb, and 7.82 inches at Elgin. This event is often called "the second most damaging weather disaster in Illinois History." (National Weather Service files)
Prepared by Edward J. Hopkins, Ph.D., email firstname.lastname@example.org
© Copyright, 2018, The American Meteorological Society. | <urn:uuid:80c28e23-b637-4483-a3b7-20ab964e4935> | 2.71875 | 908 | Knowledge Article | Science & Tech. | 68.817229 | 95,552,846 |
A thermonuclear weapon is a second-generation nuclear weapon design using a secondary nuclear fusion stage consisting of implosion tamper, fusion fuel, and spark plug which is bombarded by the energy released by the detonation of a primary nuclear fission bomb within, compressing the fuel material (tritium, deuterium or lithium deuteride) and causing a fusion reaction. Some advanced designs use neutrons produced by this second stage to ignite a third fast fission or fusion stage. The fission bomb and fusion fuel are placed near each other in a special radiation-reflecting container called a radiation case that is designed to contain x-rays for as long as possible. The result is greatly increased explosive power when compared to single-stage fission weapons. The device is colloquially referred to as a hydrogen bomb or an H-bomb because it employs the fusion of isotopes of hydrogen. The misleading term "hydrogen bomb" was already in wide public use before fission product fallout from the Castle Bravo test in 1954 revealed the extent to which the design relies on fission.
The first full-scale thermonuclear test, Ivy Mike, was carried out by the United States in 1952; the concept has since been employed by most of the world's nuclear powers in the design of their weapons. As Siegfried Hecker of Los Alamos put it on National Public Radio's Talk of the Nation, November 8, 2005, "the hydrogen bomb – that is, a two-stage thermonuclear device, as we referred to it – is indeed the principal part of the U.S. arsenal, as it is of the Russian arsenal." The modern design of all thermonuclear weapons in the United States is known as the Teller–Ulam configuration for its two chief contributors, Edward Teller and Stanislaw Ulam, who developed it in 1951 for the United States, with certain concepts developed with the contribution of John von Neumann. (The original classified paper by Teller and Ulam proposing staged implosion has been released on the Nuclear Non-Proliferation Institute website; the declassified version is heavily redacted, leaving only a few paragraphs.) Similar devices were developed by the Soviet Union, United Kingdom, France, and China.
As thermonuclear weapons represent the most efficient design for weapon energy yield in weapons with yields above about 50 kilotons, virtually all the nuclear weapons of this size deployed by the five nuclear-weapon states under the Non-Proliferation Treaty today are thermonuclear weapons using the Teller–Ulam design. As one assessment puts it, "So far as is known all high yield nuclear weapons today (>50 kt or so) use this design."
The radiation implosion mechanism exploits the temperature difference between the secondary stage's hot, surrounding radiation channel and its relatively cool interior. This temperature difference is briefly maintained by a massive heat barrier called the "pusher"/"tamper", which also serves as an implosion tamper, increasing and prolonging the compression of the secondary. If made of uranium, enriched uranium or plutonium, it can capture neutrons produced by the fusion reaction and undergo fission itself, increasing the overall explosive yield. In addition to that, some designs also make the radiation case out of a fissile material that undergoes fission. As a result, such bombs get a third fission stage, and the majority of current Teller–Ulam designs are fission-fusion-fission weapons. Fission of the tamper or radiation case is the main contribution to the total yield and produces most of the radioactive fission product fallout.
Detailed knowledge of fission and fusion weapons is classified to some degree in virtually every industrialized nation. In the United States, such knowledge can by default be classified as "Restricted Data", even if it is created by persons who are not government employees or associated with weapons programs, in a legal doctrine known as "born secret" (though the constitutional standing of the doctrine has been at times called into question; see United States v. Progressive, Inc.). Born secret is rarely invoked for cases of private speculation. The official policy of the United States Department of Energy has been not to acknowledge the leaking of design information, as such acknowledgment would potentially validate the information as accurate. In a small number of prior cases, the U.S. government has attempted prior restraint, with limited success. According to the New York Times, physicist Kenneth Ford defied government orders to remove classified information from his book, Building the H Bomb: A Personal History. Ford claims he used only pre-existing information and even submitted a manuscript to the government, which wanted to remove entire sections of the book for concern that foreign nations could use the information.
Though large quantities of vague data have been officially released, and larger quantities of vague data have been unofficially leaked by former bomb designers, most public descriptions of nuclear weapon design details rely to some degree on speculation, reverse engineering from known information, or comparison with similar fields of physics (inertial confinement fusion is the primary example). Such processes have resulted in a body of unclassified knowledge about nuclear bombs that is generally consistent with official unclassified information releases, related physics, and is thought to be internally consistent, though there are some points of interpretation that are still considered open. The state of public knowledge about the Teller–Ulam design has been mostly shaped from a few specific incidents outlined in a section below.
Surrounding the other components is a hohlraum or radiation case, a container that traps the first stage or primary's energy inside temporarily. The outside of this radiation case, which is also normally the outside casing of the bomb, is the only direct visual evidence publicly available of any thermonuclear bomb component's configuration. Numerous photographs of various thermonuclear bomb exteriors have been declassified.
The primary is thought to be a standard implosion method fission bomb, though likely with a plutonium pit boosted by small amounts of fusion fuel (usually 50/50% deuterium/tritium gas) for extra efficiency; the fusion fuel releases excess when heated and compressed, inducing additional fission. When fired, the plutonium-239 (Pu-239) or uranium-235 (U-235) core would be compressed to a smaller sphere by special layers of conventional arranged around it in an explosive lens pattern, initiating the nuclear chain reaction that powers the conventional "atomic bomb".
The secondary is usually shown as a column of fusion fuel and other components wrapped in many layers. Around the column is first a "pusher-tamper", a heavy layer of uranium-238 (U-238) or lead that helps compress the fusion fuel (and, in the case of uranium, may eventually undergo fission itself). Inside this is the fusion fuel itself, usually a form of lithium deuteride, which is used because it is easier to weaponize than liquefied tritium/deuterium gas. This dry fuel, when bombarded by , produces tritium, a heavy isotope of hydrogen which can undergo nuclear fusion, along with the deuterium present in the mixture. (See the article on nuclear fusion for a more detailed technical discussion of fusion reactions.) Inside the layer of fuel is the "spark plug", a hollow column of fissile material (plutonium-239 or uranium-235) often boosted by deuterium gas. The spark plug, when compressed, can itself undergo nuclear fission (because of the shape, it is not a critical mass without compression). The tertiary, if one is present, would be set below the secondary and probably be made up of the same materials.
Separating the secondary from the primary is the interstage. The fissioning primary produces four types of energy: 1) expanding hot gases from high explosive charges that implode the primary; 2) superheated plasma that was originally the bomb's fissile material and its tamper; 3) the electromagnetic radiation; and 4) the from the primary's nuclear detonation. The interstage is responsible for accurately modulating the transfer of energy from the primary to the secondary. It must direct the hot gases, plasma, electromagnetic radiation and neutrons toward the right place at the right time. Less than optimal interstage designs have resulted in the secondary failing to work entirely on multiple shots, known as a "fissile fizzle". The Castle Koon shot of Operation Castle is a good example; a small flaw allowed the neutron flux from the primary to prematurely begin heating the secondary, weakening the compression enough to prevent any fusion.
There is very little detailed information in the open literature about the mechanism of the interstage. One of the best sources is a simplified diagram of a British thermonuclear weapon similar to the American W80 warhead. It was released by Greenpeace in a report titled "Dual Use Nuclear Technology". A cleaned up version: The major components and their arrangement are in the diagram, though details are almost absent; what scattered details it does include likely have intentional omissions or inaccuracies. They are labeled "End-cap and Neutron Focus Lens" and "Reflector Wrap"; the former channels neutrons to the U-235/Pu-239 Spark Plug while the latter refers to an X-ray reflector; typically a cylinder made out of an X-ray opaque material such as uranium with the primary and secondary at either end. It does not reflect like a mirror; instead, it gets heated to a high temperature by the X-ray flux from the primary, then it emits more evenly spread X-rays that travel to the secondary, causing what is known as radiation implosion. In Ivy Mike, gold was used as a coating over the uranium to enhance the blackbody effect.
The first U.S. government document to mention the interstage was only recently released to the public promoting the 2004 initiation of the Reliable Replacement Warhead Program. A graphic includes blurbs describing the potential advantage of a RRW on a part by part level, with the interstage blurb saying a new design would replace "toxic, brittle material" and "expensive 'special' material... which unique facilities". "Improved Security, Safety & Manufacturability of the Reliable Replacement Warhead" , NNSA March 2007. The "toxic, brittle material" is widely assumed to be beryllium which fits that description and would also moderate the neutron flux from the primary. Some material to absorb and re-radiate the X-rays in a particular manner may also be used. A 1976 drawing that depicts an interstage that absorbs and re-radiates X-rays. From Howard Morland, "The Article", Cardozo Law Review, March 2005, p 1374.
Candidates for the "special material" are polystyrene and a substance called "FOGBANK", an unclassified codename. FOGBANK's composition is classified, though aerogel has been suggested as a possibility. It was first used in thermonuclear weapons with the W-76 thermonuclear warhead, and produced at a plant in the Y-12 Complex at Oak Ridge, Tennessee for use in the W-76. Production of FOGBANK lapsed after the W-76 production run ended. The W-76 Life Extension Program required more FOGBANK to be made. This was complicated by the fact that the original FOGBANK's properties weren't fully documented, so a massive effort was mounted to re-invent the process. An impurity crucial to the properties of the old FOGBANK was omitted during the new process. Only close analysis of new and old batches revealed the nature of that impurity. The manufacturing process used acetonitrile as a solvent, which led to at least three evacuations of the FOGBANK plant in 2006. Widely used in the petroleum and pharmaceutical industries, acetonitrile is flammable and toxic. Y-12 is the sole producer of FOGBANK. Speculation on Fogbank, Arms Control Wonk
Thermonuclear weapons may or may not use a boosted primary stage, use different types of fusion fuel, and may surround the fusion fuel with beryllium (or another neutron reflecting material) instead of depleted uranium to prevent early premature fission from occurring before the secondary is optimally compressed.
For two thermonuclear bombs for which the general size and primary characteristics are well understood, the Ivy Mike test bomb and the modern W-80 cruise missile warhead variant of the W-61 design, the radiation pressure was calculated to be 73 million bar (atmospheres) (7.3 tera- Pa) for the Ivy Mike design and 1,400 million bar (140 TPa) for the W-80.
The sequence of firing the weapon (with the foam) would be as follows:
This would complete the fission-fusion-fission sequence. Fusion, unlike fission, is relatively "clean"—it releases energy but no harmful radioactive products or large amounts of nuclear fallout. The fission reactions though, especially the last fission reaction, release a tremendous amount of fission products and fallout. If the last fission stage is omitted, by replacing the uranium tamper with one made of lead, for example, the overall explosive force is reduced by approximately half but the amount of fallout is relatively low. The neutron bomb is a hydrogen bomb with an intentionally thin tamper, allowing as much radiation as possible to escape.
[[Image:BombH explosion.svg|center|frame|Foam plasma mechanism firing sequence.
Current technical criticisms of the idea of "foam plasma pressure" focus on unclassified analysis from similar high energy physics fields that indicate that the pressure produced by such a plasma would only be a small multiplier of the basic photon pressure within the radiation case, and also that the known foam materials intrinsically have a very low absorption efficiency of the gamma ray and X-ray radiation from the primary. Most of the energy produced would be absorbed by either the walls of the radiation case or the tamper around the secondary. Analyzing the effects of that absorbed energy led to the third mechanism: ablation.
Rough calculations for the basic ablation effect are relatively simple: the energy from the primary is distributed evenly onto all of the surfaces within the outer radiation case, with the components coming to a thermal equilibrium, and the effects of that thermal energy are then analyzed. The energy is mostly deposited within about one X-ray Optical depth of the tamper/pusher outer surface, and the temperature of that layer can then be calculated. The velocity at which the surface then expands outwards is calculated and, from a basic Newtonian momentum balance, the velocity at which the rest of the tamper implodes inwards.
Applying the more detailed form of those calculations to the Ivy Mike device yields vaporized pusher gas expansion velocity of 290 kilometers per second and an implosion velocity of perhaps 400 kilometers per second if 3/4 of the total tamper/pusher mass is ablated off, the most energy efficient proportion. For the W-80 the gas expansion velocity is roughly 410 kilometers per second and the implosion velocity 570 kilometers per second. The pressure due to the ablating material is calculated to be 5.3 billion bar (530 tera- Pa) in the Ivy Mike device and 64 billion bar (6.4 Peta- Pa) in the W-80 device.
The calculated ablation pressure is one order of magnitude greater than the higher proposed plasma pressures and nearly two orders of magnitude greater than calculated radiation pressure. No mechanism to avoid the absorption of energy into the radiation case wall and the secondary tamper has been suggested, making ablation apparently unavoidable. The other mechanisms appear to be unneeded.
United States Department of Defense official declassification reports indicate that foamed plastic materials are or may be used in radiation case liners, and despite the low direct plasma pressure they may be of use in delaying the ablation until energy has distributed evenly and a sufficient fraction has reached the secondary's tamper/pusher.
Richard Rhodes' book Dark Sun stated that a layer of plastic foam was fixed to the lead liner of the inside of the Ivy Mike steel casing using copper nails. Rhodes quotes several designers of that bomb explaining that the plastic foam layer inside the outer case is to delay ablation and thus recoil of the outer case: if the foam were not there, metal would ablate from the inside of the outer case with a large impulse, causing the casing to recoil outwards rapidly. The purpose of the casing is to contain the explosion for as long as possible, allowing as much X-ray ablation of the metallic surface of the secondary stage as possible, so it compresses the secondary efficiently, maximizing the fusion yield. Plastic foam has a low density, so causes a smaller impulse when it ablates than metal does.
Two special variations exist that will be discussed in a subsequent section: the cryogenics cooled liquid deuterium device used for the Ivy Mike test, and the putative design of the W88 nuclear warhead—a small, MIRVed version of the Teller–Ulam configuration with a spheroid (Oval or watermelon shaped) primary and an elliptical secondary.
Most bombs do not apparently have tertiary "stages"—that is, third compression stage(s), which are additional fusion stages compressed by a previous fusion stage. (The fissioning of the last blanket of uranium, which provides about half the yield in large bombs, does not count as a "stage" in this terminology.)
The U.S. tested three-stage bombs in several explosions (see Operation Redwing) but is thought to have fielded only one such tertiary model, i.e., a bomb in which a fission stage, followed by a fusion stage, finally compresses yet another fusion stage. This U.S. design was the heavy but highly efficient (i.e., nuclear weapon yield per unit bomb weight) 25 Mt B41 nuclear bomb. The Soviet Union is thought to have used multiple stages (including more than one tertiary fusion stage) in their 50 megaton (100 Mt in intended use) Tsar Bomba (however, as with other bombs, the fissionable jacket could be replaced with lead in such a bomb, and in this one, for demonstration, it was). If any hydrogen bombs have been made from configurations other than those based on the Teller–Ulam design, the fact of it is not publicly known. (A possible exception to this is the Soviet early Sloika design).
In essence, the Teller–Ulam configuration relies on at least two instances of implosion occurring: first, the conventional (chemical) explosives in the primary would compress the fissile core, resulting in a fission explosion many times more powerful than that which chemical explosives could achieve alone (first stage). Second, the radiation from the fissioning of the primary would be used to compress and ignite the secondary fusion stage, resulting in a fusion explosion many times more powerful than the fission explosion alone. This chain of compression could conceivably be continued with an arbitrary number of tertiary fusion stages, each igniting more fusion fuel in the next stage
As discussed above, for destruction of cities and non-hardened targets, breaking the mass of a single missile payload down into smaller MIRV bombs, in order to spread the energy of the explosions into a "pancake" area, is far more efficient in terms of area-destruction per unit of bomb energy. This also applies to single bombs deliverable by cruise missile or other system, such as a bomber, resulting in most operational warheads in the U.S. program having yields of less than 500 kilotons.
Stanislaw Ulam, a co-worker of Teller, made the first key conceptual leaps towards a workable fusion design. Ulam's two innovations that rendered the fusion bomb practical were that compression of the thermonuclear fuel before extreme heating was a practical path towards the conditions needed for fusion, and the idea of staging or placing a separate thermonuclear component outside a fission primary component, and somehow using the primary to compress the secondary. Teller then realized that the gamma and X-ray radiation produced in the primary could transfer enough energy into the secondary to create a successful implosion and fusion burn, if the whole assembly was wrapped in a hohlraum or radiation case. Teller and his various proponents and detractors later disputed the degree to which Ulam had contributed to the theories underlying this mechanism. Indeed, shortly before his death, and in a last-ditch effort to discredit Ulam's contributions, Teller claimed that one of his own "graduate students" had proposed the mechanism.
The "George" shot of Operation Greenhouse of 9 May 1951 tested the basic concept for the first time on a very small scale. As the first successful (uncontrolled) release of nuclear fusion energy, which made up a small fraction of the 225 kt total yield, it raised expectations to a near certainty that the concept would work.
On November 1, 1952, the Teller–Ulam configuration was tested at full scale in the "Ivy Mike" shot at an island in the Enewetak Atoll, with a yield of 10.4 TNT equivalent (over 450 times more powerful than the bomb dropped on Nagasaki during World War II). The device, dubbed the Sausage, used an extra-large fission bomb as a "trigger" and liquid deuterium—kept in its liquid state by 20 (18 ) of Cryogenics equipment—as its fusion fuel, and weighed around 80 short tons (70 metric tons) altogether.
The liquid deuterium fuel of Ivy Mike was impractical for a deployable weapon, and the next advance was to use a solid lithium hydride fusion fuel instead. In 1954 this was tested in the "Castle Bravo" shot (the device was code-named Shrimp), which had a yield of 15 megatons (2.5 times expected) and is the largest U.S. bomb ever tested.
Efforts in the United States soon shifted towards developing miniaturized Teller–Ulam weapons that could fit into intercontinental ballistic missiles and submarine-launched ballistic missiles. By 1960, with the W47 warhead deployed on Polaris ballistic missile submarines, megaton-class warheads were as small as 18 inches (0.5 m) in diameter and 720 pounds (320 kg) in weight. It was later found in live testing that the Polaris warhead did not work reliably and had to be redesigned. Further innovation in miniaturizing warheads was accomplished by the mid-1970s, when versions of the Teller–Ulam design were created that could fit ten or more warheads on the end of a small MIRVed missile (see the section on the W88 below).
The first Soviet fusion design, developed by Andrei Sakharov and Vitaly Ginzburg in 1949 (before the Soviets had a working fission bomb), was dubbed the Sloika, after a Russian layer cake, and was not of the Teller–Ulam configuration. It used alternating layers of fissile material and lithium deuteride fusion fuel spiked with tritium (this was later dubbed Sakharov's "First Idea"). Though nuclear fusion might have been technically achievable, it did not have the scaling property of a "staged" weapon. Thus, such a design could not produce thermonuclear weapons whose explosive yields could be made arbitrarily large (unlike U.S. designs at that time). The fusion layer wrapped around the fission core could only moderately multiply the fission energy (modern Teller–Ulam designs can multiply it 30-fold). Additionally, the whole fusion stage had to be imploded by conventional explosives, along with the fission core, substantially multiplying the amount of chemical explosives needed.
The first Sloika design test, RDS-6s, was detonated in 1953 with a yield equivalent to 400 TNT equivalent (15–20% from fusion). Attempts to use a Sloika design to achieve megaton-range results proved unfeasible. After the United States tested the "Ivy Mike" bomb in November 1952, proving that a multimegaton bomb could be created, the Soviets searched for an additional design. The "Second Idea", as Sakharov referred to it in his memoirs, was a previous proposal by Ginzburg in November 1948 to use lithium deuteride in the bomb, which would, in the course of being bombarded by neutrons, produce tritium and free deuterium.
The Soviets demonstrated the power of the "staging" concept in October 1961, when they detonated the massive and unwieldy Tsar Bomba, a 50 megaton hydrogen bomb that derived almost 97% of its energy from fusion. It was the largest nuclear weapon developed and tested by any country.
In 1954 work began at Aldermaston to develop the British fusion bomb, with Sir William Penney in charge of the project. British knowledge on how to make a thermonuclear fusion bomb was rudimentary, and at the time the United States was not exchanging any nuclear knowledge because of the Atomic Energy Act of 1946. However, the British were allowed to observe the American Operation Castle and used sampling aircraft in the , providing them with clear, direct evidence of the compression produced in the secondary stages by radiation implosion.
Because of these difficulties, in 1955 British prime minister Anthony Eden agreed to a secret plan, whereby if the Aldermaston scientists failed or were greatly delayed in developing the fusion bomb, it would be replaced by an extremely large fission bomb.
In 1957 the Operation Grapple tests were carried out. The first test, Green Granite was a prototype fusion bomb, but failed to produce equivalent yields compared to the Americans and Soviets, achieving only approximately 300 kilotons. The second test Orange Herald was the modified fission bomb and produced 720 kilotons—making it the largest fission explosion ever. At the time almost everyone (including the pilots of the plane that dropped it) thought that this was a fusion bomb. This bomb was put into service in 1958. A second prototype fusion bomb Purple Granite was used in the third test, but only produced approximately 150 kilotons.
A second set of tests was scheduled, with testing recommencing in September 1957. The first test was based on a "… new simpler design. A two stage thermonuclear bomb that had a much more powerful trigger". This test Grapple X Round C was exploded on November 8 and yielded approximately 1.8 megatons. On April 28, 1958 a bomb was dropped that yielded 3 megatons—Britain's most powerful test. Two final air burst tests on September 2 and September 11, 1958, dropped smaller bombs that yielded around 1 megaton each.
American observers had been invited to these kinds of tests. After Britain's successful detonation of a megaton-range device (and thus demonstrating a practical understanding of the Teller–Ulam design "secret"), the United States agreed to exchange some of its nuclear designs with the United Kingdom, leading to the 1958 US–UK Mutual Defence Agreement. Instead of continuing with its own design, the British were given access to the design of the smaller American Mk 28 warhead and were able to manufacture copies.
The United Kingdom had worked closely with the Americans on the Manhattan Project. British access to nuclear weapons information was cut-off by the United States at one point due to concerns about Soviet espionage. Full cooperation was not reestablished until an agreement governing the handling of secret information and other issues was signed.
A story in The New York Times by William Broad reported that in 1995, a supposed Chinese double agent delivered information indicating that China knew secret details of the U.S. W88 warhead, supposedly through espionage., esp. Ch. 2, "PRC Theft of U.S. Thermonuclear Warhead Design Information". (This line of investigation eventually resulted in the abortive trial of Wen Ho Lee.)
In 1945, the French Atomic Energy Commission (Commissariat à l’Énergie Atomique, CEA) was founded under General Charles de Gaulle; the CEA served as the country’s atomic energy authority, overseeing commercial, military, and scientific uses of atomic power. However it was not until 1952 that a tangible goal of building plutonium reactors progressed. Two years later, a reactor was being built and a plutonium separating plant began construction shortly after. In 1954 the question about continuing to explore building an atomic bomb was raised. The French cabinet seemed to be favoring less the building of an atomic bomb. Ultimately, the Prime Minister decided to continue efforts developing an atomic bomb in secret. In late 1956, tasks were delegated between the CEA and Defense Ministry to propel atomic development such as finding a test site, providing the necessary uranium, and physical device assembly.
General Charles de Gaulle was elected the France’s Fifth Republic’s first president in 1958. De Gaulle, an avid supporter of the nuclear weapons program, approved the country’s first nuclear test to take place in one of the early months of 1960. The country’s first nuclear explosion took place on 13 February at Reggane in the Sahara Desert in French Algeria of the time. It was called "Gerboise Bleue", translating to "Blue jerboa". The first explosion was detonated at a tower height of 105 meters. The bomb used a plutonium implosion design with a yield of 70 kilotons. The Reggane test site was used for three more atmospheric tests before testing activity moved to a second site, Ecker, to carry out a total of 13 underground tests into 1967.
The French nuclear testing site was moved to the unpopulated French atolls in the Pacific Ocean. The first test conducted at these new sites was the "Canopus" test in the Fangataufa in French Polynesia on 24 August 1968, the country’s first multistage thermonuclear weapon test. The bomb was detonated from a balloon at a height of 520 meters. The result of this test was significant atmospheric contamination. Very little is known about France's development of the Teller–Ulam design, beyond the fact that France detonated a 2.6 Mt device in the 'Canopus" test. France reportedly had great difficulty with its initial development of the Teller-Ulam design, but it later overcame these, and is believed to have nuclear weapons equal in sophistication to the other major nuclear powers.
France and China did not sign or ratify the Partial Nuclear Test Ban Treaty of 1963, which banned nuclear test explosions in the atmosphere, underwater, or in outer space. Between 1966 and 1996 France carried out more than 190 nuclear tests. France’s final nuclear test took place on January 27, 1996, and then the country dismantled its Polynesian test sites. France signed the Comprehensive Nuclear-Test-Ban Treaty that same year, and then ratified the Treaty within two years.
France confirmed that its nuclear arsenal contains about 300 warheads, carried by submarine-launched ballistic missiles (SLBMs) and fighter-bombers in 2015. France has four Triomphant-class ballistic missile submarines. One ballistic missile submarine is deployed in the deep ocean, but a total of three must be in operational use at all times. The three older submarines are armed with 16 M45 missiles. The newest submarine, "Le Terrible", was commissioned in 2010, and it has M51 missiles capable of carrying TN 75 thermonuclear warheads. The air fleet is four squadrons at four different bases. In total, there are 23 Mirage 2000N aircraft and 20 Dassault Rafale capable of carrying nuclear warheads. The M51.1 missiles are intended to be replaced with the new M51.2 warhead beginning in 2016, which has a 3,000 km greater range than the M51.1.
President François Hollande announced 180 billion euros would be used from the annual defense budget to improve the country’s nuclear deterrence. France contains 13 International Monitoring System facilities that monitor for nuclear explosive activity on Earth through the use of seismic, infrasound, and hydroacoustic monitors.
France also has about 60 air-launched missiles tipped with TN 80/TN 81 warheads with a yield of about 300 kilotons each. France's nuclear program has been carefully designed to ensure that these weapons remain usable decades into the future. Currently, France is no longer deliberately producing critical mass materials such as plutonium and enriched uranium, but it still relies on nuclear energy for electricity, with Pu-239 as a byproduct.
In an interview in August 2009, the director for the 1998 test site preparations, Dr. K. Santhanam claimed that the yield of the thermonuclear explosion was lower than expected and that India should therefore not rush into signing the CTBT. Other Indian scientists involved in the test have disputed Dr. K. Santhanam's claim. International sources, using local data and citing a United States Geological Survey report compiling seismic data from 125 IRIS Consortium stations across the world, argue that the magnitudes suggested a combined yield of up to 60 kilotonnes, consistent with the Indian announced total yield of 56 kilotonnes.
It is well established that Edward Teller advised and guided the Israeli establishment on general nuclear matters for some twenty years.
On 3 September 2017, the country's state media reported that a hydrogen bomb test was conducted which resulted in "perfect success". According to the U.S. Geological Survey (USGS), the blast resulted in an earthquake with a magnitude of 6.3, 10 times more powerful than previous nuclear tests conducted by North Korea. U.S. Intelligence released an early assessment that the yield estimate was 140 kilotons, with an uncertainty range of 70 to 280 kilotons.
On 12 September, /ref>http://www.38north.org/2017/09/punggye091217/
On 13 September, an analysis of before and after synthetic-aperture radar satellite imagery of the test site was published suggesting the test occurred under of rock and the yield "could have been in excess of 300 kilotons".http://www.armscontrolwonk.com/archive/1203852/sar-image-of-punggye-ri/
Whether these statements vindicate some or all of the models presented above is up for interpretation, and official U.S. government releases about the technical details of nuclear weapons have been purposely equivocating in the past (see, e.g., Smyth Report). Other information, such as the types of fuel used in some of the early weapons, has been declassified, though precise technical information has not been.
Morland eventually concluded that the "secret" was that the primary and secondary were kept separate and that radiation pressure from the primary compressed the secondary before igniting it. When an early draft of the article, to be published in The Progressive magazine, was sent to the DOE after falling into the hands of a professor who was opposed to Morland's goal, the DOE requested that the article not be published, and pressed for a temporary injunction. The DOE argued that Morland's information was (1) likely derived from classified sources, (2) if not derived from classified sources, itself counted as "secret" information under the "born secret" clause of the 1954 Atomic Energy Act, and (3) was dangerous and would encourage nuclear proliferation.
Morland and his lawyers disagreed on all points, but the injunction was granted, as the judge in the case felt that it was safer to grant the injunction and allow Morland, et al., to appeal, which they did in United States v. The Progressive (1979).
Through a variety of more complicated circumstances, the DOE case began to wane as it became clear that some of the data they were attempting to claim as "secret" had been published in a students' encyclopedia a few years earlier. After another H-bomb speculator, Chuck Hansen, had his own ideas about the "secret" (quite different from Morland's) published in a Wisconsin newspaper, the DOE claimed that The Progressive case was moot, dropped its suit, and allowed the magazine to publish its article, which it did in November 1979. Morland had by then, however, changed his opinion of how the bomb worked, suggesting that a foam medium (the polystyrene) rather than radiation pressure was used to compress the secondary, and that in the secondary there was a spark plug of fissile material as well. He published these changes, based in part on the proceedings of the appeals trial, as a short erratum in The Progressive a month later. In 1981, Morland published a book about his experience, describing in detail the train of thought that led him to his conclusions about the "secret".
Morland's work is interpreted as being at least partially correct because the DOE had sought to censor it, one of the few times they violated their usual approach of not acknowledging "secret" material that had been released; however, to what degree it lacks information, or has incorrect information, is not known with any confidence. The difficulty that a number of nations had in developing the Teller–Ulam design (even when they apparently understood the design, such as with the United Kingdom), makes it somewhat unlikely that this simple information alone is what provides the ability to manufacture thermonuclear weapons. Nevertheless, the ideas put forward by Morland in 1979 have been the basis for all the current speculation on the Teller–Ulam design.
The reentry nose cone for the W88 and W87 are the same size, 1.75 meters (69 in) long, with a maximum diameter of 55 cm. (22 in). The higher yield of the W88 implies a larger secondary, which produces most of the yield. Putting the secondary, which is heavier than the primary, in the wider part of the cone allows it to be larger, but it also moves the center of mass aft, potentially causing aerodynamic stability problems during reentry. Dead-weight ballast must be added to the nose to move the center of mass forward.
To make the primary small enough to fit into the narrow part of the cone, its bulky insensitive high explosive charges must be replaced with more compact "non-insensitive" that are more hazardous to handle. The higher yield of the W88, which is the last new warhead produced by the United States, thus comes at a price of higher warhead weight and higher workplace hazard. The W88 also contains tritium, which has a half life of only 12.32 years and must be repeatedly replaced. If these stories are true, it would explain the reported higher yield of the W88, 475 kilotons, compared with only 300 kilotons for the earlier W87 warhead. | <urn:uuid:e137b0e3-5de4-4161-9537-633f46d3270f> | 3.234375 | 7,967 | Knowledge Article | Science & Tech. | 38.401117 | 95,552,871 |
Radical (chemistry)
In chemistry, a radical (more precisely, a free radical) is an atom, molecule, or ion that has an unpaired valence electron. With some exceptions, these unpaired electrons make free radicals highly chemically reactive. Many free radicals spontaneously dimerize. Most organic radicals have short lifetimes.
A notable example of a free radical is the hydroxyl radical (HO•), a molecule that has one unpaired electron on the oxygen atom. Two other examples are triplet oxygen and triplet carbene (:CH
2) which have two unpaired electrons.
Free radicals may be generated in a number of ways, but typical methods involve redox reactions. Ionizing radiation, heat, electrical discharges, and electrolysis are all known to produce radicals. Radicals are intermediates in many chemical reactions, more so than is apparent from the balanced equations.
Free radicals are important in combustion, atmospheric chemistry, polymerization, plasma chemistry, biochemistry, and many other chemical processes. A large fraction of natural products are generated by radical-generating enzymes. In living organisms, the free radicals superoxide and nitric oxide and their reaction products regulate many processes, such as control of vascular tone and thus blood pressure. They also play a key role in the intermediary metabolism of various biological compounds. Such radicals can even be messengers in a process dubbed redox signaling. A radical may be trapped within a solvent cage or be otherwise bound.
Depiction in chemical reactions
In chemical equations, free radicals are frequently denoted by a dot placed immediately to the right of the atomic symbol or molecular formula as follows:
- Chlorine gas can be broken down by ultraviolet light to form atomic chlorine radicals: Cl2 + hν → 2 Cl·
Radical reaction mechanisms use single-headed arrows to depict the movement of single electrons:
The homolytic cleavage of the breaking bond is drawn with a 'fish-hook' arrow to distinguish from the usual movement of two electrons depicted by a standard curly arrow. The second electron of the breaking bond also moves to pair up with the attacking radical electron; this is not explicitly indicated in this case.
Free radicals also take part in radical addition and radical substitution as reactive intermediates. Chain reactions involving free radicals can usually be divided into three distinct processes. These are initiation, propagation, and termination.
- Initiation reactions are those that result in a net increase in the number of free radicals. They may involve the formation of free radicals from stable species as in Reaction 1 above or they may involve reactions of free radicals with stable species to form more free radicals.
- Propagation reactions are those reactions involving free radicals in which the total number of free radicals remains the same.
- Termination reactions are those reactions resulting in a net decrease in the number of free radicals. Typically two free radicals combine to form a more stable species, for example: 2 Cl· → Cl2 (a complete example chain is sketched below).
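For example, the textbook chain reaction between hydrogen and chlorine, which combines the chlorine photolysis and recombination steps already shown above with two standard propagation steps, can be classified into these three processes:

Initiation:   Cl2 + hν → 2 Cl·
Propagation:  Cl· + H2 → HCl + H·
              H· + Cl2 → HCl + Cl·
Termination:  2 Cl· → Cl2 (or Cl· + H· → HCl, or 2 H· → H2)

Each propagation step consumes one radical and generates another, so the radical count stays constant until a termination event removes two radicals at once.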
The formation of radicals may involve the breaking of covalent bonds by homolysis, a process that requires significant amounts of energy. Such energies are known as homolytic bond dissociation energies, usually abbreviated as "ΔH°". Splitting H2 into 2 H•, for example, requires a ΔH° of +435 kJ·mol−1, while splitting Cl2 into 2 Cl• requires a ΔH° of +243 kJ·mol−1.
The energy needed to break a specific (generally covalent) bond between two atoms, known as the bond energy, results from all the relative attractions and repulsions among the atoms of the molecule; the largest contributions, however, come from the bonded atoms themselves and their immediate neighbors. As an approximation, the most important parameters that influence the bonding between two atoms in a molecule are the mutual energy match and overlap of the covalent orbitals and the repulsion between nonbonding orbitals. Likewise, radicals that require more energy to form are less stable than those that require less. Spin selection rules can present an additional barrier to radical formation. Chain propagation, however, is usually strongly exothermic overall.
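As a rough numerical sketch of this point, the enthalpy of each chain step can be estimated from bond dissociation energies as the energy of bonds broken minus the energy of bonds formed. The H-H and Cl-Cl values below are those quoted above; the H-Cl value (about 431 kJ/mol) is an approximate literature figure assumed for this illustration only:

# Rough enthalpy estimates for the H2/Cl2 chain steps from bond dissociation energies.
# H-H and Cl-Cl match the figures quoted in the text; H-Cl is an assumed literature value.
BDE = {"H-H": 435, "Cl-Cl": 243, "H-Cl": 431}  # kJ/mol

def delta_h(broken, formed):
    """Estimate reaction enthalpy: sum of BDEs broken minus sum of BDEs formed."""
    return sum(BDE[b] for b in broken) - sum(BDE[b] for b in formed)

step1 = delta_h(["H-H"], ["H-Cl"])    # Cl· + H2 -> HCl + H·  : about +4 kJ/mol
step2 = delta_h(["Cl-Cl"], ["H-Cl"])  # H· + Cl2 -> HCl + Cl· : about -188 kJ/mol
print(step1, step2, step1 + step2)    # net cycle: about -184 kJ/mol per two HCl formed

The first propagation step is nearly thermoneutral, but the cycle as a whole is strongly exothermic, which is why the chain is self-sustaining once initiated. The exact numbers depend on the bond-energy table used.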
Radical formation through homolytic bond cleavage most often happens between two atoms of similar electronegativity; in organic chemistry, this is often the O–O bond in peroxide species or O–N bonds. Radicals may also be formed by single-electron oxidation or reduction of an atom or molecule: an example is the production of superoxide by the electron transport chain. Early studies in organometallic chemistry, especially F. A. Paneth and K. Hahnfeld's studies of tetra-alkyl lead species during the 1930s, supported the homolytic fission of bonds and a radical-based mechanism. Although radical ions do exist, most species are electrically neutral.
Persistence and stability
Although radicals are generally short-lived due to their reactivity, there are long-lived radicals. These are categorized as follows:
The prime example of a stable radical is molecular dioxygen (O2). Another common example is nitric oxide (NO). Organic radicals can be long lived if they occur in a conjugated π system, such as the radical derived from α-tocopherol (vitamin E). There are also hundreds of examples of thiazyl radicals, which show low reactivity and remarkable thermodynamic stability with only a very limited extent of π resonance stabilization.
Persistent radical compounds are those whose longevity is due to steric crowding around the radical center, which makes it physically difficult for the radical to react with another molecule. Examples of these include Gomberg's triphenylmethyl radical, Fremy's salt (potassium nitrosodisulfonate, (KSO3)2NO·), aminoxyls (general formula R2NO·) such as TEMPO, TEMPOL, nitronyl nitroxides, and azephenylenyls, as well as radicals derived from PTM (perchlorophenylmethyl radical) and TTM (tris(2,4,6-trichlorophenyl)methyl radical). Persistent radicals are generated in great quantity during combustion, and "may be responsible for the oxidative stress resulting in cardiopulmonary disease and probably cancer that has been attributed to exposure to airborne fine particles."
Diradicals are molecules containing two radical centers. Multiple radical centers can exist in a molecule. Atmospheric oxygen naturally exists as a diradical in its ground state as triplet oxygen. The low reactivity of atmospheric oxygen is due to its diradical state. Non-radical states of dioxygen are actually less stable than the diradical. The relative stability of the oxygen diradical is primarily due to the spin-forbidden nature of the triplet-singlet transition required for it to grab electrons, i.e., "oxidize". The diradical state of oxygen also results in its paramagnetic character, which is demonstrated by its attraction to an external magnet. Diradicals can also occur in metal-oxo complexes, lending themselves to studies of spin-forbidden reactions in transition metal chemistry.
Radical alkyl intermediates are stabilized by similar physical processes to carbocations: as a general rule, the more substituted the radical center is, the more stable it is. This directs their reactions. Thus, formation of a tertiary radical (R3C·) is favored over secondary (R2HC·), which is favored over primary (RH2C·). Likewise, radicals next to functional groups such as carbonyl, nitrile, and ether are more stable than tertiary alkyl radicals.
Radicals attack double bonds. However, unlike similar ions, such radical reactions are not as strongly directed by electrostatic interactions. For example, the reactivity of nucleophilic ions with α,β-unsaturated compounds (C=C–C=O) is directed by the electron-withdrawing effect of the oxygen, which results in a partial positive charge on the carbonyl carbon. Two reactions are observed in the ionic case: the carbonyl is attacked in a direct addition, or the vinyl group is attacked in conjugate addition, and in either case the charge on the nucleophile is taken by the oxygen. Radicals add rapidly to the double bond, and the resulting α-radical carbonyl is relatively stable; it can couple with another molecule or be oxidized. Nonetheless, the electrophilic/nucleophilic character of radicals has been shown in a variety of instances. One example is the alternating tendency of the copolymerization of maleic anhydride (electrophilic) and styrene (slightly nucleophilic).
In intramolecular reactions, precise control can be achieved despite the extreme reactivity of radicals. In general, radicals attack the closest reactive site the most readily. Therefore, when there is a choice, a preference for five-membered rings is observed: four-membered rings are too strained, and collisions with carbons six or more atoms away in the chain are infrequent.
A familiar free-radical reaction is combustion. The oxygen molecule is a stable diradical, best represented by ·O-O·. Because the spins of the electrons are parallel, this molecule is stable. While the ground state of oxygen is this unreactive spin-unpaired (triplet) diradical, an extremely reactive spin-paired (singlet) state is available. For combustion to occur, the energy barrier between these states must be overcome. This barrier can be overcome by heat, requiring high temperatures. The triplet-singlet transition is also "forbidden", which presents an additional barrier to the reaction. It also means molecular oxygen is relatively unreactive at room temperature except in the presence of a catalytic heavy atom such as iron or copper.
Combustion consists of various radical chain reactions that the singlet radical can initiate. The flammability of a given material strongly depends on the concentration of free radicals that must be obtained before initiation and propagation reactions dominate, leading to combustion of the material. Once the combustible material has been consumed, termination reactions again dominate and the flame dies out. As indicated, promotion of propagation or termination reactions alters flammability. For example, because lead itself deactivates free radicals in the gasoline-air mixture, tetraethyl lead was once commonly added to gasoline. This prevents combustion from initiating in an uncontrolled manner or in unburnt residues (engine knocking), and it prevents premature ignition (preignition).
When a hydrocarbon is burned, a large number of different oxygen radicals are involved. Initially, hydroperoxyl radicals (HOO·) are formed. These then react further to give organic hydroperoxides that break up into hydroxyl radicals (HO·).
In addition to combustion, many polymerization reactions involve free radicals. As a result, many plastics, enamels, and other polymers are formed through radical polymerization. For instance, drying oils and alkyd paints harden due to radical crosslinking by oxygen from the atmosphere.
Recent advances in radical polymerization methods, known as living radical polymerization, include:
- Reversible addition-fragmentation chain transfer (RAFT)
- Atom transfer radical polymerization (ATRP)
- Nitroxide mediated polymerization (NMP)
These methods produce polymers with a much narrower distribution of molecular weights.
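That claim can be made concrete with the dispersity, Đ = Mw/Mn, which approaches 1 as the distribution narrows. The following minimal Python sketch computes number- and weight-average molar masses for two samples; the chain counts and masses are invented purely for illustration and are not data from any of the methods named above:

# Minimal sketch: dispersity (D = Mw/Mn) from a discrete molecular-weight distribution.
def averages(counts_and_masses):
    """counts_and_masses: list of (number of chains N_i, molar mass M_i in g/mol)."""
    n_total = sum(n for n, m in counts_and_masses)
    mass_total = sum(n * m for n, m in counts_and_masses)
    mn = mass_total / n_total                                       # number average Mn
    mw = sum(n * m * m for n, m in counts_and_masses) / mass_total  # weight average Mw
    return mn, mw, mw / mn                                          # dispersity D >= 1

broad = [(50, 5_000), (100, 20_000), (50, 80_000)]    # broad distribution (illustrative)
narrow = [(50, 18_000), (100, 20_000), (50, 22_000)]  # narrow distribution (illustrative)
print(averages(broad))   # D is noticeably greater than 1 (about 1.85 here)
print(averages(narrow))  # D is close to 1 (about 1.005 here)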
The most common radical in the lower atmosphere is molecular dioxygen. Photodissociation of source molecules produces other free radicals. In the lower atmosphere, important free radicals are produced by the photodissociation of nitrogen dioxide to an oxygen atom and nitric oxide (see eq. 1.1 below), which plays a key role in smog formation, and by the photodissociation of ozone to give the excited oxygen atom O(1D) (see eq. 1.2 below). The net and return reactions are also shown (eq. 1.3 and eq. 1.4, respectively).
NO2 + hν → NO + O (eq. 1.1)
O3 + hν → O(1D) + O2 (eq. 1.2)
(eq. 1. 3)
(eq. 1. 4)
In the upper atmosphere, the photodissociation of normally unreactive chlorofluorocarbons (CFCs) by solar ultraviolet radiation is an important source of radicals (see eq. 2.1 below). These reactions give the chlorine radical, Cl•, which catalyzes the conversion of ozone to O2, i.e., ozone depletion (eq. 2.2–eq. 2.4 below).
CFCl3 + hν → CFCl2• + Cl• (eq. 2.1)
Cl• + O3 → ClO• + O2 (eq. 2.2)
ClO• + O → Cl• + O2 (eq. 2.3)
O3 + O → 2 O2 (net) (eq. 2.4)
(eq. 2. 5)
Such reactions cause the depletion of the ozone layer, especially since the chlorine radical is free to engage in another reaction chain; consequently, the use of chlorofluorocarbons as refrigerants has been restricted.
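To illustrate why a single chlorine radical can destroy many ozone molecules, the following toy Python bookkeeping loop runs the eq. 2.2/2.3 cycle repeatedly. It is purely illustrative: it ignores real rate constants, competing reactions, and reservoir species, and simply tracks stoichiometry:

# Toy bookkeeping for the Cl-catalyzed ozone destruction cycle (eq. 2.2 and eq. 2.3).
o3, o_atoms, cl, clo = 1_000, 1_000, 1, 0   # start with a single Cl radical
cycles = 0
while o3 > 0 and o_atoms > 0:
    o3 -= 1; cl -= 1; clo += 1       # eq. 2.2: Cl· + O3 -> ClO· + O2
    o_atoms -= 1; clo -= 1; cl += 1  # eq. 2.3: ClO· + O -> Cl· + O2
    cycles += 1
print(cycles, o3, cl)  # 1000 cycles: all the O3 is consumed, yet the one Cl radical remains

Because the chlorine radical is regenerated on every pass, its count is unchanged at the end of the run, which is the essence of its catalytic role in ozone depletion.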
Free radicals play important roles in biology. Many of these are necessary for life, such as the intracellular killing of bacteria by phagocytic cells such as granulocytes and macrophages. Free radicals are involved in cell signalling processes, known as redox signaling. For example, free radical attack of linoleic acid produces a series of 13-Hydroxyoctadecadienoic acids and 9-Hydroxyoctadecadienoic acids, which may act to regulate localized tissue inflammatory and/or healing responses, pain perception, and the proliferation of malignant cells. Free radical attacks on arachidonic acid and docosahexaenoic acid produce a similar but broader array of signaling products.
Free radicals may also be involved in Parkinson's disease, senile and drug-induced deafness, schizophrenia, and Alzheimer's. The classic free-radical syndrome, the iron-storage disease hemochromatosis, is typically associated with a constellation of free-radical-related symptoms including movement disorder, psychosis, skin pigmentary melanin abnormalities, deafness, arthritis, and diabetes mellitus. The free-radical theory of aging proposes that free radicals underlie the aging process itself. Similarly, the process of mitohormesis suggests that repeated exposure to free radicals may extend life span.
Because free radicals are necessary for life, the body has a number of mechanisms to minimize free-radical-induced damage and to repair damage that occurs, such as the enzymes superoxide dismutase, catalase, glutathione peroxidase and glutathione reductase. In addition, antioxidants play a key role in these defense mechanisms. These often include the three antioxidant vitamins (vitamin A, vitamin C and vitamin E) and polyphenol antioxidants. Furthermore, there is good evidence indicating that bilirubin and uric acid can act as antioxidants to help neutralize certain free radicals. Bilirubin comes from the breakdown of red blood cells' contents, while uric acid is a breakdown product of purines. Too much bilirubin, though, can lead to jaundice, which could eventually damage the central nervous system, while too much uric acid causes gout.
Reactive oxygen species
Reactive oxygen species or ROS are species such as superoxide, hydrogen peroxide, and hydroxyl radical, commonly associated with cell damage. ROS form as a natural by-product of the normal metabolism of oxygen and have important roles in cell signaling. Two important oxygen-centered free radicals are superoxide and hydroxyl radical. They derive from molecular oxygen under reducing conditions. However, because of their reactivity, these same free radicals can participate in unwanted side reactions resulting in cell damage. Excessive amounts of these free radicals can lead to cell injury and death, which may contribute to many diseases such as cancer, stroke, myocardial infarction, diabetes, and other major disorders. Many forms of cancer are thought to be the result of reactions between free radicals and DNA, potentially resulting in mutations that can adversely affect the cell cycle and potentially lead to malignancy. Some of the symptoms of aging such as atherosclerosis are also attributed to free-radical-induced oxidation of cholesterol to 7-ketocholesterol. In addition, free radicals contribute to alcohol-induced liver damage, perhaps more than alcohol itself. Free radicals produced by cigarette smoke are implicated in inactivation of alpha 1-antitrypsin in the lung. This process promotes the development of emphysema.
Oxybenzone has been found to form free radicals in sunlight, and therefore may be associated with cell damage as well. This only occurred when it was combined with other ingredients commonly found in sunscreens, like titanium oxide and octyl methoxycinnamate.
ROS attack the polyunsaturated fatty acid linoleic acid to form a series of 13-Hydroxyoctadecadienoic acid and 9-Hydroxyoctadecadienoic acid products that serve as signaling molecules, which may trigger responses that counter the tissue injury that caused their formation. ROS also attack other polyunsaturated fatty acids, e.g. arachidonic acid and docosahexaenoic acid, to produce a similar series of signaling products.
History and nomenclature
Until late in the 20th century the word "radical" was used in chemistry to indicate any connected group of atoms, such as a methyl group or a carboxyl, whether it was part of a larger molecule or a molecule on its own. The qualifier "free" was then needed to specify the unbound case. Following recent nomenclature revisions, a part of a larger molecule is now called a functional group or substituent, and "radical" now implies "free". However, the old nomenclature may still appear in some books.
The term radical was already in use when the now obsolete radical theory was developed. Louis-Bernard Guyton de Morveau introduced the phrase "radical" in 1785 and the phrase was employed by Antoine Lavoisier in 1789 in his Traité Élémentaire de Chimie. A radical was then identified as the root base of certain acids (the Latin word "radix" meaning "root"). Historically, the term radical in radical theory was also used for bound parts of the molecule, especially when they remain unchanged in reactions. These are now called functional groups. For example, methyl alcohol was described as consisting of a methyl "radical" and a hydroxyl "radical". Neither are radicals in the modern chemical sense, as they are permanently bound to each other, and have no unpaired, reactive electrons; however, they can be observed as radicals in mass spectrometry when broken apart by irradiation with energetic electrons.
In a modern context, the first organic (carbon-containing) free radical identified was the triphenylmethyl radical, (C6H5)3C•. This species was discovered by Moses Gomberg in 1900. In 1933 Morris S. Kharasch and Frank Mayo proposed that free radicals were responsible for the anti-Markovnikov addition of hydrogen bromide to allyl bromide.
In most fields of chemistry, the historical definition of radicals contends that the molecules have nonzero electron spin. However, in fields including spectroscopy, chemical reaction, and astrochemistry, the definition is slightly different. Gerhard Herzberg, who won the Nobel prize for his research into the electron structure and geometry of radicals, suggested a looser definition of free radicals: "any transient (chemically unstable) species (atom, molecule, or ion)". The main point of his suggestion is that there are many chemically unstable molecules that have zero spin, such as C2, C3, CH2 and so on. This definition is more convenient for discussions of transient chemical processes and astrochemistry; therefore researchers in these fields prefer to use this loose definition.
Radicals typically exhibit paramagnetism, but the bulk magnetic properties of an ion or molecule are often not conveniently measured. Electron spin resonance (also known as electron paramagnetic resonance, EPR) is instead the definitive and most widely used technique for characterizing free radicals. The nature of the atom bearing the unpaired electron, and of its neighboring atoms, can often be deduced from the EPR spectrum.
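As a brief numerical aside (standard magnetic-resonance physics, not taken from this article), the magnetic field at which a free radical resonates follows from the resonance condition hν = g·μB·B; the microwave frequency below is a typical X-band value chosen for illustration:

# EPR resonance condition: h*nu = g * mu_B * B  =>  B = h*nu / (g * mu_B)
h = 6.62607015e-34      # Planck constant, J*s
mu_B = 9.2740100783e-24 # Bohr magneton, J/T
g = 2.0023              # free-electron g-factor; many organic radicals lie close to this
nu = 9.5e9              # typical X-band microwave frequency, Hz (illustrative choice)
B = h * nu / (g * mu_B)
print(round(B, 3), "T") # about 0.339 T, i.e. roughly 3400 gauss

Shifts of the measured g-factor away from 2.0023, together with hyperfine splittings from nearby nuclei, are what allow the radical's structure to be inferred from the spectrum.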
- Electron pair
- Globally Harmonized System of Classification and Labelling of Chemicals
- Hofmann–Löffler reaction
- Free radical research
- IUPAC Gold Book radical (free radical) PDF
- Hayyan, M.; Hashim, M.A.; AlNashef, I.M. (2016). "Superoxide Ion: Generation and Chemical Implications". Chem. Rev. 116 (5): 3029–3085. doi:10.1021/acs.chemrev.5b00407.
- Oakley, Richard T. (1988). "Progress in Inorganic Chemistry" (PDF). Cyclic and Heterocyclic Thiazenes (section). Progress in Inorganic Chemistry. 36: 299–391. doi:10.1002/9780470166376.ch4. ISBN 978-0-470-16637-6.
- Rawson, J; Banister, A; Lavender, I (1995). "Advances in Heterocyclic Chemistry". The Chemistry of Dithiadiazolylium and Dithiadiazolyl Rings (section). Advances in Heterocyclic Chemistry. 62: 137–247. doi:10.1016/S0065-2725(08)60422-5. ISBN 978-0-12-020762-6.
- Griller, David; Ingold, Keith U. (1976). "Persistent carbon-centered radicals". Accounts of Chemical Research. 9: 13–19. doi:10.1021/ar50097a003.
- Lomnicki S.; Truong H.; Vejerano E.; Dellinger B. (2008). "Copper oxide-based model of persistent free radical formation on combustion-derived particulate matter". Environ. Sci. Technol. 42 (13): 4982–4988. Bibcode:2008EnST...42.4982L. doi:10.1021/es071708h. PMID 18678037.
- However, paramagnetism does not necessarily imply radical character.
- Svensson, Mats (1999). "Timing is Critical: Effect of Spin Changes on the Diastereoselectivity in Mn(Salen)-Catalyzed Epoxidation". Journal of the American Chemical Society. 121: 5083–4. doi:10.1021/ja9809915.
- Pacher P, Beckman JS, Liaudet L (2007). "Nitric oxide and peroxynitrite in health and disease". Physiol. Rev. 87 (1): 315–424. doi:10.1152/physrev.00029.2006. PMC . PMID 17237348.
- "Lipid peroxidation: pathophysiological and pharmacological implications in the eye". Frontiers in Physiology. 4. doi:10.3389/fphys.2013.00366.
- Broderick, J. B.; Duffus, B. R.; Duschene, K. S.; Shepard, E. M., (2014). "Radical S-Adenosylmethionine Enzymes". Chemical Reviews. 114: 4229–4317. doi:10.1021/cr4004709.
- Floyd, R. A. (1999). "Neuroinflammatory processes are important in neurodegenerative diseases: An hypothesis to explain the increased formation of reactive oxygen and nitrogen species as major factors involved in neurodegenerative disease development". Free Radical Biology and Medicine. 26 (9–10): 1346–1355. doi:10.1016/s0891-5849(98)00293-7.
- An overview of the role of free radicals in biology and of the use of electron spin resonance in their detection may be found in Rhodes C.J. (2000). Toxicology of the Human Environment – the critical role of free radicals. London: Taylor and Francis. ISBN 0-7484-0916-5.
- Rajamani Karthikeyan; Manivasagam T; Anantharaman P; Balasubramanian T; Somasundaram ST (2011). "Chemopreventive effect of Padina boergesenii extracts on ferric nitrilotriacetate (Fe-NTA)-induced oxidative damage in Wistar rats". J. Appl. Phycol. 23 (2): 257–263. doi:10.1007/s10811-010-9564-0.
- Mukherjee, P. K.; Marcheselli, V. L.; Serhan, C. N.; Bazan, N. G. (2004). "Neuroprotectin D1: A docosahexaenoic acid-derived docosatriene protects human retinal pigment epithelial cells from oxidative stress". Proceedings of the National Academy of Sciences of the USA. 101 (22): 8491–8496. Bibcode:2004PNAS..101.8491M. doi:10.1073/pnas.0402531101. PMC . PMID 15152078.
- Lyons, MA; Brown, AJ (1999). "7-Ketocholesterol". Int. J. Biochem. Cell Biol. 31: 369–75. doi:10.1016/s1357-2725(98)00123-x. PMID 10224662.
- Serpone, N; Salinaro, A; Emeline, AV; Horikoshi, S; Hidaka, H; Zhao, JC (2002). "An in vitro systematic spectroscopic examination of the photostabilities of a random set of commercial sunscreen lotions and their chemical UVB/UVA active agents". Photochemical & Photobiological Sciences. 1 (12): 970–981. doi:10.1039/b206338g.
- "Lipid peroxidation: pathophysiological and pharmacological implications in the eye". Frontiers in Physiology. 4. doi:10.3389/fphys.2013.00366.
- The Peroxide Effect in the Addition of Reagents to Unsaturated Compounds. I. The Addition of Hydrogen Bromide to Allyl Bromide M. S. Kharasch, Frank R. Mayo J. Am. Chem. Soc., 1933, 55, pp 2468–2496 doi:10.1021/ja01333a041
- Radicals: Reactive Intermediates with Translational Potential Ming Yan, Julian C. Lo, Jacob T. Edwards, and Phil S. Baran J. Am. Chem. Soc., 2016, 138 (39), pp 12692–12714 doi:10.1021/jacs.6b08856
- G. Herzberg (1971), "The spectra and structures of simple free radicals", ISBN 0-486-65821-X.
- 28th International Symposium on Free Radicals. Archived 2007-07-16 at the Wayback Machine.
- Chechik, Victor; Carter, Emma; Murphy, Damien (2016). Electron Paramagnetic Resonance. Oxford University Press. ISBN 978-0-19-872760-6. | <urn:uuid:27fe3608-4502-4c15-8fa3-e17192faad7d> | 4.1875 | 5,744 | Knowledge Article | Science & Tech. | 36.13874 | 95,552,875 |
Uptake of picophytoplankton, bacterioplankton and virioplankton by a fringing coral reef community (Ningaloo Reef, Australia)
- 293 Downloads
We examined the importance of picoplankton and virioplankton to reef trophodynamics at Ningaloo Reef, (north-western Australia), in May and November 2008. Picophytoplankton (Prochlorococcus, Synechococcus and picoeukaryotes), bacterioplankton (inclusive of bacteria and Archaea), virioplankton and chlorophyll a (Chl a) were measured at five stations following the consistent wave-driven unidirectional mean flow path of seawater across the reef and into the lagoon. Prochlorococcus, Synechococcus, picoeukaryotes and bacterioplankton were depleted to similar levels (~40% on average) over the fore reef, reef crest and reef flat (=‘active reef’), with negligible uptake occurring over the sandy bottom lagoon. Depletion of virioplankton also occurred but to more variable levels. Highest uptake rates, m, of picoplankton occurred over the reef crest, while uptake coefficients, S (independent of cell concentration), were similarly scaled over the reef zones, indicating no preferential uptake of any one group. Collectively, picophytoplankton, bacterioplankton and virioplankton accounted for the uptake of 29 mmol C m−2 day−1, with Synechococcus contributing the highest proportion of the removed C. Picoplankton and virioplankton accounted for 1–5 mmol N m−2 day−1 of the removed N, with bacterioplankton estimated to be a highly rich source of N. Results indicate the importance of ocean–reef interactions and the dependence of certain reef organisms on picoplanktonic supply for reef-level biogeochemistry processes.
KeywordsCoral reef Picoplankton Virus Uptake Ningaloo Reef Indian Ocean
We thank D. Krikke, F. McGregor, S. Hinrichs, A. Chalmers and K. Meyers for assistance in the field. Funding was provided by grants from the University of Western Australia (UWA), The Faculty of Engineering, Computing and Mathematical Sciences and the Western Australian Marine Science Institution (Node 3) to A.M.W.; an Australian Research Council (ARC) Discovery Grant #DP0663670 to A.M.W. et al., an ARC Discovery Grant #DP0770094 to R.J.L. and postdoctoral research funding from UWA and The Australian Institute of Marine Science to N.L.P. The authors acknowledge the facilities, scientific and technical assistance of the Australian Microscopy & Microanalysis Research Facility at the Centre for Microscopy, Characterisation and Analysis, UWA, a facility funded by The University, State and Commonwealth Governments. We finally thank two anonymous reviewers who provided valuable comments that improved this manuscript.
- Anderson MJ (2001) A new method for non-parametric multivariate analysis of variance. Austral Ecol 26:32–46Google Scholar
- Anderson MJ, Gorley RN, Clarke KR (2008) PERMANOVA+ for PRIMER: guide to software and statistical methods. PRIMER-E, Plymouth, UKGoogle Scholar
- Dinsdale EA, Pantos O, Smigra S, Edwards R, Angly F, Wegley L, Hatay M, Hall D, Brown E, Haynes M, Krause L, Sala E, Sandin SA, Vega Thurber R, Willis BL, Azam F, Knowlton N, Rohwer F (2008) Microbial ecology of four coral atolls in the Northern Line Islands. PLoS ONE 3(2):e1584. doi: 10.1371/journalpone0001584 CrossRefPubMedPubMedCentralGoogle Scholar
- Feng M, Wild-Allen K (2009) The Leeuwin Current. In: Liu KK, Atkinson L (eds) Carbon and nutrient fluxes in continental margins: a global synthesis. Springer, New YorkGoogle Scholar
- Ginsburg RN (1983) Geological and biological roles of cavities in coral reefs. In: Barnes DJ (ed) Perspectives on coral reefs. Australian Institute of Marine Science, pp 148–153Google Scholar
- IPCC (2007) Climate change 2007: The physical science basis. In: Solomon S, Qin D, Manning M, Chen Z, Marquis MC, Averyt K, Tignor M, Miller HL (eds) Contribution of working group I to the fourth assessment report of the intergovernmental panel on climate change. Intergovernmental panel on Climate Change. Cambridge University Press, Cambridge, UK and New York, USAGoogle Scholar
- Marie D, Partensky F, Vaulot D, Brussard C (1999) Enumeration of phytoplankton, bacteria, and viruses in marine samples. In: Robinson JPEA (ed) Current protocols in cytometry, suppl 10. John Wiley & Sons, Inc, New York, pp 11.11.11–11.11.15Google Scholar
- Parsons TR, Maita Y, Lalli CM (1984) A manual for chemical and biological methods for seawater analysis. Pergamon Press, New YorkGoogle Scholar
- Partensky F, Blanchot J, Vaulot D (1999) Differential distribution and ecology of Prochlorococcus and Synechococcus in oceanic waters: a review. In: Charpy L, Larkum AWD (eds) Marine cyanobacteria. Bulletin de l’Institut Océanographique, Monaco, pp 457–475Google Scholar
- Pearce AF (1991) Eastern Boundary Currents of the southern hemisphere. J R Soc West Aust 74:35–45Google Scholar
- Stockner JG (1988) Phototrophic picoplankton: an overview from marine and freshwater ecosystems. Limnol Oceanogr 33:765–775Google Scholar | <urn:uuid:d5c95e40-5697-43da-9f8e-9194e7d12369> | 2.9375 | 1,286 | Academic Writing | Science & Tech. | 38.32152 | 95,552,876 |
(tĕktìt), naturally occurring, silica-rich (65%–80% SiO2) glass resembling obsidian and sometimes shale, and is normally jet black to olive green. They appear as small rounded or elongated objects that often have aerodynamic shapes and range from a fraction of an ounce to several pounds in weight. They are found in limited areas on the earth's surface called strewn fields (in contrast to meteorites, which show a random distribution over the whole earth). Tektites, originally named by Eduard Suess are usually given a name derived from the region in which they are found; moldavites (from the Vlatava, or Moldau, River in the Czech Republic), bediasites (from the territory of the Bedias Native Americans in Texas), indochinites, philippinites, australites, javanites, and Côte d'Ivoire tektites are the principal groups. Their peculiar composition, physical characteristics, and restricted geographic distribution gave rise to several theories: one suggests a lunar origin, i.e., that they were the result of a lunar meteorite impact that ejected splashes of molten lunar rock, some of which eventually made their way to earth; however, the composition of moon rocks does not resemble tektites, and the lunar-origin theory, for the most part, is questionable. Another theory suggests their origin to be through the fusion and ejection of terrestrial material by the impact of giant meteorites or comets on the earth; the moldavites and the Côte d'Ivoire tektites have been linked with such impacts, but the source of the remaining tektite groups is still uncertain.
Summary Article: tektite
from The Columbia Encyclopedia | <urn:uuid:0b0201bf-67a4-4314-8513-12e1b965bb25> | 3.78125 | 367 | Knowledge Article | Science & Tech. | 18.37375 | 95,552,877 |
Evolution, by and large, has had center stage in shaping life as we know it. Everything that roots or runs, burrows or swims, grazes, pecks, or stalks has been steered by its hand. The idea has also sparked one of the most heated debates humans have ever thrown their collective lot into — that of our origin.
This often-messy debate has also muddied the scientific waters from which the theory spawned. Presenting the debate as a choice between two entities tends to somewhat put the two on the same level in our subconscious. Pitting something against (quite literally) the concept of divinity, then, makes it rather easy for us to impart almost supernatural, godlike traits to the process — to think it almost a living creature, that shapes life according to its own wants and whims. After all, something which replaces the work of gods must be a pretty powerful and impressive force in and of itself, right?
Which brings us around to what we’re going to sink our teeth into today: some of the misconceptions that I have seen surrounding the idea of evolution.
- 1 That evolution created life, and the theory of evolution equates to a theory about the origin of life.
- 2 That evolution doesn’t have a direction — it’s random!
- 3 That evolution always favors the fittest.
- 4 That evolution always means ‘better’ versions of organisms.
- 5 Bonus round — That evolution is always slow and humans aren’t evolving any longer.
That evolution created life, and the theory of evolution equates to a theory about the origin of life.
Certain aspects of the theory of evolution do relate to the origins of life — such as what types of organisms came first, what they ate, where they lived, how they shaped all life to come. Overall, however, evolution aims to study how life behaved after it appeared, how it changed to suit its environment. This intrinsic property makes evolution and abiogenesis very separate things — no life, no evolution.
Equating the two is like saying all chemistry is actually physics because it deals with atoms, which are physical particles. If you look at a limited enough part of these fields, that statement can technically stand true — the big picture, however, is that we have different names for the fields because they deal with different bits of the natural world: chemistry with the composition, structure, properties, behavior, and the changes that compounds of atoms and molecules undergo through reactions with one another; physics is the study of matter, its motion and behavior through space, as well as energy and forces.
To stretch this analogy even further, evolution resembles chemistry in that it studies how organisms change over time through reactions with their environment and other organisms. Abiogenesis, like physics, looks at what processes allowed those organisms to form in the first place.
That it aims to create a ‘perfect organism’, at which point the process of evolution will be ‘complete’.
This one is a bit more convoluted, so bear with me.
First, it’s important to note that evolution doesn’t have a goal; it is the journey, not the destination. New generations partly inherit characteristics from their parents, partly take on new ones through mutations (a permanent change in an organism’s genotype). Mutations can be either good for the organism (such as making it more resistant to a certain pathogen), bad (making it more vulnerable to a certain pathogen), they can have mixed effects, or no effects at all! It’s a lottery.
An organism’s genotype is the part of its genetic makeup that determines one of its outer characteristics. The sum of the latter makes up the phenotype. As a rule of thumb, think of the genotype as an organism’s genetic blueprint, and of the phenotype as the end product, with all the scratches, dents, and new coats of paint it got over its lifetime.
Genes are (or, rather, used to be) built with four building blocks: nucleotides adenine, thymine, guanine, and cytosine, or A, T, G, and C — although in RNA, thymine is replaced by uracil or U. What makes each nucleotide distinct is a part called the nitrogenous base, and these form the rungs in the DNA ladder. The side rails are made of sugar molecules tied to pieces called phosphate bases, which stacked on top of another. A only ties with T and G only with C, but, because they use the same sugar molecules to connect to the rest of the DNA strand, these pairs can switch places — and you get a mutation. Alternatively, pairs can be unwittingly deleted or inserted into the strand which, again, causes a mutation.
Over successive generations, enough mutations build up in the genotype to subtly change a species’s phenotype. Over longer spans of time, this process will generate entirely new species (significant changes in phenotype) from pre-existing ones. That’s why evolution can’t be ‘complete’: it is the change.
Secondly, an organism’s ‘perfection’ can only be gauged in relationship to something: a perfect ocean mammal will be, literally and figuratively, completely out of its depth on a mountain peak.
Which brings us to the next point on our list.
That evolution doesn’t have a direction — it’s random!
By itself, evolution simply means a gradual development — in our case, the change in genetic heritage of organisms over time. We’ve seen that it’s an ongoing process which can bestow younglings with both great advantages or crippling shortcomings — however, in its eyes, all mutations are equal.
Evolution through natural selection, however — the theory Darwin developed — is what people generally refer to and shorthand as ‘evolution‘. The gradual development we’ve just talked about is random and yields random mutations, but the end product of evolution (through natural selection) isn’t.
Natural selection is where mutations go to be judged. It’s the quality control department of evolution; the sieve that separates the fittest wheat from the illest-adapted of chaff.
It sounds like a fancy concept, but it’s actually really simple. ‘Natural selection’ basically encapsulates the idea that environments put certain pressures on organisms: to find food, to stay warm, to not get eaten, to mate, etc. How well an organism meets these requirements determines a lot of factors, such as how long it will live, and how well it will be able to defend itself. In the end, however, all these factors are only proxies for the one overriding goal all life shares: to continue its own image by passing on its genes.
Nature is pretty much a dog-eat-dog rat race (double cliches are the best cliches), so even slight disadvantages from unfortunate mutations will make an organism more likely to die / limit its baby-making ability. At the same time, even a seemingly inconsequential boon from one’s genetics could provide an edge in mate-ability. Thus, species weed themselves of poor mutations over time, while retaining and even amplifying positive ones. Through the process, they change — but the overall direction of change isn’t random, it’s guided.
In other words, evolution causes each organism in a species to be a tentative first step towards a new genetic lineage, complete with strengths and weaknesses. But it is the gene-borne ability of each individual to adapt to their natural environment that selects which get to make babies and spread their DNA.
Which should make the next misconception pretty confusing to read at first:
That evolution always favors the fittest.
Reading between the lines of the point we’ve discussed above, specifically in the “how well an organism meets these requirements” part, there’s a hidden kernel of wisdom — one that, I find, does wonders during a stressful day: even in the eyes of evolution, you don’t have to be ‘perfect’ — ‘good enough’ is, really, good enough.
Natural selection is a sieve rather than a competition because it allows individuals to have whatever traits their genes give them, as long as they’re able to survive (and mate). The process doesn’t single out the best in the species and breeds that over and over. It just slowly murders those who can’t keep up, along with their genes, while letting everyone else keep on keeping on. What natural selection rewards is adaptability.
For example, take yours truly. I like to fancy myself a well-adapted guy, mostly due to my solid entrenchment at the top of the food chain. But my wisdom teeth, thanks to a genotype that made my jaw too small for them, are solid agony. There is no shadow of a doubt in my mind that, bereft of modern medicine, I would have gladly poked a bear with a sharp stick, trying to look tasty while pleading for it to maul me, just so I wouldn’t have to endure the pain.
I am definitely not the fittest. And, in the wild, I very likely wouldn’t have been fit enough either. But in my environment, I’m just good enough.
Another point I’d like to make here, again, is that the very idea of being the ‘fittest’ implies a standard of measurement. A standard which may or may not hold true between species — for example, a plant’s could be how well it photosynthesizes, a bear would judge its fitness through physical strength, while you or I would much prefer cognitive ability. So, while you may talk about the fittest individual in a species, many species can be just as fit to thrive in the same conditions.
Nature always finds new solutions to old problems, and the fitness of an organism can only be gauged in relation to its environment. The kakapo, for example (Strigops habroptila), was superbly adapted to its environment in New Zealand before humans arrived. This fluffy, flightless parrot had no natural predators and ample resources of food — so it evolved into this cute, fat bird that has almost no concept of fear and tons of libido. Tons of libido — here’s Stephen Fry intruding on the privacy of one happy kakapo and one quite unlucky photographer:
Today, however, the birds have fallen on hard times. After humans got there, first Polynesians and later Europeans, it was hunted it since it was plump, tasty, and didn’t run away. Humans have also brought invasive predators that decimated the species: as of February 2018, the Kakapo Recovery Programme’s page reports the total known adult population amounted to 151 living individuals. The kakapo has gone from one of the most successful species in its environment to being listed as critically endangered on the IUCN’s Red List — all because some new organisms entered its ecosystem.
That evolution always means ‘better’ versions of organisms.
My main gripe with this idea actually has to do with what people generally envision as progress. In an age when new phones have better specs than the last, new cars are faster than the old ones, and new rockets are cheaper than the old ones, we tend to assume that ‘progress’ equates to a linear improvement. When talking evolution, then, the impression is that progress means bigger muscles, sharper beaks, thicker furs — a certain number of characteristics that simply get an upgrade.
But let me ask you this: say you have your standard Mk.I polar bear, and one lineage then evolves into the Mk.II. It’s better across the board. It’s much faster and stronger because it has bigger muscles. It can bear much lower temperatures because it has a thicker fur and more fat under its skin. It’s also bigger, and nothing in its ecosystem can dare stand up to it. Let’s even go wild and give it a sonar, just like the ones dolphins have.
The Mk.II nearly wipes the Mk.I clean off the face of the Earth because it’s so much better at being a polar bear than the former. But as they compete, human-induced climate change starts heating up the poles. Mean temperatures increase, ice cover begins to shrink, and food becomes scarcer as polar bears need ice to hunt seals.
In this scenario, being bigger, fatter, and having more muscle could actually pose a disadvantage, as it means the Mk.II needs more food. Being heavier also means that it’s less likely the Mk.IIs will find ice chunks thick enough to sustain their bulk. The thicker mane means they have to spend more energy trying to regulate body heat, since it’s too warm for thick furs now. Size, while still an advantage in itself, also quickly becomes a liability, as other animals in their ecosystems are increasingly looking at them as potential food since they’re so meaty. Evolution (through natural selection) would quickly start to favor the smaller but less food-intensive Mk.I over the newer, assumed-to-be-better species.
It’s an extreme example, but it gets the point across: an organism with traits that are beneficial in one situation may be poorly equipped for survival when conditions change. Evolution and natural selection don’t work to churn out ‘better’ life. They work to make better-adapted life.
It’s a subtle nuance, but a significant one.
Bonus round — That evolution is always slow and humans aren’t evolving any longer.
Generally, evolution and natural selection take a lot of time. But there is evidence of (relatively) rapid evolution in the fossil record; for example, some species of foraminiferans, which are a type of single-celled organism. It’s not rapid as from one generation to the next — the paper I’ve linked describes how two genera, Morozovella and Acarinina, differentiated into two new morphotypes, M. africana, and A. sibaiyaensis, as well as a completely new species, M. allisonensis, in under ten thousand years.
Morphotypes are a group of individuals that are part of a species but different enough to stand out and be distinguishable from the rest.
Now, ensconced comfortably in our concrete ecosystems, it’s easy to think we’re out of the grasp evolution and natural selections. But boy, oh boy is that wrong.
We’re not yet at the point where we can control our mutations, so evolution still reigns supreme over our biology. Our technology, medicine, all our know-how do, however, influence the effect of natural selection on our species — but that’s not the same as eliminating it altogether.
For example, we can now treat diabetes with insulin, meaning that (at least in developed countries) the disease isn’t as powerful a selection criteria as it used to be — in blunter terms, the mutations that contribute to the condition aren’t as likely to get you killed. The reverse is that we’re also no longer weeding out the faulty gene versions as we used to.
At the same time, we’ve gained new selective pressures — we live in denser groups, meaning we’re at higher risk of epidemics. Throughout time, these genes have become increasingly important and increasingly selected for in our collective gene pool. Modern medicine largely insulates us from epidemics today, but if one hits and we don’t have a cure, those whose genes can provide an edge will be around to repopulate, and the rest likely will not. Thus, those genes will become even more prevalent.
Phew, that was long — you guys have probably already started to select for longer-than-average attention spans!
I do hope you found it interesting and that it helped patch up your understanding of the subject. These are just the most common misconceptions I’ve run into when discussing the topic, but that doesn’t mean these are the only ones floating around out there. If you’ve got something that you feel could join them on the list, leave us a comment down below.
Enjoyed this article? Join 40,000+ subscribers to the ZME Science newsletter. Subscribe now! | <urn:uuid:64311a14-a813-43ef-a98e-0fab686b5342> | 2.859375 | 3,481 | Personal Blog | Science & Tech. | 45.964126 | 95,552,898 |
Our objective in this study was to determine the density of the jaguar Panthera onca from camera-trap data, using an open population model, in a private protected natural area, the Northern Jaguar Reserve, and 10 adjoining cattle ranches in the state of Sonora, Mexico. The region is considered a long-term jaguar conservation unit. As well as being the most northerly recorded reproductive population of the jaguar, the arid habitat of this region is atypical for the species. During 16 months of sampling we identified 10 individual jaguars and the data met the three main assumptions of open population models. The estimated mean density was 1.05±SE 0.4 individuals per 100 km2, with a constant survival probability of 0.94 and capture probability of 0.23. This estimate of density is lower than reported in studies of the jaguar from more southerly locations in Mexico, Belize, Costa Rica, Bolivia and Brazil but cannot be attributed to a single factor even though in general there is an apparent relationship between jaguar density and precipitation. The main objectives of the management of the Northern Jaguar Reserve are to reduce the impact of cattle and restore jaguar habitat, with strategies focused on water retention, removal of invasive grass, reforestation and environmental education. Livestock have been gradually excluded since 2003 and, combined with the protection provided under the agreements with the surrounding ranches, the area is now a suitable place for long-term studies of the jaguar. © 2012 Fauna & Flora International.
Mendeley saves you time finding and organizing research
Choose a citation style from the tabs below | <urn:uuid:1e4a6c85-5e26-40f2-b3d3-917f40689e82> | 3.40625 | 339 | Academic Writing | Science & Tech. | 28.316163 | 95,552,908 |
The study found as many as five adult large carnivores, including leopards and striped hyenas, per 100 square kilometers (38 square miles), a density never before reported in a human-dominated landscape.
Camera traps set up at night in a densely populated region of India virtually devoid of wilderness revealed leopards, striped hyenas, jackals -- and lots of people.
Credit: Project Waghoba
The study, called "Big Cats in Our Backyards," appeared in the March 6 edition of the journal PLoS One. Authors include: Vidya Athreya and Ullas Karanth of the Wildlife Conservation Society and Centre for Wildlife Studies in Bangalore; Morten Odden of Hedmark University College; John D. C. Linnell of the Norwegian Institute for Nature Research; and Jagdish Krishnaswamy of Asoka Trust for Research of Ecology in the Environment.Using camera traps, the authors founds that leopards often ranged close to houses at night though remained largely undetected by the public. Despite this close proximity between leopards and people, there are few instances of attacks in this region. The authors also photographed rusty spotted cat, small Indian civet, Indian fox, jungle cat, jackal, mongoose – and a variety of people from the local communities. The research took place in western Maharashtra, India.
The authors say that the findings show that conservationists must look outside of protected areas for a more holistic approach to safeguarding wildlife in a variety of landscapes.The Wildlife Conservation Society saves wildlife and wild places worldwide. We do so through science, global conservation, education and the management of the world's largest system of urban wildlife parks, led by the flagship Bronx Zoo. Together these activities change attitudes towards nature and help people imagine wildlife and humans living in harmony. WCS is committed to this mission because it is essential to the integrity of life on Earth. Visit http://www.wcs.org.
Stephen Sautner | EurekAlert!
Upcycling of PET Bottles: New Ideas for Resource Cycles in Germany
25.06.2018 | Fraunhofer-Institut für Betriebsfestigkeit und Systemzuverlässigkeit LBF
Dry landscapes can increase disease transmission
20.06.2018 | Forschungsverbund Berlin e.V.
A new manufacturing technique uses a process similar to newspaper printing to form smoother and more flexible metals for making ultrafast electronic devices.
The low-cost process, developed by Purdue University researchers, combines tools already used in industry for manufacturing metals on a large scale, but uses...
For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth.
To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength...
For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications.
Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar...
Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction.
A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in in the synthesis of new chemical...
Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy.
"Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy....
13.07.2018 | Event News
12.07.2018 | Event News
03.07.2018 | Event News
20.07.2018 | Power and Electrical Engineering
20.07.2018 | Information Technology
20.07.2018 | Materials Sciences | <urn:uuid:557012a8-6b50-458d-b2de-b0b29968c3ff> | 3.5 | 1,006 | Content Listing | Science & Tech. | 36.403549 | 95,552,925 |
Introduction to Sybase, Part 2
Last month, we installed a Sybase database server. This month, we will install a client to access our server. First, we need to understand how the Sybase network interface works.
A Sybase client must create a network connection to a database server when it needs to access resources in the database (see Figure 1). Sybase has created a protocol to communicate over the network. This protocol is called the Tabular Data Stream (TDS) protocol. It operates on top of other networking protocols such as TCP/IP on UNIX systems or IPX/SPX on Novell networks. TDS is a proprietary protocol and not documented by Sybase. Fortunately, Sybase has created client libraries which can be used to communicate with the server. A group of people have tried to reverse-engineer the TDS protocol. Look at http://sunsite.unc.edu/freetds/ for more information.
Sybase supports two interfaces to the database. DB-Library is an API that has been used for quite awhile in Sybase products. I believe it is supported for backwards compatibility, and may not be supported in a future version of the product. CT-Library is the API Sybase created for version 10 and higher products. It supports advanced features such as cursors and asynchronous query processing. You don't need to understand these features to do basic processing with your database server. We will use CT-Library to communicate with our server.
We could write our client using C or C++. The libraries required to do this are included with the server. Look for examples in the sample directory under the server directory. There is a subdirectory for DB-Library and one for CT-Library. We don't have to use C or C++, however. An extension to the Perl language called sybperl enables the use of Perl to write clients to access the database.
Most Linux distributions come with the Perl language. On my system, I have installed Red Hat 5.1 which includes Perl version 5 by default. Fortunately, it is possible to install sybperl without recompiling Perl. Using this method precludes the use of the DB-Library, which is why we have chosen to use CT-Library.
If Perl is not installed on your system, install it now. If your distribution does not provide perl, you can download the source from CPAN (http://www.cpan.org/).
First, we must download sybperl from http://www.perl.com/CPAN-local/authors/Michael_Peppler/
The newest version available at the time of this writing is version 2.09_05 in the file sybperl-2.09_05.tar.gz (148KB). Change directories to the location of the sybperl tar file, and issue the command:
Change to the sybperl directory just created, and edit the CONFIG file. In the line DBLIBVS=1000, change the 1000 to 0. Make sure the line SYBASE=/opt/sybase contains the correct directory for the Sybase server. The line EXTRA_LIBS=-ltli must be changed to EXTRA_LIBS=-linsck.
We will build sybperl to work with CT-library. Most Linux distributions come with the Berkeley DB library. If Perl is configured to use this library, a conflict arises when using DB-library at the same time, since both use the call open_database. If you recompile Perl to leave out the Berkeley DB library, you can leave the line DBLIBVS=1000 in the CONFIG file and use DB-library.
Save the changes to the CONFIG file, then issue this command:
This will create a file that will build the software. It looks for the Perl installation in your path. If Perl isn't in your path, you'll need to change your path to include it. Now issue the make command to build the software; it will take a few minutes to run. sybperl has tests that can be run to ensure it is built properly. To run these tests, edit the PWD file to put your sa password and the name of your Sybase server on the proper lines. If you installed the server following the directions in the last issue, the name of the server is linux_dev. Save the file, then type the command
make testThis command will run a series of tests. If everything is working properly, the message “All tests successful” will be printed.
Now, let's install sybperl. If your Perl installation is in a directory that requires root access to modify, change to root using su. Run the command
Perl and sybperl are now installed, so it is time to write some programs.
All of the programs here are available for download. If you type in these programs, be sure to use chmod to make them executable.
Writing a sybperl program is quite simple. Listing 1 is our first example program. This will list the names of all the databases in the server. Here is a line-by-line explanation of the program.
Line 1 tells Linux which program to use to run this script. This must be the new version of Perl you just installed. Make sure you change this line to point to the correct version of Perl on your system.
Line 3 tells Perl to use the CT-library interface to Sybase. It should be at the beginning of all Perl scripts you write that access a Sybase server.
Line 5 attaches to the correct Sybase server. The first parameter is the user name, the second is the password and the third is the name of the server.
Line 7 is the SQL to run.
Lines 9-10 are commands that run the query on the server and return a reference to an array of rows, @rows. Note this command loads the entire result set into memory. This is fine for small result sets, but if you are expecting a large result set, you shouldn't use ct_sql. Later, I will give you an alternative method for executing commands and receiving large result sets.
Lines 12-14 will print all rows that were returned.
Listing 2 is an example of a sybperl program that updates data. In Line 7, we use the same ct_sql command to send the SQL to the database, except this time a set of rows is not returned. The insert, delete and update SQL commands also do not return rows. The SQL command use pubs2 tells Sybase to make the pubs2 database the default database for the rest of this session. In Line 10, we again use the ct_sql command to run the SQL. This time, we add a row to the discounts table. You can use the isql program to run an SQL SELECT command to verify that the row was added.
Linux is mostly used as a web server, and Perl is primarily used to write web applications. So, we will create a Perl program to access the Sybase database.
Writing a CGI program to access Sybase is quite simple. Listing 3 is the complete code of a CGI program to let you know who's logged in to your Sybase server. Place this program in your web server's cgi-bin directory. On a default Red Hat system, the directory is /home/httpd/cgi-bin/. For this example, name the program listing3.pl.
In lines 5 and 6, we set two environment variables. The Sybase DB-Library and CT-Library must find these environment variables, or an error will occur. When you run a CGI program, very few environment variables are passed to your program. These two environment variables must be set in each CGI program that needs to access Sybase. If you have many CGI programs, place these commands in a file included in all your CGI programs.
The SYBASE environment variable contains the directory of the Sybase software. The DSQUERY variable contains the name of the default server.
The only other difference between this example and the others is it outputs HTML to a browser.
These example programs show the basics of accessing a Sybase database server. In production programs, a few more things must be taken care of in your programs.
Errors from the server must be handled properly. If you ignore them, your program will stop when it encounters a server error.
In all our example programs, ct_sql was used to run queries. It works fine for SQL commands and stored procedures which don't return result sets, but would have severe problems for queries returning large result sets.
Listing 4 shows how to handle errors and demonstrate a replacement for the ct_sql command. In lines 3 and 4, we establish both a client and a server message call-back routine. These routines will be called when the server or client generates an informational or error message.
In lines 7-20, we handle a single SQL statement. Sybase allows a single statement to return multiple result sets. Lines 8-10 will process each result set. Lines 17-19 will handle each row in a result set. Lines 11-16 will look at the result set and print the names and types of each column. A Sybase result set contains more than just data—it also includes column definition information.
Lines 23-50 are the two call-back routines. These routines are called each time there is a message from the server or client. An example of a client message is the one returned if you can't log in to the server. An example of a server message is the one returned if you have an error in your SQL.
All this information can be found within the sybperl and the Sybase documentation.
The Sybase database server is a powerful piece of software. Unfortunately, all that power comes with a price. Setting up and supporting a Sybase server isn't as easy as using Postgres or MySQL, but if you need a heavy-duty, high-performance database, Sybase is your best bet.
Next month, we'll discuss application development using Sybase on Linux. This article will appear in the “Strictly On-line” section.
Jay Sissom is responsible for the web-based front end to the financial decision support data at Indiana University. He has installed and supported Sybase databases on many operating systems and has written database clients for the Web using C, C++ and sybperl and for Windows using tools like Visual Basic and PowerBuilder. When he isn't programming, he enjoys amateur radio and playing his bass guitar and keyboards. If you have questions, you can contact him via e-mail at firstname.lastname@example.org. | <urn:uuid:b6aada02-5e6e-4946-9c05-88483ada5e2b> | 2.890625 | 2,236 | Truncated | Software Dev. | 61.161186 | 95,552,926 |
Mannophryne venezuelensis is found on the slopes of the Península de Paria, Municipality of Arismendi, state of Sucre, Venezuela, from near sea level to about 600 masl (Manzanilla et al., 2007).
Habitat and Ecology
The species is found in and around mountain streams, some individuals have been found in a stream that runs through a cocoa field (Manzanilla et al., 2007).
No population status information is available for this species.
Major threats to this species include traditional shifting agriculture and use of agrochemicals in upstream coffee and cocoa plantations (Manzanilla et al., 2007).
The species is known to occur in a protected area, Peninsula de Paria, National Park. However, shifting agriculture is a common practice in the region, including within the realm of the national park (Manzanilla et al., 2007).
Red List Status
Near Threatened (NT)
Listed as Near Threatened since although the species is common within its restricted range, its Extent of Occurrence is less than 5,000 km2, and the extent and quality of its habitat is declining, thus making the species close to qualifying for Vulnerable.
Mannophryne venezuelensis can be distinguished from other similar species by a combination of morphological characters and differences in advertisement call traits and mitochondrial DNA sequences (Manzanilla et al., 2007).
Ariadne Angulo 2008. Mannophryne venezuelensis. The IUCN Red List of Threatened Species 2008: e.T136174A4255031. http://dx.doi.org/10.2305/IUCN.UK.2008.RLTS.T136174A4255031.en | <urn:uuid:3967c44d-4ce7-4deb-b475-5c419cf7299e> | 2.59375 | 366 | Knowledge Article | Science & Tech. | 48.254 | 95,552,974 |
The Cumulative Deformation, Work-Hardening and Fracture of Magnesium Oxide at Room Temperature, Under Repeated Point Loading Conditions
In earlier work1, it was established that repeated traversais increase the density of defects in the dislocated volume in magnesium oxide beneath a softer lubricated metal slider, thus increasing the number of operating slip systems and giving rise to significant work-hardening. Further stress cycles ultimately lead to the formation of cracks and cause fragmentation. The number of stress cycles (N) required to cause this kind of fatigue fracture was found to be inversely related to the hardness of the slider. Furthermore, a plot of slider hardness:number of traversais resembled a conventional fatigue plot and indicated that the surface would not fail when deformed by softer materials below a certain limiting value of hardness. Further work2 showed that the contact pressure transmitted to the crystal was directly related to the flow stress of the impressor and therefore dependent on the material used. When the contact pressure exceeds the critical resolved shear stress within the crystal, dislocations move and multiply in the harder solid. Thus, the flow stress of the softest material to achieve this effect may be used to estimate the critical resolved shear stress of the hard crystal.
KeywordsContact Pressure Slip Plane Resolve Shear Stress Critical Resolve Shear Stress Titanium Diboride
Unable to display preview. Download preview PDF.
- 6.C. A. Brookes, E. J. Brookes and G. Xing, The use of the soft indenter technique to investigate impression creep in ceramic crystals. “Mechanics of Creep Brittle Materials 2”, A. C. F. Cocks and A. R. S. Ponter, eds., Elsevier, London, (1991)Google Scholar
- 7.E. J. Brookes, “The Plasticity of Diamond”, PhD Dissertation, University of Hull (1992)Google Scholar
- 11.E. Field, Strength and fracture, in: “Properties of Diamond,” J. E. Field, ed., Academic Press, London (1979).Google Scholar | <urn:uuid:121170ea-0d37-4773-9453-b6b27f1d71be> | 2.6875 | 441 | Academic Writing | Science & Tech. | 47.272741 | 95,552,979 |
Greenhouse Effect on the Economy and You
How Rising CO2 Levels Are Making Your World a Hothouse
The greenhouse effect is when carbon dioxide and other gases in the Earth's atmosphere capture the Sun's heat radiation. Greenhouse gases include carbon dioxide, water vapor, methane, nitrous oxide, and ozone.
The greenhouse effect functions like the glass roof on a greenhouse that traps the sun's heat. Without greenhouse gases, the atmosphere would be 91 degrees Fahrenheit cooler. The Earth would be a frozen snowball and most life on Earth would cease to exist.
Since the 1850s, humans have been burning massive amounts of plant-based fuels such as coal, oil, and trees. These emit the carbon dioxide they had absorbed and stored during their lifetimes. Temperatures have risen 1.5 degrees Celsius since then.
Nature emits 230 gigatons of carbon dioxide into the atmosphere each year. But it also absorbs that same amount through plants and the oceans. It remained balanced until 10,000 years ago when humans began burning wood.
Oil and other fossil fuels are the remains of prehistoric plants. When they were alive, the plants absorbed carbon from the atmosphere during photosynthesis. In that process, they harness the sun’s energy to make sugar. They combine hydrogen from water with carbon from carbon dioxide. They emit oxygen as a byproduct. When they die, their remains contain all the carbon they absorbed. When we burn them as fuel, the carbon combines with oxygen and enters the atmosphere.
As proof of the impact of photosynthesis, every spring the Northern Hemisphere becomes green and the concentration of carbon dioxide in the atmosphere dips. In the fall and winter, the foliage dies, and CO2 rises. Some scientists say it's like the earth is breathing.
Heat and power generation releases 25 percent of the carbon man has put into the atmosphere.
Often overlooked is land use. Food production has released 135 gigatons of carbon into the atmosphere. The most damaging methods are clear-cutting, plowing, and heavy grazing. These methods obliterate carbon dioxide-absorbing plant life.
In 2015, there were 400 parts per million of carbon dioxide in the Earth's atmosphere. It's much higher than at any time in the past 800,000 years. It exceeded the prior record high of 300 ppm in 1950, gaining 100 ppm since then. Scientists warn that we need to remove this 2 trillion tons of "legacy C02" to stop further climate change.
There is so much excess CO2 that it would take 35,000 years for it all to dissipate at natural absorption rates. For that to work, humanity would have to stop emitting all CO2 effective immediately.
But will people stop emitting CO2 anytime soon? There are goals to reduce carbon emissions, but the significant emitters have no plans to stop using carbon-based fuels. Daniel Cziczo, Ph.D., associate professor of atmospheric chemistry at the Massachusetts Institute of Technology, estimates it could reach 600 ppm before humanity changes its ways. That could drive the temperature increase to 2, 3, or 4 degrees Celsius.
In 2017, carbon dioxide emissions increased 1.4 percent.
The International Energy Agency said two-thirds of the increase came from Asia. An unusually hot summer in China led it to run coal plants more often in order to run air conditioners.
The European Union saw emissions rise 1.5 percent. The United States reduced its emissions by 0.5 percent. One big reason for this is the switch from coal to shale gas. But people in both economies are buying larger vehicles, which would increase oil use. Britain, Mexico, and Japan also cut their emissions.
Methane or CH4 traps heat 25 times greater than an equal amount of carbon dioxide. But it dissipates after 10 to 12 years. CO2 lasts for hundreds of years, if not thousands.
Methane comes from three primary sources. The production and transport of coal, natural gas, and oil make up 40 percent. Cow digestion contributes another 26 percent, while manure management adds 10 percent.
The decay of organic waste in municipal solid waste landfills kicks in 16 percent.
Researchers have found a simple solution to the emission of methane from cows. Farmers should add seaweed to the animals' diet. Researchers found that replacing 2 percent of the feed with Australian red algae would reduce methane emissions by 99 percent. The researchers are testing milk and beef to make sure the seaweed doesn't affect the product.
In 2016, California said it would cut its methane emissions 40 percent below 1990 levels by 2030. It has 1.8 million dairy cows and 5 million beef cattle. The seaweed diet, if proven successful, would be an inexpensive solution.
The Environmental Protection Agency has launched the Landfill Methane Outreach Program to help reduce methane from landfills. The program helps municipalities use the biogas as a renewable fuel.
Nitrous oxide, also called N2O, contributes 6 percent of greenhouse gas emissions. It remains in the atmosphere for 114 years. It absorbs 300 times the heat of a similar amount of carbon dioxide. It is produced by agricultural and industrial activities. It's also a byproduct of fossil fuel and solid waste combustion.
More than two-thirds results from its use in fertilizer. Farmers can reduce nitrous oxide emissions by reducing nitrogen-based fertilizer use.
Fluorinated gases are the longest lasting. They are thousands of times more dangerous than an equal amount of carbon dioxide. Because they are so potent, they are called High Global Warming Potential gases.
There are four types. Hydrofluorocarbons are used as refrigerants. They replaced chlorofluorocarbons that were depleting the protective ozone layer in the atmosphere. Hydrofluorocarbons, though, are also being replaced by hydrofluoroolefins. These have a shorter lifespan.
Perfluorocarbons are emitted during aluminum production and the manufacturing of semiconductors. They remain in the atmosphere between 2,600 and 50,000 years. They are 7,390 to 12,200 times more potent than CO2. The EPA is working with the aluminum and semiconductor industries to reduce the use of these gases.
Sulfur hexafluoride is used in magnesium processing, semiconductor manufacturing, and as a tracer gas for leak detection. It's also used in electricity transmission. It’s the most dangerous greenhouse gas. It remains in the atmosphere for 3,200 years and is 22,800 times as potent as CO2. The EPA is working with power companies to detect leaks and recycle the gas.
Nitrogen trifluoride remains in the atmosphere for 740 years. It is 17,200 times more potent than CO2.
Greenhouse Effect Is Well Established by Science
Scientists have known for more than 100 years that carbon dioxide and temperature are related. In the 1850s, John Tyndall and Svante Arrhenius studied how gases responded to sunlight. They found that most of the atmosphere has no effect because it is inert. But 1 percent is very volatile. These components are CO2, ozone, nitrogen, nitrous oxide, CH4, and water vapor. When the sun's energy hits the earth's surface, it bounces off. But these gases act like a blanket. They absorb the heat and reradiate it back to the earth. Since they are so potent, a 40 percent increase is huge. The volume is having huge impacts on temperatures.
In 1896, Svante Arrhenius found that if you doubled CO2, which was then at 280 ppm, it would increase temperatures by 4 degrees Celsius. The term 280 ppm means there are 280 molecules of carbon dioxide per million molecules of total air.
In 1880, CO2 was 280 ppm. In 2012, it was 400 ppm. That’s a 43 percent increase in CO2. The average temperature is 1 degree Celsius warmer. Over large land areas, it’s 1.5 degrees Celsius warmer. It’s the warmest it’s been in thousands of years. Why hasn’t the temperature increased by Arrhenius’ predictions?
Clouds, fog, particles, and ice sheets reflect the sun’s radiation back into space before it ever reaches the earth’s surface. Scientists call it the Direct Effect. The Indirect Effect of particles creates more clouds. They cool the temperature at the same time greenhouse gases warm the temperature. Without the clouds created by pollution, it would be as warm as Arrhenius predicted. According to Cziczo, that’s what makes it so difficult to predict temperatures.
But Isn't More Carbon Dioxide Good for Plants?
Only a fraction of CO2 emissions benefit vegetation. Most of it goes into the atmosphere and the ocean. But scientists believe that the side effects outweigh the benefits. Higher temperatures, rising sea levels and an increase in droughts, hurricanes, and wildfires more than offset any gains in plant growth.
Reversing the Greenhouse Effect
In 2014, the Intergovernmental Panel on Climate Change said countries must remove carbon from the atmosphere. Even if we stopped emitting gases, there are already enough greenhouse gases in the air to create a catastrophe. These include the collapse of polar glaciers and flooded coastal cities.
In 2015, the Paris Climate Accord was signed by 195 countries. They pledged to cut greenhouse gas emissions by 26 to 28 percent below 2005 levels by 2025. Its goal is to keep global warming from worsening another 2 degrees Celsius above pre-industrial levels. Many experts consider that the tipping point. Beyond that, the consequences of climate change become unstoppable.
In 2017 alone, China installed as many solar panels as those that exist in France and Germany. But renewables only meet 19 percent of global energy demand. To meet the Paris Agreement climate goals, clean energy must grow five times as fast as it did in 2017.
To reverse the greenhouse effect, man must pull carbon dioxide out of the atmosphere and render it inert.
Several strategies have been discussed. None of these approaches are yet proved or affordable at the scale needed to make a difference. The most obvious hurdle is the additional energy some of them require. Unless this increased energy comes from a free, renewable source, it would add more costs.
- Scrubbing the air with great air conditioner-like machines.
- Fertilizing the oceans with iron dust to prompt algal blooms that, when they die, carry captured carbon to the bottom of the sea.
- Capturing and storing the carbon dioxide that results when energy is produced by burning trees and other plants that removed carbon from the atmosphere during their growth.
- Crushing and spreading certain types of rock, like basalt, that naturally absorb atmospheric carbon.
Carbon farming does this more affordably by growing plants. For example, builders are putting plants into bio-roofs. Agroforestry grows trees and crops together to increases carbon retention. No-till agriculture reduces the erosion and carbon loss caused by plowing. Keeping farmland covered would store more carbon than bare dirt bleeds carbon.
Whendee Silver is an ecologist at the University of California, Berkeley. She found that the best approach was to use manure as compost on the fields. It kept it from emitting carbon gases while it festered in lagoons. It also nourished grasses that absorbed more carbon. If only 41 percent of the rangeland were treated, it would offset 80 percent of California's agricultural emissions. | <urn:uuid:73bfb453-4ccf-4f11-b923-ccac15c59333> | 4.1875 | 2,372 | Nonfiction Writing | Science & Tech. | 52.344527 | 95,552,991 |
New approach to estimate thermospheric density from SLR observations of LEO satellites
The precise knowledge of the density of the Earth’s thermosphere is relevant for satellite mission planning, precise orbit determination (POD), re-entry predictions, and collision avoidance for Low Earth Orbiting (LEO) satellites. Empirical thermosphere models have been derived since the beginning of the space era from observations, e.g. from mass spectrometers and, more recently, from accelerometer data of the CHAMP and GRACE missions.
Scientists of DGFI-TUM developed a new approach for the estimation of the thermospheric density using satellite laser ranging (SLR) observations to spherical LEO satellites in combination with a full POD. The approach is based on a detailed analysis of the thermospheric drag. The drag coefficient is computed analytically using a gas-surface interaction model. In a case study, the derived procedure was applied to the spherical satellite ANDE-P at a mean altitude of around 350 km. From the analysis of the SLR observations to ANDE-P between August 16 and October 3, 2009, the scientists derived time series of estimated scale factors for the thermospheric density provided by four empirical models. The results show that all models overestimate the true thermospheric density along the ANDE-P trajectory during the processed period. Furthermore, the study revealed partly high correlations between the thermospheric scale factors and the orbital elements of the satellite’s motion. The approach and detailed results are described in the paper Towards thermospheric density estimation from SLR observations of LEO satellites: a case study with ANDE-Pollux satellite (Journal of Geodesy, 2018, DOI: 10.1007/s00190-018-1165-8).
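The core idea of such a scale-factor estimation can be illustrated with a minimal least-squares sketch. The snippet below is not the DGFI-TUM processing chain (which estimates the factors within a full POD and uses an analytical gas-surface interaction model); it only shows, under simplified assumptions, how a single factor relating observed drag accelerations to an empirical model density could be estimated. All function names, variable names, and numerical values are hypothetical.

```python
import numpy as np

def drag_acceleration(rho, cd, area_to_mass, v_rel):
    """Magnitude of the atmospheric drag acceleration (m/s^2)."""
    return 0.5 * rho * cd * area_to_mass * v_rel**2

def estimate_density_scale(a_obs, rho_model, cd, area_to_mass, v_rel):
    """Least-squares estimate of a single scale factor k such that
    k * rho_model best reproduces the observed drag accelerations."""
    a_model = drag_acceleration(rho_model, cd, area_to_mass, v_rel)
    # a_obs ~ k * a_model  ->  k = dot(a_model, a_obs) / dot(a_model, a_model)
    k = np.dot(a_model, a_obs) / np.dot(a_model, a_model)
    residuals = a_obs - k * a_model
    sigma_k = np.std(residuals) / np.sqrt(np.dot(a_model, a_model))
    return k, sigma_k

# Synthetic check: "observed" accelerations are 80% of the model prediction,
# so the estimated scale factor should come out close to 0.8.
rho_model = np.array([2.0e-12, 2.5e-12, 1.8e-12])   # kg/m^3 along the orbit
v_rel = np.full(3, 7600.0)                          # m/s relative to atmosphere
a_obs = 0.8 * drag_acceleration(rho_model, 2.2, 0.01, v_rel)
print(estimate_density_scale(a_obs, rho_model, 2.2, 0.01, v_rel))
```

In the actual procedure the scale factors are estimated epoch-wise as part of the orbit adjustment rather than from pre-computed accelerations, which is what produces the time series discussed above.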
Improved tide modelling in coastal regions
The ability to predict tidal elevations in coastal areas is of crucial importance for society. In certain regions, tidal events combined with extreme meteorological conditions are responsible for severe flooding and consequent environmental issues. Precise ocean tide information is also required to correct satellite observations, e.g. for the accurate determination of sea level changes or gravity field variations. The accuracy of ocean tide models has largely improved over the last decades as a result of the enormous amount of globally distributed sea level measurements from 25 years of satellite altimetry and enhanced data analysis capabilities.
Difficulties still remain in coastal areas where complex tidal conditions exist. Satellite measurements close to the coast are of reduced quantity and poorer quality due to contamination by land influences. Now, scientists of DGFI-TUM have shown how a dedicated coastal processing of altimetry observations can help to improve tidal modelling. By reprocessing the satellite altimetry observations with the ALES retracking algorithm, the precision and accuracy of coastal sea level measurements are significantly improved. A comparison with globally distributed in-situ tide gauge measurements reveals a substantial reduction of errors for distances closer than 20 km from land when ALES data are used to compute tidal constituents. Moreover, the absolute differences at single locations improve by more than 2 cm, with an average impact of over 10% for individual tidal constituents. The entire study is described in the open-access publication Coastal Improvements for Tide Models: The Impact of ALES Retracker (Remote Sensing, 2018, DOI:10.3390/rs10050700, [PDF]).
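Tidal constituents at a given location are typically obtained by a least-squares harmonic analysis of a sea level time series. The sketch below illustrates the principle for four major constituents only; it is a simplified illustration (no nodal corrections, no inference of minor constituents) and does not reproduce the analysis used in the study. The constituent periods are standard textbook values; all other names and values are hypothetical.

```python
import numpy as np

# Periods (hours) of four major tidal constituents
CONSTITUENT_PERIODS_H = {"M2": 12.4206012, "S2": 12.0,
                         "K1": 23.9344721, "O1": 25.8193417}

def harmonic_analysis(t_hours, sea_level):
    """Least-squares amplitude/phase estimation of major tidal constituents
    from a sea level time series (a constant offset is fitted as well)."""
    columns = [np.ones_like(t_hours)]
    names = list(CONSTITUENT_PERIODS_H)
    for name in names:
        omega = 2.0 * np.pi / CONSTITUENT_PERIODS_H[name]
        columns.append(np.cos(omega * t_hours))
        columns.append(np.sin(omega * t_hours))
    A = np.column_stack(columns)
    x, *_ = np.linalg.lstsq(A, sea_level, rcond=None)
    constituents = {}
    for i, name in enumerate(names):
        c, s = x[1 + 2 * i], x[2 + 2 * i]
        amplitude = np.hypot(c, s)
        phase_deg = np.degrees(np.arctan2(s, c)) % 360.0
        constituents[name] = (amplitude, phase_deg)
    return constituents

# Synthetic test: a pure M2 signal of 0.5 m amplitude and 60 deg phase
# sampled hourly over 60 days should be recovered exactly.
t = np.arange(0.0, 24.0 * 60.0, 1.0)
eta = 0.5 * np.cos(2 * np.pi / 12.4206012 * t - np.pi / 3)
print(harmonic_analysis(t, eta)["M2"])
```

The same fit applied to the reprocessed altimetry heights and to the tide gauge records is what allows the constituent-by-constituent comparison mentioned above.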
Improved and homogeneous altimeter sea level record
An improved and homogeneous altimeter sea level record has been derived within the second phase of the Sea Level project of the European Space Agency’s Climate Change Initiative. The record covers a 23-year long time span (1993-2015) and is based on the data from nine satellite altimeter missions. It includes the monthly gridded time series of multi-mission merged sea level anomalies at a 0.25° spatial resolution and derived products suitable for climate studies: global mean sea level time series, regional mean sea level trends (see Figure), and maps of the amplitude and phase of the annual and semi-annual signals of the sea level.
The use of improved geophysical corrections and improved orbit solutions, careful bias reduction between missions and inclusion of two new altimeter missions (SARAL/AltiKa and CryoSat-2) improved the sea level products and reduced their uncertainties on different spatial and temporal scales. The derived global mean sea level trend is 3.3 mm/a over the entire time span, the regional sea level trends range between -5 and +10 mm/a.
Scientists of DGFI-TUM contributed to the development of this new sea level record, in particular, by the analysis of new orbit solutions and the assessment of the impact of orbit improvements on regional sea level results. Processing strategy, results, quality assessment and validation of the new data set are described in the open-access publication An improved and homogeneous altimeter sea level record from the ESA Climate Change Initiative (Earth System Science Data, 2018, DOI:10.5194/essd-10-281-2018, [PDF]).
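The derived products mentioned above (trend, and amplitude and phase of the annual and semi-annual signals) are commonly obtained by a least-squares fit of a linear term plus sinusoids to each sea level anomaly time series. A minimal sketch with synthetic monthly data (not the ESA CCI product itself):

```python
import numpy as np

# Synthetic monthly sea level anomalies over 23 years (placeholder data, in mm).
t = np.arange(0, 23, 1 / 12.0)                            # time in years
rng = np.random.default_rng(0)
sla = 3.3 * t + 40.0 * np.sin(2 * np.pi * t + 0.5) + rng.normal(0, 10, t.size)

# Design matrix: offset, trend, annual and semi-annual sine/cosine terms.
A = np.column_stack([
    np.ones_like(t), t,
    np.sin(2 * np.pi * t), np.cos(2 * np.pi * t),         # annual signal
    np.sin(4 * np.pi * t), np.cos(4 * np.pi * t),         # semi-annual signal
])
coeffs, *_ = np.linalg.lstsq(A, sla, rcond=None)

trend = coeffs[1]                                          # mm per year
annual_amp = np.hypot(coeffs[2], coeffs[3])                # mm; phase follows from arctan2
print(f"trend: {trend:.2f} mm/a, annual amplitude: {annual_amp:.1f} mm")
```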
New approach to resolve water level changes of small rivers with CryoSat-2 SAR altimetry
CryoSat-2, launched in 2010, is the first altimetry satellite partly operating in synthetic-aperture radar (SAR) mode. Compared to conventional radar altimeters, the SAR mode enables altimetry measurements at increased along-track resolution and with smaller footprint. This opens new possibilities for the determination of water levels also of smaller inland water bodies that could not reliably be observed by the classical altimetry satellites. With its repeat time of 369 days CryoSat-2 does not provide a good temporal resolution, but on the other hand this long repeat orbit configuration results in a very dense spatial sampling of the water level, e.g. along a river.
The first step in the calculation of the water levels from CryoSat-2 data is the classification of the recorded radar returns into signals from water and land surfaces. Usually the classification is based on a predefined land-water mask. Such masks are often invariant with respect to time, i.e. they neither account for seasonal variations of the water extent nor for inter-annually shifting river banks. The determination of dynamic land-water masks (e.g. from remote sensing images) is difficult, in particular in regions with frequent cloud coverage, and usually satellite images are not available simultaneously with the altimetry measurements.
Using the example of the Mekong River basin, scientists of DGFI-TUM and DTU Space (Denmark) developed and validated a new method for the identification of water returns in CryoSat-2 data that is independent of a land-water mask and relies solely on features derived from the SAR and range-integrated power (RIP) waveforms. The new approach has proven its effectiveness especially in the upstream region of the basin that is characterized by small- and medium-sized rivers. Compared to an approach based on a land-water mask, the new method significantly increased the number of valid measurements and their precision (1700 with 2% outliers vs. 1500 with 7% outliers). The approach and results are described in the open-access publication River Levels Derived with CryoSat-2 SAR Data Classification - A Case Study in the Mekong River Basin (Remote Sensing, 2017, DOI:10.3390/rs9121238, [PDF]).
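Open water typically produces narrow, specular ("peaky") radar echoes, whereas land returns are broad and diffuse. A classic feature that exploits this contrast is the pulse peakiness, i.e. the peak power of the waveform relative to its total power. The toy classification below illustrates the idea with synthetic waveforms and an arbitrary threshold; the published method combines several SAR and RIP waveform features rather than this single measure.

```python
import numpy as np

def pulse_peakiness(waveform):
    """Ratio of the peak power to the summed power of a radar waveform."""
    waveform = np.asarray(waveform, dtype=float)
    return waveform.max() / waveform.sum()

# Two synthetic 128-gate waveforms: a specular (water-like) and a diffuse (land-like) echo.
gates = np.arange(128)
water_like = np.exp(-0.5 * ((gates - 60) / 1.5) ** 2)    # narrow peak
land_like = np.exp(-0.5 * ((gates - 60) / 20.0) ** 2)    # broad return

THRESHOLD = 0.1  # placeholder; a real classifier would be tuned or trained on data
for name, wf in [("water-like", water_like), ("land-like", land_like)]:
    pp = pulse_peakiness(wf)
    label = "water" if pp > THRESHOLD else "land"
    print(f"{name}: peakiness = {pp:.3f} -> classified as {label}")
```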
Next-generation global geodetic reference frames: The potential of Satellite Laser Ranging
Consistent and accurate global terrestrial reference frames (TRFs, i.e., the realization of coordinate systems attached to the Earth) are a fundamental prerequisite for navigation, positioning, and for Earth system science by referencing smallest changes of our planet in space and time. As Earth system research is increasingly relying on satellite-based Earth observation data, also the precise orientation of the Earth with respect to inertial space is required to relate the satellite observations to coordinates in the Earth reference frame.
Satellite Laser Ranging (SLR) is among the most important space geodetic techniques contributing to the determination of TRFs. SLR relies on the two-way travel time of laser pulses from stations on the Earth’s surface to satellites equipped with retro-reflectors. The SLR measurement principle allows for the realization of origin (Earth’s center of mass) and scale of global networks with very high accuracy. One of the major factors limiting the present accuracy of SLR-derived TRFs and Earth orientation parameters (EOP), however, is the currently unbalanced global SLR station distribution with a lack of stations particularly in the southern hemisphere.
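The SLR observable itself is simple: half the two-way travel time of the laser pulse, multiplied by the speed of light, gives the station-satellite range, to which further corrections (tropospheric delay, retro-reflector offset, etc.) are applied. A minimal sketch with a made-up correction value:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def slr_range(two_way_time_s, correction_m=0.0):
    """One-way station-satellite range from a two-way laser travel time."""
    return 0.5 * C * two_way_time_s + correction_m

# A pulse returning after ~5.3 ms corresponds to a satellite roughly 800 km away.
print(f"{slr_range(5.34e-3, correction_m=2.5) / 1000:.1f} km")
```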
The benefit of the potential future development of the SLR network for the accuracy of TRF (including origin and scale) and EOP has been investigated in the study Future global SLR network evolution and its impact on the terrestrial reference frame (Journal of Geodesy, 2017, DOI: 10.1007/s00190-017-1083-1). It compares the current network with a simulated SLR network that includes potential future stations to improve the network geometry. The study highlights that an improved SLR network geometry is a cornerstone for meeting the ambitious accuracy requirements for reference frames defined by IAG's Global Geodetic Observing System (GGOS).
Tracking openings in sea ice to increase our knowledge of the Arctic and Antarctic Ocean
The Arctic and Antarctic oceans are located in areas that are experiencing new conditions due to climate change (higher atmospheric temperatures, melting of ice sheets). Nevertheless, it is difficult to understand the changes in sea level, due to the fact that large areas are seasonally or permanently covered by sea ice. Scientists at DGFI-TUM have developed a new technique to spot the leads, i.e. openings in sea ice that uncover the sea surface, by analysing the data of ESA's successful Cryosat-2 mission.
The radar altimeter on board sends electromagnetic waves and collects the reflections from the ocean surface at different incidence angles. But the retrieval of meaningful sea level estimates requires not only the recognition of the leads. It also needs to be ensured that the openings are located directly beneath the satellite (at the nadir position). As the water in the leads is calm and flat, the leads act like a mirror for the satellite: they could be easy to recognise, but their reflection is so strong that it has a signature on the data also at other incidence angles. An undetected lead is a missed opportunity to measure sea level, but a lead detected when not at nadir can cause a wrong estimation.
By tracking the signature that the leads leave on the collected data, it is possible to improve the detection capabilities. This is shown in the publication Lead Detection using Cryosat-2 Delay-Doppler Processing and Sentinel-1 SAR images (Advances in Space Research, 2017, DOI: 10.1016/j.asr.2017.07.011, [PDF]). The technique is also validated using radar images from Sentinel-1 (see picture above). This study will increase the reliability of the sea level analysis at high latitudes and thus contributes to improve the knowledge of the sea level dynamics in the Arctic and Antarctic oceans.
Monitoring the Arctic Seas: Satellite Altimetry traces open water in sea-ice regions
Open water areas in sea ice regions significantly influence the ocean-ice-atmosphere interaction. For the debate about Arctic climate change, the monitoring and quantification of such openings, so-called leads and polynyas, is of high relevance.
In a recent study, scientists from DGFI-TUM demonstrated the potential of high-frequency satellite altimetry data from the missions SARAL and Envisat for the detection of open water areas in the ice-covered Greenland Sea. In comparison with Synthetic Aperture Radar (SAR) images, they obtained a consistency rate of 76.9% for SARAL and 70.7% for Envisat. Some samples even resulted in true water detection rates of up to 94%.
The study is based on an innovative, unsupervised classification approach that relates the radar altimetry echoes (so-called waveforms) with different surface conditions, among them open water and sea ice. The algorithm has successfully been used for the detection of water areas with different spatial extent, and it can be applied to all pulse-limited altimetry data sets. The procedure and results are described in the article Monitoring the Arctic Seas: How Satellite Altimetry can be used to detect open water in sea-ice regions (Remote Sensing, 2017, available via open access).
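Such an unsupervised classification can be sketched by extracting a few features per waveform (e.g. maximum power and pulse peakiness) and clustering them, here with k-means; this is only a schematic illustration with synthetic echoes, not the algorithm of the cited paper.

```python
import numpy as np
from sklearn.cluster import KMeans

def waveform_features(wf):
    """Two simple features per waveform: maximum power and pulse peakiness."""
    wf = np.asarray(wf, dtype=float)
    return np.array([wf.max(), wf.max() / wf.sum()])

# Synthetic echoes: specular (open water) vs diffuse (sea ice) returns.
rng = np.random.default_rng(1)
gates = np.arange(128)
waveforms = []
for _ in range(20):
    width = rng.uniform(1.0, 2.0)           # narrow peak -> water-like
    waveforms.append(200.0 * np.exp(-0.5 * ((gates - 60) / width) ** 2))
for _ in range(20):
    width = rng.uniform(15.0, 25.0)         # broad return -> ice-like
    waveforms.append(30.0 * np.exp(-0.5 * ((gates - 60) / width) ** 2))

X = np.array([waveform_features(wf) for wf in waveforms])
X = (X - X.mean(axis=0)) / X.std(axis=0)    # put both features on a comparable scale
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)                                # two clusters, one per surface type
```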
Near real-time modelling of the electron content in the ionosphere
The rapidly growing number of terrestrial GNSS (Global Navigation Satellite System) receivers providing double frequency measurements in real-time and near real-time enables the computation of ionosphere parameters such as the Vertical Total Electron Content (VTEC) with increasing accuracy and decreasing latency.
Scientists of DGFI-TUM have now developed a comprehensive processing framework to compute VTEC maps in near real-time from low latency GNSS measurements using compactly supported B-splines and recursive filtering methods. Details and results are described in the article Near real-time estimation of ionosphere vertical total electron content from GNSS satellites using B-splines in a Kalman filter (Annales Geophysicae, 2017, DOI: 10.5194/angeo-35-263-2017).
Series expansions in terms of B-spline functions allow for an appropriate handling of heterogeneously distributed input data. Kalman filtering enables the processing of the data immediately after acquisition and paves the way for sequential (near) real-time estimation of the unknown parameters, i.e. VTEC B-spline coefficients and differential code biases. Under the investigated conditions, the validation tests of our near real-time products show promising results in terms of accuracy and agreement with the post-processed final products of the International GNSS Service (IGS) and its analysis centers, which are usually publicly available with several days of latency.
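Conceptually, the filter alternates a prediction step, which propagates the B-spline coefficients and inflates their uncertainty, with an update step whenever new GNSS-derived observations arrive. The following generic linear Kalman filter sketch illustrates this cycle; all matrices and dimensions are placeholders rather than the actual DGFI-TUM model.

```python
import numpy as np

def kalman_step(x, P, z, H, R, F, Q):
    """One predict/update cycle of a linear Kalman filter.

    x, P : state vector (e.g. VTEC B-spline coefficients) and its covariance
    z    : new observation vector
    H    : observation matrix mapping the state to the observations
    R, F, Q : observation noise, state transition and process noise matrices
    """
    # Prediction
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)        # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy dimensions: 4 coefficients, 3 observations per epoch (placeholders).
n, m = 4, 3
x, P = np.zeros(n), np.eye(n)
F, Q = np.eye(n), 0.01 * np.eye(n)
H = np.random.default_rng(2).uniform(0, 1, (m, n))
R = 0.25 * np.eye(m)
z = np.array([10.0, 12.0, 9.0])                # pseudo VTEC observations in TECU
x, P = kalman_step(x, P, z, H, R, F, Q)
print(np.round(x, 2))
```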
Time series of sea level in the Mediterranean and North Sea with improved coastal performances
Satellite altimetry has been monitoring the sea level for more than 25 years. Measurements in the coastal zone, however, were routinely discarded due to poor quality. Recently, several studies addressed various techniques to improve the precision and the accuracy of coastal sea level measurements.
Scientists at DGFI-TUM are reprocessing the satellite signals using the ALES algorithm and have set up an experimental platform to distribute corrected sea level anomalies with a documented procedure. The COastal Sea level Tailored ALES (COSTA) dataset is now available in the Mediterranean and in the North Sea and provides the user with time series at each point along the satellite tracks of two satellite missions (ERS-2 and Envisat) covering the years 1996-2010. The COSTA dataset improves the precision of the standard product in over 70% of the domain. Details are provided in the recent presentation ALES Coastal Processing Applied to ERS: Extending the Coastal Sea Level Time Series (10th Coastal Altimetry Workshop, 21-24 February 2017, Florence, Italy). COSTA therefore not only improves the coastal data, but also represents a step forward in the precision of satellite altimetry in the open sea.
COSTA data is available for download in our Science Data Products section.
High-resolution river water levels based on multi-mission altimetry data
Classical approaches of inland altimetry determine water level variations of rivers at so-called virtual stations, i.e. fixed locations given by the crossings of altimeter tracks and rivers. Depending on the repeat cycles of the individual altimeter missions, the temporal resolution of these time series is limited to 35 or 10 days.
Now, scientists of DGFI-TUM have developed a method to provide a complete spatio-temporal description of the Mekong River based on multi-mission altimetry data. Details and results are described in the article Combination of Multi-Mission Altimetry Data Along the Mekong River with Spatio-Temporal Kriging (Journal of Geodesy, 2016, DOI: 10.1007/s00190-016-0980-z). The approach uses statistically robust interpolation methods to combine measurements of different altimetry missions, considering diverse flow velocities of the river as well as remote relationships between different catchment areas. With this approach, water level time series can be determined for arbitrary locations along the river at a significantly increased temporal resolution. For the test case of the Mekong river, a resolution of 5 days could be achieved, and the accuracy was improved by 23 to 34% compared to standard methods.
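Kriging predicts the water level at an unobserved location as a weighted mean of the surrounding observations, with weights obtained from a covariance (or variogram) model by solving the kriging system. The sketch below shows ordinary kriging for a purely spatial one-dimensional case with an assumed exponential covariance; the published method additionally works in the time dimension and accounts for flow velocities.

```python
import numpy as np

def exp_cov(h, sill=1.0, corr_length=50.0):
    """Assumed exponential covariance as a function of along-river distance h (km)."""
    return sill * np.exp(-np.abs(h) / corr_length)

def ordinary_kriging(x_obs, z_obs, x_pred):
    """Ordinary kriging prediction at x_pred from observations (x_obs, z_obs)."""
    n = len(x_obs)
    # Kriging system: [C 1; 1' 0] [w; mu] = [c0; 1]
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = exp_cov(x_obs[:, None] - x_obs[None, :])
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = exp_cov(x_obs - x_pred)
    w = np.linalg.solve(A, b)[:n]
    return float(w @ z_obs)

# Water levels (m) observed at virtual stations along the river (km), placeholders.
x_obs = np.array([0.0, 80.0, 150.0, 260.0])
z_obs = np.array([12.4, 11.8, 11.1, 10.2])
print(f"interpolated level at km 120: {ordinary_kriging(x_obs, z_obs, 120.0):.2f} m")
```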
New flyers available: DTRF2014 and DAHITI
Two new flyers have recently been issued: The first one contains background information and directions for data access for DGFI-TUM's most recent realization of the International Terrestrial Reference System, the DTRF2014 (for more information about the DTRF2014 see message below).
The second flyer advertises DAHITI, DGFI-TUM's Database for Hydrological Time Series of Inland Waters that has been operated since 2013. DAHITI provides time series of water levels of lakes, reservoirs, rivers, and wetlands derived from multi-mission satellite altimetry for more than 400 globally distributed targets.
Both flyers can be accessed by clicking on the images on the right.
DGFI-TUM releases a new realization of the International Terrestrial Reference System: DTRF2014
The DTRF2014 is DGFI-TUM’s new realization of the International Terrestrial Reference System (ITRS). It comprises positions and velocities of 1712 globally distributed stations of the space geodetic observation techniques VLBI, SLR, GNSS and DORIS as well as consistently estimated Earth orientation parameters. The DTRF2014 includes six additional years of data compared to the previous realization, i.e., the DTRF2008 (Seitz et al., 2012). Additionally, for the first time, non-tidal atmospheric and hydrological loading is considered in the DTRF2014.
In its role as an "ITRS Combination Centre" within the International Earth Rotation and Reference Systems Service (IERS), DGFI-TUM took the responsibility for providing realizations of the ITRS in regular intervals. An up-to-date ITRS realization at highest accuracy and long-term stability is an indispensable requirement for various applications in daily life (e.g., for navigation and positioning, for the realization of height systems and precise time systems or for the computation of spacecraft and satellite orbits). Furthermore, it is the backbone for Earth system research by providing the metrological basis and uniform reference for monitoring processes in the context of global change (e.g., ice melting, sea level rise). Read more about the DTRF2014 here (also available in German here).
The DTRF2014 is available for download in our Science Data Products section.
New surface deformation model for Latin America after the 2010 earthquakes in Chile and Mexico (VEMOS2015)
Strong earthquakes cause changes in positions (up to several meters) and velocities of geodetic reference stations. Hence, existing global and regional reference frames become unusable in the affected regions. To ensure the long-term stability of the geodetic reference frames, the transformation of station positions between different epochs requires the availability of reliable continuous surface deformation models.
Scientists of DGFI-TUM now published the new continental continuous surface deformation model VEMOS2015 (Velocity Model for SIRGAS [Sistema de Referencia Geocéntrico para Las Américas]) for Latin America and the Caribbean inferred from GNSS (GPS+GLONASS) measurements gained after the strong earthquakes in 2010 in Chile and Mexico. VEMOS2015 is based on a multi-year velocity solution for a network of 456 continuously operating GNSS stations covering a five years period from March 14, 2010 to April 11, 2015. The approach and results are described in the article Crustal deformation and surface kinematics after the 2010 earthquakes in Latin America (Journal of Geodynamics, 2016, DOI: 10.1016/j.jog.2016.06.005).
VEMOS2015 as well as the SIRGAS reference frame realization SIR15P01 (multi-year solution for 456 GNSS stations, including weekly residual time series) are provided in our Science Data Products section. More information on DGFI-TUM's research activities related to SIRGAS can be found here.
Innovative processing method for altimetry data allows for monitoring water level variations in wetlands
Over the last years satellite altimetry has proven its potential to monitor water level variations not only over the oceans but also over inland water bodies. DGFI-TUM provides altimetry-derived time series of water stage variations of various globally distributed rivers and lakes via its web service "Database for Hydrological Time Series over Inland Waters" (DAHITI; see below).
Now, scientists of DGFI-TUM have developed an innovative processing method for monitoring and analyzing water level variations in wetlands and flooded areas. The approach is based on automated altimeter data selection by waveform classification and an optimized waveform retracking. It is described in the article Potential of ENVISAT Radar Altimetry for Water Level Monitoring in the Pantanal Wetland (Remote Sensing, 2016, available via open-access).
Using the example of the Pantanal wetland in South America, this study demonstrates the capability and limitations of the ENVISAT radar altimeter for monitoring water levels in inundation areas. The accuracy of water stages varies between 30 and 50 cm (RMSE) and is in the same order of magnitude as reported for smaller rivers. Most areas of the Pantanal show clear annual water level variations with maximum water stages between January and June. The amplitudes can reach up to about 1.5 m for larger rivers and their floodplains. However, some areas of the wetland show water level variations of less than a few decimeters, which is below the accuracy of the method. These areas cannot reliably be monitored by ENVISAT. Further investigations will show if the usage of Delay-Doppler altimeter data (such as measured by the recently launched Sentinel-3 mission) might improve the results there.
IAG adopts a new conventional value for the reference gravity potential W0 of the geoid
The Global Geodetic Observing System (GGOS) of the International Association of Geodesy (IAG) promotes through its Focus Area 1 Unified Height System the definition and realization of a global vertical reference system with homogeneous consistency and long-term stability. For the term 2011-2015 DGFI-TUM coordinated the Working Group Vertical Datum Standardization, whose main purpose was to determine an updated value for the gravity potential W0 of the geoid to be introduced as the conventional reference level for the realization of a global height system.
The derived value was officially adopted by the IAG in its Resolution No. 1, July 2015, as the conventional W0 value for the definition and realization of the International Height Reference System. A detailed description about the DGFI-TUM computation strategy of W0, applied models, conventions and standards, as well as results is presented in the recent publication A conventional value for the geoid reference potential W0 (Journal of Geodesy, 2016, DOI: 10.1007/s00190-016-0913-x).
Read more about the background and DGFI-TUM's activities related to the determination of the new W0 value here.
New flexible combination approach for regional gravity field modelling applied to Northern Germany
Different measurement techniques of the Earth's gravity field are characterized by different spectral sensitivities, i.e. they allow for detecting structures of the gravity field at different spatial scales. By combining the observations from various measurement techniques a data set of a broad spectral range can be obtained. Typically, high-resolution gravity data from regional measurements are combined with global satellite information of lower spatial resolution.
To exploit the gravitational information as optimally as possible, scientists of DGFI-TUM set up a regional modeling approach. It uses radial spherical basis functions and emphasizes the strengths of various data sets by a flexible combination of high- and middle-resolution terrestrial, airborne, shipborne, and altimetry measurements. The resulting regional models can serve as a basis for various applications, such as the refinement of global gravity field models, national geoid determination, and the detection of mass anomalies in the Earth’s interior. Details can be found in the recently published article Combination of various observation techniques for regional modeling of the gravity field (Journal of Geophysical Research, 2016, DOI:10.1002/2015JB012586).
Data portal DAHITI: Water level time series of rivers and lakes from multi-mission satellite altimetry
Scientists of DGFI-TUM have developed a new approach for the automated estimation of water levels of inland water bodies based on satellite observations from multi-mission altimetry. Time series of water stage variations of various globally distributed rivers and lakes are made available through DGFI-TUM's web service “Database for Hydrological Time Series over Inland Waters” (DAHITI).
The approach is described in the publication DAHITI – an innovative approach for estimating water level time series over inland waters using multi-mission satellite altimetry (Hydrology and Earth System Sciences, 2015, available via open-access).
The new method is based on an extended outlier rejection and a Kalman filter incorporating cross-calibrated multi-mission altimeter data from Envisat, ERS-2, Jason-1, Jason-2, TOPEX/Poseidon, and SARAL/AltiKa, including their uncertainties. The paper presents water level time series for a variety of lakes and rivers in North and South America. A comprehensive validation is performed by comparisons with in situ gauge data and results from external inland altimeter databases. The new approach yields RMS differences with respect to in situ data between 4 and 36 cm for lakes and 8 and 114 cm for rivers. For most study cases, more accurate height information than from other available altimeter databases can be achieved.
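Validation figures such as the RMS differences quoted above are computed by comparing the altimetric time series with in situ gauge records at common epochs; since the two refer to different vertical datums, the mean offset is usually removed first. A short sketch with made-up numbers:

```python
import numpy as np

def rms_after_bias_removal(altimetry, gauge):
    """RMS of the differences after removing the mean offset (datum difference)."""
    diff = np.asarray(altimetry) - np.asarray(gauge)
    return float(np.sqrt(np.mean((diff - diff.mean()) ** 2)))

# Placeholder water levels in metres at common epochs.
altimetry = np.array([121.42, 121.87, 122.31, 121.95, 121.60])
gauge     = np.array([118.30, 118.70, 119.25, 118.81, 118.52])
print(f"RMS difference: {100 * rms_after_bias_removal(altimetry, gauge):.1f} cm")
```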
DGFI-TUM contributes to the implementation of an UN Resolution for a Global Geodetic Reference Frame
In February 2015, the UN General Assembly adopted its first geospatial resolution „A Global Geodetic Reference Frame for Sustainable Development“. This resolution recognizes the importance of geodesy for many societal and economic benefit areas, including navigation and transport, construction and monitoring of infrastructure, process control, surveying and mapping, and the growing demand for precisely observing our planet's changes in space and time. The resolution stresses the significance of the global reference frame for accomplishing these tasks, for natural disaster management, and to provide reliable information for decision-makers.
The United Nations Global Geospatial Information Management (UN-GGIM) Working Group on the Global Geodetic Reference Frame (GGRF) has the task of drafting a roadmap for the enhancement of the GGRF under UN mandate.
Based on its competence in the realization of reference frames DGFI-TUM is involved in this activity by contributing to the compilation of a concept paper in the frame of the International Association of Geodesy (IAG). The main purpose of this paper is to provide a common understanding for the definition of the GGRF and the scientific basis for the preparation of the roadmap to be accomplished by the UN-GGIM Working Group on the GGRF. [more] | <urn:uuid:fdc0a292-8f8b-4bbe-9037-6f21b357a5b3> | 2.71875 | 5,749 | Content Listing | Science & Tech. | 24.996462 | 95,552,993 |
5 Sustainable Innovations in 2014 You Probably Didn't Know About
by guestauthor, June 16, 2014, 7:57 am
[Photo: A prototype of the solar roadways concept]
Each year, technology and innovative advancement bring humanity a step closer to a much more sustainable existence. From renewable energy to improved methods for cultivating food and consumer products, science is lending a hand to enhance humanity's way of life. Although many are aware of the developments of the recent past, there are many sustainable innovations that are developed without all the pomp and circumstance. What innovative ideas have been developed in 2014 that you may have missed?
1. Redefining Solar Cells – In Australia, Professor Stuart Wenham uncovered a method to control hydrogen atoms in order to compensate for the deficiencies in silicon. This material is one of the more expensive aspects of developing solar cells, and higher-quality silicon is more productive in collecting light in photovoltaic technology. The mechanism is capable of improving poor-quality silicon to become as efficient as, if not more so than, its purer counterpart. This would greatly decrease the cost of solar panels, since high-quality silicon would no longer be a valued commodity.
2. Wastewater Capabilities – Cambrian Innovation installed its first EcoVolt device, which allows electrically charged microbes to treat wastewater while creating power. The company is currently working on similar devices to be used in various fields such as agricultural and military capacities. The company is also developing plans to convert carbon dioxide into fuels for use on the planet as well as in space exploration. Bioelectric treatment units are capable of creating biogas at a rate of 100 cubic feet per minute as the electricity interacts with the microbes while carbon dioxide is consumed simultaneously, creating a purer fuel source.
3. Walmart's Commitments – At the 2014 Sustainability Product Expo, Procter & Gamble, Walmart's product partner, announced it was championing two methods for improving sustainability in products. The first is a method to help consumers recycle more products rather than tossing them in the trash for the landfill. The second is a more intricate plan that can decrease the amount of water used in product development. By decreasing the water used in manufacturing liquid laundry detergents, the company will save more than 45 million gallons of water in the United States. This could also decrease plastic bottle use, as formulas will be more concentrated, increasing the number of loads that can be washed using a single unit.
4. Earthwards Recognition – Procter & Gamble isn't the only organization concerned with innovative sustainability. Johnson & Johnson awards employees directly participating in sustainable improvements made to the various products under the brand name. This includes everything from Band-Aids to Sterilmed Trocar devices that are recycled and reused.
5. Solar Roadways – Developing solar roadways has been talked about for quite some time, and the possibilities are great for more than just clean energy. Scott and Julie Brusaw recently launched an Indiegogo.com project to begin developing this idea and have accumulated two million dollars in pledges. If the couple can begin laying the foundation for an ice-free, solar-collecting roadway, they could become pioneers in a new way of developing streets across the United States.
If the project is successful, it could cut greenhouse gases by 75 percent. (Editor's note: While the criticism of this project has lightened up over the years, there are a few detractors still out there.)
Technology continues to advance at an incredible rate. As each improvement has the capacity to benefit devices and techniques in other fields, humanity comes ever closer to realizing a healthier existence. With the planet's climate changing at an alarming rate, reducing the impact humanity has had on the planet is more important than ever. What innovative plan will be set into motion next year?
Ken Myers is a father of three and passionate about great childcare. He's always looking for ways to help families find the support they need to live fuller, richer lives. Find out more about expert childcare by checking out @go_nannies on Twitter. | <urn:uuid:bd93b588-1cba-4db9-8e2b-6826d3211800> | 2.96875 | 974 | Listicle | Science & Tech. | 34.651403 | 95,552,999 |
In addition to providing a barrier to the movements of molecules between the intracellular and extracellular fluids, plasma membranes are involved in interactions between cells to form tissues. Many cells are physically joined at discrete locations along their membranes by specialized types of junctions known as desmosomes, tight junctions, and gap junctions. Desmosomes consist of a region between two adjacent cells where the apposed plasma membranes are separated by about 20 nm and have a dense accumulation of protein at the cytoplasmic surface of each membrane and in the space between the two membranes. Protein fibers extend from the cytoplasmic surface of desmosomes into the cell and are linked to other desmosomes on the opposite side of the cell. Desmosomes hold adjacent cells firmly together in areas that are subject to considerable stretching, such as in the skin. The specialized area of the membrane in the region of a desmosome is usually disk-shaped, and these membrane junctions could be likened to rivets or spot-welds.
A second type of membrane junction, the tight junction, is formed when the extracellular surfaces of two adjacent plasma membranes are joined together so that there is no extracellular space between them. Unlike the desmosome, which is limited to a disk-shaped area of the membrane, the tight junction occurs in a band around the entire circumference of the cell. Most epithelial cells are joined by tight junctions. For example, epithelial cells cover the inner surface of the intestinal tract, where they come in contact with the digestion products in the cavity (lumen) of the tract. During absorption, the products of digestion move across the epithelium and enter the blood. This transfer could theoretically take place by movement either through the extracellular space between the epithelial cells or through the epithelial cells themselves. | <urn:uuid:ddb80009-3b78-4b25-8694-5389b27234d7> | 3.96875 | 377 | Knowledge Article | Science & Tech. | 23.053144 | 95,553,003 |
The unsuitability of html-based colour charts for estimating animal colours – a comment on Berggren and Merilä (2004)
© Stevens and Cuthill; licensee BioMed Central Ltd. 2005
Received: 28 April 2005
Accepted: 30 August 2005
Published: 30 August 2005
A variety of techniques are used to study the colours of animal signals, including the use of visual matching to colour charts. This paper aims to highlight why they are generally an unsatisfactory tool for the measurement and classification of animal colours and why colour codes based on HTML (really RGB) standards, as advocated in a recent paper, are particularly inappropriate. There are many theoretical arguments against the use of colour charts, not least that human colour vision differs markedly from that of most other animals. However, the focus of this paper is the concern that, even when applied to humans, there is no simple 1:1 mapping from an RGB colour space to the perceived colours in a chart (the results are both printer- and illumination-dependent). We support our criticisms with data from colour matching experiments with humans, involving self-made, printed colour charts.
Colour matching experiments with printed charts involving 11 subjects showed that the choices made by individuals were significantly different between charts that had exactly the same RGB values, but were produced from different printers. Furthermore, individual matches tended to vary under different lighting conditions. Spectrophotometry of the colour charts showed that the reflectance spectra of the charts varied greatly between printers and that equal steps in RGB space were often far from equal in terms of reflectance on the printed charts.
In addition to outlining theoretical criticisms of the use of colour charts, our empirical results show that: individuals vary in their perception of colours, that different printers produce strikingly different results when reproducing what should be the same chart, and that the characteristics of the light irradiating the surface do affect colour perception. Therefore, we urge great caution in the use of colour charts to study animal colour signals. They should be used only as a last resort and in full knowledge of their limitations, with specially produced charts made to high industry standards.
The use of colour charts to estimate or categorise the colours of animal signals is a technique utilised in numerous studies [e.g. [1–5]]. In particular, a recent proposal [1] argues that researchers could produce custom-made charts, designed from the HTML colour code (the standard for colour representation on the World Wide Web). In this paper we present theory and data showing why the use of such charts to estimate colour is seriously flawed and should be undertaken only as a last resort.
'Colour is in the eye of the beholder'
A major problem with using colour charts is one frequently stressed: that human vision differs markedly from that of most animals other than Old World primates [6–11]. Signals are often aimed at specific animals, and it has long been realised that there is an association between the evolution of a particular signal and the receivers' visual system, and so signals should be considered from the perspective of the signal receivers' sensory experience [6, 13]. The description of a certain colour is something specific to a particular visual system, and this perception may differ greatly between animals [6–8, 14].
Colour perception is the product of reflectance, the irradiant light characteristics, the transmission characteristics of the medium, and the characteristics of the animal's visual system . Most of the hues an animal can perceive can be produced by mixing wavelengths of light (called primaries) in different proportions, and so: (a) different light spectra can produce a sensation of the same hue if the output of the animal's photoreceptor types is the same (metamerism), (b) the same spectra will produce different hues to animals that differ in the absorption spectra of their photoreceptors, and (c) the dimensionality of colour space is determined by the number of interacting receptor types [8, 15, 16]. For instance, birds typically have four single cone types, compared to three in humans, and unlike humans, most birds are capable of perceiving light into the ultraviolet spectrum [[17–24], reviewed by [25–28]]. This means that birds should be capable of perceiving a wider range of hues, and will differ from humans in the magnitude of perceived colour differences, even for those spectra visible to humans.
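The receptor outputs referred to here are often summarised as quantum catches: for each receptor class, the integral over wavelength of the stimulus reflectance, the illuminant and that receptor's spectral sensitivity. Two spectra are metamers for a given animal if they produce identical sets of catches. The following numerical illustration uses crude Gaussian stand-ins for real sensitivity curves and only approximate peak wavelengths, purely to show the calculation rather than to model any particular species.

```python
import numpy as np

wl = np.arange(300, 701, 1.0)                       # wavelength grid in nm

def gaussian_sensitivity(peak, width=40.0):
    """Crude stand-in for a photoreceptor sensitivity curve (not a real pigment template)."""
    return np.exp(-0.5 * ((wl - peak) / width) ** 2)

def quantum_catch(reflectance, illuminant, sensitivity):
    """Receptor quantum catch: integral of R(l) * I(l) * S(l) over wavelength."""
    return np.trapz(reflectance * illuminant * sensitivity, wl)

illuminant = np.ones_like(wl)                       # flat 'daylight' (an assumption)
reflectance = 0.2 + 0.6 / (1 + np.exp(-(wl - 550) / 20))   # a reddish surface

# A trichromat (human-like) versus a tetrachromat with a UV receptor (bird-like).
human_peaks = [420, 530, 560]                       # approximate peak sensitivities, nm
bird_peaks = [370, 445, 508, 565]
for label, peaks in [("human-like", human_peaks), ("bird-like", bird_peaks)]:
    catches = [quantum_catch(reflectance, illuminant, gaussian_sensitivity(p)) for p in peaks]
    print(label, np.round(catches, 1))
```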
Whilst avian vision has been described to illustrate why human colour-matching can never quantify the colours perceived by other animals, it is equally important to realise that colour matching to charts of the type proposed in [1] is not even adequate for human perception.
The inadequacy of colour charts to classify colour
To understand why using certain colour charts to study visual signals is often inadequate, it is helpful to briefly consider some of the main aspects of the colour spaces from which charts are created.
One way to represent colour is to agree on a set of primaries and describe a colour by the values of the weights of those primaries used by subjects to match a test light (additive colour mixing) [29, 30]. The CIE (Commission Internationale de l'Éclairage) XYZ colour space is one such specification of colour stimuli, produced by additive mixing of three imaginary primaries.
It is usually important to know if a colour difference is perceivable, determined by experimentally modifying colours in small degrees to determine threshold perceptible differences. When plotted in colour space, these differences form the boundary of a region of colours that are indistinguishable from other colours, with ellipses fitted to the boundaries. In colour spaces such as CIE XYZ, the shape and size of the ellipses depend strongly on the location of the difference in the colour space, meaning that the magnitude of the difference in CIE XYZ space is a poor indicator of the real perceived difference between colours [29–31]. Therefore, what is often preferable is a uniform colour space where the distance in coordinate space is comparable to the perceived difference in colour by an observer.
Currently, the most popular uniform colour space is the CIELAB space, obtained by a non-linear transformation of the XYZ space. This colour space is uniform, meaning that equations allow the Euclidean distance between two points in the CIELAB space to predict more accurately the observed difference in colour, although comparisons of 'colour constancy' in CIELAB to empirically measured colour constancy are still often quite poor.
It is a misconception that RGB (red, green, blue) colour space is an accurate method to classify colours as seen by humans; it is not (and certainly is not for non-human animals). Indeed, it is not generally associated with studies aiming to match colours to charts. RGB space is most readily associated with colour reproduction on computers, and with its associated CMY colour space for printing. RGB values are those used to represent digital photographs on a colour monitor, with values of R, G, and B usually ranging from 0 to 255 (an 8-bit scale). HTML colour-coding (as advocated by [1]) is simply a concise encoding of the RGB colour format.
There are several criticisms of using RGB colour codes to specify colours from charts. Firstly, RGB space is non-uniform, and therefore differences in RGB values do not equate to equal differences in colour perception. Secondly, unlike the CIELAB space, RGB ratios are not capable of producing all the possible perceptual combinations of colours to humans (let alone to other species). For instance, values of L* = 100, a* = -80 and b* = -2 changing continuously to values of L* = 100, a* = -80 and b* = -59 in CIELAB space, are all represented by the same RGB values (R = 0, G = 255, B = 255), and therefore, differences in the colours produced by CIELAB space over this range simply cannot be reproduced on a computer screen or on a printed chart. Thirdly, and perhaps most importantly, unlike the various CIE colour spaces, the colours generated from RGB colour co-ordinates are device-dependent . That is, a photographic image of a given colour may be represented by different RGB values in different cameras, and a given RGB coordinate in a camera or computer may translate to different colours on different printers.
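The non-uniformity of RGB can be shown numerically: if one assumes the sRGB standard (itself an assumption that an uncalibrated monitor or printer need not satisfy, which is precisely the device-dependence problem) and converts equal RGB steps to CIELAB, the resulting colour differences ΔE*ab are far from equal.

```python
import numpy as np

M_SRGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                          [0.2126, 0.7152, 0.0722],
                          [0.0193, 0.1192, 0.9505]])
WHITE_D65 = np.array([0.95047, 1.0, 1.08883])

def srgb_to_lab(rgb255):
    """Convert an 8-bit RGB triplet to CIELAB, assuming the sRGB standard and a D65 white."""
    c = np.asarray(rgb255) / 255.0
    lin = np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)  # undo sRGB gamma
    xyz = M_SRGB_TO_XYZ @ lin / WHITE_D65
    f = np.where(xyz > (6 / 29) ** 3, np.cbrt(xyz), xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    return np.array([116 * f[1] - 16, 500 * (f[0] - f[1]), 200 * (f[1] - f[2])])

def delta_e(lab1, lab2):
    """CIE76 colour difference: Euclidean distance in CIELAB."""
    return float(np.linalg.norm(lab1 - lab2))

# Equal 25-unit steps along the red axis, as on the experimental charts.
reds = [(r, 0, 0) for r in range(0, 251, 25)]
labs = [srgb_to_lab(rgb) for rgb in reds]
for i in range(len(reds) - 1):
    d = delta_e(labs[i], labs[i + 1])
    print(f"R {reds[i][0]:3d} -> {reds[i + 1][0]:3d}: dE*ab = {d:5.1f}")
```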
The experiments in this paper were designed to demonstrate that the faults with Berggren & Merilä's approach are not simply theoretical abstractions. The experiments illustrate the contentions that individual human observers vary in their assessment and ranking of colours, that the surface irradiance characteristics may affect perceptions of colour and, finally, that different printers will produce colour sheets with significantly different reflectance values, even if the RGB/HTML values of the colour charts on a computer are identical (discounting variation arising from different toner levels which are, nonetheless, an important consideration in practice). These are not new arguments, and more thorough experiments are routine in colour science, but rather are presented here to illustrate the pitfalls for those studying animal colouration.
Colour Matching Experiments
These results show that, for both the blue-green and red chart experiments, the colour matching choices that subjects made were very different for charts produced from different printers. There was also a suggestion of a smaller effect of light conditions on colour matching. The variation in colour-matching judgements was large: for the red experiment, the same target colour was matched to chart elements with R-pixel values ranging from 125 to 250 (C.V. = 16%); for the blue experiment, the best match ranged from B/G-pixel values of 75 to 250 (C.V. = 25%). Although not usually interpretable in repeated-measures ANOVA, one could argue that the between-subject variation is of direct interest in this application, because in many applications of colour-matching, only one or a few researchers would be responsible. When treated as a fixed effect, 'subject' was significant for both colour-matching tasks (red: F(10,210) = 7.70, P < 10-9; blue: F(10,210) = 2.55, P = 0.006), although the effect sizes were not as substantial as that of printer type (partial eta2 = 0.268 and 0.108, respectively).
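For orientation, the coefficient of variation quoted above is simply the standard deviation of the matched pixel values divided by their mean; a summary of that kind could be computed as follows, where the records are invented examples and not the study's data.

```python
import pandas as pd

# Illustrative records only: one row per (subject, printer, light) colour match.
records = pd.DataFrame({
    "subject": [1, 1, 2, 2, 3, 3, 4, 4],
    "printer": ["laser_A", "inkjet_B"] * 4,
    "light":   ["daylight", "incandescent"] * 4,
    "matched_R": [150, 200, 175, 225, 125, 250, 150, 225],  # 8-bit pixel value of best match
})

cv = records["matched_R"].std(ddof=1) / records["matched_R"].mean()
print(f"coefficient of variation: {100 * cv:.0f}%")

# Mean matched value per printer, analogous to the printer effect reported above.
print(records.groupby("printer")["matched_R"].mean())
```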
The experiments detailed in this study show three important results with respect to the use of printed colour charts to identify the colours of animal signals. Firstly, people vary with respect to the choices they make when asked to match one colour to the perceived closest match from a set of colours on a chart. This means that there may be differences in colour matches made by different individuals. These differences may, to some extent, be reduced by using high-quality charts with a greater range of colour matching options.
Secondly, there were very large and significant differences in the matching choices made by individuals between charts produced from different printers. Different inks have significantly different spectral properties. This is a critical problem of 'self-made' charts, such as those made from RGB colour space. The aim of matching a colour signal to a specific section of a colour chart is that the chromatic content of the signal can be recorded, such as in terms of an RGB value. However, when charts with identical RGB values are produced from different printers these do not have the same properties, making comparisons to an RGB value irrelevant. Even if charts are printed from the same printer, with the same cartridge model and paper type, the exact values of the printed charts will still vary depending upon the toner levels (Cuthill & Stevens unpublished data). This means that complex printer calibrations are needed to ensure that the reproduction of more than one colour chart is accurate and invariable with respect to the chart properties. Furthermore, there still remains the problem that a linear increase in a colour value on a chart may not be linear in terms of the measured reflectance spectra and the perceived difference in colour, for many, if not all colour spaces.
Thirdly, and perhaps least expected, there was the suggestion that colour matching results were significantly affected by the irradiant light conditions (non-significant for red charts, at P = 0.06, but in a significant interaction with printer type for the blue chart). For the red charts, the largest mean difference for judgements measured under different light sources was a difference of 14 pixel values on an 8-bit scale. For the blue-green charts, the effect of light varied between charts from different printers: for one printer the average difference in pixel values of best matches was 29, for another it was only 5. That the effects of lighting were modest was expected because humans possess colour constancy, where the visual system is capable of maintaining a constant appearance of colour quasi-independently of changes in the irradiant light. Presumably many other animals also show colour constancy [33–37]. However, the adapting mechanisms are not perfect [38–40] and changes in the irradiant light do have some effect on colour constancy. Different environments can vary significantly in their irradiant light characteristics, and thus influence colour appearance.
Whilst in a natural situation, the perception of colours under different conditions will be about the same (i.e. a light red signal will always look light red), in terms of quantitative, or even qualitative scientific experiments, changes in perception may significantly impact upon results, sometimes by large degrees. Results in the field will be affected by the time of day, the weather, and the natural environment [6, 41], and will also differ under various laboratory lighting conditions. Finally, charts based on RGB colour space, even if used in studies of human vision, are incapable of reproducing the full range of colours perceptible to humans. The use of charts based on CIE data to estimate colours is better than the use of charts based on RGB colour space, but is still unfortunately based upon human subjective assessment.
The spectrophotometry analysis further supports the argument that printed charts could produce seriously inaccurate colour matching results. Firstly, different printers vary significantly in the reflectance properties of the colour charts that they produce. This means that comparisons between different printers will be unreliable, even discounting the effects of toner level. More serious, however, is the result that some printers do not show even gradations in reflectance between colour blocks with an even spacing in RGB colour space. Therefore, comparing the colours of animal signals between individuals (for example) via charts to obtain R, G or B values could be seriously flawed – made worse when considering the non-linearity of RGB space in terms of visual perception.
Finally, as stated above, there are crucial differences between the visual perceptions of humans and non-human animals. The perception of a given colour signal to a human may be markedly different from that of the animal towards which the signal is directed. The fact that colour charts have numerous errors associated with them, especially self-made charts, in terms of human judgement, only emphasises the inadequacy of the method when used with respect to non-human animals. The fact remains that other animals' perceptions of a signal may drastically differ from our own.
The inadequacy of colour charts as a means to estimate the colour of animal signals is not a new topic, yet too often researchers outside of the technical colour sciences have adopted this procedure, despite the serious implications of doing so. Theoretically, the use of colour charts is poor practice when considering signals aimed at non-human animals, since these will often have significantly different visual perceptions. Also, some colour spaces on which colour charts are based are not linear in the perceptual differences between one point in space and another, even for humans. This is the case for the RGB/HTML colour space used to create charts by Berggren & Merilä . Even charts that are uniform in colour space are far from perfect and an active area of research in the human vision sciences. Our results cast further doubt on the use of charts to estimate colour, in that individual people vary in their colour matching choices, and that the light environment also can affect colour perception, albeit to a far smaller degree than printer variation. Matching between different people will be variable and error prone and, even if the experiments are all performed by the same individual, their perceptions can also change based on the environmental conditions. In the case of 'self-produced' printed colour charts, different charts vary in their properties, and the same printers may not produce equal steps between colour blocks on a chart, even if that is the case on a computer.
In some instances, access to expensive equipment such as spectrophotometers and calibrated digital cameras may be difficult. In this case, the use of a colour chart may be the only option, and is certainly better than an abstract description of an observed colour. However, we urge caution with the use of colour charts, and advocate that they are used only as a last resort. We would also not recommend that anyone produces self-made charts based on empirically or perceptually non-uniform colour spaces, and are extremely dubious of results obtained in this way. If colour charts are to be used, we recommend the use of a well studied, perceptually uniform colour space, such as CIELAB, with colour matching experiments undertaken by the same individual in as carefully controlled conditions as is possible.
Methods
Whilst theoretical arguments indicate why the use of colour charts, in particular those based on RGB/HTML colour space, is a poor method to estimate the chromatic components of animal signals, we wished also to provide quantitative evidence. Our experiments aimed to show that there may be at least three problems with using humans to match the colour of an object to a set of charts, even discounting differences between human vision and that of other species. Firstly, perception of colour may vary between individuals. Secondly, the exact reproduction of a colour chart will vary depending on the printer from which the charts are produced (not to mention the toner levels and paper type). Thirdly, the light environment may also affect colour matching results.
Colour Matching Experiments
We designed colour charts in Jasc Paint Shop Pro® based on RGB values, consisting of coloured rectangles 68 mm by 20 mm in size, with 12 different rectangles per sheet, inserted into a Microsoft Word® file for easy printing. Colour charts were of two types, red or blue-green, with RGB values ranging from 0,0,0 to 250,0,0 for the red charts, and 0,0,0 to 0,250,250 for the blue-green charts. Copies of each chart type were printed from eight different printer types. The quality of the printers ranged from relatively inexpensive office ink jet types, to high quality laser printers used by the University of Bristol Print Services (HP Colour LaserJet 2500, HP InkJet Combi, HP DeskJet 1220c, HP DeskJet 6127, Epson Stylus Photo 915, Epson Stylus Colour 760, Cannon LaserJet 2100, Cannon LaserJet 5100).
The experiment was a repeated-measures design, with each of the 11 subjects (normally sighted according to self-report) asked to match a colour sample to what they perceived to be the most similar colour on each of the printed colour charts, under each of four different lighting conditions. The order of testing was randomised across subjects, with the authors blind to which colour match was optimal, and the subjects blind to the experimental aims. For each chart and lighting condition, a different colour sample was selected at random from an envelope. In fact, to simplify the subsequent analysis, within the two categories of colour stimuli (red and blue-green), all the samples to be matched were nominally identical, but subjects were unaware of this. The samples were obtained from paint charts (Dulux, Slough, UK, 'Spring 04 colour card') and their similarity verified using spectrophotometry (see below). The pretence of random selection from an apparently large set of samples was introduced to discount the possibility that subjects would recognise the same card, and bias their choices to the same match with each choice made.
Colour matching experiments were performed under four different light conditions: a 150 Watt Xenon arc lamp (Light Support, Berkshire, UK), a 20 Watt desktop incandescent lamp (Philips, PL-Electronic-T), general laboratory fluorescent lighting (Sylvania T5 FHE, Raunheim, Germany), and outside skylight. Incandescent lights contain filaments heated to high temperatures, and typically emit light richer in longer wavelengths (giving a reddish tinge). The xenon arc lamp tends to have an output richer in short wavelengths. The outside conditions used to test people were under sunshine and cloud, but avoiding direct sunlight, and would tend to be white-ish. The order in which each colour chart (from the different printers) was presented, and the order of light conditions under which the charts were viewed, were randomised for each subject so that the possibility of any biases developing towards specific colour patches was controlled for.
Spectrophotometry of Colour Charts
We aimed to quantify the properties of each colour rectangle on each of the colour charts from different printers via spectrophotometry. These results would also show how much variation exists between the different printed charts, which, in theory, should all show the same reflectance spectra. Reflectance measurements of each colour block on each chart were undertaken with a Zeiss MCS 230 diode array photometer, with illumination by a Zeiss CLX 111 Xenon lamp (Carl Zeiss Group, Jena) held at 45° to normal to reduce specular reflection. Measurements were taken normal to the surface, from a 2 mm area, recorded in 1-nm intervals from 300 to 700 nm, and expressed relative to a Spectralon 99% white reflectance standard (Labsphere, Congleton). White standard measurements were taken between measurements of each colour chart, to avoid error associated with drift in the light source and sensor. In all, 10 measurements, in different locations, were taken of each of the 12 colour blocks, on the 8 red and the 8 blue-green charts. In addition, 10 measurements were taken from a random sample of 8 red and 8 blue-green paint cards, giving a total of 2080 reflectance spectra measurements. For each colour block measured, or for the paint cards, the repeated samples were used to produce average spectra.
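Expressing each measurement relative to the white standard and averaging the ten repeats amounts to the following calculation; the wavelength grid matches the description above, while the raw counts and the implicit dark correction are placeholders rather than actual instrument output.

```python
import numpy as np

wavelengths = np.arange(300, 701)           # 1-nm grid, 300-700 nm, as in the measurements

def reflectance(sample_counts, white_counts, dark_counts=0.0):
    """Per-wavelength reflectance relative to a (near-)perfect white standard."""
    return (sample_counts - dark_counts) / (white_counts - dark_counts)

# Ten repeat measurements of one colour block (synthetic raw counts).
rng = np.random.default_rng(3)
white = 4000.0 + rng.normal(0, 20, (10, wavelengths.size))
red_block = 4000.0 * (0.05 + 0.6 / (1 + np.exp(-(wavelengths - 600) / 15)))  # red-ish block
samples = red_block + rng.normal(0, 20, (10, wavelengths.size))

mean_spectrum = reflectance(samples, white).mean(axis=0)   # average over the 10 repeats
print(f"mean reflectance at 650 nm: {mean_spectrum[wavelengths == 650][0]:.2f}")
```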
Acknowledgements
MS was supported by a Biotechnology & Biological Sciences Research Council (BBSRC, UK) studentship; further support was provided by BBSRC grants to ICC, Tom Troscianko and Julian Partridge. The comments of three anonymous referees greatly improved the clarity of our paper.
References
- Berggren Å, Merilä J: WWW design code – a new tool for colour estimation in animal studies. Frontiers in Zool. 2004, 1: 2. DOI: 10.1186/1742-9994-1-2.
- Dawkins MS, Guilford T: Design of an intention signal in the bluehead wrasse (Thalassoma bifasciatum). Proc R Soc Lond B. 1994, 257: 123-128.
- Heinen JT: The significance of color-change in newly metamorphosed American toads (Bufo americanus americanus). J Herpetol. 1994, 28: 87-93.
- Ellis T, Howell BR, Hughes RN: The cryptic responses of hatchery-reared sole to a natural sand substratum. J Fish Biol. 1997, 51: 389-401. DOI: 10.1111/j.1095-8649.1997.tb01674.x.
- Bortolotti GR, Fernie KJ, Smits JE: Carotenoid concentration and coloration of American kestrels (Falco sparverius) disrupted by experimental exposure to PCBs. Funct Ecol. 2003, 17: 651-657. DOI: 10.1046/j.1365-2435.2003.00778.x.
- Endler JA: On the measurement and classification of colour in studies of animal colour patterns. Biol J Linn Soc. 1990, 41: 315-352.
- Goldsmith TH: Optimisation, constraint, and history in the evolution of eyes. Q Rev Biol. 1990, 65: 281-322. DOI: 10.1086/416840.
- Bennett ATD, Cuthill IC, Norris KJ: Sexual selection and the mismeasure of color. Am Nat. 1994, 144: 848-860. DOI: 10.1086/285711.
- Jacobs GH: Ultraviolet vision in vertebrates. Am Zool. 1992, 32: 544-554.
- Jacobs GH: The distribution and nature of colour vision among the mammals. Biol Rev Camb Philos Soc. 1993, 68: 413-471.
- Tovée MJ: Ultra-violet photoreceptors in the animal kingdom: their distribution and function. TREE. 1995, 10: 455-459.
- Cott HB: Adaptive Colouration in Animals. 1940, London: Methuen & Co. Ltd.
- Guilford T, Dawkins MS: Receiver psychology and the evolution of animal signals. Anim Behav. 1991, 42: 1-14.
- Cuthill IC, Bennett ATD, Partridge JC, Maier EJ: Plumage reflectance and the objective assessment of avian sexual dichromatism. Am Nat. 1999, 160: 183-200. DOI: 10.1086/303160.
- Kelber A, Vorobyev M, Osorio D: Animal colour vision – behavioural tests and physiological concepts. Biol Rev Camb Philos Soc. 2003, 78: 81-118. DOI: 10.1017/S1464793102005985.
- Thompson E, Palacios A, Varela FJ: Ways of coloring: comparative color vision as a case study for cognitive science. Behav Brain Sci. 1992, 15: 1-74.
- Huth HH, Burkhardt D: Der Spektrale Sehbereich eines Violetta Kolibris. Naturwissenschaften. 1972, 59: 650. DOI: 10.1007/BF00609559.
- Wright AA: The influence of ultraviolet radiation on the pigeon's color discrimination. J Exp Anal Behav. 1972, 17: 325-337.
- Goldsmith TH: Hummingbirds see near ultraviolet light. Science. 1980, 207: 786-788.
- Chen D, Collins JS, Goldsmith TH: The ultraviolet receptor of bird retinas. Science. 1984, 225: 337-340.
- Chen D, Collins JS, Goldsmith TH: The ultraviolet receptor of bird retinas. Science. 1984, 225: 337-340.View ArticlePubMedGoogle Scholar
- Maier EJ, Bowmaker JK: Colour vision in the passeriform bird, Leiothrix lutea : correlation of visual pigment absorbency and oil droplet transmission with spectral sensitivity. J Comparative Physiol A. 1993, 172: 295-301. 10.1007/BF00216611.View ArticleGoogle Scholar
- Hart NS: The visual ecology of avian photoreceptors. Prog Retin Eye Res. 2001, 20: 675-703. 10.1016/S1350-9462(01)00009-X.View ArticlePubMedGoogle Scholar
- Odeen A, Hastad O: Complex distribution of avian color vision systems revealed by sequencing the SWS1 opsin from total DNA. Mol Biol Evol. 2003, 20: 855-861. 10.1093/molbev/msg108.View ArticlePubMedGoogle Scholar
- Cuthill IC, Partridge JC, Bennett ATD, Church SC, Hart NS, Hunt S: Ultraviolet vision in birds. Adv Stud Behav. 2000, 29: 159-214.View ArticleGoogle Scholar
- Honkavaara J, Koivula M, Korpimaki E, Siitari H, Viitala J: Ultraviolet vision and foraging in terrestrial vertebrates. Oikos. 2002, 98: 505-511. 10.1034/j.1600-0706.2002.980315.x.View ArticleGoogle Scholar
- Shi YS, Yokoyama S: Molecular analysis of the evolutionary significance of ultraviolet vision in vertebrates. PNAS. 2003, 100: 8308-8313. 10.1073/pnas.1532535100.PubMed CentralView ArticlePubMedGoogle Scholar
- Church SC, Bennett ATD, Cuthill IC, Partridge JC: Avian ultraviolet vision and its implications for insect protective colouration. Insect and Bird Interactions. Edited by: van Emden H, Rothschild M. 2004, Andover (Hants, UK): Intercept Ltd, 165-184.Google Scholar
- Wyszecki IG, Stiles WS: Color science: Concepts and Methods, Quantitative Data and Formulae. 1982, New York: John Wiley & Sons, 2Google Scholar
- Forsyth DA, Ponce J: Computer Vision: A Modern Approach. Pearson Educational International. 2003Google Scholar
- Westland S, Ripamonti C: Computational Colour Science Using MATLAB. 2004, John Wiley & Sons. LtdView ArticleGoogle Scholar
- Munsell Color Company: Munsell Book of Color. Glossy Finish Collection. 1976, Munsell/Macbeth/Kollmorgen Corporation, Baltimore, 2:Google Scholar
- Dyer AG: Broad spectral sensitivities in the honeybee's photoreceptors limit colour constancy. J Comp Physiol A. 1999, 185: 445-453. 10.1007/s003590050405.View ArticleGoogle Scholar
- Dorr S, Neumeyer C: Color constancy in goldfish: the limits. J Comp Physiol A. 2000, 186: 885-896. 10.1007/s003590000141.View ArticlePubMedGoogle Scholar
- Vorobyev M, Marshall J, Osorio D, de Ibarra NH, Menzel R: Colourful objects through animal eyes. Color Res Appl. 2001, 26: S214-S217. 10.1002/1520-6378(2001)26:1+<::AID-COL45>3.0.CO;2-A.View ArticleGoogle Scholar
- Neumeyer C, Dorr S, Fritsch J, Kardelky C: Colour constancy in goldfish and man: influence of surround side and lightness. Perception. 2002, 31: 171-187. 10.1068/p05sp.View ArticlePubMedGoogle Scholar
- Lotto RB, Chittka L: Seeing the light: Illumination as a contextual cue to color choice behavior in bumblebees. PNAS. 2005, 102: 3852-3856. 10.1073/pnas.0500681102.PubMed CentralView ArticlePubMedGoogle Scholar
- Arend LE, Reeves A: Simultaneous color constancy. J Opt Soc Am. 1986, 3: 1743-1751.View ArticleGoogle Scholar
- Troost J, de Weert C: Naming versus matching in color constancy. Percept and Psychophys. 1991, 50: 591-602.View ArticleGoogle Scholar
- Lucassen M, Walraven J: Quantifying color constancy – evidence for non-linear processing of cone-specific contrast. Vis Res. 1993, 33: 739-757. 10.1016/0042-6989(93)90194-2.View ArticlePubMedGoogle Scholar
- Endler JA: The color of light in forests and its implications. Ecol Monogr. 1993, 63: 1-27.View ArticleGoogle Scholar
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. | <urn:uuid:8195edbb-7673-4bcf-86cb-14cd0ea0e5d6> | 2.609375 | 6,759 | Truncated | Science & Tech. | 48.166094 | 95,553,014 |
Introduction to Planet Earth – Chapter One notes
The word geology is derived from the Greek words geo and logos and means the "study of the Earth."
Geology: the scientific study of the planet Earth
Geologist: is a professional who studies the planet Earth
The movement of continents on the Earth’s surface was suggested in the twentieth century
by the German meteorologist Alfred Wegener, who wrote on continental drift in 1912.
Wegener recognized that today’s continents had previously been clustered together in a
large land mass, but had subsequently moved apart. He called the land mass Pangea.
Pangea: the supercontinent that existed during the Palaeozoic and Mesozoic eras, about 200 million years ago, before the component continents separated into their present positions.
Wegener’s idea (with great persistence and great amounts of time dedicated to collecting
lots of evidence) ultimately allowed development of the plate tectonics theory.
Canadian geophysicist J. Tuzo Wilson was responsible for bringing together several of the
key elements in what we now know as plate tectonics theory.
Geology involves vast amounts of time, often referred to as deep time.
Volcanic eruptions and great landslides happen fairly quickly; however, they involve the release of previously stored energy. Most geological processes are slow but relentless, reflecting the pace at which the Earth's processes work.
The Earth is estimated to be at least 4.55 billion years old (4,550,000,000 years). Fossils tell us that complex forms of life have existed on Earth for about 545 million years, and reptiles for about 230 million years. Dinosaurs evolved from reptiles and became extinct around 65 million years ago. Humans have been on the Earth for about 3 million years.
Very slow geological processes are impossible to duplicate. A geologist who wants to study a
certain process cannot repeat sluggish chemical reactions that take millions of years to
occur in nature.
Exploration geologists: geoscientists who work for exploration companies looking for
gold, silver, or diamonds.
As global population becomes increasingly concentrated in larger urban "super cities," the risk to public safety from hazards such as volcanism, earthquakes, and severe storms is increasing.
Learning Common Lisp
Learning Common Lisp is a permanent condition, affecting even the most experienced hackers. But for those who are just beginning, the following resources will be helpful.
- Start with the Common Lisp Wiki (or CLiki) pages on Getting Started and Education.
- The Wikipedia page on Common Lisp has an overview, code examples, and links. Browsing through the examples there will familiarize you with the syntax, which will help a lot in getting started.
- Here is an interesting Common Lisp feature list.
- There are also many free lisp books and other educational resources. A good starting point is Practical Common Lisp.
- If you have general Common Lisp questions, you can ask comp.lang.lisp on Usenet, LispForum, or #lisp on irc.freenode.org.
- User:Arbscht has a FAQ for lisp newbies.
- Casting SPELs in Lisp comics. | <urn:uuid:598e9ca0-5bc4-4139-a937-06df25ab4426> | 2.75 | 239 | Tutorial | Software Dev. | 53.383452 | 95,553,031 |
This episode of DNews is brought to you by Full Sail University. This week the Japanese Aerospace Exploration Agency announced it wants to get solar power from space. I'm having flashbacks to disasters from SimCity 2000. Hello! I'm Trace. Thank you for watching DNews. Back in the 1960s, American aerospace engineer Peter Glaser proposed launching solar panels into space and beaming the power they collect back to the surface for our use. Since the late 60s the idea has been in a holding pattern, mainly because of the expense and worries about maintenance and equipment, but thanks to recent developments in solar panel tech, the Japanese space firm JAXA says they can finally try it. The plan is ambitious at minimum and the cost hasn't yet been calculated, BUT JAXA is determined, and they're not the only ones. The U.S. Naval Research Laboratory is also interested in space-based solar. The reason everyone's looking up is because that's where the sun sits! If we can put a satellite in orbit to collect the sun's rays BEFORE the atmosphere filters them out, and without the worry of a cloudy day, that would be rad. The JAXA Space Based Power System, or SBPS, will orbit 22,400 miles up and, if done to their specs, would completely replace a nuclear power plant by producing 1 gigawatt of electricity, enough to power half a million homes. Their plan uses a 10,000 metric ton system, which is, well, pretty ridiculous. The largest rocket ever launched was the Saturn V; it took our boys to the moon and could only lift about 130 metric tons. Once the power is gathered, a converter in space will convert the electricity into a microwave beam. Not like in your house: literally waves of energy that are on the micro scale. Microwaves can be converted from energy at 80 percent efficiency, which means there would be some loss, but it would be pretty damn efficient: about 48 percent of the power collected would reach consumers. Which doesn't sound great, but it really is. To make sure the array is getting sunlight 24 hours a day, JAXA plans to put mirrors on either side of the planet to reflect sunlight at the collector all the time. The Japanese are announcing their plan so far in advance in the hope other countries will gather and help realize the dream of solar from space. Once created, it could provide clean, unlimited energy anywhere on the planet, regardless of remote location. Here's the kicker: it would cost about a trillion dollars. Which sounds like a lot, but it's like 125 bucks per person on the planet in 2030. And that's WAY more than you're paying for your power bill. Not to mention oil wars and all the pollution, ecological damage, mining and drilling that goes with fossil fuels. We've got commenting areas, check them out and type your feelings on space-based solar so we can all talk about it! None of this would be possible without computers to run the system, and we need people to write the computer programs. Full Sail University in Florida offers courses to help train tech professionals by blending code and real-world experience. Students of Full Sail have hands-on access to technology on their first day, get a discounted laptop and all the software they'll need to earn degrees in software, mobile and web development. To find out more and support the show go to fullsail.edu/dnews! Thanks.
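A quick back-of-envelope check of the figures quoted in the transcript; the 2030 world population is an assumed round number, and none of these values are official JAXA estimates:

```python
# Back-of-envelope check of the quoted figures (inputs are assumptions from the transcript).
collected_power_gw = 1.0          # power produced by the orbiting array
end_to_end_efficiency = 0.48      # fraction said to reach consumers
delivered_gw = collected_power_gw * end_to_end_efficiency
print(f"Delivered power: {delivered_gw:.2f} GW")    # ~0.48 GW

total_cost_usd = 1e12             # "about a trillion dollars"
world_population_2030 = 8e9       # rough projection, assumed
cost_per_person = total_cost_usd / world_population_2030
print(f"Cost per person: ${cost_per_person:.0f}")   # ~$125
```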
Burning Stuff With 2000F Solar Power!!
In a previous project I found a free TV and turned it into a giant solar scorcher. This shoots out a deadly beam of sunlight that's hot enough to abuse food, melt metal and burn things you probably shouldn't. Today seemed like a good day to play with my Solar Scorcher. I positioned my frame and found the focal point, then added some concrete tiles as a base for my projects. Ok, I've got power, and I'll test it out with this piece of wood, and when the light makes contact I've got instant fire. The sunlight at this spot is around 2000 degrees Fahrenheit, enough to melt this spot of concrete into a glowing orange liquid. I'm curious to see what I can do with all this heat, so I've filled a glass bottle with water and I'll punch a hole in the cap. It's incredible to see that the instant I focus my lens on the bottle, it starts smoking. Just a few moments later this water is so hot it's boiling, and I'm a little nervous the bottle might blow. Yep, there it goes. The glass pieces are melting and that's cool, but now I want to try this on some food. I'll get some hot dogs, and when they hit the beam they really do get hot. This might be a little well done for my taste, and I'm still hungry, so let's try an egg. The egg is actually working very well. It's so reflective it doesn't burn as fast, and even my wife is interested. A little salt and pepper and it's tempting to try a bite. Ok, so I wasn't actually expecting to eat this, but it looks safe enough, and even my kids are anxious to try. Surprisingly, it's pretty good. Alright, let's see what else this will do. I'll try burning a penny, and, wow, it melted. How about a stack of pennies? Yep, they're nothing but liquid metal now, and I'm thinking that slag in the mixture must be what's left of the copper coating. It's only taking about 4 seconds to melt these, and melting metal is really great, but now I want to see something burst. I wonder what would happen to this egg? It's spewing some kind of debris and smoking like crazy. I hear some little pops and it's even forming some interesting growths. Huh, look at that. But no explosion. How about if I put a pop top on this bottle of water and let the pressure build up? Yeah, that's what I'm looking for. Let's do that again. The lid is back on, and pressure is building. Awesome! Alright, the sun is setting and I've readjusted my A-frame. I'm just wondering if this would ignite gasoline. It does. Hopefully it goes without saying that this is very dangerous and you shouldn't try this at home. Well, I'm convinced there's an insane amount of power behind these lenses. If you'd like to see where I got this one, take a look at my tutorial on how I hacked it out of an old TV. This one boiled water in less than a minute, welded a nickel to concrete, and instantly torched any piece of wood in its way. Well that was fun, but I'm still hungry, so I'll put everything away and go get some real home cooking. That's it for now. If you liked this project, perhaps you'll like some of my others. Check them out at thekingofrandom.
Solar Panel Installation Las Vegas Get $0 Down Solar Panel Install
SolarCity offers a cleaner, more affordable alternative to your utility bill. Anyone with sunshine on their roof can benefit from solar power. SolarCity will install our customers' solar systems for free. Our customers pay for solar power by the month, just like their utility bill, only lower. SolarCity makes it really easy and simple for our customers. We take care of everything from designing and permitting, to installing, even monitoring and maintaining the system. I like to garden quite a bit, and solar to me means that the Sun is powering not only my yard, but my home energy. And that is kind of comforting in a way, not to be relying on power companies. It's not just dollars, it's sense. It's good sense. The great thing about SolarCity is that we are a large company across the entire nation, but we have local operation centers that will be there. We customize every energy plan for our customers based on their family's energy needs and the architecture of their home. We understand that our customers want their home to feel good and look good. My wife and I have always discussed going solar, you know, primarily because of the environmental benefits, and I think when we learned that financially it was sort of a win as well, it sort of made it a no-brainer. We chose SolarCity for a number of reasons. The work was so thorough, it was clear that they were experts; they gave us information right away, like I saw on the laptop about how much power we generated. It was obvious that they would do a good job and they laid out the process really clearly. They basically handled everything, whether it was dealing with the utility, any approvals that needed to happen, any inspections, and then everything was installed and up and running. Part of our offering is to make sure that we're monitoring the solar system. Solar Guard is an online account where our customers can monitor the production of their solar system. SolarCity provides a performance guarantee that your system will produce as much power as we promised. It is one of those rare win-win-wins in life. You get to help the environment, you actually save money.
Solar Panels and Solar Panels Facts
In this tutorial you will discover the 3 secrets of how our customers are saving hundreds of pounds off their energy bills, often cutting their bills in half, with little or no risk at all, by using the most advanced and affordable energy systems available. Not only that, install these systems and the energy companies could be paying you! With the global crisis of depleting energy sources such as gas, electricity, oil and solid fuel, prices have increased on average over the last 20 years at around 12 percent each year. Add that to the UK's rising inflation and things start to get very expensive. We are all feeling the pinch. That is why renewable energy systems such as solar energy systems have become so popular. Hundreds of thousands of people, just like you, have already grabbed the opportunity of free energy and are starting to take back control, making considerable savings. Of course, the energy companies want you to believe that their way is the only way, but it's just not the case. Here at ProLite Energy Systems, we give you the power to fight back and tell you, for free, with no obligation, how you can save on your energy bills using our advanced energy systems. So what are the secrets that the energy companies don't want you to know? 1: Weather has got very little to do with how effective our systems are. Summer or winter, it makes very little difference; our systems work in all conditions, saving you money! 2: Energy companies will pay you for your energy! Yes, it's true. You can actually sell the energy back to the grid, and the energy companies will reward you handsomely just for having it installed. We can provide you with more information on this in our free downloadable pack. In fact, you can look below this tutorial right now and click the link. That will give you instant access. 3: And third, solar is the cleanest form of energy and you'll be reducing your carbon footprint dramatically, making your property more energy efficient. Now, be aware, some solar companies have allowed their customers to invest up to 6000 pounds on a traditional system that hasn't worked properly, if at all, from day one. We promise to always give you all the in-depth, tailored advice you need so that before you make any decision, you will know what your return on investment could be. To find out exactly what you can expect from installing one of our advanced solar energy systems, all you need to do is download our FREE information pack. Just look below the tutorial and click the link. Your information pack includes a detailed report on what solar energy system is right for you, a checksheet that will tell you exactly what to look for when considering your system, an Energy Performance Certificate voucher worth over 60, and a calculation chart to show you your expected monthly returns. So, to get access to this exclusive information, just look below this tutorial and click the link.
How to Install Solar Panels Wiring Fuses to Solar Panel DC Side
Alright, now we're ready to wire up our fuses and charge controller on the solar panel DC side. We've got our charge controller ready to go; we're just going to slip in our BX wire, which I've actually insulated one of the tips of, here, so we don't spark our machine. We'll feed that wire through a strain relief and put another little strain relief, here, on our BX. Tighten that up a bit. So you can see with that little spark there that we do have power, and we should use some kind of fusing in here just so, if something goes wrong, the fuse blows out instead of the electronics. So, we've got a standard car fuse; this one's rated 40 amps at 32 volts. Really, all we need is about a 15 amp fuse here, but this should give us some measure of protection. We're going to wire this through on the positive side. We've got a fuse holder that matches our fuse, and we'll just strip off a little bit of this insulation here. We'll form a little edge, a crimp, on this so it will loop under our device nicely. We've got screw terminals here, and we're just going to slip that positive wire right under one of our terminals. In a larger installation this would be done in a separate box, but for a small one, this should be fine. Now, we need to run the other side of the positive wire to our charge controller, and we need a little piece of wire for that. So now we've got our little jumper wire ready to go; we're going to strip that off and connect it to the other side of our fuse.
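The transcript notes that a 15 amp fuse would be plenty even though a 40 amp automotive fuse is used. As a hedged illustration of how such a DC-side fuse might be sized, the sketch below applies the common 125 percent margin to an assumed panel's short-circuit current; the panel figures are illustrative, not taken from the video:

```python
# Illustrative fuse sizing for a small 12 V solar panel (all numbers are assumptions).
panel_watts = 100.0   # assumed panel rating
panel_vmp = 17.5      # typical max-power voltage for a "12 V" panel
panel_isc = 6.5       # assumed short-circuit current in amps

operating_current = panel_watts / panel_vmp   # ~5.7 A flowing in normal operation
fuse_rating = 1.25 * panel_isc                # common 125% margin -> ~8.1 A
print(f"Operating current ~{operating_current:.1f} A; "
      f"choose a fuse of at least {fuse_rating:.1f} A (next standard size up).")
```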
Creative Edge Solar5 5000mAh Solar Charger Power Bank Review
The Creative Edge Solar5 Charger is great for people on the go. If you do a lot of outdoor activities, camping, sports, traveling, or you just don't always have access to a power outlet, then the Solar5 will be the best choice to charge your mobile devices. It charges through solar power and has a maximum of 5000mAh capacity. 5000mAh is enough to charge most smartphones for two full charges. It can even fill up some tablets, depending on their battery size. It has two standard USB ports for output and 1 micro USB for charging input. It has four LED lights on the front which indicate how much power is available or if the device is receiving solar power. As you can see in my test, I'm charging a modern phone such as the Galaxy S5 with no problems. The ports deliver 2.1 amps of power, which is a good rate for fast charging. The other great advantage of this charger is that it's highly durable. The rubber exterior and tough solar panel make it shock resistant. I've even tested for water resistance and this device shows no problems with staying on or leaking water.
Duke Energy CEO Lynn Good Protested for Blocking Solar!
Mic Check! Lynn Good: We won't restrict your energy, but we will take some time. What you lead is opposition to energy advance! You only care about leading with energy that can maintain your monopoly. Otherwise, why would Duke Energy oppose the Energy Freedom Act? The Republican-sponsored legislation that would let North Carolinians get affordable solar power from qualified third-party vendors. Solar is democratic! Solar creates jobs! Solar fouls no water! And if we had energy freedom, it would be cheap as sunlight! But ours is one of the five states.
San Diego Solar Is A Smart Investment w Cali Solar Works
Hi, this is Jim. I'm hoping you can help me out for a moment. I'm just wondering if you are now open to saving money on the rising cost of electricity. If you are open to saving money, then we might have a solution for you: find out now whether solar in San Diego is a good fit for you or not. And did you know that between 2012 and 2014 SDGE added a 43.02 percent rate increase? As a matter of fact, that won't stop now. The solution is solar energy in San Diego. Did you know that you can now save up to 30 to 70 percent on your electricity bill? Now, if your home qualifies, the true beauty is you get to switch to a wholesale solar electric rate. Installation of your system is free, monitoring of your solar electric system is free, insurance of your system is free, and you start saving money today. Solar panels are guaranteed for 25 years, which means peace of mind, and virtually everyone loves the fact that it is zero-down solar with no out-of-pocket cost. And the process is as easy as 1, 2, 3; we walk with you every step of the way, for your San Diego solar installation. And of course you do get a free solar consultation, and remember there is no obligation and never any high-pressure sales tactics. I just want to mention that it's company policy that we don't believe in pressuring people or assuming that they need what we have, because San Diego solar is easy to get. We believe in operating as problem solvers, not just trying to close the sale, and I hope you're comfortable with that. If you feel comfortable with that, then feel free to contact us and fill out the contact form on the website.
Understanding Solar Power In Ypsilanti Dave Strenski At TEDxEMU
Understanding Solar Power In Ypsilanti Dave Strenski At TEDxEMU. In the spirit of ideas worth spreading, TEDx is a program of local, self-organized events that bring people together to share a TED-like experience. At a TEDx..
Solar Panels System For The Beginner.Solar Panels System for the beginner I show how to hook up solar panels with a battery bank. simple instructions. home solar power station. very easy to put..
How To Hook Up Solar Panels (with Battery Bank) - Simple 'detailed' Instructions - DIY Solar System.shows how to hook up solar panels with a battery bank. simple instructions. home solar power station. very easy to put together. all you need is 1 or more..
Why Should We Launch Solar Panels Into Space?.To solve the energy crisis currently facing the world, one Japanese space firm is aiming to launch a giant solar panel into space! While this would cost a lot of..
Germany Becoming The First 100% Solar Powered Nation..The US will continue to experience once-every-100-year storms, every year, then every month, until we stop burning coal and oil. Storms are caused by burning..
Which Power Source Is Most Efficient?.Australian researchers just unveiled the most efficient solar panels ever. How efficient are they, and what is the most efficient source of energy Get 15 off..
How To Connect Solar Panel To 12 Volt Battery
How To Connect Solar Panel To 12 Volt Battery,.rvfourseasons Colorados 1st Choice RV Dealer in New and Used Travel Trailers for Sale! Save time and money. when you shop for that..
Solar Panel Cells..WINDENERGY7.COM Solar Panel Cells and Solar Panel Cells systems are important to the performance of our Solar Panel Cell Kits. Review the..
Burning Stuff With 2000ºF Solar Power!!.Melt a stack of pennies, burst a glass bottle, damage various food items, and incinerate wood using the power of the Sun! This 4 foot magnifying lens will melt..
Solar Power Myth Busting Video..tecg Find out more about The Energy Conservation Group by going to our site. We install excellent quality solar PV systems and are passionate..
How A Solar Power Purchase Agreement Can Benefit You..Lets firstly consider what is a Solar Power Purchase Agreement SPPA A Solar Power Purchase Agreement SPPA is a financial arrangement in which a..
NARENDRA MODI World's First Canal-top Solar Plant In Guj--1.flv.NARENDRA MODI worlds first canaltop solar plant in Guj. Gujarat begins research on generating power from flowing canal water with micro turbines..
Facts About Solar Energy.Get additional information about solar energy by visiting these websites Greendiyenergy Solar,Wind Energy DIY Guide 1eigJVV Green Powered.. | <urn:uuid:2786e7dd-3215-409f-aa7c-4dd18c0fda93> | 3.578125 | 4,182 | Content Listing | Science & Tech. | 64.701633 | 95,553,043 |
Probabilities and Information Theory
We review some basic notions and results used in this book concerning probability, probability spaces and random variables. This chapter is also intended to establish notation. There are many textbooks on probability theory, including [Bauer96], [Feller68], [GanYlv67], [Gordon97] and [Rényi70].
Keywords: Conditional Probability, Mutual Information, Probability Space, Statistical Distance, Security Parameter
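As a small, purely illustrative companion to the notions listed above (not part of the original chapter), the following sketch computes the statistical (total variation) distance between two discrete distributions and the mutual information of a toy joint distribution; the example numbers are arbitrary:

```python
import math

def statistical_distance(p, q):
    """Total variation distance: 1/2 * sum |p(x) - q(x)| over a common support."""
    return 0.5 * sum(abs(p.get(x, 0.0) - q.get(x, 0.0)) for x in set(p) | set(q))

def mutual_information(joint):
    """I(X;Y) in bits for a joint distribution given as {(x, y): probability}."""
    px, py = {}, {}
    for (x, y), pr in joint.items():
        px[x] = px.get(x, 0.0) + pr
        py[y] = py.get(y, 0.0) + pr
    return sum(pr * math.log2(pr / (px[x] * py[y]))
               for (x, y), pr in joint.items() if pr > 0)

# Arbitrary example distributions (not taken from the book).
p = {0: 0.5, 1: 0.5}
q = {0: 0.75, 1: 0.25}
print(statistical_distance(p, q))                               # 0.25
joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}
print(mutual_information(joint))                                # ~0.278 bits
```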
Scientists say decline in monarch butterflies brings risk of extinction
Western monarch butterflies, which crowd trees along the California coast every winter and flush them with color, have declined so dramatically since the 1980s that the species will likely go extinct in the next few decades if nothing is done, scientists said Thursday in a population study of the treasured creatures.
Fewer than 300,000 of the brilliant orange and black insects were counted last year at some 300 locations stretching from Marin County to the Baja California peninsula, where millions of wintering monarchs historically took up shop, according to the report published in the journal Biological Conservation.
“We believe there were at least 10 million butterflies in many of the years during the 1980s,” said Cheryl Schultz, an associate professor of biological sciences at Washington State University and the lead author of the study. “It’s gone down from 10 million to 300,000. That’s why we were so shocked. We did not expect it to be that sharp of a decline.”
The U.S. Fish and Wildlife Service, alarmed by an estimated 75 percent drop in the population just since the early 2000s, funded the latest study to help officials decide whether to list the monarch under the Endangered Species Act.
The decline is similar to that seen among the more abundant eastern monarchs, which spend their winters in Mexico before heading back across the United States and settling as far north as Canada. That population is famous because the butterflies form a blanket over trees, turning whole sections of forest into a kaleidoscope of color, the insects so abundant that human beings can hear the sound of their flapping wings.
Eastern monarchs have declined more than 90 percent since 1996, when scientists estimated there were 1 billion nesting in the trees. Last winter, 78 million eastern monarchs were counted in Mexico, compared with 100 million the year before.
The study published Thursday represents the most comprehensive measurement of monarch population declines in the west. Earlier counts only went back to 1997, so Schultz and her colleagues used historic observations and developed a statistical method to estimate the population back to 1981.
What they found is that the California population, first observed by a Russian expedition looking for a passage across the Arctic Ocean in 1816, is declining at an average of 7 percent a year, slightly more than the 6 percent drop seen in the eastern monarch population, said Schultz and her co-authors at the nonprofit Xerces Society, the University of Georgia and Tufts University in Massachusetts.
The monarch, one of the largest butterflies in the world, is found throughout North America. Over the years, it has expanded its range around the globe, including to Hawaii, New Zealand and Australia. It is extremely susceptible to changes in habitat and weather and to toxins in the environment.
The highest concentrations of western monarchs arrive in the Monterey and Pacific Grove areas in November and spend the winter living in clusters on pine, cypress and eucalyptus trees. They leave in February and March, spanning out to breed in Northern California, Nevada, Oregon, Idaho, Washington, Arizona and Utah.
Recent studies have blamed the decline of the eastern population on urban sprawl and a lack of milkweed and other nectar-bearing flowers along the migratory route through the Midwest.
Nobody knows exactly what is killing off the western monarchs, but a lack of milkweed is clearly one of the problems, according to researchers. Schultz said the western population is also losing habitat to development and is suffering due to pesticides, herbicides and changes in the climate.
“One of the pieces we’ll be working on is to get a sense of what’s really driving these declines,” she said.
The winter migration of the monarch is one of the most remarkable of any species. It is not clear how much the two groups of butterflies mix during breeding season, but scientists believe the eastern and western migratory populations divide at the Rocky Mountains when they head south for the winter.
The western group spends much of the winter in California while the eastern population winters in Mexico, more than 2,500 miles from where it started.
By winter’s end, the males have died and the females head back north. But each butterfly lasts only a couple hundred miles, before laying eggs on milkweed as a final act before death. The trip back is essentially a relay race involving generations of adult butterflies, which feed on flowers along the way before breeding and dying.
Scientists still can’t figure out how the butterflies pull off this seemingly magical bit of genetic imprinting.
“It’s alarming and shocking to see declines this significant, but I think there is time to turn the trend around,” said Schultz, who pointed to the story of Oregon’s Fender’s blue butterfly. The species fell to 1,500 individuals in the 1990s before rebounding, and today is back up to 28,000 insects after 20 years of work by a public, private and nonprofit partnership.
“It will take a commitment, hard work and it will take time, but we need to start taking action, and we need to do it now,” she said. “There is so much energy and interest in the monarch. It’s an iconic butterfly that everybody knows. There is really good potential to make this happen.”
Monarchs in trouble
Western monarch butterflies spend the winter in more than 300 forested groves along the California coast, including large populations in Riverside and Los Angeles counties, Pacific Grove, Monterey and at Natural Bridges State Beach in Santa Cruz.
They can normally be seen from November to March.
The California winter population declined an estimated 90 percent between 1997 and 2009. There are now roughly 300,000 monarchs in California compared with an estimated 10 million in 1981. | <urn:uuid:3a7ceea9-9f75-4ebc-aeaf-cf7740121038> | 2.8125 | 1,368 | News Article | Science & Tech. | 40.833482 | 95,553,053 |
When comet 45P zipped past Earth early in 2017, researchers observing from NASA's Infrared Telescope Facility, or IRTF, in Hawai'i gave the long-time trekker a thorough astronomical checkup. The results help fill in crucial details about ices in Jupiter-family comets and reveal that quirky 45P doesn't quite match any comet studied so far.
Like a doctor recording vital signs, the team measured the levels of nine gases released from the icy nucleus into the comet's thin atmosphere, or coma. Several of these gases supply building blocks for amino acids, sugars and other biologically relevant molecules. Of particular interest were carbon monoxide and methane, which are so hard to detect in Jupiter-family comets that they've only been studied a few times before.
The gases all originate from the hodgepodge of ices, rock and dust that make up the nucleus. These native ices are thought to hold clues to the comet's history and how it has been aging.
"Comets retain a record of conditions from the early solar system, but astronomers think some comets might preserve that history more completely than others," said Michael DiSanti, an astronomer at NASA's Goddard Space Flight Center in Greenbelt, Maryland, and lead author of the new study in the Astronomical Journal.
The comet--officially named 45P/Honda-Mrkos-Pajdušáková--belongs to the Jupiter family of comets, frequent orbiters that loop around the Sun about every five to seven years. Much less is known about native ices in this group than in the long-haul comets from the Oort Cloud.
To identify native ices, astronomers look for chemical fingerprints in the infrared part of the spectrum, beyond visible light. DiSanti and colleagues conducted their studies using the iSHELL high-resolution spectrograph recently installed at IRTF on the summit of Maunakea. With iSHELL, researchers can observe many comets that used to be considered too faint.
The spectral range of the instrument makes it possible to detect many vaporized ices at once, which reduces the uncertainty when comparing the amounts of different ices. The instrument covers wavelengths starting at 1.1 micrometers in the near-infrared (the range of night-vision goggles) up to 5.3 micrometers in the mid-infrared region.
iSHELL also has high enough resolving power to separate infrared fingerprints that fall close together in wavelength. This is particularly necessary in the cases of carbon monoxide and methane, because their fingerprints in comets tend to overlap with the same molecules in Earth's atmosphere.
"The combination of iSHELL's high resolution and the ability to observe in the daytime at IRTF is ideal for studying comets, especially short-period comets," said John Rayner, director of the IRTF, which is managed for NASA by the University of Hawai'i.
While observing for two days in early January 2017--shortly after 45P's closest approach to the Sun--the team made robust measurements of water, carbon monoxide, methane and six other native ices. For five ices, including carbon monoxide and methane, the researchers compared levels on the sun-drenched side of the comet to the shaded side. The findings helped fill in some gaps but also raised new questions.
The results reveal that 45P is running so low on frozen carbon monoxide, that it is officially considered depleted. By itself, this wouldn't be too surprising, because carbon monoxide escapes into space easily when the Sun warms a comet. But methane is almost as likely to escape, so an object lacking carbon monoxide should have little methane. 45P, however, is rich in methane and is one of the rare comets that contains more methane than carbon monoxide ice.
It's possible that the methane is trapped inside other ice, making it more likely to stick around. But the researchers think the carbon monoxide might have reacted with hydrogen to form methanol. The team found that 45P has a larger-than-average share of frozen methanol.
When this reaction took place is another question--one that gets to the heart of comet science. If the methanol was produced on grains of primordial ice before 45P formed, then the comet has always been this way. On the other hand, the levels of carbon monoxide and methanol in the coma might have changed over time, especially because Jupiter-family comets spend more time near the Sun than Oort Cloud comets do.
"Comet scientists are like archaeologists, studying old samples to understand the past," said Boncho Bonev, an astronomer at American University and the second author on the paper. "We want to distinguish comets as they formed from the processing they might have experienced, like separating historical relics from later contamination."
The team is now on the case to figure out how typical their results might be among similar comets. 45P was the first of five such short-period comets that are available for study in 2017 and 2018. On the heels of 45P were comets 2P/Encke and 41P/Tuttle-Giacobini-Kresak. Due next summer and fall is 21P/Giacobini-Zinner, and later will come 46P/Wirtanen, which is expected to remain within 10 million miles (16 million kilometers) of Earth throughout most of December 2018.
"This research is groundbreaking," said Faith Vilas, the solar and planetary research program director at the National Science Foundation, or NSF, which helped support the study. "This broadens our knowledge of the mix of molecular species coexisting in the nuclei of Jovian-family comets, and the differences that exist after many trips around the Sun."
"We're excited to see this first publication from iSHELL, which was built through a partnership between NSF, the University of Hawai'i, and NASA," said Kelly Fast, IRTF program scientist at NASA Headquarters. "This is just the first of many iSHELL results to come."
More information about NASA's IRTF: http://irtfweb.
More information about comets: http://www.
Liz Zubritsky | EurekAlert!
When light rays pass from air to water or from water to air, they bend as they pass through the surface of the water.
This bending is called refraction.
Refraction of Light Experiment: Can you bend a pencil?
Can you bend a pencil without breaking it?
You can make a pencil look as if it has been bent, by putting it in water.
You will need:
- a glass
- a pencil
1. Fill the glass halfway with water. Put a pencil in the glass and lean it against the side.
2. Look at the water from the side. The pencil will look bent.
3. Now take the pencil out of the water. Nothing has happened to it after all!
Light rays pass through other substances, as well as air and water.
They travel at different speeds as they pass through these substances.
As rays pass out of one substance and into another, they are refracted.
The amount that light is refracted depends on two things.
The first is the color of the light.
Red light bends less than other colors, and violet light bends more.
The second is the angle at which the light reaches the surface of the second substance.
This is called the angle of incidence.
The angle at which the light leaves the second substance is called the angle of refraction.
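The text stops short of the quantitative rule, but the standard relation behind these two angles is Snell's law, n1·sin(θ1) = n2·sin(θ2). The short sketch below applies it to light entering water from air; the refractive indices are typical textbook values rather than measurements from this experiment:

```python
import math

def refraction_angle(incidence_deg, n1=1.00, n2=1.33):
    """Snell's law: angle of refraction (degrees) for light going from medium 1 to medium 2."""
    sin_refracted = n1 * math.sin(math.radians(incidence_deg)) / n2
    return math.degrees(math.asin(sin_refracted))

# Light entering water (n ~ 1.33) from air (n ~ 1.00) at a 45 degree angle of incidence
print(round(refraction_angle(45), 1))  # ~32.1 degrees, bent toward the normal
```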
Try your experiment again, looking at the pencil from a number of different angles.
You will see that the pencil seems to bend more at some viewing angles than at others. | <urn:uuid:6a913724-b051-497e-874d-9c677cab23d2> | 3.71875 | 312 | Tutorial | Science & Tech. | 77.385712 | 95,553,093 |
A team of scientists announced today a critical step on the path of realizing the promise of embryonic stem (ES) cells for medicine. As described in the April 21 issue of Cell, the researchers have discovered unique molecular imprints coupled to DNA in mouse ES cells that help explain the cells’ rare ability to form almost any body cell type. These imprints, or "signatures," appear near the master genes that control embryonic development and probably coordinate their activity in the early stages of cell differentiation. Not only do these findings help to unlock the basis for ES cells’ seemingly unlimited potential, they also suggest ways to understand why ordinary cells are so limited in their abilities to repair or replace damaged cells.
"This is an entirely new and unexpected discovery," said Brad Bernstein, lead author of the study, assistant professor at Massachusetts General Hospital and Harvard Medical School, and a researcher in the Chemical Biology program at the Broad Institute. "It has allowed us to glimpse the molecular strategies that cells use to maintain an almost infinite potential, which will have important applications to our understanding of normal biology and disease."
Chromatin – the protein scaffold that surrounds DNA – acts not only as a support for the double helix but also as a kind of gene "gatekeeper." It accomplishes the latter task by selecting which genes to make active or inactive in a cell, based on the nearby chemical tags joined to its backbone. By examining the chromatin in mouse ES cells across the genome, the scientists discovered an unusual pair of overlapping molecular tags in the chromatin structure, which together comprise what they called a "bivalent domain," reflecting the dual nature of its design. These domains reside in the sections of chromatin that control the most evolutionarily conserved portions of DNA, particularly the key regulatory genes for embryonic development.
"These signatures appear frequently in ES cells, but largely disappear once the cells choose a direction developmentally," said Bernstein. "This suggests they play a significant role in regulating the cells’ unique plasticity."
The remarkable design of bivalent domains, which has not been previously described, merges two opposing influences – one that activates genes and another that represses them. When combined in this way, the negative influence seems to prevail and, as a result, the genes positioned near bivalent domains are silenced. However, the activating influence appears to keep the genes poised for later activity. "For genes, this is equivalent to resting one finger on the trigger," said Stuart Schreiber, an author of the Cell paper, the director of the Chemical Biology program at the Broad Institute, and professor at Harvard University. "This approach could be a key strategy for keeping crucial genes quiet, but primed for when they will be most needed."
Although most people think of heredity in terms of DNA and the genes encoded by it, chromatin also carries inherited instructions known as "epigenetic" information. Thus, the chromatin scaffold (including its bivalent domains) forms a sort of molecular memory that, along with DNA, can be transferred from a cell to its descendants. Yet ES cells signify the earliest cellular ancestors, leaving the question of how epigenetic history first begins. The scientists found that bivalent domains coincide with characteristic DNA sequences, indicating that this molecular memory may originate from the DNA itself. "How the initial epigenetic state is established and then altered during development has implications not only for stem cell biology, but also for cancer and other diseases where epigenetic defects are implicated," Bernstein said.
A related study led by Rick Young, a member of the Whitehead Institute and an associate member of the Broad Institute, appears in the same issue of Cell and describes new control features found in human ES cells.
Michelle Nhuch | EurekAlert!
posted by Alex
The free-fall acceleration at the surface of planet 1 is 18 m/s^2. The radius and the mass of planet 2 are twice those of planet 1. What is the free-fall acceleration on planet 2?
For planet 1, the free-fall acceleration is g1 = G*M1/R1^2.
For planet 2, g2 = G*M2/R2^2, with M2 = 2*M1 and R2 = 2*R1.
So g2 = G*(2*M1)/(2*R1)^2 = (2/4)*G*M1/R1^2 = g1/2 = 18/2 = 9 m/s^2.
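A quick numerical check of that scaling, using only the given ratios (a small sketch, not part of the original answer):

```python
# Free-fall acceleration scales as g = G*M/R^2, so doubling both M and R halves g.
g1 = 18.0           # m/s^2, given for planet 1
mass_ratio = 2.0    # M2 = 2*M1
radius_ratio = 2.0  # R2 = 2*R1
g2 = g1 * mass_ratio / radius_ratio**2
print(g2)  # 9.0 m/s^2
```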
Hope this helped!!! | <urn:uuid:50bf4f98-51d4-4590-8b99-01ed36b98486> | 2.609375 | 103 | Q&A Forum | Science & Tech. | 109.487549 | 95,553,122 |
"From a fundamental point of view, the question is: did anything like this before?", says Adam Frank, Professor of physics and astronomy at the University of Rochester. "And it is highly likely that our time and place — not only of those where there was advanced civilization."
"the Question is whether there are advanced civilizations in other parts of the Universe, is always intertwined with the three unknowns in the Drake equation," says Frank. "We know approximately how many stars there are. We don't know how many of those stars have planets that can support life, how often you may receive life and lead to the emergence of intelligent beings and how long it can hold civilization before disappearing".
"We don't even know if there is a high-tech civilization, to survive for more than a few centuries."
"Our findings suggest that our biological and cultural evolution was not unique and probably has happened many times before. Other examples probably include many energy-consuming civilizations that faced with a crisis on his planet with the development. This means that we can start to investigate the problem, using simulations to understand what leads to long-lived civilizations, and what is not."
The new study shows that recent exoplanet discoveries, combined with a broader way of framing the question, give the existence of past technological civilizations an almost empirical footing. Put briefly: they very probably existed. Unless the odds of advanced life developing are incredibly low, the human species is almost certainly not the first technological, advanced civilization.
In 2016, in a paper published in Astrobiology, scientists first showed what it would mean to be "pessimistic" or "optimistic" in assessing the likelihood of extraterrestrial life.
"Thanks to NASA's Kepler satellite and other searches, we now know that approximately every fifth star has a planet in a "potentially habitable zone," where temperatures could support known life. Thus, one of the three great uncertainties has been constrained."
Thanks to the new results of Frank and Woodruff Sullivan (University of Washington), scientists can apply everything that is known about planets and climate to start simulating how energy-intensive civilizations interact with their home worlds, on the assumption that a large sample of such cases has already existed across the cosmos.
Frank said that the third big question in the Drake equation, how long a civilization can survive, remains completely unanswered. "The fact that humans have had rudimentary technology for about ten thousand years tells us nothing about how long a society like ours can last," he explains.
In 1961, the astrophysicist Frank Drake introduced an equation to estimate the number of developed civilization that could exist in the milky Way galaxy. It looks like this: N = R*(fp)(ne)(fl)(fi)(fc)L, decoding of each variable below. The basis of simple statistics, it is easy to calculate that somewhere there may be thousands, even millions of alien civilizationsthe
Fp — the percentage of stars with planets. the
The Drake Equation has proved to be a robust basis for research, and space technology have allowed scientists to define several variables. But we can only guess what might be a variable of L, expected durability of other advanced civilizations.
Using your approach to the analysis of data about exoplanets in the Universe, Frank Sullivan and sewed to the conclusion that human civilization will be unique in the cosmos, in that case, if civilization will develop on a suitable planet less than one time out of 10 billion trillion (1022)
"One of the ten billion trillion — a very little," says Frank. "For me, this means that other intelligent, technologically advanced species probably evolved before us. Even if the chance of occurrence of reasonable life estimate in one in a trillion, it would mean that throughout cosmic history intelligent life has appeared at least ten billion times."
Now many of us are familiar with Moore's law, the famous principle, according to which the development of computing power follows an exponential curve, doubling in the ratio of price-quality (that is, speed per unit cost) every 18 months or so. When ...
The CRISPR/Cas9 method of editing DNA is considered one of the most important discoveries in modern genetics. However, the technology's great potential conceals monstrous side effects. Scientists were able to figure this out thanks to an extensive analysis of th...
Astronomers from the Carnegie Institution announced the discovery of 12 new satellites of the gas giant Jupiter. Scientists classify 11 of the discovered objects as "normal" outer moons, while one is "strange". The discovery of new satellites...
Astronomers announced the discovery of a cold ring of cosmic dust around a star near the Solar system, the dim red dwarf Proxima Centauri. The discovery suggests that this star is, in addition, home to the nearest earth-like planet tha...
in recent years, a stumbling block for many teams of researchers from different parts of the world. With their help, the researchers plan to treat a variety of diseases, including neurodegenerative ones. But there is one big but hamperin...
What separates Elon musk as an entrepreneur from others is the fact that any enterprise which he undertakes, is born of bold and inspiring vision for the future of our species. Recently, Musk announced the creation of a new compan... | <urn:uuid:6a773683-d6c6-45ed-96bc-bddeb6c0a7f4> | 3.65625 | 1,066 | Content Listing | Science & Tech. | 37.914869 | 95,553,126 |
Freshwater megafauna such as river dolphins, crocodilians and sturgeons play vital roles in their respective ecosystems. In a recent scientific publication, researchers of the Leibniz-Institute of Freshwater Ecology and Inland Fisheries (IGB) in Berlin have teamed up with international colleagues to illustrate the factors that currently threaten these large vertebrates. The authors also call for a more comprehensive assessment of these large freshwater animals and for a more targeted conservation plan. A wider range of freshwater species and freshwater ecosystems suffering from declining biodiversity also stands to benefit from such megafauna-based actions.
Many large aquatic vertebrates, referred to as freshwater megafauna, cover long distances between their breeding and feeding grounds. To ensure their safe passage, they are dependent on free-flowing waters. However, this makes them vulnerable to the increasing fragmentation of river catchments caused by damming.
The Yangtze Finless Porpoise is listed as critically endangered on the IUCN Red List of Threatened Species.
Photo: Huigong Yu
The Russian sturgeon, for example, has lost 70 per cent of its spawning grounds in the Caspian Sea basin and its entire Black Sea basin spawning grounds over the last 60 years due to dam construction. The boom in dam construction also affects many other species, such as the Amazonian manatee, the Ganges river dolphin and the Mekong giant catfish – these species are now classified as threatened. “The fragmentation of habitats is one of the central threats to freshwater megafauna, as well as overexploitation” explained Fengzhi He.
The IGB researcher is the lead author of the study on the disappearance of large vertebrates from rivers and lakes, published recently in WIREs Water. According to The International Union for Conservation of Nature (IUCN) Red List of Threatened Species, more than half of the world’s large-bodied vertebrates weighing more than 30 kg that live in freshwater ecosystems are threatened.
Yet, freshwater megafauna species play key roles in their respective ecosystems: owing to their size, most are at the top of the food chain, meaning that a large proportion of creatures in the local ecosystem would be affected by their extinction.
The mode of life of the Eurasian beaver and the North American beaver, for example, induces them to shape entire river courses, affecting not only biochemical and hydrological processes, but also in-stream and riparian assemblages; in the Everglades, the American alligator creates and maintains small ponds, providing habitats for a large number of plants and smaller animals.
“The importance of freshwater megafauna for biodiversity and humans cannot be overstated,” stressed Fengzhi He. Together with colleagues from IUCN, the University of Tübingen and Queen Mary University of London, Fengzhi He describes in this publication which factors pose threats to freshwater megafauna. Besides the obstruction and fragmentation of water bodies following dam construction, these factors include overexploitation, environmental pollution, habitat destruction, species invasion and the changes associated with climate change.
According to the authors, megafauna species are highly susceptible to external factors owing to their long lifespan, large body size, late maturity and low fecundity. Despite the fact that many megafauna species are under great threat, they have been largely neglected in previous research and conservation actions.
Fengzhi He and his co-authors call for research focusing on the distribution patterns, life history and population dynamics of freshwater megafauna. Freshwaters are among the most endangered ecosystems on the planet, where biodiversity is declining faster than in marine and terrestrial realms. For this reason, it is all the more important to develop sustainable nature conservation strategies for freshwater ecosystems and their megafauna.
Link to study
He, F., Zarfl, C., Bremerich, V., Henshaw, A., Darwall, W., Tockner, K. and Jähnig, S. C. (2017), Disappearing giants: a review of threats to freshwater megafauna. WIREs Water, e1208. doi:10.1002/wat2.1208
Leibniz-Institute of Freshwater Ecology and Inland Fisheries (IGB)
Müggelseedamm 310, 12587 Berlin
Dr. Sonja Jähnig
Leibniz-Institute of Freshwater Ecology and Inland Fisheries (IGB)
Justus-von-Liebig-Str. 7, 12489 Berlin
+49 (0)30 6392 4085
Work at IGB combines basic research with preventive research as a basis for the sustainable management of freshwaters. In the process, IGB explores the structure and function of aquatic ecosystems under near-natural conditions and under the effect of multiple stressors. Its key research activities include the long-term development of lakes, rivers and wetlands under rapidly changing global, regional and local environmental conditions, the development of coupled ecological and socio-economic models, the renaturation of ecosystems, and the biodiversity of aquatic habitats. Work is conducted in close cooperation with universities and research institutions from the Berlin/Brandenburg region as well as worldwide. IGB is a member of the Forschungsverbund Berlin e.V., an association of eight research institutes of natural sciences, life sciences and environmental sciences in Berlin. The institutes are members of the Leibniz Association.
Angelina Tittmann | idw - Informationsdienst Wissenschaft
Upcycling of PET Bottles: New Ideas for Resource Cycles in Germany
25.06.2018 | Fraunhofer-Institut für Betriebsfestigkeit und Systemzuverlässigkeit LBF
Dry landscapes can increase disease transmission
20.06.2018 | Forschungsverbund Berlin e.V.
For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth.
To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength...
For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications.
Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar...
Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction.
A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in in the synthesis of new chemical...
Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy.
"Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy....
Ultra-short, high-intensity X-ray flashes open the door to the foundations of chemical reactions. Free-electron lasers generate these kinds of pulses, but there is a catch: the pulses vary in duration and energy. An international research team has now presented a solution: Using a ring of 16 detectors and a circularly polarized laser beam, they can determine both factors with attosecond accuracy.
Free-electron lasers (FELs) generate extremely short and intense X-ray flashes. Researchers can use these flashes to resolve structures with diameters on the...
13.07.2018 | Event News
12.07.2018 | Event News
03.07.2018 | Event News
18.07.2018 | Materials Sciences
18.07.2018 | Life Sciences
18.07.2018 | Health and Medicine | <urn:uuid:dab5b5ea-496e-48aa-a0d2-915dda72afa1> | 3.703125 | 1,812 | Content Listing | Science & Tech. | 32.474063 | 95,553,133 |
What Is Geoengineering - An Explanation Of Geoengineering
So is this a good idea? Well, there are many different potential ways of tackling climate change, and this remains one of the most controversial. This is because of the potential risk or danger - doing something to change the environment could have an unforeseen and damaging impact on the climate.
The climate is an extremely complicated system to model. Even scientists who have studied the climate for a long time have only a limited understanding of what causes various weather patterns, how different weather systems around the world affect our weather, and how important, or not, factors such as sunspots and periods of stronger or weaker solar activity really are.
There is a cornucopia of feedback mechanisms to consider, some short term, others long term, some subtle, others not so subtle, and when these will be triggered can be very hard to determine. Then, when a feedback mechanism is triggered, its impact on other cycles has to be taken into account: will one lead to another and set off some sort of runaway... and so on and so forth!
In short, this means that geoengineering is inherently risky. One of the favoured approaches would be to use a sulphurous gas - specifically sulphur dioxide - placed in the stratosphere to help cool down the planet, because it would reflect the sun's light rather than absorb it as greenhouse gases do.
Will geoengineering happen? Who knows. And who knows what testing is being done around the world by governments to see if it is likely to work and to work out its impact. The important thing is that there should be a public debate about the various options for mitigating climate change, and about their pros and cons, so that people can make an informed decision. However, with the financial crisis of the last couple of years, the environmental agenda around the world has taken a back seat, but this does not mean it has gone away.
Date published: 26 Jul 2010
More Green Household Related ArticlesHow Microwaves Will Save Industry Energy
Keep The Temperature Down And Save Money And Energy
Technology And Solving The Food Crisis
The 1992 Earth Summit In Rio De Janeiro
Where Does Green Energy Comes From? | <urn:uuid:ef621291-62de-4fc8-b409-db7b38094ea2> | 2.953125 | 458 | Knowledge Article | Science & Tech. | 43.080757 | 95,553,151 |
Theorists propose a way to make superconducting quantum devices such as Josephson junctions and qubits, atom-by-atom, inside a silicon crystal. Such systems could combine the most promising aspects of silicon spin qubits with the flexibility of superconducting circuits.
The light-warping structures known as metamaterials have a new trick in their ever-expanding repertoire. Researchers have built a silver, glass and chromium nanostructure that can all but stop visible light cold in one direction while giving it a pass in the other. The device could someday play a role in optical information processing and in novel biosensing devices.
Researchers are looking to combat dangerous sub-dermal implant infections by upgrading your new hip or kneecap in a fashion appreciated since ancient times - adding gold. The result is a new antibacterial material based on gold nanoparticles.
Scientists have proposed a new type of photo-energy detector - of infrared pulsed laser light - using a nanoporous ZnO/n-Si structure that would be relatively simple and inexpensive to develop. Photodetectors are a core component in optoelectronic devices, and this new detector could have expanded applications in the future. | <urn:uuid:3365e32d-991e-4874-8e6e-769e7da99b76> | 3.03125 | 252 | Content Listing | Science & Tech. | 21.974433 | 95,553,159 |
Previous overviews of plant invasion in Hungary were based on local case studies and the authors’ experience. The MÉTA survey provided an opportunity to outline a more exact picture based on the survey of the whole country. This paper summarises the basic statistics related to plant invasion: cover of invaded area estimated for the country, each geographical region and each distinguished (semi-)natural habitat category, and cover of the selected 15 alien species in each habitat category.
Botta-Dukát Z: Invasion of alien species ... (2008)
Invasion of alien species to Hungarian (semi-)natural habitats
Acta Botanica Hungarica 50(Suppl.): 219-227 | <urn:uuid:762a9d64-65ab-4206-a7d3-8102d01bd53e> | 2.5625 | 141 | Academic Writing | Science & Tech. | 28.853366 | 95,553,176 |
Since the 1970s, when anthropogenic acidification was recognized as a wide spread phenomenon in many European and North American areas, tremendous progress has occurred in the documentation, understanding, and predictions of acid rain effects on element cycling in waters and soils and whole ecosystem functioning. In parallel with significantly reduced emissions of S (and also N in some regions), numerous terrestrial and aquatic ecosystems have been significantly recovering from acidic stress (especially in central Europe) since the late 1980s.
While the chemical recovery, manifested by decreasing terrestrial export of H+, strong acid anions, base cations, and ionic Al, had been expected, the rapid increase in DOC leaching was a surprise, not predicted by available models and previous experience. Nitrate has become the dominant strong acid anion in terrestrial export in some areas, which increased scientific interest in N-saturation. The increased DOC leaching, N-saturation, combined effects of climate change and atmospheric pollution on nutrient cycling in terrestrial ecosystems, and possible trajectories of their recovery have become new targets of the present environmental research in the acidified areas.
The new environmental models combine more individual cycles, like gears in an “ecosystem clock” and use them to explain observed changes in ecosystem functioning. The future research should focus on deeper understanding of relative roles of these cycles in ecosystems. For example, elevated deposition of N and S compounds caused changes in soil fungi to bacteria ratio, decline in soil water pH, and elevated availability of electron acceptors (SO42- and NO3-) in anoxic soil microsites. All these changes may individually have only relatively small effect on soil DOC concentrations. But their combined effect can reduce DOC availability to soil microorganisms to such an extent that some microbial process shift from N to DOC limitation, resulting in elevated NO3- leaching. The elevated terrestrial export of NO3- is accompanied by Al leaching, which affects P availability in aquatic ecosystems. One change can thus affect cycles of many elements in the whole system.
Absolventenfeier Geoökologie 2018/19 | <urn:uuid:21a0ea68-1e86-4cbf-8702-5d33e311b219> | 2.78125 | 486 | Knowledge Article | Science & Tech. | 13.160923 | 95,553,180 |
Wavelets, Introduction to
Wavelets emerged in the late 1980s as a valuable new tool in science and engineering, and as a topic for fruitful mathematical research. Wavelet transforms are now regularly applied in areas such as image processing, statistical analysis, and seismic research. Wavelets are at the heart of the WSQ standard used by the United States' Federal Bureau of Investigation to compress fingerprint images. They have been implemented in the Red professional digital video camera, and they are now being used in an attempt to analyze paintings by famous artists. In the past twenty years, related "-lets" tools have also emerged, such as ridgelets and shearlets (see Curvelets and Ridgelets). The 1992 book by Daubechies is a classic in the field.
Wavelet analysis shares with Fourier analysis the key idea of having a basis of functions with which to analyze signals and other functions. While Fourier analysis uses sine and cosine functions (or complex exponentials) – which are smooth and...
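As a concrete, minimal illustration of that idea, one level of the Haar wavelet transform (the simplest wavelet basis, used here purely as an example and not singled out in this entry) splits a signal into local averages and local differences; a small Python sketch:

    import numpy as np

    def haar_step(signal):
        """One level of the Haar wavelet transform; the signal length must be even."""
        s = np.asarray(signal, dtype=float)
        approx = (s[0::2] + s[1::2]) / np.sqrt(2.0)   # low-pass: coarse trend
        detail = (s[0::2] - s[1::2]) / np.sqrt(2.0)   # high-pass: local detail
        return approx, detail

    a, d = haar_step([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
    print(a)  # coarse approximation, half the original length
    print(d)  # detail coefficients, near zero where the signal is locally smooth

Repeating the step on the approximation output gives the familiar multi-level wavelet decomposition.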
Keywords: Wavelet Analysis, Wavelet Function, Continuous Wavelet Transform, Wavelet Filter, Fingerprint Image
- 1. Bradley J, Brislawn C, Hopper T (1993) The FBI wavelet/scalar quantization standard for gray-scale fingerprint image compression. In: Proc Conf Visual Info Process II, Proc SPIE, vol 1961, pp 293–304
- 2. RED Digital Cinema Camera Company, www.red.com
- 4.Daubechies I (1992) Ten lectures on wavelets. SIAMGoogle Scholar | <urn:uuid:9b9ce561-9bb5-413b-ab8b-dd66ef8a2647> | 3.1875 | 330 | Academic Writing | Science & Tech. | 38.18844 | 95,553,183 |
An Alfred Edward Chalon watercolour of Augusta Ada King-Noel, Countess of Lovelace (née Ada Byron) circa 1840 © Donaldson Collection/Michael Ochs Archives/Getty Images
The name Ada Lovelace (more formally, Augusta Ada King, Countess of Lovelace) is one that has become closely linked to early developments in computing. This is due to her collaboration with the nineteenth-century polymath Charles Babbage on designs for his proposed 'Analytical Engine', which we now recognise as a steam-powered programmable computer.
The precise nature of Lovelace’s contributions has seen a great deal of argument amongst scholars, with much of the debate focusing on whether she had the necessary mathematical background knowledge. In a recent book, Ada Lovelace: The Making of a Computer Scientist, we have argued, based upon mathematical manuscripts held by the Bodleian Library in Oxford, that she was indeed a competent mathematician, and fully capable of working with Babbage.
Babbage started work on his Analytical Engine in the mid-1830s, with the idea of creating a new calculating machine that could “eat its own tail”, by which he meant that it could modify its calculation while it was running. It would do this by pausing during a calculation, and using the values it had already determined to choose between two possible next steps. Babbage listed the basic operations that such a machine, with a large enough memory, would need if it were to execute “the whole of the developments and operations of analysis”, in other words any calculation that could be conceived of at the time. We now know that the basic operations that he described are what are needed to compute anything that can be calculated by any modern computer. This means that the Analytical Engine would have been, in modern terms, a general-purpose computer, a concept first identified by Alan Turing in the 1930s.
The Analytical Engine was never built, but many aspects of its design were recorded in immaculate detail in Babbage’s drawings and mechanical notation. It was to be programmed by means of punched cards, similar to those used in the weaving looms designed by Joseph Marie Jacquard. Separate decks of cards made up what we would now call the program, and gave the starting values for the computations. A complex mechanism allowed the machine to repeat a deck of cards, so as to execute a loop. The hardware involved many new and intricate mechanisms and was conceived on a massive scale. The central processing unit, which Babbage called the Mill, would be fifteen feet (4.5m) tall; the memory, or Store, holding a hundred 50-digit numbers would be twenty feet (6m) long (Babbage even considered machines with ten times that capacity); and other components included a printer, card punch, and graph plotter. Babbage estimated it would take three minutes to multiply two 20-digit numbers. A machine of that size would indeed have required steam power.
Disillusioned with what he saw as a lack of support from the British scientific establishment, Babbage looked for funding abroad. In 1840, Italian scientists invited him to Turin, where he lectured on the principles of the Engine. In the audience was Luigi Menabrea, who in October 1842 published the first account of the Engine, in French, based on Babbage’s lectures. Ada Lovelace had been thinking for some time about how she might contribute to Babbage’s projects. Another scientific friend, Charles Wheatstone, asked if she would translate Menabrea’s article, and Babbage suggested she expand it with a number of appendices.
After several months of furious effort by them both, the resulting paper was published in Taylor's Scientific Memoirs in August 1843. It was signed only with her initials, A.A.L., and of its sixty-six pages, forty-one make up her appendices.
The paper is most famous for the final appendix, Note G. This demonstrates the operation of the machine by giving the example of the calculation of the so-called 'Bernoulli numbers', which crop up in many places in modern mathematics. The Bernoulli numbers are particularly amenable to machine calculation because they are defined recursively: we may use the first to determine the second, the second for the third, and so on. There are several different ways of calculating these numbers, and Lovelace did not choose the simplest: she noted instead that the “object is not simplicity or facility of computation, but the illustration of the powers of the engine”.
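The recursive character that made the Bernoulli numbers a natural showcase can be sketched in a few lines of Python. This is a modern statement of the standard recurrence (each number is solved for from all the earlier ones), offered only as an illustration and not as a reconstruction of the actual sequence of cards in Note G:

    from fractions import Fraction
    from math import comb

    def bernoulli(n):
        """Return B_0 .. B_n using the identity sum_{j=0..m} C(m+1, j) * B_j = 0."""
        B = [Fraction(0)] * (n + 1)
        B[0] = Fraction(1)
        for m in range(1, n + 1):
            acc = sum(comb(m + 1, j) * B[j] for j in range(m))
            B[m] = -acc / (m + 1)   # solve the identity for B_m
        return B

    print(bernoulli(8))  # B_1 = -1/2, B_2 = 1/6, B_4 = -1/30, B_6 = 1/42, B_8 = -1/30

Each new value depends on every value computed before it, which is exactly the kind of looping, self-referential calculation that the punched-card decks and the card-repeating mechanism were designed to drive.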
The paper contained a detailed explanation of how the various quantities involved in the computation of the Bernoulli numbers are fetched from the Store, used in calculation in the Mill, and moved back again, according to the instructions on the cards. The process is illustrated using a large table, whose columns represent the values of the data, the variables, and the intermediate results, as the engine carried out each stage of the calculation.
This table is often described as “the first computer program”, though Lovelace wrote, more accurately, that it “presents a complete simultaneous view of all the successive changes” in the components of the machine, as the calculation progresses. In other words, the table is what computer scientists would now call an “execution trace”. The “program”, had the idea existed at the time, would have been the deck of punched cards that caused the machine to make those successive changes. Babbage’s designs were rather unclear about aspects of how the cards would be manipulated, so it is hard to reconstruct the exact program. Such tables were still used as a method for explaining computation 100 years later, when Geoff Toothill drew a similar diagram to illustrate the working of the first stored program computer, the “Manchester Baby”.
'Note G' is the culmination of Lovelace’s paper, following many pages of detailed explanation of the operation of the Engine and the cards, and the notation of the tables. The paper shows Lovelace’s obsessive attention to mathematical detail – it also shows her imagination in thinking about the bigger picture.
Lovelace observed a fundamental principle of the machine, that the operations, defined by the cards, are separate from the data and the results. She observed that the machine might act upon things other than number, if those things satisfied mathematical rules. “Supposing”, she wrote:
“that the fundamental relations of pitched sounds in the science of harmony and of musical composition were susceptible of such expression and adaptations, the engine might compose elaborate and scientific pieces of music of any degree of complexity or extent.”
She thought about how the engine might do algebra, how it:
“weaves algebraical patterns just as the Jacquard loom weaves flowers and leaves”
and how it might make new discoveries:
“We might even invent laws for series or formulæ in an arbitrary manner, and set the engine to work upon them, and thus deduce numerical results which we might not otherwise have thought of obtaining.”
These led her to think about what we now call artificial intelligence, though she argued that the engine was not capable of original ideas:
“The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform.”
Alan Turing disagreed. He wrote a famous paper on 'Computing machinery and intelligence', where he challenged what he called “Lady Lovelace’s objection” by suggesting that we can “order” the machine to be original, by programming it to produce answers that we cannot predict.
Lovelace’s thoughts about using the machine are very familiar to present-day programmers. She understood how complicated programming is, and how difficult it can be to get things right, as
“[t]here are frequently several distinct sets of effects going on simultaneously; all in a manner independent of each other, and yet to a greater or less degree exercising a mutual influence.”
And, echoing a concern of every programmer ever, she also appreciated the need to “reduce to a minimum the time necessary for completing the calculation”.
Lovelace’s paper is an extraordinary accomplishment, probably understood and recognised by very few in its time, yet still perfectly understandable nearly two centuries later. It covers algebra, mathematics, logic, and even philosophy; a presentation of the unchanging principles of the general-purpose computer; a comprehensive and detailed account of the so-called “first computer program”; and an overview of the practical engineering of data, cards, memory, and programming.
Lovelace and Babbage’s collaboration by letter, as they exchanged versions of the table for the Bernoulli numbers, echoes the frustrations of all collaborators — “Where is it gone?” wrote Babbage, as they lost track of Note G. Towards the end of the work tempers became frayed. Lovelace refused to let Babbage add to the paper a strong criticism of the British government; and Babbage turned down her offer to become further involved in organising the building of the engine.
However Babbage continued to speak admiringly of Lovelace, writing to Michael Faraday of:
“that Enchantress who has thrown her magical spell around the most abstract of Sciences and has grasped it with a force which few masculine intellects (in our own country at least) could have exerted over it.”
They did not collaborate again, but remained friends: Lovelace’s letters to Babbage are full of details of the mathematics books she was reading, the progress of her children, and the antics of her dogs, chickens, and starlings. In the last year of her life, Babbage accompanied the now frail Lovelace to the Great Exhibition, and encouraged her to “put on worsted stockings, cork soles and every other thing which can keep you warm”. To his annoyance, none of his machines were displayed there.
- Five experiments that might have influenced Mary Shelley’s Frankenstein
- 10 amazing women in science history you really should know about
- Joseph Lister and the grim reality of Victorian surgery
- Five of the greatest mathematicians you’ve (probably) never heard of
- 10 of the most mysterious codes and ciphers in history
- Five weird facts about maths | <urn:uuid:8e46b8b6-e04f-4e1d-957d-f6c4282aa9b6> | 3.625 | 2,183 | Nonfiction Writing | Science & Tech. | 27.883133 | 95,553,197 |
What you learn when you dive in underwater caves
Maybe when you picture a university professor doing research it involves test tubes and beakers, or perhaps poring over musty manuscripts in a dimly lit library, or maybe going out into the field to examine new crop-growing techniques or animal-breeding methods. All of it’s good, solid research and I commend them all.
Then there is what I do – cave diving. To study the biology and ecology of coastal, saltwater caves and the marine fauna that inhabit them, my cave diving partners and I head underground and underwater to explore these unique and challenging ecosystems. Often we go to places no other human has been. While the peaks of the tallest mountains can be viewed from an airplane or the depths of the sea mapped with sonar, caves can only be explored firsthand.
Around the globe, from Australia to the Mediterranean, from Hawaii to the Bahamas and throughout the Caribbean, I have explored more than 1,500 such underwater caves over the last 40 years. The experience can be breathtaking. When you are down 60 to 100 feet in a cave that has zero light and is 20 miles long, you never know what you are about to see as you turn the next corner.
My primary focus is searching for new forms of life – mostly white, eyeless crustaceans – that are specifically adapted to this totally dark, food-poor environment. Cave diving is an essential tool in our investigations since the caves I’m interested are filled with water: typically a layer of fresh or brackish water on the surface and then saltwater at depths of 10 to 20 meters or more.
There’s no other way to access these unexplored areas than to strap on your scuba tanks and jump in.
Scientific research as extreme sport
The list of what can go wrong in a cave dive could fill your event planner.
Equipment or light failure, leaking scuba tanks, broken guide lines, getting lost, cave collapse, stirred up silt resulting in zero visibility, poisonous gas mixtures – you get the idea.
It’s fieldwork that can be a matter of life or death. I have had some close calls over the years, and sadly, have lost several good friends and researchers in cave accidents.
To put it mildly, underwater caves can be very hostile and unforgiving. One such cave – the Devil’s system in north-central Florida – has claimed at least 14 lives in the last 30 years, and there are other examples elsewhere in Florida and in Mexico.
Most of the time, human error is to blame, when divers don’t follow the rules they should or lack essential training and experience in cave diving.
My family has gotten used to the idea that what I do is not always a walk in the park. They know that since I’m 69, I stress safety, being physically and mentally prepared, and that I religiously abide by the cardinal rule of cave diving – that you never, ever dive alone. My colleagues and I usually go into a cave with teams of two to three divers and constantly look after each other to see if there is anything going wrong during our dives, which usually last about 90 minutes, but can be as long as three hours or more.
Death-defying dives pay off in discoveries
It’s not just new species we are discovering, but also higher groups of animals including a new class, orders, families and genera, previously unknown from any other habitat on the planet. Some of our newfound animals have close relatives living in similar caves on opposite margins of the Atlantic Ocean or even the far side of the Earth (such as the Bahamas versus Western Australia).
While most of these caves are formed in limestone, they can also include seawater-flooded lava tubes created by volcanic eruptions. Amazingly, similar types of animals inhabit both.
In the deserts of West Texas, our team discovered and explored the deepest underwater cave in the U.S., reaching a depth of 462 feet.
The graduate students in my lab work on a diverse group of questions. They’re uncovering the nature of chemosynthetic processes in caves – how microorganisms use energy from chemical bonds, rather than light energy as in photosynthesis, to produce organic matter – and their significance to the cave food web.
Other students are examining records of Ice Age sea level history held in cave sediments, as well as the presence of tree roots penetrating into underwater caves and their importance to the overlying tropical forest. We’re finding evidence that sister species of cave animals on opposite shores of the Atlantic separated from one another about 110 million years ago as tectonic plate movements initiated the opening of the Atlantic, as well as determining how environmental and ecological factors affect the abundance and diversity of animals in saltwater caves.
Our research has significant implications, especially concerning endangered species and environmental protection. Since many cave animals occur only in a single cave and nowhere else on Earth, pollution or destruction of caves can result in species extinctions. Unfortunately, the creation of many protected areas and nature reserves failed to take cave species into account.
Some discoveries can be completely unanticipated. For example, when we sequenced DNA from a variety of arthropods, including crustaceans and insects, the data strongly support a sister group relationship between hexapods (the insects) and remipedes, a small and enigmatic group of marine crustaceans exclusively found in underwater caves. This places the remipedes in a pivotal position to understanding the evolution of crustaceans and insects.
Even at this stage of my life, to me the risks attendant to my cave diving research are worth it. It’s like the Star Trek mantra come true – to boldly go where no man has gone before. The chance to discover new forms of marine life, to view never-before-seen underwater formations, vast chambers, endless tunnels and deep chasms, to swim in some of the bluest and purest water on Earth – I will take that sort of research and its challenges any day.
Yes, it can give new meaning to the old line about “publish or perish” in academia. But I love it, and I will tell you with all honesty, I can’t wait until my next trip.
Bookmark Gray Matters. Never, ever dive alone. | <urn:uuid:bc9914ab-4ecc-4df2-9afb-1f9cb222b989> | 2.734375 | 1,331 | Truncated | Science & Tech. | 39.811303 | 95,553,214 |
Global warming may bring about the heat that’s twice as bad as what climate models project. In this scenario, polar ice caps could collapse and the desert could become green – effects that are underestimated in current forecasts. ( Pixabay )
A new study suggests that global warming that’s twice as bad may be in store for humans in the future.
The new projections are rather grim: the polar ice caps could collapse and the Sahara Desert could become green as a result of aggressive changes in various ecosystems.
Global warming may be twice as bad as climate models project, and sea levels may rise 20 feet even if the world meets the 2-degree-Celsius (3.6-degree-Fahrenheit) warming target, an international team of researchers from 17 nations noted in the new research.
The team explored evidence from three warm periods that occurred in the last 3.5 million years, when Earth was 0.5 to 2 degrees Celsius (0.9 to 3.6 degrees Fahrenheit) hotter than pre-industrial 19th-century temperatures.
In their observations, the team saw that there are “amplifying mechanisms,” not well-represented in climate models, which make long-term warming worse than what is forecasted in climate models.
“This suggests the carbon budget to avoid 2°C of global warming may be far smaller than estimated, leaving very little margin for error to meet the Paris targets,” said Hubertus Fischer, lead author and University of Bern professor.
The researchers derived their conclusion from their analysis of three warm periods, namely the Holocene thermal maximum period some 5,000 to 9,000 years earlier, the last interglacial period 129,000 to 116,000 years ago, and the mid-Pliocene period some 3.3 to 3 million years earlier.
They combined measurements from the likes of ice cores and fossil records, then studied the impact of the climatic changes. The first two periods, for instance, warmed from predictable changes that happened in the planetary orbit.
The periods offered strong proof of how much warmer Earth would become after the climate normalized. Sea-level rise could persist for thousands of years, warned Alan Mix, co-author and Oregon State professor.
Present Climate Models Are Focused On The Near Term
These profound changes could hit the planet with as little as 1.5 degrees Celsius (2.7 degrees Fahrenheit) of warming, yet today's climate projections generally underestimate them as implications of long-term warming, Mix said.
He explained that these models may be relied upon for low-emission scenarios in the decades leading up to 2100, but not really for larger or higher-emission scenarios.
Parts of the world continue to grapple with the ongoing symptoms and effects of climate change. U.S. homeowners, for instance, face a particular threat: sea-level flooding could wipe out more than 300,000 homes in the country.
The findings are discussed in the journal Nature Geoscience.
© 2018 Tech Times, All rights reserved. Do not reproduce without permission. | <urn:uuid:d76eb52e-519b-4efd-842e-4a484c200ff9> | 3.734375 | 617 | News Article | Science & Tech. | 55.035341 | 95,553,215 |
Physicists Pinpoint W Boson, Narrow Search for Higgs
Scientists have produced the most precise measurement of a fundamental particle called the W boson. It will help them search for the elusive Higgs boson, the discovery of which would be an epoch-making event.
The W boson's new mass is 80.387 giga electron volts, or GeV, plus or minus 0.019 GeV. (Scientists often give a particle's mass in units of energy because, according to Einstein's famous E=mc² equation, the two are interchangeable.) The most precise previous measurement had an uncertainty of about 0.060 GeV.
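As a rough illustration of that interchangeability, the quoted figure can be converted from energy units into kilograms with m = E/c². A minimal Python sketch (the conversion factor is the standard value of 1 GeV/c² in kilograms; the script itself is not part of the original article):

    GEV_TO_KG = 1.78266e-27             # 1 GeV/c^2 expressed in kilograms
    w_mass_gev = 80.387                 # the CDF value quoted above
    uncertainty_gev = 0.019

    print(w_mass_gev * GEV_TO_KG)       # ~1.43e-25 kg
    print(uncertainty_gev * GEV_TO_KG)  # ~3.4e-29 kg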
At the subatomic scale, such little differences are immense. The new result is "exquisite" and places the uncertainty "in another category with respect to the past results," wrote physicist Tommaso Dorigo in his blog. The finding was presented Feb. 23 at the Fermi National Accelerator Laboratory in Batavia, Illinois.
Researchers with the CDF collaboration at Fermilab produced the estimate using data from the now-closed Tevatron, formerly the world's premier particle accelerator, where measurements of collisions between protons and antiprotons fired around a 4-mile-long track provide insight into the subatomic world. Though CERN's Large Hadron Collider has eclipsed the Tevatron, the result shows that the U.S. lab still has a few tricks up its sleeve.
The W boson, along with its counterpart the Z boson, is responsible for carrying the weak force, much the same way that photons convey electromagnetic force. Together with gravity and the strong nuclear force, these comprise the four fundamental forces of nature. The W boson's discovery in 1983 was a major success for the Standard Model, developed by physicists to explain the interactions of all subatomic particles and forces, and its mass is an important input for many nuclear and astrophysical calculations.
It's also intimately linked to two other subatomic particles: the top quark, the heaviest of the six types of quarks, and the Higgs boson. "If you know the mass of any two, you know the mass of the third," said physicist Rob Roser, co-spokesman for the CDF collaboration.
That potential extrapolation is crucial. While the Higgs boson has been theoretically predicted to exist, and is believed integral to the very essence of mass, it hasn't actually been spotted.
Last December, researchers at the Large Hadron Collider saw hints of what may be the Higgs boson, and pegged its mass at about 125 GeV. The extra-precise measurement of the W boson fits with this measurement of the Higgs. The result also means that physicists shouldn't expect to find the Higgs anywhere higher than 145 GeV.
All eyes are now on this final sliver of energy where the Higgs may be hiding, said physicist Ashutosh Kotwal of Duke University in North Carolina, who presented the latest results from the CDF collaboration. If the Higgs turns up there, it will confirm scientists' theories. If it doesn't, they will have to start looking for new, more exotic ways to explain the universe.
"It's basically make it or break it for the Standard Model."
Though the Large Hadron Collider has progressed further in the Higgs search, Fermilab scientists still hope to be part of the hunt. Next month they will present their latest Tevatron data, which may include a Higgs signal. And even if Fermilab doesn't find the Higgs themselves, the LHC may never be able to measure the W boson with comparable precision. Its mass may be one of the Tevatron's great legacy calculations, said Roser.
In another three or four years, the CDF collaboration will use the remaining Tevatron data to produce a final estimate, which could go down in history as the most precise W boson measurement ever made.
Image: Fermilab physicist Pat Lukens stands in front of the CDF detector. CDF/Fermilab
| <urn:uuid:71207654-0d65-4d20-b17c-10658e0eea45> | 3.125 | 974 | News Article | Science & Tech. | 49.95941 | 95,553,230 |
Paul originated from a low pressure circulation embedded within the monsoon trough over the Arafura Sea between the northern coast of Australia and New Guinea. As the circulation drifted southward towards northern Australia, it intensified slowly and only became a Category 1 cyclone on the evening of March 28, 2010 (local time), when the center was right over the northeast coast of the Northern Territory, where it brought wind gusts of up to 110 kph (~70 mph, equivalent to a tropical storm on the US Saffir-Simpson scale).
Since its launch back in 1997, the Tropical Rainfall Measuring Mission satellite (better known as TRMM) has served as a valuable platform for monitoring tropical cyclones using its unique combination of active radar and passive microwave sensors. TRMM captured this first image of Paul at 9:08 UTC on March 28, 2010 (6:38 pm Australian CST) when the center was right over the northeast coast of the Northern Territory. The image shows the horizontal distribution of rain intensity inside the storm. Rain rates in the center of the swath are from the TRMM Precipitation Radar (PR), the only spaceborne precipitation radar of its kind, while those in the outer portion are from the TRMM Microwave Imager (TMI). The rain rates are overlaid on infrared (IR) data from the TRMM Visible Infrared Scanner (VIRS).
Although Paul does not have a visible eye in the IR data, the center of the storm's circulation is clearly evident in the rain pattern over the coast. Paul's center of circulation is bordered by a band of moderate intensity rain to the northwest and surrounded by outer rainbands that spiral inwards to the south and east that have light to moderate rain. Embedded within the rainbands are occasional areas of heavy rain.
TRMM data was used to create a 3-D perspective of the storm from data from TRMM's Precipitation Radar instrument. The most prominent feature is a deep convective tower, which penetrates up to 9 miles (15 km) high. This corresponds with an area of intense rain in the northwestern eyewall evident in the TRMM's image of horizontal rainfall. These tall towers are associated with convective bursts and can be a sign of future strengthening as they indicate areas where heat, known as latent heat, is being released into the storm. This heating is what drives the storm's circulation. Despite Paul's proximity to land, it was able to intensify into a Category 2 cyclone (equivalent to a minimal Category 1 hurricane) by the following morning with wind gusts of up to 140 kph (~85 mph). Paul is hovering over land along the coast and is expected to weaken slowly over the next day or so; however, it could eventually re-emerge over the very warm waters of the Gulf of Carpentaria and re-intensify.
TRMM is a joint mission between NASA and the Japanese space agency JAXA.
Rob Gutro | EurekAlert!
Global study of world's beaches shows threat to protected areas
19.07.2018 | NASA/Goddard Space Flight Center
NSF-supported researchers to present new results on hurricanes and other extreme events
19.07.2018 | National Science Foundation
19.07.2018 | Materials Sciences | <urn:uuid:0178d843-781b-49b4-91b4-5d3a8ba22405> | 3.25 | 1,245 | Content Listing | Science & Tech. | 45.789158 | 95,553,240 |
A group of scientists from the New York-based Wildlife Conservation Society (WCS), working with the Marine and Coastal Management Branch of South Africa, have perfected an unusual, hands-on method to study great white sharks, in which these fearsome predators are gently hauled into research vessels to receive high-tech satellite tags.
According to the scientists, the technique is safe for both shark and researcher, resulting in better data to understand – and ultimately protect – one of nature's largest and most maligned carnivores. So far, seven sharks ranging up to eleven-and-a-half feet long and over 800 pounds have been tagged using this technique off the coast of South Africa, one of the world's hot spots for great whites.
The sharks are baited with a hook and line, and quickly hoisted into a specially constructed cradle. At that point, a team of two veterinarians inserts a hose with oxygen-rich water into the shark's mouth to keep it breathing while monitoring its condition. Meanwhile, a group of scientists attaches the tag to the dorsal fin to record the fish's movements. Before the shark is released, it is given a cocktail of medicine to ensure rapid recovery.
Dr. Ramon Bonfil | EurekAlert!
19.07.2018 | Materials Sciences | <urn:uuid:300d62c7-4c36-4afc-a164-4b4bdc15cc6b> | 3.234375 | 906 | Content Listing | Science & Tech. | 42.353914 | 95,553,241 |
It uses the structural conventions of a programming language, but is intended for human reading rather than machine reading. Pseudocode typically omits details that are essential for machine understanding of the algorithm, such as variable declarations, system-specific code and some subroutines. The programming language is augmented with natural language description details, where convenient, or with compact mathematical notation. The purpose of using pseudocode is that it is easier for people to understand than conventional programming language code, and that it is an efficient and environment-independent description of the key principles of an algorithm. It is commonly used in textbooks and scientific publications that are documenting various algorithms, and also in planning of computer program development, for sketching out the structure of the program before the actual coding takes place. | <urn:uuid:cf3ee150-502e-402a-b2d7-13f12af716ea> | 3.9375 | 156 | Knowledge Article | Software Dev. | 7.167191 | 95,553,256 |
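To make the contrast described in the pseudocode entry above concrete, here is an invented example: a pseudocode fragment for finding the largest value in a list, followed by a Python version of the same logic (both are illustrative sketches, not drawn from any particular textbook):

    Pseudocode:
        set best to the first item of the list
        for each remaining item x in the list:
            if x is greater than best, set best to x
        report best

    Python:
        def largest(items):
            """Return the largest element of a non-empty list."""
            best = items[0]
            for x in items[1:]:
                if x > best:
                    best = x
            return best

        print(largest([3, 9, 4, 7]))  # prints 9

The pseudocode states the idea without declarations or language-specific syntax; the Python version adds exactly the details a machine needs.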
Off the coast of Central California, in the inky darkness of the deep sea, a bright orange metal pyramid about the size of two compact cars sits quietly on the seafloor. Nestled within the metal pyramid is the heart of the Monterey Accelerated Research System (MARS) - the first deep-sea ocean observatory offshore of the continental United States. Six years and $13.5 million dollars in the making, the MARS Observatory went "live" on Monday, November 10, 2008, returning the first scientific data from 900 meters (3,000 feet) below the ocean surface.
Construction of the observatory was coordinated by the Monterey Bay Aquarium Research Institute (MBARI). According to Marcia McNutt, MBARI president and CEO, "Getting all of the components of the observatory to work together perfectly in the remote, unforgiving, inhospitable environment of the deep sea was no easy task. But the tougher the challenge, the greater the glory when it is finally achieved. Some day we may look back at the first packets of data streaming in from the MARS observatory as the equivalent of those first words spoken by Alexander Graham Bell: 'Watson, come here, I need you!'"
Like the Hubble Space Telescope, the MARS Observatory is not designed for human occupation, but is operated remotely. The observatory will serve as both a "power strip" and a "high-speed internet connection" for scientific instruments in the deep sea. It will allow marine scientists to continuously monitor the dark, mysterious world of the deep sea, instead of relying on brief oceanographic cruises and instruments that run on batteries.
The heart of observatory consists of two titanium pressure cylinders packed with computer networking and power distribution equipment. These cylinders are nestled within a protective metal pyramid on the deep seafloor. This central hub is connected to shore by a 52-kilometer-long cable that can carry up to 10,000 watts of power and two gigabits per second of data. Most of the cable is buried a meter (three feet) below the seafloor.
Over the next few months, a variety of scientific instruments will be hooked up to the observatory using underwater "extension cords." These will include instruments to monitor earthquakes and to capture deep-sea animals on video. Researchers will also be testing an experiment to study the effects of ocean acidification on seafloor animals. MBARI technicians will use remotely operated vehicles (robot submarines) to plug these instruments into the central hub. After the instruments are hooked up, researchers will be able to run experiments and study deep-sea data and images from anywhere in the world.
Researchers whose experiments are hooked up to the MARS observatory will no longer have to worry about their instruments' batteries wearing out, because their experiments will get all of their electrical power from shore. Even better, researchers won't have to wait for weeks or months to recover their instruments and find out how the experiments turned out. They will be able to look at data and video from the deep sea in real time, 24 hours a day. This will allow researchers will to know immediately if their experiments are working or not.
Providing a place for researchers to test their deep-sea instruments is one of the primary goals of the MARS observatory. Many instruments will be tested on MARS before being hooked up to other deep-sea observatories offshore of the U. S. and other countries.
Funded in 2002 by a grant from the National Science Foundation, the MARS Observatory was constructed through a collaborative effort by MBARI, Woods Hole Oceanographic Institution, the University of Washington Applied Physics Laboratory, NASA's Jet Propulsion Laboratory, L-3 Communications MariPro, and Alcatel-Lucent. Each group was responsible for preparing a different part of the observatory. According to Keith Raybould, the MARS project manager, "MARS was a very challenging project. Our academic partners and contractors used many new, cutting-edge technologies. Everything had to be carefully coordinated so that all the parts of the system would work together seamlessly."
Designing and constructing this one-of-a-kind system took over six years of hard work. The environmental review process for the MARS cable alone lasted over two years and cost roughly one million dollars. One of the biggest technical challenges was creating an underwater electrical system that could convert the 10,000 volts of direct current coming through the cable to the much lower voltages required by scientific instruments.
In April 2007, an undersea cable was laid from the observatory site to shore. On February 26, 2008, MBARI engineers and remotely-operated-vehicle pilots installed the the central hub and powered up the system. Unfortunately, after only 20 minutes of operation, the plug for the main power-supply began to leak, and the system had to be shut down.
Following this setback, MBARI staff and contractors spent the next eight months repairing and testing the observatory hub. This effort culminated in early November, 2008, when the cable-laying ship IT Intrepid arrived in Monterey Bay and hauled the trawl-resistant frame up to the surface. Working around the clock, technicians on board the ship replaced the failed underwater connector, then lowered the frame back down to the seafloor. On November 10, 2008, the MBARI marine operations group reinstalled the observatory hub and powered the system up. All systems worked perfectly. The MARS observatory had finally become a reality.
One of the first experiments that will be hooked up to the MARS observatory is the FOCE project. Led by MBARI chemist Peter Brewer, this experiment will allow researchers to find out how the increasing acidity of seawater is affecting deep-sea animals. Seawater is becoming more acidic in many parts of the ocean as human-generated carbon dioxide in the atmosphere dissolves into the oceans.
Another experiment that will be hooked up to the MARS Observatory is a special low-light video camera called the Eye-in-the-Sea. Developed under the direction of marine biologist Edie Widder, this system illuminates the seafloor with a dim red light that is invisible to many deep-sea animals. When Widder placed an earlier version of this instrument on the seafloor in the Gulf of Mexico, it captured rare footage of deep-sea sharks and of a large squid that was entirely new to science.
A third experiment scheduled for attachment to the observatory is an ultra-sensitive seismometer that will help geologists better understand fault zones and earthquakes along the Central California coast. | <urn:uuid:db2b6c91-81a0-4c28-9cfa-85eb275aad85> | 3.34375 | 1,355 | Knowledge Article | Science & Tech. | 36.253639 | 95,553,266 |
From: John Cowan (firstname.lastname@example.org)
Date: Wed Oct 22 2003 - 06:59:43 CST
Philippe Verdy scripsit:
> I also have some old documents that use <VT>=U+000B instead of
> LF=U+000A to increase the interparagraph spacing. This is still
> mapped to the source '\v' character constant in C/C++ (and Java
> as well, except that Java _requires_ that '\v' be mapped only to
The XML Core WG also looked at FF, but decided that like PS it might
be markup, and therefore shouldn't arbitrarily be mapped to LF.
We didn't look at VT as far as I remember.
Historically and originally, VT was meant to control line printers,
which had a paper tape loop inside that selected the number of lines
per page, and was advanced by one frame for each line printed. A hole
punched in a certain column represented line 1, and so FF was implemented
by advancing the tape and the paper until this hole was detected. Another
column could contain holes for vertical tabulation points, and VT advanced
the tape and paper until the next such hole was reached. Thus VT was
strictly analogous to TAB.
> Some applications still seem to use <VT> after <CR> to create soft line
> breaks, in text files where paragraphs are normally ended by <CR><LF>.
IIRC, Microsoft Word uses VT internally to indicate a hard line break,
and CR for a paragraph break.
> CR was intended to create an overstrike on the previously written (but
> still complete) line, for example to underline some characters on that
> line. This is what '\r' should imply in C, and in fact such '\r' should no
> more be used in C, as it relies to add visual attributes to the previous
> text. That why <CR> comes before <LF> that terminates the paragraph.
In addition, Teletype terminals that received <LF, CR> would not reliably
print the next character in the first horizontal position, because of
the time it took to execute a CR.
--
John Cowan <email@example.com>
http://www.reutershealth.com
http://www.ccil.org/~cowan
"Not to perambulate the corridors during the hours of repose in the boots of ascension." --Sign in Austrian ski-resort hotel
This archive was generated by hypermail 2.1.5 : Thu Jan 18 2007 - 15:54:24 CST | <urn:uuid:836c2e1d-9905-4726-a579-8d43871398f8> | 2.765625 | 571 | Comment Section | Software Dev. | 64.873191 | 95,553,267 |
In total, seven adult male mitten crabs have been documented from the two bays since 2005. Prior to this, the potentially invasive species had never been recorded from coastal waters of the eastern United States.
The mitten crab is native to eastern Asia and has already invaded Europe and the western United States, where it has established reproductive populations. The crab occurs in both freshwater and saltwater. Young crabs spend their lives in freshwater and migrate to saltwater estuaries for reproduction.
Named for the unusual thick fur-like coating on its claws, the mitten crab looks very different than native crabs and is easily recognized. It is listed as injurious wildlife under the Federal Lacey Act, due to its potential to cause ecological and economic damage.
“We don’t know the present status of this crab along the eastern U.S. coast” said Gregory Ruiz, senior scientist at the Smithsonian Environmental Research Center. “At the moment, it is not clear whether these crabs are reproducing or established in the Mid-Atlantic region, or whether the captured crabs are just a few individuals that originated elsewhere.” These crabs may have arrived in the ballast water of ships or through live trade.
Kimbra Cutlip | EurekAlert!
ch 1: Patterns and Number Sense.
ch 2: Understanding Addition.
ch 3: Understanding Subtraction.
ch 4: Data and Graphs.
ch 5: Addition Strategies to 12.
ch 6: Subtraction Strategies to 12.
ch 7: Time.
ch 8: Numbers to 100.
Looking ahead to Grade 2.
Download California Mathematics Grade 1 (Volume 1) [PDF]
California Mathematics Grade 1 (Volume 1)
Start Smart.ch 1: Patterns and Number Sense.ch 2: Understanding Addition.ch 3:...
NJ ASK Grade 4 Mathematics
REA's Ready, Set, Go! NJ ASK Grade 4 Mathematics Test...
Mathematics Daily Review, Grade 3
Silver Burdett Ginn Mathematics ( 2001) components for Grade...
Comparing And Scaling (Connected Mathematics 2, Grade 7)
Are soft-bound, 3-hole-punched to fit in students' binders 4-color with an engaging Unit...
Geometry, Analysis and Topology of Discrete Groups (volume 6 in the Advanced Lectures in Mathematics series) (Advanced Lectures in Mathematics)
Discrete subgroups of Lie groups are foundational objects in modern mathematics and occur naturally...
Everyday Mathematics, Grade 3, Common Core State Standards Edition
Common Core Standards Edition, University of Chicago School Mathematics...
Computational Conformal Geometry (Volume 3 Of The Advanced Lectures In Mathematics Series) (International Press)
Computational conformal geometry is an emerging inter-disciplinary field, with applications to...
Superstring Theory (volume 1 of the Advanced Lectures in Mathematics series) (Advanced Lectures in Mathematics)
Interest in string theory is driven largely by the hope that it will evolve to be the ultimate...
History of California Volume 23
This historic book may have numerous typos and missing text. Purchasers can usually download a free...
Studies in Mathematics and Its Applications, Volume 20
This volume is a thorough introduction to contemporary research in elasticity, and may be used as a... | <urn:uuid:6f1ec471-f238-4d93-8761-d9de26971e2f> | 3.4375 | 469 | Content Listing | Science & Tech. | 44.129025 | 95,553,299 |
Beijing: China's first moon rover will run on a nuclear-powered battery, a scientist said.
The rover will land on the moon next year on board Chang'e-3, China's third lunar probe, said Ouyang Ziyuan, the chief scientist of China's lunar project.
The battery will be able to power the 100-kg vehicle for more than 30 years, reported Shanghai Daily.
"The nuclear power system will make China the third country apart from the United States and Russia to be able to apply nuclear technology to space exploration," Ouyang was quoted as saying.
The rover, controlled by scientists on Earth, will patrol the surface for at least three months, said Ye Peijian, another senior official.
Ouyang said the rover would be powered by the sun during daytime and by nuclear power during the night.
A lunar night lasts 14 days, with extremely low temperatures; the battery will be the only source of energy during that period, preventing the equipment from freezing.
Florida: Earthlings, you are not alone. Scientists from NASA have reportedly found “compelling” new evidence of life on Mars.
A special mission to the Red Planet has revealed the presence of a form of pond scum — the building blocks of life as we know it, reports the Sun.
Experts from NASA put forward the claim as they unveiled the results of the recent Opportunity and Spirit probes, which were sent millions of miles through the solar system to discover signs of extraterrestrial life.
The researchers say that the results are so promising that the agency has already planned a host of other missions to discover whether there is extraterrestrial life in the universe.
The recent missions have gathered evidence of sulphates on Mars, a strong indication there is water on the planet and, therefore, life. Previous missions to Mars have concluded there is probably water on the planet.
But the NASA researchers said the recent missions have gone further than any others in proving there is life on Mars.
They were particularly excited about the discovery of a sulphate called gypsum which, it has emerged recently, is found in large quantities among fossils in the Mediterranean.
“One, thanks to Opportunity and the rovers and orbital imaging it is clear that there are literally vast areas of Mars that are carpeted with various sorts of sulphates, including gypsum,” the Sun quoted Bill Schopf, a researcher at the University of California in Los Angeles, as saying.
Almost 30 NASA missions to discover life in space - including one to bring back rocks from Mars — have already been planned. | <urn:uuid:ce08bd66-d9c3-4401-a0e2-324b7d2a3f6e> | 2.84375 | 325 | Truncated | Science & Tech. | 36.290605 | 95,553,315 |
January 9, 2016
Dear EarthTalk: How are the world's penguins faring in this day and age of global warming? What can we do to help them? -- M.M.
Not surprisingly, penguins, those cute and quirky flightless birds of the Southern Hemisphere that are loved by humans and have inspired countless films, books, comic strips and sports teams, are in deep trouble as a result of reckless human activity.
The nonprofit International Union for the Conservation of Nature (IUCN), which maintains the "Red List" of at-risk species around the world, considers five of the world's 18 penguin species "endangered." IUCN classifies five more penguin species as "vulnerable" and yet another five as "near threatened." Only three species still exist in healthy enough numbers to qualify for IUCN's "least concern" classification.
Penguins have evolved over millions of years and adapted to big ecosystem and climatic changes along the way, but they face their biggest challenges from threats posed by humans over just the last century.
One of the more dire threats to penguins is commercial fishing. "Overfishing and concentrated fishing efforts near penguin colonies for forage species such as Antarctic krill can make it more difficult for penguins to find nourishment…especially when fishing grounds overlap with the foraging grounds of penguins," reports the Pew Charitable Trusts, a leading nonprofit with a focus on ocean conservation.
Meanwhile, predators and non-native invasive species introduced by humans are also taking their toll. According to Pew, several colonies of little penguins in Australia, for example, have been wiped out by non-indigenous dogs and foxes, while the Galápagos penguin has suffered big losses as a result of pathogen-borne illnesses introduced by non-native species and some natural bird migration.
Yet another threat is habitat destruction. "Tourism-related pressures, such as foot traffic and litter, can encroach on penguin colonies and nesting sites," says Pew. "Oil spills have had severe effects on the health of individual colonies of penguins as well as their foraging habitats."
And climate change, with its resulting melting of vast sheets of sea ice, could well be the greatest threat to already struggling penguin populations. "Ice plays a crucial role in the breeding process for several species of Antarctic penguins and also provides a place for penguins to rest and to avoid predators during long foraging trips," reports Pew. "The loss of sea ice along the Antarctic Peninsula is contributing to reductions in the abundance of Antarctic krill, a favorite food of several penguin species."
But according to Pew, the situation isn't completely hopeless. The creation of more marine reserves where penguins can thrive without the stresses of overfishing and other human activity is a big step in the right direction. Pew is also pushing for better fisheries management in order to increase food sources for penguins and other marine wildlife dependent on nutrients further down the food chain, and also for a reduction in the number of introduced predators and invasive species.
According to Pew, the penguins' plight is a portent of larger environmental concerns: "These birds are sentinels for the health of the entire sea. Changes to their populations can indicate trouble for other species that depend on these waters for survival."
EarthTalk® is produced by Roddy Scheer & Doug Moss and is a registered trademark of the nonprofit Earth Action Network. To donate, visit www.earthtalk.org. Send questions to: firstname.lastname@example.org. | <urn:uuid:1fea0c69-e6a4-4339-b72d-f2158770f942> | 3.234375 | 738 | Nonfiction Writing | Science & Tech. | 40.289431 | 95,553,323 |
The Galilean transformation is used to transform between the coordinate of two reference frames which differ only by constant relative motion within the construct of Newtonian physics. This is a passive transformation point of view. In special relativity the Galilean transformations are replaced by Lorentz transformations.
The notation that can be found below describes the relationship under the Galilean transformation between the coordinates (x, y, z, t) and (x', y', z', t') of a single arbitrary event as measured in two coordinate systems S and S', in uniform relative motion in their common x and x' directions, with their spatial origins coinciding at t = t' = 0.
x’ = x – vt
y’ = y
z’ = z
t’ = t
The transformation can also be considered a shear mapping, described by a matrix acting on a vector. When the motion is parallel to the x-axis, the transformation acts on only two components:
(x', t') = (x, t) ( 1   0 )
                  ( -v  1 )
so that x' = x - vt and t' = t.
While matrix representations are not strictly necessary for the Galilean transformation, they provide the means for direct comparison to transformation methods in special relativity.
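As an illustrative sketch only (not part of the original solution), the transformation and its matrix form can be checked numerically; the function names below are invented for the example.

```python
import numpy as np

def galilean_boost(x, t, v):
    """Coordinates (x', t') in the frame S' moving with velocity v relative to S."""
    return x - v * t, t

def galilean_matrix(v):
    """Matrix form of the same shear mapping, acting on the column vector (x, t)."""
    return np.array([[1.0, -v],
                     [0.0, 1.0]])

x, t, v = 10.0, 2.0, 3.0
print(galilean_boost(x, t, v))                 # (4.0, 2.0)
print(galilean_matrix(v) @ np.array([x, t]))   # [4. 2.]
```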
The short wavelength of graphene plasmons relative to the light wavelength makes them attractive for applications in optoelectronics and sensing. However, this property limits their coupling to external light and our ability to create and detect them. More efficient ways of generating plasmons are therefore desirable. Here we demonstrate through realistic theoretical simulations that graphene plasmons can be efficiently excited via electron tunneling in a sandwich structure formed by two graphene monolayers separated by a few atomic layers of hBN. We predict plasmon generation rates of ~10^12 - 10^14 1/s over an area of the squared plasmon wavelength for realistic values of the spacing and bias voltage, while the yield (plasmons per tunneled electron) is of order unity. Our results support electrical excitation of graphene plasmons in tunneling devices as a viable mechanism for the development of optics-free ultrathin plasmonic devices.
S. de Vega and F. J. García de Abajo, ACS. Phot. 4 (2017)
Sandra de Vega and F. Javier García de Abajo, "Plasmon generation through electron tunneling in graphene (Conference Presentation)," Proc. SPIE 10672, Nanophotonics VII, 106721Q (Presented at SPIE Photonics Europe: April 26, 2018; Published: 23 May 2018); https://doi.org/10.1117/12.2309974.5788799616001.
A classical field theory is a physical theory that predicts how one or more physical fields interact with matter through field equations. The term 'classical field theory' is commonly reserved for describing those physical theories that describe electromagnetism and gravitation, two of the fundamental forces of nature. Theories that incorporate quantum mechanics are called quantum field theories.
A physical field can be thought of as the assignment of a physical quantity at each point of space and time. For example, in a weather forecast, the wind velocity during a day over a country is described by assigning a vector to each point in space. Each vector represents the direction of the movement of air at that point, so the set of all wind vectors in an area at a given point in time constitutes a vector field. As the day progresses, the directions in which the vectors point change as the directions of the wind change.
The first field theories, Newtonian gravitation and Maxwell's equations of electromagnetic fields, were developed in classical physics before the advent of relativity theory in 1905, and had to be revised to be consistent with that theory. Consequently, classical field theories are usually categorized as non-relativistic and relativistic. Modern field theories are usually expressed using the mathematics of tensor calculus. A more recent alternative mathematical formalism describes classical fields as sections of mathematical objects called fiber bundles.
Some of the simplest physical fields are vector force fields. Historically, the first time that fields were taken seriously was with Faraday's lines of force when describing the electric field. The gravitational field was then similarly described.
The first field theory of gravity was Newton's theory of gravitation in which the mutual interaction between two masses obeys an inverse square law. This was very useful for predicting the motion of planets around the Sun.
Any massive body M has a gravitational field g which describes its influence on other massive bodies. The gravitational field of M at a point r in space is found by determining the force F that M exerts on a small test mass m located at r, and then dividing by m:
g(r) = F(r) / m
Stipulating that m is much smaller than M ensures that the presence of m has a negligible influence on the behavior of M.
The experimental observation that inertial mass and gravitational mass are equal to unprecedented levels of accuracy leads to the identification of the gravitational field strength as identical to the acceleration experienced by a particle. This is the starting point of the equivalence principle, which leads to general relativity.
For a discrete collection of masses, Mi, located at points ri, the gravitational field at a point r due to the masses is
g(r) = −G Σi Mi (r − ri) / |r − ri|³
If we have a continuous mass distribution ρ instead, the sum is replaced by an integral,
g(r) = −G ∫ ρ(x) (r − x) / |r − x|³ d³x
Note that the direction of the field points from the position r to the position of the masses ri; this is ensured by the minus sign. In a nutshell, this means all masses attract.
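The discrete-sum form above can be evaluated numerically. The following sketch is illustrative only (the masses, positions and function name are invented for the example, not taken from the article).

```python
import numpy as np

G = 6.674e-11  # gravitational constant in m^3 kg^-1 s^-2

def gravitational_field(r, masses, positions):
    """Field g(r) from point masses M_i at positions r_i, by superposition."""
    g = np.zeros(3)
    for M, r_i in zip(masses, positions):
        d = r - r_i                       # vector from the mass to the field point
        g += -G * M * d / np.linalg.norm(d) ** 3
    return g

# Two equal masses straddling the origin: their fields cancel at the midpoint.
masses = [5.0e10, 5.0e10]
positions = [np.array([-1.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])]
print(gravitational_field(np.array([0.0, 0.0, 0.0]), masses, positions))  # ~[0. 0. 0.]
```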
In the integral form Gauss's law for gravity is
∮ g · dA = −4πGM
while in differential form it is
∇ · g = −4πGρ
The gravitational field is conservative, and can therefore be written as the gradient of a gravitational potential φ(r), with g(r) = −∇φ(r); this is a consequence of the gravitational force F being conservative.
A charged test particle with charge q experiences a force F based solely on its charge. We can similarly describe the electric field E so that F = qE. Using this and Coulomb's law, the electric field due to a single charged particle is
E = (1 / 4πε₀) (q / r²) r̂
The electric field is conservative, and hence is given by the gradient of a scalar potential, V(r):
E(r) = −∇V(r)
Gauss's law for electricity is, in integral form,
∮ E · dA = Q / ε₀
while in differential form
∇ · E = ρ / ε₀
A steady current I flowing along a path ℓ will exert a force on nearby charged particles that is quantitatively different from the electric field force described above. The force exerted by I on a nearby charge q with velocity v is
F(r) = q v × B(r)
where B(r) is the magnetic field produced by the current.
The magnetic field is not conservative in general, and hence cannot usually be written in terms of a scalar potential. However, it can be written in terms of a vector potential, A(r):
B(r) = ∇ × A(r)
Gauss's law for magnetism in integral form is
∮ B · dA = 0
while in differential form it is
∇ · B = 0
The physical interpretation is that there are no magnetic monopoles.
In general, in the presence of both a charge density ρ(r, t) and current density J(r, t), there will be both an electric and a magnetic field, and both will vary in time. They are determined by Maxwell's equations, a set of differential equations which directly relate E and B to the electric charge density (charge per unit volume) ρ and current density (electric current per unit area) J.
Alternatively, one can describe the system in terms of its scalar and vector potentials V and A. A set of integral equations known as retarded potentials allow one to calculate V and A from ρ and J,[note 1] and from there the electric and magnetic fields are determined via the relations
E = −∇V − ∂A/∂t,  B = ∇ × A
Fluid dynamics has fields of pressure, density, and flow rate that are connected by conservation laws for energy and momentum. The mass continuity equation is a continuity equation, representing the conservation of mass
∂ρ/∂t + ∇ · (ρu) = 0
and the Navier-Stokes equations represent the conservation of momentum in the fluid, found from Newton's laws applied to the fluid,
ρ (∂u/∂t + u · ∇u) = −∇p + ∇ · τ + ρb
where u is the flow velocity, p the pressure, τ the deviatoric stress tensor and b the body force per unit mass.
The term "potential theory" arises from the fact that, in 19th century physics, the fundamental forces of nature were believed to be derived from scalar potentials which satisfied Laplace's equation. Poisson addressed the question of the stability of the planetary orbits, which had already been settled by Lagrange to the first degree of approximation from the perturbation forces, and derived Poisson's equation, named after him. The general form of this equation is
∇²φ = σ
where σ is a source function (as a density, a quantity per unit volume) and φ the scalar potential to solve for.
In Newtonian gravitation, masses are the sources of the field so that field lines terminate at objects that have mass. Similarly, charges are the sources and sinks of electrostatic fields: positive charges emanate electric field lines, and field lines terminate at negative charges. These field concepts are also illustrated in the general divergence theorem, specifically Gauss's laws for gravity and electricity. For the cases of time-independent gravity and electromagnetism, the fields are gradients of corresponding potentials
g = −∇φg ,  E = −∇φe
so substituting these into Gauss's law for each case obtains
∇²φg = 4πGρg ,  ∇²φe = −ρe / ε₀
In the case where there is no source term (e.g. vacuum, or paired charges), these potentials obey Laplace's equation:
∇²φ = 0
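To make the source-free case concrete, the sketch below (illustrative only; not from the article) relaxes a two-dimensional grid toward a solution of Laplace's equation with fixed boundary values.

```python
import numpy as np

def solve_laplace(boundary, iterations=5000):
    """Jacobi-style relaxation of the interior of a 2-D grid toward a solution of Laplace's equation.

    `boundary` is a 2-D array whose edge values are held fixed (Dirichlet data).
    """
    phi = boundary.copy()
    for _ in range(iterations):
        # Each interior point is replaced by the average of its four neighbours.
        phi[1:-1, 1:-1] = 0.25 * (phi[:-2, 1:-1] + phi[2:, 1:-1] +
                                  phi[1:-1, :-2] + phi[1:-1, 2:])
    return phi

grid = np.zeros((50, 50))
grid[0, :] = 1.0            # hold the top edge at potential 1, the rest at 0
phi = solve_laplace(grid)
print(phi[25, 25])          # an interior value strictly between 0 and 1
```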
For a distribution of mass (or charge), the potential can be expanded in a series of spherical harmonics, and the nth term in the series can be viewed as a potential arising from the 2n-moments (see multipole expansion). For many purposes only the monopole, dipole, and quadrupole terms are needed in calculations.
Modern formulations of classical field theories generally require Lorentz covariance as this is now recognised as a fundamental aspect of nature. A field theory tends to be expressed mathematically by using Lagrangians. This is a function that, when subjected to an action principle, gives rise to the field equations and a conservation law for the theory. The action is a Lorentz scalar, from which the field equations and symmetries can be readily derived.
Throughout we use units such that the speed of light in vacuum is 1, i.e. c = 1.[note 2]
Given a field tensor φ, a scalar called the Lagrangian density
L(φ, ∂φ, ∂∂φ, …, x)
can be constructed from φ and its derivatives.
From this density, the action functional can be constructed by integrating over spacetime,
S[φ] = ∫ L[φ(x)] √(−g) d⁴x
where √(−g) is viewed as the 'Jacobian' in curved spacetime.
Therefore, the Lagrangian itself is equal to the integral of the Lagrangian density over all space.
Then by enforcing the action principle, the Euler-Lagrange equations are obtained
∂L/∂φ − ∂μ ( ∂L/∂(∂μφ) ) = 0
Two of the most well-known Lorentz-covariant classical field theories are now described.
Historically, the first (classical) field theories were those describing the electric and magnetic fields (separately). After numerous experiments, it was found that these two fields were related, or, in fact, two aspects of the same field: the electromagnetic field. Maxwell's theory of electromagnetism describes the interaction of charged matter with the electromagnetic field. The first formulation of this field theory used vector fields to describe the electric and magnetic fields. With the advent of special relativity, a more complete formulation using tensor fields was found. Instead of using two vector fields describing the electric and magnetic fields, a tensor field representing these two fields together is used.
The electromagnetic four-potential is defined to be A_a = (−φ, A), and the electromagnetic four-current j_a = (−ρ, j). The electromagnetic field at any point in spacetime is described by the antisymmetric (0,2)-rank electromagnetic field tensor
F_ab = ∂_a A_b − ∂_b A_a
To obtain the dynamics for this field, we try and construct a scalar from the field. In the vacuum, we have
L = −(1 / 4μ₀) F_ab F^ab
We can use gauge field theory to get the interaction term, and this gives us
L = −(1 / 4μ₀) F_ab F^ab + j^a A_a
To obtain the field equations, the electromagnetic tensor in the Lagrangian density needs to be replaced by its definition in terms of the 4-potential A, and it is this potential which enters the Euler-Lagrange equations. The EM field F is not varied in the EL equations. Therefore,
∂_b ( ∂L / ∂(∂_b A_a) ) = ∂L / ∂A_a
Evaluating the derivative of the Lagrangian density with respect to the field components
∂L / ∂A_a = j^a
and the derivatives of the field components
∂L / ∂(∂_b A_a) = (1 / μ₀) F^ab
obtains Maxwell's equations in vacuum. The source equations (Gauss' law for electricity and the Maxwell-Ampère law) are
∂_b F^ab = μ₀ j^a
while the other two (Gauss' law for magnetism and Faraday's law) are obtained from the fact that F is the 4-curl of A, or, in other words, from the fact that the Bianchi identity holds for the electromagnetic field tensor:
F_{ab,c} + F_{ca,b} + F_{bc,a} = 0
where the comma indicates a partial derivative.
After Newtonian gravitation was found to be inconsistent with special relativity, Albert Einstein formulated a new theory of gravitation called general relativity. This treats gravitation as a geometric phenomenon ('curved spacetime') caused by masses and represents the gravitational field mathematically by a tensor field called the metric tensor. The Einstein field equations describe how this curvature is produced. Newtonian gravitation is now superseded by Einstein's theory of general relativity, in which gravitation is thought of as being due to a curved spacetime, caused by masses. The Einstein field equation describes how this curvature is produced by masses,
G_ab = κ T_ab
where κ = 8πG/c⁴ is a constant which appears in the Einstein field equations (and not the action), G_ab is the Einstein tensor built from the Ricci curvature of the metric, and T_ab is the stress-energy tensor describing the matter sources.
The vacuum solution can be obtained by varying the following Einstein-Hilbert action with respect to the metric
S[g] = (1 / 2κ) ∫ R √(−g) d⁴x
where R is the Ricci scalar of the metric tensor g_ab.
The vacuum field equations are the field equations written without matter (including sources). Solutions of the vacuum field equations are called vacuum solutions. The field equations may be derived by using the Einstein-Hilbert action. Varying the Lagrangian
L = R √(−g)
with respect to the metric yields the vacuum field equations, G_ab = 0.
Attempts to create a unified field theory based on classical physics are classical unified field theories. During the years between the two World Wars, the idea of unification of gravity with electromagnetism was actively pursued by several mathematicians and physicists like Albert Einstein, Theodor Kaluza, Hermann Weyl, Arthur Eddington, Gustav Mie and Ernst Reichenbacher.
Early attempts to create such a theory were based on incorporation of electromagnetic fields into the geometry of general relativity. In 1918, the first geometrization of the electromagnetic field was proposed by Hermann Weyl. In 1919, a five-dimensional approach was suggested by Theodor Kaluza. From that, a theory called Kaluza-Klein Theory was developed. It attempts to unify gravitation and electromagnetism in a five-dimensional space-time. There are several ways of extending the representational framework for a unified field theory which have been considered by Einstein and other researchers. These extensions in general are based on two options. The first option is based on relaxing the conditions imposed on the original formulation, and the second is based on introducing other mathematical objects into the theory. An example of the first option is relaxing the restriction to four-dimensional space-time by considering higher-dimensional representations. That is used in Kaluza-Klein Theory. For the second, the most prominent example arises from the concept of the affine connection that was introduced into the theory of general relativity mainly through the work of Tullio Levi-Civita and Hermann Weyl.
Further development of quantum field theory changed the focus of searching for unified field theory from classical to quantum description. Because of that, many theoretical physicists gave up looking for a classical unified field theory. Quantum field theory would include unification of the other two fundamental forces of nature, the strong and weak nuclear force, which act at the subatomic level.
"At the midpoint in totality, the corona stands out most clearly, its shape and extent never quite the same from one eclipse to another. And only the eye can do the corona justice, its special pattern of faint wisps and spikes on this day never seen before and never to be seen again." quoted from MrEclipse.com
The first total solar eclipse the U.S. has seen in over a generation is just months away.
Before then, we have a couple of minor eclipses coming soon. A penumbral lunar eclipse, which is the wet blanket of eclipses, will occur Feb. 11. The moon will only dim slightly and will not be noticeable to the untrained eye.
A couple of weeks later, there will be a solar eclipse. Many times, a lunar eclipse is followed closely by a solar eclipse. The February solar eclipse will be an annular eclipse, occurring over the Southern Hemisphere. An annular eclipse is much more exciting than a penumbral lunar eclipse, but still not as dramatic as a total solar eclipse.
The moon is too far away to block the entire solar disk. Then we will wait 6 months for the headlining act.
Before the big one, we will have a partial lunar eclipse on Aug. 7, visible over the Eastern Hemisphere. There will not be another total lunar eclipse until Jan. 31, 2018.
Coast to coast total solar eclipse Aug. 21
There has not been a total solar eclipse in the lower 48 since 1979. That eclipse only affected a small portion of the Northwest. This year's eclipse will occur from the Pacific to Atlantic coast. There has not been a coast to coast eclipse within the lower 48 since 1918! So, you could say this is a 1 in 100 year event!!!
There are a lot of places to find information on the event. Greatamericaneclipse.com is full of good information, including maps of each state affected by totality. You can follow a few Facebook pages like NationalEclpise.com.
Even though the sun will become completely blocked by the moon, you still cannot look directly at it with the naked eye. You can find eclipse glasses to order online. You basically need the type of lens that a welder's mask has (#14 or higher). Viewing any portion of the eclipse without the proper equipment can permanently blind you!!!
Cloud cover climatology
I am sure many people are wondering about the best location to see it. Let's take a look at who sees the most clear sky in August!
The climatology shows that clouds are more prevalent in the eastern states than in the west. Eastern Oregon looks to be the place to have the best chance for a clear sky Aug. 21.
A few major cities will be within the path of totality. Charleston and Columbia, South Carolina, are squarely in the eclipse path. Nashville, Tennessee, and Paducah, Kentucky, will have totality. The eclipse path will skirt Kansas City and St. Louis, and it also crosses Oregon's Willamette Valley, including the state capital, Salem. A total of five state capitals will experience totality, also including Lincoln, Nebraska and Jefferson City, Missouri.
Experiencing the eclipse
An eclipse observer has described totality like this..."The sky surrounding the Sun will grow very dark very quickly. In real time, you will be able to see the deep blue turn to twilight blue, and then to bluish-black. Stars and planets will pop out of nowhere. Roosters will crow and insects will chirp as though night is falling."
The Aharonov–Bohm effect, sometimes called the Ehrenberg–Siday–Aharonov–Bohm effect, is a quantum mechanical phenomenon in which an electrically charged particle is affected by an electromagnetic potential (V, A), despite being confined to a region in which both the magnetic field B and electric field E are zero. The underlying mechanism is the coupling of the electromagnetic potential with the complex phase of a charged particle's wave function, and the Aharonov–Bohm effect is accordingly illustrated by interference experiments.
The most commonly described case, sometimes called the Aharonov–Bohm solenoid effect, takes place when the wave function of a charged particle passing around a long solenoid experiences a phase shift as a result of the enclosed magnetic field, despite the magnetic field being negligible in the region through which the particle passes and the particle's wavefunction being negligible inside the solenoid. This phase shift has been observed experimentally. There are also magnetic Aharonov–Bohm effects on bound energies and scattering cross sections, but these cases have not been experimentally tested. An electric Aharonov–Bohm phenomenon was also predicted, in which a charged particle is affected by regions with different electrical potentials but zero electric field, but this has no experimental confirmation yet. A separate "molecular" Aharonov–Bohm effect was proposed for nuclear motion in multiply connected regions, but this has been argued to be a different kind of geometric phase as it is "neither nonlocal nor topological", depending only on local quantities along the nuclear path.
Werner Ehrenberg (1901–1975) and Raymond E. Siday first predicted the effect in 1949. Yakir Aharonov and David Bohm published their analysis in 1959. After publication of the 1959 paper, Bohm was informed of Ehrenberg and Siday's work, which was acknowledged and credited in Bohm and Aharonov's subsequent 1961 paper. The effect was confirmed experimentally, with a very large error, while Bohm was still alive. By the time the error was down to a respectable value, Bohm had died.
In the 18th and 19th centuries, physics was dominated by Newtonian dynamics, with its emphasis on forces. Electromagnetic phenomena were elucidated by a series of experiments involving the measurement of forces between charges, currents and magnets in various configurations. Eventually, a description arose according to which charges, currents and magnets acted as local sources of propagating force fields, which then acted on other charges and currents locally through the Lorentz force law. In this framework, because one of the observed properties of the electric field was that it was irrotational, and one of the observed properties of the magnetic field was that it was divergenceless, it was possible to express an electrostatic field as the gradient of a scalar potential (e.g. Coulomb's electrostatic potential, which is mathematically analogous to the classical gravitational potential) and a stationary magnetic field as the curl of a vector potential (then a new concept – the idea of a scalar potential was already well accepted by analogy with gravitational potential). The language of potentials generalised seamlessly to the fully dynamic case but, since all physical effects were describable in terms of the fields which were the derivatives of the potentials, potentials (unlike fields) were not uniquely determined by physical effects: potentials were only defined up to an arbitrary additive constant electrostatic potential and an irrotational stationary magnetic vector potential.
The Aharonov–Bohm effect is important conceptually because it bears on three issues apparent in the recasting of (Maxwell's) classical electromagnetic theory as a gauge theory, which before the advent of quantum mechanics could be argued to be a mathematical reformulation with no physical consequences. The Aharonov–Bohm thought experiments and their experimental realization imply that the issues were not just philosophical.
The three issues are:
- whether potentials are "physical" or just a convenient tool for calculating force fields;
- whether action principles are fundamental;
- the principle of locality.
Potentials vs fields
It is generally argued that Aharonov–Bohm effect illustrates the physicality of electromagnetic potentials, Φ and A, in quantum mechanics. Classically it was possible to argue that only the electromagnetic fields are physical, while the electromagnetic potentials are purely mathematical constructs, that due to gauge freedom aren't even unique for a given electromagnetic field.
However, Vaidman has challenged this interpretation by showing that the AB effect can be explained without the use of potentials so long as one gives a full quantum mechanical treatment to the source charges that produce the electromagnetic field. According to this view, the potential in quantum mechanics is just as physical (or non-physical) as it was classically. Aharonov, Cohen, and Rohrlich responded that the effect may be due to a local gauge potential or due to non-local gauge-invariant fields.
Two papers published in 2017 in the journal Physical Review A have demonstrated a quantum mechanical solution for the system. Their analysis shows that the phase shift can be viewed as generated by the solenoid's vector potential acting on the electron, or the electron's vector potential acting on the solenoid, or the electron and solenoid currents acting on the quantized vector potential.
Global action vs. local forces
Similarly, the Aharonov–Bohm effect illustrates that the Lagrangian approach to dynamics, based on energies, is not just a computational aid to the Newtonian approach, based on forces. Thus the Aharonov–Bohm effect validates the view that forces are an incomplete way to formulate physics, and potential energies must be used instead. In fact Richard Feynman complained that he had been taught electromagnetism from the perspective of electromagnetic fields, and he wished later in life he had been taught to think in terms of the electromagnetic potential instead, as this would be more fundamental. In Feynman's path-integral view of dynamics, the potential field directly changes the phase of an electron wave function, and it is these changes in phase that lead to measurable quantities.
Locality of electromagnetic effects
The Aharonov–Bohm effect shows that the local E and B fields do not contain full information about the electromagnetic field, and the electromagnetic four-potential, (Φ, A), must be used instead. By Stokes' theorem, the magnitude of the Aharonov–Bohm effect can be calculated using the electromagnetic fields alone, or using the four-potential alone. But when using just the electromagnetic fields, the effect depends on the field values in a region from which the test particle is excluded. In contrast, when using just the electromagnetic four-potential, the effect only depends on the potential in the region where the test particle is allowed. Therefore, one must either abandon the principle of locality, which most physicists are reluctant to do, or accept that the electromagnetic four-potential offers a more complete description of electromagnetism than the electric and magnetic fields can. On the other hand, the AB effect is crucially quantum mechanical; quantum mechanics is well-known to feature non-local effects (albeit still disallowing superluminal communication), and Vaidman has argued that this is just a non-local quantum effect in a different form.
In classical electromagnetism the two descriptions were equivalent. With the addition of quantum theory, though, the electromagnetic potentials Φ and A are seen as being more fundamental. Despite this, all observable effects end up being expressible in terms of the electromagnetic fields, E and B. This is interesting because, while you can calculate the electromagnetic field from the four-potential, due to gauge freedom the reverse is not true.
Magnetic solenoid effect
The magnetic Aharonov–Bohm effect can be seen as a result of the requirement that quantum physics be invariant with respect to the gauge choice for the electromagnetic potential, of which the magnetic vector potential forms part.
Therefore, particles with the same start and end points, but travelling along two different routes, will acquire a phase difference Δφ determined by the magnetic flux Φ_B through the area between the paths (via Stokes' theorem and ∇ × A = B), and given by:
Δφ = q Φ_B / ħ
In quantum mechanics the same particle can travel between two points by a variety of paths. Therefore, this phase difference can be observed by placing a solenoid between the slits of a double-slit experiment (or equivalent). An ideal solenoid (i.e. infinitely long and with a perfectly uniform current distribution) encloses a magnetic field B, but does not produce any magnetic field outside of its cylinder, and thus the charged particle (e.g. an electron) passing outside experiences no magnetic field. However, there is a (curl-free) vector potential A outside the solenoid with an enclosed flux, and so the relative phase of particles passing through one slit or the other is altered by whether the solenoid current is turned on or off. This corresponds to an observable shift of the interference fringes on the observation plane.
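A back-of-the-envelope numerical sketch of this phase shift (the solenoid radius and field below are invented for illustration; only the relation Δφ = qΦ_B/ħ is taken from the text above):

```python
import math

HBAR = 1.054571817e-34      # reduced Planck constant, J*s
E_CHARGE = 1.602176634e-19  # elementary charge, C

def ab_phase(flux_weber, charge=E_CHARGE):
    """Aharonov-Bohm phase difference q*Phi_B/hbar for an enclosed flux Phi_B."""
    return charge * flux_weber / HBAR

# Example: electrons passing around a solenoid of radius 5 micrometres
# with a 0.1 T field confined to its interior.
radius = 5e-6
flux = 0.1 * math.pi * radius ** 2
print(ab_phase(flux) / (2 * math.pi), "full fringe shifts")
```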
The same phase effect is responsible for the quantized-flux requirement in superconducting loops. This quantization occurs because the superconducting wave function must be single valued: its phase difference around a closed loop must be an integer multiple of 2π (with the charge q = 2e for the electron Cooper pairs), and thus the flux must be a multiple of h/2e. The superconducting flux quantum was actually predicted prior to Aharonov and Bohm, by F. London in 1948 using a phenomenological model.
The first claimed experimental confirmation was by Robert G. Chambers in 1960, in an electron interferometer with a magnetic field produced by a thin iron whisker, and other early work is summarized in Olariu and Popèscu (1984). However, subsequent authors questioned the validity of several of these early results because the electrons may not have been completely shielded from the magnetic fields. An early experiment in which an unambiguous Aharonov–Bohm effect was observed by completely excluding the magnetic field from the electron path (with the help of a superconducting film) was performed by Tonomura et al. in 1986. The effect's scope and application continues to expand. Webb et al. (1985) demonstrated Aharonov–Bohm oscillations in ordinary, non-superconducting metallic rings; for a discussion, see Schwarzschild (1986) and Imry & Webb (1989). Bachtold et al. (1999) detected the effect in carbon nanotubes; for a discussion, see Kong et al. (2004).
Monopoles and Dirac strings
The magnetic Aharonov–Bohm effect is also closely related to Dirac's argument that the existence of a magnetic monopole can be accommodated by the existing magnetic source-free Maxwell's equations if both electric and magnetic charges are quantized.
A magnetic monopole implies a mathematical singularity in the vector potential, which can be expressed as a Dirac string of infinitesimal diameter that contains the equivalent of all of the 4πg flux from a monopole "charge" g. The Dirac string starts from, and terminates on, a magnetic monopole. Thus, assuming the absence of an infinite-range scattering effect by this arbitrary choice of singularity, the requirement of single-valued wave functions (as above) necessitates charge-quantization: 2 qe qm / ħc must be an integer (in cgs units) for any electric charge qe and magnetic charge qm.
Like the electromagnetic potential A the Dirac string is not gauge invariant (it moves around with fixed endpoints under a gauge transformation) and so is also not directly measurable.
Just as the phase of the wave function depends upon the magnetic vector potential, it also depends upon the scalar electric potential. By constructing a situation in which the electrostatic potential varies for two paths of a particle, through regions of zero electric field, an observable Aharonov–Bohm interference phenomenon from the phase shift has been predicted; again, the absence of an electric field means that, classically, there would be no effect.
From the Schrödinger equation, the phase of an eigenfunction with energy E goes as exp(−iEt/ħ). The energy, however, will depend upon the electrostatic potential V for a particle with charge q. In particular, for a region with constant potential V (zero field), the electric potential energy qV is simply added to E, resulting in a phase shift:
Δφ = −qVt / ħ
where t is the time spent in the potential.
The initial theoretical proposal for this effect suggested an experiment where charges pass through conducting cylinders along two paths, which shield the particles from external electric fields in the regions where they travel, but still allow a varying potential to be applied by charging the cylinders. This proved difficult to realize, however. Instead, a different experiment was proposed involving a ring geometry interrupted by tunnel barriers, with a bias voltage V relating the potentials of the two halves of the ring. This situation results in an Aharonov–Bohm phase shift as above, and was observed experimentally in 1998.
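For the electric version, a similarly hedged sketch (illustrative numbers only) evaluates the phase −qVt/ħ accumulated during the time spent at a constant potential:

```python
HBAR = 1.054571817e-34      # reduced Planck constant, J*s
E_CHARGE = 1.602176634e-19  # elementary charge, C

def electric_ab_phase(voltage, dwell_time, charge=E_CHARGE):
    """Phase shift -q*V*t/hbar picked up in a field-free region held at potential V."""
    return -charge * voltage * dwell_time / HBAR

# Example: a bias of 1 microvolt applied while the electron spends 1 ns in one arm.
print(electric_ab_phase(1e-6, 1e-9))  # about -1.5 radians
```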
Aharonov–Bohm nano rings
Nano rings were created by accident while intending to make quantum dots. They have interesting optical properties associated with excitons and the Aharonov–Bohm effect. Applications of these rings, used as light capacitors or buffers, include photonic computing and communications technology. Analysis and measurement of geometric phases in mesoscopic rings is ongoing. It is even suggested they could be used to make a form of slow glass.
Several experiments, including some reported in 2012, show Aharonov-Bohm oscillations in charge density wave (CDW) current versus magnetic flux, of dominant period h/2e through CDW rings up to 85 µm in circumference above 77 K. This behavior is similar to that of the superconducting quantum interference devices (see SQUID).
The Aharonov–Bohm effect can be understood from the fact that one can only measure absolute values of the wave function. While this allows for measurement of phase differences through quantum interference experiments, there is no way to specify a wavefunction with constant absolute phase. In the absence of an electromagnetic field one can come close by declaring the eigenfunction of the momentum operator with zero momentum to be the function "1" (ignoring normalization problems) and specifying wave functions relative to this eigenfunction "1". In this representation the i-momentum operator is (up to a factor ħ/i) the differential operator ∂/∂x^i. However, by gauge invariance, it is equally valid to declare the zero momentum eigenfunction to be exp(iφ(x)), at the cost of representing the i-momentum operator (up to a factor) as ∂/∂x^i − i ∂φ/∂x^i, i.e. with a pure gauge vector potential dφ. There is no real asymmetry because representing the former in terms of the latter is just as messy as representing the latter in terms of the former. This means that it is physically more natural to describe wave "functions", in the language of differential geometry, as sections in a complex line bundle with a hermitian metric and a U(1)-connection ∇. The curvature form of the connection, ∇∘∇ = iF, is, up to the factor i, the Faraday tensor of the electromagnetic field strength. The Aharonov–Bohm effect is then a manifestation of the fact that a connection with zero curvature (i.e. flat) need not be trivial, since it can have monodromy along a topologically nontrivial path fully contained in the zero curvature (i.e. field free) region. By definition this means that sections that are parallelly translated along a topologically non trivial path pick up a phase, so that covariant constant sections cannot be defined over the whole field free region.
Given a trivialization of the line-bundle, a non-vanishing section, the U(1)-connection is given by the 1-form corresponding to the electromagnetic four-potential A as ∇ = d + iA, where d means exterior derivation on the Minkowski space. The monodromy is the holonomy of the flat connection. The holonomy of a connection, flat or non flat, around a closed loop γ is exp(i ∮_γ A) (one can show this does not depend on the trivialization but only on the connection). For a flat connection one can find a gauge transformation in any simply connected field free region (acting on wave functions and connections) that gauges away the vector potential. However, if the monodromy is nontrivial, there is no such gauge transformation for the whole outside region. In fact as a consequence of Stokes' theorem, the holonomy is determined by the magnetic flux through a surface S bounding the loop γ, but such a surface may exist only if γ passes through a region of non trivial field:
exp(i ∮_γ A) = exp(i ∫_S F)
The monodromy of the flat connection only depends on the topological type of the loop in the field free region (in fact on the loops homology class). The holonomy description is general, however, and works inside as well as outside the superconductor. Outside of the conducting tube containing the magnetic field, the field strength . In other words, outside the tube the connection is flat, and the monodromy of the loop contained in the field-free region depends only on the winding number around the tube. The monodromy of the connection for a loop going round once (winding number 1) is the phase difference of a particle interfering by propagating left and right of the superconducting tube containing the magnetic field. If one wants to ignore the physics inside the superconductor and only describe the physics in the outside region, it becomes natural and mathematically convenient to describe the quantum electron by a section in a complex line bundle with an "external" flat connection with monodromy
- magnetic flux through the tube /
rather than an external EM field . The Schrödinger equation readily generalizes to this situation by using the Laplacian of the connection for the (free) Hamiltonian
Equivalently, one can work in two simply connected regions with cuts that pass from the tube towards or away from the detection screen. In each of these regions the ordinary free Schrödinger equations would have to be solved, but in passing from one region to the other, in only one of the two connected components of the intersection (effectively in only one of the slits) a monodromy factor is picked up, which results in the shift in the interference pattern as one changes the flux.
Effects with similar mathematical interpretation can be found in other fields. For example, in classical statistical physics, quantization of a molecular motor motion in a stochastic environment can be interpreted as an Aharonov–Bohm effect induced by a gauge field acting in the space of control parameters.
- Aharonov, Y; Bohm, D (1959). "Significance of electromagnetic potentials in quantum theory". Physical Review. 115: 485–491. Bibcode:1959PhRv..115..485A. doi:10.1103/PhysRev.115.485.
- Batelaan, H. & Tonomura, A. (Sep 2009). "The Aharonov–Bohm effects: Variations on a Subtle Theme". Physics Today. 62 (9): 38–43. Bibcode:2009PhT....62i..38B. doi:10.1063/1.3226854.
- Sjöqvist, E (2002). "Locality and topology in the molecular Aharonov–Bohm effect". Physical Review Letters. 89 (21): 210401. arXiv: . Bibcode:2002PhRvL..89u0401S. doi:10.1103/PhysRevLett.89.210401. PMID 12443394.
- Ehrenberg, W; Siday, RE (1949). "The Refractive Index in Electron Optics and the Principles of Dynamics". Proceedings of the Physical Society. Series B. 62: 8–21. Bibcode:1949PPSB...62....8E. doi:10.1088/0370-1301/62/1/303.
- Peat, FD (1997). Infinite Potential: The Life and Times of David Bohm. Addison-Wesley. ISBN 0-201-40635-7. Archived from the original on 2015-03-18.
- Aharonov, Y; Bohm, D (1961). "Further Considerations on Electromagnetic Potentials in the Quantum Theory". Physical Review. 123: 1511–1524. Bibcode:1961PhRv..123.1511A. doi:10.1103/PhysRev.123.1511.
- Peshkin, M; Tonomura, A (1989). The Aharonov–Bohm effect. Springer-Verlag. ISBN 3-540-51567-4.
- "Seven wonders of the quantum world", newscientist.com
- Vaidman, L. (Oct 2012). "Role of potentials in the Aharonov-Bohm effect". Physical Review A. 86 (4): 040101. arXiv: . Bibcode:2012PhRvA..86d0101V. doi:10.1103/PhysRevA.86.040101.
- P. Pearle; A. Rizzi (2017). "Quantum-mechanical inclusion of the source in the Aharonov-Bohm effects". Phys Rev A. 95: 052123.
- P. Pearle; A. Rizzi (2017). "Quantized vector potential and alternative views of the magnetic Aharonov-Bohm phase shift". Phys Rev A. 95: 052124.
- Feynman, R. The Feynman Lectures on Physics. 2. pp. 15–25.
knowledge of the classical electromagnetic field acting locally on a particle is not sufficient to predict its quantum-mechanical behavior. and ...is the vector potential a "real" field? ... a real field is a mathematical device for avoiding the idea of action at a distance. .... for a long time it was believed that A was not a "real" field. .... there are phenomena involving quantum mechanics which show that in fact A is a "real" field in the sense that we have defined it..... E and B are slowly disappearing from the modern expression of physical laws; they are being replaced by A [the vector potential] and [the scalar potential]
- London, F (1948). "On the Problem of the Molecular Theory of Superconductivity". Physical Review. 74: 562. Bibcode:1948PhRv...74..562L. doi:10.1103/PhysRev.74.562.
- Chambers, R.G. (1960). "Shift of an Electron Interference Pattern by Enclosed Magnetic Flux". Physical Review Letters. 5: 3–5. Bibcode:1960PhRvL...5....3C. doi:10.1103/PhysRevLett.5.3.
- Popescu, S. (2010). "Dynamical quantum non-locality". Nature Physics. 6 (3): 151–153. Bibcode:2010NatPh...6..151P. doi:10.1038/nphys1619.
- Olariu, S; Popescu, II (1985). "The quantum effects of electromagnetic fluxes". Reviews of Modern Physics. 57: 339. Bibcode:1985RvMP...57..339O. doi:10.1103/RevModPhys.57.339.
- P. Bocchieri and A. Loinger, Nuovo Cimento Soc. Ital. Fis. 47A, 475 (1978); P. Bocchieri, A. Loinger, and G. Siragusa, Nuovo Cimento Soc. Ital. Fis. 51A, 1 (1979); P. Bocchieri and A. Loinger, Lett. Nuovo Cimento Soc. Ital. Fis. 30, 449 (1981). P. Bocchieri, A. Loinger, and G. Siragusa, Lett. Nuovo Cimento Soc. Ital. Fis. 35, 370 (1982).
- S. M. Roy, Phys. Rev. Lett. 44, 111 (1980)
- Akira Tonomura, Nobuyuki Osakabe, Tsuyoshi Matsuda, Takeshi Kawasaki, and Junji Endo, "Evidence for Aharonov-Bohm Effect with Magnetic Field Completely Shielded from Electron wave", Phys. Rev. Lett. vol. 56, pp. 792–795 (1986).
- Osakabe, N; et al. (1986). "Experimental confirmation of Aharonov–Bohm effect using a toroidal magnetic field confined by a superconductor". Physical Review A. 34 (2): 815–822. Bibcode:1986PhRvA..34..815O. doi:10.1103/PhysRevA.34.815. PMID 9897338.
- Webb, RA; Washburn, S; Umbach, CP; Laibowitz, RB (1985). "Observation of h/e Aharonov–Bohm Oscillations in Normal-Metal Rings". Physical Review Letters. 54 (25): 2696–2699. Bibcode:1985PhRvL..54.2696W. doi:10.1103/PhysRevLett.54.2696. PMID 10031414.
- Schwarzschild, B (1986). "Currents in Normal-Metal Rings Exhibit Aharonov–Bohm Effect". Physics Today. 39 (1): 17. Bibcode:1986PhT....39a..17S. doi:10.1063/1.2814843.
- Imry, Y; Webb, RA (1989). "Quantum Interference and the Aharonov–Bohm Effect". Scientific American. 260 (4): 56–62. Bibcode:1989SciAm.260d..56I. doi:10.1038/scientificamerican0489-56.
- Schönenberger, C; Bachtold, Adrian; Strunk, Christoph; Salvetat, Jean-Paul; Bonard, Jean-Marc; Forró, Laszló; Nussbaumer, Thomas (1999). "Aharonov–Bohm oscillations in carbon nanotubes". Nature. 397 (6721): 673. Bibcode:1999Natur.397..673B. doi:10.1038/17755.
- Kong, J; Kouwenhoven, L; Dekker, C (2004). "Quantum change for nanotubes". Physics World. Retrieved 2009-08-17.
- van Oudenaarden, A; Devoret, Michel H.; Nazarov, Yu. V.; Mooij, J. E. (1998). "Magneto-electric Aharonov–Bohm effect in metal rings". Nature. 391 (6669): 768. Bibcode:1998Natur.391..768V. doi:10.1038/35808.
- Fischer, AM (2009). "Quantum doughnuts slow and freeze light at will". Innovation Reports. Retrieved 2008-08-17.
- Borunda, MF; et al. (2008). "Aharonov–Casher and spin Hall effects in two-dimensional mesoscopic ring structures with strong spin-orbit interaction". arXiv: [cond-mat.mes-hall].
- Grbic, B; et al. (2008). "Aharonov–Bohm oscillations in p-type GaAs quantum rings". Physica E. 40: 1273. arXiv: . Bibcode:2008PhyE...40.1273G. doi:10.1016/j.physe.2007.08.129.
- Fischer, AM; et al. (2009). "Exciton Storage in a Nanoscale Aharonov–Bohm Ring with Electric Field Tuning". Physical Review Letters. 102: 096405. arXiv: . Bibcode:2009PhRvL.102i6405F. doi:10.1103/PhysRevLett.102.096405.
- M. Tsubota; K. Inagaki; T. Matsuura & S. Tanda (2012). "Aharonov-Bohm effect in charge-density wave loops with inherent temporal current switching". EPL. 97 (5): 57011. arXiv: . Bibcode:2012EL.....9757011T. doi:10.1209/0295-5075/97/57011.
- Chernyak, VY; Sinitsyn, NA (2009). "Robust quantization of a molecular motor motion in a stochastic environment". Journal of Chemical Physics. 131 (18): 181101. arXiv: . Bibcode:2009JChPh.131r1101C. doi:10.1063/1.3263821. PMID 19916586.
- D. J. Thouless (1998). "§2.2 Gauge invariance and the Aharonov–Bohm effect". Topological quantum numbers in nonrelativistic physics. World Scientific. pp. 18ff. ISBN 981-02-3025-7. | <urn:uuid:ef15bf0d-5a6f-4c2d-a48a-7734d892afb8> | 3.453125 | 6,244 | Knowledge Article | Science & Tech. | 51.621202 | 95,553,384 |
The text state represents a one line plain text edit control for the element's value.
autocomplete= on/ off/ default
The on state indicates that the value is not particularly sensitive and the user can expect to be able to rely on his user agent to remember values he has entered for that control.
The off state indicates either that the control's input data is particularly sensitive (for example the activation code for a nuclear weapon); or that it is a value that will never be reused (for example a one-time-key for a bank login) and the user will therefore have to explicitly enter the data each time, instead of being able to rely on the UA to prefill the value for him; or that the document provides its own autocomplete mechanism and does not want the user agent to provide autocompletion values. [Example A]
The default state indicates that the user agent is to use the autocomplete attribute on the element's form owner instead. (By default, the autocomplete attribute of form elements is in the on state.)
list= ID reference
Identify an element that lists predefined options suggested to the user.
If present, its value must be the ID of a datalist element in the same document.
maxlength= positive integer
Gives the maximum allowed value length of the element.
Gives the name of the input element.
Specifies a regular expression against which the control's value is to be checked.
When an input element has a pattern attribute specified, authors should include a title attribute to give a description of the pattern.
Represents a short hint (a word or short phrase) intended to aid the user with data entry.
A hint could be a sample value or a brief description of the expected format. For a longer hint or other advisory text, the title attribute is more appropriate. [Example B]
Controls whether or not the user can edit the form control.
When specified, the element is required.
size= valid non-negative integer
The number of options meant to be shown by the control represented by its element.
Gives the default value of the input element.
Banks frequently do not want UAs to prefill login information [try it]:
<p><label>Account: <input type="text" name="ac" autocomplete="off"></label></p> <p><label>PIN: <input type="password" name="pin" autocomplete="off"></label></p>
Here is an example of a mail configuration user interface that uses the placeholder attribute [try it]:
<fieldset> <legend>Mail Account</legend> <p><label>Name: <input type="text" name="fullname" placeholder="John Ratzenberger"></label></p> <p><label>Address: <input type="email" name="address" placeholder="email@example.com"></label></p> <p><label>Password: <input type="password" name="password"></label></p> <p><label>Description: <input type="text" name="desc" placeholder="My Email Account"></label></p> </fieldset>
The HTML5 specification defines the Text state in 220.127.116.11.2 Text state and Search state. | <urn:uuid:740eca97-3311-4f5d-bd24-cfb11d6bedee> | 2.765625 | 688 | Documentation | Software Dev. | 39.880273 | 95,553,390 |
|MLA Citation:||Bloomfield, Louis A. "Question 1568: What does a radio wave consist of?"|
How Everything Works 22 Jul 2018. 22 Jul 2018 <http://howeverythingworks.org/print1.php?QNum=1568>.
The idea of a wave that travels through space itself was a rather disorienting notion to scientists in the late 1800s. They were used to the idea that waves are disturbances in a tangible material or "medium": fluctuations in the density of air, ripples on the surface of water, vibrations of a taut string. Having observed that light and radio waves are electromagnetic waves, they set about looking for the medium that supported those waves. They were expecting to find this "luminiferous aether" but they failed. In fact, the absence of an aether led in part to Einstein's theory of special relativity.
The structure of a radio wave, or any electromagnetic wave, is quite simple. It consists only of a fluctuating electric field and a fluctuating magnetic field. An electric field is a structure in space that affects electric charge; it pushes on charge and causes that charge to accelerate. Similarly, a magnetic field is a structure that affects magnetic pole. Remarkably, changing electric fields produce magnetic fields and changing magnetic fields produce electric fields. That interrelatedness allows the wave's fluctuating electric field to produce its fluctuating magnetic field and vice verse. The wave's electric and magnetic fields endless recreate one another. Although electric charge or magnetic pole is needed to emit or receive a radio wave, that wave can travel perfectly well for billions of light years without involving any charge or pole. It travels through space itself. | <urn:uuid:2f071f9b-dc91-4d79-ad40-611b9ff44386> | 3.46875 | 347 | Knowledge Article | Science & Tech. | 46.571422 | 95,553,393 |
Working with elements in XAML Designer
The new home for Visual Studio documentation is Visual Studio 2017 Documentation on docs.microsoft.com.
The latest version of this topic can be found at Working with elements in XAML Designer.
You can add elements—controls, layouts, and shapes—to your app in XAML, in code, or by using XAML Designer. This topic describes how to work with elements in XAML Designer in Visual Studio or Blend for Visual Studio.
Adding an element to a layout
Layout is the process of sizing and positioning elements in a UI. To position visual elements, you must put them in a layout Panel. A
Panel has a child property which is a collection of FrameworkElement types. You can use various
Panel child elements, such as Canvas, StackPanel, and Grid, to serve as layout containers and to position and arrange the elements on a page.
By default, a
Grid panel is used as the top-level layout container within a page or form. You can add layout panels, controls, or other elements within the top-level page layout.
To add an element to a layout
In XAML Designer, do one of the following:
Changing the layering order of elements
When there are two elements on the artboard in XAML Designer, one element will appear in front of the other in the layering order. At the bottom of the list of elements in the Document Outline window is the front-most element (except for when the ZIndex property for an element is set). When you insert an element into a page, form, or layout container, the element is automatically placed in front of other elements in the active container element. To change the order of elements, you can use the Order commands or drag the elements in the object tree in the Document Outline window.
To change the layering order
Do one of the following:
In the Document Outline window, drag the elements up or down to create the desired layering order.
Right-click the element in the Document Outline window or the artboard for which you want to change the layering order, point to Order, and then click one of the following:
Bring to Front to bring the element all the way to the front of the order.
Bring Forward to bring the element forward one level in the order.
Send Backward to send the element back one level in the order.
Send to Back to send the element all the way to the back of the order.
Change the ZIndex property in the Layout section in the Properties window. For overlapping elements, the ZIndex property takes precedence over the order of elements shown in the Document Outline window. An element that has a lower ZIndex value appears in front when elements overlap.
Changing the alignment of an element
You can align elements in the artboard by using menu commands or by dragging elements to snaplines.
A snapline is a visual cue that helps you align an element relative to other elements in the app.
To align two or more elements by using menu commands
Select the elements that you want to align. You can select more than one element by pressing and holding the Ctrl key while you select the elements.
Select one of the following properties under HorizontalAlignment in the Layout section of the Properties window: Left, Center, Right, or Stretch.
Select one of the following properties under VerticalAlignment in the Layout section of the Properties window: Top, Center, Bottom, or Stretch.
To align two or more elements by using snaplines
In XAML Designer, in a layout that contains at least two elements, drag or resize one of the elements so that the edge is aligned with another element.
When the edges are aligned, an alignment boundary appears to indicate alignment. The alignment boundary is a red dashed line. Alignment boundaries appear only when snapping to snaplines is enabled. For an illustration of the artboard that shows an alignment boundary, see Creating a UI by using XAML Designer.
Changing the an element's margins
The margins in XAML Designer determine the amount of empty space that is around an element on the artboard. For example, margins specify the amount of space between the outside edges of an element and the boundaries of a
Grid panel that contains the element. Margins also specify the amount of space between elements that are contained in a
To change an element's margins in the Properties window
Select the element whose margins you want to change.
Under Layout in the Properties window, change the value (in pixels or device-independent units, which are approximately 1/96 inch) for any of the Margin properties (Top, Left, Right, or Bottom).
To change an element's margins in the artboard
To change the margins of an element relative to its layout container, click the margin adorners that appear around the element in the artboard when the element is selected and is within a layout container. For an illustration that shows margin adorners, see Creating a UI by using XAML Designer.
If a margin adorner is open, vertically or horizontally, that margin isn't set. If a margin adorner is closed, that margin is set.
When you open a margin adorner and the opposite margin isn't set, the opposite margin is set to the correct value according to the location of the element in the artboard. For opposite margins, such as the Left and Right margins, at least one property is always set.
Elements placed inside some layout containers, such as a T:Windows.UI.Xaml.Controls.Canvas, don't have margin adorners. Elements placed inside a T:Windows.UI.Xaml.Controls.StackPanel have margin adorners for either the left and right margins or the top and bottom margins, depending on the orientation of the
Grouping and ungrouping elements
Grouping two or more elements in XAML Designer creates a new layout container and places those elements within that container. Placing two or more elements together in a layout container enables you to easily select, move, and transform the group as if the elements in that group were one element. Grouping is also useful for identifying elements that are related to each other in some way, such as the buttons that make up a navigation element. When you ungroup elements, you are simply deleting the layout container that contained the elements.
To group elements into a new layout container
Select the elements that you want to group. (To select multiple elements, press and hold the Ctrl key while you click them.)
Right-click the selected elements, point to Group Into, and then click the type of layout container in which you want the group to reside.
If you select T:Windows.UI.Xaml.Controls.Viewbox, T:Windows.UI.Xaml.Controls.Border, or T:Windows.UI.Xaml.Controls.ScrollViewer to group your elements, the elements are placed in a new T:Windows.UI.Xaml.Controls.Grid panel within the T:Windows.UI.Xaml.Controls.Viewbox, T:Windows.UI.Xaml.Controls.Border, or T:Windows.UI.Xaml.Controls.ScrollViewer. If you ungroup elements in one of these layout containers, only the T:Windows.UI.Xaml.Controls.Viewbox, T:Windows.UI.Xaml.Controls.Border, or T:Windows.UI.Xaml.Controls.ScrollViewer is deleted, and the T:Windows.UI.Xaml.Controls.Grid panel remains. To delete the
Grid panel, ungroup the elements again.
To ungroup elements and delete the layout
- Right-click the group that you want to ungroup and click Ungroup.
You can also group or ungroup elements by right-clicking selected items in the Document Outline window and clicking Group Into or Ungroup.
Resetting the element layout
You can restore default values for specific layout properties of an element by using the Layout Reset commands. By using this command, you can reset the margin, alignment, width, height, and size of an element, either individually or collectively.
To reset the element layout
- In the Document Outline window or the artboard, right-click the element, choose Layout, Reset PropertyName, where PropertyName is the property that you want to reset (or choose Layout, Reset All to reset all the layout properties for the element). | <urn:uuid:a950aac8-d675-4933-90e9-0a4b9496329c> | 2.890625 | 1,795 | Documentation | Software Dev. | 56.083771 | 95,553,399 |
The ionospheric structure of the major planets has been measured since 1973, using the technique of radio occultation (Chap. II) from Pioneer and Voyager spacecrafts. The principal difference between the Pioneer and the Voyager measurements is that the former were carried out using a single frequency oscillator (S-band at 2.293 GHz or 13 cm-λ), while the latter employed dual frequency mode (S-band, and X-band at 8.6 GHz or 3.6 cm-λ).
KeywordsRadio Occultation Outer Planet Neutral Temperature Thermospheric Wind Major Planet
Unable to display preview. Download preview PDF. | <urn:uuid:4aedb1a1-124a-41bb-87c0-c30ac4bff259> | 3.0625 | 130 | Truncated | Science & Tech. | 62.093454 | 95,553,444 |
Comprehensive analysis shows that natural gas could displace both coal and low-emitting energy sources over the long term
A new analysis of global energy use, economics and the climate shows that without new climate policies, expanding the current bounty of inexpensive natural gas alone would not slow the growth of global greenhouse gas emissions worldwide over the long term, according to a study appearing today in Nature.
Scott Butner firstname.lastname@example.org
The Hermiston Generating Plant in Umatilla County, Oregon, resides nine miles south of the Columbia River. This 474 MW natural gas power plant generates electricity for consumers, steam for an adjacent potato processing plant, and contributes gray water to farmers. https://www.flickr.com/photos/rs_butner/
Because natural gas emits half the carbon dioxide of coal, many people hoped the recent natural gas boom could help slow climate change—and according to government analyses, natural gas did contribute partially to a decline in U.S. carbon dioxide emissions between 2007 and 2012. But, in the long run, according to this study, a global abundance of inexpensive natural gas would compete with all energy sources – not just higher-emitting coal, but also lower-emitting nuclear and renewable energy technologies such as wind and solar. Inexpensive natural gas would also accelerate economic growth and expand overall energy use.
“The effect is that abundant natural gas alone will do little to slow climate change,” said lead author Haewon McJeon, an economist at the Department of Energy's Pacific Northwest National Laboratory. “Global deployment of advanced natural gas production technology could double or triple the global natural gas production by 2050, but greenhouse gas emissions will continue to grow in the absence of climate policies that promote lower carbon energy sources.”
Recent advances in gas production technology based on horizontal drilling and hydraulic fracturing – also known as fracking – have led to bountiful, low-cost natural gas. Because gas emits far less carbon dioxide than coal, some researchers have linked the natural gas boom to recent reductions in greenhouse gas emissions in the United States. But could these advanced technologies also have an impact on emissions beyond North America and decades into the future?
To find out, a group of scientists, engineers and policy experts, led by PNNL's Joint Global Change Research Institute, gathered at a workshop in Cambridge, Maryland, in April 2013 to consider the long-term impact of an expansion of the current natural gas boom on the rest of the world. The researchers, hailing from the U.S., Australia, Austria, Germany and Italy, went home and projected what the world would be like in 2050 with and without a global natural gas boom. The five teams used different computer models that had been independently developed.
Their computer models included not just energy use and production, but also the broader economy and the climate system. These "integrated assessment models" accounted for energy use, the economy, and climate and the way these different systems interact with one another. The groups each computed projections halfway into the century.
Five for Five
“We didn’t really know how our first experiment would turn out, but we were surprised how little difference abundant gas made to total greenhouse gas emissions even though it was dramatically changing the global energy system,” said James "Jae" Edmonds, PNNL's chief scientist at JGCRI. "When we saw all five modeling teams reporting little difference in climate change, we knew we were onto something."
The key, the researchers said, is that the five different models provide an integrated, comprehensive view of the economy and the Earth system. Swapping out coal for natural gas in a simple model would cut greenhouse gas emissions, a result many people expected to see. But incorporating the behavior of the entire economy and how people create and use energy from all sources affect emissions in several ways:
• Natural gas replacing coal would reduce carbon emissions. But due to its lower cost, natural gas would also replace some low-carbon energy, such as renewable or nuclear energy. Overall changes result in a smaller reduction than expected due to natural gas replacing these other, low-carbon sources. In a sense, natural gas would become a larger slice of the energy pie.
• Abundant, less expensive natural gas would lower energy prices across the board, leading people to use more energy overall. In addition, inexpensive energy stimulates the economy, which also increases overall energy use. Consequently, the entire energy pie gets bigger.
• The main component of natural gas, methane, is a more potent greenhouse gas than carbon dioxide. During production and distribution, some methane inevitably escapes into the atmosphere. The researchers considered both high and low estimates for this so-called fugitive methane. Even at the lower end, fugitive methane adds to climate change.
The combined effect of the three, the scientists found, is that the global energy system could experience unprecedented changes in the growth of natural gas production and significant changes to the types of energy used, but without much reduction to projected climate change if new mitigation policies are not put in place to support the deployment of renewable energy technologies.
"Abundant gas may have a lot of benefits—economic growth, local air pollution, energy security, and so on. There’s been some hope that slowing climate change could also be one of its benefits, but that turns out not to be the case," said McJeon.
Scientists, engineers and economists from the following institutions contributed to the research: the JGCRI, a collaboration between PNNL and the University of Maryland, BAEconomics, the International Institute for Applied Systems Analysis, the Potsdam Institute for Climate Impact Research, the Centro Euromediterraneo sui Cambiamenti Climatici, and Resources for the Future.
PNNL researchers on this project were supported by the Global Technology Strategy Project, a public-private partnership. [http://www.globalchange.umd.edu/gtsp/]
Reference: Haewon McJeon, Jae Edmonds, Nico Bauer, Leon Clarke, Brian Fisher, Brian P. Flannery, Jérôme Hilaire, Volker Krey, Giacomo Marangoni, Raymond Mi, Keywan Riahi, Holger Rogner, Massimo Tavoni. Limited impact on decadal-scale climate change from increased use of natural gas, Nature October 15, 2014, doi: 10.1038/nature13837.
Interdisciplinary teams at Pacific Northwest National Laboratory address many of America's most pressing issues in energy, the environment and national security through advances in basic and applied science. Founded in 1965, PNNL employs 4,300 staff and has an annual budget of about $950 million. It is managed by Battelle for the U.S. Department of Energy’s Office of Science. As the single largest supporter of basic research in the physical sciences in the United States, the Office of Science is working to address some of the most pressing challenges of our time. For more information on PNNL, visit the PNNL News Center, or follow PNNL on Facebook, Google+, LinkedIn and Twitter.
Mobile: 208 520-1415
Mary Beckman | newswise
Upcycling of PET Bottles: New Ideas for Resource Cycles in Germany
25.06.2018 | Fraunhofer-Institut für Betriebsfestigkeit und Systemzuverlässigkeit LBF
Dry landscapes can increase disease transmission
20.06.2018 | Forschungsverbund Berlin e.V.
For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth.
To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength...
For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications.
Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar...
Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction.
A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in in the synthesis of new chemical...
Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy.
"Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy....
Ultra-short, high-intensity X-ray flashes open the door to the foundations of chemical reactions. Free-electron lasers generate these kinds of pulses, but there is a catch: the pulses vary in duration and energy. An international research team has now presented a solution: Using a ring of 16 detectors and a circularly polarized laser beam, they can determine both factors with attosecond accuracy.
Free-electron lasers (FELs) generate extremely short and intense X-ray flashes. Researchers can use these flashes to resolve structures with diameters on the...
13.07.2018 | Event News
12.07.2018 | Event News
03.07.2018 | Event News
18.07.2018 | Materials Sciences
18.07.2018 | Life Sciences
18.07.2018 | Health and Medicine | <urn:uuid:608ab406-bdea-49b9-a9ad-4b57f23fded0> | 3.84375 | 2,142 | Content Listing | Science & Tech. | 34.659185 | 95,553,466 |
Weighing without using a scale.
It’s quite possibly one of the most massive objects in the universe.
An incredibly fruitful mission sheds new secrets about the Milky Way.
NASA’s future planet hunter has arrived — and it’s set for glory.
A trio of beautiful, and very useful images.
Probably the most interesting you’re going to read today.
The Milky Way galaxy is on an eating spree.
A new take on the whole ‘twinkle twinkle’ thing.
The theory shows that as long as Earth’s interior stays hot, it should avoid this fate.
The elusive dark matter surprises us once again… this time by being absent.
If you happened to be alive 70,000 years ago, you’d be in for quite a show.
We’ve learned a lot about the early universe.
They’ve found planets in another galaxy — if that’s not jaw-dropping, I don’t know what is.
Supermassive black holes slowly eject cold gas.
It’s the farthest, oldest, and perhaps most mysterious object we’ve ever discovered.
Too big for a planet, too small for a star.
Quite possibly the biggest question in modern science.
If confirmed, this could indicate a remarkable progress in modern physics.
Aside from the romantic aspect, the study also offers an important scientific perspective.
It’s an impressive achievement. | <urn:uuid:7e96f6a0-c823-40bc-a843-4edcf21a9485> | 2.546875 | 311 | Content Listing | Science & Tech. | 61.427959 | 95,553,468 |
Dr Rodolphe Gozlan from Bournemouth's Centre for Conservation Ecology & Environmental Change, believes that too much is made of the small risks associated with these introductions.
Dr Gozlan’s study - "Introduction of non-native freshwater fish: Is it all bad?" - published in the March issue of the journal 'Fish and Fisheries', reveals that more than half of the 103 non-native freshwater species introduced worldwide were reported to have no adverse ecological impact on their environment.
His analysis of data from the Food and Agriculture Organisation (FAO) and FishBase found that the risk of ecological impact after the introduction of non-native freshwater fish was less than 10% for a large majority (84%) of the species analysed.
The research, funded by the European Commission, foresees an increase in the number of non-native freshwater fish introductions. Dr Gozlan believes that environmental changes to freshwater ecosystems will inevitably have implications on the distribution of native freshwater fish with a growing reality that we will increasingly depend upon non-native introductions, especially as aquaculture production increases.
To support his work, Dr Gozlan cites the introduction of non-native rainbow trout from North America, catfish from Africa and carp from Asia to Europe as having numerous benefits. Even non-native species that are considered as detrimental to ecosystems – such as the zebra mussel in the Great Lakes of North America or the Nile perch in Africa’s Lake Victoria – are not evaluated against other environmental pressure (i.e. habitat destruction, overfishing etc.)
Dr Gozlan advises that a more realistic, though controversial, attitude is needed in assessing future risks and calls for a critical debate to be opened on the real threats posed by non-native fish.
"This would mean protecting some introductions that present beneficial outcomes for biodiversity alongside a more systematic ban of species or families of fish presenting a higher historical ecological risk,” says Dr Gozlan. “The public perception of risk is something which cannot be ignored by any government or ruling body, but in order to gain public support in the fight for conservation of freshwater fish biodiversity, the message needs to be clear, detailed and educational."
Dr Gozlan also observes that over-assessing the risks attributed to the introduction of non-native freshwater fish has lead to a public perception that all similar introductions are harmful. This perception, he believes, overshadows the measurable benefits to be gained to the ecology and economy by the appropriate introduction of non-native species.
“It is the over-assessment of the small risks associated with introducing non-native freshwater fish that has led to the common perception that such introductions pose a threat to biodiversity,” he says.
Charles Elder | alfa
Upcycling of PET Bottles: New Ideas for Resource Cycles in Germany
25.06.2018 | Fraunhofer-Institut für Betriebsfestigkeit und Systemzuverlässigkeit LBF
Dry landscapes can increase disease transmission
20.06.2018 | Forschungsverbund Berlin e.V.
A new manufacturing technique uses a process similar to newspaper printing to form smoother and more flexible metals for making ultrafast electronic devices.
The low-cost process, developed by Purdue University researchers, combines tools already used in industry for manufacturing metals on a large scale, but uses...
For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth.
To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength...
For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications.
Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar...
Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction.
A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in in the synthesis of new chemical...
Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy.
"Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy....
13.07.2018 | Event News
12.07.2018 | Event News
03.07.2018 | Event News
20.07.2018 | Power and Electrical Engineering
20.07.2018 | Information Technology
20.07.2018 | Materials Sciences | <urn:uuid:984e7bff-6439-4fb7-839a-78d0539cb2df> | 2.9375 | 1,167 | Content Listing | Science & Tech. | 31.541616 | 95,553,491 |
Saturday, May 9, 2015
Decades of satellite observations and astronaut photographs show that clouds dominate space-based views of Earth. One study based on nearly a decade of satellite data estimated that about 67 percent of Earth’s surface is typically covered by clouds. This is especially the case over the oceans, where other research shows less than 10 percent of the sky is completely clear of clouds at any one time. Over land, 30 percent of skies are completely cloud free.
Earth’s cloudy nature is unmistakable in this global cloud fraction map, based on data collected by the Moderate Resolution Imaging Spectroradiometer (MODIS) on NASA's Aqua satellite. While MODIS collects enough data to make a new global map of cloudiness every day, this version of the map shows an average of all of the satellite’s cloud observations between July 2002 and April 2015. Colors range from dark blue (no clouds) to light blue (some clouds) to white (frequent clouds).
There are three broad bands where Earth’s skies are most likely to be cloudy: a narrow strip near the equator and two wider strips in the mid-latitudes. The band near the equator is a function of the large scale circulation patterns—or Hadley cells—present in the tropics. Hadley cells are defined by cool air sinking near the 30 degree latitude line north and south of the equator and warm air rising near the equator where winds from separate Hadley cells converge. (The diagram here illustrates where Hadley cells are located and how they behave.) As warm, moist air converges at lower altitudes near the equator, it rises and cools and therefore can hold less moisture. This causes water vapor to condense into cloud particles and produces a dependable band of thunderstorms in an area known as the Inter Tropical Convergence Zone (ITCZ).
Clouds also tend to form in abundance in the middle latitudes 60 degrees north and south of the equator. This is where the edges of polar and mid-latitude (or Ferrel) circulation cells collide and push air upward, fueling the formation of the large-scale frontal systems that dominate weather patterns in the mid-latitudes. While clouds tend to form where air rises as part of atmospheric circulation patterns, descending air inhibits cloud formation. Since air descends between about 15 and 30 degrees north and south of the equator, clouds are rare and deserts are common at this latitude.
Earth Observatory: More information and additional image
Image Credit: NASA Earth Observatory image by Jesse Allen and Kevin Ward, using data provided by the MODIS Atmosphere Science Team, NASA Goddard Space Flight Center
Caption: Adam Voiland, with information from Steve Platnick and Tom Arnold | <urn:uuid:e8e792f3-1891-4da0-9e02-bbc92929fcc8> | 4.09375 | 562 | Personal Blog | Science & Tech. | 34.71829 | 95,553,497 |
From the data given in the attachment, determine the following: the moles of H2(g) produced, the moles of HCl(aq) reacted, the balanced chemical reaction, and the value of "y" in MCl_y(aq).
See attached file for data.
Important point: when confronted with a lengthy problem, just tackle it one step at a time. Even if you don't know how to get the answer at the beginning, just start doing what you can do.
I'll show you how.
Step 1: Determine the moles of NaOH reacted
(2.4862 mol/L)(0.02750 L) = 0.06837 mol NaOH
According to the titration, this is the amount of NaOH that reacted with the leftover HCl. Therefore, it equals the moles of HCl left over.
Step 2: Determine the moles of HCl that we started with
(1.5222 mol/L)(0.12000 L) = 0.18266 ...
This solution involves step-by-step calculations with explanations for finding moles or molecules in a chemical reaction.
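As a rough illustration, the two steps quoted above can be reproduced with a few lines of Python. The concentrations and volumes are the ones given in the excerpt; the remaining steps (the moles of H2, the balanced equation, and the value of y) depend on data in the attachment, which is not reproduced here.

```python
# Redo steps 1 and 2 of the posted solution: moles = molarity x volume.
naoh_molarity = 2.4862      # mol/L
naoh_volume   = 0.02750     # L
hcl_molarity  = 1.5222      # mol/L
hcl_volume    = 0.12000     # L

mol_naoh        = naoh_molarity * naoh_volume    # NaOH used = leftover HCl titrated
mol_hcl_initial = hcl_molarity * hcl_volume      # HCl we started with
mol_hcl_reacted = mol_hcl_initial - mol_naoh     # HCl consumed by the metal

print(f"leftover HCl: {mol_naoh:.5f} mol")
print(f"initial HCl:  {mol_hcl_initial:.5f} mol")
print(f"HCl reacted:  {mol_hcl_reacted:.5f} mol")
```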
A line has a rise of 2 and a run of 11. What is the slope?
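Assuming the usual definition of slope as rise over run, the answer follows directly: slope = rise / run = 2 / 11 ≈ 0.18.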
To solve this example, you need the following knowledge from mathematics:
Next similar examples:
Find the slope of the line: x=t and y=1+t.
- Supplementary angles
One of two supplementary angles is three times as large as the other. What is the size of the larger angle?
- Line segment
A 4 cm long line segment is enlarged in the ratio of 5/2. How many centimeters will the new line segment measure?
- Smallest internal angle
Calculate the size of the smallest internal angle of a triangle whose angles are in the ratio α:β:γ = 3:4:8.
- Image scale
The actual image dimensions are 60 cm x 80 cm, and it has been reduced to 3 cm x 4 cm. At what scale was the image reduced?
Peter had a sachet of candy that he wanted to share with his friends. If he gave each of them 30 candies, he would have 62 candies left; if he gave each of them 40 candies, he would be 8 candies short. How many friends did Peter have?
- Waiting room
In the waiting room there are people and flies. Together they have 15 heads and 50 legs (a fly has 6 legs). How many people and how many flies are in the waiting room?
- Angles ratio
In triangle ABC the relationship c < b < a holds. The internal angles of the triangle are in the ratio 5:4:9. The size of the internal angle beta is:
- Type of triangle
How do I find the type of a triangle if the angle ratio is 2:3:7?
A box contains yellow (a), green (b), and red (c) peppers, whose amounts are in the ratio 2:4:1. Yellow peppers are the most numerous and green the fewest. Calculate the number of peppers of each type if the total number of peppers is 70.
John, Teresa, Daniel, and Paul have a combined age of 56 years. Their ages are in the ratio 1:2:5:6. Determine how old each of them is.
One mason casts 30.8 square meters in 8 hours. How long will it take 4 masons to cast 178 square meters?
- Liters of milk
A cylinder-shaped container holds 80 liters of milk, with the milk level at 45 cm. How much milk will be in the container if the level rises to a height of 72 cm?
The enrollment at a local college increased 4% over last year's enrollment of 8548. Find the increase in enrollment (x1) and the current enrollment (x2).
- Obtuse angle
What obtuse angle do the hands of a clock form at 17:00?
- Isosceles trapezoid v3
In an isosceles trapezoid ABCD, the angle β = 81°. Determine the sizes of the angles α, γ, and δ.
- Divisible by 5
How many three-digit odd numbers divisible by 5 have the digit 3 in the tens place?
Given a positive integer n, it can be shown that every complex number of the form r + si, where r and s are integers, can be uniquely expressed in the base -n + i using the integers 0, 1, 2, ..., n^2 as digits. That is, the equation

r + si = a_m(-n + i)^m + a_{m-1}(-n + i)^{m-1} + ... + a_1(-n + i) + a_0

is true for a unique choice of non-negative integer m and digits a_0, a_1, ..., a_m chosen from the set {0, 1, 2, ..., n^2}, with a_m ≠ 0. We write

r + si = (a_m a_{m-1} ... a_1 a_0)_{-n+i}

to denote the base -n + i expansion of r + si. There are only finitely many integers k + 0i that have four-digit expansions

k = (a_3 a_2 a_1 a_0)_{-3+i},  a_3 ≠ 0.

Find the sum of all such k.
This problem is copyrighted by the American Mathematics Competitions.
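For readers who want to check their work by brute force, here is a minimal Python sketch. It enumerates all four-digit strings (a_3 a_2 a_1 a_0) in base -3 + i, keeps those whose value is a real integer, and sums them; the tolerance check and loop bounds are implementation choices, not part of the problem statement.

```python
# Brute-force sketch: enumerate four-digit base (-3 + i) expansions,
# keep the ones whose value k is a real integer, and sum them.
n = 3
base = complex(-n, 1)
digits = range(n**2 + 1)              # allowed digits 0..n^2

total = 0
for a3 in range(1, n**2 + 1):         # leading digit must be nonzero
    for a2 in digits:
        for a1 in digits:
            for a0 in digits:
                k = a3*base**3 + a2*base**2 + a1*base + a0
                if abs(k.imag) < 1e-9:        # value is real, i.e. k + 0i
                    total += int(round(k.real))
print(total)
```

By the uniqueness of the expansion, each qualifying integer k is counted exactly once in this loop.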
Living organisms have long provided inspiration for technology. Biomimicry of birds helped us design our first aircraft, while the structure of seed burs was copied for Velcro. Today, biomimicry is being applied to advanced technology such as robotics and computer vision.
Another modern-day application of biomimicry is in artificial intelligence (AI). With AI, machines take on natural cognitive functions such as learning and problem-solving. Artificial neural networks (ANNs) take biomimicry a step further by creating computing systems inspired by the brains of living organisms.
But just how intelligent can a system be that is modeled after a relatively unsophisticated biological brain? It turns out, thanks to evolution, even relatively simple brains of living creatures can be very intelligent when it comes to a task that is necessary for their survival. For a moth, that means the sense of smell.
Sometimes, smaller is better
Even though a moth's brain is the size of a pinhead, it is highly efficient when it comes to learning new odors. Its sense of smell is needed for finding food and finding mates, both critical tasks for the moth's survival as a species.
Researchers from the University of Washington developed a neural network, dubbed MothNet, based on the structure of a moth’s brain.
“The moth olfactory network is among the simplest biological neural systems that can learn,” the researchers, Charles B. Delahunt, Jeffrey Riffell, and J. Nathan Kutz, stated in their paper, Biological Mechanisms for Learning: A Computational Model of Olfactory Learning in the Manduca sexta Moth, with Applications to Neural Nets.
The MIT Technology Review article, Why even a moth’s brain is smarter than AI, described the biological system copied by MothNet, “The olfactory learning system in moths is relatively simple and well mapped by neuroscientists. It consists of five distinct networks that feed information forward from one to the next.”
Instead of identifying scents, the researchers used supervised learning to train the ANN to recognize handwritten numbers with just 15 to 20 images of each digit from zero to nine. The training samples came from the MNIST (Modified National Institute of Standards and Technology) database of digits commonly used for training and testing in the field of machine learning. [Figure: sample handwritten digits from the MNIST database]
They found MothNet could learn much faster than conventional machine-learning systems. MothNet "learned" to recognize numbers from just a few training samples, with an accuracy of 75 to 85 percent. A typical convolutional neural network, by comparison, requires thousands of training examples to achieve 99% accuracy.
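To make the "few samples per class" setup concrete, here is a minimal sketch. It uses scikit-learn's small built-in digits dataset as a stand-in for MNIST and an ordinary nearest-neighbour classifier as a stand-in for MothNet, so the accuracy it prints illustrates only the few-shot setup, not the paper's results.

```python
# Few-shot digit classification sketch: train on only ~15 examples per class.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

digits = load_digits()                              # 8x8 digit images, classes 0-9
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.5, random_state=0)

SAMPLES_PER_CLASS = 15                              # "15 to 20 images of each digit"
idx = np.concatenate([np.where(y_train == c)[0][:SAMPLES_PER_CLASS]
                      for c in range(10)])

clf = KNeighborsClassifier(n_neighbors=3).fit(X_train[idx], y_train[idx])
print("test accuracy with %d samples per class: %.2f"
      % (SAMPLES_PER_CLASS, clf.score(X_test, y_test)))
```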
Developing Better Machine Learning Algorithms
The researchers found that the moth’s biological system was efficient at learning due to three main characteristics which could aid in the development of new machine learning algorithms:
- First, it learned quickly by filtering information at each step and passing along only the most critical information to the next phase in the system. While the first of the five distinct networks starts with nearly 30,000 receptors in the antenna, the second network comprises 4,000 cells. By the time the information reaches the last network in the system, the neurons number in the tens.
- Second, the filtering process had the added benefit of removing noise from the signals. The sparse layer between the first two networks acts as an effective noise filter, protecting the downstream neurons from the noisy signal received by the “antennas”.
- Lastly, the brain "rewarded" successful identification of an odor with a release of a chemical neurotransmitter called octopamine, reinforcing the successful connections in the neural wiring. The active connections for an assigned digit are strengthened while the rest wither away, as sketched in the toy example below.
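As a toy illustration of that third point, the snippet below applies a reward-gated, Hebbian-style update: a "reward" (standing in for octopamine) is delivered only when the classification was correct, and only then are the currently active connections strengthened. The layer sizes, decay factor, and learning rate are invented for illustration and are not taken from the MothNet paper.

```python
# Reward-gated Hebbian update: strengthen active connections only on success.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=(20, 10))      # 20 upstream units -> 10 digit classes

def update(w, activity, label, predicted, lr=0.05):
    reward = 1.0 if predicted == label else 0.0   # "octopamine" only on success
    w = w.copy()
    w[:, label] += lr * reward * activity         # reinforce the active connections
    w *= 0.999                                    # unused weights slowly decay
    return w

activity = rng.random(20)                     # activity of the upstream layer
predicted = int(np.argmax(activity @ w))      # current guess for this input
w = update(w, activity, label=3, predicted=predicted)
```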
“The results demonstrate that even very simple biological architectures hold novel and effective algorithmic tools applicable to [machine learning] tasks, in particular, tasks constrained by few training samples or the need to add new classes without full retraining,” the researchers stated in the paper.
5 Comments (Oldest to Newest)
This is really good
Kind of a joke to claim code is published. The above linked git repo is empty minus a readme and license file… i.e. NO Code.
Thanks for pointing it out. The author’s comment is “We are currently refactoring the Matlab code base for easier use.”
The full Matlab code for simulation of moths learning to read MNIST is now available at github/charlesDelahunt/PuttingABugInML.
Thank you, Charles Delahunt
Thank you for the update! | <urn:uuid:2b61f27c-ca72-4ea6-bf2f-a2cb830cdce0> | 3.875 | 975 | Comment Section | Science & Tech. | 37.13177 | 95,553,533 |
Stephen Hawking is one of the most brilliant minds, next to Albert Einstein, of the 20th and 21st centuries. An acclaimed physicist in the science community, Stephen Hawking has announced another ingenious idea: why not send a nano-satellite to the nearest star system, Alpha Centauri? If we were to travel to Alpha Centauri at mankind's fastest recorded spacecraft speed of about 33,000 mph, the trip would take tens of thousands of years. Using nano-technology, Stephen Hawking and his team have proposed a nano-satellite, called Star Chip, and their goal is to "beam" it toward Alpha Centauri using laser technology. With this approach, Star Chip could travel at 1/5 of the speed of light and reach Alpha Centauri in approximately 20 years. Star Chip would carry standard satellite functions, including a camera, which means we would be able to see images of its voyage.
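As a back-of-the-envelope check, the travel times can be computed directly as distance divided by speed. The Earth-to-Alpha Centauri distance of about 4.37 light-years and the speed of light in mph used below are outside figures, not taken from the post, and the answer for a conventional probe depends heavily on which spacecraft speed you assume.

```python
# Travel time to Alpha Centauri = distance / speed, under stated assumptions.
LIGHT_YEAR_MILES = 5.879e12
distance_miles   = 4.37 * LIGHT_YEAR_MILES     # assumed Earth-Alpha Centauri distance

probe_mph    = 33_000                          # conventional-probe speed from the post
c_mph        = 6.706e8                         # speed of light in miles per hour
starchip_mph = 0.2 * c_mph                     # "1/5 of the speed of light"

hours_per_year = 24 * 365.25
print("conventional probe: about %.0f years" % (distance_miles / probe_mph / hours_per_year))
print("StarChip at 0.2c:   about %.1f years" % (distance_miles / starchip_mph / hours_per_year))
```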
It's 2017 and technology has come a long way. We have science fiction movies and TV shows where humanoid androids and robots exist; now we are one step closer to making that a reality. Along with microchips getting smaller, artificial intelligence getting smarter, and drones circling the sky, humanity is now creating operational machines to assist with heavy loads and, of course, for war games. Are we one step closer to our own demise, or will we see a bright future? Enjoy the video 🙂
Have you ever wondered what you are eating? Or how we can feed 6 billion people? Before modern technology, many wondered how we would feed the ever-growing population of the human race. Think about it: crops take a while to grow, and there are limitations, such as location, and disturbances such as weather and insects. These were real concerns before technology came along. Now that technology is in place, we can solve almost anything. So the question now is: how has technology changed what we eat? Enjoy the video 🙂
Whoever watches or reads Marvel's comic books should be psyched, especially if you're an Iron Man fan. A British inventor is on the verge of making history and revolutionizing flight technology, turning science fiction into reality with a real-life Iron Man suit. Enjoy the video 🙂
Looks like humanity is heading to a bright future.
If you grew up in the 90s, do you remember the term CPU and what it does? Now the game is changing: alongside quantum computers in the making, companies like Google, Microsoft and IBM are racing to build the most powerful AI accelerators. These AI accelerators are processors designed specifically for machine-learning workloads. They will have the ability to learn and adapt to your needs and your PC's needs. For example, just as a cell phone uses its sensors to wake, sleep, and dim its screen to save battery life, these AI accelerators will take that kind of adaptive behavior to the next level. Imagine this technology taking over our lives... and when it does, do you think our lives will become dictated by technology?
We are still in the age of rocks and stones when it comes to quantum technology. Operating such technology requires a great deal of energy and resources. For example, in order for the quantum chip to operate, it needs to be kept colder than space, colder even than the coldest temperature observed naturally in the universe, which is about -272 degrees Celsius. (Watch the video for more details.)
D-Wave, a Vancouver-based Canadian company, has unveiled a second version of its revolutionary quantum computer. This second version allows developers to understand and compute even more complex algorithms, including ones aimed at some of humanity's hardest problems: poverty, famine, global warming, and so on.
With such a powerful tool, are we one step closer to creating the Matrix?
From heavy sterilization equipment to a portable LED foil, humanity is one step closer to purifying water almost anywhere. Ohio State University has invented a portable deep-ultraviolet (deep-UV) foil that can emit powerful UV light when powered. Currently, those who need to purify water rely on large mercury lamps. According to Ohio State engineer Robert Meyers, "Mercury is toxic and the lamps are bulky and electrically inefficient. LEDs, on the other hand, are really efficient, so if we could make UV LEDs that are safe and portable and cheap, we could make safe drinking water wherever we need it."
The portable LED foil relies on new foil-based nanotechnology produced with a semiconductor growth method known as molecular beam epitaxy, in which materials are vaporized into beams of molecules that land on a surface and self-organize into nanostructured layers.
This new technology could change many people's lives, especially those who do not have access to purified drinking water.
So how does UV light kill microbes? Check out the video below:
Over the past century, ever since we came to understand space and how extremely dangerous it is... boy, how lucky we are to be alive here on Earth. We know some of the dangers in space: solar storms from our own Sun, hypernovae, black holes, and especially asteroids, the most tangible known threat to Earth, one of which wiped out the dinosaurs at one point in time. Humanity is always wondering when the next asteroid will hit Earth. During the past 100 years we have had close encounters with many large asteroids, often without finding out until the asteroid had already passed. For example, on Oct 29, 2016, another asteroid passed by the Earth, and we did not notice it until it had gone by. Now, with time running short, NASA has developed Scout, a computer program designed to detect asteroids. Scout accesses data from telescopes around the Earth and tries to identify any threats using rigorous calculations. Thus far, Scout is still in beta and is being tested at the NASA Jet Propulsion Laboratory in Pasadena, California. Let's cross our fingers that Scout will do its job.
Edited By: Henry R Murkin, Arnold G van der Valk and William R Clark
Summarizes the contribution of the Marsh Ecology Research Program (MERP), conducted on the Delta Marsh in southern Manitoba between 1980 and 1989, to the current scientific understanding of prairie wetland ecology.
Wildlife, science and conservation since 1985 | <urn:uuid:02fac2a1-a724-42a4-a070-a624e6a10745> | 2.765625 | 144 | Product Page | Science & Tech. | 47.606818 | 95,553,547 |
See the attached file.
1. The electrostatic field E in a particular region can be expressed in terms of spherical coordinates. Derive an expression for the potential difference.
2. The electrostatic potential in a region is given by a function. Derive an expression for the electrostatic field in this region, and hence determine the field at the point x = 1.0m, y = 2.0m, z = 3.0m. Enter the numerical values for the components of this field in the boxes in the equation below:
3. A cube of volume L^3 is bounded by the planes x = 0 and x = L, y = 0 and y = L, and z = 0 and z = L. The charge density ρ(x) within the cube is given by an equation. Calculate the total charge contained within the cube.
4. The region between two concentric spheres of radii a and 3a contains a uniform charge density ρ, and elsewhere the charge density is zero. Calculate the radial component of the electric field at a distance 2a from the centre of the spheres, E(2a).
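The specific field and potential functions live in the attachment and are not reproduced here, so the sketch below only collects the general relations these four problems rely on (SI units, standard notation); it is a reminder, not the posted solution:

$$\Delta V = V(b) - V(a) = -\int_{a}^{b}\mathbf{E}\cdot d\boldsymbol{\ell}, \qquad \mathbf{E} = -\nabla V = -\left(\frac{\partial V}{\partial x},\,\frac{\partial V}{\partial y},\,\frac{\partial V}{\partial z}\right)$$

$$Q_{\mathrm{tot}} = \int_{0}^{L}\!\int_{0}^{L}\!\int_{0}^{L}\rho(x,y,z)\,dz\,dy\,dx, \qquad \oint_{S}\mathbf{E}\cdot d\mathbf{A} = \frac{Q_{\mathrm{enc}}}{\varepsilon_{0}}$$

For problem 4, for example, the charge enclosed within radius 2a is Q_enc = ρ(4π/3)[(2a)^3 - a^3] = 28πρa^3/3, so Gauss's law gives E(2a) = Q_enc / [4πε0(2a)^2] = 7ρa/(12ε0).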
See attachments for full solutions.
Question 1: For a spherical coordinate system we may choose a path such that the field remains parallel to the path everywhere. Such a path will have one part along the arc of angle θ at radius r1, then along the radius from r1 ...
The solution examines electrostatic fields and potential in spheres. | <urn:uuid:50f0cbf6-7d9f-4d61-a9b5-7ed52c4aa20d> | 4.125 | 327 | Tutorial | Science & Tech. | 71.242774 | 95,553,554 |
Aromatic sulfonation is an organic reaction in which a hydrogen atom on an arene is replaced by a sulfonic acid functional group in an electrophilic aromatic substitution. Aryl sulfonic acids are used as detergents, dyes, and drugs.
Stoichiometry and mechanism
Typical conditions involve heating the aromatic compound with sulfuric acid:
- C6H6 + H2SO4 → C6H5SO3H + H2O
To drive the equilibrium, dehydrating agents such as thionyl chloride can be added.
- C6H6 + H2SO4 + SOCl2 → C6H5SO3H + SO2 + 2 HCl
Chlorosulfuric acid is also an effective agent:
- C6H6 + HSO3Cl → C6H5SO3H + HCl
In contrast to aromatic nitration and most other electrophilic aromatic substitutions, this reaction is reversible. Sulfonation takes place under concentrated acidic conditions, while desulfonation occurs in hot, dilute aqueous acid. This reversibility makes the sulfonic acid group useful for protecting the aromatic system: owing to their electron-withdrawing effect, sulfonate groups can be installed to block electrophilic aromatic substitution at a given position, or used as directing groups to control where substitution takes place.
Specialized sulfonation methods
Many methods have been developed for introducing sulfonate groups aside from direct sulfonation.
A classic named reaction is the Piria reaction (R. Piria, 1851) in which nitrobenzene is reacted with a metal bisulfite forming an aminosulfonic acid as a result of combined nitro group reduction and sulfonation.
Tyrer sulfonation process
In the Tyrer sulfonation process (1917), at one time of technological importance, benzene vapor is passed through a vessel containing 90% sulfuric acid whose temperature is raised from 100 to 180°C. Water and benzene are continuously removed in a condenser, and the benzene layer is fed back into the vessel. In this way an 80% yield is obtained.
Reactions of aryl sulfonic acids
Desulfonation, the reverse of sulfonation, occurs on heating in hot dilute aqueous acid:
- RC6H4SO3H + H2O → RC6H5 + H2SO4
When treated with strong base, benzenesulfonic acid derivatives convert to phenols (via the phenoxides).
- C6H5SO3H + 2 NaOH → C6H5ONa + NaHSO3 + H2O
- March, Jerry (1985), Advanced Organic Chemistry: Reactions, Mechanisms, and Structure (3rd ed.), New York: Wiley, ISBN 0-471-85472-7.
- Otto Lindner, Lars Rodefeld "Benzenesulfonic Acids and Their Derivatives" in Ullmann's Encyclopedia of Industrial Chemistry 2005, Wiley-VCH, Weinheim. doi:10.1002/14356007.a03_507
- T. W. Graham Solomons: Organic Chemistry, 11th Edition, Wiley, Hoboken, NJ, 2013, p. 676, ISBN 978-1-118-13357-6.
- Piria, Raffaele (1851). "Über einige Produkte der Einwirkung des schwefligsäuren Ammoniaks auf Nitronaphtalin". Annalen der Chemie und Pharmacie. 78: 31–68. doi:10.1002/jlac.18510780103. ISSN 0075-4617.
- THE PIRIA REACTION. I. THE OVER-ALL REACTION W. H. Hunter, Murray M. Sprung J. Am. Chem. Soc., 1931, 53 (4), pp 1432–1443 doi:10.1021/ja01355a037.
- U.S. Patent 1,210,725
- Siegfried Hauptmann: Organische Chemie, 2nd Edition, VEB Deutscher Verlag für Grundstoffindustrie, Leipzig, 1985, p. 511, ISBN 3-342-00280-8. | <urn:uuid:59bb0c41-ef64-4365-977f-96566edeb0a2> | 3.359375 | 905 | Knowledge Article | Science & Tech. | 50.61865 | 95,553,571 |
Hidden 1 km beneath Mount Ikeno, in the Kamioka zinc mine 290 km north of Tokyo, Japan, is the kind of place any supervillain from a comic-book movie would dream of as a lair. This is Super-Kamiokande ("Super-K"), a neutrino detector. Neutrinos are subatomic fundamental particles that interact only very weakly with ordinary matter; they can penetrate practically anything, anywhere. Observing these particles helps scientists spot collapsing stars and learn new things about our Universe. Business Insider spoke with three scientists who work on Super-Kamiokande and found out how it all works and what experiments are performed there.
Neutrinos are hard to detect. So hard that the famous American astrophysicist and science popularizer Neil deGrasse Tyson once called them "the most elusive prey in space."
"Matter is a neutrino of any obstacles. These subatomic particles can pass through a hundred light-years metal and not even slow down" — said Degrasse Tyson.
But why should scientists even try to catch them?
"When the supernova occurs, the star collapses into itself and becomes a black hole. If this event happens in our galaxy, the neutrino detectors like the same "Super-K" is able to capture emitted in this process, neutrinos. Such detectors are very few in the world," — explains Yoshi Uchida, Imperial College London.
Before a star collapses, it emits neutrinos in all directions, so laboratories like Super-Kamiokande serve as early-warning systems that tell scientists where to look to catch the last moments of a star's life.
"Simplified calculations say that the events of a supernova explosion in the radius in which our detectors can catch only happen once in 30 years. In other words, if you miss one, you'll have to wait on average a few decades until the next event," — says Uchida.
A neutrino Detector "Super-K" not just detects neutrinos falling on him from outer space. In addition, it is transmitted neutrinos with the T2K experimental setup, located in the town of Tokaj, in the opposite part of Japan. Sent to the neutrino beam has to go about 295 miles, after which it enters the detector "Super-Kamiokande", located in the Western part of the country.
Observing how neutrinos change (or oscillate) as they travel through matter can tell scientists more about the nature of the Universe, for example about the relationship between matter and antimatter.
"Our model "Big Bang" suggests that matter and antimatter were created in equal proportions," — said in an interview with Business Insider, Morgan Vasko from Imperial College London.
"However, the bulk of antimatter for some or for some reason disappeared. Ordinary matter is much more than antimatter."
Scientists believe that studying neutrinos may be one of the ways the answer to this mystery will finally be found.
Located 1,000 meters underground, Super-Kamiokande is the size of a 15-storey building and looks something like this:
Schematic of the Super-Kamiokande neutrino detector
The detector is a cylindrical stainless-steel tank filled with 50,000 tonnes of specially purified water. Particles passing through this water can travel faster than light itself moves through water.
"Neutrinos entering the tank produce light scheme is the same as Concorde had broken the sound barrier" — says Uchida.
"If the plane moves very fast and breaks the sound barrier, behind it creates a very powerful shock wave of sound. Similarly, a neutrino passing through the water and moving faster than the speed of light creates a light shock wave", — explains the scientist.
A little more than 11,000 special gold-colored "light bulbs" are installed on the walls, ceiling and bottom of the tank. Called photomultipliers, they are extremely sensitive to light, and they catch the light of the shock waves generated by the neutrinos.
This is what the photomultipliers look like
Morgan Wascko describes them as light bulbs working in reverse. The devices are so sensitive that even a single quantum of light is enough for them to generate an electrical pulse, which is then processed by special electronics.
For the sensors to pick up the light from these shock waves, the water in the tank has to be crystal clear. Cleaner than you can imagine. Inside Super-Kamiokande the water goes through constant, multi-stage purification, and scientists irradiate it with ultraviolet light to kill off any bacteria. The end result is water so pure that it is genuinely nasty stuff.
"Sorokina water can dissolve anything. Sorokina the water here is very, very unpleasant thing. It has the properties of acid and alkali", — says Uchida.
"Even a drop of this water canto bring you such trouble that you can not dream", — adds Vasco.
People in a boat floating inside the Super-Kamiokande tank
When maintenance has to be carried out inside the tank, for example to replace faulty sensors, researchers use a rubber boat (pictured above).
When Matthew Malek was a graduate student at the University of Sheffield, he and two other students were "lucky" enough to do exactly that kind of work. At the end of the day, when it was time to come back up, the gondola used for the purpose broke down. The physicists had no choice but to get back into the boat and wait until it was repaired.
"I didn't know when she was lying on her back in this boat and talked with the others, as a tiny part of my hair, literally not more than three inches in length, touched the water," — says Malek.
While they floated inside Super-Kamiokande and the scientists up top fixed the gondola, Malek had nothing to worry about. The worry came early the next morning, when he realized something terrible was happening.
"I woke up at 3 in the morning the unbearable itching on the head. It was probably the most horrible itching I've ever experienced in my life. Worse than chickenpox, which I had in childhood. He was so awful that I just couldn't sleep", — continued the scholar.
Malek realized that the water touching the tips of his hair had "sucked dry" the nutrients out of it, and that the deficit had worked its way up to his scalp. He rushed to the shower and spent over an hour trying to put the moisture and nutrients back into his hair.
Wascko told another story. He had heard that in 2000, when maintenance staff drained the water from the tank, they found the outline of a wrench at the bottom.
"Apparently this key is accidentally left one of the employees when they filled the tank with water in 1995. After water in 2000, they found that the key disappeared".
Despite the fact that "Super-Kamiokande" is already a very large neutrino detector, scientists have proposed to create a large installation called "Hyper-Kamiokande".
"If we get approval to build "Hyper-Kamiokande", the detector is ready for use in approximately 2026", — said Vasco.
Under the proposed design, the Hyper-Kamiokande detector would be about 20 times larger than Super-Kamiokande and would use roughly 99,000 photomultipliers.
| <urn:uuid:19acff62-3690-4a95-8493-8ccc9d7a50af> | 3.15625 | 1,905 | News Article | Science & Tech. | 51.101248 | 95,553,578 |
Carbon sequestration by Australian tidal marshes.
- Publication Type:
- Journal Article
- Scientific Reports, 2017, 7, 44071
- Issue Date:
Files in This Item:
|Carbon sequestration by Australian tidal marshes.pdf||Published Version||675.54 kB|
This item is open access.
Australia's tidal marshes have suffered significant losses, but their recently recognised importance in CO2 sequestration is creating opportunities for their protection and restoration. We compiled all available data on soil organic carbon (OC) storage in Australia's tidal marshes (323 cores). OC stocks in the surface 1 m averaged 165.41 (SE 6.96) Mg OC ha^-1 (range 14-963 Mg OC ha^-1). The mean OC accumulation rate was 0.55 ± 0.02 Mg OC ha^-1 yr^-1. Geomorphology was the most important predictor of OC stocks, with fluvial sites having twice the stock of OC as seaward sites. Australia's 1.4 million hectares of tidal marshes contain an estimated 212 million tonnes of OC in the surface 1 m, with a potential CO2-equivalent value of $USD7.19 billion. Annual sequestration is 0.75 Tg OC yr^-1, with a CO2-equivalent value of $USD28.02 million per annum. This study provides the most comprehensive estimates of tidal marsh blue carbon in Australia, and illustrates their importance in climate change mitigation and adaptation, acting as CO2 sinks and buffering the impacts of rising sea level. We outline potential further development of carbon offset schemes to restore the sequestration capacity and other ecosystem services provided by Australia's tidal marshes.
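As a quick plausibility check (not part of the paper itself), the abstract's headline figures are mutually consistent. The sketch below simply recombines the quoted numbers, assuming the standard 44/12 CO2-to-carbon mass ratio:

```python
# Back-of-envelope check of the abstract's headline numbers (illustrative only).
area_ha = 1.4e6            # Australian tidal marsh area quoted in the abstract
stock_mg = 212e6           # total OC in the surface 1 m, tonnes (Mg)
rate_mg_per_ha_yr = 0.55   # mean OC accumulation rate, Mg OC per ha per yr
co2_per_c = 44.0 / 12.0    # mass of CO2 per unit mass of carbon

annual_oc_tg = area_ha * rate_mg_per_ha_yr / 1e6   # Mg per yr -> Tg per yr
print(f"Annual sequestration ~ {annual_oc_tg:.2f} Tg OC/yr (abstract: 0.75)")

implied_price = 7.19e9 / (stock_mg * co2_per_c)    # $ per tonne CO2-equivalent
print(f"Implied carbon value ~ ${implied_price:.1f} per t CO2-e")
```

Both come out close to the quoted values (about 0.77 Tg OC yr^-1 against the reported 0.75, and an implied price of roughly $9 per tonne CO2-e), which is a useful sanity check on the reported totals.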
Please use this identifier to cite or link to this item: | <urn:uuid:080df76b-dcf0-43df-a79e-b5497154304f> | 2.953125 | 385 | Truncated | Science & Tech. | 53.477501 | 95,553,593 |
NASA's Kepler Space Telescope is one of the most prolific hunters of exoplanets. A new survey catalog introduces 219 planet candidates, 10 of which are roughly Earth-sized and orbit in the habitable zones of their host stars. The latest release brings the total number of planet candidates identified by Kepler to 4,034, of which 2,335 have been verified as exoplanets. Kepler has identified roughly 50 Earth-sized exoplanet candidates in the habitable zones of their host stars, of which more than 30 have been confirmed.
The survey has also identified two distinct populations of small planets, which could have implications for the search for alien life. It indicates that half of these planets either have no surface or have a surface that lies beneath a crushingly deep atmosphere. Life as we know it on Earth is less likely to exist under such extreme conditions. The catalog also allows scientists to estimate the frequency of Earth-sized planets, which will form the basis of future efforts to directly image an exoplanet.
Susan Thompson, Kepler research scientist for the SETI Institute in California and lead author of the catalog study, says: "This carefully-measured catalog is the foundation for directly answering one of astronomy's most compelling questions – how many planets like our Earth are in the galaxy?" The Kepler Space Telescope continues to observe the skies. NASA's Spitzer Space Telescope discovered seven Earth-sized exoplanets in the Trappist-1 system, and Kepler observations were used to work out the intricate dance of their orbits. | <urn:uuid:507376a2-844c-4662-acd3-120a96f9511c> | 3.625 | 313 | News Article | Science & Tech. | 32.351818 | 95,553,627 |
Sums of Random Variables, Random Walks and the Central Limit Theorem
Why do we care about sums of random variables? The answer is that everywhere around us the processes that we see often depend on the accumulation of many contributions or are the result of many effects. The pressure in a room, measured on a surface, is the sum of the order of 10^23 momentum exchanges between the air molecules and the surface. The large time tectonic deformation is the (tensorial) sum of the deformation associated with the myriad of earthquakes. Errors and/or uncertainties in measurements are often the aggregation of many sources and are in many cases distributed according to a Gaussian law (see below). In fact, it is hard to find an observation that is not controlled by many variables. Studying the sum of random variables allows us to grasp the fundamental notion of collective behavior without the need for further complications.
Keywords: Fractal Dimension, Random Walk, Central Limit Theorem, Step Length, Continuous Limit
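To make the "accumulation of many contributions" concrete, here is a minimal simulation sketch (not from the chapter itself) showing how the endpoint of a random walk, i.e. the sum of many independent steps, behaves as the central limit theorem predicts:

```python
import numpy as np

rng = np.random.default_rng(0)
n_steps, n_walks = 500, 10_000

# Each walk is a sum of n_steps i.i.d. uniform steps on [-1, 1].
steps = rng.uniform(-1.0, 1.0, size=(n_walks, n_steps))
endpoints = steps.sum(axis=1)

# Central limit theorem: the sum has mean n*mu and variance n*sigma^2.
# Here mu = 0 and sigma^2 = 1/3, so the standard deviation is sqrt(500/3) ~ 12.9.
print(endpoints.mean(), endpoints.std())

# The standardized sum should be close to a standard normal: about 68%
# of endpoints within one sigma, about 95% within two.
z = endpoints / np.sqrt(n_steps / 3.0)
print(np.mean(np.abs(z) < 1), np.mean(np.abs(z) < 2))
```

The same scaling, with fluctuations growing like the square root of N while the number of contributions grows like N, is what makes the pressure on a wall or an aggregated measurement error look deterministic and Gaussian.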
| <urn:uuid:553929c7-0b67-4c35-b9c5-13ec83cfc584> | 2.6875 | 212 | Truncated | Science & Tech. | 36.218086 | 95,553,638 |
NASA has updated its plans for how to deal with an apocalyptic asteroid on a crash-course with Earth.
There’s no immediate threat (thankfully), but if the day arises when space rocks come calling, Nasa now has a workable plan.
The US space agency has released an 18-page document titled the "National Near-Earth Object Preparedness Strategy and Action Plan".
It describes a number of steps that need to be taken to prepare for an incoming asteroid – including potential ways to dispatch it.
“An asteroid impact is one of the possible scenarios that we must be prepared for,” said Leviticus Lewis, the chief of FEMA, a US emergency department that worked with Nasa on the report.
He added that it’s a “low-probability but high-consequence event”, and that “some degree of preparedness is necessary”.
Speaking during a press conference on Wednesday evening, Nasa’s planetary defense officer (yes, that’s a real job) Lindley Johnson explained the reasoning behind the plan.
“This plan is an outline not only to enhance the hunt for hazardous asteroids, but also to better predict their chances of being an impact threat well into the future and the potential effects that it could have on Earth.
He added that the plan will help “step up our efforts to demonstrate possible asteroid deflection and other mitigation techniques, and to better formalize across the U.S. government the processes and protocols for dissemination of the best information available so that timely decisions can be made.”
Nasa’s five-step plan for saving the Earth from space death
In the 18-page plan, Nasa outlined five key ways it would avoid the total destruction of Earth and all human life.
Step One – Nasa wants to get better at tracking asteroids through space.
There are already a number of observatories around the world that Nasa uses to spot deadly asteroids, but sometimes we only catch them very late – just hours before impact.
Nasa said it wants to “identify opportunities in existing and planned telescope programs to improve detection and tracking, by enhancing the volume and quality of current data streams”.
More awareness of asteroids equals less chance of total annihilation, which makes sense.
Step Two – Once you’ve spotted an asteroid, it’s all about working out when and where it’s going to smack into Earth.
Nasa’s second aim is to improve this process, so that when it sees incoming objects, it can accurately predict exactly what sort of cataclysm we’re facing.
It wants to work with other agencies to improving “modelling, prediction and information integration” – which sounds boring, but could probably save us all from certain death.
What's the difference between an asteroid, meteor and comet?
Here's what you need to know, according to Nasa...
- Asteroid: An asteroid is a small rocky body that orbits the Sun. Most are found in the asteroid belt (between Mars and Jupiter) but they can be found anywhere (including in a path that can impact Earth)
- Meteoroid: When two asteroids hit each other, the small chunks that break off are called meteoroids
- Meteor: If a meteoroid enters the Earth’s atmosphere, it begins to vapourise and then becomes a meteor. On Earth, it’ll look like a streak of light in the sky, because the rock is burning up
- Meteorite: If a meteoroid doesn’t vapourise completely and survives the trip through Earth’s atmosphere, it can land on the Earth. At that point, it becomes a meteorite
- Comet: Like asteroids, a comet orbits the Sun. However rather than being made mostly of rock, a comet contains lots of ice and gas, which can result in amazing tails forming behind them (thanks to the ice and dust vapourising)
Step Three – Nasa’s third goal is to find out a half-decent way of deflecting asteroids on a collision course with Earth.
This will involve developing new tech to enable “rapid-response NEO [near-Earth objects] reconnaissance missions”.
The idea is that we send a spacecraft out towards asteroids, and then knock them off course somehow, saving the planet.
There’s already a planned mission – called the Double Asteroid Redirection Test – scheduled fore 2021 that will do exactly this.
Its target practice will come in 2022, with the asteroid system Didymos.
Step Four – No man is an island, so they say – and that includes Nasa.
The fourth objective is to boost international cooperation, because more eyes on the sky can only be a good thing when it comes to spotting falling rocks of death.
“It’s a global hazard that we all face together, and the best way to approach and address that hazard is cooperatively,” said Aaron Miles, who works on science policy at the White House.
They want to develop an international response strategy for incoming asteroids, which would involve sharing data, and smashing space rocks together.
Step Five – The final (and most necessary step) is to develop a proper step-by-step plan for when a large asteroid is spotted heading for Earth.
This will involve improving emergency exercises ahead of time, so that everyone doesn’t panic when the big day comes.
It means Nasa and FEMA will need to get better at notifying people who might be affected, putting in effective natural disaster alerts for the public.
The good news is that it’s very unlikely we’ll be smashed up by an asteroid soon. Just because Nasa is planning for possible human extinction doesn’t mean you need to rush out into your garden to build a bunker.
“NASA and its partners have identified more than 95 percent of all asteroids that are large enough to cause a global catastrophe, and none of those found poses a threat within the century,” said Miles.
“Effective emergency-response procedures can save lives, and unlike most natural disasters, asteroid impacts are preventable.”
Do you think we’re doomed if an apocalyptic asteroid takes aim at Earth, or can Nasa save the day? Let us know in the comments!
| <urn:uuid:4feacae2-eeba-4a9f-9782-bdcd55d1261c> | 3.359375 | 1,377 | News Article | Science & Tech. | 48.24137 | 95,553,653 |
MLA Citation: Bloomfield, Louis A. "Question 154." How Everything Works. 20 Jul 2018. <http://howeverythingworks.org/print1.php?QNum=154>.
Most rocket engines are chemical engines. They combine stored chemical fuels to produce hot, high-pressure gas. This gas is allowed to expand out of a narrow orifice, the throat of the engine's exhaust nozzle. Gases always accelerate toward lower pressure, so the high-pressure gas moves faster and faster as it rushes out of the nozzle. It reaches sonic velocity (the speed of sound) in the nozzle's throat and continues to move faster and faster as it flows out of the nozzle's widening bell. By the time the gas leaves the engine completely, it's traveling several thousand meters per second. A liquid-fuel rocket has an exhaust velocity of about 4,500 meters per second, or about 3 miles per second. Accelerating the gas to this enormous speed takes a huge force: the engine pushes down hard on the gas. The gas pushes back and propels the rocket upward. | <urn:uuid:e985f699-e331-4b01-ad4a-ed83d9ca4b20> | 4.09375 | 225 | Knowledge Article | Science & Tech. | 63.417322 | 95,553,667 |
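As a rough illustration of the forces involved (the mass flow rate below is an assumed number for illustration, not from the original answer), the thrust such an engine produces is approximately the exhaust mass flow rate times the exhaust velocity:

$$F \approx \dot{m}\,v_e, \qquad \dot{m} = 250\ \mathrm{kg/s},\quad v_e = 4{,}500\ \mathrm{m/s} \;\Rightarrow\; F \approx 1.1\times 10^{6}\ \mathrm{N}$$

That is roughly 1.1 meganewtons, enough to support about 110 tonnes against Earth's gravity, which is why even a modest-sounding flow of exhaust gives a rocket its enormous push.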