The Landsat 5 satellite captured the top image on May 16, 2011, before the flooding began. It shows the Souris River within its banks. The Souris River flows through the middle of Minot, N.D. Landsat 7 captured the second image on June 25. This view shows the extent of the flooding (dark blue) of the Souris River. (Credit: USGS/NASA)

The Souris River reading at Minot's Broadway Bridge around 11:00 p.m. on June 25 reached nearly four feet higher than the all-time high set in 1881.

The Landsat Program is a series of Earth-observing satellite missions jointly managed by NASA and the U.S. Geological Survey. Landsat satellites have been consistently gathering data about our planet since 1972. They continue to improve and expand this unparalleled record of Earth's changing landscapes, for the benefit of all. The next Landsat satellite is scheduled to launch in December 2012.

Rob Gutro | EurekAlert!
Vast expanses of our seas are littered with debris, hazardous waste like chemical sludge, and plastic pollution. All of these are a direct effect of our species' disregard for the health of the planet and the blue ocean heart at its core. The collective impact on marine life and water quality is immense. Millions of tons of plastic, unable to biodegrade, lie suspended or floating in vast gyres driven by ocean currents. This is where the debris will remain unless we actively do something to clean it up. By contrast, the seas are also littered with old shipwrecks. Over time, these become encrusted with coral growth, and the superstructure offers sanctuary in the form of an artificial reef where life can thrive. These shoaling snappers are using an old anchor as a refuge. In just the same way this anchor offers safety and shelter, we have to find a way to provide the same for the future of the planet, the oceans and all of life. Minor adjustments: exposure, crop, clarity, w/b
Various candidate aircraft and spacecraft materials were analyzed and compared in a low-energy neutron environment using the Monte Carlo N-Particle (MCNP) transport code, with an energy range up to 20 MeV. Some candidate materials have been tested in particle beams, and others seemed reasonable to analyze in this manner before deciding to test them. The two metal alloys analyzed are actual materials being designed into or used in aircraft and spacecraft today. This analysis shows that hydrogen-bearing materials have better shielding characteristics than the metal alloys. It also shows that neutrons above 1 MeV are reflected out of the face of the slab more effectively by larger quantities of carbon in the material, and that if a low-energy absorber is added to the material, fewer neutrons are transmitted through it. Future analyses should focus on combinations of scatterers and absorbers to optimize these reaction channels, and on the higher-energy neutron component (above 50 MeV).
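The transmission trend described above can be illustrated with a toy uncollided-beam model (a back-of-envelope sketch, not MCNP; the macroscopic cross sections below are invented purely for illustration):

```python
import math

def transmitted_fraction(sigma_per_cm, thickness_cm):
    """Uncollided transmission through a slab: I/I0 = exp(-Sigma * t)."""
    return math.exp(-sigma_per_cm * thickness_cm)

# Illustrative (made-up) macroscopic cross sections, in 1/cm. Hydrogen-rich
# materials remove neutrons from the beam more effectively per unit thickness.
materials = {
    "polyethylene (hydrogen-rich)": 0.50,
    "metal alloy": 0.10,
}
for name, sigma in materials.items():
    frac = transmitted_fraction(sigma, 10.0)
    print(f"{name}: {frac:.3f} of the beam transmitted through 10 cm")
```

Adding a low-energy absorber raises the effective Sigma and so lowers the transmitted fraction, which is the qualitative effect reported; a real analysis must track energy-dependent scattering and reflection, which is what the Monte Carlo transport code does.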
The Amazon rainforest could be tougher than scientists previously thought. A new study suggests the expansive jungle is more resilient to deforestation than expected. Researchers from the University of Bristol used the latest satellite data and the most up-to-date complexity-science models to reach that conclusion. Their findings, published in Nature Communications Tuesday, challenge previous studies that found large areas of the Amazon rainforest were close to a tipping point, caused by deforestation or drought, that would spark a forest-killing cycle. The theory, called bistability, hinges on the notion that the jungle would turn into a savannah and remain that way until the conditions that started the cycle reversed. However, previous research failed to account for the proximity of these areas to human settlements, which cause the boundary between the rainforest and savannah to shift naturally towards the wetter areas. Once the researchers did account for that, "suddenly the property of bistability disappeared almost completely," the paper's lead author Bert Wuyts wrote. Since the Amazon rainforest absorbs 25 percent of the world's carbon per year, environmentalists believe major losses to the jungle foliage would intensify global warming.
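The bistability idea at the center of this debate can be seen in a toy double-well model (a deliberately simplified sketch, not the paper's actual complexity-science model): two stable states separated by an unstable tipping point, so where the system ends up depends on which side of the threshold it starts.

```python
def step(x, a=0.4, dt=0.01):
    # Toy bistable dynamics dx/dt = x(1 - x)(x - a):
    # stable states at x=0 ("savannah") and x=1 ("forest"),
    # with an unstable tipping point at x=a.
    return x + dt * x * (1 - x) * (x - a)

def settle(x0, steps=20000):
    """Integrate the toy system forward until it reaches a steady state."""
    x = x0
    for _ in range(steps):
        x = step(x)
    return x

print(round(settle(0.41), 3))  # just above the threshold: ends near 1 (forest)
print(round(settle(0.39), 3))  # just below: collapses toward 0 (savannah)
```

In a bistable system like this one, crossing the threshold locks in the new state until conditions reverse, which is exactly why the earlier tipping-point claims were alarming, and why removing the bistability matters.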
A scorcher of a heat wave broke temperature records across the West and helped fuel new wildfires.
Arctic melt season has begun in earnest, with sea ice at its lowest June volume on record.
Billions of people call cities home, and they're going to get a lot hotter because of climate change.
Weather-related disruptions at so-called "chokepoints" could limit global food supplies and push up prices.
A major heat wave that helped fuel deadly wildfires in Portugal was made more likely by global warming.
Large wildfires are becoming increasingly common and severe in boreal forests around the world.
Planes have been grounded in Phoenix as life-threatening heat descends across the Southwest.
The Texas metropolis has more casualties and property loss from floods than any other locality in the U.S.
The Sierra Nevada continues to sport a deep snowpack after an epic winter.
The number of large wildfires burning up swaths of the Great Plains rose 350 percent over 30 years.
A new study finds a sharp increase in the risk of frequent flooding events in coastal areas of the U.S.
Even the modest rise in global temperatures to date has made India's already sweltering heat waves deadlier.
Protecting the open ocean from industrial uses is key to addressing climate change, a new study says.
Beachgoers are flip-flopping along a rebuilt boardwalk that reflects a coastal reimagination underway.
Sea level rise and more severe storms are overwhelming U.S. coastal communities.
Above-average hurricane activity is expected in the Atlantic this hurricane season, despite a potential El Niño.
Extreme weather is estimated to cut production of major crops by 23 percent over the next 30 years.
An El Niño could form again this summer, just a year after the demise of one of the strongest on record.
New Jersey's working class are forgotten as the federal government funds fixes for wealthier neighbors.
After years of widespread, intense drought, the U.S. is experiencing its smallest drought footprint since 2000.
Jellyfish belong to one of the oldest extant animal phyla, the Cnidaria. The first Cnidaria appear in the fossil record 600 million years ago, preceding the Cambrian explosion. They are an extremely successful group present in all marine environments and some freshwater environments. In contrast to many animal phyla in which vision is a primary sense, cnidarians do not, generally, employ image-forming eyes. One small class stands alone: the Cubozoa. Cubomedusae are commonly known as box jellyfish. They possess image-forming eyes (Coates et al., 2001) which certainly evolved independently from those of other metazoans. Cubomedusae therefore offer a unique perspective on the evolution of image-forming eyes. This literature review collects, into one place, what is known about the multiple eye types of box jellyfish, cubomedusan life history and ecology, and the sensory and neural systems of box jellyfish. Here I discuss how these features set cubomedusae apart from scyphomedusae and hydromedusae. Knowledge in these areas is sparse; the work done to date inspires increased efforts.
January 26, 2018

Good Code Speaks The Customer's Language

Something we devote time to on the Codemanship TDD training course is the importance of choosing good names for the stuff in our code. Names are the best way to convey the meaning - the intent - of our code. A good method name clearly and concisely describes what that method does. A good class name clearly describes what that class represents. A good interface name clearly describes what role an object's playing when it implements that interface. And so on. I strongly encourage developers to write code using the language of the customer. Not only should other developers be able to understand your code, your customers should be able to follow the gist of it, too. Take this piece of mystery code: What is this for? What the heck is a "Place Repository" when it's at home? For whom or for what are we "allocating" places? Perhaps a look at the original user story will shed some light. The passenger selects the flight they want to reserve a seat on. They choose the seat by row and seat number (e.g., row A, seat 1) and reserve it. We create a reservation for that passenger in that seat. Now the mist clears. Let's refactor the code so that it speaks the customer's language. This code does exactly what it did before, but makes a lot more sense now. The impact of choosing better names can be profound, in terms of making the code easier to understand and therefore easier to change. And it's something we all need to work much harder at.
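The post's code screenshots aren't reproduced here, but a hypothetical before-and-after sketch in Python (names invented from the user story above, not the post's actual code) shows the kind of renaming it advocates:

```python
# Before: jargon that forces the reader to guess the intent.
# class PlaceRepository:
#     def allocate(self, ref, row, number): ...

# After: the same behaviour, renamed in the customer's language.
class ReservationBook:
    """Passengers reserve seats on flights - the words the customer uses."""

    def __init__(self):
        self.reservations = {}

    def reserve_seat(self, passenger, flight, row, seat_number):
        """Create a reservation for the passenger in the chosen seat."""
        seat = (flight, row, seat_number)
        if seat in self.reservations:
            raise ValueError(f"Seat {row}{seat_number} on {flight} is taken")
        self.reservations[seat] = passenger

book = ReservationBook()
book.reserve_seat("A. Passenger", "FL123", "A", 1)
```

The renamed version reads almost like the user story itself, which is the point: a customer could follow the gist of `reserve_seat(passenger, flight, row, seat_number)` without knowing what a "repository" is.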
Electricity Produced From Ocean Currents

A group in Taiwan headed by Professor Chen Yang-Yih, senior vice president of Taiwan's National Sun Yat-sen University, has succeeded in generating power from an ocean current. "No experiment made over any oceanic current has been successful in the world until now, but we have been successful, and we are able to produce 26.31 kilowatts of power with 1.27 meters per second of ocean current," Yang-Yih said. The process works by towing a generator into the ocean. Blades are then dropped into the sea from a multifunctional platform, where they capture energy from the current, which is converted to electricity. Yang-Yih's generator is capable of producing 50 kilowatts of power at maximum capacity and is effective even in winter months when the current slows down. Yang-Yih explained that because ocean currents are continuous, they are sustainable sources of energy.
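As a rough plausibility check (not from the article): the kinetic power available to a turbine in a flow is P = ½ρAv³ times a conversion efficiency, so the reported figures imply a sizeable swept area. The 35% efficiency below is an assumption for illustration only.

```python
RHO_SEAWATER = 1025.0  # kg/m^3

def current_power_watts(swept_area_m2, speed_m_s, efficiency):
    """Power extracted from a flow: P = 0.5 * rho * A * v^3 * efficiency."""
    return 0.5 * RHO_SEAWATER * swept_area_m2 * speed_m_s ** 3 * efficiency

# What swept area would the reported 26.31 kW at 1.27 m/s imply,
# assuming a (hypothetical) 35% conversion efficiency?
target_w, v, eta = 26.31e3, 1.27, 0.35
area = target_w / (0.5 * RHO_SEAWATER * v ** 3 * eta)
print(f"implied swept area: roughly {area:.0f} m^2")
```

The v³ dependence is why current speed matters so much: halving the flow speed cuts the available power by a factor of eight, which makes sustained winter operation a notable claim.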
There exists a set with no elements. This axiom describes the empty set.

If every element of X is an element of Y and every element of Y is an element of X, then X = Y. This axiom states that if sets X and Y have exactly the same elements, then they are the same set. This is also the definition of equality of sets.

Let P(x) be a property of x. For any set A there is a set B such that x ∈ B if and only if x ∈ A and P(x). This axiom states that if a property P(x) of elements x of a set A can be identified, then a subset B of the original set can be constructed that contains only the elements of A that have the property. For example, one property of integers is that an integer is either even or not even. Given the existence of the set of integers and the property of being even, a set containing all even integers can be constructed.

For any A and B, there is a set C such that x ∈ C if and only if x = A or x = B. This axiom says that unordered pairs can be created.

For any set S, there exists a set U such that x ∈ U if and only if x ∈ A for some A ∈ S. This axiom says that the union of any two or more sets can be formed. It is understood here that set S is a set containing other sets.

For any set S there exists a set P such that X ∈ P if and only if X ⊆ S. This axiom states that a power set can be created for any set. A power set is a set that contains all subsets of a set.

Let C be a collection of nonempty sets. Then we can choose a member from each set in that collection. In other words, there exists a function f defined on C with the property that, for each set S in the collection, f(S) is a member of S. This axiom states that a member can be selected from each set in a collection, even an infinite one. It is sometimes called Zermelo's axiom of choice.

There is a set of which the null set is a member, and such that if any set is a member, the union of it and its unit set is also a member. This axiom guarantees the existence of at least one infinite set, the set of natural numbers.

Any function whose domain is a set has a range which is also a set. This axiom guarantees that if the input to a function is a set, then the output of the function is also a set.

Every non-empty set A contains an element that is disjoint from A. One consequence of this axiom is that a set may not be a member of itself. It is also called the axiom of regularity.

All Math Words Encyclopedia is a service of Life is a Story Problem LLC. Copyright © 2018 Life is a Story Problem LLC. All rights reserved. This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License
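Several of these axioms have direct finite analogues in programming. A short Python sketch (finite sets only, of course) mirrors separation, union, and the power set:

```python
from itertools import chain, combinations

A = frozenset({1, 2, 3, 4})

# Axiom schema of separation: B = {x in A : P(x)}, with P(x) = "x is even".
B = frozenset(x for x in A if x % 2 == 0)

# Axiom of union: the union of all sets belonging to a collection S.
S = {frozenset({1, 2}), frozenset({2, 3})}
U = frozenset(chain.from_iterable(S))

def power_set(s):
    """All subsets of s, including the empty set and s itself."""
    items = list(s)
    return {frozenset(c) for c in chain.from_iterable(
        combinations(items, r) for r in range(len(items) + 1))}

print(sorted(B))            # the even members of A
print(sorted(U))            # the union {1, 2, 3}
print(len(power_set(A)))    # 2^4 = 16 subsets
```

A set with four elements has 2⁴ = 16 subsets, and in general the power set of an n-element set has 2ⁿ members, which is where the name comes from.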
Fundamental Data Types, Declarations, Definitions and Expressions

This chapter begins by describing the different kinds of Fundamental Data Type (FDT) in C++. The chapter then introduces identifiers (constant and variable) and the declaration and definition of identifiers. This is followed by discussion of assignment; initialisation of identifiers, expressions and statements; operators; and type conversions. Basic input from the keyboard and output to the screen is covered, along with the use of manipulators for formatting input and output. The topic of casting (the process of converting an object of one data type to an object of another data type) is discussed. The chapter concludes by examining the auto, extern, register and static storage class specifiers for specifying exactly how a variable is to be stored, and the asm declaration for directly integrating assembly language code into C++ code.

Keywords: Character Constant, Header File, ASCII Character, Void Main, Extern Specifier
A new type of adhesive that combines the bonding chemistry of shellfish with a bio-based polymer has been shown to perform as well as commercially available products and can be easily degraded, representing a potential non-toxic alternative. “Adhesives releasing toxins including carcinogenic formaldehyde are almost everywhere in our homes and offices. The plywood in our walls, the chairs we sit on, and the carpet beneath our feet are all off-gassing reactive chemicals,” said Jonathan Wilker, a professor of chemistry and materials engineering at Purdue University. “Most of these glues are also permanent, preventing disassembly and recycling of electronics, furniture and automobiles. In order to develop the next generation of advanced adhesives we have turned to biology for inspiration.” Mussels extend hair-like fibers that attach to surfaces using plaques of adhesive. Proteins in the glue contain the amino acid DOPA, which harbors the chemistry needed to facilitate the “cross-linking” of protein molecules, providing strength and adhesion. Purdue researchers have now combined this bonding chemistry of mussel proteins with a polymer called poly(lactic acid), or PLA, a bio-based polymer that can be derived from corn. The adhesive was created by harnessing the chemistry of compounds called catechols, contained in DOPA. “We found the adhesive bonding to be appreciable and comparable to several petroleum-based commercial glues,” Wilker said. Findings are detailed in a research paper published online Jan. 4 in the journal Macromolecules and to appear in an upcoming print issue of the journal. The paper was authored by Wilker and graduate students Courtney L. Jenkins and Heather M. Siebert in Purdue’s Department of Chemistry. Jenkins is now an assistant professor of chemistry at Ball State University.
“Results presented here show that a promising new adhesive system can be derived from a renewable resource, display high-strength bonding, and easily degrade in a controlled fashion,” Wilker said. “Particularly unique was the ability to debond this adhesive under mild conditions.” Early adhesives were made of natural materials such as starch, but have been replaced in recent decades with synthetic glues possessing superior performance. About 9 billion kilograms of glue are now manufactured annually in the United States, with nearly 4 billion kilograms containing formaldehyde. “The detrimental health and environmental effects of synthetic glues are becoming more of a concern, with alternatives being developed,” Wilker said. “Renewable, nontoxic, and removable adhesives are thus in great demand to decrease our exposure to pollutants as well as waste in landfills.” The researchers tested the adhesive by measuring the force needed to pull apart metal and plastic plates bonded together, finding that it compared favorably with various commercial products. Unlike synthetic glues, however, the adhesive can be easily degraded in water. “This new system may help lead us toward nontoxic materials sourced from nature, capable of being broken down into benign components, and enhanced recyclability of the products all around us,” Wilker said.
What text editor should I use?

What is a text editor, and why does it matter which one I use? Text editors are programs that type simple text without the sort of formatting a word processor will so rudely slip in. No Comic Sans, no forced margins, no line breaks (I just tested this with a line of Python, and yep, I can make a line of code that will wrap around the planet if I want). A text editor is just you and your ASCII, absent bells, whistles, or beauty. As you start out programming, you'll quickly find your text editor is your best friend. Or your frenemy, depending on how coding is going that day. It's essential to start figuring out which text editor works best for you. Like most tools, the basics of every text editor are the same. They all have a place to enter text (because, of course), most feature syntax-based color coding, and virtually all feature hot keys and intuitive text features to lighten the load of a long coding project. There are already plenty of blog posts on what kinds of text editors to use, but I happen to be retaking One Month's Python course at the moment, and felt like this would be a good opportunity to test out a few different ones (despite the fact that Eric expressly tells us to work with Sublime Text; we students are rebels). I'll mostly be looking at Mac-based editors (or cross-platform editors that work on Mac), because that's the type of machine I'm working on. When you're starting out coding, it's also best to give yourself a little flexibility in terms of the tools you use; you don't want to limit yourself to working on one platform because you never know where you'll be working. I'm also going to try to focus on editors that will be good for beginners.
This is because that's where I am with coding (and that's where we all need to start). (A brief aside before we start: I am ethically obligated by the higher order of people who write about text editors to point out at this point that text editors aren't the same as IDEs, or Integrated Development Environments. IDEs are more like Swiss Army Knives, whereas text editors are like screwdrivers. Word screwdrivers. A couple of the text editors we look at will tread the boundary between these.)

Sublime Text: $70 (or unlimited free trial)

This is the first editor I wrote code in, and there's a soft spot in my heart for it. It passes what I think is the most essential test for any text editor, which is that it's intuitive to start using. You just open up a file as you would with any interface, and can begin coding. The extra features with it are pretty bog-standard things like code folding. (What's code folding, I wondered - can I make code origami? Imagine my disappointment when I found out it just hides lines of code when I'm not actively working on them. Useful, but no cranes for me.) I like the dive-in-and-begin aspects of Sublime Text. If you're used to typing in a word processor, Sublime Text is a pretty solid introductory text editor. There's also an open secret with Sublime Text: While the program isn't free, it comes with an unlimited trial period. You should absolutely buy a copy if you love using it, but I like that there's no deadline bearing down on me to make that decision.

Vim: Free

I'll be honest: Vim scares the crap out of me. Even downloading and installing Vim is fairly difficult, which makes it a tough text editor to touch if you're new to programming.
That's not to say that Vim is bad - far from it. Vim is a great text editor; it's free, heavily customizable, has a huge community of users and a long history of use. You can make Vim work the way you want it to. It is so useful, in fact, that it's occasionally compared to an IDE, because it has tools aplenty. Vim just won't hold your hand. In fact, it sort of slaps your hand away while shouting at you, "Get up! Learn to walk on your own!" If Sublime Text is the cozy programming home I feel comfortable putting my feet up in, Vim is an enormous mansion set high atop a hill with a heavy iron gate between it and me. All those tools, all that customization means there's a pretty steep learning curve, which makes it kind of a nonstarter for a beginning programmer. In fairness, Vim's designers are up front about its difficulty; personally, I'll save real play on this for when I'm writing more advanced code.

Coda: $99; One Week Trial

I really liked playing around with Coda. This is another tool that feels more like it's leaning toward an IDE than a text editor; in fact, despite what they say on their website, I'd go so far as to call it an IDE. It's heavy on features like a built-in Terminal interface, SSH connectivity, and controls for pushing code automatically to a hub. It's not exactly bells-and-whistles-free, but a lot of the features are easy enough to figure out and are essential tools for developing a web app. My favorite aspect of Coda, which you won't find in almost any text editor, is a preview button that lets you see what the code you're writing will look like live. This is a major time saver compared to pushing code, running it on a server, failing, pushing again, etc. There's definitely a bit of a learning curve for using Coda.
So, if you're just looking for a tool that lets you dive in and start writing some code, this is probably not the way to go. But with a little experimenting, it has some pretty powerful features you'll want anyway. Worth the investment if you're an intermediate coder who's going to be sticking with it for a while.

Atom: Free

Atom is a groovy text editor to work with. Its interface has a similar feel to Sublime Text's, but the iconography of its file structure is ever so slightly more intuitive. It also has a convenient hotkey to list all available command functions. What makes Atom so cool to use, though, is that it's open source, completely (and easily) hackable, and entirely user friendly. There isn't any learning curve with it. You can dive right in and start entering code - but as you become more advanced as a programmer, you can make Atom a more complex text editor for your needs. Basically, it's like getting a knife that you can later turn into a scalpel and then into a LASIK tool.

So which of these is the best? From my perspective, which is to say the perspective of a novice, a good text editor is one that allows me to dive in and start coding, while also giving me room to grow and get more experience as part of a broader community. It's what I like to call the bike shop problem. When you walk into a bike shop for the first time, odds are pretty good it'll be a bit intimidating with all the experts walking around talking the talk. Odds are good you just want to get on a bike and go. The rest of the stuff you can learn later as you become more of an expert, but if you need all that expertise just to get on the bike, you'll never get started.
With this criterion, Atom is the best program on this list for letting you get started. It also gives you room to grow. Atom has a large community of users, just like a more intimidating program like Vim, but it also gives me room to start working with it right away. It's intuitive and easy to use, but also expansive and flexible to the needs of its programmer. To me, this is a great feature of any program, especially one that I know I need to use as a long-term tool. Part of the frustration of working with a tool is the FOMO of it all. Am I really getting the maximum functionality out of my text editor? Is this the best possible tool I could be using? Atom clears that up by letting me build from a simple text editor to a more complex one.
By remotely "combing" the atmosphere with a custom laser-based instrument, researchers from the National Institute of Standards and Technology (NIST), in collaboration with researchers from the National Oceanic and Atmospheric Administration (NOAA), have developed a new technique that can accurately measure—over a sizeable distance—amounts of several of the major "greenhouse" gases implicated in climate change. The technique potentially could be used in several ways to support research on atmospheric greenhouse gases. It can provide accurate data to support ongoing and future satellite monitoring of the composition of the atmosphere. With development, more portable systems based on the technology could provide very accurate, continuous regional monitoring of these gases over kilometer scales—a capability lacking with current monitoring techniques.

[Photo illustration of the NIST experiment using a pair of laser frequency combs (depicted as rainbow-colored cartoons) to detect the simultaneous signatures of several 'greenhouse' gases along a 2-kilometer path between a NIST laboratory roof and a nearby mesa. Each comb 'tooth' represents a different frequency of light. To identify gases in the atmosphere, researchers measured the amount of comb light absorbed at different frequencies along the path. Credit: Burrus and Irvine/NIST]

In the recent demonstration,* NIST's pair of laser frequency combs measured the simultaneous signatures of several greenhouse gases—including carbon dioxide, methane and water vapor—along a 2-kilometer path between a NIST laboratory roof in Boulder, Colo., and a nearby mesa. Frequency combs are laser-generated tools made up of a large number of very precisely defined frequencies that are evenly spaced, like the teeth on a pocket comb. Each comb "tooth" represents an individual color, or frequency, enabling very accurate measurements of the characteristic absorption signatures of different gas molecules of interest.
Researchers identified gases in the atmosphere by measuring the amount of comb light absorbed at different frequencies during its trip from the NIST lab roof to a mirror on the mesa and back to a detector in a lab. Because the optical frequencies are too high to be measured directly, the researchers borrowed a trick from early radio. They created two combs with slightly different spacing between the teeth. Mixing light from these dual frequency combs together creates a "beat" frequency shifted down to the radio band, low enough to be measured. This was the first demonstration of the technique over long distances outdoors. Remote sensing of atmospheric gases—from a satellite, for instance—can be performed with conventional instruments called spectrometers, but while satellite instruments have global coverage, they sample specific regions on Earth infrequently. Therefore, regional measurements are made with ground-based point sensors, which have a range that can be measured in meters and varies with wind conditions. There are no portable sensors that can measure multiple gases at long range with consistent results. The NIST comb system was built to detect gases, including carbon dioxide, methane, and water over 2 kilometers. In principle, the dual-comb technique could detect an even wider range of gases over many kilometers. Accuracy in the measured atmospheric transmission is assured by the well-defined position of each frequency comb tooth. Because the technique makes repeated measurements rapidly over the same path, it is immune to signal distortions caused by atmospheric turbulence. And because the comb measurements can be averaged over the entire path length rather than relying on a few spot measurements, the comb method is better matched to the scale of atmospheric transport models. In the demonstration, the research team collected data continuously for three days under varied weather conditions. 
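The down-conversion trick can be put in numbers (the values below are illustrative, not the instrument's actual parameters): tooth n of one comb sits at n·f_rep, tooth n of the other at n·(f_rep + δ), so their beat note lands at n·δ and a wide optical span is compressed into the radio band by the factor δ/f_rep.

```python
f_rep = 100e6   # tooth spacing of comb 1, in Hz (illustrative)
delta = 100.0   # difference in tooth spacing between the two combs, in Hz

def beat_note(n):
    """RF beat of the n-th tooth pair: n*(f_rep + delta) - n*f_rep = n*delta."""
    return n * (f_rep + delta) - n * f_rep

def rf_image_of_span(optical_span_hz):
    """An optical span maps into the RF band, compressed by delta / f_rep."""
    return optical_span_hz * delta / f_rep

span = 10e12  # ~10 THz of optical spectrum around the absorption bands
print(f"{rf_image_of_span(span) / 1e6:.0f} MHz of radio-frequency spectrum")
```

For the mapping to be unambiguous, the compressed span has to fit below f_rep/2, which is what constrains how large δ can be for a given optical bandwidth.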
The results were comparable to data collected by a nearby point sensor under well-mixed atmospheric conditions. The comb measurements were also very precise—with uncertainty of less than 1 part per million for carbon dioxide, for example, obtained in five minutes. That's precise enough to ensure detection of small increases in trace gases due to large, distributed sources such as cities. Future systems should be able to achieve even better sensitivities over shorter timescales. Overall, the study results suggest that the dual comb technique is ideally suited to precise, reproducible sensing of trace gases in the atmosphere and can support the development of accurate models for use in global, satellite-based greenhouse gas monitoring. NIST researchers now plan to optimize the comb system by boosting power to improve sensitivity and expanding spectral coverage to identify additional gases. Portable frequency comb systems** could eventually support regional gas monitoring at costs comparable to point sensors, the researchers say, but over the kilometer scales relevant to many transport models and to monitoring of distributed sources such as large cities. * G.B. Rieker, F.R. Giorgetta, W.C. Swann, J. Kofler, A.M. Zolot, L.C. Sinclair, E. Baumann, C. Cromer, G. Petron, C. Sweeney, P.P. Tans, I. Coddington and N.R. Newbury. Frequency comb-based remote sensing of greenhouse gases over kilometer air paths. Optica. Vol. 1, Issue 5. Posted online Oct. 29, 2014. DOI: 10.1364/OPTICA.1.000290. ** See "Portable Frequency Comb Rolls Out of the Lab" at http://www.nist.gov/pml/div686/sources_detectors/portable_frequency_comb.cfm Laura Ost | EurekAlert! 
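The underlying retrieval, inferring a gas amount from tooth-by-tooth absorption, follows the Beer-Lambert law. The sketch below is illustrative only; the cross-sections, number density, and noise level are assumed values, not data from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Beer-Lambert retrieval. Transmission at comb tooth i is
# T_i = exp(-sigma_i * N * L), where sigma_i is a known absorption
# cross-section, N the molecule number density and L the path length.
# Since -ln(T_i) is linear in N, a least-squares fit recovers N.

L = 2.0e5                                        # path length, cm (2 km)
sigma = np.array([2.0, 5.0, 9.0, 4.0]) * 1e-23   # cross-sections, cm^2 (assumed)
N_true = 1.0e16                                  # molecules per cm^3 (assumed)

# Simulated measurement with 0.1% multiplicative noise on transmission:
T_meas = np.exp(-sigma * N_true * L) * (1 + 1e-3 * rng.standard_normal(sigma.size))

A = -np.log(T_meas)                              # absorbance per tooth
N_fit = np.linalg.lstsq(np.c_[sigma * L], A, rcond=None)[0][0]
print(f"retrieved N = {N_fit:.3e} molecules/cm^3")
```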
infrared, infrared frequency (noun): the infrared region of the electromagnetic spectrum; electromagnetic wave frequencies below the visible range. "They could sense radiation in the infrared."
IRD scientists have revealed, in an article just published in Nature, that the cooling event known in the Northern Hemisphere as the Younger Dryas (about 12 000 years B.P.) was expressed in the Pacific by the absence of any South Pacific Convergence Zone activity and the movement of tropical waters closer to the Equator. This observation shows the interaction which occurs between the low and high latitudes and provides boundaries relevant for building ocean-atmosphere climatic models. Geochemical analyses (determining strontium/calcium ratios and oxygen isotope levels) were performed on a coral core sample from Vanuatu. The fossil coral, belonging to a single species, Diploastrea heliopora, bears a record that is a key to tracing fluctuations in sea-surface temperature and salinity and in rainfall over that cold period. The Younger Dryas period, about 12 000 years ago, was marked by a sharp cooling event in the Northern Hemisphere. Temperatures there fell by between 2 and 10°C. The East Antarctic in contrast experienced an episode of warming. Data have up to now been insufficient or too inconclusive to enable palaeoclimatologists to track this climatic event in the southern temperate regions and the tropics. An IRD research campaign took a 2 m drill core sample from the isle of Espiritu Santo, Vanuatu, found to contain a giant fossil coral of a single species, Diploastrea heliopora, well preserved in growth position. The specimen age was estimated at between 12 449 and 11 719 calendar years, a span covering nearly the entire Younger Dryas. This unique fossil provides clear evidence of the spatial signature this major climatic cooling event left in the tropics. Mineral skeleton growth of these corals is a steady few millimetres per year over many centuries, which offers a precise record of ancient environmental conditions.
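The Sr/Ca thermometry mentioned above rests on a roughly linear, inverse relationship between coral Sr/Ca and sea-surface temperature. A minimal sketch follows, with an assumed calibration: the intercept and slope below are typical published magnitudes, not the calibration used for this Vanuatu core.

```python
# Hedged sketch of a coral Sr/Ca paleothermometer: Sr/Ca = a + b * SST,
# with Sr/Ca falling as sea-surface temperature (SST) rises. The
# calibration constants are illustrative, not the study's own values.

A_INTERCEPT = 10.5   # Sr/Ca in mmol/mol at 0 degrees C (assumed)
B_SLOPE = -0.06      # mmol/mol per degree C (assumed)

def sst_from_sr_ca(sr_ca_mmol_per_mol):
    """Invert the linear calibration to get SST in degrees C."""
    return (sr_ca_mmol_per_mol - A_INTERCEPT) / B_SLOPE

# A downcore Sr/Ca rise of 0.12 mmol/mol reads as about 2 degrees C of cooling:
print(sst_from_sr_ca(8.94) - sst_from_sr_ca(9.06))
```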
Concentrations of chemical elements such as strontium, and the oxygen isotope ratios, in the fossil skeleton indicate the sea surface temperature (SST) the corals experienced while alive. Corals of the genus Porites, which grow by about 1 cm per year, are the type most used as paleothermometers, but the Diploastrea used in this study have the advantage of growing more slowly. Moreover, there is only one species of this marker, Diploastrea heliopora, which eliminates inter-specific differences, always a source of uncertainty. Bénédicte Robert | EurekAlert!
NASA has fast-tracked a mission to an asteroid so full of metallic iron and nickel that it’s worth £8,000 quadrillion – and would crash Earth’s economy instantly. The 124-mile wide Psyche asteroid contains so much metal it would alter life on Earth permanently if we could drag it back here. NASA’s Psyche mission is now set to launch in 2022 – and will target a metal-rich asteroid known as 16 Psyche, arriving in 2026. American companies such as Planetary Resources – backed by Titanic director James Cameron – are already planning to send robotic vehicles to mine precious metals and rare resources from asteroids. Planetary Resources describes asteroids as ‘the low-hanging fruit of our solar system,’ and says, ‘a single 500-metre platinum-rich asteroid contains more platinum than has been mined in the history of humanity.’ Psyche principal investigator Lindy Elkins-Tanton of Arizona State University in Tempe said earlier this year that the 124-mile wide asteroid would be worth the astronomical sum if we could somehow drag it back to Earth. Elkins-Tanton said, ‘Even if we could grab a big metal piece and drag it back here … what would you do? ‘Could you kind of sit on it and hide it and control the global resource – kind of like diamonds are controlled corporately – and protect your market? What if you decided you were going to bring it back and you were just going to solve the metal resource problems of humankind for all time? This is wild speculation obviously.’ The Psyche mission will explore one of the most intriguing targets in the main asteroid belt – a giant metal asteroid, known as 16 Psyche, about three times farther away from the sun than is the Earth. This asteroid measures about 130 miles in diameter and, unlike most other asteroids that are rocky or icy bodies, is thought to be comprised mostly of metallic iron and nickel, similar to Earth’s core.
Scientists wonder whether Psyche could be an exposed core of an early planet that could have been as large as Mars, but which lost its rocky outer layers due to a number of violent collisions billions of years ago. The mission will help scientists understand how planets and other bodies separated into their layers – including cores, mantles and crusts – early in their histories. ‘This is an opportunity to explore a new type of world – not one of rock or ice, but of metal,’ said Psyche Principal Investigator Lindy Elkins-Tanton of Arizona State University in Tempe. ‘16 Psyche is the only known object of its kind in the solar system, and this is the only way humans will ever visit a core. We learn about inner space by visiting outer space.’
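The quoted valuation invites a back-of-envelope check of the asteroid's sheer mass. The sketch below treats 16 Psyche as a uniform iron-nickel sphere at the article's 124-mile diameter; the density and the uniform metal composition are assumptions, and the real body is irregular.

```python
import math

# Back-of-envelope only: model the asteroid as a solid iron-nickel sphere.
DIAMETER_KM = 200     # ~124 miles, roughly as quoted in the article
RHO_FE_NI = 7.9e3     # density of iron-nickel alloy, kg/m^3 (assumed uniform)

radius_m = DIAMETER_KM * 1e3 / 2
volume_m3 = 4.0 / 3.0 * math.pi * radius_m ** 3
mass_kg = RHO_FE_NI * volume_m3
print(f"mass ~ {mass_kg:.2e} kg")   # tens of billions of billions of kilograms
```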
Around 300 million years ago, the landmass that is now North America collided with Gondwana, a supercontinent comprised of present-day Africa and South America. That clash of continents lifted tons of rock high above the surrounding terrain to form the southern end of the Appalachian Mountains now seen in Alabama, Tennessee and Georgia. A team of geophysicists has reconstructed the terminal phase of that collision and developed a new picture of how it unfolded. The study, led by Brown University researchers, used seismic monitoring stations to create a sonogram-like image of the crust beneath the southern U.S., near the southern base of the Appalachians. The research shows that Gondwana crust was thrust atop North America when the two continents collided, sliding northward as much as 300 kilometers before the two continents separated and drifted apart about 200 million years ago. The process revealed by the study looks a lot like the process that is building the Himalayas today, as the Eurasian continent is pushing atop the Indian subcontinent. "We show that a continental collision that occurred 300 million years ago looks a lot like the collision we see in the Himalayas today," said Karen Fischer, a professor in Brown's Department of Earth, Environmental and Planetary Sciences and a co-author of the study. "This is the best-documented case I'm aware of in which the final suture between ancient continental crusts has a geometry similar to the present-day India-Eurasia crustal contact beneath the Himalayas." The research was led by Emily Hopper, who earned her doctorate from Brown in 2016 and is now a postdoctoral fellow at the Lamont-Doherty Earth Observatory of Columbia University. The study is published online in the journal Geology. For the study, the research team placed 85 seismic monitoring stations across southern Georgia and parts of Florida, North Carolina and Tennessee.
The researchers also used data from the Earthscope Transportable Array, a rolling array of seismic stations that made its way across the contiguous U.S. between 2005 and 2015. In all, 374 seismic stations recorded the faint vibrational waves from distant earthquakes as they traveled through the rocks beneath. Acoustic energy from earthquakes can travel through the Earth as different types of waves, including shear waves, which oscillate perpendicular to the direction of propagation, and compressional waves, which oscillate in the same direction as they propagate. By analyzing the extent to which shear waves convert to compressional waves when they hit a contrast in rock properties, the researchers could create a seismic image of the subsurface crust. The study detected a thin continuous layer of rock that starts near the surface and slopes gently to the south to depths of approximately 20 kilometers, in which earthquake waves travel faster than in the surrounding rocks. That layer stretches southward about 300 kilometers from central Georgia to northern Florida. It spans about 360 kilometers east to west, from the central part of South Carolina, across all of Georgia and into eastern Alabama. The most likely explanation for that anomalous layer, the researchers say, is that it's a shear zone -- the contact along which the Gondwanan plate slid atop the proto-North American plate. "Where these two crustal blocks came into contact, there would have been tremendous deformation that aligned the mineral grains in the rocks and changed the propagation velocities of the seismic waves," Fischer said. "So our preferred explanation for this continuous layer is that we're seeing mineral alignment on the shear zone between these two plates." The presence of this widespread, gently sloped shear zone paints a new picture of the final stages of the collision between the two continents.
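The shear-to-compressional conversions described above locate an interface through the delay between the direct arrival and the converted phase. A simplified, near-vertical-incidence sketch follows, using typical crustal velocities rather than the study's own velocity model.

```python
# Hedged sketch: for a near-vertically travelling wave, a phase converted at
# depth d is separated from the direct arrival by dt = d * (1/Vs - 1/Vp),
# so the interface depth follows from the delay time. Velocities are
# generic crustal values (assumed), not those used in the Georgia study.

VP = 6.5e3   # crustal P-wave speed, m/s (assumed)
VS = 3.7e3   # crustal S-wave speed, m/s (assumed)

def interface_depth(dt_seconds):
    """Depth in metres of the converting interface, from the delay time."""
    return dt_seconds / (1.0 / VS - 1.0 / VP)

# A delay of ~2.3 s corresponds to roughly 20 km, the depth the detected
# shear zone reaches at its southern end:
print(interface_depth(2.3) / 1e3, "km")
```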
Researchers had long thought that proto-North America and Gondwana collided on a shear zone with a much steeper slope, leading some to the view that the two plates slid laterally past each other. But such a steep shear zone would be in stark contrast to the 300 kilometers of nearly horizontal shear zone found in this new study. The geometry of the contact detected in the study is similar to the process that is currently raising the Himalayas. In that collision, Fischer says, Eurasian crust has overtopped the Indian subcontinent by a distance similar to that found in the Appalachians. That process continues today, raising the Himalayas by 4 to 10 millimeters per year. The similarity between the two events tells scientists that there's consistency over time in the way mountains are built, Fischer says. "When we think of mountain-building, the Himalayas are the archetype," she said. "It's interesting that a collision that took place 300 million years ago is very similar to one happening today." And that has implications for understanding the way the Earth's crust has evolved. "What that tells us is that the way the crust deforms -- where it's weak, where it's strong and how it accommodates deformation -- has been fairly uniform through time," Fischer said. "The crust couldn't have been much hotter; it couldn't have been much colder; and it couldn't have had a very different distribution of fluids, as all of these things influence the way the crust deforms." Hopper and Fischer's co-authors on the paper were Lara Wagner of the Carnegie Institution for Science and Robert Hawman of the University of Georgia. The work was supported by the National Science Foundation's Earthscope Program (EAR-0844276, EAR-0844186 and EAR-0844154).
Kevin Stacey | EurekAlert!
A huge circulation pattern in the Atlantic Ocean took a starring role in the 2004 movie "The Day After Tomorrow." In that fictional tale the global oceanic current suddenly stops and New York City freezes over.

A NASA scientist's final scientific paper, published posthumously this month, reveals new insights into one of the most complex challenges of Earth's climate: understanding and predicting future atmospheric levels of greenhouse ...

Greenhouse gases were the main driver of climate throughout the warmest period of the past 66 million years, providing insight into the drivers behind long-term climate change.

80 years since the first calculations showed that the Earth was warming due to rising greenhouse gas emissions: some people argue that concern for global warming is a modern phenomenon, and that scientists and environmental activists invented these worries to raise awareness of rising greenhouse gases from burning fossil fuels.

Climate change and food security are two of the greatest challenges facing humanity. At Swinburne, Professor Mark Adams is exploring how legumes can play a role in sustainable agriculture.

Sometimes dairy scientist Michel Wattiaux approaches his research like a cop at a traffic stop. He uses a breath analyzer to check for problematic products of fermentation.

Climate scientists say an internal U.S. Environmental Protection Agency memo on how officials should talk to the public about global warming doesn't reflect reality.

Controlling greenhouse gas emissions in the coming decades could substantially reduce the consequences of carbon releases from thawing permafrost during the next 300 years, according to a new paper published this week in ...
Chapter Four: The Earth's Interior

WHAT CAN WE LEARN FROM THE STUDY OF SEISMIC WAVES?

Seismic reflection is the return of some of the energy of seismic waves to the Earth's surface after the waves bounce off a rock boundary. Waves reflect off the boundary between two rock layers and travel back to a recording instrument called a seismograph. The time the waves take to reflect and return gives the depth of the boundary.

Seismic refraction is the bending of seismic waves as they pass from one material to another. As a seismic wave strikes a rock boundary, much of the wave's energy travels through it, changing velocity and direction. Seismograph stations receive the direct wave before the refracted one. Many seismographs can be set up in a row to record the velocities and depths of the layers. In a thick layer of rock, the increasing pressure with depth changes the wave velocity, making the wave paths curve.

WHAT IS INSIDE THE EARTH?

The crust is the outer layer of rock, which forms a thin skin on the Earth. The mantle is a thick shell of rock that separates the crust from the core. The core is the centre of the Earth; it is made of metal and generates the magnetic field.

The crust is thinner beneath oceans than beneath continents, and seismic waves travel faster through oceanic crust (7 km/s) than through continental crust (6 km/s). The continental crust is made of granite and other rocks covered by a layer of sedimentary rocks. Felsic rocks (high in feldspar and silicon) make up the continental crust; mafic rocks (high in magnesium and iron) make up the oceanic crust. The Mohorovicic discontinuity is the boundary that separates the crust from the mantle.

From the way seismic waves travel, the mantle is believed to be solid rock with magma chambers of melted rock in the upper mantle. Ultramafic rock, a dense igneous rock made up of ferromagnesian minerals, is believed to be what the upper mantle is made of.
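The reflection timing described in these notes reduces to a one-line calculation: the recorded two-way travel time, multiplied by the average wave speed and halved, gives the boundary depth. A minimal sketch with an assumed velocity:

```python
# Sketch of seismic-reflection timing: the recorder measures the two-way
# travel time t of a wave that goes down to a rock boundary and back, so
# the boundary depth is d = v * t / 2. The velocity is an assumed average
# for the overlying rock, not a measured value.

V = 6000.0   # average seismic wave speed in the upper crust, m/s (assumed)

def reflector_depth(two_way_time_s):
    """Depth in metres of the reflecting rock boundary."""
    return V * two_way_time_s / 2.0

print(reflector_depth(4.0))   # a 4 s echo puts the boundary at 12,000 m
```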
The crust and uppermost mantle together form the brittle lithosphere. The lithosphere is the basis of the plate tectonics theory, and its lower boundary is marked by the depth at which seismic waves slow down. In the asthenosphere, the rocks are closer to their melting point than the rocks above or below the zone; some of the rocks may even be partially melted, forming a crystal-and-liquid slush. Because the rocks are close to their melting point, this zone may be important as a place where magma is likely to be generated, and the rocks here have relatively little strength and are therefore likely to flow. This contrast gives the asthenosphere plasticity relative to the surrounding mantle rocks. There is argument as to whether the asthenosphere even exists under continental crust as it does in oceanic areas. Scientists think that the chemical make-up of the mantle rock is about the same throughout, but its physical properties vary with depth.
As the climate warmed at the end of the last glacial period, a rapid reversal in temperature, the Younger Dryas (YD) event, briefly returned much of the North Atlantic region to near full-glacial conditions. The event was associated with climate reversals in many other areas of the Northern Hemisphere and also with warming over and near Antarctica. However, the expression of the YD in the mid- to low latitudes of the Southern Hemisphere (and the southwest Pacific region in particular) is much more controversial. Here we show that the Waiho Loop advance of the Franz Josef Glacier in New Zealand was not a YD event, as previously thought, and that the adjacent ocean warmed throughout the YD.
Sydney, Jul 9 – More than 2,600 of the world’s top marine scientists Monday warned coral reefs around the world were in rapid decline and urged immediate global action on climate change to save what remains. The consensus statement at the International Coral Reef Symposium, being held in the northeastern Australian city of Cairns, stressed that the livelihoods of millions of people were at risk. Coral reefs provide food and work for countless coastal inhabitants globally, generate significant revenues through tourism and function as a natural breakwater for waves and storms, they said. The statement, endorsed by the forum attendees and other marine scientists, called for measures to head off escalating damage caused by rising sea temperatures, ocean acidification, overfishing and pollution from the land. “There is a window of opportunity for the world to act on climate change, but it is closing rapidly,” said Terry Hughes, convener of the symposium, held every four years, which attracted some 2,000 scientists from 80 countries. Jeremy Jackson, senior scientist at the Smithsonian Institution in the United States, said reefs around the world have seen severe declines in coral cover over the last several decades. In the Caribbean, for example, 75-85 percent of the coral cover has been lost in the last 35 years. Even the Great Barrier Reef in Australia, the best-protected reef ecosystem on the planet, has witnessed a 50 percent decline in the last 50 years. Jackson said while climate change was exacerbating the problem, it was also causing increased droughts, agricultural failure and sea level rises at increasingly faster rates, which implied huge problems for society. “That means what’s good for reefs is also critically important for people and we should wake up to that fact,” he said.
“The future of coral reefs isn’t a marine version of tree-hugging but a central problem for humanity.” Stephen Palumbi, director of Stanford University’s Hopkins Marine Station, said addressing local threats, such as poor land development and unsustainable fishing practices, was also critical. More than 85 percent of reefs in Asia’s “Coral Triangle” are directly threatened by human activities such as coastal development, pollution, and overfishing, according to a report launched at the forum earlier Monday. The Coral Triangle covers Indonesia, Malaysia, Papua New Guinea, the Philippines, The Solomon Islands, and East Timor and contains nearly 30 percent of the world’s reefs and more than 3,000 species of fish. International Society for Reef Studies president Robert Richmond stressed that the consensus statement was not just another effort at documenting the mounting problems. Instead he said it was also about making the best available science available to leaders worldwide. “The scientific community has an enormous amount of research showing we have a problem. But right now, we are like doctors diagnosing a patient’s disease, but not prescribing any effective cures,” he said. “We have to start more actively engaging the process and supporting public officials with real-world prescriptions for success.”
New insights into the working mechanism of perovskite-based solar cells could help these solar cells play a prominent role among the renewable energy carriers in future. Conventional silicon solar cells could have an inexpensive competitor in the near future. Researchers from the Max Planck Institute for Polymer Research in Mainz, together with scientists from Switzerland and Spain, have examined the working principle of an innovative type of solar cell in which an organic-inorganic perovskite compound acts as the light absorber. The scientists observed that charge carriers accumulate in a certain layer in these photovoltaic elements. If this jam can be dissolved, the already considerable efficiency of these solar cells could be further improved.

Perovskite-based solar cells could play a prominent role among the renewable energy carriers in future. Unlike the established silicon solar cells, which are costly and energy-intensive to manufacture, these cells are made of cheap materials and are simple to produce. Renewable energies are an essential element of the energy turnaround – however, their use must be worthwhile. Particularly in less sunny countries like Germany this is often not the case with solar cells. Perovskite solar cells, which have been investigated for some years now, could soon change this if their efficiency can be further improved. This task is the focus of a research team headed by Rüdiger Berger at the Max Planck Institute for Polymer Research in Mainz.

Perovskite solar cells generate electricity with the help of a layer consisting of an organic-inorganic compound which crystallizes in a perovskite structure. The ions in this structure form a cubic arrangement, i.e. a rectangular lattice. “Perovskite materials absorb light extremely well,” says Rüdiger Berger, explaining how the solar cell works.
“The light absorbed by the perovskite layer snatches an electron from an atom, creating a positively charged electron vacancy, which we also refer to as a ‘hole’. Then all we have to do is channel the electrons to one electrode and the holes to another one – and electricity is produced.”

Holes do not reach their electrodes as fast as electrons

In the solar cell, the perovskite structure rests on a porous layer of titanium oxide which collects the electrons generated under illumination and transports them to the lower electrode. Above the perovskite there is a layer consisting of the organic hole conductor Spiro-OMeTAD, which transports the holes to the upper electrode. “The many different layers in the solar cell are extremely important. They ensure the effective separation of the two charge carriers,” says Rüdiger Berger’s colleague Stefan Weber. “However, the charge carriers have to overcome a small barrier every time they jump from one material to the other. These barriers act like a construction site on a busy freeway where vehicles back up. This charge-transport jam in the solar cell leads to losses and thus to a lower efficiency.” In several test series, the researchers found that a strong accumulation of positive charges takes place in the perovskite layer upon exposure to light. They suppose that the reason for this positive charging is that the titanium dioxide electron conductor works much more effectively than the hole conductor. The holes do not reach their electrode as fast as the electrons and accumulate on the way. The excess of positive charges in the perovskite layer then generates an opposing electric field which slows down the charge transport even further.

A more effective hole conductor could increase the efficiency of the solar cell

To observe the charge transport within the solar cell, the Mainz-based researchers cleaved the cell in the middle and polished the broken surface until it was smooth using a finely focussed ion beam.
With the help of Kelvin probe force microscopy, they mapped the electrical potential in each layer of the solar cell. From this potential map, the researchers could derive the field distribution and thus the charge transport through the different layers of the cell. “We could for the first time correlate the charge distribution with the individual material layers in the cell,” says Rüdiger Berger. “The jam of positive charges in the illuminated perovskite layer tells us that the transport through the hole conductor currently constitutes the bottleneck for the efficiency of the solar cell.” If a more effective hole conductor could be used, the efficiency of the perovskite solar cells could be increased well above the 20% mark and thus offer a genuine alternative to the conventional silicon solar cells. Publication: Victor W. Bergmann, et al., “Real-space observation of unbalanced charge distribution inside a perovskite-sensitized solar cell,” Nature Communications 5, Article number: 5001; doi:10.1038/ncomms6001. Source: Max Planck Institute. Image: MPI for Polymer Research.
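The analysis step described above — deriving the field distribution from a measured potential map — amounts to taking the negative spatial derivative of the potential, E = -dV/dx. A minimal sketch of that step; the positions, layer span, and potential values below are purely illustrative assumptions, not data from the paper:

```python
# Positions (nm) across a hypothetical TiO2 / perovskite / hole-conductor
# cross-section, and illustrative Kelvin-probe potential readings (V).
xs = [0, 10, 20, 30, 40, 50, 60, 70, 80, 90]
vs = [0.0, 0.05, 0.15, 0.40, 0.55, 0.60, 0.62, 0.70, 0.85, 0.90]

# E = -dV/dx, here with simple forward differences between sample points.
fields = [-(v2 - v1) / (x2 - x1)
          for (x1, v1), (x2, v2) in zip(zip(xs, vs), zip(xs[1:], vs[1:]))]

# The interval with the largest |E| marks the steepest potential drop --
# in the paper's terms, where charge has piled up at a layer boundary.
i = max(range(len(fields)), key=lambda k: abs(fields[k]))
print(xs[i], xs[i + 1])  # interval with the strongest field
```

With these toy numbers the steepest drop falls between the 20 nm and 30 nm sample points; on real cross-section data the same derivative picks out the interface where the holes accumulate.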
Neural network, or neural computing: a computer architecture modeled upon the human brain's interconnected system of neurons. Neural networks imitate the brain's ability to sort out patterns and learn from trial and error, discerning and extracting the relationships that underlie the data with which it is presented. Most neural networks are software simulations run on conventional computers. In neural computers, transistor circuits serve as the neurons and variable resistors act as the interconnection between axons and dendrites (see nervous system). A neural network on an integrated circuit, with 1,024 silicon "neurons," has also been developed. Each neuron in the network has one or more inputs and produces an output; each input has a weighting factor, which modifies the value entering the neuron. The neuron mathematically manipulates the inputs, and outputs the result. The neural network is simply neurons joined together, with the output from one neuron becoming input to others until the final output is reached. The network learns when examples (with known results) are presented to it; the weighting factors are adjusted—either through human intervention or by a programmed algorithm—to bring the final output closer to the known result. Neural networks are good at providing very fast, very close approximations of the correct answer. Although they are not as well suited as conventional computers for performing mathematical calculations or moving and comparing alphabetic characters, neural networks excel at recognizing shapes or patterns, learning from experience, or sorting relevant data from irrelevant. Their applications can be categorized into classification, recognition and identification, assessment, monitoring and control, and forecasting and prediction.
Among the tasks for which they are well suited are handwriting recognition, foreign language translation, process control, financial forecasting, medical data interpretation, artificial intelligence research, and parallel processing implementations of conventional processing tasks. In an ironic reversal, neural networks are being used to model disorders of the brain in an effort to discover better therapeutic strategies.
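The entry's description of a neuron — weighted inputs combined into an output, with weighting factors adjusted until the output matches a known result — can be sketched as a single perceptron-style neuron. Everything below (the step activation, the learning rate, the toy AND-gate training examples) is an illustrative assumption, not something specified in the entry:

```python
# A single artificial "neuron": weighted inputs, a threshold output, and a
# simple weight-adjustment rule nudging the output toward the known result.
def step(x):
    return 1 if x >= 0 else 0

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # logical AND
w = [0.0, 0.0]   # weighting factors, one per input
b = 0.0          # bias term
rate = 0.1       # illustrative learning rate

for _ in range(20):                       # repeated presentation of examples
    for (x1, x2), target in data:
        out = step(w[0] * x1 + w[1] * x2 + b)
        err = target - out                # difference from the known result
        w[0] += rate * err * x1           # adjust the weighting factors...
        w[1] += rate * err * x2
        b += rate * err                   # ...to bring output closer to target

print([step(w[0] * x1 + w[1] * x2 + b) for (x1, x2), _ in data])
```

After training, the neuron outputs 1 only for the (1, 1) input — it has learned the AND pattern purely from examples with known results, as the entry describes.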
In the early 20th century, Henry Ford built a car manufacturing plant on a 2,000-acre tract of land along the Rouge River in Michigan. Built to mass-produce automobiles more efficiently, the Rouge housed the equipment for developing each phase of a car, including blast furnaces, a steel mill and a glass plant. More than 90 miles of railroad track and conveyor belts kept Ford's car assembly line running. The Rouge model was lauded as the most efficient method of production at a time when bigger meant better. Nanogears like these may replace current manufacturing processes. The size of Ford's assembly plant would look strange to those born and raised in the 21st century. In the next 50 years, machines will get increasingly smaller -- so small that thousands of these tiny machines would fit into the period at the end of this sentence. Within a few decades, we will use these nanomachines to manufacture consumer goods at the molecular level, piecing together one atom or molecule at a time to make baseballs, telephones and cars. This is the goal of nanotechnology. As televisions, airplanes and computers revolutionized the world in the last century, scientists claim that nanotechnology will have an even more profound effect on the next century. Nanotechnology is an umbrella term that covers many areas of research dealing with objects that are measured in nanometers. A nanometer (nm) is a billionth of a meter, or a millionth of a millimeter. In this edition of How Stuff Will Work, you will learn how nanomachines will manufacture products, and what impact nanotechnology will have on various industries in the coming decades. Building with Atoms Atoms are the building blocks for all matter in our universe. You and everything around you are made of atoms. Nature has perfected the science of manufacturing matter molecularly. For instance, our bodies are assembled in a specific manner from millions of living cells. Cells are nature's nanomachines. 
Humans still have a lot to learn about the idea of constructing materials on such a small scale. Consumer goods that we buy are made by pushing piles of atoms together in a bulky, imprecise manner. Imagine if we could manipulate each individual atom of an object. That's the basic idea of nanotechnology, and many scientists believe that we are only a few decades away from achieving it. (Photo courtesy NASA, Ames) Nanogears no more than a nanometer wide could be used to construct a matter compiler, which could be fed raw material to arrange atoms and build a macro-scale structure. Nanotechnology is a hybrid science combining engineering and chemistry. Atoms and molecules stick together because they have complementary shapes that lock together, or charges that attract. Just like with magnets, a positively charged atom will stick to a negatively charged atom. As millions of these atoms are pieced together by nanomachines, a specific product will begin to take shape. The goal of nanotechnology is to manipulate atoms individually and place them in a pattern to produce a desired structure. There are three steps to achieving nanotechnology-produced goods:
- Scientists must be able to manipulate individual atoms. This means that they will have to develop a technique to grab single atoms and move them to desired positions. In 1990, IBM researchers showed that it is possible to manipulate single atoms. They positioned 35 xenon atoms on the surface of a nickel crystal, using a scanning tunneling microscope. These positioned atoms spelled out the letters "IBM." You can view this nano-logo on this page.
- The next step will be to develop nanoscopic machines, called assemblers, that can be programmed to manipulate atoms and molecules at will. It would take thousands of years for a single assembler to produce any kind of material one atom at a time. Trillions of assemblers will be needed to develop products in a viable time frame.
- In order to create enough assemblers to build consumer goods, some nanomachines, called replicators, will be programmed to build more assemblers.
Trillions of assemblers and replicators will fill an area smaller than a cubic millimeter, and will still be too small for us to see with the naked eye. Assemblers and replicators will work together like hands to automatically construct products, and will eventually replace all traditional labor methods. This will vastly decrease manufacturing costs, thereby making consumer goods plentiful, cheaper and stronger. In the next section, you'll find out how nanotechnology will impact every facet of society, from medicine to computers.
A New Industrial Revolution
In January 2000, U.S. President Bill Clinton requested a $227-million increase in the government's investment in nanotechnology research and development, which included a major initiative called the National Nanotechnology Initiative (NNI). This initiative nearly doubled America's 2000 budget investment in nanotechnology, bringing the total invested in nanotechnology to $497 million for the 2001 national budget. In a written statement, White House officials said that "nanotechnology is the new frontier and its potential impact is compelling." About 70 percent of the new nanotechnology funding will go to university research efforts, which will help meet the demand for workers with nanoscale science and engineering skills. The initiative will also fund the projects of several governmental agencies, including the National Science Foundation, the Department of Defense, the Department of Energy, the National Institutes of Health, NASA and the National Institute of Standards and Technology. Much of the research will take more than 20 years to complete, but the process itself could touch off a new industrial revolution. Nanotechnology is likely to change the way almost everything, including medicine, computers and cars, is designed and constructed.
Nanotechnology is anywhere from five to 15 years in the future, and we won't see dramatic changes in our world right away. But let's take a look at the potential effects of nanotechnology:
- The first products made from nanomachines will be stronger fibers. Eventually, we will be able to replicate anything, including diamonds, water and food. Famine could be eradicated by machines that fabricate foods to feed the hungry.
- In the computer industry, the ability to shrink the size of transistors on silicon microprocessors will soon reach its limits. Nanotechnology will be needed to create a new generation of computer components. Molecular computers could contain storage devices capable of storing trillions of bytes of information in a structure the size of a sugar cube.
- Nanotechnology may have its biggest impact on the medical industry. Patients will drink fluids containing nanorobots programmed to attack and reconstruct the molecular structure of cancer cells and viruses to make them harmless. There's even speculation that nanorobots could slow or reverse the aging process, and life expectancy could increase significantly. Nanorobots could also be programmed to perform delicate surgeries -- such nanosurgeons could work at a level a thousand times more precise than the sharpest scalpel. By working on such a small scale, a nanorobot could operate without leaving the scars that conventional surgery does. Additionally, nanorobots could change your physical appearance. They could be programmed to perform cosmetic surgery, rearranging your atoms to change your ears, nose, eye color or any other physical feature you wish to alter.
- Nanotechnology has the potential to have a positive effect on the environment. For instance, airborne nanorobots could be programmed to rebuild the thinning ozone layer. Contaminants could be automatically removed from water sources, and oil spills could be cleaned up instantly. Manufacturing materials using the bottom-up method of nanotechnology also creates less pollution than conventional manufacturing processes. Our dependence on non-renewable resources would diminish with nanotechnology: many resources could simply be constructed by nanomachines, so cutting down trees, mining coal or drilling for oil may no longer be necessary.
The promises of nanotechnology sound great, don't they? Maybe even unbelievable? But researchers say that we will achieve these capabilities within the next century. And if nanotechnology is, in fact, realized, it might be the human race's greatest scientific achievement yet, completely changing every aspect of the way we live. For more information on nanotechnology and its uses, check out the links on the next page.
posted by Anonymous A 0.108 M sample of a weak acid is 4.16% ionized in solution. What is the hydroxide concentration of this solution? I know that I set up an ICE table. The equation would be weak acid + H2O goes to OH + the acid. What do I do with the 4.16%? How does that fit into the problem? Do I use that to figure out the Kb value? Let's let HA stand for the weak acid. HA + H2O ==> H3O^+ + A^- You also know that H2O ==> H^+ + OH^- and Kw = (H^+)(OH^-) So if the solution is 0.108 M and the solution is 4.16% ionized, then the (H^+) = 0.108 x 0.0416 = ?? Now use (H^+) and Kw to calculate (OH^-). Check my work.
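Following the tutor's outline numerically — the only added assumption is Kw = 1.0e-14 (its room-temperature value):

```python
# Numeric check of the tutor's outline above (Kw = 1.0e-14 assumed).
conc = 0.108            # mol/L of the weak acid
frac_ionized = 0.0416   # 4.16% ionized

h_plus = conc * frac_ionized    # (H^+) from percent ionization
oh_minus = 1.0e-14 / h_plus     # from Kw = (H^+)(OH^-)

print(f"(H+)  = {h_plus:.3e} M")    # prints (H+)  = 4.493e-03 M
print(f"(OH-) = {oh_minus:.3e} M")  # prints (OH-) = 2.226e-12 M
```

So the hydroxide concentration comes out around 2.2 x 10^-12 M, confirming the solution is acidic, as expected for a partially ionized weak acid.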
Jellyfish might be a regular sight along the Indian coast, but they are a rare sight in lakes. What started as a regular field trip for a team of marine biologists turned into a voyage of discovery when they found a unique species of jellyfish, the upside-down jellyfish, in a small town called Armabada in Gujarat. Usually found in warmer coastal regions, mangrove forests, shallow lagoons or mud flats, upside-down jellyfish are a genus of true jellyfish and the only members of the family Cassiopeidae. They are so named because, unlike other jellyfish, they lie in this position on the sea bed. The team of marine biologists from the Wildlife Trust of India spotted this jellyfish during a regular field visit. Out of curiosity they went to explore a water body which was connected to the Gulf of Kutch through a small canal, while being separated from it by a bridge. As the scientists snorkeled in the water, they were amazed to see that the entire bottom of the lake was covered with these jellyfish. This is most likely the first time such a discovery has been made in India. “This is probably the first jellyfish lake to have been found in India. The concentration and density of jellyfish is very high here. You can even see them from outside during low tide and when the water is clear,” wildlife scientist BC Choudhury said.
These jellyfish position themselves to receive a large amount of sunlight because they harbour photosynthetic algae called zooxanthellae; that is the reason behind their unusual upside-down posture. Experts believe that this species has been spotted here mostly due to the smaller number of predators and weaker wave action in this lake. Also, this lake hosts jellyfish throughout the year, unlike other water bodies where they are seasonal. Further investigation led to the discovery of a group of turtles that have also inhabited the same lake. This incredible discovery gives hope that there might be more such unusual habitats in India and brings this lake into the spotlight, like the renowned jellyfish lake on Eil Malk Island in Palau.
If you sniff a rose this Valentine’s Day, your brain will recognize almost a hundred different molecules that collectively give the flower its heady scent – but how? Scientists are now discovering how the brain identifies odors and their mysterious counterparts, the pheromones. New research, to be presented today at the American Association for the Advancement of Science (AAAS) Annual Meeting and forthcoming in the journal Science, explains how the mouse brain is exquisitely tuned to recognize another mouse’s pheromone cocktail. Researchers say that most smells hover about 10 inches off the ground, placing the human nose at a disadvantage among those of most other mammals. Nonetheless, when smells do reach the neurons inside the nose, the human brain can distinguish from among the thousands of chemicals that make up odors, and scientists are beginning to understand just how the process works. In the last decade, the nose has been revealed as the site of a large family of sensory neurons, each of which specializes in a particular smell. Since this discovery, researchers have studied the olfactory system in rodents, following the axons that extend from neurons into the rodent brain. Their research shows that the axons from neurons with receptors for the same odor molecule congregate in the one or two glomeruli that are reserved for those axons. Glomeruli, which contain only axon terminals, are specialized structures in the olfactory bulb; the rodent brain has 2000 of them. By studying "odor maps" that show activity in certain glomeruli in response to different smells, Howard Hughes Medical Institute investigator Lawrence C. Katz of Duke University has found that each odor results in a pattern or "fingerprint," which humans and other mammals seem to use to distinguish from among different smells. Monica Amarelo | EurekAlert!
Can we construct a line segment, if we know: Leave us a comment of example and its solution (i.e. if it is still somewhat unclear...). To solve this example you need the following knowledge from mathematics. Next similar examples:
- Circle from string: Martin has a 628 mm long string. He makes a circle from it. Calculate the radius of the circle.
- Addition of Roman numbers: Add together and write as a decimal number: XXXIV + MDCCXII
- Angles 1: Is it true that neighboring angles have no common arm?
- Fraction to decimal: Write the fraction 3/22 as a decimal.
- Bus 14: Boatesville is 65.35 kilometers from Stanton. A bus traveling from Stanton is 24.13 kilometers from Boatesville. How far has the bus traveled?
- Fraction and a decimal: Write as a fraction and a decimal. One and two plus three and five hundredths
- Zdeněk picked up 15 l of water from a full 100-liter barrel. Write as a fraction what part of the water he took.
- Simplify the following problem and express as a decimal: 5.68-[5-(2.69+5.65-3.89)/0.5]
- Pie II: Vili ate three pieces of pie. If one piece is 1/8, how much pie did he eat?
- Write the mixed number as an improper fraction: 166 2/3
- In fractions: An ant climbs 2/5 of the pole in the first hour and climbs 1/4 of the pole in the next hour. What part of the pole does the ant climb in two hours?
- Forest nursery: In the forest nursery after winter, they found that 1/10 of the stems had died. To replace them, they planted 193 new spruces. How many spruces are in the forest nursery?
- The hotel has p floors; each floor has i rooms, of which a third are single and the others are double. Express the number of beds in the hotel.
- Crates 2: One crate will hold 50 oranges. If Bob needs to ship 932 oranges, how many crates will he need?
- Valid number: Round 453874528 to 2 significant figures.
- Pizza 4: Marcus ate half a pizza on Monday night. He then ate one third of the remaining pizza on Tuesday.
Which of the following expressions shows how much pizza Marcus ate in total?
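The "Pizza 4" problem above is easy to check with exact fractions; the computation below assumes the usual reading that Tuesday's third is taken from the *remaining* half:

```python
from fractions import Fraction

# Half a pizza on Monday, then one third of the remainder on Tuesday.
monday = Fraction(1, 2)
tuesday = Fraction(1, 3) * (1 - monday)   # one third of the remaining half
total = monday + tuesday

print(total)   # prints 2/3
```

Using Fraction keeps the arithmetic exact, so 1/2 + 1/6 comes out as 2/3 rather than a rounded decimal.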
Virus Inhibits Immune Response of Caterpillars and Plants It is well known that certain wasps suppress the immune systems of their caterpillar hosts so they can successfully raise their young within those hosts. Now researchers at Penn State show that, in addition to suppressing caterpillar immune systems, wasps also suppress the defense mechanisms of the plants on which the caterpillars feed, which ensures that the caterpillars will continue to provide a suitable environment for the wasps' offspring. According to Gary Felton, professor and head of entomology, a type of virus, called a polydnavirus, resides within the ovaries of the female wasps and, when injected into caterpillar hosts, is responsible for suppressing both the caterpillar immune response and the plant defense mechanism. "We found that not only do polydnaviruses suppress the immune systems of the caterpillars, but they also attenuate the defense responses of the caterpillars' host plant," said Felton. "The polydnavirus suppresses glucose oxidase in the saliva of caterpillars, which normally elicits plant defenses. Suppressing plant defenses in this way benefits the wasp and the virus by improving the wasp's development and survival within the caterpillar." The team — which included Ching-Wen Tan, doctoral student in entomology — placed parasitized and non-parasitized caterpillars onto tomato plants. After allowing the caterpillars to feed on the plants for 10 hours, the researchers harvested the remaining leaves and examined them for enzyme and gene expression activity associated with a defense response. "Using molecular and biochemical techniques, we found that parasitized caterpillars induced significantly lower enzyme activity and defense-gene expression among the tomato plants than the non-parasitized caterpillars," said Tan. 
"We also determined that the caterpillar's saliva, which was reduced in glucose oxidase by the polydnavirus, was responsible for inducing these lower defense responses in the plants." The results appear online in the Proceedings of the National Academy of Sciences. According to Felton, the team's results support the findings of another study by Feng Zhu of Wageningen University in The Netherlands and colleagues that appeared in the same issue of PNAS. "That study also shows that the polydnavirus of a parasitoid-caterpillar system — a different system from ours — has a similar ability to influence host plant immunity," said Felton. "In nature, a significant percentage of caterpillars are parasitized by wasps. In addition, tens of thousands of wasp species harbor polydnaviruses. As a result, there is strong potential for our results and the results of the Feng Zhu team to be very common among many plant-herbivore interactions." Tan adds that the results of the two studies suggest that the interaction between plants and their natural enemies is much more complex than previously thought. "Our study demonstrates the important role that microorganisms play in plant-insect interactions," she said. "The ability of polydnaviruses, which possess less than a couple of hundred genes, to so dramatically affect wasps, caterpillars and plants is remarkable." The Penn State team plans to examine whether other parasitic wasps and viruses that can parasitize a much broader range of caterpillar species also can suppress plant defenses in a similar capacity. This article has been republished from materials provided by Penn State. Note: material may have been edited for length and content. For further information, please contact the cited source. Symbiotic polydnavirus of a parasite manipulates caterpillar and plant immunity. Ching-Wen Tan, Michelle Peiffer, Kelli Hoover, Cristina Rosa, Flor E. Acevedo and Gary W. Felton. PNAS April 30, 2018. 
201717934; published ahead of print April 30, 2018. https://doi.org/10.1073/pnas.1717934115.
Asked by: Nina Cunningham, Oxfordshire By definition, fog has a visibility of less than 1km, but it can get much thicker than that. The Met Office visibility scale runs down to a Category X fog, where visibility is less than 20m. If fog gets mixed with industrial pollution, it becomes smog and can be thicker still. During the Great Smog of 1952, drivers couldn’t see their own headlights!
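The two thresholds quoted in the answer (fog means visibility below 1 km; Category X means below 20 m) can be written out directly. The function name and the label for non-fog conditions are illustrative assumptions — the Met Office scale has more categories than the two encoded here:

```python
def classify(visibility_m):
    """Classify visibility using only the two thresholds quoted above."""
    if visibility_m < 20:
        return "Category X fog"   # densest class on the Met Office scale
    if visibility_m < 1000:
        return "fog"              # by definition, visibility under 1 km
    return "not fog"              # 1 km or more

print(classify(15))     # prints Category X fog
print(classify(500))    # prints fog
print(classify(5000))   # prints not fog
```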
Geology Fail at India (January 21, 2017): Tiny insects debunk a widely-taught scenario about where India came from.
"Extraordinary" Radiocarbon Anomaly Found in Tree Rings (January 19, 2017): A tree ring sample from a bristlecone pine reveals something weird happened to the sun around 5480 BC.
Cracks in the Climate Consensus? (January 18, 2017): It's still dangerous to challenge Big Science about climate change. In the new political climate, a growing number of voices are willing to take the risk.
Mammals Ate Dinosaurs (January 13, 2017): Fossil shows a mammal with a big bite could have munched on small dinosaurs for lunch.
Moon Bombarded by Crashing Theories (January 10, 2017): Strange things happen on our nearest neighbor. Stranger things happen in the heads of theorists trying to figure out our nearest neighbor.
Pluto and Ceres: Young Dwarf Planets (December 28, 2016): With all its data downloaded, New Horizons continues to surprise astronomers with evidence of active geology and youth at Pluto.
Cave Climate Conclusions Compromised (December 22, 2016): Widely used to infer past climates, isotope measurements from stalactites and stalagmites in caves can mislead researchers.
Cassini Gets Higher Look at Saturn's Youth (December 10, 2016): Now entering its final dramatic high orbits, the Cassini spacecraft is finding unexpected things for an assumed old planet.
Proof of Dinosaur Feathers? (December 9, 2016): Opinions are swirling about an amazing piece of amber with enclosed feathers. Let's look at what is known so far.
Rapid Earth Changes in Historic Times (December 6, 2016): What happened to the Sahara desert? What's going on in Java, man? Geologists are surprised sometimes by recent major changes.
Trusting Science Experts Can Be Disastrous (December 5, 2016): Nobody pushes the values of science more than scientists. But trusting them doesn't always work out right.
More Original Protein Found in Older Bird Fossil (November 22, 2016): The new discovery from China dates back 130 million years on the evolutionary timeline.
Chicxulub Crater Reports Begin (November 19, 2016): Scientists who drilled into the large crater in southern Mexico have started interpreting the cores.
Our Super Moon Is Not an Accident (November 13, 2016): A finely-tuned collision to form Earth's vital moon is tantamount to a miracle.
Activity on Planets Suggests Youth (November 11, 2016): Can these processes really have gone on for billions of years?
Led by researchers from the Austrian Academy of Sciences and the University of Vienna, biologists from 13 different countries in Europe analysed 867 vegetation samples from 60 different summits sited in all major European mountain systems, first in 2001 and then again just seven years later in 2008. They found strong indications that, at a continental scale, cold-loving plants traditionally found in alpine regions are being pushed out of many habitats by warm-loving plants.

This alpine species (Nevadensia purpurea) could disappear from some European mountains in the next few decades. Credit: Harald Pauli

All 32 authors involved in the study used the same sampling procedures, enabling pan-continental comparisons to be made for the first time, here at the Austrian Hochschwab mountains. Credit: Harald Pauli

The GLORIA programme (Global Observation Research Initiative in Alpine Environments) is a network of more than 100 research teams distributed over six continents whose aim it is to monitor all alpine regions across the globe. Launched in 2001, it has implemented a long-term and standardised approach to the observation of alpine vegetation and its response to climate change. The GLORIA researchers will be returning to the same European sampling sites in 2015 to continue monitoring the effects of climate change on alpine vegetation. Further details: http://www.gloria.ac.at/

Continent-wide response of mountain vegetation to climate change. In: Nature Climate Change, 8 January 2012 (online ahead of print). DOI: 10.1038/NCLIMATE1329

Michael Gottfried | EurekAlert!
Visual Basic for Applications Reference

Select Case Statement

The Select Case statement syntax has these parts:

testexpression: Required. Any numeric expression or string expression.
expressionlist-n: Required if a Case appears. Delimited list of one or more of the following forms: expression, expression To expression, Is comparisonoperator expression. The To keyword specifies a range of values. If you use the To keyword, the smaller value must appear before To. Use the Is keyword with comparison operators (except Is and Like) to specify a range of values. If not supplied, the Is keyword is automatically inserted.
statements-n: Optional. One or more statements executed if testexpression matches any part of expressionlist-n.
elsestatements: Optional. One or more statements executed if testexpression doesn't match any of the Case clauses.

If testexpression matches any Case expressionlist expression, the statements following that Case clause are executed up to the next Case clause or, for the last clause, up to End Select. Control then passes to the statement following End Select. If testexpression matches an expressionlist expression in more than one Case clause, only the statements following the first match are executed.

The Case Else clause is used to indicate the elsestatements to be executed if no match is found between the testexpression and an expressionlist in any of the other Case selections. Although not required, it is a good idea to have a Case Else statement in your Select Case block to handle unforeseen testexpression values. If no Case expressionlist matches testexpression and there is no Case Else statement, execution continues at the statement following End Select.

You can use multiple expressions or ranges in each Case clause. For example, the following line is valid:

Case 1 To 4, 7 To 9, 11, 13, Is > MaxNumber

Note: The Is comparison operator is not the same as the Is keyword used in the Select Case statement.

You also can specify ranges and multiple expressions for character strings. In the following example, Case matches strings that are exactly equal to everything, strings that fall between nuts and soup in alphabetic order, and the current value of TestItem:

Case "everything", "nuts" To "soup", TestItem

Select Case statements can be nested. Each nested Select Case statement must have a matching End Select statement.
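A short, self-contained example may help tie the parts together; the variable names and ranges below are illustrative and not part of the reference itself:

```vb
Dim Number As Integer
Dim Result As String

Number = 8
Select Case Number          ' Evaluate Number.
    Case 1 To 5             ' Number between 1 and 5, inclusive.
        Result = "Between 1 and 5"
    Case 6, 7, 8            ' Number between 6 and 8.
        Result = "Between 6 and 8"
    Case 9 To 10            ' Number is 9 or 10.
        Result = "Greater than 8"
    Case Else               ' All other values.
        Result = "Not between 1 and 10"
End Select
' Result now holds "Between 6 and 8".
```

Because only the first matching Case runs, the value 8 is handled by the second clause even though it would also be a valid endpoint for a 8 To 10 range placed later.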
What is an Electron? Relativistic Electron Theory and Radiative Processes

Quantum theory originated from the remarkable behavior of electrons, light quanta and their interactions. One can take the point of view that a better understanding of quantum theory is a better understanding of the entities we call "electron" and "photon", rather than the quantum theory being the new abstract laws of nature that we have simply to accept. For problems and "paradoxes" in the quantum theory of measurement, it is important to ask what the objects are that we are trying to measure. Instead of saying that the electron or photon has particle behavior or wave behavior, we could say that we really do not know what they look like, but we can approximate them by a wave or a particle, better yet by a wave and a particle. It is worth recalling what the pioneers of the concepts of photon and electron said after a lifelong occupation with their own creation. A. Einstein (1955): "Every physicist thinks that he knows what a photon is. I spent my whole life to find out what a photon is, and I still don't know it." And P.A.M. Dirac: "I really spent my life mainly trying to find better equations for quantum electrodynamics, and so far without success, but I continue to work on it."2

Keywords: Quantum Theory, Coherent State, Dirac Equation, Rest Frame, Radiative Process

References
1. Einstein expressed the same sentiment in a number of other writings and letters in the 1950's.
2. P.A.M. Dirac, European Conference on Particle Physics, Budapest, July 1977.
3. P.A.M. Dirac, Proc. Roy. Soc. A 167, 148 (1938). See also the review: C. Teitelboim, D. Villarroel and Ch. G. van Weert, Riv. Nuovo Cimento 3, 1–64 (1980).
6. A. O. Barut, Proc. Clausthal Conference on Differential Geometric Methods in Physics 1980, Lecture Notes in Mathematics (1981).
7. E. Schrödinger, Sitzungsb. Preuss. Akad. Wiss. Phys.-Math. Kl. 24, 418 (1930); 3, 1 (1931).
14. A. O. Barut, in Groups, Systems and Many-body Physics, ed. P. Kramer et al. (Vieweg Verlag, 1980), Ch. VI, pp. 285–317.
The astronomy world has been abuzz recently with the discovery of a new object cutting through our solar system. Its path indicates it came from interstellar space: the first body of its kind ever observed.

Photobombing asteroids from our solar system have snuck their way into this deep image of the universe taken by NASA's Hubble Space Telescope.

Mining space rocks for valuable resources can become reality within two decades, according to J.L. Galache of Aten Engineering. However, still many challenges must be overcome to make it happen that soon.

When small pieces of rock hit the moon's surface at incredibly high speeds, they produce flashes of light detectable from Earth. Now, astronomers have measured their temperature for the first time, using a telescope funded ...

Curtin University planetary scientists have shed some light on the evolution of asteroids, which may help prevent future collisions of an incoming 'rubble pile' asteroid with Earth.

Fewer large near-Earth asteroids (NEAs) remain to be discovered than astronomers thought, according to a new analysis by planetary scientist Alan W. Harris of MoreData! in La Canada, California. Harris is presenting his results ...

The planet Mars shares its orbit with a handful of small asteroids, the so-called Trojans. Among them, one finds a unique group, all moving in very similar orbits, suggesting that they originated from the same object. But ...

ESA astronaut Luca Parmitano has been on Earth since his mission to the International Space Station in 2013, but "Lucaparmitano" is now back in space thanks to an Italian astronomer.

NASA is using an asteroid's close flyby to test Earth's warning network for incoming space rocks.

Analysing a mixture of earth samples and meteorites, scientists from the University of Bristol have shed new light on the sequence of events that led to the creation of the planets Earth and Mars.
A Study on the Bifurcation Buckling for Shallow Sinusoidal Arches (얕은 정현형 아치의 분기좌굴에 관한 연구)
- 김승덕, 권택진, 박지윤
- 한국전산구조공학회 (Computational Structural Engineering Institute of Korea), 1998

The equilibrium path of shallow sinusoidal arches supported by hinges at both ends is investigated. The displacement increment method is used to solve the nonlinear differential equations for these structures and to plot the equilibrium paths from the results. Using the equilibrium paths, the relations between the position of the buckling point and the buckling type are inferred for the case of sinusoidally distributed loads. From the result that the buckling type changes according to the normalized rise of the arch, it is also shown that the arch rise is the governing factor for the stability regions.
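The displacement increment idea can be illustrated with a one-degree-of-freedom snap-through model. The cubic load-displacement law below is a generic normalized toy model, not the arch equations from the paper: load control cannot pass the limit points, but stepping the displacement and evaluating the load traces the whole equilibrium path, including the unstable branch.

```python
import numpy as np

# Toy normalized load-displacement law for a snap-through system.
def load(w):
    return w**3 - 3.0 * w

w = np.linspace(-2.5, 2.5, 2001)   # prescribed displacement increments
P = load(w)                        # equilibrium load at each increment

# Limit (buckling) points: where dP/dw changes sign along the path.
dP = np.gradient(P, w)
limit_idx = np.where(np.sign(dP[:-1]) != np.sign(dP[1:]))[0]
limit_w = w[limit_idx]             # approximately w = -1 and w = +1
```

Between the two limit points the slope dP/dw is negative, which is exactly the unstable branch that a load-controlled sweep would snap across.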
Re: Okay seriously now (AAT again)
Pat Dooley (email@example.com)
13 Dec 1994 23:00:25 -0500

In article <firstname.lastname@example.org>, email@example.com (Russell Stewart) wrote:

>>Most aquatic mammals have reduced hindlegs (pinnipeds); and many have
>>completely lost their hindlegs (whales, porpoises, dolphins). Why then
>>would humans have grown EXTRA LONG hindlegs? Answer: the evolutionary
>>pressures were different. Long legs are a liability in the water.

>Good point. That's one weakness in the theory.

Please stop referring to modern humans when you talk about skeletal features that evolved since 3.5 mya. The fossil record gives us a better picture of human ancestors after the proposed aquatic phase. In this case, Lucy is one of the best reference points. Lucy, the first nearly complete skeleton of an Australopithecus afarensis, had feet that were broader and larger than ours (35% of leg length instead of 26%). Her gait was described by Roger Lewin as "not quite as bad as trying to walk on dry land wearing swimming flippers, but in the same direction." So, the legs were shorter and the feet much larger than today. If the AA was evolving towards a fully aquatic existence, that is what one might expect.
Sagebrush (Artemisia spp.) ecosystems constitute the single largest North American semiarid shrub ecosystem1 and provide vital ecological, hydrological, biological, agricultural, and recreational ecosystem services.2,3 However, disturbances such as livestock grazing, exotic species invasion, conversion to agriculture, urban expansion, energy development, and other development have historically altered and reduced these ecosystems,2,4–6 with loss in total spatial extent.3,7,8 Constant perturbations and changes to these systems are disrupting vital biological services, such as providing habitats for numerous sagebrush-obligate species, including the sage-grouse (Centrocercus spp.). This has severely affected sage-grouse populations across their ranges,3,9 leaving populations threatened with extirpation in some habitats where they historically persisted.3,10 While ecosystem-wide disturbances are having diverse impacts on sagebrush habitats today, climate change may ultimately represent the greatest future risk to this ecosystem.11–14 Both warming temperatures and changing precipitation patterns (such as increased winter precipitation falling as rain) will likely favor species other than sagebrush15 and increase sagebrush disturbance risk from fire, insects, diseases, and invasive species.11,16 Despite the vast area covered by this ecosystem and the numerous disturbance forces operating on the landscape, effective large-area monitoring and prediction tools have not been implemented, and widely accepted metrics to quantify and communicate disturbance magnitudes are not well developed.17–20 Disturbance monitoring capable of measuring, quantifying, and reporting change in metrics understood by land managers is critical to future successful management of this ecosystem.3,10,18,21,22

Optical remote sensing is still the most likely data source and tool for large-area monitoring of disturbance within the sagebrush ecosystem, supporting a framework that can offer relatively efficient and accurate analysis of change across a range of spatial and temporal scales.21,23,24 Sagebrush ecosystems represent a challenging remote-sensing environment because these semiarid shrublands have sparse and similar vegetal cover with high proportions of bare ground and a variety of soil reflectance properties.25,26 Despite these challenges, an optical remote-sensing signal capable of characterization exists for semiarid shrublands, and monitoring is feasible.26–32 Studies within the sagebrush ecosystem have demonstrated the ability of remote sensing to characterize more abrupt types of disturbance from fire33,34 and human development35,36 and gradual types of disturbance such as grazing37 and climate change.23 A comprehensive understanding of the relationship between remote-sensing change and gradual changes in sagebrush ecosystem components is still lacking; only a few studies have begun to explore that relationship.24,36,38–40 Further, even beyond the sagebrush ecosystem to semiarid systems in general, remote-sensing change studies have historically targeted the development of indices such as the normalized difference vegetation index (NDVI) or other similar approaches to understand change.41–43 These indices can be difficult to interpret and translate to on-the-ground understanding.44–46 Metrics that characterize changes that managers readily use in the field for real-time decisions, such as fractional vegetation predictions,21 would more likely ensure application of such products for daily management decisions and applications. Recent research has sought to reconcile this need, with approaches centered on using a single year of training data to parameterize a base characterization layer, which is then projected through several time periods using change vector analysis to identify what change is occurring.
This approach assumes change areas identified in the change vector process can be labeled using values from the base characterization layer.39,40 However, no research has tested this assumption by gathering repeated ground measurements over many time steps (seasons or years) to fully evaluate the ability of the change vector approach to detect fine-scale change within sagebrush ecosystems. Technological advances have also resulted in the development of higher-spatial-resolution sensors offering new potential for monitoring in sagebrush ecosystems at resolutions finer than Landsat.19,21,47–49 New spectral bands at finer spatial resolution can increase our ability to detect smaller changes and improve monitoring applications. Increased sensor resolution may allow for changes to be detected at more local scales, enhancing interpretation and understanding. Also, because ground-measurement approaches are often prohibitively expensive, high-resolution sensors offer the potential to extrapolate ground measurements across larger landscape models and also provide an operational surrogate for ground plot remeasurement. However, studies that explore the capabilities of higher-resolution sensors to complement and support component predictions derived at moderate spatial scales for change monitoring have not been completed. Downscaling of climate information such as precipitation also continues to evolve to better support more localized analysis. The release of new data with longer temporal records and at finer spatial scales provides new opportunities for defining the relationship between climate change and sagebrush ecosystem change. Specifically, the new release of DAYMET daily gridded surface climate data,50 which provides daily precipitation data at a 1-km spatial resolution, offers a new opportunity to explore potential finer-scale links of climate change to any observed ecosystem change.
We attempt to address these research gaps by capitalizing on advancements in high-resolution remote-sensing data availability, remote-sensing component prediction and change detection, and new availability of higher-spatial-resolution precipitation. Our goal was to explore whether component change and precipitation impacts can be detected across multiple scales of remote sensing in a sagebrush ecosystem. Ongoing ground and satellite monitoring of several focus areas in Wyoming provide the opportunity to explore change patterns from a variety of drivers. For this evaluation, we focus on one particular monitoring site, labeled “1.” Site 1 has had no observed potential change drivers during field visits or in any satellite images other than climate influences during the timeframe of this study, offering a good opportunity to examine ecosystem change driven only by variation in climatic conditions. We tracked component change in this sagebrush ecosystem across 4 years and six seasons (during the first 2 years) using multiyear satellite imagery and ground-based vegetation sampling. The spatial distribution and temporal change for fractional cover components of bare ground, herbaceous, litter, shrub, and sagebrush were quantified between 2008 and 2011. 
Our specific study objectives were to (1) determine the relationship between changing spatial and temporal extents of fractional component change as measured from three scales, including ground measurement, QuickBird (QB) 2.4-m satellite acquisitions, and Landsat 5 (LS) 30-m satellite acquisitions; (2) quantify, compare, and contrast observed changes of remote-sensing sagebrush ecosystem components across years and seasons with ground measurements; (3) test if remote-sensing components trained on a single base year (2008), and subsequently extended through time using change vector analysis (2009 to 2011), are sensitive enough to capture subtle ground-measured change over time; and (4) use DAYMET precipitation data to evaluate if precipitation changes correlate with annual and seasonal component change identified from ground measurement, QB predictions, and LS predictions.

Data and Methods

Our approach examined 2 years of seasonal sagebrush ecosystem change nested within 4 years of annual sagebrush ecosystem change using data collected from ground measurements and remote-sensing data from QB and LS. We measured proportional amounts of each of five sagebrush ecosystem fractional cover components (hereafter simply called components), including cover of bare ground, herbaceous, litter, sagebrush (all species), and shrub (all shrubs combined), as continuous fields in 1% intervals using both ground plots and satellite predictions. Using 2008 ground measurements, we produced QB and LS satellite data component predictions for the study area. The percent cover of each component was then both annually and seasonally updated only in areas that had spectrally changed from the 2008 base year or season. These updates were completed with regression trees (RT) using unchanged 2008 base areas as training sources. We collected field data in other years and seasons for evaluation of these predictions.
Correlation analysis was then conducted to explore relationships between various ground, satellite, and precipitation measurements. We explain each methodological step by section below.

The study was conducted in southwestern Wyoming, United States. One area (site 1) was selected as a focus area for intensive ground measurement coupled with QB and LS measurements (Fig. 1). This site represented one of 30 sites used for initial 2006 Wyoming sagebrush characterization.21 Site 1 is located southeast of Farson, Wyoming. It contains a range of topography, with elevations from 2026 to 2327 m and slopes up to 31 deg. It has predominantly sandy soils and contains part of the Farson sand dunes in the northeast corner. Vegetation is dominated by sagebrush shrubland, especially in the upland areas, with salt desert shrub species dominating in the lowland and sandy areas. Herbaceous areas range from typical grasses and forbs interspersed among shrubs to subirrigated meadows where a high subsurface water table in the sand dune areas creates higher-than-normal biomass productivity for these selected areas. This site is public land administered by the Bureau of Land Management and is typically grazed by cattle most of the summer. During our study, we observed no substantial differences in the amount or duration of grazing from year to year.

Baseline Data Collection

Plot selection and measurement

We segmented the QB imagery into spectrally similar polygon patches to identify sites for potential ground sampling. We also classified the image into 30 unsupervised clusters.
Segmented polygons were then intersected with the 30 clusters to identify the majority cluster class in each polygon, and 66 polygons representing the full range of spectral variability across the QB image were then selected.21 Ground measurements were conducted using ocular measurements at seven quadrats along each of two 30-m transects within each polygon plot.21,27 To ensure remeasurement was spatially over the same quadrat areas, we permanently staked the beginning and end of each transect. Cover was estimated from an overhead perspective (satellite), with the total cover of all vegetation and soil components summing to 100%. The shrub component represented all woody shrub species; the sagebrush component is a subset of the shrub component and represented only sagebrush species (Artemisia spp.); the herbaceous component represented all grasses (live and residual standing) and forbs; litter is the combined cover of dead standing woody vegetation and detached plant and animal organic matter; and the bare ground component represented any exposed soil or rocks. All individual quadrat cover estimates were made in 5% increments. Ground measurements were conducted annually on the same approximate dates, with QB image acquisition attempted as near these dates as possible. Plot measurements for 2008 to 2011 were conducted by the same two individuals over the same plots every year, except in 2011, when the alternate observer sampled all plots.

Image collection and preprocessing

QB images covering the study area were targeted seasonally (spring, summer, and fall) for 2008 and 2009, and annually during each of the summers of 2008 through 2011. Four-band multispectral images (visible blue, green, red, and near-infrared) were collected at 2.4-m resolution with a desired target of off-nadir view angle. Imagery was processed by Digital Globe to UTM using a bilinear resampling kernel.
We used the ERDAS 10 AutoSync tool to accomplish QB orthorectification, using 1-m National Agricultural Imagery Program imagery as the base. The AutoSync tool uses an automatic point-matching algorithm to generate hundreds of tie points between the reference image and the subject image to complete the geometric correction. This functionality is sensor specific and enhanced with the use of a digital elevation model (DEM). Subsequent years of QB imagery were registered to the orthorectified 2008 image base to ensure spatial consistency, using the same process as described above. QB images were converted to at-sensor reflectance using an approach similar to that used for converting the LS imagery to at-sensor reflectance.51,52 Results were then converted to 8-bit files using a scaling factor of 400 to remain consistent with the way the LS was processed. Multiseason and multiyear LS imagery from 2008 to 2011 was acquired for path 37, row 31 and processed using the automated level 1 product generation system. Through this process, the scenes were converted to at-sensor reflectance, projected to Albers equal area, and terrain corrected.40,52–54 The positional accuracy of all LS and QB images was carefully controlled to ensure direct comparisons of multiple dates and image platforms were spatially accurate.

The 2008 base spatial distributions of five components of sagebrush habitat, including cover of bare ground, herbaceous, litter, shrub, and sagebrush, were estimated at 1% intervals for both QB and LS using RT models. For QB, 120 ground transects, with four additional mini plots centered over very high component value areas, were used for RT training. Vegetation characteristics were sampled at seven quadrats along 30-m transects in sample polygons. The mean value for each of the variables of interest was calculated across all seven 1-m quadrats within a transect.
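The 8-bit scaling step described above can be sketched in a few lines. This is a minimal illustration of the stated factor of 400 only; the function name is ours, and a real processing chain would also track saturation flags and per-band gains:

```python
import numpy as np

# Scale at-sensor reflectance (unitless, 0-1) into an 8-bit product using
# the factor of 400 noted in the text. Reflectance above 255/400 = 0.6375
# saturates the 8-bit range and is clipped.
def to_8bit(reflectance, scale=400):
    return np.clip(np.round(reflectance * scale), 0, 255).astype(np.uint8)

refl = np.array([0.0, 0.10, 0.25, 0.6375, 0.90])
dn = to_8bit(refl)   # -> [0, 40, 100, 255, 255]
```

The same scaling applied to both QB and LS keeps the two products numerically comparable, which is the consistency goal stated in the text.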
These values were assigned to all pixels occurring within the sampling area for each transect. The five component predictions within the QB image were developed independently from multispectral QB and ancillary data using the RT algorithm Cubist,55 following a protocol developed in an earlier study.21 For LS, QB predictions from three sites (including site 1) across the LS Thematic Mapper (TM) scene were combined to build training data for the LS modeling. These additional sites provided variation in land cover types, resulting in comprehensive training across the entire TM scene, and replicated a typical full-TM-scene component modeling scenario.21 We purposely developed the LS prediction with the full TM scene perspective to ensure that the predictions at site 1 represent a typical landscape-level application. We refined the training by dividing the data for each of the five component predictions into roughly three equal bins based on the mean and root mean square error (RMSE). The middle bin was thinned more relative to the other bins to ensure that higher and lower component values carried appropriate weighting in the model development and reduced overall bias. LS predictions were modeled using one leaf-on image from each year for annual predictions and one seasonal image from each season of each year for seasonal predictions, coupled with DEM ancillary data.

Image normalization and change identification

The process of normalizing many image dates to ensure consistent comparison is important for initiating trend analysis. Once images are normalized, potential change areas need to be identified and the magnitude and type of change labeled. We accomplished this process by following several major processing steps. For QB, all cloud and cloud shadow areas in the scenes were masked and excluded to ensure these areas did not incorrectly influence the normalization outcome.
Next, NDVI was calculated for each image, and a difference layer was calculated, to compare NDVI magnitude differences between the reference scene (from 2008) and the subject scene. Experimental trials of different NDVI thresholds revealed that a threshold of NDVI values was appropriate for excluding outlier pixels from influencing the normalization process. This process of outlier pixel exclusion ensured normalization was developed from only the most invariant pixels. Finally, a linear regression algorithm was developed from the invariant pixels and used to relate each pixel of the subject image to the reference image (2008 image) band by band.40 For LS, a similar approach was followed. First, all cloud, cloud shadow, and snow and ice areas were excluded from analysis. Then, a normalization procedure using a linear regression algorithm to relate each pixel of the subject image to the reference image (2008 leaf-on) band by band was conducted.40 Once image normalization was completed, images across years and seasons were compared for identification of change areas using a change vector process. For QB, change pixels were determined using a standard deviation (SD) from the mean value. Pixels outside 1 SD were considered to be potential change areas. LS change pixels were determined using thresholds specific to general land cover classes spatially identified from the 2006 National Land Cover Database.40 Change areas identified with the threshold approach tended to be too conservative to capture all change relative to field measurements, and an additional independent approach was necessary to further capture potential change areas with more subtle change. This additional approach used NDVI differencing between the master scene (2008) and the subject year or season to confirm change pixels. 
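The per-band normalization and NDVI differencing described above can be sketched as follows. The synthetic arrays and the 0.1 change threshold are illustrative assumptions, not the study's values:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index, guarded against zero division."""
    nir, red = nir.astype(float), red.astype(float)
    return (nir - red) / np.maximum(nir + red, 1e-9)

def normalize_band(subject, reference, invariant):
    """Relative radiometric normalization: fit a linear regression on
    invariant pixels, then apply it to every pixel of the subject band."""
    slope, intercept = np.polyfit(subject[invariant], reference[invariant], 1)
    return slope * subject + intercept

rng = np.random.default_rng(0)
reference = rng.uniform(0.05, 0.40, (64, 64))   # base-year band
subject = 1.2 * reference + 0.03                # simulated radiometric drift
invariant = np.ones(reference.shape, bool)      # toy case: all pixels invariant
corrected = normalize_band(subject, reference, invariant)

# NDVI differencing flags pixels whose index shifts beyond a threshold.
red0, nir0 = reference, reference + 0.20        # fake red/NIR pair, base date
red1, nir1 = corrected, corrected + 0.20        # same pair after normalization
change = np.abs(ndvi(nir1, red1) - ndvi(nir0, red0)) > 0.1
```

Because the simulated drift here is purely linear, normalization recovers the reference band almost exactly and the NDVI difference flags no change, which is the behavior the invariant-pixel regression is designed to produce.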
Research trials showed that pixels outside of a 5% NDVI difference for QB and outside of a 3% difference for LS needed to be retained as change pixels (the greater sensitivity of QB to image noise artifacts necessitated a higher threshold than LS to maintain comparability across sensors). The final potential change mask was created by combining (union) both the change vector process and the NDVI differencing results. All cloud and cloud shadow areas were treated as no change areas and removed from the change mask image. Labeling annual and seasonal subsequent change areas with the new component values was accomplished for both QB and LS by using an RT modeling approach and input data layers similar to those used to predict the 2008 baseline distributions. Training data were gathered from the 2008 unchanged baseline component values after first excluding potential change pixels by using the change masks described above. A random sample of 10,000 points for QB and 25,000 points for LS was selected from candidate pixels for each component. Predictions quantifying the spatial distribution and per-pixel proportion of five components as a continuous variable were then calculated using regression models for all change pixels in each QB and LS image. Baseline predictions for spectrally unchanged pixels were not modeled and were left as original predictions from the base year. Using the change mask created from the change vector process, each of the change pixel prediction values was then applied over the base prediction. The no-change pixels retained the prediction value from the base prediction, and only the change pixel areas were updated for each new imagery date.40

Data Analysis and Evaluation

Data summation and analysis protocols: plot-level polygon data

Both QB and LS predictions were evaluated by comparison to corresponding ground plot measurements within plot polygons and analyzed by component and data source.
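A minimal sketch of the combined (union) change mask follows, using small NumPy arrays in place of real imagery. The 1-SD change-vector rule and the fixed NDVI-difference threshold follow the description above; the toy scene values are assumptions.

```python
import numpy as np

def change_mask(ref_img, subj_img, ndvi_ref, ndvi_subj,
                ndvi_thresh=0.05, n_sd=1.0):
    """Union of the two change tests described above:
    (1) change-vector magnitude beyond n_sd SDs of its scene mean,
    (2) absolute NDVI difference beyond a fixed threshold.
    Images are (bands, rows, cols); NDVI layers are (rows, cols)."""
    magnitude = np.sqrt(((subj_img - ref_img) ** 2).sum(axis=0))
    cv_change = np.abs(magnitude - magnitude.mean()) > n_sd * magnitude.std()
    ndvi_change = np.abs(ndvi_subj - ndvi_ref) > ndvi_thresh
    return cv_change | ndvi_change

# Toy scene: one strongly changed pixel, one subtle NDVI-only change.
ref = np.zeros((3, 4, 4))
subj = ref.copy()
subj[:, 0, 0] = 1.0                  # strong multiband change at (0, 0)
ndvi_ref = np.zeros((4, 4))
ndvi_subj = ndvi_ref.copy()
ndvi_subj[1, 1] = 0.10               # subtle NDVI-only change at (1, 1)
mask = change_mask(ref, subj, ndvi_ref, ndvi_subj)
print(mask.sum())                    # 2
```

The NDVI test catches the subtle pixel that the change-vector threshold alone would miss, mirroring why the study added the second, independent approach.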
Component values measured at ground plots were compiled into a single mean transect value (seven individual frames on a 30-m transect) for comparison to QB, and by plot (two transects, 14 individual frames) for comparison to LS. Similarly, for QB and LS predictions, all pixel values within each ground plot polygon or transect boundary were averaged to represent one component value for each transect/plot (referred to simply as plot hereafter). For consistency, the exact same plots were analyzed across all years and seasons. If clouds or other image issues precluded a plot from inclusion in one year or season, it was eliminated from analysis for all dates. This ensured fair comparisons between sensors and components. For each annual and seasonal plot, the SD of the individual frame measurements was calculated. For each annual plot, a slope value from a linear regression was also calculated. In order to facilitate direct comparison among components and data sources, the coefficient of variation (COV; the SD expressed as a percentage of the mean) was also calculated for each plot. To determine whether significant change had occurred on ground-measured annual and seasonal plots, a one-way analysis of variance (ANOVA) was performed. This calculation uses the SD from the individual transect frame measurements for each plot to determine whether there are any significant differences between the means of plot measurements across time. All ANOVA significance levels are reported at the 0.1 level. To determine if a significant direction of change occurred on annual plots, the linear slope was calculated and significance tested at the 0.1 level. Several combinations of Pearson’s correlation were used to compare ground plot measurements to QB and LS predictions. First, in order to test the overall similarity of the component predictions to ground measurements, a correlation analysis comparing plot-level mean component values for each of the data sources was completed.
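The per-plot statistics above (COV, one-way ANOVA across dates, and trend slope) can be sketched with SciPy. The seven-frame toy measurements below are illustrative assumptions; only the statistical machinery follows the text.

```python
import numpy as np
from scipy.stats import f_oneway, linregress

def plot_change_stats(frames_by_year):
    """frames_by_year: {year: array of individual frame measurements}.
    Returns the COV of the yearly means, the one-way ANOVA p-value
    across years, and the linear slope of the yearly means."""
    years = sorted(frames_by_year)
    means = np.array([frames_by_year[y].mean() for y in years])
    cov = 100.0 * means.std(ddof=1) / means.mean()     # coefficient of variation
    p_anova = f_oneway(*(frames_by_year[y] for y in years)).pvalue
    slope = linregress(years, means).slope
    return cov, p_anova, slope

# A toy plot whose cover clearly increases year over year.
frames = {
    2008: np.array([10.0, 11.0, 12.0, 10.0, 11.0, 12.0, 11.0]),
    2009: np.array([14.0, 15.0, 16.0, 14.0, 15.0, 16.0, 15.0]),
    2010: np.array([18.0, 19.0, 20.0, 18.0, 19.0, 20.0, 19.0]),
}
cov, p, slope = plot_change_stats(frames)
print(p < 0.1, slope > 0)            # True True
```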
Second, correlation was used to test the strength of the relationship between ground-measured significant ANOVA plots and significant slope plots to component predictions. Finally, correlation of slope values from both ground measurements and component predictions was used to test the ability of components to track the direction of change over time.

Data summation and analysis protocols—by total proportional area

To test component prediction relationships beyond plot-level polygons, ground measurements and QB and LS predictions were also compiled to assess the total area of change of components across the full study area. For ground-measured polygons within the site 1 study area, the total area covered by all polygons was calculated; subsequently, the proportion of that total area covered by each component by year and season was also calculated. For QB and LS, the full study extent of site 1 predictions was used to compile the areal proportion of each component of each cell into a total area summary value (e.g., a 50% bare ground prediction in a 30-m LS cell means 50% of the area of that cell, or 450 m2 of its 900 m2, is counted as bare ground). The mean proportional amounts of total area by year and season were calculated for each data source. We calculated the mean epoch-to-epoch percent change by dividing the percent change of each epoch (season or year) by the total number of epochs, and also calculated the mean relative error between component predictions and ground measurements. Pearson’s correlation analysis was used to compare proportional component measurements among data sources.
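The roll-up of per-cell fractional predictions into a total-area proportion can be sketched as below. The uniform toy grid is an illustrative assumption; the 900 m2 cell area corresponds to a 30-m LS pixel.

```python
import numpy as np

def total_area_proportion(fraction_grid, cell_area_m2=900.0):
    """fraction_grid: per-cell percent cover (0-100) for one component.
    Returns the component's share of total area (%) and its absolute
    area; e.g. a 50% prediction in a 900 m2 cell counts 450 m2."""
    component_area = (fraction_grid / 100.0 * cell_area_m2).sum()
    total_area = fraction_grid.size * cell_area_m2
    return 100.0 * component_area / total_area, component_area

grid = np.full((10, 10), 50.0)       # toy grid: uniform 50% cover
pct, area = total_area_proportion(grid)
print(pct, area)                     # 50.0 45000.0
```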
Comparison to precipitation, by source and component

DAYMET daily gridded surface climate data providing daily precipitation data at a 1-km spatial resolution were downloaded for site 1 for 2008 to 2011.50 Daily data were then combined into mean seasonal precipitation amounts by 1-month and 2-month intervals for seasonal analysis, and by calendar year and water year (October to September) for annual analysis. Mean monthly and annual DAYMET precipitation values for all cells in site 1 were then pooled into a single mean value representing the entire site 1 study area. Corresponding mean monthly and annual total area percent component values from ground measurements and QB and LS predictions for site 1 were then correlated with precipitation data using Pearson’s correlation.

We measured five sagebrush ecosystem fractional cover components (bare ground, herbaceous, litter, sagebrush, and shrub) on the ground and from satellites over six seasons and four years. Comparison analysis of component change patterns among data sources was conducted at both the single-plot level and proportionally across the entire study area. Study area proportional seasonal and annual changes were also correlated to annual and seasonal precipitation measurements. Specific results are listed by section below.

Plot-Level Ground and Satellite Measurements

A total of 66 ground plots (132 transects) were sampled during the summers of 2008 through 2011 across site 1. Only plot results from 2008 were used to develop RT predictions for all five components across one 2.4-m QB image extent (site 1) and the corresponding LS extent; all other years and seasons were developed using change vector analysis (Fig. 2). The RMSE average for the 2008 base estimate for all five components over site 1 was 4.68 for QB and 6.83 for LS.21 Image collection dates deviated an average of 16 days from ground collection for QB and nine days from ground collection for LS (Table 1).
After removing plots affected by clouds on either QB or LS imagery, 52 plots (104 transects) remained for analyses. Ground-measurement dates, with corresponding Landsat and QuickBird image collection dates. |Source||Spring 2008||Summer 2008||Fall 2008||Spring 2009||Summer 2009||Fall 2009||Summer 2010||Summer 2011||x¯ (days) from field collection||SE| |Ground||June 17||July 22||Sept 22||June 13||July 22||Sept 22||July 14||July 16| |QB||-----||Aug 11||Oct 17||June 3||July 14||Sept 14||July 12||Aug 21||16||4.5| |LS||June 20||July 22||Sept 24||June 23||Aug 10||Sept 27||Aug 13||July 14||9||3.9| Of the five components, litter exhibited the highest COV for annual ground-measured change at 18.4, with herbaceous second at 18.1, then shrub at 17.2, sagebrush at 9.9, and bare ground the lowest at 8.3 (Table 2, Fig. 3). Litter had the largest number of plots qualifying as significantly changed from the ANOVA analysis at 15, with herbaceous second at 13, bare ground third at 7, and shrub and sagebrush with 1 each (Table 2). Only seven annual plots overall showed significant plot change and significant slope change, two each in bare ground and litter, and one in each of the remaining three components. Mean ground-measured annual change (% of 100) across 52 plots, by component. |Components||Plots (N)||2008 (mean)||2009 (mean)||2010 (mean)||2011 (mean)||SD (mean)||Coefficient of variation (mean)||Linear slope||N with sig. ANOVA (.10)||N with sig. slope (.10)| For seasonal change, herbaceous exhibited the highest COV for ground-measured change at 23.8, with litter second at 21.4, then sagebrush at 19.4, shrub at 18.9, and bare ground the lowest at 8.7 (Table 3). Litter and herbaceous had the largest number of plots with significant ANOVA-measured change at 23 each, with bare ground next at 11, then shrub with 2, and sagebrush with 1 (Table 3). Mean ground-measured seasonal change (% of 100) across 52 plots, by component. 
|Component||Plot N||June 2008 (mean)||July 2008 (mean)||Sept 2008 (mean)||June 2009 (mean)||July 2009 (mean)||Sept 2009 (mean)||SD (mean)||Coefficient of variation (mean)||N with sig. ANOVA (.10)|

Plot-Level Data Correlation Relationships

Each set of values from both annual and seasonal individual ground plots and transects was correlated with the corresponding satellite component measurements to test the ability of the component predictions to replicate ground measurements. Overall, annual predictions were more highly correlated than seasonal predictions, and QB had higher correlation values than LS (Table 4). QB displayed a mean correlation value across all components of 0.85 for annual and 0.82 for seasonal. LS had a mean correlation value of 0.77 across all components for annual and 0.73 for seasonal. For components, bare ground had the highest mean correlation across sensors at 0.91, with shrub exhibiting the lowest correlation at 0.69 (Table 4). Remote-sensing prediction correlations to annual and seasonal ground measurements over plot areas, by component. |QB (transects)||LS (plots)||QB (R)||LS (R)||QB (R)||LS (R)||Mean| Note: All correlations were significant at the 0.01 level. The linear slope value was calculated across annual measurements for each plot and each QB and LS prediction. These slope values were then correlated to test the ability of component predictions to replicate the trend of ground-measured slope change. QB had relatively high correlation values for individual components, and most correlations were significant (Table 5). In contrast, LS had low correlation values for individual components, with significant correlation values only in the bare ground component. When slope values from all plots and transects were pooled across all components (520 transects for QB, 260 plots for LS), QB had a correlation of 0.37 and LS a correlation of 0.10.
When a subset of slope values from only significant ground-measured ANOVA plots was pooled (Table 2) (40 transects, 37 plots), QB had a correlation of 0.74 and LS remained at 0.10. However, correlation of slope values from ground-measured plots with a subset of both significant ANOVA and slope results (14 transects, 7 plots) yielded a correlation of 0.77 for QB and a correlation of 0.64 for LS (Table 5). Annual component correlations of individual linear slope values calculated for plot measurements, correlated with the linear slope values calculated for corresponding LS and QB predictions.
|Component stratification (ANOVA and slope significance from field measurements)||N (Transect)||R||N (Plot)||R|
|Bare ground—all plots||104||.28a||52||.23a|
|Bare ground—only plots ANOVA significant at 0.1||9||.78a||7||.73a|
|Bare ground—only plots slope significant at 0.1||3||.92||2||+|
|Herbaceous—only plots ANOVA significant at 0.1||15||.78a||13||.10|
|Herbaceous—only plots slope significant at 0.1||8||.86a||1||+|
|Litter—only plots ANOVA significant at 0.1||13||.78a||15||.23|
|Litter—only plots slope significant at 0.1||1||+||2||+|
|Shrub—only plots ANOVA significant at 0.1||3||−.99a||1||+|
|Shrub—only plots slope significant at 0.1||2||+||1||+|
|Sagebrush—only plots ANOVA significant at 0.1||0||+||1||+|
|Sagebrush—only plots slope significant at 0.1||0||+||1||+|
|All components, all plots combined||520||.37a||260||.10|
|All components, only significant ANOVA plots combined||40||.74a||37||.10|
|All components, only significant slope plots combined||14||.77a||7||.64|
Note: Correlation results reveal the ability of the sensor component predictions to replicate the direction of slope change as measured on the ground.
Note: +Inadequate sample size. aCorrelation significant at 0.1.
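The slope-tracking comparison above can be sketched as follows, assuming per-plot time series arrays; the boolean `keep` mask stands in for restricting the pool to significant-ANOVA or significant-slope plots.

```python
import numpy as np
from scipy.stats import linregress, pearsonr

def slope_correlation(ground, predicted, years, keep=None):
    """ground, predicted: (n_plots, n_years) component values.
    Correlates per-plot linear trend slopes between the two sources;
    `keep` optionally restricts the comparison to a subset of plots."""
    g = np.array([linregress(years, row).slope for row in ground])
    p = np.array([linregress(years, row).slope for row in predicted])
    if keep is not None:
        g, p = g[keep], p[keep]
    return pearsonr(g, p)[0]

years = np.arange(2008, 2012)
trends = np.array([1.0, 2.0, -1.0, 0.5])       # known per-plot slopes
ground = trends[:, None] * (years - years[0])  # toy ground time series
predicted = 0.9 * ground                       # predictions track the trends
print(round(slope_correlation(ground, predicted, years), 3))  # 1.0
```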
Total Area Comparison

The total proportional area covered by each component from each source (ground and satellite) was calculated for each season and year across all of site 1, with the proportion of change between seasons and years also calculated. For annual predictions, bare ground exhibited the highest mean annual change at 1.3%, shrub the next highest at 0.8%, then herbaceous at 0.6%, litter at 0.5%, and sagebrush the lowest at 0.3% (Table 6). Shrub had the highest mean annual relative error, and litter had the lowest. When compiled by data source, ground measurement showed the highest overall mean change across all components at 1.02%, with LS second at 0.56%, and QB the lowest at 0.52%. Ground mean annual change values showed the most variation between components, with QB showing the least. Overall, QB had higher relative errors than LS (Table 6). Comparison of the percent proportions of total area covered by each component for every year. |Component||2008||2009||2010||2011||Mean annual change (%)||Mean annual relative error (%)| |Bare ground (%)| Note: For ground plots, the total area is calculated from pooling all plot polygons; for QB and LS, the total area is calculated from full study area predictions. For seasonal measurements, the mean total proportional seasonal change across six seasons for ground and LS and five seasons for QB was calculated. Bare ground exhibited the highest mean seasonal change at 2.0%, herbaceous next at 1.2%, litter at 0.8%, shrub at 0.7%, and sagebrush the lowest at 0.5% (Table 7). Herbaceous had the highest mean annual relative error, and bare ground had the lowest. When compiled by data source, in contrast to annual measurements, LS showed the highest overall mean seasonal change across all components at 1.90%, with ground second at 0.7%, and QB the lowest at 0.52%. The seasonal change values showed the most variation between components from LS, with QB showing the least.
Overall, LS had higher relative errors than QB (Table 7). Comparison of the percent proportions of total area covered by each component for every season. |Component||June 2008||July 2008||Sept 2008||June 2009||July 2009||Sept 2009||Mean seasonal change (%)||Mean annual relative error (%)| Note: For ground plots, the total area is calculated from pooling all plot polygons; for QB and LS, the total area is calculated from full study area predictions. No data collected.

Total Area Correlation to Precipitation Data

DAYMET annual precipitation at site 1 varied from a low of 219 mm in 2009 to a high of 297 mm in 2011 (Fig. 4), with seasonal scenarios varying from a low of 2 mm in August/September 2008 to a high of 67 mm in June 2008 (Fig. 5). Correlation of mean monthly and annual DAYMET precipitation values to the corresponding mean monthly and annual total area component calculations is presented in Table 8. Of the 60 scenarios tested, only nine were significant at the 0.1 level. When correlations were averaged across components, herbaceous had the highest mean correlation across all seasonal and annual scenarios at 0.67, and shrub the lowest at 0.47. When correlations were averaged by data source, the highest mean correlation was LS annual water year at 0.88, and the lowest was LS seasonal bimonthly correlation at 0.29 (Table 8). The highest significant individual correlation scenario was ground plot herbaceous against calendar year precipitation. Correlation (R) of annual and seasonal precipitation measurements over site 1 to corresponding annual and seasonal component change from ground plots and sensor predictions.
|Ground plots, site 1||QB, site 1||LS, site 1| |Month of ground sample||Bi month of ground sample||Calendar year||Water year (Oct to Sep)||Month of ground sample||Bi month of ground sample||Calendar year||Water year (Oct to Sep)||Month of ground sample||Bi month of ground sample||Calendar year||Water year (Oct to Sep)||Mean| Correlation significant at 0.1.

Our results demonstrate the reasonable ability of regression-tree component predictions to incrementally measure changing components of a sagebrush ecosystem. Specifically, we demonstrate the ability of regression tree component predictions to track ground-measured change over time using ground data from one year and change vector analysis for subsequent years. We demonstrate the ability of high-spatial-resolution satellite imagery to serve as a potential surrogate for repeated ground measurement. Finally, we demonstrate the ability of component predictions to potentially monitor vegetation change related to precipitation variation over time. Specific discussion topics are covered below.

Ground-Measured Component Change

Ground measurements reveal a subtly changing landscape both seasonally and annually (Tables 2 and 3). This is to be expected given that we could observe no major change agent operating in this area other than climate.23,40 However, it is encouraging that we were able to observe and detect this subtle change from both a ground and remote-sensing perspective. We went to great lengths to ensure ground measurements were consistent by using staked plots, revisiting plots at the same time of year and season, and having the same observer repeat measurements. The only exception was from 2011, when 35 plots were measured by the alternate observer; however, a quality check of these data revealed the measurement pattern to be consistent with previous measurements both observers had completed.
Component change varied by season and year, with seasonal measurements in every component consistently showing a higher COV than annual measurements (Fig. 6). This follows an expected ecosystem pattern, with seasonal plant response potentially more dynamic than annual response.56,57 For individual components, litter and herbaceous exhibited the highest COV from annual measurements, and herbaceous the highest for seasonal measurements. These results are logical due to the ephemeral nature of these components with changing precipitation.56 The shrub and sagebrush components exhibited relatively moderate COVs in both seasonal and annual measurements, with sagebrush having a substantially lower annual COV than shrub (Fig. 6). Sagebrush species contain some ephemeral leaves, which are dropped later in the growing season,58,59 and we suspect this change is detected on the seasonal plots from spring measurement, but not on summer-measured annual plots. Alternatively, the shrub component contains many additional shrub species besides sagebrush that exhibit sustained growth through the entire season, resulting in similar change patterns for both annual and seasonal measurements. Because of the relatively high SD exhibited by bare ground, we did not anticipate that it would have the lowest COV of any component in both seasonal and annual measurements (Tables 2 and 3). However, high proportions of bare ground on many of our plots resulted in a large dynamic range for this measurement, which was factored out by the COV, suggesting bare ground in site 1 had relatively low variation both seasonally and annually compared to other components. Overall, total annual changes were represented by a gradual increase in shrub and sagebrush canopy with corresponding decreases in bare ground, herbaceous, and litter across the four years (Fig. 7). 
Given that water year precipitation increased from 231 to 297 mm over this time, this type of component response makes sense for shrub, sagebrush, and bare ground. The slow growth of the sagebrush is to be expected; others have reported that multiple precipitation years may be required to influence overall growth.1 We expected to see larger annual fluctuations of herbaceous cover, but given the annual growth pattern of many of the herbaceous plants,56,60 it would appear that herbaceous cover in this case is mostly responding to the seasonal precipitation pattern rather than the annual. Total seasonal component change patterns show seasonal fluctuations, especially for the more ephemeral components of bare ground, herbaceous, and litter (Fig. 8). These seasonal patterns are also reflected in the annual patterns from the overall 2-year annual trends of decreasing bare ground and herbaceous, increasing litter, slightly increasing shrub, and stable sagebrush. The timing of the moisture in the second year (2009), less abundant in the spring and more abundant later in the summer (Fig. 8), appears to also have influenced the more ephemeral components, with bare ground and herbaceous showing a noticeable fluctuation, and litter a noticeable increase.

Detecting subtle change with remote sensing requires rigorous processing protocols to overcome inconsistencies in satellite measurements from atmospheric conditions, sun-sensor geometry, geolocation error, variable ground pixel size, sensor noise, vegetation phenology, and surface moisture conditions.45 We paid careful attention to processing protocols developed in this study as well as previous research21 to minimize potential noise differences. The greatest challenge was to ensure that the timing of satellite collects was appropriate for ground-measured phenology conditions.
As reported in Table 1, our high-resolution QB satellite collects were less phenologically accurate than LS because the variance from the timing of ground measurements was seven days greater. In this case, we feel the effects were minimal. But because our semiarid study area has less cloud cover than more humid regions, gaining an appropriately timed phenological series of high-resolution imagery for potential monitoring in other places remains a challenge. Additionally, the need to collect appropriately timed imagery should not outweigh the need for collects with useable view angles. Our experience shows that acquiring high-resolution satellite collects with low view angles is the most desirable; greater angles make comparison across years or seasons more difficult because of distorted ground geometry. In our case, three QB images had higher view angles, which required extra processing to maintain consistency. This extra processing is a challenge and does affect product quality, but we recognize that the use of high-view-angle imagery cannot always be avoided.

Component Change Magnitude and Direction

With such subtle change amounts and a small sample size of years and seasons, gaining additional understanding of real change versus simple measurement variance is important. We approached this in two ways. First, we examined ground plot deviation using a one-way ANOVA that capitalized on examining the variance of the individual frame measurements for each plot. For annual plots, the mean variation (based on COV) for all pooled plots was 14.8, and the mean COV variation for significant ANOVA pooled plots was 36.4. For seasonal plots, the mean COV variation for all pooled plots was 18.4 and the mean COV for significant ANOVA pooled plots was 35.1.
These results confirm that a higher variance threshold was required to achieve significant change and suggest that annual and seasonal average plot COVs of 35 or higher, on average, indicate that change on the plot is substantial enough to be real. Second, we pooled ground plots by three categories (all plots, significant ANOVA plots, and significant ANOVA and slope value plots) with the corresponding sensor-based predictions to understand if our ability to capture change with imagery increased as the significance of change on the ground increased. We anticipated that the sensor-based component predictions would be more successful in capturing ground-measured change as the reliability and magnitude of change increased. Analysis reveals that as difference trends increase, there is a better correlation with imagery linear slope values (Fig. 9), suggesting that as more real change is realized on the ground, sensor component predictions perform increasingly better. QB especially performs well, suggesting a good ability to be a future surrogate for ground measurement, either supplementing or replacing ground plots under some circumstances. LS correlations only improved after pooling for slope significance, suggesting that ground component change needs to happen at both substantial spatial and temporal scales to be reliably detected by LS components.

Performance of Satellite Component Predictions

A key objective of this study was to test the utility of continuous field component predictions as a method capable of monitoring subtle change on a sagebrush ecosystem. Specifically, this method depends on predictions created from a single base year (2008) or season, with component change in subsequent periods identified using change vector analysis and RT labeling.
When compared to corresponding ground measurements by correlation, sensor component predictions performed reasonably well, with mean values of 0.85 and 0.82 for QB, and 0.77 and 0.73 for LS, all significant at the 0.01 level (Table 4), successfully demonstrating this objective. We assume QB predictions outperformed LS largely due to the more compatible spatial scale in relation to the ground plots and the spatial ecology and pattern of vegetation in this ecosystem. QB predictions were trained and compared to ground data at the transect level (two transects in every plot) rather than the plot level used for the training and comparison of LS. The finer spatial scale of QB allowed better tracking of local heterogeneity that was more homogenized at the LS scale. In the future, some additional QB component performance improvement may be realized by training and monitoring at a finer spatial scale than demonstrated by our transect level; however, we speculate that at some level, complications of controlling spatial geometry, erratic plot variance, and spurious sensor variance could overwhelm any benefit.61,62 When sensor predictions over the entire study area (rather than only at plot level) were compiled as total proportions by component, the correlation of QB and LS proportional area estimates to corresponding ground proportional areas was very high for both annual and seasonal predictions, showing general compatibility among sources. Additionally, annual and seasonal component change relationships were very similar to plot-level polygon measurements, suggesting that sensor predictions over the entire study area remained reliable. For annual predictions, ground-measured proportions exhibited the highest amount of change, with LS second and QB the lowest, with QB also displaying the highest relative error (Table 6).
We assume most change variance is scale related—likely a combination of variance from the ground-measurement method and the different ratio of total landscape area covered by ground polygons compared to QB or LS wall-to-wall predictions. Lower change numbers for sensor predictions relative to ground measurements could also indicate that our change method was too conservative, creating more omission than commission errors, or that some ground change was not resolvable by the sensors. For seasonal predictions, LS showed the highest overall mean seasonal change, with ground measurement second and QB the lowest, although LS had higher relative error than QB (Table 7). LS seasonal change values also showed the most variation between components. This amount of change from LS was unexpected, as we anticipated QB would have higher change rates than LS, especially given that all LS classification and analysis was performed at the much broader landscape level. Our assumption that LS data in general were better calibrated and consistent, and therefore warranted a lower NDVI change threshold than QB (3% versus 5%) for change vector component production, appears to have been incorrect. This lower threshold likely contributed to the higher LS change values and relative error by allowing more commission error over actually unchanged areas than QB.

Precipitation Correlation Results

We recognize that rigorous climate change analysis with remote-sensing predictions should ideally be done over spatial and temporal scales larger than our study area. However, this research offered the opportunity to compare annual and seasonal component series measured on the ground and by satellite to newly available DAYMET downscaled precipitation data, providing potential insight into the relationship between component change and precipitation change. Correlations of component change to precipitation change overall were better than expected.
When individual component correlations to precipitation were averaged across all components by data source, QB had the highest mean correlations overall at 0.61, with LS having the next highest at 0.54, and ground the lowest at 0.44. The higher mean correlations from the sensor components over the ground measurements are likely due to the ability of their wall-to-wall prediction scale to provide better correlation to the 1-km cell precipitation data than the small footprint of ground plots. When individual component correlations to precipitation were averaged across all components by season, the annual component mean correlation of 0.64 was much higher than the seasonal component mean correlation of 0.42, suggesting annual component predictions as a whole better reflected precipitation pattern than seasonal predictions. Closer examination of mean correlations pooled by individual annual components reveals mean values ranging from 0.71 for herbaceous to 0.64 for bare ground, 0.63 for shrub and litter, and 0.59 for sagebrush. The seasonal component mean values ranged from 0.64 for herbaceous to 0.41 for sagebrush, 0.39 for bare ground, 0.37 for litter, and 0.32 for shrub. This suggests that annual components of herbaceous, shrub, and sagebrush, and the seasonal component of herbaceous, have the greatest capacity to reflect precipitation patterns. However, component categories still need more in-depth precipitation analysis. For example, when individual component correlations to precipitation are pooled into two categories of ephemeral (bare ground, herbaceous, and litter) and persistent (shrub and sagebrush), the timing of precipitation is a major factor. Persistent components have higher average correlations when precipitation is calculated as a water year (0.67 as water year and 0.55 as calendar year), and the ephemeral components have higher average correlations when precipitation is calculated as a calendar year (0.69 as calendar year and 0.63 as water year). 
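The water-year versus calendar-year pooling used above can be sketched with a small helper that assigns each date to its water year (October through September, labeled by the calendar year in which it ends).

```python
from datetime import date

def water_year(d):
    """Water year runs Oct 1 through Sep 30 and is labeled by the
    calendar year in which it ends."""
    return d.year + 1 if d.month >= 10 else d.year

print(water_year(date(2008, 10, 1)),   # 2009: first day of water year 2009
      water_year(date(2009, 9, 30)),   # 2009: last day of the same water year
      water_year(date(2009, 10, 1)))   # 2010
```

Summing DAYMET daily precipitation grouped by this key, rather than by `d.year`, yields the water-year totals that the persistent shrub and sagebrush components correlated with most strongly.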
We assume the higher correlations of the persistent components of shrub and sagebrush with water year precipitation better reflect the availability of the potential winter moisture that shrubland physiology is adapted to. Shrubs such as sagebrush can respond to precipitation as far as 2 to 5 years previous to the growing season.1 Clearly, more in-depth analysis across larger spatial areas and time frames will be warranted in the future for better predictive analysis, but our initial analysis has shown the potential of establishing a relationship between component change and precipitation change, and should provide confidence at larger scales.

Implications for Sagebrush Monitoring

This research demonstrates the ability of multiscale remote sensing to monitor gradual change in a sagebrush ecosystem. This has important implications for a widely distributed semiarid ecosystem under threat from multiple disturbance forces creating both abrupt and gradual change. One important implication of our research is the ability of sagebrush fractional components to successfully parameterize change on the landscape. A component metric potentially offers an easily understood, straightforward quantification of the landscape that is measurable over time and offers maximum flexibility to be converted into applications.
Perhaps the most far-reaching implication is the demonstrated ability to use sagebrush component predictions trained from a single base year and subsequently projected across many years with change vector analysis.40,45 For sensors such as LS, with a rich historical archive, this provides further opportunity to compare gradual change rates back in time to causal agents such as climate to further understand potential cause and effect.39,40 Although we projected base classifications successfully across three years and five seasons, we caution that this method likely has a realized decay rate in accuracy from the original classification that would affect results after some number of replications. Another monitoring implication is the potential for high-resolution satellite remote-sensing sources such as QB to act as a surrogate for ground measurement. For monitoring to be sustained and effective, not only low-cost tools and approaches but also mechanisms to maintain consistency are typically required. Both of these requirements can be difficult to achieve with ground measurements.63 The ability to leverage a single year of comprehensive ground collection and image classification across many years of monitoring provides an attractive option for quantifying and monitoring a landscape. Because of the limited number of years and seasons reported here, our research will continue to track additional years to supplement our sample size. Future work is already underway to track precipitation- and temperature-induced component change many years back in time using the LS historical record. Sagebrush ecosystems constitute the largest single North American shrub ecosystem and provide vital ecological, hydrological, biological, agricultural, and recreational ecosystem services. Disturbances have altered and reduced this ecosystem by 50% historically, but climate change may ultimately represent the greatest future risk to this ecosystem.
Improved ways to quantify and monitor gradual change in this ecosystem are vital to its future management. Here, we demonstrate the ability to successfully detect gradual change over a four-year period using continuous field predictions for five components: bare ground, herbaceous, litter, sagebrush, and shrub. Results show that herbaceous and litter exhibited the highest variation for annual and seasonal ground-measured change, and bare ground exhibited the least. When ground measurements were correlated to corresponding sensor predictions, annual predictions were more highly correlated than seasonal ones, and QB had higher correlation values than LS. Component predictions for the entire study area were also correlated to annual and seasonal DAYMET precipitation amounts. QB had the highest mean correlations to precipitation overall, and herbaceous was the highest performing component overall. Our results demonstrate that regression trees can be successfully used to monitor gradually changing components of a sagebrush ecosystem, demonstrate the ability of high-spatial-resolution satellite imagery to serve as a reasonable surrogate for repeated ground measurement, and demonstrate the ability of component predictions to respond to changing precipitation. Future work is already underway to track precipitation- and temperature-induced component change many years back in time using the LS historical record, allowing for more comprehensive trend assessment and further analysis of the impact of vegetation component change on ecosystem services.

We thank the United States Geological Survey and the United States Bureau of Land Management (BLM), which supported this project financially. We also thank George Xian for his helpful review and suggestions for this manuscript. DKM's work for this paper was performed under USGS contract G10PC00044. The use of any trade, product or firm name is for descriptive purposes only and does not imply endorsement by the U.S. government.
I have proposed to show the relationship between chemistry and a basketball. My intent is to show the composition of basketballs from different time periods. My answer must be presented as a 10-minute class presentation. Help!

© BrainMass Inc., brainmass.com, July 15, 2018

I am going to offer you some suggestions about your topic as well as some other ideas in case you would like to do something different. What I would do, if you stick to your original topic, is to describe how the composition of basketballs changed and explain why the changing composition makes them better (does it make them bounce better, ...). The relationship between common items and chemistry is examined, and the composition of basketballs from different time periods is given.
This section provides a conceptual overview of partitioning in MySQL 5.5. For information on partitioning restrictions and feature limitations, see Section 19.5, “Restrictions and Limitations on Partitioning”. The SQL standard does not provide much in the way of guidance regarding the physical aspects of data storage. The SQL language itself is intended to work independently of any data structures or media underlying the schemas, tables, rows, or columns with which it works. Nonetheless, most advanced database management systems have evolved some means of determining the physical location to be used for storing specific pieces of data in terms of the file system, hardware or even both. In MySQL, the InnoDB storage engine has long supported the notion of a tablespace, and the MySQL Server, even prior to the introduction of partitioning, could be configured to employ different physical directories for storing different databases (see Section 8.12.3, “Using Symbolic Links”, for an explanation of how this is done). Partitioning takes this notion a step further, by enabling you to distribute portions of individual tables across a file system according to rules which you can set largely as needed. In effect, different portions of a table are stored as separate tables in different locations. The user-selected rule by which the division of data is accomplished is known as a partitioning function, which in MySQL can be the modulus, simple matching against a set of ranges or value lists, an internal hashing function, or a linear hashing function. The function is selected according to the partitioning type specified by the user, and takes as its parameter the value of a user-supplied expression. This expression can be a column value, a function acting on one or more column values, or a set of one or more column values, depending on the type of partitioning that is used. 
In the case of RANGE, LIST, and [LINEAR] HASH partitioning, the value of the partitioning column is passed to the partitioning function, which returns an integer value representing the number of the partition in which that particular record should be stored. This function must be nonconstant and nonrandom. It may not contain any queries, but may use an SQL expression that is valid in MySQL, as long as that expression returns either NULL or an integer intval such that -MAXVALUE <= intval <= MAXVALUE. (MAXVALUE is used to represent the least upper bound for the type of integer in question. -MAXVALUE represents the greatest lower bound.) For [LINEAR] KEY, RANGE COLUMNS, and LIST COLUMNS partitioning, the partitioning expression consists of a list of one or more columns. For [LINEAR] KEY partitioning, the partitioning function is supplied by MySQL. For more information about permitted partitioning column types and partitioning functions, see Section 19.2, “Partitioning Types”, as well as Section 13.1.17, “CREATE TABLE Syntax”, which provides partitioning syntax descriptions and additional examples. For information about restrictions on partitioning functions, see Section 19.5.3, “Partitioning Limitations Relating to Functions”. This is known as horizontal partitioning—that is, different rows of a table may be assigned to different physical partitions. MySQL 5.5 does not support vertical partitioning, in which different columns of a table are assigned to different physical partitions. There are no plans at this time to introduce vertical partitioning into MySQL. For information about determining whether your MySQL Server binary supports user-defined partitioning, see Chapter 19, Partitioning. For creating partitioned tables, you can use most storage engines that are supported by your MySQL server; the MySQL partitioning engine runs in a separate layer and can interact with any of these. In MySQL 5.5, all partitions of the same partitioned table must use the same storage engine; for example, you cannot use MyISAM for one partition and InnoDB for another.
However, there is nothing preventing you from using different storage engines for different partitioned tables on the same MySQL server or even in the same database. MySQL partitioning cannot be used with the FEDERATED storage engine. Partitioning by KEY is possible with NDBCLUSTER, but other types of user-defined partitioning are not supported for tables using this storage engine. In addition, an NDBCLUSTER table that employs user-defined partitioning must have an explicit primary key, and any columns referenced in the table's partitioning expression must be part of the primary key. However, if no columns are listed in the PARTITION BY KEY or PARTITION BY LINEAR KEY clause of the CREATE TABLE or ALTER TABLE statement used to create or modify the table, then the table is not required to have an explicit primary key. For more information, see “Noncompliance with SQL Syntax in NDB Cluster”. To employ a particular storage engine for a partitioned table, it is necessary only to use the [STORAGE] ENGINE option just as you would for a nonpartitioned table. However, you should keep in mind that [STORAGE] ENGINE (and other table options) need to be listed before any partitioning options are used in a CREATE TABLE statement. This example shows how to create a table that is partitioned by hash into 6 partitions and which uses the InnoDB storage engine:

CREATE TABLE ti (id INT, amount DECIMAL(7,2), tr_date DATE)
    ENGINE=INNODB
    PARTITION BY HASH( MONTH(tr_date) )
    PARTITIONS 6;

A PARTITION clause can include a [STORAGE] ENGINE option, but in MySQL 5.5 this has no effect. Partitioning applies to all data and indexes of a table; you cannot partition only the data and not the indexes, or vice versa, nor can you partition only a portion of the table. Data and indexes for each partition can be assigned to a specific directory using the DATA DIRECTORY and INDEX DIRECTORY options for the PARTITION clause of the CREATE TABLE statement used to create the partitioned table.
The DATA DIRECTORY and INDEX DIRECTORY options have no effect when defining partitions for tables using the InnoDB storage engine. DATA DIRECTORY and INDEX DIRECTORY are not supported for individual partitions or subpartitions on Windows. These options are ignored on Windows, except that a warning is generated. All columns used in the table's partitioning expression must be part of every unique key that the table may have, including any primary key. This means that a table such as this one, created by the following SQL statement, cannot be partitioned:

CREATE TABLE tnp (
    id INT NOT NULL AUTO_INCREMENT,
    ref BIGINT NOT NULL,
    name VARCHAR(255),
    PRIMARY KEY pk (id),
    UNIQUE KEY uk (name)
);

Because the keys have no columns in common, there are no columns available for use in a partitioning expression. Possible workarounds in this situation include adding the name column to the table's primary key, adding the id column to the unique key uk, or simply removing the unique key entirely. See Section 19.5.1, “Partitioning Keys, Primary Keys, and Unique Keys”, for more information. The MAX_ROWS and MIN_ROWS table options can be used to specify, respectively, the maximum and minimum numbers of rows that can be stored in each partition. The MAX_ROWS option can be useful for causing NDB Cluster tables to be created with extra partitions, thus allowing for greater storage of hash indexes. See the documentation for the DataMemory data node configuration parameter, as well as Section 18.1.2, “NDB Cluster Nodes, Node Groups, Replicas, and Partitions”, for more information. Some advantages of partitioning are listed here:

- Partitioning makes it possible to store more data in one table than can be held on a single disk or file system partition.

- Data that loses its usefulness can often be easily removed from a partitioned table by dropping the partition (or partitions) containing only that data. Conversely, the process of adding new data can in some cases be greatly facilitated by adding one or more new partitions for storing specifically that data.
- Some queries can be greatly optimized by virtue of the fact that data satisfying a given WHERE clause can be stored only on one or more partitions, which automatically excludes any remaining partitions from the search. Because partitions can be altered after a partitioned table has been created, you can reorganize your data to enhance frequent queries that may not have been often used when the partitioning scheme was first set up. This ability to exclude nonmatching partitions (and thus any rows they contain) is often referred to as partition pruning. For more information, see Section 19.4, “Partition Pruning”.

Other benefits usually associated with partitioning include those in the following list. These features are not currently implemented in MySQL Partitioning, but are high on our list of priorities.

- Queries involving aggregate functions such as COUNT() can easily be parallelized. A simple example of such a query might be SELECT salesperson_id, COUNT(orders) AS order_total FROM sales GROUP BY salesperson_id;. By “parallelized,” we mean that the query can be run simultaneously on each partition, and the final result obtained merely by summing the results obtained for all partitions.

- Achieving greater query throughput by virtue of spreading data seeks over multiple disks.

Be sure to check this section and chapter frequently for updates as MySQL Partitioning development continues.
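As a rough illustration of the HASH example above: for PARTITION BY HASH(expr) PARTITIONS n, MySQL assigns a row to partition MOD(expr, n). The following sketch mimics that assignment rule for the ti table shown earlier (the sample dates are hypothetical):

```python
from datetime import date

NUM_PARTITIONS = 6  # matches PARTITIONS 6 in the CREATE TABLE ti example

def hash_partition(tr_date):
    # PARTITION BY HASH( MONTH(tr_date) ): the row is stored in
    # partition number MOD(MONTH(tr_date), NUM_PARTITIONS).
    return tr_date.month % NUM_PARTITIONS

for d in [date(2011, 1, 15), date(2011, 6, 30), date(2011, 12, 25)]:
    print(d.isoformat(), "-> p%d" % hash_partition(d))
```

Note that a June row lands in partition p0 (6 MOD 6 = 0), and both January and July rows land in p1, which is why HASH partitioning distributes rows evenly but does not group "adjacent" values the way RANGE partitioning does.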
Further Pointer Techniques

Pointers are fundamental to many aspects of C++. In this chapter we introduce a number of different topics concerning pointers. In particular, we consider strings, pointers as function arguments, pointers to functions, dynamic memory allocation and reference arguments.

Keywords: reference variable, function argument, double trace, base address, bubble sort
Go west, young pine: America's forests are shifting across the country because of climate change

- Warmer, wetter climate is helping move dozens of Eastern U.S. trees to the north and, surprisingly, west
- Eastern white pine is going west, more than 80 miles (130 kilometers)
- The eastern cottonwood has been heading 77 miles north

A warmer, wetter climate is helping push dozens of Eastern U.S. trees to the north and, surprisingly, west, a new study finds. The eastern white pine has moved west more than 80 miles (130 kilometers) since the early 1980s, and the eastern cottonwood has been heading north, 77 miles (124 kilometers), according to the research based on about three decades of forest data. [Photo: thirty years of data revealed many tree species moving westward in response to climate change; taken in the Appalachians in eastern Kentucky.] The northward shift to get to cooler weather was expected, but lead author Songlin Fei of Purdue University and several outside experts were surprised by the move to the west, which was larger and seen in a majority of the species. New trees tend to sprout farther north and west while the trees that are farther south and east tend to die off, shifting the geographic center of where trees live. [Photo provided by Songlin Fei, Purdue University, taken May 16, 2017: an eastern white pine tree.] Think of it as a line of people stretching, said Fei. Detailed observations of 86 different tree species showed, in general, the concentrations of eastern U.S.
tree species have shifted more than 25 miles west (45 kilometers) and 20 miles (33 kilometers) north, the researchers reported in the journal Science Advances on Wednesday. One of the more striking examples is the scarlet oak, which in nearly three decades has moved more than 127 miles (205 kilometers) to the northwest from the Appalachians, he said. Now it is reduced in the Southeast and more popular in the Midwest. 'This analysis provides solid evidence that changes are occurring,' former U.S. Forest Chief Michael Dombeck said in an email. 'It's critical that we not ignore what analyses like these and what science is telling us about what is happening in nature.' [File photo, Feb. 6, 2007: an eastern white pine seedling held in Nebraska City, Neb.] The westward movement helped point to climate change - especially wetter weather - as the biggest of many culprits behind the shift, Fei said. The researchers did factor in people cutting down trees and changes to what trees are planted and where, he said. With the Southeast generally drying and the West getting wetter, that explanation makes some sense, but not completely, said Brent Sohngen of Ohio State University, who was not involved in the study. 'There is no doubt some signature of climate change,' he said in an email. But given the rapid rates of change reported, harvesting, forest fires and other disturbances are probably still playing a more significant role than climate change, he wrote.
Furtado, A., J.S. Lupoi, N.V. Hoang, A. Healey, S. Singh, B.A. Simmons, R.J. Henry, “Modifying Plants for Biofuel And Biomaterial Production,” Plant Biotechnology Journal, 2014, 12(9) 1246-1258, December 2014. DOI: 10.1111/pbi.12300 The productivity of plants as biofuel or biomaterial crops is established by both the yield of plant biomass per unit area of land and the efficiency of conversion of the biomass to biofuel. Higher yielding biofuel crops with increased conversion efficiencies allow production on a smaller land footprint minimizing competition with agriculture for food production and biodiversity conservation. Plants have traditionally been domesticated for food, fibre and feed applications. However, utilization for biofuels may require the breeding of novel phenotypes, or new species entirely. Genomics approaches support genetic selection strategies to deliver significant genetic improvement of plants as sources of biomass for biofuel manufacture. Genetic modification of plants provides a further range of options for improving the composition of biomass and for plant modifications to assist the fabrication of biofuels. The relative carbohydrate and lignin content influences the deconstruction of plant cell walls to biofuels. Key options for facilitating the deconstruction leading to higher monomeric sugar release from plants include increasing cellulose content, reducing cellulose crystallinity, and/or altering the amount or composition of noncellulosic polysaccharides or lignin. Modification of chemical linkages within and between these biomass components may improve the ease of deconstruction. Expression of enzymes in the plant may provide a cost-effective option for biochemical conversion to biofuel.
One of the most potent toxins known acts by welding the two strands of the famous double helix together in a unique fashion which foils the standard repair mechanisms cells use to protect their DNA. A team of Vanderbilt University researchers have worked out the molecular details that explain how this bacterial toxin -- yatakemycin (YTM) -- prevents DNA replication. Their results, described in a paper published online July 24 by Nature Chemical Biology, explain YTM's extraordinary toxicity and could be used to fine-tune the compound's impressive antimicrobial and antifungal properties. YTM is produced by some members of the Streptomyces family of soil bacteria to kill competing strains of bacteria. It belongs to a class of bacterial compounds that are currently being tested for cancer chemotherapy because their toxicity is extremely effective against tumor cells. "In the past, we have thought about DNA repair in terms of protecting DNA against different kinds of chemical insults," said Professor of Biological Sciences Brandt Eichman. "Now, toxins like YTM are forcing us to consider their role as part of the ongoing chemical warfare that exists among bacteria, which can have important side effects on human health." Cells have developed several basic types of DNA repair, including base excision repair (BER) and nucleotide excision repair (NER). BER generally fixes small lesions and NER removes large, bulky lesions. A number of DNA toxins create bulky lesions that destabilize the double helix. However, some of the most toxic lesions bond to both strands of DNA, thereby preventing the cell's elaborate replication machinery from separating the DNA strands so they can be copied. Normally, this distorts the DNA's structure, which allows NER enzymes to locate the lesion and excise it. "YTM is different," said postdoctoral fellow Elwood Mullins. 
"Instead of attaching to DNA with multiple strong covalent bonds, it forms a single covalent bond and a large number of weaker, polar interactions. As a result, it stabilizes the DNA instead of destabilizing it, and it does so without distorting the DNA structure so NER enzymes can't find it." "We were shocked by how much it stabilizes DNA," Eichman added. "Normally, the DNA strands that we used in our experiments separate when they are heated to about 40 degrees [Celsius] but, with YTM added, they don't come apart until 85 degrees." The Streptomyces bacteria that produce YTM have also evolved a special enzyme to protect their own DNA from the toxin. Surprisingly, this is a base excision repair enzyme -- called a DNA glycosylase -- that is normally limited to repairing small lesions, not the bulky adducts caused by YTM. Nevertheless, studies have shown that it is extremely effective. It so happens that one of Streptomyces' competitors, Bacillus cereus, has managed to co-opt the gene that produces this particular enzyme. In Bacillus, however, the enzyme it produces -- called AlkD -- provides only limited protection. In 2015, Eichman and Mullins reported that, unlike other BER enzymes, AlkD can detect and excise YTM lesions. At the time, they had no idea why it wasn't as effective as its Streptomyces counterpart. Now they do. It turns out that AlkD tightly binds the product that it forms from a YTM lesion, inhibiting the downstream steps in the BER process that are necessary to fully return the DNA to its original, undamaged state. This drastically reduces the effectiveness of the repair process as a whole. In recent years, biologists have discovered that animals and plants host thousands of different species of commensal bacteria and this microscopic community, called the microbiome, plays a surprisingly important role in their health and well-being. 
Normally, these bacteria are beneficial -- for example, converting indigestible foods into digestible forms -- but they can also cause problems, such as the stomach bacterium Helicobacter pylori that can cause inflammation that produces ulcers. "We know that bacteria produce compounds like YTM when they are under stress," Eichman observed. "The negative effects this has on their hosts is an unfortunate side effect. So it is very important that we learn as much as we can about how these bacterial toxins work and how bacteria defend against them." Graduate research assistant Rongxin Shi also participated in the research, which was funded by National Science Foundation grant MCB-1517695, National Institutes of Health grant R01 ES019625 and Department of Energy's Office of Science grant DE-AC02-06CH11357.

David F Salisbury | EurekAlert!
This chapter is largely devoted to one class: File. The File class gives you the ability to list directories, obtain file status, rename and delete files on disk, create directories, and perform other filesystem operations. Many of these would be considered "system programming" functions on some operating systems; Java makes them all as portable as possible. Note that many of the methods of this class attempt to modify the permanent file store, or disk filesystem, of the computer you run them on. Naturally, you might not have permission to change certain files in certain ways. This can be detected by the Java Virtual Machine's (or, in an applet, the browser's) SecurityManager, which will throw an instance of the unchecked exception SecurityException. But failure can also be detected by the underlying operating system: if the security manager approves it, but the user running your program lacks permissions on the directory, for example, you will either get back an indication (such as false) or an instance of the checked exception IOException. This must be caught (or declared in the throws clause) in any code that calls any method that tries to change the filesystem. You need to know all you can about a given file on disk. The File class has a number of "informational" methods. To use any of these, you must construct a File object containing ...
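A short sketch of these "informational" methods in action, using a temporary file so the example is self-contained (the class name is our own, not from the chapter):

```java
import java.io.File;
import java.io.IOException;

public class FileInfoDemo {
    public static void main(String[] args) throws IOException {
        // Create a scratch file so the example does not depend on
        // any particular file already existing on disk.
        File f = File.createTempFile("demo", ".txt");

        // "Informational" methods: query the file's status.
        System.out.println("exists=" + f.exists());
        System.out.println("isDirectory=" + f.isDirectory());
        System.out.println("canRead=" + f.canRead());
        System.out.println("length=" + f.length());

        // Methods that modify the filesystem report failure by
        // returning false; permission problems may also surface as
        // SecurityException or IOException, as described above.
        boolean deleted = f.delete();
        System.out.println("deleted=" + deleted);
        System.out.println("existsAfterDelete=" + f.exists());
    }
}
```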
In the last two decades, humans have finally observed some of the most interesting places in our solar system—the ice worlds. The Galileo spacecraft visited Jupiter and its moons, including Europa. The Cassini spacecraft visited Saturn and its moons, including Titan and Enceladus. These moons are as large as small planets and, due to tidal forces from orbiting their giant planets, are likely to have liquid oceans with as much water (or liquid methane) as Earth's oceans. Titan has an atmosphere, with clouds, and possibly rivers and lakes. Europa has a huge ocean of water under an icy crust. Enceladus, too, seems to have a liquid ocean under ice. Energy + liquid? Sounds like life could happen. Of course, the data from recent missions is still being analyzed, so even before the next visit we can learn more. This summer an international team reported more findings from the Cassini spacecraft, which measured particles as it flew near Enceladus (more than ten years ago now). Enceladus has a rocky core, a large ocean, and a thin ice crust. Cracks in the crust emit water from below, and one 'volcano' is spewing vapor and ice crystals out into space. Cassini flew through this plume and observed some of the particles. The initial study identified the plume as mainly gasses and ice particles. The new study extends the results, demonstrating that many of the observations appear to be ice crystals with traces of complex organic molecules embedded. Given the probable origin of the plume, this strongly suggests that the ocean has a soup of organic compounds. As the BBC confusingly put it, "a step closer to hosting life". The paper discusses the complexities of low temperature, low pressure ice, and argues that the hypothesized 'dirty ice' could only form from complex processes. They offer a scenario involving a thin film of organics, which bubbles up through a crack, becoming coated with ice, which is then ejected.
I didn’t follow all of the argument here, but there is an important point: it is a mistake to assume that organic chemistry works in familiar ways in such a cold place. Energy + water + complex chemistry does not mean “just like my back yard”. Revisiting these ice worlds has become a top priority, at least for actual scientists (if not necessarily for funding agencies). If and when we visit them, we should find one of three possibilities: - there is no sign of life, even though the environment likely could support life. This may tell us something about the probability of life emerging in the universe. - there is recognizable life, and it is related to Earth. If we find some variation of DNA/RNA or whatever, that will open the question of how a common ancestor got to two different places in the solar system. Got Panspermia? - there is recognizable life, but it is clearly not related to Earth. This “second example” will tell us something about what “life” is, and how it emerges. (There is a fourth possibility: there might be something so different we need new concepts. There may even be life, but we won’t recognize it, or may disagree about it.) Any and all of these outcomes will be breathtaking! We have to go there! Editorial aside: So, why are people wasting money on space tourism and suicide missions to Mars, when the most exciting discovery in the history of science is sitting right there, if we can just get our act together? - Mary Halton, Saturn moon a step closer to hosting life, in BBC News Science & Environment. 2018. https://www.bbc.com/news/science-environment-44630121 - Frank Postberg, Nozair Khawaja, Bernd Abel, Gael Choblet, Christopher R. Glein, Murthy S. Gudipati, Bryana L. Henderson, Hsiang-Wen Hsu, Sascha Kempf, Fabian Klenner, Georg Moragas-Klostermeyer, Brian Magee, Lenz Nölle, Mark Perry, René Reviol, Jürgen Schmidt, Ralf Srama, Ferdinand Stolz, Gabriel Tobie, Mario Trieloff, and J.
Hunter Waite, Macromolecular organic compounds from the depths of Enceladus. Nature, 558 (7711):564-568, 2018/06/01 2018. https://doi.org/10.1038/s41586-018-0246-4 Ice Worlds, Ho!
This has significant implications for global climate change because the nitrogen causes increased marine biological activity and CO2 uptake, which in turn produces the potent greenhouse gas nitrous oxide (N2O). Published in the journal Science on May 16, the research was led by Texas A&M University and the University of East Anglia (UEA). It has long been known that man is enhancing the global nitrogen cycle through the use of fertilisers in agriculture and the burning of fossil fuels in power stations and cars. The effect of this on the land has been extensively studied. However, this is the first time its impact on the open ocean has been properly quantified. “Anyone concerned about climate change will be alarmed at the scale of man’s impact on the world’s oceans, as revealed by our new study,” said Prof Peter Liss, an environmental scientist at the University of East Anglia. “The natural nitrogen cycle has been very heavily influenced by human activity over the last century – perhaps even more so than the carbon cycle – and we expect the damaging effects to continue to grow. It is vital that policy makers take action now to arrest this. “The solution lies in controlling the use of nitrogen fertiliser and tackling pollution from the rapidly increasing numbers of cars, particularly in the developing world.” ‘Impacts of atmospheric anthropogenic nitrogen on the open ocean’ is published in Science on May 16. The paper is the culmination of a project involving 30 researchers from the UK, the US, Germany, Italy, China, the Netherlands, Switzerland, Canada and Chile. The study found that increasing quantities of atmospheric anthropogenic fixed nitrogen entering the open ocean could account for around one third of the ocean’s external (non-recycled) nitrogen supply and up to three per cent of the annual new marine biological production.
While the increased biological activity has the beneficial effect of drawing down man-made CO2 from the atmosphere, the researchers found that around two-thirds of this is offset by the increase in harmful N2O emissions. “This fertilization of the ocean by human activities has an important impact on the exchange of the greenhouse gases carbon dioxide and nitrous oxide and should be considered in future climate change scenarios,” said Prof Robert Duce of Texas A&M University, lead author of the paper. Press Office | alfa
We investigated short-term movements of neonate and juvenile sandbar sharks, Carcharhinus plumbeus, on their nursery grounds in Delaware Bay. The majority of sharks tracked limited their movements to water less than 5 m deep, remained within 5 km of the coastline, and occupied oblong activity spaces along the coast. In addition to site-attached coastal movements observed, several sharks moved entirely across Delaware Bay or spent considerable time in deeper portions of the central bay. Sharks tracked on the New Jersey side of the bay tended to spend more time in deeper water, farther from shore than sharks tracked on the Delaware side. Observation-area curves estimated that optimal tracking time for sandbar sharks in Delaware Bay was 41 h. Indices of site attachment showed that movement patterns of tracked sandbar sharks varied from nomadic to home ranging. There was no significant difference in rate of movement for day/night, crepuscular periods, or between juveniles and neonates. In general, young sandbar sharks patrolled the coast and appeared to be site attached to some extent, but were capable of making longer excursions, including movement entirely across Delaware Bay.
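An observation-area curve of the kind used in the abstract plots cumulative activity-space area against tracking time; the "optimal tracking time" is where the curve levels off, meaning further tracking adds little new area. A minimal sketch of that computation, assuming activity space is measured as a minimum convex polygon around the position fixes (the fix coordinates and function names here are illustrative, not from the study):

```python
def convex_hull(pts):
    """Andrew's monotone chain: convex hull of 2-D points, counter-clockwise."""
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def hull_area(hull):
    """Polygon area via the shoelace formula; zero for fewer than 3 vertices."""
    if len(hull) < 3:
        return 0.0
    s = 0.0
    for i in range(len(hull)):
        x1, y1 = hull[i]
        x2, y2 = hull[(i + 1) % len(hull)]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def observation_area_curve(fixes):
    """Cumulative minimum-convex-polygon area after each successive fix."""
    return [hull_area(convex_hull(fixes[:k])) for k in range(1, len(fixes) + 1)]
```

In practice the fixes would be projected track coordinates ordered by time, and the plateau of the returned curve would be read off against elapsed tracking hours.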
<urn:uuid:17a5462e-9ee2-451b-8fe0-e6b4e17f0bb7>
3.015625
253
Academic Writing
Science & Tech.
23.364265
95,554,252
30 January 2004 Weizmann Institute scientists have created a new type of nanotube built of gold, silver and other nanoparticles. The tubes exhibit unique electrical, optical and other properties, depending on their components, and as such, may form the basis for future nanosensors, catalysts and chemistry-on-a-chip systems. The study, published in Angewandte Chemie, was performed by Prof. Israel Rubinstein, Dr. Alexander Vaskevich, postdoctoral associate Dr. Michal Lahav and doctoral student Tali Sehayek, all of the Institute’s Department of Materials and Interfaces. Discovered in 1991, the first nanotubes were made of carbon and captured the attention of scientists worldwide when they proved to be the strongest material ever made, as well as being excellent conductors of electricity and heat. The new nanotube created at the WIS lacks the mechanical strength of carbon nanotubes. Its advantages lie instead in its use of nanoparticles as building blocks, which makes it possible to tailor the tube’s properties for diverse applications. The properties can be altered by choosing different types of nanoparticles or even a mixture, thus creating composite tubes. Moreover, the nanoparticle building blocks can serve as a scaffold for various add-ons, such as metallic, semiconducting or polymeric materials – thus further expanding the available properties. The tubes are produced at room temperature – a first-time achievement – in a three-step process. The scientists start out with a nanoporous aluminum oxide template that they modify chemically to make it bind readily to gold or silver nanoparticles. When a solution containing the nanoparticles (each only 14 nanometers in diameter) is poured through, they bind both to the aluminum oxide membrane and to themselves, creating multi-layered nanotubes in the membrane pores. In step three, the aluminum oxide membrane is dissolved, leaving an assembly of free-standing, solid nanotubes.
“We were amazed when we discovered the beautifully formed tubes,” says Rubinstein. “The construction of nanotubes out of nanoparticles is unprecedented. We expected the nanoparticles to bind to the aluminum oxide template – that had been done before; but we did not expect them to bind to each other, creating the tubes.” The discovery process held other surprises for the Institute team. They had set out to accomplish something else entirely – to create a nanoporous template for studying the passage of biological molecules through different membranes. Likewise, having employed annealing – a process that uses heat to bind structures – they found that annealing actually prevented tube formation. “Everything interesting, in fact, happened at room temperature,” says Rubinstein. “This exceptional process, of spontaneous room-temperature binding of nanoparticles to form tubes, is not yet fully understood and is currently being studied.” The resulting tube is porous and has a high surface area, distinct optical properties and electrical conductivity. Collectively, the tube’s unusual properties may enable the design of future sensors and catalysts (both requiring high surface area), as well as microfluidic, chemistry-on-a-chip systems applied in biotechnology, such as DNA chips (used to detect genetic mutations and evaluate drug performance). Applying their approach, the team has succeeded in creating various metal and composite nanotubes, including gold, silver, gold/palladium and copper-coated gold tubes. Yeda, the Institute’s technology transfer arm, has filed a patent application for the new tubes.
By analyzing sediment cores from Chilean lakes, an international team of scientists discovered that giant earthquakes recur at relatively regular intervals. When also taking into account smaller earthquakes, the repeat interval becomes increasingly irregular, to a level where earthquakes happen randomly in time. “In 1960, South-Central Chile was hit by the largest known quake on earth with a magnitude of 9.5. Its tsunami was so massive that – in addition to inundating the Chilean coastline – it travelled across the Pacific Ocean and even killed about 200 persons in Japan,” says Jasper Moernaut, an assistant professor at the University of Innsbruck, Austria, and lead author of the study. “Understanding when and where such devastating giant earthquakes may occur in the future is a crucial task for the geoscientific community”. It is generally believed that giant earthquakes release so much energy that several centuries of stress accumulation are needed to produce a new big one. Therefore, seismological data or historical documents simply do not go back far enough in time to reveal the patterns of their recurrence. “It is an ongoing topic of very vivid debate whether we should model large earthquake recurrence as a quasi-regular or random process in time. Of course, the model choice has very large repercussions on how we evaluate the actual seismic hazard in Chile for the coming decades to centuries.” In their recent paper in Earth and Planetary Science Letters, Moernaut's team of Belgian, Chilean and Swiss researchers presented a new approach to tackle the problem of large earthquake recurrence. By analyzing sediments on the bottom of two Chilean lakes, they recognized that each strong earthquake produces underwater landslides which get preserved in the sedimentary layers accumulating on the lake floor.
By sampling these layers in up to 8 m long sediment cores, they retrieved the complete earthquake history over the last 5000 years, including up to 35 great earthquakes of a magnitude larger than 7.7. “What is truly exceptional is the fact that in one lake the underwater landslides only happen during the strongest shaking events (like a M9 earthquake), whereas the other lake also reacted to “smaller” M8 earthquakes,” says Maarten Van Daele from Ghent University, Belgium. “In this way we were able to compare the patterns in which earthquakes of different magnitudes take place. We did not have to guess which model is the best, we could just derive it from our data.” With this approach, the team found that giant earthquakes (like the one in 1960) recur every 292 ± 93 years and thus the probability for such giant events remains very low in the next 50-100 years. However, the “smaller” (~M8) earthquakes took place every 139 ± 69 years and there is a 29.5% chance that such an event may occur in the next 50 years. Since 1960, the area has been seismically very quiet, but a recent M7.6 earthquake (on 25 December 2016) near Chiloé Island suggests a reawakening of great earthquakes in South-Central Chile. “These Chilean lakes form a fantastic opportunity to study earthquake recurrence,” says Moernaut. “Glacial erosion during the last Ice Age resulted in a chain of large and deep lakes above the subduction zone, where the most powerful earthquakes are getting generated. We hope to extend our approach along South America, which may allow us to discover whether e.g. earthquakes always rupture in the same segments, or whether other areas in the country are capable of producing giant M9+ earthquakes.” “In the meanwhile, we already initiated similar studies on Alaskan, Sumatran and Japanese lakes,” says Marc De Batist from Ghent University.
“We are looking forward to some exciting comparisons between the data from these settings, and to seeing if the Chilean patterns hold for other areas that have experienced giant M9+ earthquakes in the past.”

Publication: J. Moernaut, M. Van Daele, K. Fontijn, K. Heirman, P. Kempf, M. Pino, G. Valdebenito, R. Urrutia, M. Strasser, M. De Batist, 2018. Larger earthquakes recur more periodically: New insights in the megathrust earthquake cycle from lacustrine turbidite records in south-central Chile, Earth and Planetary Science Letters, 481, 9-19. DOI: 10.1016/j.epsl.2017.10.016

Contact: Department of Geology, University of Innsbruck, phone: +43 512 507 54372; Public Relations Office, University of Innsbruck, phone: +43 512 507 32022

https://www.uibk.ac.at/geologie/sediment/ - Sedimentary Geology Working Group

Dr. Christian Flatz | Universität Innsbruck
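The quoted 29.5% chance of a ~M8 event in the next 50 years, given a mean recurrence of 139 years, is close to what a simple memoryless (Poisson) recurrence model predicts; the sketch below illustrates that arithmetic. Note this is only an illustration: the paper's central argument is that giant earthquakes are quasi-periodic rather than memoryless, so for M9+ events the Poisson figure overstates the near-term hazard so soon after 1960.

```python
import math

def poisson_event_probability(mean_interval_years, window_years):
    """P(at least one event within the window), assuming events occur as a
    memoryless Poisson process with the given mean recurrence interval."""
    rate = 1.0 / mean_interval_years
    return 1.0 - math.exp(-rate * window_years)

# ~M8 earthquakes: mean recurrence 139 years (from the study)
p_m8 = poisson_event_probability(139, 50)   # about 0.30, close to the 29.5% quoted
# giant (M9+) events: mean recurrence 292 years (from the study)
p_m9 = poisson_event_probability(292, 50)   # about 0.16 under Poisson; the paper's
                                            # quasi-periodic model implies much less
```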
Professor Priede said: “It is like surveying a new continent half way between America and Europe. We can recognise the creatures, but familiar ones are absent and unusual ones are common. We are finding species that are rare or unknown elsewhere in the world.” One of the world’s most advanced research vessels, the RRS James Cook, will be docking at Fairlie Pier by Largs tomorrow (Saturday, August 18), bringing samples of rare animals and a vast archive of pictures and videos, which will help us to understand more about life in the oceans. The RRS James Cook is the latest addition to the Natural Environment Research Council’s fleet of oceanographic research ships. The team of scientists mapped over 1,500 square miles, exploring the deep sea creatures living in the depths of the Mid-Atlantic Ridge. They used the latest technology to learn more about what is living in this remote and relatively unexplored deep-sea environment using remotely operated vehicles equipped with digital cameras. With a suite of eight deep sea cameras they were able to capture images of life on the peaks and valleys of very rugged terrain. Colourful sponges and corals encrust rocky cliffs, whereas areas of soft sediment are populated by starfish, brittle-stars, sea cucumbers and burrowing worms. Fishes, crabs and shrimps forage over the ridge exploiting whatever they can find. Trawls, traps and corers have brought back thousands of specimens for study back in the laboratory. Professor Priede said: “We are trying to imagine what the north Atlantic would be like without the ridge that literally cuts it in half, as we think it has a major effect on ocean currents, productivity and biodiversity of the North Atlantic Ocean. 
“The RRS James Cook ship is an absolutely fantastic facility and is allowing marine researchers to explore new environments, find new animals and study global changes in the world’s oceans.” The aim of the voyage is to contribute to the wider MAR-ECO project studying biodiversity along mid-ocean ridges (www.mar-eco.no) and to the global Census research programme. Census of Marine Life is a 10-year global scientific initiative to assess and explain the diversity, distribution and abundance of life in the oceans. The team already think they may have discovered a new species of Ostracod (or seed shrimp) that was found swarming in large numbers on the western side of the ridge. Specimens are on their way to experts in Southampton where world-renowned expert, Professor Martin Angel, will ultimately determine whether this is a new species, describe it and allocate a name. Dr Steven Wilson, Director of Science & Innovation for the Natural Environment Research Council, said: "The Mid-Atlantic Ridge is still relatively unexplored so this voyage will have played a vital role in expanding our knowledge of the biodiversity of the region.” Water currents and tides over the ridge were studied intensively and daily measurements were made of productivity in surface waters. The team left behind automatic equipment on the sea floor at six observing stations that will continue measurements and photography over the next two years. Further voyages are planned in 2008 and 2009 that will include retrieval of the gear. Oceanlab was responsible for assisting with the expedition management and deployed three deep ocean lander vehicles recording luminescent displays from animals living in the darkness on one of the peaks of the mid ocean ridge. 
The expedition is run under ECOMAR, a £2million consortium project funded by the UK Natural Environment Research Council, led by the University of Aberdeen with participation from: National Oceanography Centre, Southampton, University of St Andrews, Scottish Association for Marine Science, Plymouth Marine Laboratory, University of Durham and University of Newcastle. It provides a contribution to the wider MAR-ECO project co-ordinated by Odd Aksel Bergstad of Norway and the Census of Marine Life, a global project involving over 2,000 scientists. ECOMAR is also a European Census of Marine Life affiliated project. Bhavani Narayanaswamy | alfa
Bio-mechanics of Rectilinear Locomotion

Rectilinear locomotion relies upon two opposing muscles, the costocutaneous inferior and superior, which are present on every rib and connect the ribs to the skin. Although it was originally believed that the ribs moved in a "walking" pattern during rectilinear movement, studies have shown that the ribs themselves do not move; only the muscles and the skin move to produce forward motion. First, the costocutaneous superior lifts a section of the snake's belly from the ground and places it ahead of its former position. Then, the costocutaneous inferior pulls backwards while the belly scales are on the ground, propelling the snake forwards. These sections of contact propagate posteriorly, which results in the ventral surface, or belly, moving in discrete sections akin to "steps" while the overall body of the snake moves continuously forward at a relatively constant speed.

Uses of Rectilinear Locomotion

This method of locomotion is extremely slow (between 1 and 6 cm per second), but is also almost noiseless and very hard to detect, making it the mode of choice for many species when stalking prey. It is primarily used when the space being traversed is too constricting to allow for other forms of movement. When climbing, snakes will often use rectilinear locomotion in conjunction with concertina movements to exploit terrain features such as interstices in the surfaces they are climbing.

In Robotics

The development of rectilinear movement in robotics is centered on the development of snakelike robots, which have significant advantages over robots with wheeled or bipedal locomotion. The primary advantage in the creation of a serpentine robot is that the robot is often capable of traversing rough, muddy, and complex terrain that is often prohibitive to wheeled robots.
Secondly, due to the mechanisms responsible for rectilinear and other forms of serpentine locomotion, the robots tend to have repetitive motor elements, which makes the entire robot relatively robust to mechanical failure.

References
1. C. Gans (1986). Locomotion of Limbless Vertebrates: Pattern and Evolution.
2. Gray, J. (1946). "The mechanism of locomotion in snakes". The Journal of Experimental Biology. 23: 101–120.
3. Gans, Carl (1984). "Slide-pushing: a transitional locomotor method of elongate squamates". Symposium of the Zoological Society of London. 52: 12–26.
4. Bogert, Charles (1947). "Rectilinear locomotion in snakes". Copeia. 1947: 253–254. doi:10.2307/1438921.
5. Lissman, H.W. (1949). "Rectilinear locomotion in a snake (Boa occidentalis)". Journal of Experimental Biology. 26: 368–379.
6. Marvi, H.; Bridges, J.; Hu, DL. (2013). "Snakes mimic earthworms: propulsion using rectilinear traveling waves". J R Soc Interface.
7. Saito, M.; Fukuya, M.; Iwasaki, T. "Modeling, analysis, and synthesis of serpentine locomotion with a multilink robotic snake". Forth Institute of Computer Science Internal Publications.
8. Date, Hisashi; Takita, Yoshihiro (2007). "Adaptive locomotion of a snake like robot based on curvature derivatives". Intelligent Robots and Systems. doi:10.1109/IROS.2007.4399635.
9. Crespi, Alessandro; Badertscher, Andre; Guignard, Andre; Ijspeert, Auke Jan (2004). "AmphiBot I: an amphibious snake-like robot". Robotics and Autonomous Systems. 50 (4): 163–175. doi:10.1016/j.robot.2004.09.015.
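The posteriorly propagating grip-and-advance wave described in the Bio-mechanics section is the usual starting point for a rectilinear gait generator in snake robots. A minimal sketch, assuming each body segment has one actuator driven between 0 (lifted/advancing) and 1 (gripping); the segment count, wavelength and period are arbitrary illustration values, not taken from the cited papers:

```python
import math

def rectilinear_wave(n_segments, t, wavelength=4.0, period=1.0):
    """Activation level for each belly segment at time t: a sinusoidal wave
    whose phase decreases with segment index, so the activation pattern
    travels tailward (posteriorly), as in rectilinear locomotion."""
    return [0.5 * (1.0 + math.sin(2.0 * math.pi * (t / period - i / wavelength)))
            for i in range(n_segments)]
```

With wavelength 4 and period 1, advancing time by a quarter period shifts the whole activation pattern back by exactly one segment, which is the discrete "steps" behaviour the article describes.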
<urn:uuid:6827bc83-37f0-4332-8c4a-f983a8ee0577>
3.796875
995
Knowledge Article
Science & Tech.
40.001692
95,554,298
Johannesburg: Mozambique, the southern African country rated as one of the poorest, has topped a survey of nations with the lowest global environmental impact while India has ranked a dismal 75th in the report. Mozambique topped the list because almost all its energy use comes from green sources. India, on the other hand, was placed 75th, with renewable energy making up only 15.2 percent of all energy used; only 2.2 percent of waste water being recycled, and municipal waste of 0.34 kg per person being generated daily. The study by UK-based MoneySuperMarket highlights the individual contribution to the world's climate while also highlighting areas for improvement for each country. The rankings were based on different measurements that make up the average individual human impact in each country, including energy consumption, air pollution, waste production and reliance on non-renewable energy. The study helps identify the biggest contributors to negative environmental impact, but the results are also surprising, placing four other African countries - Ethiopia, Zambia, Kenya and Ghana - in the top seven for lowest environmental impact. Africa as a continent topped the charts and featured strongly in its use of green energy, low CO2 emissions and low levels of air pollution and waste production. But Mozambique's biggest neighbour, South Africa, ranks among the worst performers for environmental impact at 95th, though it still fares marginally better than first-world nations such as Australia, Canada and the US. In Trinidad and Tobago, the worst country for environmental impact, the CO2 emissions are an average of 37.1 tonnes per person. Updated Date: Jun 16, 2017 07:00 AM
<urn:uuid:a0c557fa-f295-4bc5-8e80-fd0d5c502765>
3.34375
348
Truncated
Science & Tech.
34.603516
95,554,314
7 Aquilae has 13 related concepts: apparent magnitude, Aquila (constellation), Carnegie Institution for Science, celestial equator, constellation, Delta Scuti variable, Durchmusterung, Flamsteed designation, Henry Draper Catalogue, Hipparcos, Smithsonian Astrophysical Observatory Star Catalog, star, and variable star. The apparent magnitude of a celestial object is a number that measures its brightness as seen by an observer on Earth. Aquila is a constellation on the celestial equator. The Carnegie Institution of Washington (the organization's legal name), known for public purposes as the Carnegie Institution for Science (CIS), is a United States organization established to fund and perform scientific research. The celestial equator is the great circle of the imaginary celestial sphere on the same plane as the equator of Earth. A constellation is a group of stars considered to form imaginary outlines or meaningful patterns on the celestial sphere, typically representing animals, mythological people or gods, mythological creatures, or manufactured devices. A Delta Scuti variable (sometimes termed a dwarf cepheid) is a variable star which exhibits variations in its luminosity due to both radial and non-radial pulsations of the star's surface. In astronomy, the Durchmusterung or Bonner Durchmusterung (BD) is the comprehensive astrometric star catalogue of the whole sky, compiled by the Bonn Observatory (Germany) from 1859 to 1903. A Flamsteed designation is a combination of a number and constellation name that uniquely identifies most naked-eye stars in the modern constellations visible from southern England. The Henry Draper Catalogue (HD) is an astronomical star catalogue published between 1918 and 1924, giving spectroscopic classifications for 225,300 stars; it was later expanded by the Henry Draper Extension (HDE), published between 1925 and 1936, which gave classifications for 46,850 more stars, and by the Henry Draper Extension Charts (HDEC), published from 1937 to 1949 in the form of charts, which gave classifications for 86,933 more stars. Hipparcos was a scientific satellite of the European Space Agency (ESA), launched in 1989 and operated until 1993. The Smithsonian Astrophysical Observatory Star Catalog is an astrometric star catalogue. A star is a type of astronomical object consisting of a luminous spheroid of plasma held together by its own gravity. A variable star is a star whose brightness as seen from Earth (its apparent magnitude) fluctuates.
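The magnitude scale mentioned above is logarithmic: a difference of exactly 5 magnitudes corresponds to a factor of 100 in brightness. A small sketch of the flux ratio implied by any magnitude difference:

```python
# Flux ratio between two stars given their apparent magnitudes.
# By definition, 5 magnitudes = a factor of 100 in brightness,
# so the ratio for a difference dm is 100 ** (dm / 5).
def flux_ratio(m_faint, m_bright):
    """How many times brighter the m_bright star appears than the m_faint one."""
    return 100 ** ((m_faint - m_bright) / 5)

print(flux_ratio(6.0, 1.0))  # prints 100.0 (a 5-magnitude difference)
```

Note that larger magnitudes mean fainter stars, which is why the fainter star's magnitude comes first in the subtraction.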
<urn:uuid:41471058-e054-495a-90f0-e7884a194cb6>
3.46875
665
Structured Data
Science & Tech.
32.81871
95,554,330
Was There Life on Mars? Ask SAM Jul 31 2015 Read 1014 Times It isn’t a mystery that we are interested in our nearest neighbour Mars — after all, curiosity is one of the greatest virtues in our search for truth and understanding. So take a look at the latest mission to the red planet, the Mars Science Laboratory and in particular its rover — Curiosity — as they try to answer the question ‘was there life on Mars?’ Mars Exploration Program The Curiosity rover is part of the Mars Science Laboratory (MSL) mission which landed on Mars in August 2012 — with MSL being part of NASA’s wider Mars Exploration Program. The Exploration program is a long term program using robotic rovers to explore the surface of the red planet. The Mars Exploration Program has four main aims: - Determine whether life ever arose on Mars - Characterize the climate of Mars - Characterize the geology of Mars - Prepare for human exploration The role of Curiosity is to assess how the environment on Mars has changed and if it could have supported life forms such as microbes — this will help to determine the planet's habitability. The biggest science lab on Mars — Meet SAM Curiosity carries the most advanced set of instruments to have landed on Mars so far, with more than half of the instrument payload taken up with one of the spectrometers — SAM. SAM — Sample Analysis at Mars — consists of three instruments closely related to each other, a gas chromatograph with mass spectrometry (GC-MS) along with a tuneable laser spectrometer (TLS). The GC has six columns that are 30m in length with an internal diameter of 0.25mm. Each column is configured individually with different stationary phases to allow for a wide range of compounds to be separated and then analysed. The effect of changing column parameters is discussed fully in the article, Optimisation of Column Parameters in GC. SAM is designed to process both solid and gaseous samples. 
It has a furnace capable of heating solid samples to over 900 °C and then extracting and passing the volatiles released through the GC columns and on to the mass spectrometer and TLS — and water was one of the compounds found when Martian rocks were heated in SAM. Results so far Several instruments aboard Curiosity have identified some kind of signal that water was, or is, present on Mars; but SAM has given scientists back on Earth an idea about the water’s origin by measuring its D/H (deuterium/hydrogen) ratio — with the results showing that the Martian atmosphere has changed over the years — indicating that conditions on Mars were very different in the past. So far SAM has found methane in the atmosphere but no evidence it came from microbes — the search for life on Mars goes on. Image courtesy of NASA
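D/H comparisons of this kind are conventionally reported in delta notation against the VSMOW (Vienna Standard Mean Ocean Water) reference ratio. A minimal sketch; the 6x-enriched sample below is a hypothetical value for illustration, not a SAM measurement.

```python
# Delta-D: per-mil deviation of a sample's D/H ratio from the VSMOW
# reference ratio, the standard baseline for water isotope comparisons.
VSMOW_D_H = 155.76e-6  # D/H ratio of Vienna Standard Mean Ocean Water

def delta_d(sample_d_h):
    """Return deltaD in per mil (parts per thousand) relative to VSMOW."""
    return (sample_d_h / VSMOW_D_H - 1.0) * 1000.0

# A hypothetical sample with 6x the VSMOW ratio reads about +5000 per mil:
print(delta_d(6 * VSMOW_D_H))
```

A strongly deuterium-enriched atmosphere is the signature of light hydrogen having escaped to space over time, which is why this single ratio says so much about a planet's history.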
<urn:uuid:5f40e0c6-dcc2-4f46-88c2-d2e2a97e8e10>
3.359375
751
Truncated
Science & Tech.
44.718974
95,554,333
one of the methods he (reportedly) developed or adopted when studying irrationals was to use a number series (called his 'ladder') in order to approximate to the square root of 2: how is the 'ladder' formed? how can it be used to approximate to the square root of 2? 'Reaching the Core of AS Mathematics', available from the ATM, interestingly links this blended recursion 'ladder' to the expansion of: what has it got to do with Eudoxus' ladder and why? in the limit, how does this provide an approximation to the square root of 2?
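One common reconstruction of such a 'ladder' (often known as Theon's ladder) tracks a 'side' number s and a 'diagonal' number d and updates both recursively; the ratio d/s then alternates above and below the square root of 2 and converges to it. A sketch:

```python
# Theon's ladder for sqrt(2): start from (1, 1) and repeatedly apply
#   side:     s -> s + d
#   diagonal: d -> 2*s + d
# (both updates use the *old* s and d). The ratio d/s converges to sqrt(2).
def ladder(steps):
    s, d = 1, 1
    rungs = [(s, d)]
    for _ in range(steps):
        s, d = s + d, 2 * s + d  # tuple assignment: RHS uses old s, d
        rungs.append((s, d))
    return rungs

for s, d in ladder(5):
    print(f"{d}/{s} = {d / s:.6f}")
```

The first few rungs are (1,1), (2,3), (5,7), (12,17), (29,41), giving 1, 1.5, 1.4, 1.41667, 1.41379 — already accurate to three decimal places.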
<urn:uuid:89bd3ead-627d-4ea3-9220-dbede0509f7e>
2.765625
130
Personal Blog
Science & Tech.
45.587697
95,554,354
New findings are fueling an old suspicion that fundamental particles and forces spring from strange eight-part numbers called “octonions.” By squeezing fluids into flat sheets, researchers can get a handle on the strange ways that turbulence feeds energy into a system instead of eating it away. Mathematicians have disproved the strong cosmic censorship conjecture. Their work answers one of the most important questions in the study of general relativity and changes the way we think about space-time. By 1913, Albert Einstein had nearly completed general relativity. But a simple mistake set him on a tortured, two-year reconsideration of his theory. Today, mathematicians still grapple with the issues he confronted. The Navier-Stokes equations describe simple, everyday phenomena, like water flowing from a garden hose, yet they provide a million-dollar mathematical challenge. Two mathematicians prove that under certain extreme conditions, the Navier-Stokes equations output nonsense. The mathematician Svitlana Mayboroda and collaborators have figured out how to predict the behavior of electrons — a mathematical discovery that could have immediate practical effects.
<urn:uuid:68b53a38-1afe-4300-b718-d80790b6d088>
2.703125
231
Content Listing
Science & Tech.
22.894966
95,554,364
Heavy metal resistant–plant growth promoting bacteria as an alternative strategy for decreasing accumulation of metals in plant tissues In the present era, pollution of water, soil and air with heavy metals is increasing rapidly. Industrialization and technological advancement have caused serious damage to the ecosystem by releasing large quantities of heavy metals (e.g., cadmium, chromium, and lead) and metalloids (e.g., arsenic and antimony). These metals are a major threat to all life forms in the environment due to their toxic effects. In addition, heavy metals are not chemically or biologically degradable and are not simply removed from the environment. They can only be transformed into less toxic species. Unlike many other pollutants (e.g., organic contaminants), the toxic effects of heavy metals last longer. Most of the metals are toxic at low concentrations. Speciation of heavy metals and their bioavailability determine the physiological and toxic effects of metals on living organisms. These metals are transferred to the food chain by plants grown on heavy metal-contaminated soils, which are the main sites of accumulation of these metals. Remediation using conventional physical and chemical methods is not cost effective and produces large volumes of chemical wastes. In addition, use of these methods is ineffective for low metal concentrations (less than 100 mg/L) and is not specific for metal-binding properties. Therefore, there is a need to find an eco-friendly and efficient method of reclaiming environments contaminated with heavy metals. The use of beneficial microorganisms, including bacteria, is known to be an efficient alternative to physical and chemical techniques for the remediation of heavy metal contaminated environments.
The majority of heavy metals disrupt microbial cell membranes, but the bacteria isolated from heavy metal-contaminated soils or from plants grown on heavy metal-polluted soils are resistant to heavy metals and have adapted to these metals through a variety of chromosomal, transposon, and plasmid-mediated resistance systems. Heavy metal-resistant bacteria have various mechanisms of metal sequestration that hold greater metal biosorption capacities. Due to their resilience to an extensive range of environmental conditions, resistance to various heavy metals, ubiquity, size, and ability to grow under controlled conditions, the bacteria have been used as biosorbents. The cellular structure of the bacteria can trap heavy metal ions and subsequently sorb them onto the binding sites of the cell wall. Since the bacteria have high surface-to-volume ratios and numerous potential active chemosorption sites like the teichoic acid on the cell wall, they show an excellent sorption capacity. Owing to these abilities, the bacteria have been effectively used as biosorbents for removal and recovery of heavy metals. The conversion of bioavailable heavy metals into inert species is a crucial step for reducing the uptake of heavy metals by plants. Plant associated-bacteria protect the plant from the phytotoxicity of excessive metals by changing the speciation from bioavailable forms to the non-bioavailable forms in soils. The low bioavailability of metals in soils decreases their uptake by plants. These bacteria can also reduce plant metal uptake or translocation to aerial plant parts by decreasing metal bioavailability in soil by processes of bioaccumulation, biosorption, precipitation, biotransformation (via methylation, demethylation, volatilization, complex formation, oxidation, or reduction), complexation, and alkalization (Fig. 1). 
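Biosorption capacity of the kind described here is commonly summarized with the Langmuir isotherm, which relates equilibrium metal concentration in solution to uptake on the cell wall. A minimal sketch; the q_max and b values below are hypothetical illustration parameters, not figures from this article.

```python
# Langmuir isotherm, a standard model for metal biosorption onto bacteria.
def langmuir_q(c_eq, q_max, b):
    """Metal uptake q (mg/g biomass) at equilibrium concentration
    c_eq (mg/L), given maximum capacity q_max (mg/g) and affinity
    constant b (L/mg)."""
    return q_max * b * c_eq / (1.0 + b * c_eq)

# Hypothetical biosorbent: q_max = 80 mg/g, b = 0.05 L/mg.
# Uptake rises steeply at low concentration and saturates toward q_max:
for c in (10, 100, 1000):
    print(c, round(langmuir_q(c, 80, 0.05), 1))
```

Fitting q_max and b to measured sorption data is how different bacterial biosorbents are usually compared.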
In addition to reducing the accumulation of these metals in plant tissues, these bacteria can help plants tolerate heavy metal toxicity stress by different mechanisms such as production of phytohormones such as indole-3-acetic acid (IAA), 1-aminocyclopropane-1-carboxylic acid (ACC) deaminase, siderophores, organic acids, exopolymers, and antioxidant enzymes, phosphate solubilization, induction of heavy metal resistance genes, and improvement of the uptake of essential plant nutrient elements. Department of Soil Science, University College of Agriculture and Natural Resources, University of Tehran, Tehran, Iran Bacterial mediated alleviation of heavy metal stress and decreased accumulation of metals in plant tissues: Mechanisms and future prospects. Ecotoxicol Environ Saf. 2018 Jan
<urn:uuid:c82186c6-b865-4f0f-92d1-3eb94540b285>
3.703125
1,173
Content Listing
Science & Tech.
15.199929
95,554,374
Edited by Jamie (ScienceAid Editor), Taylor (ScienceAid Editor), Jen Moreau The properties and similarities of transition metals come down to their electron configurations. The 3d sub-shell is closer to the nucleus, but higher in energy, than the 4s; this means that the 4s fills up first (giving K and Ca). Therefore we define transition metals as those that have an incomplete d sub-level, either in the element form or in one of its ions. There are exceptions that arise as a result of this definition. Scandium, in its element form, has a typical transition configuration with an incomplete d shell, yet its ion Sc3+ has no d electrons, so strictly it isn't a transition metal. Similarly, copper as an element doesn't have an incomplete 3d, but its most common ion (Cu2+) does, and so it can be counted as a transition metal. Zinc, on the other hand, has a complete 3d in both element and ion form, so is not a transition metal. Variable Oxidation States Because the 4s and 3d energy levels are so similar, the transition elements can lose differing numbers of electrons with similar stability. This means they have variable oxidation states. Vanadium has 4 oxidation states (+2, +3, +4 and +5). These can be seen as 4 distinct colours when zinc is added to acidified ammonium vanadate (V). Similarly, chromium ions show different colours according to the oxidation state when Cr2O72- is reduced in solution by zinc. Oxygen from the air can act as an oxidizing agent on many compounds, for example oxidizing Co(OH)2 to Co(OH)3. The ions can be protected from this by acidification. If ammonia solution is added to a cobalt (II) salt, the precipitate Co(OH)2 is formed. If shaken, it will oxidize to Co(OH)3, which is brown. Substances can also be oxidized in alkaline solution. For example, adding an excess of NaOH to a chromium (III) salt gives the chromate (III) ion. When this is treated with hydrogen peroxide it is readily oxidized to chromate (VI) ions.
2[Cr(OH)6]3- + 3H2O2 ==>> 2CrO42- + 2OH- + 8H2O Titrations are important in analyzing solutions, for example testing the amount of iron in an iron tablet. This can be done by reacting Fe2+ with either MnO4- (manganate (VII)) or Cr2O72- (dichromate (VI)). - First, the tablet is dissolved in acid. Dilute sulphuric acid is used because it is strong, isn't an oxidizing agent (as concentrated sulphuric acid is) and will not be oxidized. - The manganate ion is added from the burette in the form of potassium manganate (VII), which is dark purple, but the reaction product is colourless, so when the end-point is reached the solution will turn purple. The following reactions occur. MnO4- + 8H+ + 5e- ==>> Mn2+ + 4H2O Fe2+ ==>> Fe3+ + e- Overall this makes... 5Fe2+ + MnO4- + 8H+ ==>> 5Fe3+ + Mn2+ + 4H2O Now take a look at the worked example below to see how to perform calculations given this information. If potassium dichromate is used instead, an indicator must be used; this is commonly sodium diphenylamine sulphonate. The overall reaction for this is: 6Fe2+ + Cr2O72- + 14H+ ==>> 6Fe3+ + 2Cr3+ + 7H2O
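The manganate (VII) titration translates directly into a calculation: moles of MnO4- from concentration and titre volume, times five for the 5:1 Fe:MnO4- stoichiometry in the overall equation, times the molar mass of iron. A sketch with hypothetical concentration and titre values:

```python
# Iron content of a tablet from a KMnO4 titration, using the overall
# reaction 5Fe2+ + MnO4- + 8H+ -> 5Fe3+ + Mn2+ + 4H2O (5 Fe per MnO4-).
def iron_mass_mg(kmno4_conc_mol_per_L, titre_mL,
                 fe_per_mno4=5, fe_molar_mass=55.85):
    mol_mno4 = kmno4_conc_mol_per_L * titre_mL / 1000.0  # mol of MnO4- added
    mol_fe = mol_mno4 * fe_per_mno4                       # mol of Fe2+ titrated
    return mol_fe * fe_molar_mass * 1000.0                # mass of Fe in mg

# Hypothetical run: 0.0200 mol/L KMnO4, 10.0 mL titre -> about 55.85 mg Fe.
print(iron_mass_mg(0.0200, 10.0))
```

For the dichromate version, the same structure applies with `fe_per_mno4` replaced by the 6:1 Fe:Cr2O72- ratio from its overall equation.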
<urn:uuid:9d02f55f-7213-482d-86f6-fb0032549f0e>
3.78125
981
Knowledge Article
Science & Tech.
52.808713
95,554,380
New research into one of the world's most social bacteria - Myxococcus xanthus - has discovered that it has a gourmet-style approach to its consumption of phosphates, which provides a key clue to what makes it the most "social" of bacteria. Myxococcus xanthus is amazingly social and co-operative for a bacterium. It "hunts" as a pack, it makes a collective decision with other M. xanthus about whether to go dormant or not, and it even has methods of policing the behaviour of individual bacteria that try to "cheat" in the collective activity of the group. Now Dr David Whitworth from the Biological Sciences Department of the University of Warwick has also discovered that it appears to seek out and consume phosphate in a "gourmet" manner, providing important evidence as to how such a relatively simple organism is able to act in such a social manner. Dr Whitworth looked at the signalling pathways used by the bacterium to process information to switch actions on or off. Myxococcus xanthus has an unprecedented number (around 150) of the signalling pathways known as "two-component switches", which dramatically increases the level of complexity of information that can be processed by the bacterium. Dr Whitworth focussed on three previously described signalling pathways that were known to be similar to phosphate utilisation pathways (all organisms need to consume phosphate to thrive). Until now most researchers believed that all bacteria only required one phosphate-dependent signalling pathway to find the phosphate needed for consumption, and so the other two pathways found in M. xanthus simply did something else. In collaboration with Prof Mitchell Singer of the University of California at Davis, Dr Whitworth found that in fact the bacterium was using all three pathways and part of a further fourth pathway in combination, to detect and utilise phosphates, making it a very sophisticated consumer of phosphates - the bacterial equivalent of a gourmet diner.
Peter Dunn | alfa
<urn:uuid:cf5af714-a6d5-4ee1-ab0f-295c30e6811a>
2.875
977
Content Listing
Science & Tech.
36.639903
95,554,397
Quantum mechanics (QM; also known as quantum physics, quantum theory, the wave mechanical model, or matrix mechanics), including quantum field theory, is a fundamental theory in physics which describes nature at the smallest scales of energy levels of atoms and subatomic particles. Classical physics (the physics existing before quantum mechanics) is a set of fundamental theories which describes nature at ordinary (macroscopic) scale. Most theories in classical physics can be derived from quantum mechanics as an approximation valid at large (macroscopic) scale. Quantum mechanics differs from classical physics in that: energy, momentum, angular momentum, and other quantities of a system are restricted to discrete values (quantization), objects have characteristics of both particles and waves while being neither one of those (wave-particle duality), and there are limits to the precision with which quantities can exist in nature (uncertainty principle).[note 1] Quantum mechanics gradually arose from theories to explain observations which could not be reconciled with classical physics, such as Max Planck's solution in 1900 to the black-body radiation problem, and from the correspondence between energy and frequency in Albert Einstein's 1905 paper which explained the photoelectric effect. Early quantum theory was profoundly re-conceived in the mid-1920s by Erwin Schrödinger, Werner Heisenberg, Max Born and others. The modern theory is formulated in various specially developed mathematical formalisms. In one of them, a mathematical function, the wave function, provides information about the probability amplitude of position, momentum, and other physical properties of a particle. 
Important applications of quantum theory include quantum chemistry, quantum optics, quantum computing, superconducting magnets, light-emitting diodes, and the laser, the transistor and semiconductors such as the microprocessor, medical and research imaging such as magnetic resonance imaging and electron microscopy. Explanations for many biological and physical phenomena are rooted in the nature of the chemical bond, most notably the macro-molecule DNA. Scientific inquiry into the wave nature of light began in the 17th and 18th centuries, when scientists such as Robert Hooke, Christiaan Huygens and Leonhard Euler proposed a wave theory of light based on experimental observations. In 1803, Thomas Young, an English polymath, performed the famous double-slit experiment that he later described in a paper titled On the nature of light and colours. This experiment played a major role in the general acceptance of the wave theory of light. In 1838, Michael Faraday discovered cathode rays. These studies were followed by the 1859 statement of the black-body radiation problem by Gustav Kirchhoff, the 1877 suggestion by Ludwig Boltzmann that the energy states of a physical system can be discrete, and the 1900 quantum hypothesis of Max Planck. Planck's hypothesis that energy is radiated and absorbed in discrete "quanta" (or energy packets) precisely matched the observed patterns of black-body radiation. In 1896, Wilhelm Wien empirically determined a distribution law of black-body radiation, known as Wien's law in his honor. Ludwig Boltzmann independently arrived at this result by considerations of Maxwell's equations. However, it was valid only at high frequencies and underestimated the radiance at low frequencies. Later, Planck corrected this model using Boltzmann's statistical interpretation of thermodynamics and proposed what is now called Planck's law, which led to the development of quantum mechanics.
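The failure of Wien's law described above is easy to see numerically: dividing Wien's approximation by Planck's law gives a ratio well below 1 at low frequencies (Wien underestimates the radiance there) and close to 1 at high frequencies. A short sketch using the standard SI constants:

```python
import math

# Physical constants (SI, CODATA values)
h = 6.62607015e-34   # Planck constant, J s
k = 1.380649e-23     # Boltzmann constant, J/K
c = 2.99792458e8     # speed of light, m/s

def planck(nu, T):
    """Planck's law: spectral radiance B_nu (W sr^-1 m^-2 Hz^-1)."""
    return (2 * h * nu**3 / c**2) / math.expm1(h * nu / (k * T))

def wien(nu, T):
    """Wien's approximation, accurate only when h*nu >> k*T."""
    return (2 * h * nu**3 / c**2) * math.exp(-h * nu / (k * T))

# At T = 5000 K, Wien matches Planck at high frequency but badly
# underestimates the radiance at low frequency -- the failure that
# Planck's quantum hypothesis corrected.
for nu in (1e12, 1e14, 1e15):
    print(f"{nu:.0e} Hz: Wien/Planck = {wien(nu, 5000) / planck(nu, 5000):.4f}")
```

Analytically the ratio is 1 - exp(-h*nu/kT), which tends to 0 at low frequency and to 1 at high frequency, matching the historical observation.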
<urn:uuid:eba47ed8-8e6a-4fbe-885e-601c1ab930f9>
3.390625
710
Knowledge Article
Science & Tech.
16.121579
95,554,417
Something chilling is happening in the Arctic. Or actually, what's happening is not at all chilling. And that's what's so alarming. As winter, supposedly, descends onto the North Pole, warm weather has persisted in the Arctic for weeks on end, while the amount of sea ice covering the world's northernmost ocean has hit lows never seen before for this time of year. The sea ice has been so scarce, roughly 28% below the long-term average in October, and temperatures so unseasonably balmy that scientists, including many who have studied the effects of manmade global warming on the most sensitive polar regions for years, have been taken by surprise. “Seeing extremes in the Arctic is becoming fairly routine in some sense, but this is quite unusual and there has been talk in the community regarding how out of whack things appear at the moment," Walt Meier, a research scientist at NASA's Goddard Space Flight Center Cryospheric Sciences Laboratory, told BuzzFeed News. One reading from the Danish Meteorological Institute, for example, found that over the past several days temperatures have been about 36 degrees Fahrenheit higher than average around the North Pole. Like much of the rest of the world, the Arctic has been hotter than normal this past year. Temperatures have been 7 to 13 degrees above average in the Russian Arctic, according to the World Meteorological Organization. Other Arctic and sub-Arctic regions in Alaska and Canada have been at least 5 degrees above average, the UN agency said. The high and persistent temperatures this fall are particularly extraordinary, scientists said, because the region has already plunged into "polar night," the time of year when the sun no longer rises over the North Pole. "Usually it is bitterly cold by this time of year, but at the moment it's barely cold enough to keep ice frozen in some spots," Daniel Swain, a climate scientist and postdoctoral fellow at UCLA, told BuzzFeed News. 
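A quick note on the anomaly figures quoted here: converting a temperature difference between Fahrenheit and Celsius uses only the 5/9 scale factor, not the +32 offset that applies to absolute readings. A sketch:

```python
# Temperature *anomalies* (differences from average) convert between
# scales with only the 5/9 factor; the +32 offset cancels out.
def f_anomaly_to_c(delta_f):
    return delta_f * 5.0 / 9.0

print(f_anomaly_to_c(36))  # prints 20.0 -- a 36 F anomaly is a 20 C anomaly
```

So the roughly 36-degree-Fahrenheit excess reported near the North Pole corresponds to about 20 degrees Celsius above average.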
Scientists explain the heat wave by pointing to a string of record-breaking hot months earlier this year, coupled with little ice coverage during the summer. That combination has left the Arctic Ocean with a belly full of heat that it is now discharging into the atmosphere. The polar jet stream, an air current whipping around the northern latitudes from west to east, also has contributed to the higher temperatures by bringing warm air from the lower latitudes north — while pushing the frigid air normally cloaking the North Pole over Siberia. “Siberia is no stranger to bitter cold, of course, but this ‘warm Arctic/cold continent’ pattern is definitely striking and will almost certainly be the subject of future scientific study,” Swain said. The balmier conditions have given the surface of the Arctic Ocean little chance to freeze. The area of the Arctic Ocean covered by some sea ice was at a record low for the month of October, according to data from the National Snow and Ice Data Center (NSIDC). In the past week, a chart with a stark depiction of depleted sea ice at both poles attracted attention online. The graph combined the low Arctic winter ice extent with the steep drop of sea ice off Antarctica, just now entering its summer season. But NSIDC, where the data in the graph reportedly originated, said it makes little sense to combine sea ice measures from both poles, which have vastly different geographies and are always in opposite seasons at any given time. After the chart went viral, NSIDC was inundated with media inquiries about the graph. The center asked Meier, the NASA ice scientist, to verify the chart. “I haven't been able to exactly recreate the plot,” Meier said. But he said that he was able to confirm that the combined sea ice coverage in Arctic and Antarctic are indeed at record lows. Zachary Labe, a Ph.D. student in climate science at the University of California, Irvine, was one of the first people to tweet the chart. 
The chart was made by an internet user named “Wipneus” who, Labe told BuzzFeed News, has been posting on online sea ice forums and making visualizations for years. It caught Labe’s attention because the data was so anomalous, but he agrees that it is better to consider Arctic and Antarctic sea ice separately. “The atmospheric and sea ice dynamics between the Arctic and Antarctic are quite different,” Labe said. “The global sea ice plot may not be very meaningful.” “I didn't realize there would be so much interest,” he added. Regardless, the warm winter continues in the Arctic. But winters are long there, so there’s still plenty of time, said experts, for the ocean surface to freeze after the water cools and the warm air stops funneling north. “Once that happens,” Meier said, “ice can grow rapidly.” Dino Grandoni is a science reporter for BuzzFeed News and is based in New York.
Ribonucleotides, units of RNA, can become embedded in genomic DNA during processes such as DNA replication and repair, affecting the stability of the genome by contributing to DNA fragility and mutability. Scientists have known about the presence of ribonucleotides in DNA, but until now had not been able to determine exactly what they are and where they are located in the DNA sequences. Now, researchers have developed and tested a new technique known as ribose-seq that allows them to determine the full profile of ribonucleotides embedded in genomic DNA. Using ribose-seq, they have found widespread but not random incorporation and “hotspots” where the RNA insertions accumulate in the nuclear and mitochondrial DNA of a commonly-studied species of budding yeast. Ribose-seq could be used to locate ribonucleotides in the DNA of a wide range of other organisms, including that of humans. Credit: Rob Felt Georgia Tech Associate Professor Francesca Storici (left), Graduate Student Kyung Duk Koh and collaborators have developed and tested a technique for identifying ribonucleotides in genomic DNA. “Ribonucleotides are the most abundant non-standard nucleotides that can be found in DNA, but until now there has not been a system to determine where they are located in the DNA, or to identify specifically which type they are,” said Francesca Storici, an associate professor in the School of Biology at the Georgia Institute of Technology. “Because they change the way that DNA works, in both its structure and function, it is important to know their identity and their sites of genomic incorporation.” A description of the ribose-seq method and what it discovered in the DNA of the budding yeast species Saccharomyces cerevisiae will be reported on January 26 in the journal Nature Methods. 
The findings resulted from collaboration between researchers in Storici’s laboratory at Georgia Tech – with graduate students Kyung Duk Koh and Sathya Balachander – and at the University of Colorado Anschutz Medical School with assistant professor Jay Hesselberth. The research was supported by the National Science Foundation, the Georgia Research Alliance, the American Cancer Society, the Damon Runyon Cancer Research Foundation, and the University of Colorado Golfers Against Cancer. Because of the extra hydroxyl (OH) group in the ribonucleotides, their presence distorts the DNA and creates sensitive sites where reactions with other molecules can take place. Of particular interest are reactions between the OH and alkaline solutions, which can make the DNA more susceptible to cleavage. Ribose-seq takes advantage of this reaction with the hydroxyl group to launch the process of identifying the genomic spectrum of ribonucleotide incorporation. Researchers first cleave the DNA samples at the ribonucleotides, then take the resulting fragments through a specialized process that concludes with generation of a library of DNA sequences that contain the sites of ribonucleotide incorporation and their upstream sequence. High-throughput sequencing of the library and alignment of sequencing reads to a reference genome identifies the profile of rNMP incorporation events. “Ribose-seq is specific to directly capturing ribonucleotides embedded in DNA and does not capture RNA primers or Okazaki fragments formed during DNA replication, breaks or abasic sites in DNA,” Storici noted. “For this reason, ribose-seq has application for rNMP mapping in any genomic DNA, from large nuclear genomes to small genomic molecules such as plasmids and mitochondrial DNA, with no need of standardization procedures,” she said. 
“It also allows mapping rNMPs even in conditions in which the DNA is exposed to environmental stressors that damage the DNA by generating breaks and/or abasic sites.” The extra hydroxyl group found in the ribonucleotides is key to the ribose-seq technique, said Koh, the paper’s first author. “The OH group is specific to the ribonucleotides,” he explained. “That allowed us to build a new tool for recognizing specifically where the ribonucleotides are located.” The high-throughput sequencing and initial data analysis were done in the Hesselberth laboratory in the Department of Biochemistry and Molecular Genetics at the University of Colorado Anschutz Medical School. To validate their method, the researchers tested ribose-seq on the much-studied yeast species. The analyses revealed a strong preference for the cytidine and guanosine bases at the ribonucleotide sites. “The ribonucleotides are not randomly distributed, and there is some preference for specific base sequences and specific base composition of the ribonucleotide itself,” said Koh. “By looking at the non-random distribution, we found several hotspots in which the ribonucleotides are incorporated into the genome.” Knowledge of where the ribonucleotides cluster could help identify areas of greatest potential for genome instability and lead to a better understanding of how they affect the properties and activities of DNA. “The fact that we see biases in the base compositions of the ribonucleotides allows us to tell which base is more likely to be incorporated into the DNA,” Koh explained. “If there are specific signatures of genomic instability that are caused by the ribonucleotides, this will allow us to narrow down the locations and know where they are more likely to be found.” A next step will be to test ribose-seq on other DNA, Koh said. “Our technique could potentially be applied to any genome of any cell type from any organism as long as genomic DNA can be extracted from it,” he added. 
“It is independent of specific organisms.” Beyond repair and replication processes, ribonucleotides can also be created in DNA as a result of damage caused by drugs, environmental stressors and other factors. The ribose-seq method could also allow scientists to study the impact of these processes. “Ribose-seq should allow us to better understand the impact of ribonucleotides on the structure and function of DNA,” said Storici. “Identifying specific signatures of ribonucleotide incorporation in DNA may represent novel biomarkers for human diseases such as cancer, and other degenerative disorders.” This material is based upon work supported by the National Science Foundation (NSF) under grant number MCB-1021763, by the Georgia Research Alliance under award number R9028, by the American Cancer Society, by the Damon Runyon Cancer Research Foundation and by the University of Colorado Golfers Against Cancer. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the sponsoring agencies. CITATION: Kyung Duk Koh, Sathya Balachander, Jay Hesselberth and Francesca Storici, “Ribose-seq: global mapping of ribonucleotides embedded in genomic DNA,” (Nature Methods, 2015). Georgia Institute of Technology 177 North Avenue Atlanta, Georgia 30332-0181 Media Relations Contacts: John Toon (404-894-6986) (firstname.lastname@example.org) or Brett Israel (404-385-1933) (email@example.com) Writer: John Toon
John Toon | newswise
Selection Sort Program in C
Posted On: 18-Nov-2017 06:29
Selection sort is a simple, in-place, comparison-based sorting algorithm in which the list is divided into two parts: the sorted part at the left end and the unsorted part at the right end. First, find the minimum value in the array and place it at the first position (index 0); next, find the second-smallest element and place it at the second position (index 1); and so on. The same process is repeated until every element of the array is sorted.
This revised and broadened second edition provides readers with an insight into this fascinating world and future technology in quantum optics. Alongside classical and quantum-mechanical models, the authors focus on important and current experimental techniques in quantum optics to provide an understanding of light, photons and laser beams. In a comprehensible and lucid style, the book conveys the theoretical background necessary for an understanding of actual experiments using photons. It covers basic modern optical components and procedures in detail, leading to experiments such as the generation of squeezed and entangled laser beams, the test and applications of the quantum properties of single photons, and the use of light for quantum information experiments.
Read or Download A Guide to Experiments in Quantum Optics, Second Edition PDF
Similar experiments books
This second edition of a successful and highly-accessed monograph has been extended by more than a hundred pages. It includes enlarged coverage of applications for materials characterization and analysis. A more detailed description of techniques for determining free energies of ion transfer between miscible liquids is also provided.
Extra resources for A Guide to Experiments in Quantum Optics, Second Edition
Interference can be observed. Light at two points separated by more than the coherence length, or the coherence time, has independent amplitudes which will not interfere. One clear example of this concept is the technique of measuring the size of a star, which has a small angular size but is not a point source, using a stellar interferometer. The size of a distant star can usually not be resolved with a normal optical telescope. The starlight forms an image in the focal plane of the telescope which is given by the diffraction of the light. Many fascinating effects are due to the quantum nature of the light; they are the foundation of quantum optics. 
This chapter introduces the concept of the photon as the smallest detectable quantity of light and develops some of the basics required for modelling quantum optics experiments.
1 Detecting light
So far we have considered light to be just one form of electro-magnetic waves, much like radio waves. The detection of radio waves and of light is fundamentally different. Radio waves are detected with an antenna, which is a macroscopic structure made out of many atoms.
(Hans-A. Bachor and Timothy C. Ralph, copyright © 2004 Wiley-VCH Verlag GmbH & Co.)
[Figure: Photoelectric detection.] This equation applies to any form of the photoelectric effect. The response time is very similar. The conversion efficiency D depends on the frequency of the detected light and on the material used. Each individual detection process results in one individual electron. This is not sufficient to produce a signal which can be processed. Several techniques have been invented to turn one electron into a measurable pulse of many electrons.
By: James NM Smith, Lukas F Keller, Amy B Marr and Peter Arcese
256 pages, 9 halftones, 1 map, 47 line illustrations
This book explores the factors affecting the survival of small populations. As the human impact on Earth expands, populations of many wild species are being squeezed into smaller and smaller habitats. As a consequence, they face an increasing threat of extinction. National and international conservation groups rush to add these populations, species and sub-species to their existing endangered and threatened lists. In nations with strong conservation laws, listing often triggers elaborate plans to rescue declining populations and restore their habitats. The authors review these theoretical ideas, the existing data, and explore the question: how well do small and isolated populations actually perform? Their case study group is the song sparrows of Mandarte Island, British Columbia. This population is small enough and isolated enough so that all individuals can be uniquely marked and their survival and reproduction monitored over many generations. This is one of the strongest long-term ecological studies of a contained vertebrate population, now in its 31st year. "As one might expect of any good piece of research, the book raises as many questions as it answers."--The Quarterly Review of Biology
Rare bacteria clusters / Yellowstone find could unlock clues to early Mars life A bizarre community of microbes has been discovered inside rocks in Yellowstone National Park, thriving in pores filled with water so acidic it can dissolve steel nails. The clusters, interwoven with flourishing green algae, comprise at least 40 different new species of bacteria, according to Jeffrey Walker, a University of Colorado microbiologist -- and he and his colleagues say the microbes' fossil forms could provide powerful clues to the nature of early life on Earth and life that may have existed billions of years ago on Mars. Walker is a graduate student in the laboratory of Norman Pace, a leader in the emerging field of "astrobiology," whose scientists are seeking the most promising earthly models for life on other planets. Walker, Pace and John Spear, another scientist in Pace's laboratory, are reporting on the new microbes today in the journal Nature. "Spear and I were examining some gray rocks that looked very much like sandstone in the park's Norris Geyser Basin," Walker said, "and when we broke them up, we saw this beautiful, vibrant band of green inside. It turned out that the extremely acidic water in the pores of the rocks held networks of algae and bacteria, and the organisms we identified were varied species of Mycobacteria." Aside from discovering the microbes and noting their extraordinary ability to live in the highly acidic water-filled pores, Walker said in an interview Wednesday that he also found clusters of bacteria already encrusted with silica and other minerals -- microorganisms in the early stages of becoming fossils that will one day bear evidence of what they were like when they were alive. 
Therein, the scientists said, lies the major significance of the discovery: the potential of the microbes to become fossils and serve as "biosignatures" for researchers seeking signs of early life in the remnants of ancient volcanism and hydrothermal activity on Earth -- and, particularly, on Mars. "The prevalence of this type of microbial life in Yellowstone means that Martian rocks associated with former hydrothermal systems may be the best hope for finding evidence of past life there," Walker said. The Norris basin is one of Yellowstone's most spectacular areas, filled with geysers, salty hot springs and volcanic activity amid forests and bare rock outcrops. In their report in Nature, Walker and his colleagues wrote: "The stark, weathered surfaces of these exposed rocks show no evidence of the rich life hidden beneath the surface." Mycobacteria are unknown in the kind of extremely acidic hydrothermal environment that marks the geysers and volcanism of Yellowstone, so it was a surprise to find them there. But they are common elsewhere, and some virulent species are known to cause tuberculosis, leprosy and many of the opportunistic infections that can develop in people with AIDS. In Pace's lab, Walker sequenced the genes of the Mycobacteria and determined how their DNA molecules varied -- and concluded that the bacterial communities he and Spear found represented an unusually wide variety of previously unidentified species. (For chemistry enthusiasts, the water in which Walker's bacteria thrive has a pH of 1.) Bruce Jakosky, a University of Colorado geologist who studies the geology of Mars and its ancient volcanoes -- and is not involved in the work by Pace, Walker and Spear -- said Wednesday he is delighted by their discovery. 
"The most exciting thing about it is that we're seeing life in a place and in an environment where it hasn't ever been known to exist -- and not just life in the form of isolated organisms, but in an entire ecosystem," he said in an interview. Jakosky was on a National Aeronautics and Space Administration team two years ago that analyzed instrument findings by the Mars Odyssey spacecraft, which reached the planet in 2001 and is still in orbit there. The orbiter's infra-red images of the Martian surface found clear evidence of ancient volcanism and hydrothermal activity as well as surface rocks containing the mineral olivine -- a magnesium iron silicate that forms in water and is often found in fossil beds on Earth. Back in 1997, the Mars Pathfinder mission, which sent the first wheeled vehicle on a brief exploration of rocks on the planet's surface, also detected olivine and silicate rocks there. When NASA sends up its long-postponed first "Mars Sample Return Mission" some time between 2011 and 2014, its roving robot vehicles are likely to hunt for fossils of just the kind of life that Walker, Pace and Spear have found in the rocks of Yellowstone.
Biodegradation is the process by which organic substances are broken down by the enzymes produced by living organisms. The term is often used in relation to ecology, waste management and environmental remediation (bioremediation). Organic material can be degraded aerobically, with oxygen, or anaerobically, without oxygen. A term related to biodegradation is biomineralisation, in which organic matter is converted into minerals. Biodegradable matter is generally organic material such as plant and animal matter and other substances originating from living organisms, or artificial materials that are similar enough to plant and animal matter to be put to use by microorganisms. Some microorganisms have the astonishing, naturally occurring, microbial catabolic diversity to degrade, transform or accumulate a huge range of compounds including hydrocarbons (e.g. oil), polychlorinated biphenyls (PCBs), polyaromatic hydrocarbons (PAHs), pharmaceutical substances, radionuclides and metals. Major methodological breakthroughs in microbial biodegradation have enabled detailed genomic, metagenomic, proteomic, bioinformatic and other high-throughput analyses of environmentally relevant microorganisms, providing unprecedented insights into key biodegradative pathways and the ability of microorganisms to adapt to changing environmental conditions.
Anaerobic biodegradation in landfill
Biodegradable waste in landfill degrades in the absence of oxygen through the process of anaerobic digestion. The byproducts of this anaerobic biodegradation are biogas and lignin and cellulose fibres, which cannot be broken down by anaerobes (anaerobic microbes). Engineered landfills are designed with liners to prevent toxic leachate seeping into the surrounding soil and groundwater. Paper and other materials that normally degrade in a few years degrade more slowly over longer periods of time. 
Biogas contains methane, which has approximately 21 times the global warming potential of carbon dioxide. In modern landfills this biogas can be collected and used for power generation.
Methods of measuring biodegradation
Biodegradation can be measured in a number of ways. The activity of aerobic microbes can be measured by the amount of oxygen they consume or the amount of carbon dioxide they produce. The activity of anaerobic microbes can be measured by the amount of methane they produce.
Measurement of aerobic decomposition
The DR4 test, or 4-day dynamic respiration index test, measures the biodegradability of a substance over 4 days. The substance is aerated by passing air through it; this distinguishes the method from those where aeration is by diffusion of air into and out of the test material, referred to as the SRI or static respiration index test. Microbes are introduced to the test material while incubating it under aerobic conditions by aerating the mixture in a vessel through which air is blown. The microbes degrade the material, producing CO2 as the product of biodegradation. This CO2 production can be monitored as a measure of the biodegradability of the test material and converted into oxygen consumption units.
Measurement of anaerobic decomposition
The BMP100 test, or 100-day biogenic methane potential test, is a test method that determines the potential biodegradability of biodegradable wastes under anaerobic conditions by measuring the production of biogas. The test has not been peer-reviewed by the international community and no known official publication exists for it. Other published tests that accomplish this in a shorter time include the GB21 protocol (DIN 38414). Under anaerobic methanogenic conditions the decomposition of organic carbon proceeds by producing biogas (containing methane and carbon dioxide) from the organic carbon. 
The amount of biogas production therefore measures directly the carbon which is mineralised. The test is set up in a small vessel containing the test substrate, a mineral aqueous medium and an inoculum of methanogenic bacteria taken from an active anaerobic digester. The test is monitored by collecting and measuring the biogas produced. The test is incubated for an extended period until gas production ceases, which may take 100 days or more; for all practical purposes most organic materials reach the majority of decomposition in less than 45 days. By being run so long, however, the BMP100 test measures the complete degradation of the waste.
Plastics
Biodegradable plastics made with plastarch material (PSM) and polylactide (PLA) will compost in an industrial compost facility. There are other plastic materials that claim biodegradability, but are more often (and possibly more accurately) described as 'degradable' or oxi-degradable; it is claimed that this process causes more rapid breakdown of the plastic materials into CO2 and H2O.
Indicative lengths of degradation
The following table should be read with the above comments in mind, and care should be taken before accepting claims of biodegradability in view of the (dubious) claims being made. 
This is how long it takes for some commonly used products to biodegrade (from http://www.worldwise.com/biodegradable.html):
- Banana peel: 2–10 days
- Cotton rags: 1–5 months
- Sugarcane pulp products: 30–60 days
- Paper: 2–5 months
- Rope: 3–14 months
- Orange peels: 6 months
- Wool socks: 1–5 years
- Cigarette filters: 1–12 years
- Tetrapaks (plastic composite milk cartons): 5 years
- Plastic bags: 10–20 years
- Leather shoes: 25–40 years
- Nylon fabric: 30–40 years
- Plastic six-pack holder rings: 450 years
- Diapers and sanitary napkins: 500–800 years
- Tin cans: 50–100 years
- Aluminum cans: 80–100 years
- Plastic bottles: non-biodegradable
- Styrofoam cup: non-biodegradable
See also:
- Anaerobic digestion
- Biodegradability prediction
- Biodegradable polythene film
- Bioplastic (biodegradable, bio-based plastics)
- Decomposition (reduction of the body of a formerly living organism into simpler forms of matter)
- Landfill gas monitoring
- List of environment topics
- Microbial biodegradation
External links:
- The European Bioplastics Association: information on bioplastics and biodegradable polymers, market information
- Facts and hazards of non-biodegradables: some more information about plastic bags and the hazards they pose to wildlife
- Slate Explainer article on biodegradation: "Will My Plastic Bag Still Be Here in 2507?"
"If this starts to happen and we're right, we might be closer to the higher end of sea level rise estimates for the next 100 years," said Jeremy Bassis, assistant professor of atmospheric, oceanic and space sciences at the U-M College of Engineering, and first author of a paper on the new model published in the current issue of Nature Geoscience. Iceberg calving, or the formation of icebergs, occurs when ice chunks break off larger shelves or glaciers and float away, eventually melting in warmer waters. Although iceberg calving accounts for roughly half of the mass lost from ice sheets, it isn't reflected in any models of how climate change affects the ice sheets and could lead to additional sea level rise, Bassis said. "Fifty percent of the total mass loss from the ice sheets, we just don't understand. We essentially haven't been able to predict that, so events such as rapid disintegration aren't included in those estimates," Bassis said. "Our new model helps us understand the different parameters, and that gives us hope that we can better predict how things will change in the future." The researchers have found the physics at the heart of iceberg calving, and their model is the first that can simulate the different processes that occur on both ends of the Earth. It can show why in northern latitudes—where glaciers rest on solid ground—icebergs tend to form in relatively small, vertical slivers that rotate onto their sides as they dislodge. It can also illustrate why in the southernmost places—where vast ice shelves float in the Antarctic Ocean—icebergs form in larger, more horizontal plank shapes. The model treats ice sheets—both floating shelves and grounded glaciers—like loosely cemented collections of boulders. Such a description reflects how scientists in the field have described what iceberg calving actually looks like. The model allows those loose bonds to break when the boulders are pulled apart or rub against one another. 
The simulations showed that calving is a two-step process driven primarily by the thickness of the ice. "Essentially, everything is driven by gravity," Bassis said. "We identified a critical threshold of one kilometer where it seems like everything should break up. You can think of it in terms of a kid building a tower. The taller the tower is, the more unstable it gets." Icebergs do have a tendency to form before that threshold though, Bassis suspects, due to cracks that are already there—either formed when capsizing bergs crash into the water and send shockwaves through the surrounding ice, or when melted water on the surface cuts through. The former is believed to have led to the Helheim Glacier collapse in 2003. The glacier had begun to retreat slowly in 2002, but suddenly gave way the following year when the thinner ice had broken away, exposing a thicker ice coast. The latter—melted water pools—are occurring more frequently due to climate change, and they're believed to have played a role in the rapid disintegration of Antarctica's Larsen B ice shelf, which crumbled over about six weeks in 2002. When the researchers added random cracks to their model, it could mirror both Helheim and Larsen B. A third feature is also required for the most dramatic ice collapses to occur. Icebergs can't float away and make room for more icebergs to break off the main sheet unless the system has access to open water. So areas that border deep, unobstructed ocean rather than fjords or other waterways are at greater risk of rapid ice loss. The researchers point to the Thwaites and Pine Island glaciers in Antarctica and the Jakobshavn Glacier in Greenland, which is already retreating rapidly, as places vulnerable to "catastrophic disintegration" because they have all three components. "The ice in those places gets thicker as you go back. 
If our threshold is right, then if these places start to retreat as you expose the thicker calving front, they're susceptible to catastrophic breakup," Bassis said. Retreat of the current ice coasts in these areas could occur via melting or iceberg calving.

The paper is titled "Diverse calving patterns linked to glacier geometry." The research was funded by the National Science Foundation and NASA.

Abstract of paper: http://www.nature.com/ngeo/journal/vaop/ncurrent/full/ngeo1887.html
Jeremy Bassis: http://aoss.engin.umich.edu/people/jbassis

Nicole Casal Moore | EurekAlert!

Scientists discover Earth's youngest banded iron formation in western China 12.07.2018 | University of Alberta
Drones survey African wildlife 11.07.2018 | Schweizerischer Nationalfonds SNF

For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth. To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength...

For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications. Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar...
Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction. A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in the synthesis of new chemical...

Optical spectroscopy allows researchers to investigate the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy. "Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy....

Ultra-short, high-intensity X-ray flashes open the door to the foundations of chemical reactions. Free-electron lasers generate these kinds of pulses, but there is a catch: the pulses vary in duration and energy. An international research team has now presented a solution: Using a ring of 16 detectors and a circularly polarized laser beam, they can determine both factors with attosecond accuracy. Free-electron lasers (FELs) generate extremely short and intense X-ray flashes. Researchers can use these flashes to resolve structures with diameters on the...
The purpose of lab this week is to do "confirmatory experiments" that further explore the properties of your unknown compound and confirm that it reacts appropriately for the compound you predicted it to be at the end of lab last week.

1. Suppose your unknown is sodium acetate. When a solution of calcium chloride is added to your unknown, what will happen?
A. A white precipitate of sodium chloride forms
B. No precipitate forms
C. Gaseous HCl bubbles out
D. A white precipitate of calcium chloride forms

2. If you choose to measure the freezing point of a solution of your compound, what would be the objective of the experiment?
A. To measure the freezing point of the solution
B. To determine the density of the unknown
C. To determine the freezing point of the unknown
D. To determine the molar mass of the unknown

3. Which experiment is not possible to do this week in lab?
A. Reaction prediction tests
B. X-ray diffraction
C. Solubility tests
D. Freezing point depression

© BrainMass Inc. brainmass.com

Hello and thank you for posting your question to Brainmass! The solution is attached below in two ... the solution gives simple and clear answers to the freezing-point-depression questions.
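Question 2 turns on a standard relationship: the freezing-point depression ΔTf = i·Kf·m lets you back out the molar mass of an unknown solute. A minimal sketch of that arithmetic; all masses and the measured ΔTf below are invented example values, not data from this lab:

```python
KF_WATER = 1.86  # °C·kg/mol, cryoscopic constant of water

def molar_mass_from_freezing_point(mass_solute_g, mass_solvent_kg,
                                   delta_tf_c, kf=KF_WATER, i=1):
    """Apparent molar mass (g/mol) from a freezing-point depression.

    delta_tf_c : observed freezing-point depression (°C)
    i          : van 't Hoff factor (1 for a non-dissociating solute)
    """
    molality = delta_tf_c / (kf * i)      # mol solute per kg solvent
    moles = molality * mass_solvent_kg    # total moles of solute
    return mass_solute_g / moles          # g/mol

# Example: 5.0 g of solute in 0.100 kg of water depresses Tf by 1.13 °C
print(round(molar_mass_from_freezing_point(5.0, 0.100, 1.13), 1))  # → 82.3
```

Note that for an ionic unknown such as sodium acetate, i would be closer to 2, roughly halving the apparent molar mass this formula returns.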
Journal: CB : current biology : dispatches from the front lines of biology

Deep-sea hydrothermal vents are patchily distributed ecosystems inhabited by specialized animal populations that are textbook meta-populations. Many vent-associated species have free-swimming, dispersive larvae that can establish connections between remote populations. However, connectivity patterns among hydrothermal vents are still poorly understood because the deep sea is undersampled, the molecular tools used to date are of limited resolution, and larval dispersal is difficult to measure directly. A better knowledge of connectivity is urgently needed to develop sound environmental management plans for deep-sea mining. Here, we investigated larval dispersal and contemporary connectivity of ecologically important vent mussels (Bathymodiolus spp.) from the Mid-Atlantic Ridge by using high-resolution ocean modeling and population genetic methods. Even when assuming a long pelagic larval duration, our physical model of larval drift suggested that arrival at localities more than 150 km from the source site is unlikely and that dispersal between populations requires intermediate habitats ("phantom" stepping stones). Dispersal patterns showed strong spatiotemporal variability, making predictions of population connectivity challenging. The assumption that mussel populations are only connected via additional stepping stones was supported by contemporary migration rates based on neutral genetic markers. Analyses of population structure confirmed the presence of two southern and two hybridizing northern mussel lineages that exhibited a substantial, though incomplete, genetic differentiation.
Our study provides insights into how vent animals can disperse between widely separated vent habitats and shows that recolonization of perturbed vent sites will be subject to chance events, unless connectivity is explicitly considered in the selection of conservation areas.
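The drift-model result quoted above (direct dispersal beyond roughly 150 km is unlikely) can be illustrated with a toy one-dimensional random walk. This is a didactic stand-in, not the authors' high-resolution ocean model, and every parameter below (daily drift, eddy noise, larval duration) is an assumed value:

```python
import random

def simulate_larvae(n=10000, days=60, mean_drift_km=1.0, sd_km=5.0, seed=42):
    """Crude 1-D random-walk stand-in for larval drift: each larva takes a
    daily step drawn from a normal distribution (mean advection plus eddy
    noise). Returns the fraction of larvae ending farther than 150 km
    from the source, the distance the study flags as an unlikely hop."""
    rng = random.Random(seed)
    far = 0
    for _ in range(n):
        x = 0.0
        for _ in range(days):
            x += rng.gauss(mean_drift_km, sd_km)
        if abs(x) > 150.0:
            far += 1
    return far / n

frac = simulate_larvae()
print(f"fraction of larvae beyond 150 km: {frac:.3f}")
```

With these assumed parameters only on the order of a percent of larvae exceed 150 km, which is the qualitative point: direct long hops are rare, so connectivity depends on intermediate stepping-stone habitats.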
A paper published this week during the American Geophysical Union (AGU) fall meeting in San Francisco points to new evidence of human influence on extreme weather events. Three researchers from Lawrence Berkeley National Laboratory (Berkeley Lab) are among the co-authors on the paper, which is included in "Explaining Extreme Events of 2015 from a Climate Perspective," a special edition of the Bulletin of the American Meteorological Society (BAMS) released December 15 at the AGU meeting. The paper, "The Deadly Combination of Heat and Humidity in India and Pakistan in Summer 2015," examined observational and simulated temperature and heat indexes, concluding that the heat waves in the two countries "were exacerbated by anthropogenic climate change." While these countries typically experience severe heat in the summer, the 2015 heat waves--which occurred in late May/early June in India and in late June/early July in Pakistan--have been linked to the deaths of nearly 2,500 people in India and 2,000 in Pakistan. "I was deeply moved by television coverage of the human tragedy, particularly parents who lost young children," said Michael Wehner, a climate researcher at Berkeley Lab and lead author on the paper, who has studied extreme weather events and anthropogenic climate change extensively. This prompted him and collaborators from Berkeley Lab, the Indian Institute of Technology Delhi and UC Berkeley to investigate the cause of the 2015 heat waves and determine if the two separate meteorological events were somehow linked. They used simulations from the Community Atmospheric Model version 5 (CAM5), the atmospheric component of the National Center for Atmospheric Research's Community Earth System Model, performed by Berkeley Lab for the C20C+ Detection and Attribution Project. 
Current climate model-based products are not optimized for research on the attribution of the human influence on extreme weather in the context of long-term climate change; the C20C+ Detection and Attribution Project fills this gap by providing large ensembles of simulation data from climate models, running at relatively high spatial resolution. The experimental design described in the BAMS paper used "factual" simulations of the world and compared them to "counterfactual" simulations of the world that might have been had humans not changed the composition of the atmosphere by emitting large amounts of carbon dioxide, explained Dáithí Stone, a research scientist in Berkeley Lab's Computational Research Division and second author on the BAMS paper. "It is relatively common to run one or a few simulations of a climate model within a certain set of conditions, with each simulation differing just in the precise weather on the first day of the simulation; this difference in the first day propagates through time, providing different realizations of what the weather 'could have been,'" Stone said. "The special thing about the simulations used here is that we ran a rather large number of them. This was important for studying a rare event; if it is rare, then you need a large amount of data in order to have it occurring frequently enough that you can understand it." The researchers examined both observational and simulated temperature alone as well as the heat index, a measure incorporating both temperature and humidity effects. From a quality-controlled weather station observational dataset, they found the potential for a very large, human-induced increase in the likelihood of the magnitudes of the two heat waves. They then examined the factual and counterfactual simulations to further investigate the presence of a human influence. "Observations suggested the human influence; simulations confirmed it," Wehner said. 
The research team also found that, despite being close in location and time, the two heat waves were "meteorologically independent." Even so, Wehner emphasized, "the India/Pakistan paper confirms that the chances of deadly heat waves have been substantially increased by human-induced climate change, and these chances will certainly increase as the planet continues to warm." Data from Berkeley Lab's simulations were also analyzed as part of another study included in the special edition of BAMS released at the AGU meeting. That study, "The Late Onset of the 2015 Wet Season in Nigeria," which was led by the Nigerian Meteorological Agency, explores the role of greenhouse gas emissions in changing the chance of a late wet season, as occurred over Nigeria in 2015. "The C20C+ D&A Project is continuing to build its collection of climate model data with the intention of supporting research like this around the world," Stone said. The C20C+ D&A portal is hosted and supported by Berkeley Lab's National Energy Research Scientific Computing Center (NERSC), a DOE Office of Science User Facility, and the simulations for the two papers were run on NERSC's Hopper supercomputer, while the data analysis was done on NERSC's Edison and Cori systems. The simulations were conducted as part of a program dedicated to advancing our understanding of climate extremes and enhancing our ability to attribute and project changes in their risk because of anthropogenic climate change. The research was supported by the DOE Office of Science and the National Science Foundation. "Explaining Extreme Events of 2015 from a Climate Perspective," a special edition of the Bulletin of the American Meteorological Society, can be accessed here: http://www. 
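The factual-versus-counterfactual comparison described above is commonly summarized with a risk ratio and the fraction of attributable risk (FAR). A hedged sketch of that arithmetic; the ensemble values below are invented toy numbers, not CAM5 output:

```python
def risk_ratio(factual, counterfactual, threshold):
    """Return (RR, FAR) for exceeding `threshold`:
    RR  = P1 / P0, the risk ratio, and
    FAR = 1 - P0 / P1, the fraction of attributable risk,
    where P1 and P0 are exceedance probabilities in the factual
    and counterfactual ensembles respectively."""
    p1 = sum(x > threshold for x in factual) / len(factual)
    p0 = sum(x > threshold for x in counterfactual) / len(counterfactual)
    rr = p1 / p0 if p0 > 0 else float("inf")
    far = 1.0 - p0 / p1 if p1 > 0 else 0.0
    return rr, far

# Toy ensembles of annual maximum heat-index values (°C)
factual = [47, 49, 51, 50, 48, 52, 53, 49, 51, 50]
counterfactual = [45, 46, 48, 47, 46, 49, 47, 46, 48, 47]

rr, far = risk_ratio(factual, counterfactual, threshold=48.5)
print(f"risk ratio: {rr:.1f}, FAR: {far:.2f}")
```

In this toy case the heat-index threshold is exceeded in 8 of 10 factual runs but only 1 of 10 counterfactual runs, so the event is 8 times more likely with human influence and 87.5% of its risk is attributable.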
The Lawrence Berkeley National Laboratory (Berkeley Lab) Computing Sciences organization provides the computing and networking resources and expertise critical to advancing the Department of Energy's research missions: developing new energy sources, improving energy efficiency, developing new materials and increasing our understanding of ourselves, our world and our universe. ESnet, the Energy Sciences Network, provides the high-bandwidth, reliable connections that link scientists at 40 DOE research sites to each other and to experimental facilities and supercomputing centers around the country. The National Energy Research Scientific Computing Center (NERSC) powers the discoveries of 6,000 scientists at national laboratories and universities, including those at Berkeley Lab's Computational Research Division (CRD). CRD conducts research and development in mathematical modeling and simulation, algorithm design, data storage, management and analysis, computer system architecture and high-performance software implementation. Lawrence Berkeley National Laboratory addresses the world's most urgent scientific challenges by advancing sustainable energy, protecting human health, creating new materials, and revealing the origin and fate of the universe. Founded in 1931, Berkeley Lab's scientific expertise has been recognized with 13 Nobel prizes. The University of California manages Berkeley Lab for the DOE's Office of Science. The Office of Science is the single largest supporter of basic research in the physical sciences in the United States.

Kathy Kincade | EurekAlert!

First machine learning method capable of accurate extrapolation 13.07.2018 | Institute of Science and Technology Austria
A step closer to single-atom data storage 13.07.2018 | Ecole Polytechnique Fédérale de Lausanne
Pluripotent stem cells can turn, or differentiate, into any cell type in the body, such as nerve, muscle or bone, but inevitably some of these stem cells fail to differentiate and end up mixed in with their newly differentiated daughter cells. Because these remaining pluripotent stem cells can subsequently develop into unintended cell types — bone cells among blood, for instance — or form tumors known as teratomas, identifying and separating them from their differentiated progeny is of utmost importance in keeping stem cell–based therapeutics safe. Now, UCLA scientists have discovered a new agent that may be useful in strategies to remove these cells. Their research was published online April 15 in the journal Developmental Cell and will appear in an upcoming print edition of the journal. The study was led by Carla Koehler, a professor of chemistry and biochemistry at UCLA, and Dr. Michael Teitell, a UCLA professor of pathology and pediatrics. Both are members of the Eli and Edythe Broad Center for Regenerative Medicine and Stem Cell Research at UCLA and UCLA's Jonsson Comprehensive Cancer Center. In work using the single-celled microorganism known as baker's yeast, or Saccharomyces cerevisiae, as a model system, Koehler, Teitell and their colleagues had discovered a molecule called MitoBloCK-6, which inhibits the assembly of cells' mitochondria — the energy-producing "power plants" that drive most cell functions. The research team then tested the molecule in a more complex model organism, the zebrafish, and demonstrated that MitoBloCK-6 blocked cardiac development. However, when the scientists introduced MitoBloCK-6 to differentiated cell lines, which are typically cultured in the lab, they found that the molecule had no effect at all. UCLA postdoctoral fellow Deepa Dabir tested the compound on many differentiated lines, but the results were always the same: The cells remained healthy. 
"I was puzzled by this result, because we thought this pathway was essential for all cells, regardless of differentiation state," Koehler said. The team then decided to test MitoBloCK-6 on human pluripotent stem cells. Postdoctoral fellow Kiyoko Setoguchi showed that MitoBloCK-6 caused the pluripotent stem cells to die by triggering apoptosis, a process of programmed cell suicide. Because the tissue-specific daughter cells became resistant to death shortly after their differentiation, the destruction of the pluripotent stem cells left a population of only the differentiated cells. Why this happens is still unclear, but the researchers said that this ability to separate the two cell populations could potentially reduce the risk of teratomas and other problems in regenerative medicine treatment strategies. "We discovered that pluripotent stem cell mitochondria undergo a change during differentiation into tissue-specific daughter cells, which could be the key to the survival of the differentiated cells when the samples are exposed to MitoBloCK-6," Teitell said. "We are still investigating this process in mitochondria, but we now know that mitochondria have an important role in controlling pluripotent stem cell survival." MitoBloCK-6 is what is known as a "small molecule," which can easily cross cell membranes to reach mitochondria. This quality makes MitoBloCK-6 — or a derivative compound with similar properties — ideal for potential use as a drug, because it can function in many cell types and species and can alter the function of mitochondria in the body for therapeutic effects. "It is exciting that our research in the one-cell model baker's yeast yielded an agent for investigating and controlling mitochondrial function in human pluripotent stem cells," Koehler said. "This illustrates that mitochondrial function is highly conserved across organisms and confirms that focused studies in model systems provide insight into human stem-cell biology. 
When we started these experiments, we did not predict that we would be investigating and controlling mitochondrial function in pluripotent stem cells." The research was supported by the California Institute for Regenerative Medicine, the National Institutes of Health, the United Mitochondrial Disease Foundation, and the Development and Promotion of Science and Technology Talents Project of the Royal Thai Government. The Eli and Edythe Broad Center of Regenerative Medicine and Stem Cell Research: UCLA's stem cell center was launched in 2005 with a UCLA commitment of $20 million over five years. A $20 million gift from the Eli and Edythe Broad Foundation in 2007 resulted in the renaming of the center. With more than 200 members, the Broad Stem Cell Research Center is committed to a multidisciplinary, integrated collaboration among scientific, academic and medical disciplines for the purpose of understanding adult and human embryonic stem cells. The center supports innovation, excellence and the highest ethical standards focused on stem cell research with the intent of facilitating basic scientific inquiry directed toward future clinical applications to treat disease. The center is a collaboration of the David Geffen School of Medicine at UCLA, UCLA's Jonsson Cancer Center, the UCLA Henry Samueli School of Engineering and Applied Science and the UCLA College of Letters and Science. UCLA's Jonsson Comprehensive Cancer Center has more than 240 researchers and clinicians engaged in disease research, prevention, detection, control, treatment and education. One of the nation's largest comprehensive cancer centers, the Jonsson center is dedicated to promoting research and translating basic science into leading-edge clinical studies. In July 2012, the Jonsson Cancer Center was once again named among the nation's top 10 cancer centers by U.S. News & World Report, a ranking it has held for 12 of the past 13 years. 
For more news, visit the UCLA Newsroom and follow us on Twitter.

Shaun Mason | EurekAlert!

World’s Largest Study on Allergic Rhinitis Reveals new Risk Genes 17.07.2018 | Helmholtz Zentrum München - Deutsches Forschungszentrum für Gesundheit und Umwelt
Plant mothers talk to their embryos via the hormone auxin 17.07.2018 | Institute of Science and Technology Austria
2015, VOLUME 2, ISSUE 3, Pages: 204-214

Plant community structure and composition in secondary succession following wildfire from Nuées Ardentes of Mount Merapi, Indonesia

Sutomo*, Richard J. Hobbs and Viki A. Cramer

Patterns of plant community structure and composition during secondary succession following the volcanic-fire disturbance of nuées ardentes were examined in Mount Merapi National Park, Indonesia. Five sites with different ages (time since fire) and one undisturbed site were sampled. Species richness, diversity, turnover and importance value index (IVI) were calculated. Sixty-one species belonging to 29 families were recorded in the study sites. The highest number of species belonged to the Poaceae (10), followed by Fabaceae (9) and then Asteraceae (6). The number of species present varied as time progressed, with a rising trend of species richness and diversity over time and significant differences in species richness and diversity among sites (ANOVA, p = 0.05). Species turnover was highest between the 2006 and 1998 sites, and then between the 1997 and 1994 sites. Species turnover between the 1998 and 1997 sites was similar to the turnover between the 1994 site and the reference site. In terms of vertical structure, four strata were identified in the fire sites, whereas in the reference site all five strata (A, B, C, D, and E) were present. In terms of quantitative structure based on IVI, each site had different dominant species in the tree, groundcover and seedling layers. Non-metric multidimensional scaling (NMDS) ordination of plots and analysis of similarity (ANOSIM) test results showed that there were significant differences in species composition between sites (ANOSIM Global R = 0.93, P < 0.001). In the Mount Merapi succession, the changes in abundance of some invasive species such as I. cylindrica, Brachiaria spp., and Eupatorium spp. are important to note. These invasive species have different timing in entering the system, but Imperata cylindrica was noted almost constantly at every stage of succession except in the undisturbed site.

Fig.: Map of Mount Merapi National Park's eruption deposit sites (circular symbols refer to the positions of sampling sites in each deposit; the rectangle refers to the position of an undisturbed forest in Kaliurang).
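The richness and diversity statistics reported above can be computed directly from per-species abundance counts. A minimal sketch using species richness S and the Shannon index H' = -Σ p_i ln p_i; the quadrat abundances below are hypothetical, not the study's data:

```python
import math

def richness_and_shannon(counts):
    """Species richness S and Shannon diversity H' = -sum(p_i * ln p_i)
    from a list of per-species abundance counts (zeros are ignored)."""
    present = [c for c in counts if c > 0]
    total = sum(present)
    s = len(present)
    h = -sum((c / total) * math.log(c / total) for c in present)
    return s, h

# Hypothetical quadrat: 4 species with abundances 10, 5, 3, 2
s, h = richness_and_shannon([10, 5, 3, 2])
print(s, round(h, 3))  # → 4 1.208
```

Computing these per site, as the authors did, lets a rising trend in richness and diversity over time since fire be tested with ANOVA.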
why do sunspots appear darker than their surroundings

Why do sunspots appear darker than their surroundings? The Sun is by no means a completely uniform color. Sunspots, which can be seen as dark areas on the surface of the Sun, are a common occurrence. Most sunspots are large, measuring around 8,000 miles across; in fact, they have roughly the same diameter as Earth! Sunspots appear darker than their surroundings because they are at a lower temperature. The surface of the Sun is usually around 5,800 kelvin (9,980 degrees Fahrenheit), while the center of a sunspot is closer to 4,300 kelvin (7,280 degrees Fahrenheit). This area of lower temperature is actually the definition of a sunspot. As you can see, sunspots are not cold by any means. They only appear dark because the surface of the Sun around them is so much hotter. If they were held up next to a star that was cooler, the sunspots would appear bright in comparison.

Typical sunspots have a dark region (umbra) surrounded by a lighter region, the penumbra. While sunspots have a temperature of about 6,300 °F (3,482.2 °C), the surface of the Sun that surrounds them has a temperature of 10,000 °F (5,537.8 °C).

Sunspots are actually regions of the solar surface where the magnetic field of the Sun becomes concentrated over 1,000-fold. Scientists do not yet know how this happens. Magnetic fields produce pressure, and this pressure can cause gas inside the sunspot to be in balance with the gas outside the sunspot, but at a lower temperature. Sunspots are actually several thousand degrees cooler than the 5,770 K (5,496.8 °C) surface of the Sun, and contain gases at temperatures of 3,000 to 4,000 K (2,726.9 to 3,726.8 °C). They are dark only by contrast with the much hotter solar surface. If you were to put a sunspot in the night sky, it would glow brighter than the full Moon with a crimson-orange color!

Sunspots are areas of intense magnetic activity; in images of the solar surface, you can see sunspot material being stretched into strands. As for the reason: although the details of sunspot generation are still a matter of research, it appears that sunspots are the visible counterparts of magnetic flux tubes in the Sun's convective zone that get "wound up" by differential rotation. If the stress on the tubes reaches a certain limit, they curl up like a rubber band and puncture the Sun's surface. Convection is inhibited at the puncture points; the energy flux from the Sun's interior decreases, and with it the surface temperature. All in all, sunspots appear dark because they are darker than the surrounding surface. They're darker because they are cooler, and they're cooler because of the intense magnetic fields in them.
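The "darker because cooler" explanation can be made quantitative with the Stefan-Boltzmann law: emitted flux scales as the fourth power of temperature, so a 4,300 K umbra against a 5,800 K photosphere radiates only about 30% as much light per unit area. A quick check using the temperatures quoted above:

```python
def relative_brightness(t_spot_k, t_surface_k):
    """Ratio of sunspot to photosphere surface flux via the
    Stefan-Boltzmann law: flux is proportional to T**4."""
    return (t_spot_k / t_surface_k) ** 4

ratio = relative_brightness(4300.0, 5800.0)
print(f"a sunspot emits about {ratio:.0%} of the surrounding surface flux")
```

A patch radiating at 30% of its surroundings looks nearly black by contrast, even though in isolation it would still glow brightly.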
<urn:uuid:c0f6aa70-6621-4f95-b20d-52462a3de207>
4.03125
671
Knowledge Article
Science & Tech.
62.130991
95,554,543
The technique, developed by Hsien-Chang Chang, a professor at the Institute of Biomedical Engineering and the Institute of Nanotechnology and Microsystems Engineering, along with former graduate student I-Fang Cheng and their colleagues, is described in the AIP journal Biomicrofluidics. Using roughened glass slides patterned with gold electrodes, the researchers created microchannels to sort, trap, and identify bacteria. The technique uses surface-enhanced Raman spectroscopy (SERS). This type of spectroscopy, says Chang, "is based on the measurement of scattered light from the vibration energy levels of chemical bonds following excitation in a craggy metal surface, which enhances the vibration energy." Different components like proteins or other chemical components on the surface of bacteria become attached to the craggy gold zone; when excited, these components produce representative peaks at different wavelengths, creating spectral "fingerprints." Although some species of bacteria can show very similar signatures because the components on their surfaces are almost the same, says Chang, bacteria from different genera are distinguishable using the technique. "In the future, different species of fungi could also be sorted based on their different electrical or physical properties by optimizing conditions such as the flow rate, applied voltage, and frequency," he says. "This portable device could be used for preliminary screening for the pathogenic targets in bacteria-infected blood, urethral irritation, and of raw milk and for food monitoring." The article, "A dielectrophoretic chip with a roughened metal surface for on-chip SERS analysis of bacteria" by I-Fang Cheng (National Cheng Kung University), Chi-Chang Lin (Tunghai University), Dong-Yi Lin and Hsien-Chang Chang (National Cheng Kung University) appears in the journal Biomicrofluidics.
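The idea of matching spectral "fingerprints" can be illustrated with a toy example: represent each spectrum as a vector of peak intensities at fixed wavelengths and compare an unknown against reference fingerprints by cosine similarity. The spectra, genus names, and similarity measure below are invented for illustration; they are not from the paper.

```python
import math

# Toy fingerprint matching: each spectrum is a vector of peak intensities
# sampled at fixed wavelengths; spectra are compared by cosine similarity.
# All values here are invented for illustration.
def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

reference = {  # hypothetical peak-intensity fingerprints per genus
    "Escherichia": [0.9, 0.1, 0.4, 0.8],
    "Staphylococcus": [0.2, 0.8, 0.7, 0.1],
}
unknown = [0.85, 0.15, 0.45, 0.75]

best = max(reference, key=lambda g: cosine_similarity(unknown, reference[g]))
print("Closest fingerprint:", best)
```

Real SERS classification involves far more peaks and careful preprocessing, but the matching step reduces to a comparison of this kind.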
http://link.aip.org/link/biomgb/v4/i3/p034104/s1 Journalists may request a free PDF of this article by contacting firstname.lastname@example.org Jason Socrates Bardi | Newswise Science News
<urn:uuid:b5944aa6-f366-41ee-bc85-24c70f75703c>
3.625
1,075
Content Listing
Science & Tech.
32.638488
95,554,547
He was honored by the American Geophysical Union (AGU) at a meeting in San Francisco attended by more than 22,000 earth and space scientists this week. By applying mathematical analysis to, for instance, data from drills in the deep sea, he detected how shifts in African climate some millions of years ago influenced the fate of modern man's ancestors. "The Donald L. Turcotte Award is presented to Jonathan Donges for his original contributions to 'recurrence network theory' and its application to climate evolution," says Shaun Lovejoy, president of the AGU's Nonlinear Focus Group and a professor at McGill University in Canada. The prize was established to recognize an outstanding dissertation by a recent graduate. 'Recurrence theory' is the study of recurring states of a complex system such as repetitive weather patterns in the Earth's atmosphere. By investigating the network structure of these recurrences, it is possible to detect abrupt shifts in climate variability. "It is rare that one can say a PhD thesis laid the foundations for a truly novel and most important scientific approach, but this is the case with Jonathan Donges' work," says Jürgen Kurths, co-chair of PIK's research domain Transdisciplinary Concepts and Methods. He is a professor at Humboldt University Berlin and was the supervisor of the awarded thesis. "This is an amazing piece of research, pioneering in the field of interacting network analysis in the climate system and beyond. It emerges from the work within our team that focuses on complex systems, and I feel grateful that we succeeded in providing an environment that fosters such outstanding scientific creativity and innovative thinking." Donges himself says that he feels deeply honoured by the award. "It is a recognition for applying high-end statistical methods to tackle real-world problems," he says.
“We try to identify the mechanisms behind so-called tipping points in the climate system and unravel their complex interactions – not just in the past, but also in our present and future.” Under unabated climate change, this might be of critical relevance. Relatively abrupt and potentially irreversible changes in the world’s major ocean currents or monsoon patterns, for instance, could have devastating impacts on humanity. Weblink to study: Weblink to AGU: fallmeeting.agu.org/2013/ For further information please contact: Mareike Schodder | PIK Pressestelle
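The "recurrence network" idea described above can be sketched concretely: link any two observations of a time series whose states lie within a small distance ε of each other, then analyze the resulting graph. A minimal sketch, with the time series and threshold invented for illustration:

```python
# Minimal recurrence network: two time points are linked if their states lie
# within distance eps of each other. The series and eps are illustrative,
# not from the thesis.
def recurrence_network(series: list[float], eps: float) -> list[list[int]]:
    n = len(series)
    return [[1 if i != j and abs(series[i] - series[j]) <= eps else 0
             for j in range(n)]
            for i in range(n)]

series = [0.0, 0.1, 1.0, 0.05, 1.1]   # two "climate states" that recur
adj = recurrence_network(series, eps=0.2)
degree = [sum(row) for row in adj]
print(degree)   # points 0, 1, 3 recur with each other; 2 and 4 form a pair
```

Abrupt transitions in a system show up as changes in the structure of this network (degrees, clustering, and so on) as a sliding window moves along the series.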
<urn:uuid:af196b1a-ab53-42d5-ba46-cd18124b9fba>
2.796875
1,098
Content Listing
Science & Tech.
36.19535
95,554,610
otolith (defined in 1951) - Granule of calcium carbonate in the vertebrate inner ear. Several such granules are attached to fine processes of sensitive cells, the latter communicating via nerves with the brain. The pull of gravity on the granules, and therefore on the cell processes, registers the position of the animal with respect to gravity. A similar arrangement occurs in some invertebrates. See also: Statocyst.
<urn:uuid:9d473977-7997-40d5-b11d-eddc98c55a87>
3.0625
116
Structured Data
Science & Tech.
19.806597
95,554,631
Nuclear dating methods. The man's body was recovered and pieces of tissue were studied for their 14C content by accelerator mass spectrometry. The best estimate from this dating technique says the man lived between 33 BC. From the ratio, the time since the formation of the rock can be calculated. When I have asked an audience this question they have looked at me incredulously and said, "Starting time?" They realize that you cannot know how long the swimmer took unless you knew the time on the wristwatch when the race started. Radioactive dating is a method of dating rocks and minerals using radioactive isotopes. This method is useful for igneous and metamorphic rocks, which cannot be dated by the stratigraphic correlation method used for sedimentary rocks. Some isotopes do not change with time and form stable isotopes (i.e., they are not radioactive). Others are unstable; this radioactivity can be used for dating, since a radioactive 'parent' element decays into a stable 'daughter' element at a constant rate. Many people assume that the dates scientists quote of millions of years are as reliable as our knowledge of the structure of the atom or nuclear power. And radioactive dating is so shrouded with mystery that many don't even try to understand how the method works; they just believe it must be right. The rate of decay (given the symbol λ) is the fraction of the 'parent' atoms that decay in unit time. For geological purposes, this is taken as one year.
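The parent/daughter calculation described above can be made concrete. If the parent population decays as P(t) = P₀e^(−λt) and every lost parent atom becomes a daughter atom, then D/P = e^(λt) − 1, so t = ln(1 + D/P)/λ. A sketch under those standard assumptions (no daughter atoms present at formation; the atom counts below are illustrative):

```python
import math

# Age from a parent/daughter ratio, assuming no daughter atoms at formation
# and a constant decay rate. The atom counts are illustrative; the half-life
# used is that of uranium-238 (about 4.47 billion years).
def age_years(parent: float, daughter: float, half_life_years: float) -> float:
    lam = math.log(2) / half_life_years      # decay constant (per year)
    return math.log(1 + daughter / parent) / lam

# Example: a rock in which half the original parent atoms have decayed
# (daughter/parent ratio of 1) is exactly one half-life old.
print(age_years(parent=500.0, daughter=500.0, half_life_years=4.47e9))
```

This also shows why the "starting time" matters: the formula only gives the true age if the initial daughter abundance is known (here, assumed zero).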
<urn:uuid:aa5f35f2-cdc1-4bc4-ab09-291da4f9a04b>
3.75
337
Knowledge Article
Science & Tech.
48.754654
95,554,633
Ideas For Solar System Experiments. Publication of this particular part was made possible by a grant to The New Atlantis from the John Templeton Foundation; the opinions expressed are those of the authors and do not necessarily reflect the views of the John Templeton Foundation. Even as government funding for biomedical science in the United States equals that of all other fields of research combined, diseases remain uncured, pharmaceutical innovation has slowed to a crawl, and corporate investments are extremely risky due to the staggering failure rates of new drug trials. Science fair projects are nothing more than an experiment, write-up and presentation. Or, again, consider how the rapid development of computers starting in the 1950s, catalyzed by DOD, led to the demand for new kinds of theories and knowledge about how to acquire, store, and process digital information: a new science for a new technology. Science provides many opportunities to look for and discover God in nature and to reflect on belief. Indeed, an especially hopeful attribute of science is that it can be leveraged even by individuals and small organizations to have big impacts, as Visco, Marqusee, and Kumar have shown. In the past, almost everything was analog, but thanks to science and technology we are now being digitized by the day. Differences in pressure make things move, and this can be demonstrated by our second homeschool science experiment below. Advancing according to its own logic, much of science has overlooked the better world it is supposed to help create. A neuroscientist by training, Susan Fitzpatrick worries a great deal about science and what Price referred to as "the sheer mass of the monster." The scientific enterprise was once small, and in any particular area of research everyone knew each other; "it had this kind of artisanal quality," she says.
This is followed by Library and Information Science (LIS) questions, which are grouped into different sets. A stunning picture that reveals the mysteries of embryonic lung development is one of the winning entries in this year's Art of Science competition. Yes, and even the future accountant can make use of science knowledge, even if it is to brighten his office with pot plants, which give off the life-giving oxygen that keeps him alert through those long hours at the desk when the financial year end comes round! From science workshops to study support, our current students page has you covered. Science has changed opinions about the origin of man and his homeland too.
<urn:uuid:9f0b0c7c-2691-4793-8d5e-d53329333e82>
2.703125
518
Spam / Ads
Science & Tech.
29.743896
95,554,654
Welcome to the NASA Pacific Regional Planetary Data Center (PRPDC). The PRPDC is hosted by the Hawaii Institute of Geophysics and Planetology, University of Hawaii at Manoa. The PRPDC is one of seventeen Regional Planetary Image Facilities (RPIF) around the world, archiving planetary data that serves the planetary research and education community throughout the Pacific. We strive to provide users of NASA planetary data with some of the latest information about on-going NASA missions, as well as access to these data sets and derived products that help with their interpretation. Lunar crater named after former PRPDC Director: On March 16th, 2018, in honor and remembrance of the PRPDC's founding Director, Dr. B. Ray Hawke, the International Astronomical Union Planetary Nomenclature Committee approved the name "Hawke Crater". Please see our special page describing this crater. PRPDC Acquires Clementine Documentation! Thanks to Dr. Trevor Sorensen, who was the Lunar Mission Manager for the 1994 Clementine Mission to the Moon, the PRPDC has now acquired much of the original documentation for this mission. Ultimately we will digitize these materials and make them available on our web site, but for now they are just paper copies. Click here to see the list of materials in our collection. 3-D Prints of Planetary Landscapes: The PRPDC has followed Brown University's RPIF's lead, and has started to produce planetary landscapes in stereolithography (.stl) format files for download and your use. You could also visit the Brown RPIF site to see their data (Brown RPIF's 3-D Prints).
Here is the list of .stl files which you can download from Hawaii's RPIF site by right-clicking on the file name (note: we have painted our prints, which makes the relief much easier to see):
Earth globe, with 7X topography to match Mars globe
Gale Crater, Mars, 5x vertical exaggeration
Hawaii Island Chain, same scale as Olympus Mons DEM
Olympus Mons, scaled to Hawaii Island Chain
Mare Orientale, Moon
Mars, with 7X topography to match Earth globe
Tsiolkovskiy (Moon) landslide (NW rim)
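The stereolithography (.stl) files listed above describe landscapes as triangle meshes. As a toy illustration of the ASCII variant of the format, the sketch below turns a tiny elevation grid into two triangles per grid cell; the grid values are invented, and real terrain prints typically use the binary STL variant with proper horizontal/vertical scaling.

```python
# Toy sketch: convert a small elevation grid (a DEM) into an ASCII .stl mesh,
# two triangles per grid cell. Grid values are invented; the dummy facet
# normals are fine because most viewers recompute them from the vertices.
def dem_to_ascii_stl(heights: list[list[float]], name: str = "landscape") -> str:
    rows, cols = len(heights), len(heights[0])
    lines = [f"solid {name}"]
    for i in range(rows - 1):
        for j in range(cols - 1):
            a = (j, i, heights[i][j])
            b = (j + 1, i, heights[i][j + 1])
            c = (j, i + 1, heights[i + 1][j])
            d = (j + 1, i + 1, heights[i + 1][j + 1])
            for tri in ((a, b, c), (b, d, c)):
                lines.append("  facet normal 0 0 1")
                lines.append("    outer loop")
                for x, y, z in tri:
                    lines.append(f"      vertex {x} {y} {z}")
                lines.append("    endloop")
                lines.append("  endfacet")
    lines.append(f"endsolid {name}")
    return "\n".join(lines)

stl_text = dem_to_ascii_stl([[0.0, 1.0], [0.5, 2.0]])
print(stl_text.count("facet normal"))   # 2 triangles for a 2x2 grid
```

Writing the result to a `.stl` file produces a mesh most slicers and 3-D viewers can open directly.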
<urn:uuid:e64645ee-79ab-431d-abe9-c6b8e2b89f5c>
2.5625
476
About (Org.)
Science & Tech.
40.316009
95,554,657
At one point in history, Greenland was actually green and not a country covered in ice. An international team of researchers, including a former scientist from Lawrence Livermore National Laboratory, has discovered that ancient dirt in Greenland was cryogenically frozen for millions of years under nearly two miles of ice. More than 2.5 million years ago, Greenland looked like the green Alaskan tundra, before it was covered by the second largest body of ice on Earth. The ancient dirt under the Greenland ice sheet helps to unravel an important mystery surrounding climate change: How did big ice sheets melt and grow in response to changes in temperature? The research appears in the April 17 edition of Science Express. "Our study demonstrates that the ice in the center of the Greenland Ice Sheet has remained stable during the climate variations of the last millions of years," said Dylan Rood, a former Lawrence Livermore scientist. "Our study adds to a body of evidence that shows how major ice sheets reacted in the past to warming, providing insights into what they could do again in the future." An ancient landscape millions of years old is preserved underneath the Greenland Ice Sheet. The ancient dirt contains extremely large amounts of meteoric beryllium-10, which means that it had to have once sat at Earth's surface for a long time before Greenland was covered in ice. This type of beryllium-10 is produced by cosmic rays in the atmosphere and literally rains out onto the Earth's surface, where it gets stuck to soil. The more meteoric beryllium-10 atoms in the dirt, the longer it sat at the surface. "It is amazing that a huge ice sheet, nearly two miles thick and the second largest body of ice on Earth, didn't scrape it away," said Rood, who now works at the Scottish Universities Environmental Research Centre (SUERC). Rood counted how many beryllium-10 atoms were in the dirt using the Center for Accelerator Mass Spectrometry (CAMS) at LLNL.
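The link between meteoric beryllium-10 concentration and surface exposure time can be sketched with a simple accumulation model: atoms rain out at a roughly constant rate and decay with beryllium-10's half-life of about 1.39 million years, so the inventory approaches a steady state as N(t) = (P/λ)(1 − e^(−λt)). The delivery rate below is invented for illustration and the model ignores soil mixing and erosion.

```python
import math

# Toy accumulation model for meteoric Be-10 in surface soil: constant delivery
# rate P (atoms per gram per year) balanced against radioactive decay.
# Be-10's half-life is about 1.39 million years; P here is illustrative.
HALF_LIFE_YR = 1.39e6
LAM = math.log(2) / HALF_LIFE_YR

def be10_concentration(delivery_rate: float, exposure_years: float) -> float:
    return (delivery_rate / LAM) * (1 - math.exp(-LAM * exposure_years))

# More atoms means longer exposure: compare 10 kyr vs 1 Myr at the surface.
brief = be10_concentration(1e6, 1e4)
prolonged = be10_concentration(1e6, 1e6)
print(prolonged / brief)   # the long-exposed soil holds far more Be-10
```

This is why "extremely large amounts of meteoric beryllium-10" in the dirt implies the landscape sat at the surface for a very long time before the ice sheet formed.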
"The trick, of course, is isolating the extremely rare beryllium-10 atoms from the million billion beryllium-9 atoms in our samples," Rood said. "I'm always amazed to see how a pinhead-sized sample from dirt can be ionized and accelerated through the maze of beamlines in CAMS and then go exactly where it needs to go in order to allow us to count its individual atoms. The CAMS allows us to count these very rare beryllium-10 atoms, which is analogous to finding the one grain of sand that is different than the rest on a beach." In the past five years or so, important advances in the ultra-sensitive and high-precision measurement of isotopes using AMS technology have revolutionized the ability of Earth scientists to understand how ice sheets have responded to past climate change. Other institutions involved in the research include: University of Vermont, Idaho State University and University of Wyoming. Anne M Stark | EurekAlert!
<urn:uuid:5e9b2e62-81e9-4e78-923a-5e3301043f35>
4.0625
1,253
Content Listing
Science & Tech.
42.394985
95,554,658
The orangefoot pimpleback pearlymussel is a rare freshwater mussel believed to be found in the lower Ohio River in Illinois, the middle reaches of the Cumberland River, and the lower reaches of the Tennessee River in northern Alabama and western Tennessee. Freshwater mussels are mollusks that are able to move slowly through the sand, gravel, or silt of their aquatic habitat by means of a muscular foot. They have a close-fitting shell that protects them from predators and from drying out when high up on the shore. The orangefoot pimpleback reaches up to 100 mm in length, and it has a rayless, light brown shell that becomes darker as it matures. As its name suggests, the foot of the mussel is orange in color. This mussel prefers clean, fast-flowing water in silt-free rubble, gravel or sand of medium to large rivers with steady currents. They can be found buried in sand or gravel in water as deep as 29 feet. Young mussels begin life as "glochidia" (larvae) until they undergo metamorphosis into juvenile mussels. As larvae, they develop by attaching themselves to fishes for a short life as parasites. The process is further complicated because not only do the glochidia have to find a fish, but it has to be one of a few particular fish species for the life cycle to continue. The main host fish of this species is unknown. As adult mussels, they attach themselves to the river floor and feed on plankton and detritus that they are able to filter from the water using specialized regions of their shells. The water is filtered over the gills and the food particles become trapped and eventually digested. In the spring, males release sperm into the water when the current is strong enough, allowing the sperm to travel and reach the eggs inside the shells of females. The fertilized eggs then develop into glochidia and grow inside the gill of the female until released into the water. In order to completely develop as mussels, the larvae must find the host fish and attach themselves to its gills.
Many historic populations of this species were wiped out due to human disturbance of its aquatic habitat. River sections they inhabited were confined by the building of dams and reservoirs, flooding most of their habitat. This resulted in a reduction of gravel and sand and more than likely affected the population of their fish hosts. Threats to the remaining populations are water pollution and sedimentation due to deforestation, and competition with the introduced zebra mussel species. This species was listed as endangered in 1976, but currently there are no recovery or habitat conservation plans. Orangefoot Pimpleback Pearlymussel Facts Last Updated: May 9, 2017 To Cite This Page: Glenn, C. R. 2006. "Earth's Endangered Creatures - Orangefoot Pimpleback Pearlymussel Facts" (Online). Accessed 7/18/2018 at http://earthsendangered.com/profile.asp?sp=761&ID=9.
<urn:uuid:3e194fe5-1d33-41b5-8145-02308bde5f46>
3.46875
753
Knowledge Article
Science & Tech.
47.227105
95,554,663
Tropical Storm Dolly fizzled out quickly on September 3 after making landfall in eastern Mexico, and NASA's Aqua satellite saw some of the remnants moving into southern Texas. NASA's TRMM satellite analyzed the rainfall occurring in the storm as it was approaching landfall. NASA's Aqua satellite captured the remnants of Tropical Depression Dolly over northeastern Mexico on Sept. 3 at 19:40 UTC (3:40 p.m. EDT). The image, captured by the Moderate Resolution Imaging Spectroradiometer or MODIS instrument, showed the center of Dolly over northeastern Mexico with a band of thunderstorms north of the center of circulation, spiraling over the Texas/Mexico border. The Tropical Rainfall Measuring Mission or TRMM satellite flew over Tropical Storm Dolly early on September 3, 2014 at 0844 UTC (3:44 a.m. CDT). Data collected by TRMM's Microwave Imager (TMI) during that orbit showed that Dolly was dropping light to moderate rainfall near the dissipating storm's center of circulation. Moderate to heavy rainfall, falling at a rate of over 30 mm (about 1.2 inches) per hour, was seen in a strong band of showers moving ashore north of Dolly's center. The previous day, September 2, the TRMM satellite had a good daylight look at Dolly at 1616 UTC (11:16 a.m. CDT). At that time, strong north-northwesterly vertical shear was pushing powerful convective (rising air that condenses and forms thunderstorms) thunderstorms to the south of the tropical cyclone's center. Some of these storms were dropping rain at a rate of almost 83 mm (3.3 inches) per hour. At NASA's Goddard Space Flight Center in Greenbelt, Maryland, that data was used to create a 3-D image that showed those intense storms. The data used to create the 3-D image was derived from TRMM's Precipitation Radar (PR) reflectivity data values. The 3-D image showed that some tops of these storms towered to heights of over 15 km (about 9.3 miles), indicating strong uplift of air.
The National Hurricane Center (NHC) issued the final advisory on Dolly on Wednesday, September 3 at 11 a.m. EDT (1500 UTC). At that time, Dolly had dissipated about 90 miles (145 km) west-southwest of Tampico, Mexico near 21.7 north latitude and 99.2 west longitude. At that time, Dolly's maximum sustained winds had dropped to 30 mph (45 kph) and were weakening quickly. It was moving to the west at 8 mph (13 kph). Dolly's remnants are bringing rainfall to southern Texas today, September 4, 2014. The National Weather Service in Brownsville, Texas noted that low-to-mid-level moisture remains high across the Rio Grande Valley with the remnants of Tropical Depression Dolly across northeast Mexico. That moisture will trigger isolated and scattered thunderstorms across parts of the Valley today. Rob Gutro | EurekAlert!
Authors: George Rajna An artificial iris manufactured from intelligent, light-controlled polymer material can react to incoming light in the same ways as the human eye. In an arranged marriage of optics and mechanics, physicists have created microscopic structural beams that have a variety of powerful uses when light strikes them. At EPFL, researchers challenge a fundamental law and discover that more electromagnetic energy can be stored in wave-guiding systems than previously thought. The fact that light can also behave as a liquid, rippling and spiraling around obstacles like the current of a river, is a much more recent finding that is still a subject of active research. An international team of physicists has monitored the scattering behavior of electrons in a non-conducting material in real-time. Their insights could be beneficial for radiotherapy. Researchers from the University of Illinois at Urbana-Champaign have demonstrated a new level of optical isolation necessary to advance on-chip optical signal processing. The technique involving light-sound interaction can be implemented in nearly any photonic foundry process and can significantly impact optical computing and communication systems. City College of New York researchers have now demonstrated a new class of artificial media called photonic hypercrystals that can control light-matter interaction in unprecedented ways. Experiments at the Institute of Physical Chemistry of the Polish Academy of Sciences in Warsaw prove that chemistry is also a suitable basis for storing information. The chemical bit, or 'chit,' is a simple arrangement of three droplets in contact with each other, in which oscillatory reactions occur. Researchers at Sandia National Laboratories have developed new mathematical techniques to advance the study of molecules at the quantum level. Correlation functions are often employed to quantify the relationships among interdependent variables or sets of data. 
A few years ago, two researchers proposed a property-testing problem involving Forrelation for studying the query complexity of quantum devices. A team of researchers from Australia and the UK have developed a new theoretical framework to identify computations that occupy the 'quantum frontier'—the boundary at which problems become impossible for today's computers and can only be solved by a quantum computer. Scientists at the University of Sussex have invented a ground-breaking new method that puts the construction of large-scale quantum computers within reach of current technology. Physicists at the University of Bath have developed a technique to more reliably produce single photons that can be imprinted with quantum information. Now a researcher and his team at Tyndall National Institute in Cork have made a 'quantum leap' by developing a technical step that could enable the use of quantum computers sooner than expected. A method to produce significant amounts of semiconducting nanoparticles for light-emitting displays, sensors, solar panels and biomedical applications has gained momentum with a demonstration by researchers at the Department of Energy's Oak Ridge National Laboratory. A source of single photons that meets three important criteria for use in quantum-information systems has been unveiled in China by an international team of physicists. Based on a quantum dot, the device is an efficient source of photons that emerge as solo particles that are indistinguishable from each other. The researchers are now trying to use the source to create a quantum computer based on "boson sampling". Comments: 37 Pages. [v1] 2017-06-26 08:04:24 Unique-IP document downloads: 21 times Vixra.org is a pre-print repository rather than a journal. Articles hosted may not yet have been verified by peer-review and should be treated as preliminary. 
In particular, anything that appears to include financial or legal advice or proposed medical treatments should be treated with due caution. Vixra.org will not be responsible for any consequences of actions that result from any form of use of any documents on this website.
P. hydrobothynus is a sexually dimorphic species of otter. The males have a very large, colorful mane of orange, yellow, and red and a dark brown body with a white underside. They have a short, stocky body, a short fat tail, and a much more robust pectoral girdle. Males are usually only 1 meter long and 0.35 meters tall. The females are dark brown with a white underside and are camouflaged in the murky water and muddy banks. The female stellar river otters are built like a traditional river otter: very streamlined, with a long wing-like tail and a longer, skinnier body. Females tend to be roughly 1.2 meters long and 0.2 meters tall. Both sexes have very thick fur to stay dry and for insulation. The two sexes are so morphologically different that they were first thought to be different species; however, a distinct red diamond pattern on the chest of this species was the first clue that the discoverers had in determining they were the same species.
Combustion describes the exothermic reaction between a fuel and an oxidizer, most often oxygen from air. Normally, the reactive equilibrium lies almost completely on the product side (~99.99%), so that for computations it can be assumed that all fuel is consumed, as long as enough oxygen is present. However, by Le Chatelier's principle, the reaction will be less complete when the product temperature is high: in very hot combustion one might have to account for the law of mass action. Keywords: Combustion Chamber, Combustion Product, Entropy Generation, Dewpoint Temperature, Rankine Cycle
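The complete-consumption assumption makes the stoichiometric air demand a simple molar ratio. A minimal sketch for methane (CH4 + 2 O2 → CO2 + 2 H2O), assuming dry air is 21% O2 by mole; the function name and excess-air parameter are illustrative:

```python
# Stoichiometric air requirement for complete combustion of methane.
# CH4 + 2 O2 -> CO2 + 2 H2O, assuming all fuel is consumed (valid while
# the equilibrium lies ~99.99% on the product side, as noted above).

O2_PER_FUEL = 2.0        # mol O2 per mol CH4, from the balanced equation
O2_FRACTION_AIR = 0.21   # mole fraction of O2 in dry air (assumption)

def stoich_air(mol_fuel, excess=0.0):
    """Moles of air needed; `excess` is the excess-air fraction (0.1 = 10%)."""
    o2_needed = O2_PER_FUEL * mol_fuel * (1.0 + excess)
    return o2_needed / O2_FRACTION_AIR

print(stoich_air(1.0))        # ~9.52 mol air per mol CH4
print(stoich_air(1.0, 0.10))  # ~10.48 mol with 10% excess air
```

In practice some excess air is always supplied to keep combustion complete, which is why the helper takes an excess-air fraction.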
The pulsed gamma rays had energies between 100 and 400 billion electronvolts (gigaelectronvolts, or GeV), far higher than 25 GeV, the highest-energy radiation from the nebula previously detected. A 400 GeV photon is 11 orders of magnitude, almost a trillion times, more energetic than a visible light photon. The high-energy emission was detected by the VERITAS array of four 12-meter Cherenkov telescopes in Arizona, which looks for the fleeting signatures of gamma-ray collisions with Earth's atmosphere in the skies overhead. The research is published in the Oct. 7 issue of Science. "We presented the results at a conference and the entire community was stunned," says Henric Krawczynski, PhD, professor of physics at Washington University. The WUSTL group led by James H. Buckley, PhD, professor of physics, and Krawczynski is one of six founding members of the VERITAS consortium. The Crab Nebula is the spectacular remains of a massive star that became a supernova in the year 1054 and was brilliant enough that its flaring was recorded by Chinese and Arab astronomers. The collapsed core of the defunct star, a pulsar discovered only in 1969, is only 30 kilometers in diameter but pumps out enormous amounts of energy, making the nebula 75,000 times brighter than the Sun. The dance of the high-energy particles spewed by the star and its strong magnetic field fill the inner nebula with phantasmagoric, ever-changing shapes like expanding rings made up of flickering knots and turbulent high-speed jets. Jets of particles stream out of the pulsar's magnetic poles, producing powerful beams of light. Because the magnetic field and the star's spin axis are not aligned, these beams sweep out a circle in space, crossing the line of sight from Earth at regular intervals, so that the emission appears to pulse.
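The trillion-fold energy comparison above can be checked with a quick calculation; the 550 nm reference wavelength taken here for "visible light" is an assumption:

```python
# Check the "~11 orders of magnitude" comparison between a 400 GeV photon
# and a visible-light photon, taking 550 nm green light as the reference.
import math

H_EV_S = 4.135667696e-15   # Planck constant in eV*s
C = 2.99792458e8           # speed of light, m/s

visible_eV = H_EV_S * C / 550e-9   # ~2.25 eV for a 550 nm photon
ratio = 400e9 / visible_eV         # 400 GeV vs. one visible photon

print(round(visible_eV, 2))        # ~2.25 eV
print(round(math.log10(ratio), 1)) # ~11.2 orders of magnitude
```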
"The pulsar in the center of the nebula had been seen in radio, optical, X-ray and soft gamma-ray wavelengths," says Matthias Beilicke, PhD, research assistant professor of physics at Washington University. "But we didn't think it was radiating pulsed emissions above 100 GeV. VERITAS can observe gamma rays between 100 GeV and 30 trillion electronvolts (teraelectronvolts, or TeV). Scientists had looked at the higher energy range before, but with instruments that had much lower sensitivities than VERITAS." Beilicke, Buckley and Krawczynski are members of WUSTL's McDonnell Center for the Space Sciences in Arts & Sciences. Nepomuk Otte, PhD, a VERITAS scientist and a postdoctoral researcher at the University of California, Santa Cruz, advocated using the powerful gamma-ray observatory to look at the pulsar, even though the emission cut-off was thought to be much lower than the telescopes' trigger energy. The VERITAS team turned the telescopes to the pulsar for 107 hours over a period of four years. To everyone's surprise, they found a very-high-energy pulsed emission. They can be absolutely certain the high-energy beam is coming from the pulsar because it has exactly the same period as the pulsed radio and X-ray emissions that have long been observed. Models of pulsar emission worked out to explain these earlier observations can't explain the VERITAS result without major adjustments. Whatever the details, some extreme physics is involved. "Electrons and protons whipping off the surface of the neutron star create a cylindrical superconducting magnetosphere that is about a thousand kilometers in diameter and rotates rigidly with the neutron star," Krawczynski says. "The cylinder is superconducting, but somewhere little gaps open, and particles are accelerated to very high energies across these gaps, like giant lightning strikes. The objective of the observations is to locate the gaps and to identify the mechanism by which particles are accelerated and gamma rays are emitted."
One theory is that particles are accelerated along the open field lines near the polar caps of the pulsar. A second, called the outer gap theory, is that they grow in the outer magnetosphere where regions of opposite charge meet. Each location favors different mechanisms for the production of radiation. "It would be extremely difficult for the polar cap model to account for the data points VERITAS has found," Beilicke says. "But, as always, reality is complicated. It's possible the radio emission could still be coming from the polar cap and the hard X-ray and gamma-ray emission originate from a different particle population accelerated in the outer gap. We're not yet at a point where we can rule out any of the scenarios." "The finding shows that the theory is not there yet," Krawczynski says. "We know less about these systems than we thought." The new finding is one of the most exciting results at VERITAS since it saw first light in 2007, he adds. It ranks right up there with the discovery of very-high-energy gamma-ray emission from a location very close to a supermassive black hole in the giant radio galaxy Messier 87 and the discovery of gamma rays from the starburst galaxy M82, an extremely busy stellar nursery.

MEDIA CONTACTS: Diana Lutz | Newswise
Newswise — Purdue University nuclear engineers have developed an advanced nuclear fuel that could save millions of dollars annually by lasting longer and burning more efficiently than conventional fuels, and researchers also have created a mathematical model to further develop the technology. New findings regarding the research will be detailed in a peer-reviewed paper to be presented on Oct. 6 during the 11th International Topical Meeting on Nuclear Reactor Thermal Hydraulics in Avignon, France. The paper was written by Shripad Revankar, an associate professor of nuclear engineering; former graduate student Ryan Latta, now an engineer at Brookhaven National Laboratory; and Alvin A. Solomon, a professor of nuclear engineering. The research is funded by the U.S. Department of Energy and focuses on developing nuclear fuels that are better at conducting heat than conventional fuels. Current nuclear fuel is made of a material called uranium dioxide with a small percentage of a uranium isotope, called uranium-235, which is essential to induce the nuclear fission reactions inside current reactors. "Although today's oxide fuels are very stable and safe, a major problem is that they do not conduct heat well, limiting the power and causing fuel pellets to crack and degrade prematurely, necessitating replacement before the fuel has been entirely used," Solomon said. Purdue researchers, led by Solomon, have developed a process to mix the uranium oxide with a material called beryllium oxide. Pellets of uranium oxide are processed to be interlaced with beryllium oxide, or BeO, which conducts heat far more readily than the uranium dioxide. This "skeleton" of beryllium oxide enables the nuclear fuel to conduct heat at least 50 percent better than conventional fuels. "The beryllium oxide is like a heat pipe that sucks the heat out and helps to more efficiently cool the fuel pellet," Solomon said. 
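The effect Solomon describes can be illustrated with the textbook result for a cylindrical pellet with uniform volumetric heating, where the center-to-surface temperature rise is dT = q'''R²/(4k). This is a generic conduction estimate, not the Purdue model, and the numbers below are assumed, typical light-water-reactor figures:

```python
# Center-to-surface temperature rise in a cylindrical fuel pellet with
# uniform volumetric heating: dT = q''' * R^2 / (4 k). Textbook formula,
# not the Purdue model; the numerical values are illustrative assumptions.

def pellet_delta_t(q_vol, radius, k):
    """q_vol: volumetric heat rate (W/m^3); radius (m); k: W/(m*K)."""
    return q_vol * radius**2 / (4.0 * k)

Q = 3.0e8        # W/m^3, typical pellet power density (assumption)
R = 4.1e-3       # m, pellet radius (assumption)
K_UO2 = 3.0      # W/(m*K), UO2 near operating temperature (assumption)

dt_std = pellet_delta_t(Q, R, K_UO2)          # conventional oxide fuel
dt_beo = pellet_delta_t(Q, R, 1.5 * K_UO2)    # 50% higher conductivity

print(round(dt_std))   # ~420 K rise, center to surface
print(round(dt_beo))   # ~280 K: a third lower, as the article implies
```

The inverse dependence on k is why even a 50 percent conductivity gain cuts the peak centerline temperature substantially.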
A mathematical model developed by Revankar and Latta has been shown to accurately predict the performance of the experimental fuel and will be used in future work to further develop the fuel, Revankar said. Pellets of nuclear fuel are contained within the fuel rods of nuclear fission reactors. The pellets are surrounded by a metal tube, or "cladding," which prevents the escape of radioactive material. Because uranium oxide does not conduct heat well, during a reactor's operation there is a large temperature difference between the center of the pellets and their surface, causing the center of the fuel pellets to become very hot. The heat must be constantly removed by a reactor cooling system because overheating could cause the fuel rods to melt, which could lead to a catastrophic nuclear accident and release of radiation - the proverbial "meltdown." "If you add this high-conductivity phase beryllium oxide, the thermal conductivity is increased by about 50 percent, so the difference in temperature from the center to the surface of these pellets turns out to be remarkably lower," Solomon said. Revankar said the experimental fuel promises to be safer than conventional fuels, while lasting longer and potentially saving millions of dollars annually. "We can actually enhance the performance of the fuel, especially during an accident, because this fuel heats up less than current fuel, which decreases the possibility of a catastrophic accident due to melting," Revankar said. "The experimental fuel also would not have to be replaced as often as the current fuel pellets. "Currently, the nuclear fuel has to be replaced every three years or so because of the temperature-related degradation of the fuel, as well as consumption of the U-235. If the fuel can be left longer, there is more power produced and less waste generated. 
If you can operate at a lower temperature, you can use the fuel pellets for a longer time, burning up more of the fuel, which is very important from an economic point of view. Lower temperatures also means safer, more flexible reactor operation." Solomon said a 50 percent increase in thermal conductivity represents a significant increase in performance for the 103 commercial nuclear reactors currently operating in the United States. "Just a 5 to 10 percent increase would be pretty significant, so a 50 percent increase would be quite an improvement," Solomon said. The next step in the research is to test the new fuel inside a nuclear reactor to make sure it stands up to the extreme conditions inside reactors over its entire lifetime. "We know it holds up well to very high temperatures, and now we are at the point where we want to irradiate this material and see what it does," Solomon said. The researchers also had created fuel pellets containing fingers of another high thermal conductivity material called silicon carbide, but the silicon carbide reacted with the uranium oxide at elevated temperatures. New fuel designs made of compatible uranium compounds are presently being studied. The research paper being discussed in October concentrates on the model's accuracy in predicting the results of experiments with silicon carbide and beryllium oxide, Revankar said. Related Web sites: Alvin A. Solomon: https://engineering.purdue.edu/NE/People/?group_id=2780&resource_id=3694 Shripad Revankar: https://engineering.purdue.edu/NE/People/?group_id=2780&resource_id=3691 11th International Topical Meeting on Nuclear Reactor Thermal Hydraulics: http://nureth11.com
India just made the space race a little more competitive. On Monday, the heaviest rocket ever made by India, weighing 640 tons, successfully delivered a satellite into orbit. It's considered a promising demonstration of the Indian Space Research Organisation's (ISRO) growing space capabilities. And the goal is to use the rocket to eventually transport its own astronauts into orbit. The launch was a milestone for the country as it aggressively pushes to decrease its reliance on European rockets for similar missions and have a bigger role in the space industry overall, BBC reports. India's Geosynchronous Satellite Launch Vehicle Mark III (GSLV-Mk III) weighed as much as 200 elephants. For now, the "monster" rocket's primary mission is to carry heavy satellites, but the country hopes ISRO will be able to outfit it to carry astronauts by 2024. ISRO wants to be the fourth country to send someone into space, after the United States, Russia, and China. The GSLV-Mk III's launch follows India's recent success with its space program. The country delivered a probe to Mars in 2014 and launched a record 104 satellites in one mission in February.
A team of scientists using NASA's Hubble Space Telescope has made the most detailed global map yet of the glow from a turbulent planet outside our solar system, revealing its secrets of air temperatures and water vapor. Hubble observations show the exoplanet, called WASP-43b, is no place to call home. It is a world of extremes, where seething winds howl at the speed of sound from a 3,000-degree-Fahrenheit "day" side, hot enough to melt steel, to a pitch-black "night" side with temperatures plunging below 1,000 degrees Fahrenheit. Astronomers have mapped the temperatures at different layers of the planet's atmosphere and traced the amount and distribution of water vapor. The findings have ramifications for the understanding of atmospheric dynamics and how giant planets like Jupiter are formed. "These measurements have opened the door to new ways of comparing the properties of different types of planets," said team leader Jacob Bean of the University of Chicago. First discovered in 2011, WASP-43b is located 260 light-years away. The planet is too distant to be photographed, but because its orbit is observed edge-on to Earth, astronomers detected it by observing regular dips in the light of its parent star as the planet passes in front of it. "Our observations are the first of their kind in terms of providing a two-dimensional map, in longitude and altitude, of the planet's thermal structure that can be used to constrain atmospheric circulation and dynamical models for hot exoplanets," said team member Kevin Stevenson of the University of Chicago. As a hot ball of predominantly hydrogen gas, the planet has no surface features, such as oceans or continents, that can be used to track its rotation. Only the severe temperature difference between the day and night sides can be used by a remote observer to mark the passage of a day on this world. The planet is about the same size as Jupiter, but is nearly twice as dense.
The planet is so close to its orange dwarf host star that it completes an orbit in just 19 hours. The planet also is gravitationally locked so that it keeps one hemisphere facing the star, just as our moon keeps one face toward Earth. This was the first time astronomers were able to observe three complete rotations of any planet, which occurred during a span of four days. Scientists combined two previous methods of analyzing exoplanets in an unprecedented technique to study the atmosphere of WASP-43b. They used spectroscopy, dividing the planet's light into its component colors, to determine the amount of water and the temperatures of the atmosphere. By observing the planet's rotation, the astronomers also were able to precisely measure how the water is distributed at different longitudes. Because there is no planet with these tortured conditions in our solar system, characterizing the atmosphere of such a bizarre world provides a unique laboratory for better understanding planet formation and planetary physics. "The planet is so hot that all the water in its atmosphere is vaporized, rather than condensed into icy clouds like on Jupiter," said team member Laura Kreidberg of the University of Chicago. The amount of water in the giant planets of our solar system is poorly known because water that has precipitated out of the upper atmospheres of cool gas giant planets like Jupiter is locked away as ice. But in so-called "hot Jupiters," gas giants that have high surface temperatures because they orbit very close to their stars, water remains a vapor that can be readily traced. "Water is thought to play an important role in the formation of giant planets, since comet-like bodies bombard young planets, delivering most of the water and other molecules that we can observe," said Jonathan Fortney, a member of the team from the University of California, Santa Cruz. In order to understand how giant planets form, astronomers want to know how enriched they are in different elements.
The team found that WASP-43b has about the same amount of water as we would expect for an object with the same chemical composition as our sun, shedding light on the fundamentals of how the planet formed. The team next aims to make water-abundance measurements for different planets. The results are presented in two new papers, one published online in Science Express Thursday and the other published in The Astrophysical Journal Letters on Sept. 12. The Hubble Space Telescope is a project of international cooperation between NASA and the European Space Agency. NASA's Goddard Space Flight Center in Greenbelt, Maryland, manages the telescope. The Space Telescope Science Institute (STScI) in Baltimore conducts Hubble science operations. STScI is operated for NASA by the Association of Universities for Research in Astronomy, Inc., in Washington. For images and more information about Hubble, visit:

Ray Villard | EurekAlert!
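As a rough check on the 19-hour orbit quoted above, Kepler's third law turns the period into a star-planet distance. The stellar mass used below (about 0.7 solar masses, plausible for an orange dwarf) is an assumed value, not taken from the article, so treat the result as an order-of-magnitude estimate:

```python
# Estimate WASP-43b's orbital distance from its 19-hour period via
# Kepler's third law: a^3 = G * M * T^2 / (4 * pi^2).
# The stellar mass (~0.7 solar masses) is an assumption.
import math

G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30  # solar mass, kg
AU = 1.496e11     # astronomical unit, m

def semi_major_axis(period_s, star_mass_kg):
    return (G * star_mass_kg * period_s**2 / (4 * math.pi**2)) ** (1 / 3)

a = semi_major_axis(19 * 3600, 0.7 * M_SUN)
print(round(a / AU, 3))   # ~0.015 AU, dozens of times closer than Mercury
```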
There has never been a good system for dealing with space waste - the space shuttle now brings full trash bags back to Earth; on the Russian space station Mir, junk would accumulate in hallways for months before it was sent to burn up in the Earth's atmosphere. That is why Jean Hunter, associate professor of agricultural and biological engineering, has been working with research partner Orbital Technologies Corp. (ORBITEC) of Madison, Wis., to develop a cutting-edge trash dryer for NASA. The space agency will need a new solid waste strategy before it sends astronauts on extended missions to Mars or an outpost on the moon. Why bother drying trash? In space, waste can't simply be "thrown out." If astronauts place it outside the airlock, it will orbit alongside their spacecraft. If they eject it away from the spacecraft, they might encounter it again later. Or - even worse - it could contaminate another planet. "We don't know if there's life on Mars," said Hunter, "but we know that our trash is teeming with it." Yes, the trash could be launched toward the sun, she says, but better to take usable resources out of it first. By that she means water, the most precious resource that astronauts take with them into space. Hunter's group has developed a system that blows hot, dry air through wet trash and then collects water from the warm, moist air that emerges. This water can be purified for drinking, and the remaining trash is dry, odorless and inert. The air and the heat are both recycled to contain odors and save energy. Heat-pump dehumidification drying, as the technique is called, has commonly been used for drying lumber, but it needs to be adapted for space because existing systems depend on the Earth's gravity and contain materials unacceptable for spaceflight.
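The recovery step Hunter describes, condensing water out of the warm, moist air leaving the trash, can be sketched with standard psychrometrics. The operating temperatures and humidities below are illustrative assumptions, not the Cornell/ORBITEC design values:

```python
# Rough estimate of water recovered per kilogram of dry air in a
# heat-pump dehumidification loop. Operating points (50 C moist air off
# the trash, 10 C saturated air off the condenser) are assumptions.
import math

P_ATM = 101325.0  # total pressure, Pa

def sat_vapor_pressure(t_c):
    """Magnus approximation of saturation vapor pressure (Pa), t in C."""
    return 610.94 * math.exp(17.625 * t_c / (t_c + 243.04))

def humidity_ratio(t_c, rel_humidity):
    """kg of water vapor carried per kg of dry air at pressure P_ATM."""
    e = rel_humidity * sat_vapor_pressure(t_c)
    return 0.622 * e / (P_ATM - e)

w_in = humidity_ratio(50.0, 0.90)   # warm, nearly saturated air off the trash
w_out = humidity_ratio(10.0, 1.00)  # air leaving the cold condenser coil

print(round(w_in - w_out, 3))       # ~0.069 kg water per kg dry air
```

The difference between the two humidity ratios is the condensate the cold coil captures; reheating the dried air and sending it back through the trash is what closes the loop described above.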
Hunter's team - including graduate student Apollo Arquiza, Jasmin Sahbaz '10, Carissa Jones '09 and high school student Trudy Chu - has been testing the dryer with fake "space trash" - a mix of paper towels, duct tape, baby wipes and dog food (to simulate the astronauts' food scraps). "When people think about garbage in space, they remember the trash compactor scene from "Star Wars" - and believe it or not, there's some truth to that scene," Hunter said. "Trash in space is like you saw in the movie: big, wet, nasty and varied" (though, of course, without any trash-dwelling monsters). A prototype heat-pump dryer is currently being tested at the NASA Ames Research Center. If NASA selects the Cornell/ORBITEC model (which Hunter describes in several peer-reviewed Society of Automotive Engineers technical papers) over dryers developed by competing groups, ORBITEC will make a prototype that performs under zero gravity, is small and light enough for a spacecraft and can survive the rigors of a rocket launch. The future of Hunter's trash dryer technology - and of the entire manned spaceflight program, for that matter - will ultimately depend on the goals of the Obama administration. "This whole thing could get mothballed," Hunter said, although she's hopeful that NASA will continue with its plans to return humans to the moon by 2019. "Now that we see India, Japan and China all interested in going back to the moon, I think the next president will want our nation to be part of that, too." Recycling urine in space: Jean Hunter's team is also working on recovering potable water from space wastewater. On the International Space Station only cabin humidity condensate (moisture exhaled by astronauts and evaporated from wet towels and clothing) is now recovered and purified. Urine is chemically stabilized and stockpiled, and the astronauts use baby wipes and moist towels to keep clean, so there is no hygiene water.
On the planned lunar outpost, urine and hygiene water will have to be recycled. Existing NASA technology can recover around 85 percent of that water, but the last 15 percent, charitably called "brine," poses a much greater challenge. Hunter's team has a grant with ORBITEC to develop a new specialized brine dryer, but the team has submitted another proposal to dry brine in the trash dryer. Blaine Friedlander | Newswise Science News
Authors: Raymond HV Gallucci The Earth's diametrically opposed, presumably symmetric, tides are due to the Moon's differential gravitational force varying across the Earth. This is not intuitively obvious, but becomes clear when the physics is examined mathematically. The presumed symmetry is due to an approximation that holds when the radius of the affected body (e.g., the Earth) is much less than its center-to-center distance from the affecting body (e.g., the Moon). The exact solution indicates an asymmetry, which becomes more pronounced as the assumption loses its applicability. Comments: 25 Pages. Presented at/Published in 2nd Annual John Chappell Natural Philosophy Society Conference - College Park, MD, July 20-23, 2016
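The asymmetry the abstract describes can be checked with a few lines of arithmetic: compare the exact differential acceleration at the near and far points of the Earth with the usual symmetric approximation 2GMR/d³. This sketch uses standard Earth-Moon constants and is my illustration, not a computation from the paper:

```python
# Exact vs. approximate lunar tidal acceleration at the sub-lunar (near)
# and antipodal (far) points of the Earth. Standard constants, SI units.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_moon = 7.342e22    # lunar mass, kg
d = 3.844e8          # Earth-Moon center-to-center distance, m
R = 6.371e6          # Earth radius, m

# Exact differential (tide-raising) accelerations relative to Earth's center
a_near = G * M_moon * (1.0 / (d - R)**2 - 1.0 / d**2)
a_far  = G * M_moon * (1.0 / d**2 - 1.0 / (d + R)**2)

# Usual symmetric approximation, valid when R << d
a_approx = 2.0 * G * M_moon * R / d**3

# Relative near/far asymmetry hidden by the approximation
asymmetry = (a_near - a_far) / a_approx
print(a_near, a_far, a_approx, asymmetry)
```

For the Earth-Moon case the near-side bulge acceleration exceeds the far-side one by a few percent, of order 3R/d, which is exactly the effect the approximation suppresses.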
Astronomers’ knowledge of the universe is continually expanding. It was only in the 1920s that astronomers discovered our solar system isn’t the center of the universe, said University of Chicago astronomer Wendy Freedman. She is known for measuring the Hubble constant, which tells astronomers how fast the universe is expanding. “We really have changed our perception of the universe in the last few decades,” Freedman said. Freedman spoke Wednesday evening in Rawles Hall for the fifth annual Frank K. Edmondson Astronomy Public Lecture. The Edmondson Lectures were established in memory of IU Professor Frank Kelly Edmondson, who was known for his great contributions to the astronomy community. In her lecture, “The Unexpected Universe: The Universe Continues to Reveal Surprises,” Freedman highlighted recent developments in astronomy and new telescopes being built. Two current major topics in astronomy are dark matter and dark energy. Dark matter has yet to be observed directly, but its presence is known from its gravitational effect. It doesn’t give off light and fills space previously thought to contain only stars, dust and gas. There is six times as much dark matter in the universe as luminous matter, Freedman said. “We don’t fundamentally understand what 95 percent of the universe is made up of,” Freedman said. There have been many theories as to what dark matter is, Freedman said. Maybe rocks, dust or compact objects. None of these has worked because, unlike dark matter, such candidates would have detectable signatures. “There are millions and millions of dark matter particles whizzing through you as you sit listening to this lecture,” Freedman said. Freedman referred to dark matter and energy as the biggest mystery in astronomy and said there is a Nobel Prize waiting for someone on the discovery of the composition of dark matter. “For the young people, we have left a lot for you to do,” Freedman said.
Freedman spent the remainder of her lecture talking about the Giant Magellan Telescope. The telescope will have seven mirrors, each 8.4 meters in diameter. It will be located in Chile. The optical telescope will have 10 times the resolution of the Hubble Telescope. Freedman said if you held up a dime here in Bloomington, the Giant Magellan Telescope could take a clear image of the dime from Chicago, or it could detect a lit candle on the moon. “With the Giant Magellan Telescope, life on other planets is a science question, not science fiction,” Freedman said. According to the Giant Magellan Telescope website, construction began in 2015. Freshman Natalia Almanza said she was on her way to jazz band practice when she saw the lecture and decided to stop in. “It was very informative,” Almanza said. “It was cool to hear about the different ways we explore the universe.”
Special Force (Streaming Video, 2010). From the forests and coral reefs of the developing world to Yellowstone's majestic wilderness, researchers are learning more about the role predators play in the health of our natural systems. This program illustrates what happens when species that naturally prey on other species disappear. Does the general public need to rethink deep-rooted views of predators? Can we re-create missing links and restore the natural balance of some of the world's most treasured places? Publisher: New York, N.Y.: Films Media Group, c2004. Characteristics: 1 streaming video file (51 min.): sd., col., digital file
Like booze, sex and profanity, plastics have become a sin. I presented my summer research, which explored the use of fisheries learning networks in countries around the world, at a recent conference. As part of a marine conservation biology course, we explored the management challenges present in the ever-changing landscapes of barrier islands. As we start to acutely feel the negative effects of outdated regulatory policies (and sometimes simply a lack thereof), it's time to push for change. Over the past two years, I have had the unique opportunity to explore the intersection of gender and scientific research. That’s provided me with a new lens to reflect on my own experiences as a female scientist. On a trip to Mexico, Jill Hamilton (MEM’18) got a chance to see the economically, environmentally and socially sustainable fishing practices being used by a Yucatan fishing cooperative. Jill Hamilton tells the story of a diving trip to Cozumel, Mexico, for a weeklong coral reef biology course. The controversial Northeast Canyons and Seamounts Refuge is an attempt to maintain some shelter for endangered marine organisms. When large nets are used to capture desirable species, there is often “bycatch” of undesirable species that are then thrown back into the water.
The Effects of Magnetospheric Convection on Atmospheric Electric Fields in the Polar Cap
It is well known that a potential difference of some 30 to 300 kV exists between the dawnside and the duskside boundaries of the polar cap ionosphere. This potential difference arises from interactions between the solar wind and the magnetosphere. In this paper we examine how the resulting ionospheric electric fields map down to the lower atmosphere. It is found that such fields map down to balloon altitudes of 30–40 km with little attenuation or distortion, in agreement with several previous authors’ results. It is also found that the mapping efficiency is not significantly affected by conductivity changes during auroral and polar cap absorption events, provided that these changes occur over areas larger than the scale size of the electric fields involved. These results generally support the ideas behind balloon measurements of ionospheric electric fields. It also appears possible to infer magnetospheric convection electric fields from simultaneous ground-based measurements of vertical atmospheric fields at suitably spaced stations in the polar cap.
Keywords: Solar Wind; Mapping Factor; Atmospheric Electric Field; Conductivity Profile; Magnetospheric Convection
As microstructural evolution often occurs on timescales that are inaccessible to standard atomistic simulation techniques, such as molecular dynamics, a more appropriate conceptual framework needs to be adopted in order to accurately predict the performance of materials on experimentally relevant timescales. This tutorial will introduce two such sets of tools, one theoretical and one computational, which are invaluable to bridge the timescale gap between theory, computation and experiments. The first part of the tutorial will introduce the fundamentals of transition state theory (TST) and discuss its applications in materials science as a crucial tool to design materials with the appropriate properties. The second part of the tutorial will show how rate theories can be leveraged to develop simulation techniques that address the timescale problem that traditional methods, such as molecular dynamics, face when dealing with infrequent-event processes. The tutorial targets materials scientists, both students and more senior researchers, who are familiar with the basics of atomistic simulations. No specific knowledge of long-timescale methods is assumed.
8:30 am – 12:00 pm
Part I (Hannes Jónsson): Introduction to the Fundamentals of Transition State Theory (TST)
TST was developed in the 1930s as a means to explain the reaction rates of elementary chemical reactions. Since then, much development has been carried out in the field, with an extension to solid-state physics and processes such as atomic diffusion in solids. In this context, harmonic transition state theory has been extremely useful for the community to find the mechanism and estimate transition rates. Powerful methods for finding saddle points on multidimensional energy surfaces have been developed and used in kinetic Monte Carlo simulations of the long-timescale evolution of materials.
Examples will include simulations of grain boundary structure, atomic diffusion at grain boundaries, dislocation formation and glide and crystal growth. A recent extension to magnetic transitions will also be presented. This part of the tutorial will be presented by Professor Hannes Jónsson, who has extensive experience on the subject and has developed some of the widely used methods.
(10:00 am – 10:30 am BREAK)
12:00 pm – 1:30 pm BREAK
1:30 pm – 5:00 pm
Part II (Danny Perez): Introduction to How Rate Theories can be Leveraged to Develop Simulation Techniques that Address the Timescale Problem Traditional Methods Face when Dealing with Infrequent-Event Processes
The focus will be on so-called accelerated molecular dynamics (AMD) methods, namely Parallel Replica dynamics, Hyperdynamics, and Temperature Accelerated dynamics, with special attention devoted to techniques that can leverage current and upcoming massively parallel platforms, such as Parallel Trajectory Splicing. The theoretical framework underpinning these methods will be discussed and targeted applications relevant to materials science, such as crystal growth, plastic deformation and radiation damage, will be presented. These will serve to illustrate how advanced techniques can be used to simulate the evolution of materials over very long timescales, often revealing unexpected insights into the microscopic underpinnings of microstructural evolution, with accuracies approaching that of direct fully atomistic simulations. This introduction will conclude with an overview of the software tools that can be used to carry out AMD simulations.
(3:00 pm – 3:30 pm BREAK)
- Danny Perez, Los Alamos National Laboratory
- Hannes Jónsson, University of Iceland
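The pipeline the tutorial describes (harmonic-TST rates feeding a kinetic Monte Carlo loop) can be sketched in a few lines. This is my minimal illustration, not tutorial material: a single atom hopping on a 1-D lattice, with hypothetical barriers, temperature and attempt frequency.

```python
import math, random

kB = 8.617e-5          # Boltzmann constant, eV/K
T = 600.0              # temperature, K (hypothetical)
nu0 = 1e13             # attempt frequency, 1/s (typical order of magnitude)

def htst_rate(barrier_eV):
    """Harmonic TST: k = nu0 * exp(-E_a / kB T)."""
    return nu0 * math.exp(-barrier_eV / (kB * T))

# Hypothetical hop barriers left/right for an atom on a 1-D lattice
events = [("left", htst_rate(0.50)), ("right", htst_rate(0.45))]

random.seed(0)
x, t = 0, 0.0
for _ in range(10000):
    total = sum(r for _, r in events)
    # advance the clock by an exponentially distributed residence time
    t += -math.log(random.random()) / total
    # pick an event proportionally to its rate
    u, acc = random.random() * total, 0.0
    for name, r in events:
        acc += r
        if u < acc:
            x += -1 if name == "left" else 1
            break
print(x, t)
```

Note how the lower barrier wins exponentially: a 0.05 eV difference at 600 K already biases the walk strongly to the right, which is why barrier accuracy dominates the fidelity of kMC predictions.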
The synchronization of two pendulum clocks hanging from a wall was first observed by Huygens in the 17th century. This type of synchronization is observed in other areas, and is fundamentally different from the problem of two clocks hanging from a moveable base. We present a model explaining the phase-opposition synchronization of two pendulum clocks in those conditions. The predicted behaviour is observed experimentally, validating the model. The synchronization between two periodic systems connected through some form of coupling is a recurrent, and still pertinent, problem in Nature, and in particular in physics. In the 17th century Huygens, the inventor of the pendulum clock, observed phase or phase-opposition coupling between two heavy pendulum clocks hanging either from a house beam or, later, from a board sitting on two chairs [1]. These two systems are inherently different in terms of the coupling process, and consequently of the underlying model. The latter case has been thoroughly studied [2-9] by considering momentum conservation in the clocks-beam system. The first case has been approached in theoretical works [10-13]. We present a mathematical model where the coupling is assumed to be attained through the exchange of impacts between the oscillators (clocks). This model presents the additional advantage of being independent of the physical nature of the oscillators, and thus can be used in other oscillator systems where synchronization and phase locking have been observed [14]. The model presented starts from the Andronov [15] model of the phase-space limit cycle of isolated pendulum clocks and assumes the exchange of single impacts (sound solitons, for this system) between the two clocks at a specific point of the limit cycle. Two coupling states are obtained, near phase and near phase opposition, the latter being stable.
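The isolated-clock model used in the paper is a dry-friction oscillator of the form q'' + mu*sign(q') + omega^2*q = 0. A short numerical sketch (my own, with the escapement kick omitted and parameter values taken loosely from the paper's experimental section) shows the characteristic linear, rather than exponential, decay of the velocity amplitude:

```python
import math

# Dry-friction pendulum q'' + mu*sign(q') + omega^2 * q = 0,
# integrated with semi-implicit Euler. Values follow the experimental
# section (mu ~ 2.54e-3, omega ~ 4.488 rad/s, v_max ~ 0.223 m/s);
# the escapement kick that sustains the real clock is omitted here.
mu, omega = 2.54e-3, 4.488
dt = 1e-4
T = 2 * math.pi / omega

q, v = 0.0, 0.223      # start at the measured limit-cycle velocity
peaks, vmax = [], 0.0  # max |v| recorded once per period
next_mark, t = T, 0.0
while t < 10 * T:
    s = (v > 0) - (v < 0)             # sign of the velocity
    v += (-mu * s - omega**2 * q) * dt
    q += v * dt
    t += dt
    vmax = max(vmax, abs(v))
    if t >= next_mark:                # close the current one-period window
        peaks.append(vmax)
        vmax = 0.0
        next_mark += T

# Dry friction predicts a constant velocity loss of 4*mu/omega per cycle
loss = [peaks[i] - peaks[i + 1] for i in range(len(peaks) - 1)]
print(peaks[0], sum(loss) / len(loss))
```

The constant per-cycle loss (about 4*mu/omega ≈ 2.3e-3 m/s here) is what the authors exploit later to extract mu from a linear fit of the measured maximum velocity.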
Our experimental data, obtained using a pair of similar pendulum clocks hanging from an aluminum rail fixed to a masonry wall, match the theoretical predictions and simulations. The isolated pendulum clock has been studied using models with viscous friction by physicists [2,3,5-9]. However, Russian mathematicians led by Andronov published a work [15] where the stability of the model with dry friction is established (the Andronov clock). The authors prove the existence and stability of the limit cycle. We adopt as the basis for our work the first of the aforementioned models, assuming that dry friction predominates. Using the angular coordinate q, the differential equation governing the pendulum clock is d²q/dt² + μ sign(dq/dt) + ω²q = 0, where μ > 0 is the dry friction coefficient, ω is the natural angular frequency of the pendulum and sign(x) a function giving −1 for x < 0 and 1 for x > 0, with sign 0 ∈ [−1,1]. In [15] it was considered that, in each cycle, a fixed amount of normalized kinetic energy is given by the escape mechanism to the pendulum to compensate the loss of kinetic energy due to dry friction in each complete cycle. We call this transfer of kinetic energy a kick. We set the origin such that the kick is given when the pendulum passes a position very close to 0. The phase portrait is shown in Fig. 1. There are anchors with geometries allowing two symmetric kicks per cycle. The theoretical treatment is similar, but we adopted the first model, with one kick per cycle, due to the geometry of the anchor of the clocks used in the experimental setting. Considering suitable initial conditions, we draw a Poincaré section ([16], vol. II, p. 268) as a half line in phase space [15]. The symbol + refers to the fact that the section is taken immediately after the kick. There is a loss of velocity due to friction during a complete cycle. Considering the velocity at the Poincaré section in each cycle one obtains [15] the non-linear discrete dynamical system
which has an asymptotically stable fixed point. The fixed point (2) attracts initial conditions v0 in an interval around it. Model for two pendulum clocks. We consider two pendulum clocks suspended on the same wall. When one clock receives the kick, the impact propagates in the wall, slightly perturbing the second clock. The perturbation is assumed instantaneous, since the travel time of sound in the wall between the clocks is very small compared to the period. The interaction was studied geometrically and qualitatively by Abraham [10,11]. However, that approach does not give estimates of the speed of convergence. In Vassalo-Pereira [13] the theoretical problem of the phase locking is tackled. The author makes the following assumptions: the pendulums have the same exact natural frequency ω; the perturbation imposes a discontinuity in the momentum but not a discontinuity in the dynamic variable; the interaction between clocks takes the form of a Fourier series [12]. Vassalo-Pereira deduced that the two clocks synchronize with zero phase difference. This is the exact opposite of Huygens' first remarks [1] and of our experimental observations, where phase opposition was observed. Therefore, we propose here a modified model accounting for a difference in frequency between the two clocks. Consider two oscillators indexed by i = 1,2. Each oscillator satisfies the differential equation (4); when it passes the kick position, its kinetic energy is increased by the fixed amount hi, as in the Andronov model. The coupling term is a normalized force built from the interaction function and a constant αi with acceleration dimensions. We consider that the effect of the interaction function is to produce an increment −α in the velocity of each clock, leaving the position invariant, when the other is struck by the energy kick, as we will see in equations (9).
We could consider that the interaction function is the Dirac delta distribution, giving exactly the same result. The sectional solutions of the differential equation (4) are obtainable when the clocks do not suffer kicks. To treat the effect of the kicks we construct a discrete dynamical system for the phase difference. The idea is similar to the construction of a Poincaré section. If there exists an attracting fixed point for that dynamical system, phase locking occurs. Our assumptions are: the pendulums have natural angular frequencies ω1 and ω2 near each other, with ω1 = ω + ε and ω2 = ω − ε, where ε ≥ 0 is a small parameter; since the clocks have the same construction, the energy dissipated in each cycle is the same for both, h1 = h2 = h; the friction coefficient is the same for both clocks, μ1 = μ2 = μ; the perturbative interaction is instantaneous (a reasonable assumption, since in general the perturbation propagation time between the two clocks is several orders of magnitude smaller than the periods); the interaction is symmetric, i.e. the coupling has the same constant α when clock 1 acts on clock 2 and conversely. In our model we assume that α is very small. All values throughout the paper are in SI units when not explicit. To prove phase locking we solve the differential equations (4) sectionally, with the two small interactions. Then we construct a discrete dynamical system taking into account the two interactions per cycle, as seen in Figs. 2 and 3. After that, we compute the phase difference when clock 1 returns to the initial position. The secular repetition of perturbations leads the system to near phase opposition, as we can see from the geometrical analysis of Figs. 2 and 3. The notation is simplified if we introduce auxiliary functions γ(φ) and χ(φ). We assume that the natural frequencies are near each other.
A difference of 28 s per day between clocks with natural periods on the order of 1.42 s, which is easy to obtain even with very poor clocks, means that ε is on the order of 10−3 rad s−1. This means that, in each cycle of each clock, each clock gives one perturbative kick to the other. Suppose that the clocks are brought into contact at t0 = 0, with the fastest clock (number 1) at its reference position; the motion of both clocks is then written using γ and χ. The perturbation of clock 1 on clock 2 adds the value −α to the velocity, keeping the position q2(0−). Thus we obtain the new initial conditions at t = 0+ for the movement of the second clock, and the new phase of clock 2, which is the phase difference of the two clocks φ′0 at 0+. A power expansion in α then gives the correction of the phase difference to first order in α. Now both clocks start their natural movement. We suppose that clock 2 arrives at the vertical position without being overtaken by clock 1; otherwise we begin our study after that situation occurs. Clock 2 takes a certain time to arrive at this position, fixing the phases of both clocks at that instant. The next interaction is the kick from clock 2 to clock 1, and with it the phase difference immediately before the second kick. Using a process similar to the previous kick, we obtain the phase difference after the second kick; expanding this function in power series yields the new phase difference and its correction to first order in α, and this expression can be further simplified.
When clock 1 returns to the vertical position its phase is 2π, and the phase of clock 2 then gives the new phase difference φ1 after one cycle of clock 1. Written in terms of an affine function and its coefficients, the large expression (31) is the composition of four maps, and the discrete dynamical system for n ≥ 0 is given by a map Ω. Obviously, Ω is a map from the interval [0,2π] to itself. Despite its apparent complexity, this map is relatively manageable: under certain conditions we can prove that Ω has a stable fixed point. In this work we deal only with the first-degree approximation, relative to the small parameters α and ε, of the value of the fixed point φf, which is near π. The phase difference is asymptotic to the solution φf. Knowing this value, it is possible to prove the existence and stability of the limit cycle of each clock in interaction, and to obtain the final asymptotic frequency ωf. Under this model we can say that Huygens' sympathy occurs. To first order in α and ε the iterative scheme defines a map Ξ, the dynamical system for the phase difference. There are two fixed points of Ξ in the interval [0,2π]. Stability requires |Ξ′(xf)| < 1 at the fixed point, and the condition on the argument of the arcsin function gives a convergence region. Therefore the limit of the phase difference is, in first order, very near π when the natural frequencies of both clocks are very near each other, i.e., for small ε. When the system reaches this limit the phase corrections are null for both clocks.
The coupling constant α and the half-difference ε between the clocks' frequencies are adjusted in the simulations. Additionally, we introduced noise in the model, with normal distribution, acting directly on the phase. The effect of the noise is to mimic the small perturbations that occur in the lab, e.g., vibrations in the wall and the stochastic changes of the level of the interaction, cycle after cycle. The strength of this stochastic effect is given by the parameter ρ. When the noise function is not used, i.e. ρ = 0, and the parameters are in the convergence region given by conditions (42), we have a fixed convergence point of Ω. Experiments were set up using a standard optical rail (Eurofysica) rigidly attached to a wall, to which two similar pendulum clocks were fixed through modified rail carriers. This setup can be seen in Fig. 4. The clocks were 230 mm apart. This was the lowest distance still ensuring that the pendulum trajectories were completely separated. The mass-driven anchor-pendulum clocks used were Acctim 26268 Hatahaway. One mass travel supplies energy for around 5 days, and the clocks take about one day to relax to the final frequency after winding. The period, of the order of 1.42 s, is configured by rotating the screw at the bottom of the pendulum, thus changing the actual pendulum length. The chime mechanism was inhibited in order to reduce mechanical noise. Time was measured using one U-shaped LED emitter-receptor TCST 1103, connected to a Velleman K8055 USB data acquisition board operated with custom-developed Visual Basic 6 software running on a standard personal computer (PC). The time measured corresponds to the left maximum angle, and is the midpoint of the interval of the sensor coverage by the pendulum beam. The uncertainty in time acquisition, typically expected in the ms range for the PC, was overcome by performing running averaging of the period data, up to 1000 samples. The data files were then processed offline using Mathematica.
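The qualitative behaviour of such a noisy phase-difference iteration can be illustrated with a generic Adler-type circle map. This is a stand-in for the paper's exact map Ξ, with hypothetical coupling K and detuning delta chosen so that the stable fixed point sits near phase opposition:

```python
import math, random

# Hypothetical parameters (illustrative only, not the paper's Xi map)
K = 0.05      # coupling per cycle
delta = 0.01  # detuning per cycle (roughly playing the role of 2*eps*T)
rho = 0.0     # noise strength; set > 0 to see escapes between plateaus

def step(phi):
    noise = random.gauss(0.0, rho) if rho > 0 else 0.0
    return (phi + delta + K * math.sin(phi) + noise) % (2 * math.pi)

phi = 1.0
for _ in range(2000):
    phi = step(phi)

# Stable fixed point: delta + K*sin(phi*) = 0, i.e. phi* = pi + asin(delta/K),
# which is near phase opposition (pi) when delta << K
phi_star = math.pi + math.asin(delta / K)
print(phi, phi_star)
```

With rho = 0 the iteration converges to the fixed point slightly past π; turning the noise on reproduces the plateau-and-slip behaviour described in the results, with the phase occasionally escaping and relocking one cycle later.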
In order to obtain the appropriate parameters for the simulation, the pendulums were filmed and the movement quantified using the free software Tracker 4.84 from OSP, https://www.cabrillo.edu/~tracker (last date of access: 27/03/2015). We assume that the coupling is obtained through the exchange of sound pulses between the clocks, propagated through the rail. This is consistent with the absence of coupling observed for other rail materials tried (MDF and fiberglass). The exact mechanism by which the pulse energy propagates through the clock hardware down to the pendulum is hard to assess in detail and depends on the individual clock. The sound propagation speed in Al is 6420 m s−1, leading to a propagation time of the order of 3.0 × 10−5 s, which is negligible compared with the 1.42 s period of the oscillators; the instantaneous propagation of sound assumed in the model is therefore a reasonable approximation.
Results and discussion
Observing the movement of one pendulum alone (as a damped oscillator, initially at the limit cycle), we noticed a decrease of the maximum velocity of the pendulum according to the linear fit vmax = 0.2228 − 0.0023n, where n is the number of cycles, with correlation coefficient 0.994. With the value of the period T = 1.40000 s and ω = 2π/T = 4.48799 rad/s, the decrease of velocity per cycle predicted by the Andronov model allows us to estimate the value of μ. The value of h can also be established by studying the movement at the limit cycle, where it determines the maximum velocity. We found consistently that the maximum velocity at the limit cycle is vf = 0.223 m/s; therefore, h ≈ 0.032. We used several possible values of the natural frequency difference. For the average natural angular frequency we take the same value of T = 1.40000 s, hence ω = 4.48799 rad/s. The fastest clock has a natural frequency of ω + ε and the slowest ω − ε. The angular frequency difference is, in first order, ε = ω²Δt/(2π), where Δt is half of the period difference.
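The linear decay of the maximum velocity can be recovered from per-cycle data with an ordinary least-squares fit. The data below are synthetic, generated from the fit reported in the text (vmax = 0.2228 − 0.0023 n), so the recovered slope is an illustration of the procedure, not a new measurement.

```python
import numpy as np

# Synthetic maximum-velocity data following the reported fit
# v_max = 0.2228 - 0.0023 n, plus a little measurement noise.
rng = np.random.default_rng(1)
n = np.arange(50)
v_max = 0.2228 - 0.0023 * n + 2e-4 * rng.standard_normal(n.size)

# Degree-1 polynomial fit: coefficients come back highest degree first.
slope, intercept = np.polyfit(n, v_max, 1)
r = np.corrcoef(n, v_max)[0, 1]
print(slope, intercept, r)   # slope ~ -0.0023, intercept ~ 0.2228
```

The per-cycle velocity loss given by the slope is the quantity the Andronov model relates to the damping parameter μ.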
Notice that when Δt = 2 × 10−4 s, corresponding to a delay between the clocks of 24.6 s per day for the non-coupled pair, the value of ε is ε = 6.4 × 10−4 rad/s. We used values of ε in the range 10−4 to 10−3 rad/s as a realistic estimate of the performance of our setup. The fixed parameters used for the simulations are then μ = 2.54 × 10−3, h = 0.032, ω = 4.48799 rad/s, t0 = 0.8π s, and a phenomenological noise coefficient ρ = 0.093, which fits the ripple observed in the experimental data. When we choose ε = 3 × 10−3 rad/s, corresponding to a huge natural delay of 116 s per day for the clocks in the isolated state, a value of α = 7 × 10−4 yields results matching the experimental data. Conditions (42), using the linear approximation, give an estimated coupling threshold of ε = 2.2 × 10−3 rad/s with the same values of the fixed parameters. This is not contradictory with our results, since in the simulations we used the function Ξ with no approximations, and we found numerically that the actual threshold for that function is at ε = 3.5 × 10−3 rad/s. We expect more frequent escapes from stable states than if we choose ε = 1.5 × 10−4 rad/s, corresponding to a natural delay of 2.9 s per day. These values correspond to what could realistically be obtained in our experimental setup. The plots can be seen in Fig. 5. Notice the small differences assumed for the frequencies, of the same order as the values measured for independent clocks. The time difference stabilizes in horizontal plateaus, corresponding to phase-opposition coupling. The stochastic term introduced in the simulation unsettles the system at some point, and the phase difference then increases quickly as the fastest clock runs away until the next synchronization plateau is reached, one or sometimes two periods away.
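The conversion between the half-period difference Δt, the frequency half-difference ε, and the daily drift quoted above can be checked directly. The relation ε = ω²Δt/(2π) = ωΔt/T is the first-order one implied by the quoted numbers:

```python
import math

T = 1.40000                 # average period (s)
omega = 2 * math.pi / T     # 4.48799 rad/s
dt = 2e-4                   # half of the period difference (s)

eps = omega * dt / T        # same as omega**2 * dt / (2 * pi)
# Relative drift of the two uncoupled clocks (their frequencies differ
# by 2*eps), expressed in seconds per day:
drift_per_day = (2 * eps / omega) * 86400

print(eps, drift_per_day)   # ~6.4e-4 rad/s, ~24.7 s/day
```

The computed drift is ≈24.7 s/day, matching the 24.6 s/day quoted in the text up to rounding.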
For the simulation with the smaller difference between frequencies, the number of transitions between plateaus is smaller, as expected, since stability is much easier to reach and maintain. This is strikingly similar to the behaviour observed in Fig. 6 for the actual clocks (right axis). The number of synchronization plateaus is of the same order and can be fine-tuned using the stochastic parameter in the simulation. We observed that the system could be unsettled by a number of external noise sources, e.g. doors closing nearby in the building, people entering or leaving the room, or even the elevator stopping, after which it proceeds to the next synchronization plateau. The periods of both clocks from the same experiment can be seen in the same figure (left axis). When the clocks are coupled the periods vary together within an interval of about 1 ms around 1.427 s, with a correlation coefficient above 0.97. Notice the almost perfect coincidence of the two curves except when the system leaves coupled states, and also the instability of the coupled period, varying over an interval of almost 1 ms. When coupling is lost, the period of one clock decreases sharply (by up to 2 ms or more) and the period of the other clock increases by a (smaller) amount. These perturbations of the periods coincide with the loss of phase-opposition coupling. Although one may expect changes in frequency even when the clocks are not coupled, due to the interaction between them, the difference in period can be estimated at around 2 ms, corresponding to a difference in frequency of the order of 6 × 10−3 rad/s. This asymmetry of the coupled period relative to the original periods is predicted by the model. Both periods return to the previous baseline value when the coupling is restored. Fig. 7 shows data for another experiment, in which the free clock frequencies were closer.
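Counting the synchronization plateaus in a time-difference record can be done by thresholding the per-cycle change; the staircase data and the jump threshold below are illustrative, not taken from the experiment.

```python
import numpy as np

def count_plateaus(time_diff, jump_threshold=0.05):
    """Count maximal runs where the per-cycle change stays below threshold."""
    steps = np.abs(np.diff(time_diff))
    in_jump = steps > jump_threshold
    # One plateau at the start, plus one after each jump region ends.
    return int(1 + np.sum((~in_jump[1:]) & in_jump[:-1]))

# Synthetic staircase: plateaus at 0, 0.7 and 1.4 periods, with fast
# transitions between them (mimicking the fast clock running away).
t = np.concatenate([np.zeros(300),
                    np.linspace(0.0, 0.7, 10),
                    np.full(400, 0.7),
                    np.linspace(0.7, 1.4, 10),
                    np.full(300, 1.4)])
print(count_plateaus(t))
```

On real data the threshold would be set well above the noise ripple of a plateau but below the slope of an escape transition.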
Both periods are remarkably coincident, but vary over an interval in excess of 10 ms when the clocks are coupled. Since the frequencies are closer, the synchronization should be easier to maintain, hence the low number of plateaus, but also slower to attain, hence the longer transitions between plateaus. If the perturbation is large enough, and especially if the frequencies are very close, plateaus both above and below can be attained. Between t = 100000 T and t = 112000 T, approximately, the synchronization is lost and the periods become separated by more than 100 μs (corresponding to a frequency difference around 3 × 10−4 rad/s), but remain stable (within 1 s). At approximately t = 148000 T, clock #2 is stopped. From that moment on, the period of the remaining working clock becomes stable within about 10 s, an interval one order of magnitude below. This confirms that the clocks strongly disturb one another, but also that both periods are kept at the same value in order to maintain synchronization, at the expense of some frequency instability. We have developed a model explaining the Huygens problem of synchronization between two clocks hanging from a wall. In this model, each clock transmits once per cycle a sound pulse that is translated into a change of pendulum speed. An equilibrium situation is obtained for a phase difference of almost half a cycle. These predictions match remarkably well the experimental data obtained for two similar clocks hanging from a wall. How to cite this article: Oliveira, H. M. and Melo, L. V. Huygens synchronization of two clocks. Sci. Rep. 5, 11548; doi: 10.1038/srep11548 (2015). Funded by FCT/Portugal. H.M.O. by project PEst-OE/EEI/LA0009/2013 for CMAGDS and L.V.M. by project PEst-OE/CTM/LA0024/2013 for INESC-MN and IN.
The stick-slip phenomenon, also known as the slip-stick phenomenon or simply stick-slip, is the spontaneous jerking motion that can occur while two objects slide over each other. Below is a simple, heuristic description of stick-slip using classical mechanics that is relevant for engineering purposes. In actuality, however, there is little consensus in academia regarding the true physical description of stick-slip, which follows from the lack of understanding of friction phenomena in general. The generally agreed view is that stick-slip behavior results from common phonon modes (at the interface between the substrate and the slider) that are pinned in an undulating potential-well landscape and that un-pin (slip) and pin (stick) under the influence primarily of thermal fluctuations. The stiffness of the spring (shown in the image below), the normal load at the interface (the weight of the slider), the time for which the interface has existed (influencing chemical mass transport and bond formation), and the original sliding rate (velocity) when the slider is in the slip phase all influence the behavior of the system. A description using common phonons (rather than constitutive laws like Coulomb's friction model) provides an explanation for the noise that generally accompanies stick-slip, through surface acoustic waves. Complicated constitutive models that lead to discontinuous solutions (see Painlevé paradox) end up requiring unnecessary mathematical effort (to support non-smooth dynamical systems) and do not represent the true physical description of the system; however, such models are very useful for low-fidelity simulations and animation. Stick-slip can be described as surfaces alternating between sticking to each other and sliding over each other, with a corresponding change in the force of friction. Typically, the static friction coefficient (a heuristic number) between two surfaces is larger than the kinetic friction coefficient.
If the applied force is large enough to overcome the static friction, the drop from static to kinetic friction can cause a sudden jump in the velocity of the movement. The attached picture shows a symbolic example of stick-slip: V is a drive system, R is the elasticity in the system, and M is the load lying on the floor, which is being pushed horizontally. When the drive system is started, the spring R is loaded and its pushing force against load M increases until the static friction between load M and the floor can no longer hold the load. The load starts sliding and the friction coefficient drops from its static value to its dynamic value. At this moment the spring can deliver more power and accelerates M. During M's movement the spring force decreases, until it is insufficient to overcome the dynamic friction, and M decelerates to a stop. The drive system, however, continues, so the spring is loaded again, and the cycle repeats. Examples of stick-slip can be heard from hydraulic cylinders, tractor wet brakes, honing machines, etc. Special dopes can be added to the hydraulic fluid or the cooling fluid to overcome or minimize the stick-slip effect. Stick-slip is also experienced in lathes, mill centres, and other machinery where something slides on a slideway; slideway oils typically list "prevention of stick-slip" as one of their features. Other examples of the stick-slip phenomenon include the music that comes from bowed instruments, the noise of car brakes and tires, and the noise of a stopping train. Stick-slip has also been observed in articular cartilage under mild loading and sliding conditions, where it could result in abrasive wear of the cartilage. Another example of the stick-slip phenomenon occurs when musical notes are played with a glass harp by rubbing a wet finger along the rim of a crystal wine glass.
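The drive-spring-load cycle described above can be reproduced with a minimal Coulomb-friction simulation (static coefficient μs larger than kinetic μk). The parameter values are illustrative, not taken from any specific system.

```python
import math

def simulate_stick_slip(t_end=10.0, dt=1e-4, m=1.0, k=50.0,
                        v_drive=0.1, mu_s=0.6, mu_k=0.3, g=9.81):
    """Spring-driven block on a floor with Coulomb friction (mu_s > mu_k).

    The drive point moves at constant speed v_drive and pulls the block
    of mass m through a spring of stiffness k.  Returns the number of
    stick -> slip transitions observed (each one is a 'jerk').
    """
    x, v = 0.0, 0.0          # block position and velocity
    stuck = True
    slip_events = 0
    t = 0.0
    while t < t_end:
        f_spring = k * (v_drive * t - x)
        if stuck and abs(f_spring) > mu_s * m * g:
            stuck = False                  # static friction overcome
            slip_events += 1
        if not stuck:
            # Kinetic friction opposes the motion (or the spring at v = 0).
            f_fric = -math.copysign(mu_k * m * g, v if v != 0.0 else f_spring)
            v_new = v + (f_spring + f_fric) / m * dt
            if v > 0.0 and v_new <= 0.0:   # block decelerated to rest
                v_new, stuck = 0.0, True   # it re-sticks: new cycle begins
            x += v_new * dt
            v = v_new
        t += dt
    return slip_events

print(simulate_stick_slip())   # several slip events: jerky, not smooth, motion
```

With these values the spring force builds up at k·v_drive per second while the block is stuck, snaps loose when it exceeds μs·m·g, and the block overshoots and re-sticks, repeating the cycle several times over the 10 s run; raising the drive speed or lowering μs − μk suppresses the jerking, which is exactly what slideway oils and friction dopes aim for.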
One animal that produces sound using stick-slip friction is the spiny lobster, which rubs its antennae over smooth surfaces on its head. Another, more common, example of sound produced by stick-slip friction is the grasshopper. Stick-slip is also the basic physical mechanism behind the active control of friction by applied vibrations.