Dataset schema:
text: string (length 174 to 655k)
id: string (length 47)
score: float64 (2.52 to 5.25)
tokens: int64 (39 to 148k)
format: string (24 classes)
topic: string (2 classes)
fr_ease: float64 (-483.68 to 157)
__index__: int64 (0 to 1.48M)
Abstract: Hydrothermal vents are deep-sea ecosystems that are almost exclusively known and explored by scientists rather than the general public. Continuing scientific discoveries arising from the study of hydrothermal vents are concomitant with the increasing number of scientific cruises visiting and sampling vent ecosystems. Through a bibliometric analysis, we assessed the scientific value of hydrothermal vents relative to two of the most well-studied marine ecosystems, coral reefs and seagrass beds. Scientific literature on hydrothermal vents is abundant, of high impact, international, and interdisciplinary, and is comparable in these regards with literature on coral reefs and seagrass beds. Scientists may affect hydrothermal vents because their activities are intense and spatially and temporally concentrated in these small systems. The potential for undesirable effects from scientific enterprise motivated the creation of a code of conduct for environmentally and scientifically benign use of hydrothermal vents for research. We surveyed scientists worldwide engaged in deep-sea research and found that scientists were aware of the code of conduct and thought it was relevant to conservation, but they did not feel informed or confident about the respect other researchers have for the code. Although this code may serve as a reminder of scientists' environmental responsibilities, conservation of particular vents (e.g., closures to human activity, specific human management) may more effectively ensure sustainable use of vent ecosystems for all stakeholders.
<urn:uuid:44445e35-cc4a-4827-b406-19a592be6abb>
3.546875
297
Academic Writing
Science & Tech.
7.9875
95,494,152
NASA scientists have discovered two large black holes which are on course to collide, causing a massive blast that will destroy their galaxy. However, there is no need to panic, as the galaxy set to be destroyed is not our own. Researchers using NASA's Galaxy Evolution Explorer (GALEX) say the two black holes will collide in approximately one million years, causing the PG 1302-102 galaxy to be obliterated. Black hole mergers are considered to be the most violent events in the Universe. When they finally meet, the black holes converge in a type of "death spiral," and are predicted to send out ripples known as gravitational waves, predicted by Albert Einstein 100 years ago. Scientists at the California Institute of Technology in Pasadena, who are trying to gain a better understanding of how galaxies and black holes merge, first found the pair earlier this year after detecting an unusual light signal coming from the centre of a galaxy named PG 1302-102. Using telescopes at the Catalina Real-Time Transient Survey, they found the varying signal was most likely generated by the motion of two black holes as they swing around each other every five years. The black holes themselves don't give off light, but the material surrounding them does, NASA reported. The study was published in this month's edition of the journal Nature. According to the authors, as the black holes spin faster the material gives off more light; the light appears brighter when the orbiting material is speeding toward Earth, and dimmer when it is speeding away. "It's as if a 60-Watt light bulb suddenly appears to be 100 Watts," Daniel D'Orazio, lead author of the study from Columbia University, told NASA. "As the black hole light speeds away from us, it appears as a dimmer 20-Watt bulb." The researchers were able to confirm their theory using observations made by both the GALEX and Hubble telescopes. "We were lucky to have GALEX data to look through," said co-author David Schiminovich of Columbia University in New York. "We went back into the GALEX archives and found that the object just happened to have been observed six times." The researchers now hope others will be able to use what they have found to find even closer-knit merging black holes. "We are strengthening our ideas of what's going on in this system and starting to understand it better," said Zoltan Haiman, another co-author from Columbia University.
<urn:uuid:46d82bcc-60ba-405c-a99c-4c32dd58d012>
3.828125
612
News Article
Science & Tech.
46.058509
95,494,155
Skeletons in the Physics Closet
Cleaning Our Own House
The neutrino and energy conservation: http://www.ethbib.ethz.ch/exhibit/pauli/neutrino_e.html
Prof. Scalise discussed a few weird things about modern physics. There are a few skeletons in the closet here, too. The study of matter and energy is a principal part of physics. The goal is to understand how matter and energy behave and, hopefully, how it all works. Matter is interesting stuff. It seems solid enough, but is really almost entirely empty space. If you could magnify an atom so its electron shells were the size of Texas Stadium, the nucleus would be the size of a baseball at the center. Everything in between is empty. In the nucleus you find protons (positively charged) and neutrons (not charged). They were long thought to be solid particles, although very tiny. Prof. Scalise used an example of the phenomenon called scaling. Consider a ball of red yarn: a completely ordinary ball of yarn, maybe 4 inches in diameter. If you view it from a mile away it will appear as a red point. Move in to 20 feet and it is a red sphere. Get closer, like 5 feet, and the yarn becomes visible; it looks like a 1-dimensional string all wrapped up. At 1 foot you can see that the yarn actually has diameter; it is a long cylinder all rolled up. Now really close in on it, getting 2 inches away. Now you can see the individual tiny threads in the yarn. The appearance of the yarn ball changes as you get closer (magnify it more). It goes from point to sphere to textured sphere to rolled-up line to rolled-up cylinder to many threads. Such an object does not scale. So what does this have to do with protons? In 1969, some physicists did an experiment, smashing tiny electrons into protons at increasing energies. The way the electrons bounced off, plus the production of heavier versions of the electron and the proton, revealed that the proton is not solid but has three parts. These parts were christened quarks. Over a wide range of energies (magnifications) these quarks look like points without any internal structure. There are six types of quark: up, down, top, bottom, charm, and strange. The names are physically meaningless - they just refer to 6 different quarks. A proton is made of two ups and a down, while a neutron is two downs and an up. This work was good for a Nobel prize in physics.
Current Strangeness (not quarks)
We were just talking about an experiment - "real physics" to many. But there's more to physics today. Mathematical theory is quite prominent. There are three primary areas of this:
- String theory
- Supersymmetry
- Formalism
Supersymmetry postulates that for every ordinary atomic particle there is a heavier symmetric partner. The only problem is that the supersymmetric partner particles have NEVER been seen in experiment. The masses of the partner particles are predicted to be large but seem to be quite uncertain. Formalism, the third area, is work beyond the current model of particles. These three theoretical areas have one property that many physicists don't like: they cannot be tested with experiments. Some (including Scalise) argue that these areas should be classified as mathematics. The final strangeness lies in the National Science Foundation's funding for physics research.
- 24% String Theory
- 23% Supersymmetry
- 17% Beyond Standard Model
Books and Articles:
- The Trouble With Physics: The Rise of String Theory, the Fall of a Science, and What Comes Next by Lee Smolin
- Not Even Wrong: The Failure of String Theory And the Search for Unity in Physical Law by Peter Woit
- The Elegant Universe: Superstrings, Hidden Dimensions, and the Quest for the Ultimate Theory by Brian Greene
<urn:uuid:ae64fd9b-74fd-4990-9ce9-63bb298dba9a>
3.21875
833
Nonfiction Writing
Science & Tech.
56.263555
95,494,184
Trends in the Mediterranean rainfall
The trends appearing in the annual rainfall of the 14 selected coastal and island stations of the Mediterranean were investigated using running 30-year averages. The periods used, as well as the standard deviation, the average variability and the coefficient of variation of the annual rainfall, are given for each of the 14 stations. It was found that upward and downward trends in the annual rainfall appeared in the majority of the stations, but in only a few stations do these trends coincide in the same intervals. A relative similarity appeared between the station pairs Marseille-Trieste, Malta-Tunis, Gibraltar-Rome, Nicosia-Limassol and Beirut-Alexandria. By examining the three most important maxima and minima in the course of rainfall, it was observed that many of them coincide at about the same time in the different stations, and also that these coincidences occurred near the maximum or minimum of sunspots.
Keywords: Standard Deviation, Annual Rainfall, Downward Trend, Relative Similarity, Island Station
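As an illustration of the running-average method described above (my own sketch, not the authors' computation; the rainfall series below is invented), a 30-year running mean and the coefficient of variation can be computed like this in Python:

import random

random.seed(1)
# Invented 90-year annual rainfall series (mm) for one hypothetical station.
annual_rainfall_mm = [random.gauss(550, 120) for _ in range(90)]

def running_mean(values, window=30):
    # Average over each consecutive `window`-year period.
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]

def coefficient_of_variation(values):
    # Standard deviation expressed as a percentage of the mean.
    mean = sum(values) / len(values)
    std = (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5
    return 100 * std / mean

smoothed = running_mean(annual_rainfall_mm)
print(f"first/last 30-year mean: {smoothed[0]:.0f} / {smoothed[-1]:.0f} mm")
print(f"coefficient of variation: {coefficient_of_variation(annual_rainfall_mm):.1f}%")

A rising or falling sequence of these 30-year means is what the paper reports as an upward or downward trend.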
<urn:uuid:5fe1c246-4d0f-4e07-aa0d-68ed28954074>
2.796875
416
Academic Writing
Science & Tech.
52.531047
95,494,187
A Programmer's Guide to C# 5.0
Download a free C# training document in PDF. This PDF tutorial is for software developers who want to understand the basics of C# programming; they will learn object-oriented programming concepts.
Table of contents
- Chapter 1: The .NET Runtime
- Chapter 1: C# and Libraries
- Chapter 2: C# QuickStart
- Chapter 5: Exception Handling
- Chapter 5: Object-oriented programming concepts
- Chapter 6: Member Accessibility and Overloading
- Chapter 7: Variables
- Chapter 7: Class Details
- Chapter 9: Interfaces
- Chapter 9: OOP basics
- Chapter 10: Versioning and Aliases
- Chapter 11: Flow of Execution
- Chapter 12: Variable Scoping and Definite Assignment
- Chapter 13: Operators and Expressions
- Chapter 14: Conversions
- Chapter 15: Arrays and Strings
- Chapter 17: Generic Types
- Chapter 18: Indexers, Enumerators, and Iterators
- Chapter 21: Attributes and Arrays
- Chapter 22: Delegates, Anonymous Methods, and Lambdas
- Chapter 24: Dynamic Typing
- Chapter 25: User-Defined Conversions
- Chapter 26: Operator Overloading
- Chapter 28: Linq to Objects, XML, and SQL
- Chapter 31: Other Details of C#
- Chapter 32: The fundamentals of the .NET Framework
- Chapter 33: Collection Classes
- Chapter 34: Threading
- Chapter 35: Parallel Programming
- Chapter 38: .NET Base Class Library Overview
- Chapter 39: More about C#
- Chapter 41: IDEs and Utilities
- File Size: 4,820.17 Kb
- Submitted On:
Take advantage of this course, A Programmer's Guide to C# 5.0, to improve your programming skills and better understand C#. This course is adapted to your level, as are all the C# PDF courses here, to enrich your knowledge. All you need to do is download the training document, open it and start learning C# for free.
Creating Games in C++: A Step-by-Step Guide
This PDF tutorial teaches you how to build a real game: a complete training course of under 600 pages by David Conger and Ron Little.
Learning Laravel by examples
With this PDF tutorial you will learn how to build a web application with the Laravel PHP framework, version 4; a free training document of under 58 pages.
Practical C++ programming
This tutorial is devoted to practical C++ programming. It teaches you the mechanics of the language; a free training document of under 549 pages for users of all levels.
C#: Practical Guide for Programmers
Download a free C# tutorial in PDF by Michel de Champlain. A complete training document of under 262 pages for intermediate-level users.
Fundamentals of C# programming
This tutorial is designed to teach the C# language and how to think like a programmer; a free PDF document of under 1,122 pages for users of all levels.
Getting started with C#
A free PDF tutorial about C# programming; a training document of under 52 pages designed for beginners who want to learn the basics of the C# language.
<urn:uuid:9bbc04a7-44fd-41cf-a42a-18aa2f06f0f3>
2.59375
692
Product Page
Software Dev.
50.299579
95,494,189
Python is a widely used general-purpose, high-level programming language.

Pro Lots of tutorials
Python's popularity and beginner-friendliness have led to a wealth of tutorials and example code on the internet. This means that when beginners have questions, they're very likely to find an answer on their own just by searching. This is an advantage over languages that are less popular or less thoroughly covered by their users.

Pro Easy to get started
On top of the wealth of tutorials and documentation, and the fact that it ships with a sizeable standard library, Python also ships with both an IDE (Integrated Development Environment: a graphical environment for editing, running and debugging your code) and a text-based live interpreter. Both help users start trying out code immediately, and give them the immediate feedback that aids learning.

Pro Clear syntax
Python's syntax is very clear and readable, making it excellent for beginners. The lack of extra characters like semicolons and curly braces reduces distractions, letting beginners focus on the meaning of the code. Significant whitespace also means that all code is properly and consistently indented. The language also uses natural English words such as 'and' and 'or', meaning that beginners need to learn fewer obscure symbols. On top of this, Python's dynamic type system means that code isn't cluttered with type information, which would further distract beginners from what the code is doing.

Pro Comes with extensive libraries
Python ships with a large standard library, including modules for everything from writing graphical applications and running servers to doing unit testing. This means that beginners won't need to spend time searching for tools and libraries just to get started on their projects.

Pro Good documentation
The Python community has put a lot of work into creating excellent documentation filled with plain English describing functionality. Contrast this with other languages, such as Java, where documentation often contains a dry enumeration of the API. As a random example, consider GUI toolkit documentation: the tkinter documentation reads almost like a blog article, answering questions such as 'How do I...', whereas Java's Swing documentation contains dry descriptions that effectively reiterate the implementation code. On top of this, most functions contain 'doc strings', which means documentation is often immediately available without even the need to search the internet.

Pro Supports various programming paradigms
Python supports three 'styles' of programming:
- Procedural programming.
- Object-oriented programming.
- Functional programming.
All three styles can be seamlessly interchanged and learnt in harmony in Python, rather than developers being forced into one point of view. This helps ease confusion over the debate about which programming paradigm is best, as developers get the chance to try all of them.

Pro Advanced community projects
There are outstanding projects being actively developed in Python. Projects such as the following, to name a random four:
- Django: a high-level Python web framework that encourages rapid development and clean, pragmatic design.
- iPython: a rich architecture for interactive computing, with shells and a notebook, which is embeddable and able to wrap libraries written in other languages.
- Mercurial: a free, distributed source control management tool.
It efficiently handles projects of any size and offers an easy and intuitive interface.
- PyPy: a fast, compliant alternative implementation of the Python language (2.7.3 and 3.2.3) with several advantages and distinct features, including a just-in-time compiler for speed, reduced memory use, sandboxing, micro-threads for massive concurrency, ...
When you move on from being a learner, you can still stay with Python for those advanced tasks.

Pro Good introduction to data structures
Python's built-in support and syntax for common collections such as lists, dictionaries, and sets, as well as supporting features like list comprehensions, foreach loops, map, filter, and others, makes their use much easier to get into for beginners. Python's support for object-oriented programming, but with dynamic typing, also makes the topic of data structures much more accessible, as it takes the focus off more tedious aspects, such as type casting and explicitly defined interfaces. Python's convention of hiding methods only by prefacing them with underscores further takes the focus off details such as access modifiers common in languages such as Java and C++, allowing beginners to focus on the core concepts without much worry about language-specific implementation details.

Pro Easy to find jobs
Python's popularity also means that it's commonly in use in production at many companies - it's even one of the primary languages in use at Google. Furthermore, as a concise scripting language, it's very commonly used for smaller tasks, as an alternative to shell scripts. Python was also designed to make it easy to interface with other languages such as C, and so it is often used as 'glue code' between components written in other languages.

Pro Import Turtle
Do something visually interesting in minutes by using the turtle standard library package. Turtle graphics is a popular way of introducing programming to kids. It was part of the original Logo programming language developed by Wally Feurzeig and Seymour Papert in 1966. Imagine a robotic turtle starting at (0, 0) in the x-y plane. After an import turtle, give it the command turtle.forward(15), and it moves (on-screen!) 15 pixels in the direction it is facing, drawing a line as it moves. Give it the command turtle.right(25), and it rotates in-place 25 degrees clockwise. Turtle can draw intricate shapes using programs that repeat simple moves.

from turtle import *
color('red', 'yellow')
begin_fill()
while True:
    forward(200)
    left(170)
    if abs(pos()) < 1:
        break
end_fill()
done()

Pro Static typing via mypy
Python's syntax supports optional type annotations for use with a third-party static type checker, which can catch a certain class of bugs at compile time (see the sketch at the end of this list). This also makes it easier for beginners to gradually transition to statically typed languages instead of wrestling with the compiler from the start.

Con Language fragmentation
A large subset of the Python community still uses / relies upon Python 2, which is considered a legacy implementation by the Python authors. Some libraries still have varying degrees of support depending on which version of Python you use, and there are syntactical differences between the versions.

Con Slow to run
Check, for example, the first result from Google: https://benchmarksgame.alioth.debian.org/u64q/python.html

Con Does not teach you about data types
Since Python is a dynamically typed language, you don't have to learn about data types if you start using Python as your first language. Yet data types are one of the most important concepts in programming. This will cause trouble in the long run when you inevitably have to learn and work with a statically typed language, because you will be forced to learn the type system from scratch.

Con Inelegant and messy language design
The first impression given by well-chosen Python sample code is quite attractive. However, very soon a lack of unifying philosophy / theory behind the language starts to show more and more. This includes issues with OOP such as a lack of consistency in the use of object methods vs. functions (e.g., is it x.sort() or sorted(x), or both for lists?), made worse by too many functions in the global namespace. Method hiding via name mangling and __init__(self) look and feel like features just bolted onto an existing simpler language.

Con Limited support for functional programming
While Python imports some very useful and elegant bits and pieces from FP (such as list comprehensions, and higher-order functions such as map and filter), the language's support for FP falls short of the expectations raised by the included features. For example, there is no tail-call optimisation and no proper lambdas. Referential transparency can be destroyed in unexpected ways even when it seems to be guaranteed. Function composition is not built into the core language. Etc.

Con Multi-threading can introduce unwanted complexity
Although the principles of multi-threading in Python are sound, the simplicity can be deceptive, and multi-threaded applications are not always easy to create when multiple additional factors are accounted for. Threads have to be created and managed explicitly.

Con The process of shipping/distributing software is relatively complicated
Once you have your program, the process of packaging it and sending it to others to use is fragile and fragmented. Python is still looking for the right solution for this, with continuing differences of opinion. These differences are a huge counter to Python's mantra of "There should be one-- and preferably only one --obvious way to do it."
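To make the "static typing via mypy" pro above concrete, here is a minimal sketch of optional type annotations. It is illustrative only: the function is invented for the example, and checking it requires installing the third-party mypy tool separately.

from typing import Optional

def mean(values: list[float]) -> Optional[float]:
    # Returns the arithmetic mean, or None for an empty list.
    if not values:
        return None
    return sum(values) / len(values)

print(mean([1.0, 2.0, 3.0]))  # 2.0
# mean("oops") would crash at runtime, but running `mypy` on this file
# flags the incompatible "str" argument before the program is ever run.

Annotations like these are ignored by the interpreter itself, which is why they can be adopted gradually, one function at a time.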
<urn:uuid:0a63fc43-8122-47e0-aeac-7bb666f66aa8>
3.3125
1,904
Listicle
Software Dev.
41.323954
95,494,192
On Earth, only archaea (archaebacteria) and anaerobic bacteria are able to live without oxygen.
Origin of oxygen
Oxygen comes from the activity of cyanobacteria, organisms that use chlorophyll to extract carbon from carbon dioxide and release oxygen; this is the mechanism of photosynthesis. Water vapor carried into the upper atmosphere by sun-driven evaporation is also split into dioxygen and dihydrogen.
Four billion years ago, the Earth's atmosphere was composed mainly of methane, ammonia and carbon dioxide. The bacteria in the oceans then lived by fermentation, until the appearance of mutant bacteria that used photosynthesis, consuming carbon from CO2. These bacteria continued to mutate into cyanobacteria, which appeared 3.2 billion years ago in the form of blue-green algae. They drew their hydrogen from water.
Change in the amount of oxygen in the atmosphere
2.7 billion years ago, the concentration of oxygen reached 2% of the composition of the earth's atmosphere, and anaerobic bacteria began to disappear. 540 million years ago, during the Cambrian, the oxygen level reached 15% and thereafter fluctuated between that level and 30%. 300 million years ago, during the Permian period, oxygen reached a maximum of 35%. Insects and amphibians then grew to gigantic sizes: dragonflies had 24-inch wingspans, scorpions were 30 inches long, spiders were as big as a human head and devoured small reptiles, and centipedes were 6.5 feet long. This was due to the growth of giant fern forests (ferns the size of trees) across the planet, whose buried organic matter turned into coal deposits. With so much oxygen in the air, lightning storms set the sky ablaze, and fires gradually brought the level down. Reptiles succeeded the giant insects. The oxygen content then fell to 10-15%, and temperatures rose, as did CO2 concentrations.
Human activities do not change the amount of oxygen in the atmosphere, despite the burning of nearly 7 billion tons of fossil fuels. If oxygen disappeared overnight, it would take only about 2,000 years for nature to produce the same oxygen content, photosynthesis now operating at a rapid pace. For the Earth to reach about 30-35% oxygen again, the continents would have to merge into a new Pangea and drift into the tropical zones, where the moisture would allow ferns to flourish again, mutate to great heights, and produce considerable amounts of oxygen. At present, it is algae and marine phytoplankton that produce the most oxygen on Earth, followed by the forests.
Why doesn't the current level increase?
If nature could recreate the present level in less than 2,000 years starting from scratch, why does the current level not go beyond 20-21%? O2 is mainly produced by photosynthesis: plants take up water and CO2 to create carbohydrates (CH2O) and release O2. But at night the plant respires, consuming O2 and releasing CO2, so over several years forests often have a near-zero net oxygen balance. Thus the Amazon is not the real lung of the planet (it is above all the largest pool of biological diversity); it is rather the oceans that deserve that title. Through their algae and phytoplankton, the oceans release a great deal of oxygen; part of it is breathed by living things on land, and the rest goes into the atmosphere. But oxygen is also consumed by the oxidation of iron at the surface. A large part of the organic matter in the water oxidizes as well, while the rest is deposited at the bottom of the oceans. For the oxygen level to increase, photosynthetic activity must increase and organic matter (carbon) must be buried rather than oxidized, so that the oxygen remains free and can accumulate. This is what happened during the Carboniferous-Permian.
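For reference, the standard textbook balance behind that last point (not taken from the original article): each carbon atom fixed as carbohydrate frees exactly one molecule of O2,

CO2 + H2O + light -> CH2O + O2, or, for glucose: 6 CO2 + 6 H2O -> C6H12O6 + 6 O2.

Respiration and the decay of dead organic matter run the same reaction in reverse, consuming one O2 per carbon; only carbon that escapes oxidation through burial leaves its O2 behind in the atmosphere.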
<urn:uuid:5837ea7f-caca-49f2-af6f-36428b5187de>
3.84375
874
Knowledge Article
Science & Tech.
45.815882
95,494,204
Fistularia commersonii (photographed in the Maldives). Described by Rüppell, 1838; synonym: Fistularia depressa Günther, 1880.
The bluespotted cornetfish (Fistularia commersonii), also known as smooth cornetfish or smooth flutemouth, is a marine fish which belongs to the family Fistulariidae. This very long and slender reef-dweller belongs to the same order as the pipefishes and seahorses, the Syngnathiformes. It is widespread in the tropical and subtropical waters of the Indo-Pacific, including the Red Sea, as far north as Japan and east to the coasts of the Americas. In 2000, its presence was reported in the Mediterranean Sea; since then, it has continued to disperse and is now well established in some areas. This species is considered part of the Lessepsian migration. It has spread rapidly through the Mediterranean from its point of entry at the Suez Canal, the first records being off Israel in 2000, and it had reached the southern coast of Spain and as far north as the Gulf of Lions by 2007. Scientists have determined that the fish in the Mediterranean are all descended from a small number of ancestors, possibly as a result of a single invasion event, and are not as genetically variable as their conspecifics in the Red Sea.
The bluespotted cornetfish grows to a length of 1.6 m (5.2 ft), but the average is around 1 metre (3 ft 3 in). It is notable for its unusually long, slender body shape. It has a tubular snout, large eyes and a long tail filament lined with sensory pores which may help with detecting prey. Its body is greenish-grey to brown, with two thin blue stripes or lines of dots on the back, lighter toward the front; the pattern changes to broad bands at night.
The bluespotted cornetfish is usually a solitary predator, stalking and feeding on small fishes, crustaceans, and squid. Sometimes they feed in small groups along the bottom on small, bottom-dwelling fish, which their long snouts are very efficient at sucking up. Reproduction is oviparous: the large eggs hatch and develop outside of the body, and larvae hatch at 6–7 millimetres (0.24–0.28 in).
<urn:uuid:44f25e50-afa5-415f-ac1b-35d4013c0c1d>
2.828125
864
Knowledge Article
Science & Tech.
52.557029
95,494,236
RULE 3: Never mix your Actors
The UML definition of an Actor is an external entity that interacts with the system under development. In other words: it's a stakeholder. Having analysed all your stakeholders (see Part 3), it's tempting to stick them (no pun intended) as actors on a use case diagram and start defining use cases for each. Each set of stakeholders (Users, Beneficiaries or Constrainers) has its own set of concerns, language and concepts. Each stakeholder group has a different set of issues, problems, wants and desires: Users, for example, are interested in functionality; Constrainers in compliance. The way a system is perceived by the stakeholders depends on their viewpoint, their needs, their technical background, etc. Each group's paradigm – their way of perceiving the system – will be different and will often involve subtly different concepts. For example, Users may have no concept of return-on-investment (RoI) for the system, whereas this may be a key concept to a Beneficiary. Just as concepts are different, so is the language used to describe them. In many cases, the same word is used in different contexts to mean different things. For example: how many different concepts of 'power' can you think of? Mechanical, physical, electrical, political... It is vital never to mix actors from different stakeholder groups on the same use case diagram. Trying to mix actors leads to ambiguity and confusion, both for the writer and the reader! The differences in concept, viewpoint and language will make the use case almost impossible to decipher and understand. By all means draw a separate use case diagram for each set of stakeholders. (Note: use case descriptions for non-User stakeholders are beyond the scope of this article.)
<urn:uuid:2878740d-0eb2-46b2-af3e-5e7d29719e2b>
2.859375
447
Tutorial
Software Dev.
43.401238
95,494,256
Keywords: Tapinoma, Formica, facultative myrmecophily, Washington, Eriogonum
I examined ant attendance and its importance to larval survivorship in a facultatively myrmecophilous butterfly, Icaricia acmon (Westwood and Hewitson) (Lycaenidae), in a population that uses two host plant species, Eriogonum compositum Dougl. and E. strictum Benth. (Polygonaceae). Third and fourth instar larvae of I. acmon were tended by three ant species: Tapinoma sessile (Say), Formica neogagates Emery, and an unidentified Formica species. Third instar larvae were tended less frequently than fourth instar larvae on both plant species, and T. sessile was the attendant ant species for a higher proportion of third instar than fourth instar larvae developing on E. compositum. Over the duration of the study, all switches of attendant ant species on individual plants were from early T. sessile attendance to later F. neogagates attendance. An exclosure experiment revealed that ant attendance had no significant effect on larval mortality.
Published in the Journal of the Lepidopterists' Society.
Peterson, Merrill A., "The Nature of Ant Attendance and the Survival of Larval Icaricia acmon (Lycaenidae)" (1993). Biology. 44.
<urn:uuid:057199d1-b822-4e27-8372-85aec980123f>
2.578125
308
Academic Writing
Science & Tech.
18.427941
95,494,257
We have sequenced the genome of the endangered European eel using the MinION by Oxford Nanopore, and assembled these data using a novel algorithm specifically designed for large eukaryotic genomes. For this 860 Mbp genome, the entire computational process takes two days on a single CPU.

Human herpesvirus type 1 (HHV-1) has a large double-stranded DNA genome of approximately 152 kbp that is structurally complex and GC-rich. This makes the assembly of HHV-1 whole genomes from short-read sequencing data technically challenging.

Nitrification, the oxidation of ammonia via nitrite to nitrate, has always been considered to be a two-step process catalysed by chemolithoautotrophic microorganisms oxidising either ammonia or nitrite.

Second-generation sequencing has revolutionized genomic studies. However, most genomes contain repeated DNA elements that are longer than the read lengths achievable with typical sequencers, so the genomic order of several generated contigs cannot be easily resolved.
<urn:uuid:29b3dc93-7a2b-4dda-9dde-7aa37f4fac71>
2.71875
207
Knowledge Article
Science & Tech.
14.059205
95,494,259
A first for physics – University of Jena physicists are the first to achieve optical coherence tomography with XUV radiation at laboratory scale.
A visit to the optometrist often involves optical coherence tomography. This imaging process uses infrared radiation to penetrate the layers of the retina and examine it more closely in three dimensions, without having to touch the eye at all. This allows eye specialists to diagnose diseases such as glaucoma without any physical intervention. However, this method would have even greater potential for science if a shorter radiation wavelength were used, allowing a higher image resolution. Physicists at Friedrich Schiller University Jena (Germany) have now achieved just that, and they have reported their research findings in the latest issue of the journal "Optica" (DOI: 10.1364/OPTICA.4.000903).
First XUV coherence tomography at laboratory scale
For the first time, the University physicists used extreme ultraviolet radiation (XUV) for this process, generated in their own laboratory, and were thus able to perform the first XUV coherence tomography at laboratory scale. This radiation has a wavelength of between 20 and 40 nanometres – from which it is just a small step to the X-ray range. "Large-scale equipment, that is to say particle accelerators such as the German Elektronen-Synchrotron in Hamburg, is usually necessary for generating XUV radiation," says Silvio Fuchs of the Institute of Optics and Quantum Electronics of Jena University. "This makes such a research method very complex and costly, and available to only a few researchers." The physicists from Jena have already demonstrated this method at large research facilities, but they have now found a way to apply it at a smaller scale. In this approach, they focus an ultrashort, very intense infrared laser into a noble gas, for example argon or neon. "The electrons in the gas are accelerated by means of an ionisation process," explains Fuchs. "They then emit the XUV radiation." This method is very inefficient, as only a millionth part of the laser radiation is actually converted from infrared into the extreme ultraviolet range, but the loss can be offset by the use of very powerful laser sources. "It's a simple calculation: the more we put in, the more we get out," adds Fuchs.
Strong image contrasts are produced
The advantage of XUV coherence tomography is that, in addition to the very high resolution, the radiation interacts strongly with the sample, because different substances react differently to light. Some absorb more light and others less. This produces strong contrasts in the images, which provide the researchers with important information, for example regarding the material composition of the object being examined. "For example, we have created three-dimensional images of silicon chips, in a non-destructive way, on which we can distinguish the substrate clearly from structures consisting of other materials," adds Silvio Fuchs. "If this procedure were applied in biology – for investigating cells, for example, which is one of our aims – it would not be necessary to stain samples, as is normal practice in other high-resolution microscopy methods. Elements such as carbon, oxygen and nitrogen would themselves provide the contrast." Before that is possible, however, the physicists of the University of Jena still have some work to do.
"With the light sources we have at the moment, we can achieve a depth resolution down to 24 nanometres. Although this is sufficient for producing images of small structures, for example in semiconductors, the structure sizes of current chips are in some cases already smaller. However, with new, even more powerful lasers, it should be possible in future to achieve a depth resolution of as little as three nanometres with this method," notes Fuchs. "We have shown in principle that it is possible to use this method at laboratory scale." The long-term aim could ultimately be to develop a cost-effective and user-friendly device combining the laser with the microscope, which would enable the semiconductor industry or biological laboratories to use this imaging technique with ease.
Silvio Fuchs et al.: "Optical coherence tomography with nanoscale axial resolution using a laser-driven high-harmonic source", Optica (2017) Vol. 4, Issue 8, 903-906, https://doi.org/10.1364/OPTICA.4.000903
Institute of Optics and Quantum Electronics, Friedrich Schiller University Jena, Max-Wien-Platz 1, 07743 Jena, Germany. Phone: +49 (0)3641 / 947615
Sebastian Hollstein | idw - Informationsdienst Wissenschaft
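As a quick plausibility check on the resolution figures quoted above (my own back-of-the-envelope sketch, not part of the press release), the textbook axial-resolution formula for OCT with a Gaussian spectrum, l_c = (2 ln 2 / pi) * lambda0^2 / delta_lambda, lands in the same range as the reported 24 nm when applied to a source spanning roughly 20-40 nm:

import math

def oct_axial_resolution(center_wavelength_nm, bandwidth_nm):
    # Coherence length (axial resolution) for a Gaussian source spectrum.
    return (2 * math.log(2) / math.pi) * center_wavelength_nm ** 2 / bandwidth_nm

# XUV band of 20-40 nm: centre ~30 nm, bandwidth ~20 nm.
print(f"{oct_axial_resolution(30, 20):.1f} nm")  # ~19.9 nm, the same order as the 24 nm achieved

The real source spectrum is not Gaussian, which is one reason the demonstrated value is somewhat larger than this idealised estimate.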
<urn:uuid:e9a8b9c4-c4fc-428e-bd0a-e700ffd0b357>
3.46875
1,686
Content Listing
Science & Tech.
35.207584
95,494,261
For the green sea turtles that hatched at the northern end of the Great Barrier Reef, something essential for their species' survival is missing: males. We have a very good idea why this is happening, and once again it is humanity's fault. For many reptiles, sex is determined not by chromosomes, but by the temperature at which their eggs are incubated. For years this has sparked concerns that human-induced climate change will cause some species to only produce one sex, leading to extinction. Now it seems that day has arrived for the much-beloved green sea turtles, at least in the far north of Queensland. Local extinction may not be far behind. Sea turtles have a form of what is known as temperature-dependent sex determination (TSD). When eggs hatch from sand that averaged 29°C (84°F) over the incubation period, they produce a 50-50 mix of males and females. Cooler temperatures mean more males, while warmer ones mean more females. In the past, a particularly hot or cold year might have produced more of one sex than the other, but for a species that doesn't mate until at least the age of 25, this didn't matter much as long as it evened out over several years. But global warming has created long-standing concerns that we could lose these great seafarers, along with their important role in the ocean ecosystem as almost the only thing that can eat jellyfish. In 2014, a study found that most newly hatched loggerhead turtles were female, but there were still enough males to ensure the future of the population for a long time to come. Unfortunately, Dr Michael Jensen of the National Oceanic and Atmospheric Administration has found things are more serious for the endangered green sea turtles (Chelonia mydas) of the Great Barrier Reef. At the southern end of the reef, 65-69 percent of turtles are female, which no doubt makes for some very happy males, and could actually improve breeding prospects compared to a 50-50 ratio in a species where some males mate with many females. The mid-reef region has almost no rookeries. Among those born north of Cooktown, however, 99.1 percent of the juvenile turtles Jensen examined were female, along with 86.8 percent of adults. Among the subadult population, the male proportion was a shocking 0.2 percent, suggesting things have gotten much worse as the world has warmed. "The northern GBR rookeries have been producing primarily females for more than two decades, and the complete feminization of this population is possible in the near future," Jensen and co-authors wrote in Current Biology. Turtles from both ends of the reef feed in the same places, so Jensen used genetic analysis to establish where each turtle was born while determining its sex. The same feeding grounds also held some turtles from New Caledonia and other Pacific islands, whose sex ratios were quite similar to those of turtles born at the southern end of the Great Barrier Reef. The findings have a certain irony in the light of a recent study that found some men reject environmentally friendly options when shopping because they see them as too feminine. As bad as things are for sea turtles, they are likely to be worse for the tuataras – the last survivors of an order of lizard-like reptiles that predates the dinosaurs. When temperatures rise, more male tuataras hatch, a far more threatening outcome for species survival. TSD species have survived periods when the world was much hotter or colder before, as they can evolve to a different temperature mid-point where populations are balanced.
Green sea turtles, which currently incubate their eggs during the hottest part of the year, could also even out their sex ratio by nesting at a different time. Some reptile species even switch to using sex chromosomes when it suits them best. However, these changes don’t always happen quickly, and may not be possible in the face of current rates of warming, which are much faster than anything the turtles have experienced in their multi-million-year history. Although some intermixing is thought to occur between populations, most southern males return to mate near where they were born, rather than seizing the opportunities further north. Experiments are being run on shading beaches favored by loggerhead turtles in the south, and the idea could be extended to those preferred by northern green turtles.
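To illustrate how a pivotal temperature near 29 °C produces the skewed ratios described above, here is a purely illustrative Python sketch; the logistic curve and its steepness value are assumptions chosen for demonstration, not parameters fitted to the study's data:

import math

PIVOTAL_C = 29.0   # incubation temperature giving a 50:50 sex ratio
STEEPNESS = 1.6    # assumed shape parameter, for illustration only

def fraction_female(sand_temp_c):
    # Toy logistic model of temperature-dependent sex determination:
    # warmer sand skews female, cooler sand skews male.
    return 1 / (1 + math.exp(-STEEPNESS * (sand_temp_c - PIVOTAL_C)))

for temp in (27.0, 29.0, 30.0, 31.5):
    print(f"{temp:.1f} C -> {100 * fraction_female(temp):.0f}% female")

Even a modest sustained warming of the sand shifts the output from a balanced ratio toward the near-total feminisation reported for the northern beaches.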
<urn:uuid:40584a76-87b1-4f05-b9d3-6f01c8096ec7>
3.796875
896
News Article
Science & Tech.
42.875237
95,494,271
Non-governmental Organization (NGO): a group or business that is separate from a government or a for-profit business. These groups are often created by local citizens and run by volunteers.
A few activists in a small city may work to save monarch butterflies. Or maybe a large group of people are exploring how to conserve water during a drought. People in different cities or countries take different approaches to ecosystem management. Some organizations focus more on conservation of species, while others deal with water, land, or people. To better understand what ecosystem management involves in the real world, here are three organizations that are great examples. Each group works with the environment to accomplish some kind of goal, or mission.
In Portland, Oregon, a non-governmental organization called "Community Partners for Affordable Housing" is mainly focused on helping people in need find housing. On each of their properties, they teach their residents different ways to conserve resources and become more connected to the Oregon environment. One trick the residents use to save water is to put a bucket under the showerhead: before showering, while they wait for the water to get hot, they catch that extra water and then use it for outdoor plants and gardens. This is just one of the ways Community Partners for Affordable Housing residents and staff save natural resources.
The Monarch Watch organization educates the public about monarch butterflies. These butterflies are important pollinators that travel great distances over part of the species' migration every year. The monarch butterflies need the milkweed plant to live and lay eggs on. Monarch Watch suggests that people in certain areas of the country plant more milkweed in their gardens to give the monarchs more usable habitat; the people then get to enjoy monarchs visiting their yards. That is an example of an interaction that is positive for both humans and butterflies.
Lastly, Dr. Jamie Bechtel is a marine biologist and founder of a non-profit called New Course. As a marine biologist, Dr. Bechtel was aware of the effects that people have on marine habitats and worked extensively with local fishermen and farmers to improve fishing practice. It was through her partnership with local communities that she realized how the ecosystem affects people, especially women. New Course is the non-governmental organization (NGO) that Dr. Jamie Bechtel co-founded to involve women in ecosystem management. In many cultures, women are in charge of washing clothes, feeding the children, and growing the food for their homes. To do these tasks, women must find water, gather firewood to build fires, and tend to crops within their natural ecosystem. New Course helps women from many countries and cultures learn skills to better use their environment when doing these tasks.
New Course is an example of an international NGO: Dr. Bechtel works in Seattle, Washington, but the people she helps are on other continents such as Asia and Africa. Just like ecosystems, each community is unique and has different challenges. New Course has found that the most important part of helping a community is to communicate and work directly with the people involved.
These are just a few of the many organizations out there that are concerned with protecting the environment and ecosystem services. If you have an interest in becoming involved, try to learn about some of the organizations in your area that are gathering to make a difference.
Additional images via Wikimedia Commons. Image of Nepali girls by Department of Foreign Affairs and Trade.
Dr. Biology. (2015, August 31). Ecosystem Management Organizations. ASU - Ask A Biologist. Retrieved July 17, 2018 from https://askabiologist.asu.edu/ecosystem-management-organizations
Two girls show off their clean hands near where a water supply was established. NEWAH WASH is an NGO devoted to providing clean water and conditions to people in Nepal.
<urn:uuid:98779959-6d7a-48fd-acf4-052cc3c738de>
3.046875
906
Knowledge Article
Science & Tech.
41.445308
95,494,276
In the previous chapters, the focus was on how to use the .NET API. All of the examples were illustrated using C#, but the examples did not use any particular feature of C#. The examples could have been implemented with VB.NET or any other .NET language. That changes in this chapter, as the focus is on the C# programming language. Specific features of the language will be dissected and analyzed. Sometimes patterns will be used, and other times not. In the overall scheme of this chapter, the idea is to give you a better understanding of what C# is capable of and not capable of.
Keywords: Public Class, Class Implementation, Data Member, Code Solution, Nullable Type
<urn:uuid:bfacfb3a-37a4-4ad2-bac1-2acf0a6777e0>
3.109375
155
Truncated
Software Dev.
54.436989
95,494,290
Not only does Visual Basic let you store date and time information in the specific Date data type, it also provides a lot of date- and time-related functions. These functions are very important in all business applications and deserve an in-depth look. Strictly speaking, Date and Time aren't functions: they're properties. In fact, you can use them to either retrieve the current date and time (as Date values) or assign new values to them to modify the system settings:

Print Date & " " & Time   ' Displays "8/14/98 8:35:48 P.M.".
' Set a new system date using any valid date format.
Date = "10/14/98"
Date = "October 14, 1998"

To help you compare the outcome of all date and time ...
<urn:uuid:d11f05f0-0bd5-429a-bf5b-e1aa9c1bfc10>
2.6875
168
Truncated
Software Dev.
65.984484
95,494,297
An intensive survey deep into the universe by NASA's Hubble and Spitzer space telescopes has yielded the proverbial needle in a haystack: the farthest galaxy yet seen in an image that has been stretched and amplified by a phenomenon called gravitational lensing. The embryonic galaxy, named SPT0615-JD, existed when the universe was just 500 million years old. Though a few other primitive galaxies have been seen at this early epoch, they have essentially all looked like red dots, given their small size and tremendous distances. In this case, however, the gravitational field of a massive foreground galaxy cluster not only amplified the light from the background galaxy but also smeared its image into an arc (about 2 arcseconds long). "No other candidate galaxy has been found at such a great distance that also gives you the spatial information that this arc image does. By analyzing the effects of gravitational lensing on the image of this galaxy, we can determine its actual size and shape," said the study's lead author, Brett Salmon of the Space Telescope Science Institute in Baltimore, Maryland. He is presenting his research at the 231st meeting of the American Astronomical Society in Washington, D.C. First predicted by Albert Einstein a century ago, the warping of space by the gravity of a massive foreground object can brighten and distort the images of far more distant background objects. Astronomers use this "zoom lens" effect to hunt for amplified images of distant galaxies that otherwise would not be visible with today's telescopes. SPT0615-JD was identified in Hubble's Reionization Lensing Cluster Survey (RELICS) and the companion S-RELICS Spitzer program. "RELICS was designed to discover distant galaxies like these that are magnified brightly enough for detailed study," said Dan Coe, Principal Investigator of RELICS. RELICS observed 41 massive galaxy clusters for the first time in the infrared with Hubble to search for such distant lensed galaxies. One of these clusters was SPT-CL J0615-5746, which Salmon analyzed to make this discovery. Upon finding the lensed arc, Salmon thought, "Oh, wow! I think we're on to something!" By combining the Hubble and Spitzer data, Salmon calculated a lookback time to the galaxy of 13.3 billion years. Preliminary analysis suggests the diminutive galaxy weighs in at no more than 3 billion solar masses (roughly 1/100th the mass of our fully grown Milky Way galaxy). It is less than 2,500 light-years across, half the size of the Small Magellanic Cloud, a satellite galaxy of our Milky Way. The object is considered prototypical of young galaxies that emerged during the epoch shortly after the big bang. The galaxy is right at the limits of Hubble's detection capabilities, but it is just the beginning for the upcoming NASA James Webb Space Telescope's powerful capabilities, said Salmon. "This galaxy is an exciting target for science with the Webb telescope as it offers the unique opportunity for resolving stellar populations in the very early universe." Spectroscopy with Webb will allow astronomers to study in detail the firestorm of starbirth activity taking place at this early epoch, and to resolve its substructure.
<urn:uuid:fd33b462-b6ee-4d71-ae46-c144ba2c9384>
3.59375
661
News Article
Science & Tech.
35.565968
95,494,325
Our Top 10 Innovations Contest Is Now Accepting Submissions
Staff | May 14, 2018
Enter your new product to have a chance at being selected for a coveted spot in the 2018 competition.
New Enzyme Makes CRISPR More Powerful
Shawna Williams | Mar 2, 2018
xCas9 enables more precisely targeted gene editing.
New CRISPR-Based Tools Flag Genetic Sequences and Log Data
Diana Kwon | Feb 16, 2018
SHERLOCK and DETECTR can identify particular nucleic acid sequences, while CAMERA records events in human and bacterial cells.
Optical Cell Sorting
Rachel Berkowitz | Dec 1, 2017
Researchers are using light and new image-processing tools for label-free cell characterization.
A Growing Open Access Toolbox
Diana Kwon | Nov 28, 2017
Legal methods to retrieve paywalled articles for free are on the rise, but better self-archiving practices could help improve accessibility.
Implanted Magnetic Probes Measure Brain Activity
Ruth Williams | Nov 1, 2017
Micrometer-size magnetrodes detect activity-generated magnetic fields within living brains.
Infographic: Reading the Mind's Magnetism
Ruth Williams | Oct 31, 2017
Newly designed sensors detect the magnetic fields generated by electrical activity within cat brains.
Last Chance to Enter the Fray
Staff | Jun 11, 2017
You only have a couple of days left to submit a product in the Top 10 Innovations competition. Your product can't win if it doesn't get in!
2016 Top 10 Innovations: Honorable Mentions
Staff | Nov 30, 2016
These runners-up to the Top 10 Innovations of 2016 caught our judges' attention.
Gut Bacteria Vectors
Ruth Williams | Jun 1, 2016
Researchers mix bacteria genetically engineered to express double-stranded RNA into insect food.
<urn:uuid:1d2ac47d-49d3-4b31-afcf-17116ded8b5a>
2.71875
438
Content Listing
Science & Tech.
33.850288
95,494,331
Stars do not like to be alone. Indeed, most stars are members of a binary system, in which two stars circle around each other in an apparently never-ending cosmic ballet. But sometimes, things can go wrong. When the dancing stars are too close to each other, one of them can start devouring its partner. If the vampire star is a white dwarf – a burned-out star that was once like our Sun – this greed can lead to a cosmic catastrophe: the white dwarf explodes as a Type Ia supernova. ESO PR Photo 39/07: SN 2006dr in NGC 1288. In July 2006, ESO's Very Large Telescope took images of such a stellar firework in the galaxy NGC 1288. The supernova - designated SN 2006dr - was at its peak brightness, shining as bright as the entire galaxy itself, bearing witness to the amount of energy released. NGC 1288 is a rather spectacular spiral galaxy, seen almost face-on and showing multiple spiral arms pirouetting around the centre. Bearing a strong resemblance to the beautiful spiral galaxy NGC 1232, it is located 200 million light-years away from our home Galaxy, the Milky Way. Two main arms emerge from the central regions and then progressively split into other arms when moving further away. A small bar of stars and gas runs across the centre of the galaxy. The first images of NGC 1288, obtained during the commissioning period of the FORS instrument on ESO's VLT in 1998, were of such high quality that they have allowed astronomers to carry out a quantitative analysis of the morphology of the galaxy. They found that NGC 1288 is most probably surrounded by a large dark matter halo. The appearance and number of spiral arms are indeed directly related to the amount of dark matter in the galaxy's halo. The supernova was first spotted by amateur astronomer Berto Monard. On the night of 17 July 2006, Monard used his 30-cm telescope in the suburbs of Pretoria in South Africa and discovered the supernova as an apparent 'new star' close to the centre of NGC 1288, which was then designated SN 2006dr. The supernova reached magnitude 16, that is, it was about 10 000 times fainter than what the unaided eye can see. Using spectra obtained with the Keck telescope on 26 July 2006, astronomers from the University of California found SN 2006dr to be a Type Ia supernova that expelled material with speeds up to 10 000 km/s. Type Ia supernovae play a very useful role as cosmological distance indicators, allowing astronomers to study the expansion history of our Universe, leading to the conclusion that the Universe is expanding at an accelerating rate (see e.g. ESO PR 21/98). Henri Boffin | alfa
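The "10 000 times fainter" figure follows from the astronomical magnitude scale, where each magnitude step is a factor of 10^0.4 in flux. Taking a naked-eye limiting magnitude of about 6 — an assumption, since the article does not state its reference magnitude — a magnitude-16 source is 10 magnitudes, i.e. 10^4 times, fainter. A quick check:

```python
# Sketch: flux ratio implied by a magnitude difference.
# Assumes a naked-eye limiting magnitude of ~6 (not stated in the article).
naked_eye_limit = 6.0
supernova_mag = 16.0

flux_ratio = 10 ** (0.4 * (supernova_mag - naked_eye_limit))
print(f"fainter by a factor of {flux_ratio:,.0f}")  # -> 10,000
```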
<urn:uuid:40451e50-e227-4ceb-aae5-7f0b25edb590>
3.1875
1,164
Content Listing
Science & Tech.
47.317243
95,494,345
Meet Doublesex, the Sex Characteristics Gene News Feb 28, 2017 | Original story from Indiana University Bloomington. Cris Ledón-Rettig and Armin Moczek in the lab. Photo by Indiana University. Physical differences between males and females in species are common, but there remains much to learn about the genetic mechanisms behind these differences. New research by scientists at Indiana University finds that the "master gene" that regulates these differences plays a complex role in matching the right physical trait to the right sex. The study, published in the journal Nature Communications, reveals new details about the behavior of the gene called "doublesex," or dsx. "We want to know more about this gene because it helps us answer a major question about development and evolution: How do animals with similar genomes -- such as males and females of the same species -- produce different versions of the same trait? And why do some traits, like ornamental features that attract mates, vary so widely, while others, like legs, don't?" said Cris Ledón-Rettig, a postdoctoral researcher in the IU Bloomington College of Arts and Sciences' Department of Biology, who led the study. The study is significant because it's the first to look at the effect of dsx across the whole genome. It finds that the gene isn't simply a "switch" that turns off certain male traits in females, as previously thought. Rather, it plays a highly complex role in controlling the expression of physical differences at different points in the genome based upon sex. The paper's co-lead author is Eduardo Zattara, a postdoctoral researcher in the Department of Biology. The senior author is Armin Moczek, a professor in the department. The fine-grained control that dsx exerts over male and female traits is possible because the gene acts in a surprising variety of ways, Ledón-Rettig said. By activating different genes in males and females, for example, it can promote male or female versions of the same trait, such as genitalia. Or, by activating the same genes in males while simultaneously inhibiting them in females, it can promote opposite traits. "The power to prevent the expression of male traits in females, and vice versa, is a critical feature," Moczek said. "It buffers traits that benefit only members of one sex from causing harm in members of the other." For instance, in the species used by IU researchers to study dsx -- the beetle Onthophagus taurus -- males possess elaborate horns to battle rivals over females. These horns do not offer a similar reproductive advantage to females, however -- large horns would interfere with their ability to dig tunnels used to nest offspring. A similar dynamic exists in birds. Higher testosterone attracts more mates in male birds due to greater aggression but decreases maternal instinct in females. Both examples underline the tension that can exist between natural selection, which favors traits that promote a species' survival, and sexual selection, which favors traits that attract mates. If a species lacks this ability to "buffer" between male and female traits, it can go extinct. To conduct the study, the IU scientists compared genes expressed in normal beetles to genes expressed in beetles in which dsx was suppressed. The comparison revealed over 1,000 points on the genome in normal beetles where dsx affected gene expression in males and over 250 points where it affected gene expression in females. Importantly, Ledón-Rettig said, the majority of these points did not overlap.
This indicated that dsx didn't simply turn certain genes "on" or "off" for most of the traits studied but rather affected gene expression at different locations in the genome based on sex. "Essentially, dsx instructs the development of male and female versions of the same trait by influencing different genes in each sex," she said. This was especially the case when they looked at the effect of dsx on the brain, which regulates sex-specific behaviors, and the genitalia, used in reproduction. But for one trait -- head horns -- the study showed that dsx sometimes targets the exact same genes in both sexes. In this situation, dsx regulated the genes in opposite directions, creating completely horned males and completely hornless females. When the scientists disabled dsx, both sexes developed similarly sized intermediate horns. Onthophagus taurus is one of the few insect species in which it's possible to conduct a whole-genome analysis, since its genome has been sequenced by the i5k Project, a.k.a. "The Manhattan Project of Entomology," a large-scale effort supported by the U.S. Department of Agriculture that aims to transcribe the genomes of 5,000 insects and other arthropods. Genetic sequencing of the species was conducted through the project using insects provided by Moczek's lab at IU, which has pioneered the use of insects to study fundamental principles in evolution. Ledón-Rettig and Zattara are members of Moczek's lab. "We're eager to extend our work on the role of dsx -- and other genes -- to sexual differences across other, closely related species of beetles," Ledón-Rettig said. "These beetles are really a powerful platform for unraveling the fundamental mechanisms that underlie evolutionary diversification of sexual traits across species." This article has been republished from materials provided by Indiana University Bloomington. Note: material may have been edited for length and content. For further information, please contact the cited source. Ledón-Rettig, C. C., Zattara, E. E., & Moczek, A. P. (2017). Asymmetric interactions between doublesex and tissue- and sex-specific target genes mediate sexual dimorphism in beetles. Nature Communications, 8, 14593. doi:10.1038/ncomms14593
<urn:uuid:927d852a-8e85-486a-a38a-39f15c2f8602>
3.625
1,459
News Article
Science & Tech.
35.885194
95,494,357
How Does Wind Power Work? Hands-On KidWind Challenge Trains Students in Renewable Energy by Jeff McIntire-Strasburg, February 4, 2014, 10:02 am. I've been passionate about educational programs for sustainability from sustainablog's earliest days, so I wasn't surprised at all to discover that I'd written about Minneapolis-based educational company KidWind way back in 2006. Founded by former science teacher Michael Arquin, KidWind has developed an impressive array of educational programming both for science educators wanting to introduce their students to renewable energy, and for students themselves. The KidWind Challenge, the organization's signature program, gives students a chance to compete in building small, working wind turbines. KidWind plans to host 35 of these programs across the United States in 2014, as well as two international events. Take a look at this report from last year's event in Alberta, Canada, to get a sense of how the competition works: No doubt, the kids have a great time with this (and I'm betting teachers do, too). More importantly, though, this is a great way for kids to learn math, physics, engineering, etc. – younger students pick up concepts and information much better if they can "get their hands dirty." Of course, running events like these, as well as hosting teacher training workshops, and creating and maintaining online resources for students and teachers, isn't cheap: KidWind notes that a single Challenge event costs at least $2,000. In order to make sure that they're able to host all of the events planned for the coming year, they've launched an indiegogo campaign to raise $70,000 for the KidWind Challenge and supporting materials. As I learned from their press materials, a donation as small as $5 can get one student started on an educational journey involving renewable energy. Interested in helping? Head over to the campaign, check out the other information they've shared about the program, as well as the specific program elements you'll fund at certain donation levels. If you decide to kick in, let us know, and tell us what inspired you... Of course, if you've ever been a part of any KidWind events, let us know about your experience. Featured image credit: screen capture from "KidWind Challenge 2013" video.
<urn:uuid:6aa6f39c-f29d-4f31-86c4-5142d27b3367>
2.6875
590
Personal Blog
Science & Tech.
39.228186
95,494,366
From Associated Press, published on 25 January 2016. Forget about selfies. In California, residents are using smartphones and drones to document the coastline's changing face. Starting January 2016, The Nature Conservancy is asking tech junkies to capture the flooding and coastal erosion that come with El Niño, a weather pattern that's bringing California its wettest winter in years - and all in the name of science. The idea is that crowd-sourced, geotagged images of storm surges and flooded beaches will give scientists a brief window into what the future holds as sea levels rise from global warming, a sort of crystal ball for climate change. Images from the latest drones, which can produce high-resolution 3D maps, will be particularly useful and will help scientists determine if predictive models about coastal flooding are accurate, said Matt Merrifield, the organization's chief technology officer. "We use these projected models and they don't quite look right, but we're lacking any empirical evidence," he said. "This is essentially a way of 'ground truthing' those models." Experts on climate change agreed that El Niño-fueled storms offer a sneak peek of the future and said the project was a novel way to raise public awareness. Because of its crowd-sourced nature, however, they cautioned the experiment might not yield all the results organizers hoped for, although any additional information is useful. "It's not the answer, but it's a part of the answer," said Lesley Ewing, senior coastal engineer with the California Coastal Commission. "It's a piece of the puzzle." In California, nearly a half-million people, $100 billion in property and critical infrastructure such as schools, power plants and highways will be at risk of inundation during a major storm if sea level rises another 4.6 feet - a figure that could become a reality by 2100, according to a 2009 Pacific Institute study commissioned by three state agencies. Beaches that Californians take for granted will become much smaller or disappear altogether and El Niño-fueled storms will have a similar effect, if only temporarily, said William Patzert, a climatologist for NASA's Jet Propulsion Laboratory. "When you get big winter storm surge like they want to document, you tend to lose a lot of beach," he said. "In a way, it's like doing a documentary on the future. It'll show you what your beaches will look like in 100 years." What the mapping won't be able to predict is exactly which beaches will disappear and which bluffs will crumble - all things that will affect how flooding impacts coastal populations, said Ewing, the California Coastal Commission engineer. "We're not going to capture that change," she said. "We're going to capture where the water could go to with this current landscape and that's still a very important thing to understand because it gets at those hot spots." So far, project organizers aren't giving assignments to participants, although they may send out specific requests as the winter unfolds, said Merrifield. If users wind up mapping real-time flooding events along 10 or 15 percent of California's 840-mile-long coastline the project will be a success, he said. A realistic goal is a "curated selection" of 3D maps showing flooding up and down the coast at different dates and times. The Nature Conservancy has partnered with a San Francisco-area startup called DroneDeploy that will provide a free app to drone owners for consistency.
The app will provide automated flight patterns at the touch of a screen while cloud-based technology will make managing so much data feasible, said Ian Smith, a business developer for the company. Trent Lukaczyk heard about the experiment from a posting in a Facebook group dedicated to drone enthusiasts. For the aerospace engineer, who has already used drones to map coral reefs in American Samoa, the volunteer work was appealing. "It's a really exciting application. It's not just something to take a selfie with," he said, before heading out to collect images of beach erosion after a storm in Pacifica, California. For additional reading on green technology, please refer to the following links: How Tech Companies are Promoting Sustainability; How Google Earth Promotes Environmental Protection in Near Real-Time; The Role of Technology in Sustainability.
<urn:uuid:2b8769b1-6670-4058-8e1e-1ea1c46ef067>
3.296875
962
News (Org.)
Science & Tech.
34.812848
95,494,378
Chlorine — selected data:
- Appearance: pale yellow-green gas
- Name, symbol, number: chlorine, Cl, 17
- Pronunciation: KLOHR-ən
- Group, period, block: 17, 3, p
- Standard atomic weight: 35.45(1) g/mol
- Electron configuration: [Ne] 3s2 3p5
- Electrons per shell: 2, 8, 7
- Density (gas, 0 °C, 101.325 kPa): 3.2 g/L
- Liquid density at boiling point: 1.5625 g/cm3
- Melting point: 171.6 K (-101.5 °C, -150.7 °F)
- Boiling point: 239.11 K (-34.04 °C, -29.27 °F)
- Critical point: 416.9 K, 7.991 MPa
- Heat of fusion (Cl2): 6.406 kJ/mol
- Heat of vaporization (Cl2): 20.41 kJ/mol
- Specific heat capacity (25 °C, Cl2): 33.9 J/(mol·K)
- Oxidation states: 7, 6, 5, 4, 3, 2, 1, -1 (strongly acidic oxide)
- Electronegativity: 3.16 (Pauling scale)
- Ionization energies: 1st: 1251.2 kJ/mol; 2nd: 2298 kJ/mol; 3rd: 3822 kJ/mol
- Covalent radius: 102±4 pm
- Van der Waals radius: 175 pm
- Electrical resistivity (20 °C): > 10 Ω·m
- Thermal conductivity (300 K): 8.9×10−3 W/(m·K)
- Speed of sound (gas, 0 °C): 206 m/s
- CAS registry number: 7782-50-5
- Most stable isotopes: see Isotopes of chlorine

Chlorine (chemical symbol Cl) is a chemical element. Its atomic number (which is the number of protons in it) is 17, and its atomic mass is 35.45. It is part of the 7th column (halogens) on the periodic table of elements.

Properties

Physical properties

Chlorine is a very irritating, greenish-yellow gas. It has a strong smell like bleach. It is toxic. It can be made into a liquid when cooled. It is heavier than air.

Chemical properties

Chlorine is highly reactive. It is more reactive than bromine but less reactive than fluorine. It reacts with most things to make chlorides, and some things can burn in chlorine instead of oxygen. It dissolves in water to make a mixture of hypochlorous acid and hydrochloric acid. The more acidic the solution, the more of it stays as dissolved chlorine; the more basic it is, the more of it converts to hypochlorous acid (normally turned into hypochlorite) and hydrochloric acid (normally turned into chlorides). Chlorine reacts with bromides and iodides to make bromine and iodine.

Chlorine compounds

Chlorine exists in several oxidation states: -1, +1, +3, +4, +5, and +7. The -1 state, found in chlorides, is the most common. Chlorides are not very reactive. Compounds containing chlorine in its +1 oxidation state are hypochlorites; only one is common. They are strong oxidizing agents, as are all chlorine compounds in positive oxidation states. +3 is found in chlorites, +4 in chlorine dioxide (a common chlorine compound that is not a chloride), +5 in chlorates, and +7 in perchlorates. Hypochlorites are the most reactive and perchlorates the least. Chlorine oxides can be made, but most of them are very reactive and unstable.

Occurrence

Chlorine is not found as a free element. Sodium chloride is the most common chlorine ore. It is in the ocean (sea salt) and in the ground (rock salt). There are also some organic compounds that contain chlorine.

Preparation

Chlorine is made by electrolysis (passing electricity through a solution to drive chemical reactions) of sodium chloride; this is known as the chloralkali process. It can also be made by reacting hydrogen chloride with oxygen and a catalyst. In the laboratory it can be made by reacting manganese dioxide with hydrochloric acid. It is also made when sodium hypochlorite reacts with hydrochloric acid; this is a dangerous reaction that can happen without anyone realizing it.
Uses

Chlorine is used widely to purify water (for example in swimming pools), as a disinfectant and bleach, and in the making of many important compounds, including chloroform and carbon tetrachloride. It was used as a poison gas in some wars.

History

Chlorine was discovered in 1774 by Carl Wilhelm Scheele, who thought it contained oxygen. It was named in 1810 by Humphry Davy, who insisted it was an element. Drinking water in the US was widely chlorinated by 1918.

Safety

Chlorine is poisonous in large amounts and can damage skin. When it is inhaled (breathed in), it badly irritates the lungs, eyes, and skin. Because it is very reactive, it can cause fires with some materials. It is heavier than air, so it can fill up enclosed spaces.

Sources

- Chlorine, Gas Encyclopaedia, Air Liquide.
- Magnetic susceptibility of the elements and inorganic compounds, in Lide, D. R., ed. (2005). CRC Handbook of Chemistry and Physics (86th ed.). Boca Raton (FL): CRC Press. ISBN 0-8493-0486-5.
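For reference, the balanced equations usually given for the reactions described above are shown below. These are standard textbook equations added for clarity, not taken from the source entry.

```latex
% Dissolution of chlorine in water (disproportionation):
\[ \mathrm{Cl_2} + \mathrm{H_2O} \rightleftharpoons \mathrm{HOCl} + \mathrm{HCl} \]
% Chloralkali electrolysis of brine:
\[ 2\,\mathrm{NaCl} + 2\,\mathrm{H_2O} \rightarrow \mathrm{Cl_2} + \mathrm{H_2} + 2\,\mathrm{NaOH} \]
% Laboratory preparation from manganese dioxide:
\[ \mathrm{MnO_2} + 4\,\mathrm{HCl} \rightarrow \mathrm{MnCl_2} + \mathrm{Cl_2} + 2\,\mathrm{H_2O} \]
% Accidental generation from bleach (sodium hypochlorite) and acid:
\[ \mathrm{NaOCl} + 2\,\mathrm{HCl} \rightarrow \mathrm{NaCl} + \mathrm{Cl_2} + \mathrm{H_2O} \]
```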
<urn:uuid:5ce8d343-abec-40c7-badb-75768ad800d9>
2.984375
1,345
Knowledge Article
Science & Tech.
66.207897
95,494,384
How do they come up with the equations in (308) mathematically? Why do (308) give solutions to (285) and (286)? Or why do (308) determine whether (285) and (286) have one or more solutions? I don't wonder about the proof of why the Laplace equation (309), introduced as a general equation later in the text, has only one solution. Thanks.

Hello, and thank you for posting your question to BrainMass. The solution is attached in two files; the files are identical in content and differ only in format. The first is in MS Word format, while the other is in Adobe PDF format, so you can choose whichever is most suitable. See below for a condensed version of the solution.

Can we determine a vector field given just its curl, divergence and physical boundary conditions, and is this determination unique? Let F and A be vector fields and let a scalar function be given such that equations (1.1) and (1.2) hold (the equations themselves are in the attachment). If we want to write these equations explicitly in Cartesian coordinates, we get four equations: the first is the scalar equation (1.1), and three more come from the vector equation (1.2) when we equate components on both sides. The whole point of the exercise is to find a vector field F that will satisfy these four first-order partial differential equations. The solution shows how to decouple the first-order differential equations arising from the curl and divergence, turn them into Helmholtz equations, and show what the conditions for uniqueness are.
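The attachment's equations are not reproduced in the text, but the standard setup matching this description is the following. Take it as a plausible reconstruction rather than the solution's actual notation; the symbol φ and the numbering are assumptions.

```latex
% Presumed form of (1.1) and (1.2): prescribe divergence and curl of F.
\[ \nabla \cdot \mathbf{F} = \varphi \tag{1.1} \]
\[ \nabla \times \mathbf{F} = \mathbf{A} \tag{1.2} \]
% In Cartesian coordinates these give four first-order PDEs:
\[ \partial_x F_x + \partial_y F_y + \partial_z F_z = \varphi \]
\[ \partial_y F_z - \partial_z F_y = A_x, \quad
   \partial_z F_x - \partial_x F_z = A_y, \quad
   \partial_x F_y - \partial_y F_x = A_z \]
% Taking the curl of (1.2) and using the identity
% curl(curl F) = grad(div F) - Laplacian(F) decouples the system into a
% second-order equation of Poisson/Helmholtz type for F alone:
\[ \nabla^2 \mathbf{F} = \nabla \varphi - \nabla \times \mathbf{A} \]
```

With physical boundary conditions, each Cartesian component then satisfies its own scalar second-order equation, which is where the uniqueness argument (as for the Laplace equation (309)) enters.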
<urn:uuid:ba7d7994-a918-4586-98f1-7bdcecbb98b4>
3.625
338
Q&A Forum
Science & Tech.
62.371091
95,494,425
The shape of the sunspot cycle

The temporal behavior of a sunspot cycle, as described by the International sunspot numbers, can be represented by a simple function with four parameters: starting time, amplitude, rise time, and asymmetry. Of these, the parameter that governs the asymmetry between the rise to maximum and the fall to minimum is found to vary little from cycle to cycle and can be fixed at a single value for all cycles. A close relationship is found between rise time and amplitude which allows for a representation of each cycle by a function containing only two parameters: the starting time and the amplitude. These parameters are determined for the previous 22 sunspot cycles and examined for any predictable behavior. A weak correlation is found between the amplitude of a cycle and the length of the previous cycle. This allows for an estimate of the amplitude accurate to within about 30% right at the start of the cycle. As the cycle progresses, the amplitude can be better determined to within 20% at 30 months and to within 10% at 42 months into the cycle, thereby providing a good prediction both for the timing and size of sunspot maximum and for the behavior of the remaining 7–12 years of the cycle.

Keywords: Solar Phys · Sunspot Number · Previous Cycle · Sunspot Cycle · Solar Dynamo
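In later literature the four-parameter profile described here is commonly quoted in the form f(t) = A (t − t0)³ / [exp((t − t0)²/B²) − c], with the asymmetry parameter c fixed near 0.71. The sketch below implements that form; the functional form and the parameter values are assumptions for illustration, not taken from this abstract.

```python
import numpy as np

def sunspot_cycle(t_months, t0, amplitude, rise_b, c=0.71):
    """Cycle profile f(t) = A x^3 / (exp(x^2 / B^2) - c), x = t - t0.
    Parameter names and the default c are illustrative assumptions."""
    x = np.clip(np.asarray(t_months, dtype=float) - t0, 0.0, None)
    return amplitude * x**3 / (np.exp((x / rise_b) ** 2) - c)

# Illustrative evaluation over an 11-year cycle with made-up parameters.
t = np.arange(0, 132)  # months since cycle start
curve = sunspot_cycle(t, t0=0.0, amplitude=2e-3, rise_b=50.0)
print(f"peak ~ {curve.max():.0f} around month {int(curve.argmax())}")
```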
<urn:uuid:71cd01ac-25b7-4337-a9fd-d614f4d3878f>
3.34375
608
Academic Writing
Science & Tech.
73.178309
95,494,446
(Reuters) - Chances of the emergence of the El Nino weather pattern have increased to 65 percent during the fall and 70 percent during winter 2018-19, a U.S. government weather forecaster said on Thursday.

Atmosphere And Ozone Layer

The basic causes of the large-scale deficiency in the South-West Monsoon rainfall over India in 1965 and 1966: The years 1965 and 1966 will be remembered in Indian history as two successive years in which the failure of the south-west monsoon had a disastrous effect on the Indian economy. This article is an attempt...

The S.W. monsoon or summer monsoon is a major event in the agricultural life of India, which has been widely studied. The chief characteristic of the pressure distribution over India during the monsoon...
<urn:uuid:724d8a1c-340d-4f8d-9bef-abe6e8a8161c>
2.625
160
Content Listing
Science & Tech.
53.032326
95,494,457
Astronomers: Gold, Silver, All Heavy Elements the Result of Neutron Star Merger

Image: NASA/R. Hurt (Caltech-JPL). Tech | 03:03, 18.10.2017.

Astronomers observing the collision of two hyperdense neutron stars have concluded that all the heaviest elements in the universe, including precious metals like gold and silver as well as substances like uranium, are created during neutron star mergers just like the one they observed. Astrophysicists have long wondered where the heaviest elements in the universe come from. The leading hypothesis was that they were synthesized only in the extreme conditions of a neutron star merger, but there was no evidence to prove this. However, physicists from the University of California (UC) Berkeley and Lawrence Berkeley National Laboratory say that the recently observed neutron star merger is the first definitive evidence for the hypothesis. Two neutron stars, each the size of a tiny island like Malta but also each twice as heavy as our sun, slammed into each other in an event dubbed GW170817. The Berkeley teams observed the event in August and saw that the enormous exchange of energy ejected rich clouds of free neutrons into space. These free neutrons bombarded atoms in space, turning them into progressively heavier elements. This is called a "kilonova," essentially a more powerful and intense version of an ordinary nova, which is caused by the merger of less exotic stars such as red dwarfs. "We have been working for years to predict what the light from a neutron [star] merger would look like," said Daniel Kasen, a UC Berkeley physics professor and Berkeley Lab scientist who worked on the study. "Now that theoretical speculation has suddenly come to life." "For years the idea of a kilonova had existed only in our theoretical imagination and our computer models," he added. "Given the complex physics involved, and the fact that we had essentially zero observational input to guide us, it was an insanely treacherous prediction — the theorists were really sticking their necks out." Here's the quick and dirty explanation: Immediately after the Big Bang, there were only two elements in existence: hydrogen and helium. When stars formed and began the process of nuclear fusion, they began to synthesize new, progressively heavier elements. However, this process only accounts for the next 24 elements on the Periodic Table, like carbon and oxygen and lastly iron, the final element that can be formed in non-extreme conditions. Astrophysicists have long wondered where the other naturally forming elements come from. Now, according to Kasen, they have their answer. The gold in your tooth, watch or ring and the silver in your phone or car engine were forged in the unimaginable pressure of a merger of neutron stars. Awesome. "Most of the time in science you are working to gradually advance an established subject," Kasen said. "It is rare to be around for the birth of an entirely new field of astrophysics. I think we are all very lucky to have had the chance to play a role." The observation was done with the Laser Interferometer Gravitational-Wave Observatory (LIGO) as well as its counterpart, the Virgo detector in Italy. The observatories were made to search for the source of gravitational waves, ripples in spacetime caused by the movement and activity of massive celestial objects. Usually, gravitational waves result from black hole activity — and since black holes do not emit visible light, such events are difficult to study.
This was the first gravitational wave LIGO and Virgo were able to study that emitted visible light. Neutron stars, which were once the cores of massive stars that underwent supernovae, are the second densest known objects in the universe after black holes.
<urn:uuid:06513161-e141-45f5-b64e-ee6150bb717d>
3.265625
797
News Article
Science & Tech.
33.849349
95,494,459
Don't tell Superman but Kryptonite exists

Last updated at 10:39, 24 April 2007.

A newly-discovered mineral has been found to contain exactly the same elements as the large green crystals that rob the superhero of his powers. Unlike fictional kryptonite, the real thing at London's Natural History Museum is white and powdery, emits no radiation, and comes from Serbia rather than outer space. But scientists who analysed the find were astonished to discover that its chemical composition matched a description of kryptonite in the film Superman Returns. In the 2006 movie, Superman's arch enemy Lex Luthor steals a kryptonite rock fragment from the Metropolis Museum. On the case are written the words "sodium lithium boron silicate hydroxide with fluorine". Mineralogist Dr Chris Stanley, from the Natural History Museum, said: "Towards the end of my research, I searched the web using the mineral's chemical formula - sodium lithium boron silicate hydroxide - and was amazed to discover that same scientific name written on a case of rock containing kryptonite stolen by Lex Luthor from a museum in the film Superman Returns. "The new mineral does not contain fluorine and is white rather than green, but in all other respects the chemistry matches that for the rock containing kryptonite. "We will have to be careful with it - we wouldn't want to deprive Earth of its most famous superhero!" The unusual mineral was unearthed in Serbia by geologists from the mining group Rio Tinto. As it is unlike anything previously known to science, they enlisted the help of experts at the Natural History Museum. Between 30 and 40 new minerals are discovered each year but before it can be classified as new, a mineral's chemical properties, including its crystalline structure, must be rigorously tested. Dr Stanley recruited colleagues at Canada's National Research Council (NRC) to examine the "kryptonite" using state-of-the-art X-ray facilities in Ottawa. Dr Yvon Le Page, an expert in the field of crystallography at the NRC, said finding that a material's chemical composition was an exact match for fictional kryptonite was "the coincidence of a lifetime". A comparison with a database of all existing known minerals proved that the new material was unique. Tempting though it might have been to christen the new mineral kryptonite, scientists opted for the name jadarite. It will be formally described in the European Journal of Mineralogy later this year. The mineral can be seen at the Natural History Museum in free events tomorrow and on Sunday May 13.
<urn:uuid:371be77f-e408-45a1-afe4-3b81f389ca6e>
2.828125
714
Truncated
Science & Tech.
29.021762
95,494,461
New, very precise measurements have shown that the rotation of the Milky Way is simpler than previously thought. A remarkable result from the most successful ESO instrument HARPS shows that a much debated, apparent 'fall' of neighbourhood Cepheid stars towards our Sun stems from an intrinsic property of the Cepheids themselves. The result, obtained by a group of astrophysicists led by Nicolas Nardetto, will soon appear in the journal Astronomy & Astrophysics. Since Henrietta Leavitt's discovery of their unique properties in 1912, the class of bright, pulsating stars known as Cepheids has been used as a distance indicator. Combined with velocity measurements, the properties of Cepheids are also an extremely valuable tool in investigations of how our galaxy, the Milky Way, rotates. "The motion of Milky Way Cepheids is confusing and has led to disagreement among researchers," says Nardetto. "If the rotation of the Galaxy is taken into account, the Cepheids appear to 'fall' towards the Sun with a mean velocity of about 2 km/s." A debate has raged for decades as to whether this phenomenon was truly related to the actual motion of the Cepheids and, consequently, to a complicated rotating pattern of our galaxy, or if it was the result of effects within the atmospheres of the Cepheids. Nardetto and his colleagues observed eight Cepheids with the high precision HARPS spectrograph, attached to the 3.6-m ESO telescope at La Silla, 2400 m up in the mountains of the Chilean Atacama Desert. HARPS, or the High Accuracy Radial Velocity Planetary Searcher, is best known as a very successful planet hunter, but it can also be used to resolve other complicated cases, where its ability to determine radial velocities - the speed with which something is moving towards or away from us - with phenomenally high accuracy is invaluable. "Our observations show that this apparent motion towards us almost certainly stems from an intrinsic property of Cepheids," says Nardetto. The astronomers found that the deviations in the measured velocity of Cepheids were linked to the chemical elements in the atmospheres of the Cepheids considered. "This result, if generalised to all Cepheids, implies that the rotation of the Milky Way is simpler than previously thought, and is certainly symmetrical about an axis," concludes Nardetto. Henri Boffin | alfa
<urn:uuid:9974da61-645c-4274-9123-e797379d493d>
3.546875
1,178
Content Listing
Science & Tech.
36.621995
95,494,484
Scanning electrochemical microscopy decisively optimized

How active a living cell is can be seen from its oxygen consumption. The method for determining this consumption has now been significantly improved by chemists in Bochum. The problem up to now was that the measuring electrode altered the oxygen consumption in the cell's environment much more than the cell itself. "We already found that out twelve years ago," says Prof. Dr. Wolfgang Schuhmann from the Department of Analytical Chemistry at the Ruhr-Universität. "Now we have finally managed to make the measuring electrode a spectator." Together with his team, he reports in the "International Edition" of the journal Angewandte Chemie.

Precise positioning of the measuring electrodes

Cells need oxygen for various metabolic processes, for example to break down glucose. To measure its consumption, researchers have to detect very small signals in a large background noise. For this they use scanning electrochemical microscopy, for which they need to position electrodes with a diameter of five micrometres or below at a distance of 200 nanometres from the cell. To this end, the RUB team has developed a special process over the last few years, with which the distance of the electrode to the cell can be precisely controlled.

Competing with the cells using microelectrodes

Using the electrode, the researchers first generate oxygen in the aqueous environment of the cell, and then they measure how much of this oxygen the cell utilises. For this purpose, they give the electrode a certain potential at the beginning. This has the effect that electrons are extracted from water in the cell environment under formation of oxygen. The cell can use the oxygen for its metabolism; at the same time, however, the microelectrode applied by the researchers competes against it. They change the potential at the electrode so that the reaction reverses: oxygen is now converted to water. The scientists use the electrode to measure the electrons flowing and thus obtain a measure of the oxygen consumption in the local environment. The more oxygen the cell uses for its metabolism, the less oxygen is left for the current-generating reaction at the electrode. Thus, the lower the current flow measured, the greater the activity of the cell. This method is termed the redox competition mode.

In the methods used so far, the oxygen consumption caused by the electrode was significantly higher than that of the cell. "The measurement itself thus caused a stronger local change in the oxygen concentration than the cell metabolism," explains Prof. Schuhmann. It was essential to measure the activity of the cell very quickly after the oxygen was generated at the microelectrode, i.e. after twenty milliseconds. If you wait longer, the electrode deprives the cell of oxygen instead of using the oxygen from the environment that the researchers had artificially created in advance. Three factors were therefore crucial for the success of the Bochum method: the highly accurate positioning of the electrodes, the redox competition mode and the rapid measuring time.

Bibliographic record: M. Nebel, S. Grützke, N. Diab, A. Schulte, W. Schuhmann (2013): Visualization of oxygen consumption of single living cells by scanning electrochemical microscopy: the influence of the faradaic tip reaction, Angewandte Chemie International Edition, DOI: 10.1002/anie.201301098
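The two half-reactions behind the redox competition mode, in their standard textbook form (added here for clarity — the paper itself is not quoted), are water oxidation at the first potential to generate oxygen, and the reverse oxygen reduction during the measurement.

```latex
% Oxygen generation at the microelectrode (anodic step):
\[ 2\,\mathrm{H_2O} \rightarrow \mathrm{O_2} + 4\,\mathrm{H^+} + 4\,e^- \]
% Oxygen reduction during the measurement (cathodic step), which
% competes with the cell's own O2 consumption:
\[ \mathrm{O_2} + 4\,\mathrm{H^+} + 4\,e^- \rightarrow 2\,\mathrm{H_2O} \]
```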
<urn:uuid:956e9245-fdf2-4d63-b1d0-36ce03212298>
3.34375
711
News Article
Science & Tech.
34.994781
95,494,505
The movies promise to give astronomers a better understanding of how black holes shape galaxy evolution. "Central, supermassive black holes are a key component in all big galaxies," said Eileen T. Meyer of the Space Telescope Science Institute in Baltimore, Md. "Most of these black holes are believed to have gone through an active phase, and black-hole powered jets from this active phase play a key role in the evolution of galaxies. By studying the details of this process in the nearest galaxy with an optical jet, we can hope to learn more about galaxy formation and black hole physics in general." The research team spent eight months analyzing 400 observations from Hubble's Wide Field Planetary Camera 2 and Advanced Camera for Surveys. The observations, taken from 1995 to 2008, are of a black hole sitting in the center of a giant galaxy dubbed M87. "We analyzed several years' worth of Hubble data of a relatively nearby spiraling jet of plasma emitted from a black hole, which allowed us to see lots of details," Meyer said. "The only reason you see the distant jet in motion is because it is traveling very fast." Meyer found evidence that suggests the jet's spiral motion is created by a helix-shaped magnetic field surrounding the black hole. In the outer part of the M87 jet, for example, one bright gas clump, called knot B, appears to zigzag, as if it were moving along a spiral path. Several other gas clumps along the jet also appear to loop around an invisible structure. M87 resides at the center of the neighboring Virgo cluster of roughly 2,000 galaxies, located 50 million light years away. The galaxy's monster black hole is several billion times more massive than our sun. The Hubble data also provided information on why the M87 jet is composed of a long string of gas blobs, which appear to brighten and dim over time. "The jet structure is very clumpy. Is this a ballistic effect, like cannonballs fired sequentially from a cannon?" Meyer asked, "or, are there some particularly interesting physics going on, such as a shock that is magnetically driven?" Meyer's team found evidence for both scenarios. "We found things that move quickly," Meyer said. "We found things that move slowly. And, we found things that are stationary. This study shows us that the clumps are very dynamic sources." It is too soon to tell whether all black-hole-powered jets behave like the one in M87, which is why Meyer plans to use Hubble to study three more jets. "It's always dangerous to have exactly one example because it could be a strange outlier," Meyer said. "The M87 black hole is justification for looking at more jets." The team's results appeared Aug. 22 in the online issue of The Astrophysical Journal Letters. For images and more information about M87's jet, visit: http://www.nasa.gov/hubble
<urn:uuid:8e2ea43f-5449-4c83-beac-35286f1a516d>
3.484375
1,247
Content Listing
Science & Tech.
48.718353
95,494,507
A combination mechanism for a safe comprises thirty-two tumblers numbered from one to thirty-two in such a way that the numbers in each wheel total 132... Could you open the safe? There are exactly 3 ways to add 4 odd numbers to get 10. Find all the ways of adding 8 odd numbers to get 20. To be sure of getting all the solutions you will need to be systematic (see the worked search after this list). What about. . . . The number 10112359550561797752808988764044943820224719 is called a 'slippy number' because, when the last digit 9 is moved to the front, the new number produced is the slippy number multiplied by 9. Ann thought of 5 numbers and told Bob all the sums that could be made by adding the numbers in pairs. The list of sums is 6, 7, 8, 8, 9, 9, 10, 10, 11, 12. Help Bob to find out which numbers Ann was. . . . When I type a sequence of letters my calculator gives the product of all the numbers in the corresponding memories. What numbers should I store so that when I type 'ONE' it returns 1, and when I type. . . . Can you arrange the digits 1,2,3,4,5,6,7,8,9 into three 3-digit numbers such that their total is close to 1500? Can you work out how many of each kind of pencil this student bought? Consider all two digit numbers (10, 11, . . . , 99). In writing down all these numbers, which digits occur least often, and which occur most often? What about three digit numbers, four digit numbers. . . . How many positive integers less than or equal to 4000 can be written down without using the digits 7, 8 or 9? This investigation is about happy numbers in the World of the Octopus where all numbers are written in base 8. Octi the octopus counts. Consider all of the five digit numbers which we can form using only the digits 2, 4, 6 and 8. If these numbers are arranged in ascending order, what is the 512th number? The five digit number A679B, in base ten, is divisible by 72. What are the values of A and B? How many six digit numbers are there which DO NOT contain a 5? Explain why the arithmetic sequence 1, 14, 27, 40, ... contains many terms of the form 222...2 where only the digit 2 appears. When asked how old she was, the teacher replied: My age in years is not prime but odd and when reversed and added to my age you have a perfect square... In the following sum the letters A, B, C, D, E and F stand for six distinct digits. Find all the ways of replacing the letters with digits so that the arithmetic is correct. If a two digit number has its digits reversed and the smaller of the two numbers is subtracted from the larger, prove the difference can never be prime. Choose two digits and arrange them to make two double-digit numbers. Now add your double-digit numbers. Now add your single digit numbers. Divide your double-digit answer by your single-digit answer. . . . Using the digits 1, 2, 3, 4, 5, 6, 7 and 8, two two-digit numbers are multiplied to give a four-digit number, so that the expression is correct. How many different solutions can you find? The number 27 is special because it is three times the sum of its digits: 27 = 3 (2 + 7). Find some two digit numbers that are SEVEN times the sum of their digits (seven-up numbers). Start with any whole number N, write N as a multiple of 10 plus a remainder R and produce a new whole number N'. Repeat. What happens?
Show that if you add 1 to the product of four consecutive numbers the answer is ALWAYS a perfect square. Euler found four whole numbers such that the sum of any two of the numbers is a perfect square... Find the maximum value of 1/p + 1/q + 1/r where this sum is less than 1 and p, q, and r are positive integers. Ranging from kindergarten mathematics to the fringe of research this informal article paints the big picture of number in a non technical way suitable for primary teachers and older students. Find the smallest integer solution to the equation 1/x^2 + 1/y^2 = 1/z^2. The first of five articles concentrating on whole number dynamics, ideas of general dynamical systems are introduced and seen in concrete cases. I am exactly n times my daughter's age. In m years I shall be ... How old am I? Can you create a Latin Square from multiples of a six digit number? Let a(n) be the number of ways of expressing the integer n as an ordered sum of 1's and 2's. Let b(n) be the number of ways of expressing n as an ordered sum of integers greater than 1. (i) Calculate. . . . Investigate the sequences obtained by starting with any positive 2 digit number (10a+b) and repeatedly using the rule 10a+b maps to 10b-a to get the next number in the sequence. Write 100 as the sum of two positive integers, one divisible by 7 and the other divisible by 11. Then find formulas giving all the solutions to 7x + 11y = 100 where x and y are integers. Using the 8 dominoes make a square where each of the columns and rows adds up to 8. A group of 20 people pay a total of £20 to see an exhibition. The admission price is £3 for men, £2 for women and 50p for children. How many men, women and children are there in the group? Sissa cleverly asked the King for a reward that sounded quite modest but turned out to be rather large... Liam's house has a staircase with 12 steps. He can go down the steps one at a time or two at a time. In how many different ways can Liam go down the 12 steps? To make 11 kilograms of this blend of coffee costs £15 per kilogram. The blend uses more Brazilian, Kenyan and Mocha coffee... How many kilograms of each type of coffee are used?
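For the puzzle above about adding 8 odd numbers to get 20, a systematic computer search illustrates the kind of exhaustive case-work intended. This snippet is an illustration added here, not part of the original collection.

```python
from itertools import combinations_with_replacement

def odd_sums(count, total):
    """All multisets of `count` positive odd numbers summing to `total`."""
    odds = range(1, total + 1, 2)
    return [combo for combo in combinations_with_replacement(odds, count)
            if sum(combo) == total]

print(len(odd_sums(4, 10)))  # -> 3, as stated in the puzzle
print(len(odd_sums(8, 20)))  # number of ways for the harder case
```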
<urn:uuid:7734e3e0-55a4-4c99-9109-652e0dbddc3b>
3.046875
1,386
Content Listing
Science & Tech.
80.906874
95,494,521
Binary JSON Serialization

Voldemort supports pluggable serialization, including one serialization type which is custom to Voldemort called, somewhat misleadingly, "json". This serialization type uses a JSON data model but a more compact byte format, and also checks data against an expected schema for type correctness.

Primitive types:
int8, int16, int32, int64 – big-endian signed integers. Each of these takes the given number of bits to encode (i.e. int16 uses 16 bits).
float32, float64 – IEEE 754 floating-point formats.
Strings are stored as a length, which encodes the number, N, of characters in the string, followed by N UTF8 bytes. Strings are really just arrays of bytes (see below).
Dates are stored as int64 Unix timestamps.

Primitive types are denoted in a schema by a JSON string with the type name, for example "int8", "float64", "string", or "date".

Compound types are recursive types composed of primitive types and other compound types, so for example an object with array fields is legitimate. An object is denoted in a schema by a JSON object. For example, the following schema describes an object with two fields, an int8 'foo' and a string 'bar': {"foo":"int8", "bar":"string"}

An object is stored as a nullness byte followed by its entries. The nullness byte is an int8 with value -1 if the object is null and 1 otherwise. The entries are the fields of the object stored in alphabetical order by name, each stored in the format described elsewhere in this document.

Arrays are homogeneously typed sequences of values. They can contain primitives like numbers and strings, or objects. The format is a length N followed by N entries of the given type.

Lengths are not a type but are used to specify the number of bytes in a string or the number of items in an array. A length is encoded as follows: if the length is less than 2^15 it is stored as an int16; otherwise it is stored as an int32 with the first two bits set to 1.
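A minimal Python sketch of the length and string encoding described above (this is not Voldemort's actual implementation, which is Java; here the length is taken to count UTF-8 bytes, and the four-byte fallback follows the two-high-bits marker rule described above):

```python
import struct

def encode_length(n: int) -> bytes:
    # Lengths under 2**15 fit in a plain big-endian int16.
    if n < 2 ** 15:
        return struct.pack(">h", n)
    # Otherwise use 4 bytes with the two high bits set as a marker,
    # which leaves 30 bits for the actual value.
    if n >= 2 ** 30:
        raise ValueError("length needs more than 30 bits")
    return struct.pack(">I", 0xC0000000 | n)

def encode_string(s: str) -> bytes:
    data = s.encode("utf-8")
    return encode_length(len(data)) + data

# A short string gets a 2-byte length prefix followed by its bytes.
assert encode_string("ab") == b"\x00\x02ab"
```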
<urn:uuid:b33c2a93-031f-4497-8b6e-82b1aa8bc117>
2.765625
513
Documentation
Software Dev.
54.013288
95,494,530
Effects of waves and tide on tidal flat ecosystems

Waves arriving at the coast have an essential effect on ecosystems. In shallow-water regions such as sandy beaches and tidal flats, wave run-ups due to breaking waves, whose heights are around 5 cm to 10 cm, are often observed at the slope. In addition to the tidal motion, these small waves may affect water flow inside the seabed. Internal flow in the seabed is closely related to biological activity in these areas. Seawater is a kind of transport medium for oxygen and nutrients. As stated in Chap. 1, the number of bacteria inside the seabed is known to correlate closely with the silt content, because the presence of such small particles increases the wet surface area for bacteria habitation. The small particles easily move along with the flow. Therefore, determining the internal flow characteristics of a tidal flat is important for understanding its ecological role. Studies on seawater transport in the surf zone have been carried out mostly for large waves of several meters (Riedel 1971; McLachlan 1982). However, studies of small waves only a few centimeters high have hitherto been scarce.

Keywords: Tidal Flat, Sandy Beach, Total Organic Carbon Content, Benthic Alga, Surf Zone
<urn:uuid:38f10c3b-e5ce-4c89-b662-e22e27e7ac93>
3.3125
524
Truncated
Science & Tech.
40.383989
95,494,532
Synthetic-aperture radar (SAR) is a form of radar that is used to create two- or three-dimensional images of objects, such as landscapes. SAR uses the motion of the radar antenna over a target region to provide finer spatial resolution than conventional beam-scanning radars. SAR is typically mounted on a moving platform, such as an aircraft or spacecraft, and has its origins in an advanced form of side-looking airborne radar (SLAR). The distance the SAR device travels over a target in the time taken for the radar pulses to return to the antenna creates the large synthetic antenna aperture (the size of the antenna). Typically, the larger the aperture, the higher the image resolution will be, regardless of whether the aperture is physical (a large antenna) or synthetic (a moving antenna) – this allows SAR to create high-resolution images with comparatively small physical antennas.

To create a SAR image, successive pulses of radio waves are transmitted to "illuminate" a target scene, and the echo of each pulse is received and recorded. The pulses are transmitted and the echoes received using a single beam-forming antenna, with wavelengths of a meter down to several millimeters. As the SAR device on board the aircraft or spacecraft moves, the antenna location relative to the target changes with time. Signal processing of the successive recorded radar echoes allows the combining of the recordings from these multiple antenna positions. This process forms the synthetic antenna aperture and allows the creation of higher-resolution images than would otherwise be possible with a given physical antenna. As of 2010, airborne systems provide resolutions of about 10 cm, ultra-wideband systems provide resolutions of a few millimeters, and experimental terahertz SAR has provided sub-millimeter resolution in the laboratory.

Motivation and applications

SAR has high-resolution capability independent of flight altitude, and it is largely independent of the weather, since SAR can select a suitable frequency range. SAR also has day-and-night imaging capability, as it provides its own illumination. SAR images have wide applications in remote sensing and mapping of the surfaces of both the Earth and other planets. Other important applications of SAR are topography, oceanography, glaciology, and geology (for example, terrain discrimination and subsurface imaging), as well as forestry, including forest height, biomass, and deforestation.
Volcano and earthquake monitoring are applications of differential interferometry. SAR is also useful in environmental monitoring, such as oil spills, flooding, urban growth, and global change, and in military surveillance, including strategic policy and tactical assessment. SAR can also be implemented as inverse SAR by observing a moving target over a substantial time with a stationary antenna.

Basic principle

A synthetic-aperture radar is an imaging radar mounted on a moving platform. Electromagnetic waves are sequentially transmitted, and the reflected echoes are collected, digitized and stored by the radar antenna for later processing; as transmission and reception occur at different times, they map to different positions. The well-ordered combination of the received signals builds a virtual aperture that is much longer than the physical antenna length. This is why it is named "synthetic aperture", giving it the property of being an imaging radar. The range direction is perpendicular to the flight track and to the azimuth direction, which is also known as the along-track direction because it is in line with the position of the object within the antenna's field of view.

The 3D processing is done in two steps: the azimuth and range directions are focused for the generation of 2D (azimuth-range) high-resolution images, after which a digital elevation model (DEM) is used to measure the phase differences between complex images, determined from different look angles, to recover the height information. This height information, along with the azimuth-range coordinates provided by 2-D SAR focusing, gives the third dimension, the elevation. The first step requires only standard processing algorithms; for the second step, additional pre-processing such as image co-registration and phase calibration is used.

In addition, multiple baselines can be used to extend 3D imaging to the time dimension. 4D and multi-D SAR imaging allows imaging of complex scenarios, such as urban areas, and has improved performance with respect to classical interferometric techniques such as persistent scatterer interferometry (PSI).

Algorithm

The SAR algorithm, as given here, applies to phased arrays generally. A three-dimensional array (a volume) of scene elements is defined, which will represent the volume of space within which targets exist. Each element of the array is a cubical voxel representing the probability (a "density") of a reflective surface being at that location in space. (Note that two-dimensional SARs are also possible, showing only a top-down view of the target area.)

Initially, the SAR algorithm gives each voxel a density of zero. Then, for each captured waveform, the entire volume is iterated. For a given waveform and voxel, the distance from the position represented by that voxel to the antenna(e) used to capture that waveform is calculated. That distance represents a time delay into the waveform. The sample value at that position in the waveform is then added to the voxel's density value. This represents a possible echo from a target at that position.
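To make the voxel iteration concrete, here is a minimal sketch (assuming Python/NumPy; the coordinate arrays, sampling rate, and waveforms are toy inputs, and a real processor would at least interpolate between samples and work on complex, phase-preserving data):

```python
import numpy as np

C = 3e8  # propagation speed, m/s

def backproject(voxel_centers, antenna_positions, waveforms, fs):
    """Accumulate echo samples into voxel densities.

    voxel_centers: (V, 3) array of voxel coordinates in meters.
    antenna_positions: (W, 3) array, one position per captured waveform.
    waveforms: (W, S) array of echo samples recorded at rate fs (Hz).
    Returns a length-V density vector.
    """
    density = np.zeros(len(voxel_centers))
    for pos, wf in zip(antenna_positions, waveforms):
        # Two-way travel distance: antenna -> voxel -> antenna.
        dist = 2.0 * np.linalg.norm(voxel_centers - pos, axis=1)
        # Convert the delay into a sample index in the recorded echo.
        idx = np.round(dist / C * fs).astype(int)
        valid = idx < wf.shape[0]
        # Add the matching echo sample to each voxel's density.
        density[valid] += wf[idx[valid]]
    return density
```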
Note that there are several optional approaches here, depending on the precision of the waveform timing, among other things. For example, if the phase cannot be accurately known, then only the envelope magnitude (with the help of a Hilbert transform) of the waveform sample might be added to the voxel. If polarization and phase are known in the waveform and are accurate enough, then these values might be added to a more complex voxel that holds such measurements separately.

After all waveforms have been iterated over all voxels, the basic SAR processing is complete. What remains, in the simplest approach, is to decide what voxel density value represents a solid object. Voxels whose density is below that threshold are ignored. Note that the threshold level chosen must be at least higher than the peak energy of any single wave; otherwise that wave's peak would appear as a sphere (or ellipse, in the case of multistatic operation) of false "density" across the entire volume. Thus, to detect a point on a target, there must be at least two different antenna echoes from that point. Consequently, there is a need for large numbers of antenna positions to properly characterize a target.

The voxels that passed the threshold criteria are visualized in 2D or 3D. Optionally, added visual quality can sometimes be had by use of a surface-detection algorithm like marching cubes.

Existing spectral estimation approaches

Synthetic-aperture radar determines the 3D reflectivity from measured SAR data. This is basically a spectrum estimation, because, for a specific cell of an image, the complex-valued SAR measurements of the SAR image stack are a sampled version of the Fourier transform of the reflectivity in the elevation direction, but this Fourier transform is irregular. Thus spectral estimation techniques are used to improve the resolution and reduce speckle compared with the results of conventional Fourier-transform SAR imaging techniques.

The FFT (i.e., periodogram or matched filter) is one such method, used in the majority of the spectral estimation algorithms, and there are many fast algorithms for computing the multidimensional discrete Fourier transform. Computational Kronecker-core array algebra is a popular algorithm used as a new variant of FFT algorithms for the processing in multidimensional synthetic-aperture radar (SAR) systems. This algorithm uses a study of theoretical properties of input/output data indexing sets and groups of permutations. A branch of finite multidimensional linear algebra is used to identify similarities and differences among various FFT algorithm variants and to create new variants. Each multidimensional DFT computation is expressed in matrix form. The multidimensional DFT matrix, in turn, is disintegrated into a set of factors, called functional primitives, which are individually identified with an underlying software/hardware computational design. The FFT implementation is essentially a realization of the mapping of the mathematical framework through generation of the variants and execution of matrix operations. The performance of this implementation may vary from machine to machine, and the objective is to identify on which machine it performs best.
- The language of CKA algebra helps the application developer understand which FFT variants are more computationally efficient, thus reducing the computational effort and improving implementation time.
- FFT cannot separate sinusoids close in frequency. Also, if the periodicity of the data does not match the FFT, edge effects are seen.

The Capon spectral method, also called the minimum-variance method, is a multidimensional array-processing technique. It is a nonparametric covariance-based method with an adaptive matched-filterbank approach, and it follows two main steps:

- Passing the data through a 2D bandpass filter with varying center frequencies $(\omega_1, \omega_2)$.
- Estimating the power at $(\omega_1, \omega_2)$ for all frequencies of interest from the filtered data.

The adaptive Capon bandpass filter $h$ is designed to minimize the power of the filter output while passing the frequency $(\omega_1, \omega_2)$ without any attenuation, i.e., to satisfy, for each $(\omega_1, \omega_2)$,

$\min_{h} \; h^{H} R\, h \quad \text{subject to} \quad h^{H} a(\omega_1, \omega_2) = 1,$

where $R$ is the covariance matrix and $a(\omega_1, \omega_2)$ is the 2D Fourier (steering) vector. Therefore, the filter passes a 2D sinusoid at a given frequency without distortion while minimizing the variance of the noise in the resulting image. The purpose is to compute the spectral estimate efficiently. The spectral estimate is given as

$\hat{\phi}(\omega_1, \omega_2) = \frac{1}{a^{H}(\omega_1, \omega_2)\, R^{-1}\, a(\omega_1, \omega_2)},$

where $R$ is the covariance matrix and $a^{H}(\omega_1, \omega_2)$ is the 2D complex-conjugate transpose of the Fourier vector. The computation of this equation over all frequencies is time-consuming. The forward–backward Capon estimator yields better estimation than the forward-only classical Capon approach, mainly because forward–backward Capon uses both the forward and backward data vectors to obtain the estimate of the covariance matrix, whereas forward-only Capon uses only the forward data vectors. (A small 1D prototype of this estimator follows this subsection.)

- Capon can yield more accurate spectral estimates with much lower sidelobes and narrower spectral peaks than the fast Fourier transform (FFT) method.
- The Capon method can provide much better resolution.
- Implementation requires the computation of two intensive tasks: inversion of the covariance matrix $R$, and multiplication by the $a(\omega_1, \omega_2)$ vector, which has to be done for each point $(\omega_1, \omega_2)$.

The APES (amplitude and phase estimation) method is also a matched-filter-bank method, which assumes that the phase-history data is a sum of 2D sinusoids in noise. The APES spectral estimator has a two-step filtering interpretation:

- Passing the data through a bank of FIR bandpass filters with varying center frequency $(\omega_1, \omega_2)$.
- Obtaining the spectrum estimate for $(\omega_1, \omega_2)$ from the filtered data.

Empirically, the APES method results in wider spectral peaks than the Capon method, but more accurate spectral estimates for amplitude in SAR; in the Capon method, although the spectral peaks are narrower than in APES, the sidelobes are higher than for APES. As a result, the estimate of the amplitude is expected to be less accurate for the Capon method than for the APES method. The APES method requires about 1.5 times more computation than the Capon method.

- Filtering reduces the number of available samples, but when it is designed tactically, the increase in signal-to-noise ratio (SNR) in the filtered data compensates for this reduction, and the amplitude of a sinusoidal component with frequency $(\omega_1, \omega_2)$ can be estimated more accurately from the filtered data than from the original signal.
- The autocovariance matrix is much larger in 2D than in 1D, and is therefore limited by available memory.
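The 1D prototype promised above (assuming Python/NumPy; a real SAR processor would use 2D steering vectors and a forward–backward covariance estimate, and the signal here is a toy pair of closely spaced tones):

```python
import numpy as np

def capon_spectrum(x, m, freqs):
    """1D Capon (minimum-variance) spectral estimate.

    x: complex data vector; m: filter length; freqs: normalized
    frequencies (cycles/sample) at which to evaluate the spectrum.
    """
    # Sample covariance from overlapping length-m windows of the data.
    snapshots = np.array([x[i:i + m] for i in range(len(x) - m + 1)])
    R = snapshots.T @ snapshots.conj() / len(snapshots)
    R_inv = np.linalg.inv(R + 1e-6 * np.eye(m))   # light diagonal loading
    spectrum = []
    for f in freqs:
        a = np.exp(2j * np.pi * f * np.arange(m))  # steering vector
        spectrum.append(1.0 / np.real(a.conj() @ R_inv @ a))
    return np.array(spectrum)

# Two closely spaced tones in noise; Capon resolves them where a short
# FFT periodogram of the same data would merge them.
rng = np.random.default_rng(0)
n = np.arange(256)
x = (np.exp(2j * np.pi * 0.20 * n) + np.exp(2j * np.pi * 0.23 * n)
     + 0.1 * (rng.standard_normal(256) + 1j * rng.standard_normal(256)))
phi = capon_spectrum(x, m=32, freqs=np.linspace(0, 0.5, 501))
```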
The SAMV method is a parameter-free algorithm based on sparse signal reconstruction. It achieves super-resolution and is robust to highly correlated signals; the name emphasizes its basis on the asymptotically minimum variance (AMV) criterion. It is a powerful tool for the recovery of both the amplitude and frequency characteristics of multiple highly correlated sources in challenging environments (e.g., a limited number of snapshots, low signal-to-noise ratio). Applications include synthetic-aperture radar imaging and various source-localization problems. The computational complexity of the SAMV method is higher due to its iterative procedure.

Parametric subspace decomposition methods

The eigenvector (EV) method separates the eigenvectors of the autocovariance matrix into those corresponding to signals and those corresponding to clutter. The amplitude of the image at a point $(\omega_1, \omega_2)$ is given by

$\hat{\phi}_{EV}(\omega_1, \omega_2) = \frac{1}{W^{H}(\omega_1, \omega_2)\, G\, \Lambda_c^{-1} G^{H}\, W(\omega_1, \omega_2)},$

where $\hat{\phi}_{EV}(\omega_1, \omega_2)$ is the amplitude of the image at a point, $G$ is the matrix of clutter-subspace eigenvectors of the coherency matrix, $G^{H}$ is its Hermitian transpose, $\Lambda_c^{-1}$ contains the inverses of the eigenvalues of the clutter subspace, and the 2D frequency vectors are $W(\omega_1, \omega_2) = w(\omega_1) \otimes w(\omega_2)$, where ⊗ denotes the Kronecker product of the two 1D Fourier vectors.

- Shows features of the image more accurately.
- High computational complexity.

MUSIC detects frequencies in a signal by performing an eigendecomposition on the covariance matrix of a data vector built from samples of the received signal. When all of the eigenvectors are included in the clutter subspace (model order = 0), the EV method becomes identical to the Capon method; thus the determination of the model order is critical to the operation of the EV method. The eigenvalue of the $R$ matrix decides whether its corresponding eigenvector corresponds to the clutter or to the signal subspace.

The MUSIC method is considered to be a poor performer in SAR applications. This method uses a constant instead of the clutter-subspace eigenvalues. In this method, the denominator is equated to zero when a sinusoidal signal corresponding to a point in the SAR image is in alignment with one of the signal-subspace eigenvectors, which produces a peak in the image estimate. Thus this method does not accurately represent the scattering intensity at each point, but shows the particular points of the image.

- Resolution loss due to the averaging operation.

The backprojection algorithm has two variants: time-domain backprojection and frequency-domain backprojection. Time-domain backprojection has more advantages over the frequency-domain variant and is thus preferred. Time-domain backprojection forms images or spectra by matching the data acquired from the radar against what it expects to receive; it can be considered an ideal matched filter for synthetic-aperture radar. There is no need for a separate motion-compensation step, owing to its handling of non-ideal motion/sampling, and it can also be used for various imaging geometries.

- It is invariant to the imaging mode: it uses the same algorithm irrespective of the imaging mode present, whereas frequency-domain methods require changes depending on the mode and geometry.
- Ambiguous azimuth aliasing usually occurs when the Nyquist spatial sampling requirements are exceeded by frequencies. Unambiguous aliasing occurs in squinted geometries where the signal bandwidth does not exceed the sampling limits but has undergone "spectral wrapping." The backprojection algorithm is not affected by either kind of aliasing effect.
- It matches the space/time filter: it uses the information about the imaging geometry to produce a pixel-by-pixel varying matched filter that approximates the expected return signal.
This usually yields antenna gain compensation.

- With reference to the previous advantage, the backprojection algorithm compensates for the motion. This becomes an advantage in areas having low altitudes.
- The computational expense is greater for the backprojection algorithm than for frequency-domain methods.
- It requires very precise knowledge of the imaging geometry.

Application: geosynchronous-orbit synthetic-aperture radar (GEO-SAR)

In GEO-SAR, to focus specifically on the relative moving track, the backprojection algorithm works very well. It uses the concept of azimuth processing in the time domain. For the satellite–ground geometry, GEO-SAR plays a significant role. The procedure is elaborated as follows.

1. The raw data acquired is segmented into sub-apertures to simplify and speed up the procedure.
2. The range of the data is then compressed, using the concept of "matched filtering" for every segment/sub-aperture created. For a point target, the range-compressed echo can be written in the standard form $s(\tau, t) = p_r\!\left(\tau - \frac{2R(t)}{c}\right)\exp\!\left(-j\,\frac{4\pi R(t)}{\lambda}\right)$, where τ is the range time, t is the azimuthal time, λ is the wavelength, c is the speed of light, $R(t)$ is the instantaneous slant range, and $p_r$ is the compressed pulse envelope.
3. Accuracy in the "range migration curve" is achieved by range interpolation.
4. The pixel locations of the ground in the image depend on the satellite–ground geometry model. Grid division is now done according to the azimuth time.
5. Calculations for the "slant range" (the range between the antenna's phase center and the point on the ground) are done for every azimuth time using coordinate transformations.
6. Azimuth compression is done after the previous step.
7. Steps 5 and 6 are repeated for every pixel, to cover every pixel, and to conduct the procedure on every sub-aperture.
8. Lastly, all the sub-apertures of the image created throughout are superimposed onto each other, and the final HD image is generated.

Comparison between the algorithms

Capon and APES can yield more accurate spectral estimates with much lower sidelobes and narrower spectral peaks than the fast Fourier transform (FFT) method, which is also a special case of the FIR filtering approaches. Although the APES algorithm gives slightly wider spectral peaks than the Capon method, the former yields more accurate overall spectral estimates than the latter and the FFT method.

The FFT method is fast and simple but has larger sidelobes. Capon has high resolution but high computational complexity. EV also has high resolution and high computational complexity. APES has higher resolution and is faster than Capon and EV, but still has high computational complexity. The MUSIC method is not generally suitable for SAR imaging, as whitening the clutter eigenvalues destroys the spatial inhomogeneities associated with terrain clutter or other diffuse scattering in SAR imagery, but it offers higher frequency resolution in the resulting power spectral density (PSD) than the FFT-based methods. The backprojection algorithm is computationally expensive; it is specifically attractive for sensors that are wideband, wide-angle, and/or have long coherent apertures with substantial off-track motion.

More complex operation

The basic design of a synthetic-aperture radar system can be enhanced to collect more information. Most of these methods use the same basic principle of combining many pulses to form a synthetic aperture, but may involve additional antennas or significant additional processing.
SAR requires that echo captures be taken at multiple antenna positions. The more captures taken (at different antenna locations), the more reliable the target characterization. Multiple captures can be obtained by moving a single antenna to different locations, by placing multiple stationary antennas at different locations, or by combinations thereof. The advantage of a single moving antenna is that it can be easily placed in any number of positions to provide any number of monostatic waveforms. For example, an antenna mounted on an airplane takes many captures per second as the plane travels.

The principal advantages of multiple static antennas are that a moving target can be characterized (assuming the capture electronics are fast enough), that no vehicle or motion machinery is necessary, and that antenna positions need not be derived from other, sometimes unreliable, information. (One problem with SAR aboard an airplane is knowing precise antenna positions as the plane travels.) For multiple static antennas, all combinations of monostatic and multistatic radar waveform captures are possible. Note, however, that it is not advantageous to capture a waveform for each of both transmission directions for a given pair of antennas, because those waveforms will be identical. When multiple static antennas are used, the total number of unique echo waveforms that can be captured is

$\frac{N^2 + N}{2},$

where $N$ is the number of unique antenna positions.

Stripmap mode airborne SAR

The antenna stays in a fixed position, and may be orthogonal to the flight path or squinted slightly forward or backward. When the antenna aperture travels along the flight path, a signal is transmitted at a rate equal to the pulse repetition frequency (PRF). The lower boundary of the PRF is determined by the Doppler bandwidth of the radar. The backscatter of each of these signals is commutatively added on a pixel-by-pixel basis to attain the fine azimuth resolution desired in radar imagery.

Spotlight mode SAR

The spotlight synthetic aperture length is approximately

$L_{\text{spotlight}} \approx R\,\theta,$

where θ is the angle formed between the beginning and end of the imaging, as shown in the diagram of spotlight imaging, and R is the range distance. The spotlight mode gives better resolution for a smaller ground patch. In this mode, the illuminating radar beam is steered continually as the aircraft moves, so that it illuminates the same patch over a longer period of time. This is not a continuous imaging mode, but it has high azimuth resolution. (A short numeric comparison of stripmap and spotlight resolution follows this subsection.)

Scan mode SAR

While operating as a scan-mode SAR, the antenna beam sweeps periodically and thus covers a much larger area than the spotlight and stripmap modes. However, the azimuth resolution becomes much lower than in stripmap mode due to the decreased azimuth bandwidth. Clearly there is a balance between the azimuth resolution and the scanned area of SAR. Here, the synthetic aperture is shared between the sub-swaths, and it is not in direct contact within one sub-swath. Mosaic operation is required in the azimuth and range directions to join the azimuth bursts and the range sub-swaths.

- ScanSAR makes the swath beam huge.
- The azimuth signal has many bursts.
- The azimuth resolution is limited due to the burst duration.
- Each target contains varied frequencies, which depend entirely on where it lies in the azimuth.
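The numeric comparison promised above, as a minimal sketch (assuming Python; the antenna length, wavelength, range, and steering angle are illustrative values, not parameters of any specific system, and the stripmap D/2 and spotlight λR/2L rules are the standard first-order resolution formulas):

```python
wavelength = 0.03   # m (X-band, ~10 GHz)
D = 2.0             # physical antenna length, m
R = 10_000.0        # slant range, m
theta = 0.05        # spotlight steering angle, rad

# Stripmap: the synthetic aperture is limited by the real beamwidth,
# giving the classical azimuth resolution of about D/2.
stripmap_res = D / 2

# Spotlight: steering the beam sustains a synthetic aperture of about
# L = R * theta, giving an azimuth resolution of about lambda*R / (2*L).
L_spot = R * theta
spotlight_res = wavelength * R / (2 * L_spot)   # = wavelength / (2*theta)

print(f"stripmap azimuth resolution : {stripmap_res:.2f} m")   # 1.00 m
print(f"spotlight synthetic aperture: {L_spot:.0f} m")          # 500 m
print(f"spotlight azimuth resolution: {spotlight_res:.2f} m")   # 0.30 m
```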
Polarimetry

Radar waves have a polarization. Different materials reflect radar waves with different intensities, but anisotropic materials such as grass often reflect different polarizations with different intensities. Some materials will also convert one polarization into another. By emitting a mixture of polarizations and using receiving antennas with a specific polarization, several images can be collected from the same series of pulses. Frequently three such RX-TX polarizations (HH-pol, VV-pol, VH-pol) are used as the three color channels in a synthesized image; this is what has been done in the picture at right. Interpretation of the resulting colors requires significant testing of known materials.

New developments in polarimetry include using the changes in the random polarization returns of some surfaces (such as grass or sand), and between two images of the same location at different times, to determine where changes not visible to optical systems occurred. Examples include subterranean tunneling or the paths of vehicles driving through the area being imaged. Enhanced SAR sea-oil-slick observation has been developed by appropriate physical modelling and the use of fully polarimetric and dual-polarimetric measurements.

SAR polarimetry is a technique used for deriving qualitative and quantitative physical information for land, snow and ice, ocean and urban applications, based on the measurement and exploration of the polarimetric properties of man-made and natural scatterers. Terrain and land-use classification is one of the most important applications of polarimetric synthetic-aperture radar (POLSAR).

SAR polarimetry uses a scattering matrix (S) to identify the scattering behavior of objects after an interaction with an electromagnetic wave. The matrix is represented by a combination of horizontal and vertical polarization states of transmitted and received signals:

$S = \begin{bmatrix} S_{HH} & S_{HV} \\ S_{VH} & S_{VV} \end{bmatrix},$

where HH is for horizontal transmit and horizontal receive, VV is for vertical transmit and vertical receive, HV is for horizontal transmit and vertical receive, and VH for vertical transmit and horizontal receive. The first two of these polarization combinations are referred to as like-polarized (or co-polarized), because the transmit and receive polarizations are the same; the last two combinations are referred to as cross-polarized because the transmit and receive polarizations are orthogonal to one another.

The three-component scattering power model by Freeman and Durden is successfully used for the decomposition of POLSAR images, applying the reflection-symmetry condition using the covariance matrix. The method is based on simple physical scattering mechanisms (surface scattering, double-bounce scattering, and volume scattering). The advantage of this scattering model is that it is simple and easy to implement for image processing.

There are two major approaches for a 3×3 polarimetric matrix decomposition. One is the lexicographic covariance-matrix approach based on physically measurable parameters, and the other is the Pauli decomposition, which is a coherent decomposition. It represents all the polarimetric information in a single SAR image: the polarimetric information of [S] can be represented by the combination of intensities in a single RGB image, where each of the previous intensities is coded as a color channel.
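As an illustration of the Pauli color coding just described, here is a minimal sketch (assuming Python/NumPy; the channel-to-color assignment follows a common convention, |HH−VV| → red, |HV| → green, |HH+VV| → blue, but individual processors may vary it):

```python
import numpy as np

def pauli_rgb(s_hh, s_hv, s_vv):
    """Build a Pauli RGB composite from complex scattering channels.

    Inputs are 2D complex arrays of the same shape (one value per pixel).
    """
    # Pauli components: odd-bounce (surface), even-bounce, cross-pol.
    alpha = np.abs(s_hh + s_vv) / np.sqrt(2)   # single-bounce / surface
    beta  = np.abs(s_hh - s_vv) / np.sqrt(2)   # double-bounce
    gamma = np.sqrt(2) * np.abs(s_hv)          # cross-pol / volume

    rgb = np.stack([beta, gamma, alpha], axis=-1)
    # Normalize each channel independently for display.
    rgb /= rgb.max(axis=(0, 1), keepdims=True) + 1e-12
    return rgb
```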
For PolSAR image analysis, there can be cases where the reflection-symmetry condition does not hold; in those cases a four-component scattering model can be used to decompose polarimetric synthetic-aperture radar (SAR) images. This approach deals with the non-reflection-symmetric scattering case. It includes and extends the three-component decomposition method introduced by Freeman and Durden to a fourth component by adding the helix scattering power. This helix power term generally appears in complex urban areas but disappears for a natural distributed scatterer.

There is also an improved method using the four-component decomposition algorithm, which was introduced for general POLSAR data image analysis. The SAR data is first filtered, a step known as speckle reduction; then each pixel is decomposed by the four-component model to determine the surface scattering power ($P_s$), double-bounce scattering power ($P_d$), volume scattering power ($P_v$), and helix scattering power ($P_c$). The pixels are then divided into five classes (surface, double-bounce, volume, helix, and mixed pixels), classified with respect to the maximum powers. A mixed category is added for pixels having two or three equal dominant scattering powers after computation. The process continues as the pixels in all these categories are divided into 20 small clusters of approximately the same number of pixels and merged as desirable; this is called cluster merging. They are iteratively classified, and then a color is automatically assigned to each class. In summary, brown colors denote the surface scattering classes, red colors the double-bounce scattering classes, green colors the volume scattering classes, and blue colors the helix scattering classes.

Although this method is aimed at the non-reflection case, it automatically includes the reflection-symmetry condition, and therefore it can be used as a general case. It also preserves the scattering characteristics by taking the mixed scattering category into account, which makes it the better algorithm.

Interferometry

Rather than discarding the phase data, information can be extracted from it. If two observations of the same terrain from very similar positions are available, aperture synthesis can be performed to provide the resolution performance which would be given by a radar system with dimensions equal to the separation of the two measurements. This technique is called interferometric SAR, or InSAR.

If the two samples are obtained simultaneously (perhaps by placing two antennas on the same aircraft, some distance apart), then any phase difference will contain information about the angle from which the radar echo returned. Combining this with the distance information, one can determine the position in three dimensions of the image pixel. In other words, one can extract terrain altitude as well as radar reflectivity, producing a digital elevation model (DEM) with a single airplane pass. One aircraft application at the Canada Centre for Remote Sensing produced digital elevation maps with a resolution of 5 m and altitude errors also of about 5 m. Interferometry was used to map many regions of the Earth's surface with unprecedented accuracy using data from the Shuttle Radar Topography Mission.

If the two samples are separated in time, perhaps from two flights over the same terrain, then there are two possible sources of phase shift. The first is terrain altitude, as discussed above. The second is terrain motion: if the terrain has shifted between observations, it will return a different phase. The amount of shift required to cause a significant phase difference is on the order of the wavelength used.
This means that if the terrain shifts by centimeters, it can be seen in the resulting image (a digital elevation map must be available to separate the two kinds of phase difference; a third pass may be necessary to produce one). This second method offers a powerful tool in geology and geography. Glacier flow can be mapped with two passes. Maps showing the land deformation after a minor earthquake, or after a volcanic eruption (showing the shrinkage of the whole volcano by several centimeters), have been published.

Differential interferometry (D-InSAR) requires taking at least two images with the addition of a DEM. The DEM can be either produced by GPS measurements or generated by interferometry, as long as the time between acquisition of the image pairs is short, which guarantees minimal distortion of the image of the target surface. In principle, three images of the ground area with similar image-acquisition geometry are often adequate for D-InSAR. The principle for detecting ground movement is quite simple. One interferogram is created from the first two images; this is also called the reference interferogram or topographical interferogram. A second interferogram is created that captures topography plus distortion. Subtracting the latter from the reference interferogram can reveal differential fringes, indicating movement. The described three-image D-InSAR generation technique is called the 3-pass or double-difference method.

Differential fringes which remain as fringes in the differential interferogram are a result of SAR range changes of any displaced point on the ground from one interferogram to the next. In the differential interferogram, each fringe is directly proportional to the SAR wavelength, which is about 5.6 cm for an ERS and RADARSAT single phase cycle. Surface displacement away from the satellite look direction causes an increase in the path (translating to phase) difference. Since the signal travels from the SAR antenna to the target and back again, the measured displacement is twice the unit of wavelength. This means that in differential interferometry one fringe cycle, −π to +π, or one wavelength, corresponds to a displacement relative to the SAR antenna of only half a wavelength (2.8 cm). (A short fringe-to-displacement conversion sketch follows this subsection.) There are various publications on measuring subsidence, slope stability, landslides, glacier movement, etc. using D-InSAR. A further advancement of this technique is that differential interferometry from satellite SAR ascending and descending passes can be used to estimate 3-D ground movement. Research in this area has shown that accurate measurements of 3-D ground movement, with accuracies comparable to GPS-based measurements, can be achieved.

SAR tomography is a subfield of a concept named multi-baseline interferometry. It has been developed to give a 3D exposure to the imaging, using the beam-formation concept. It can be used when the application demands a focused phase concern between the magnitude and the phase components of the SAR data during information retrieval. One of the major advantages of Tomo-SAR is that it can separate out the parameters which get scattered, irrespective of how different their motions are. Using Tomo-SAR with differential interferometry yields a new combination named "differential tomography" (Diff-Tomo).

Application of Tomo-SAR: Tomo-SAR has an application based on radar imaging, which is the depiction of ice volume and forest temporal coherence (temporal coherence describes the correlation between waves observed at different moments in time).
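The conversion sketch promised above (assuming Python; the fringe count is an illustrative input, and the ~5.6 cm C-band wavelength matches the ERS/RADARSAT figure quoted in the text):

```python
# Convert counted differential fringes to line-of-sight displacement.
WAVELENGTH_M = 0.056          # C-band (ERS / RADARSAT), ~5.6 cm

def fringes_to_displacement(n_fringes: float) -> float:
    # Two-way path: one full fringe cycle (-pi .. +pi) corresponds to
    # half a wavelength of motion along the radar line of sight.
    return n_fringes * WAVELENGTH_M / 2.0

# Example: 3.5 fringes of subsidence in the differential interferogram.
print(f"{fringes_to_displacement(3.5) * 100:.1f} cm")   # -> 9.8 cm
```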
Ultra-wideband SAR

Conventional radar systems emit bursts of radio energy with a fairly narrow range of frequencies. A narrow-band channel, by definition, does not allow rapid changes in modulation. Since it is the change in a received signal that reveals the time of arrival of the signal (obviously an unchanging signal would reveal nothing about "when" it reflected from the target), a signal with only a slow change in modulation cannot reveal the distance to the target as well as a signal with a quick change in modulation.

Ultra-wideband (UWB) refers to any radio transmission that uses a very large bandwidth – which is the same as saying it uses very rapid changes in modulation. Although there is no set bandwidth value that qualifies a signal as "UWB", systems using bandwidths greater than a sizable portion of the center frequency (typically about ten percent or so) are most often called "UWB" systems. A typical UWB system might use a bandwidth of one-third to one-half of its center frequency; for example, some systems use a bandwidth of about 1 GHz centered around 3 GHz.

There are as many ways to increase the bandwidth of a signal as there are forms of modulation – it is simply a matter of increasing the rate of that modulation. However, the two most common methods used in UWB radar, including SAR, are very short pulses and high-bandwidth chirping. A general description of chirping appears elsewhere in this article. The bandwidth of a chirped system can be as narrow or as wide as the designers desire. Pulse-based UWB systems, being the more common method associated with the term "UWB radar", are described here.

A pulse-based radar system transmits very short pulses of electromagnetic energy, typically only a few waves or less. A very short pulse is, of course, a very rapidly changing signal, and thus occupies a very wide bandwidth. This allows far more accurate measurement of distance, and thus resolution.

The main disadvantage of pulse-based UWB SAR is that the transmitting and receiving front-end electronics are difficult to design for high-power applications. Specifically, the transmit duty cycle is so exceptionally low, and the pulse time so exceptionally short, that the electronics must be capable of extremely high instantaneous power to rival the average power of conventional radars. (Although it is true that UWB provides a notable gain in channel capacity over a narrow-band signal because of the role of bandwidth in the Shannon–Hartley theorem, and because the low receive duty cycle receives less noise, increasing the signal-to-noise ratio, there is still a notable disparity in link budget because a conventional radar might be several orders of magnitude more powerful than a typical pulse-based radar.) So pulse-based UWB SAR is typically used in applications requiring average power levels in the microwatt or milliwatt range, and thus is used for scanning smaller, nearer target areas (several tens of meters), or in cases where lengthy integration (over a span of minutes) of the received signal is possible. Note, however, that this limitation is solved in chirped UWB radar systems.

The principal advantages of UWB radar are better resolution (a few millimeters using commercial off-the-shelf electronics) and more spectral information of target reflectivity.

Doppler-beam sharpening

Doppler beam sharpening commonly refers to the method of processing unfocused real-beam phase history to achieve better resolution than could be achieved by processing the real beam without it.
Because the real aperture of the radar antenna is so small (compared to the wavelength in use), the radar energy spreads over a wide area (usually many degrees wide in a direction orthogonal (at right angles) to the direction of the platform (aircraft)). Doppler-beam sharpening takes advantage of the motion of the platform in that targets ahead of the platform return a Doppler-upshifted signal (slightly higher in frequency) and targets behind the platform return a Doppler-downshifted signal (slightly lower in frequency). The amount of shift varies with the angle forward or backward from the ortho-normal direction. By knowing the speed of the platform, the target signal return is placed in a specific angle "bin" that changes over time. Signals are integrated over time, and thus the radar "beam" is synthetically reduced to a much smaller aperture – or, more accurately (and based on the ability to distinguish smaller Doppler shifts), the system can have hundreds of very "tight" beams concurrently. This technique dramatically improves angular resolution; however, it is far more difficult to take advantage of this technique for range resolution. (See pulse-Doppler radar.)

Chirped (pulse-compressed) radars

A common technique for many radar systems (usually also found in SAR systems) is to "chirp" the signal. In a "chirped" radar, the pulse is allowed to be much longer. A longer pulse allows more energy to be emitted, and hence received, but usually hinders range resolution. But in a chirped radar, this longer pulse also has a frequency shift during the pulse (hence the chirp or frequency shift). When the "chirped" signal is returned, it must be correlated with the sent pulse. Classically, in analog systems, it is passed to a dispersive delay line (often a SAW device) that has the property of varying the velocity of propagation based on frequency. This technique "compresses" the pulse in time – thus having the effect of a much shorter pulse (improved range resolution) while having the benefit of longer pulse length (much more signal returned). Newer systems use digital pulse correlation to find the pulse return in the signal.
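A minimal digital pulse-compression sketch (assuming Python/NumPy; the chirp parameters and the simulated delay are illustrative, and a real receiver would also handle Doppler and windowing):

```python
import numpy as np

fs = 100e6                 # sample rate, Hz
T = 10e-6                  # transmitted pulse length, s
B = 20e6                   # swept bandwidth, Hz
t = np.arange(0, T, 1 / fs)

# Linear FM (chirp) reference: instantaneous frequency sweeps B over T.
ref = np.exp(1j * np.pi * (B / T) * t ** 2)

# Simulated received signal: the same chirp delayed by 2 us, plus noise.
rng = np.random.default_rng(0)
delay = int(2e-6 * fs)
rx = np.zeros(4096, dtype=complex)
rx[delay:delay + len(ref)] = ref
rx += 0.1 * (rng.standard_normal(len(rx)) + 1j * rng.standard_normal(len(rx)))

# Digital pulse compression: correlate against the reference chirp.
compressed = np.abs(np.correlate(rx, ref, mode="valid"))
print("peak at sample", compressed.argmax(), "(expected", delay, ")")
```

The long, low-peak-power pulse collapses into a sharp correlation peak at the true delay, which is exactly the benefit the dispersive delay line provided in the analog systems described above.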
Typical operation

In a typical SAR application, a single radar antenna is attached to an aircraft or spacecraft so as to radiate a beam whose wave-propagation direction has a substantial component perpendicular to the flight-path direction. The beam is allowed to be broad in the vertical direction so it will illuminate the terrain from nearly beneath the aircraft out toward the horizon.

Resolution in the range dimension of the image is accomplished by creating pulses which define very short time intervals, either by emitting short pulses consisting of a carrier frequency and the necessary sidebands, all within a certain bandwidth, or by using longer "chirp pulses" in which frequency varies (often linearly) with time within that bandwidth. The differing times at which echoes return allow points at different distances to be distinguished.

The total signal is that from a beamwidth-sized patch of the ground. To produce a beam that is narrow in the cross-range direction, diffraction effects require that the antenna be wide in that dimension. Therefore, distinguishing co-range points from each other simply by the strengths of returns that persist for as long as they are within the beam width is difficult with aircraft-carryable antennas, because their beams can have linear widths only about two orders of magnitude (hundreds of times) smaller than the range. (Spacecraft-carryable ones can do 10 or more times better.) However, if both the amplitude and the phase of the returns are recorded, then the portion of that multi-target return that was scattered radially from any smaller scene element can be extracted by phase-vector correlation of the total return with the form of the return expected from each such element. Careful design and operation can accomplish resolution of items smaller than a millionth of the range, for example, 30 cm at 300 km, or about one foot at nearly 200 miles (320 km).

The process can be thought of as combining the series of spatially distributed observations as if all had been made simultaneously with an antenna as long as the beamwidth and focused on that particular point. The "synthetic aperture" simulated at maximum system range by this process not only is longer than the real antenna but, in practical applications, is much longer than the radar aircraft, and tremendously longer than the radar spacecraft.

Image resolution of SAR in its range coordinate (expressed in image pixels per distance unit) is mainly proportional to the radio bandwidth of whatever type of pulse is used. In the cross-range coordinate, the similar resolution is mainly proportional to the bandwidth of the Doppler shift of the signal returns within the beamwidth. Since the Doppler frequency depends on the angle of the scattering point's direction from the broadside direction, the Doppler bandwidth available within the beamwidth is the same at all ranges. Hence the theoretical spatial resolution limits in both image dimensions remain constant with variation of range. However, in practice, both the errors that accumulate with data-collection time and the particular techniques used in post-processing further limit cross-range resolution at long ranges.

The conversion of return delay time to geometric range can be very accurate because of the natural constancy of the speed and direction of propagation of electromagnetic waves. However, for an aircraft flying through the never-uniform and never-quiescent atmosphere, the relating of pulse transmission and reception times to successive geometric positions of the antenna must be accompanied by constant adjusting of the return phases to account for sensed irregularities in the flight path. SARs in spacecraft avoid that atmosphere problem, but still must make corrections for known antenna movements due to rotations of the spacecraft, even those that are reactions to movements of onboard machinery. Locating a SAR in a manned space vehicle may require that the humans carefully remain motionless relative to the vehicle during data-collection periods.

Although some references to SARs have characterized them as "radar telescopes", their actual optical analogy is the microscope, the detail in their images being smaller than the length of the synthetic aperture; in radar-engineering terms, while the target area is in the "far field" of the illuminating antenna, it is in the "near field" of the simulated one.

Returns from scatterers within the range extent of any image are spread over a matching time interval. The inter-pulse period must be long enough to allow farthest-range returns from any pulse to finish arriving before the nearest-range ones from the next pulse begin to appear, so that those do not overlap each other in time. On the other hand, the interpulse rate must be fast enough to provide sufficient samples for the desired across-range (or across-beam) resolution.
When the radar is to be carried by a high-speed vehicle and is to image a large area at fine resolution, those conditions may clash, leading to what has been called SAR's ambiguity problem. The same considerations apply to "conventional" radars also, but this problem occurs significantly only when resolution is so fine as to be available only through SAR processes. Since the basis of the problem is the information-carrying capacity of the single signal-input channel provided by one antenna, the only solution is to use additional channels fed by additional antennas. The system then becomes a hybrid of a SAR and a phased array, sometimes called a Vernier array.

Combining the series of observations requires significant computational resources, usually using Fourier transform techniques. The high digital computing speed now available allows such processing to be done in near-real time on board a SAR aircraft. (There is necessarily a minimum time delay until all parts of the signal have been received.) The result is a map of radar reflectivity, including both amplitude and phase.

The amplitude information, when shown in a map-like display, gives information about ground cover in much the same way that a black-and-white photo does. Variations in processing may also be done in either vehicle-borne stations or ground stations for various purposes, so as to accentuate certain image features for detailed target-area analysis.

Although the phase information in an image is generally not made available to a human observer of an image display device, it can be preserved numerically, and sometimes allows certain additional features of targets to be recognized. Unfortunately, the phase differences between adjacent image picture elements ("pixels") also produce random interference effects called "coherence speckle", which is a sort of graininess with dimensions on the order of the resolution, causing the concept of resolution to take on a subtly different meaning. This effect is the same as is apparent both visually and photographically in laser-illuminated optical scenes. The scale of that random speckle structure is governed by the size of the synthetic aperture in wavelengths, and cannot be finer than the system's resolution. Speckle structure can be subdued at the expense of resolution.

Before rapid digital computers were available, the data processing was done using an optical holography technique. The analog radar data were recorded as a holographic interference pattern on photographic film at a scale permitting the film to preserve the signal bandwidths (for example, 1:1,000,000 for a radar using a 0.6-meter wavelength). Then light using, for example, 0.6-micrometer waves (as from a helium–neon laser) passing through the hologram could project a terrain image at a scale recordable on another film at reasonable processor focal distances of around a meter. This worked because both SAR and phased arrays are fundamentally similar to optical holography, but using microwaves instead of light waves. The "optical data-processors" developed for this radar purpose were the first effective analog optical computer systems, and were, in fact, devised before the holographic technique was fully adapted to optical imaging. Because of the different sources of range and across-range signal structures in the radar signals, optical data-processors for SAR included not only both spherical and cylindrical lenses, but sometimes conical ones.
Image appearance

The following considerations apply also to real-aperture terrain-imaging radars, but are more consequential when resolution in range is matched to a cross-beam resolution that is available only from a SAR.

The two dimensions of a radar image are range and cross-range. Radar images of limited patches of terrain can resemble oblique photographs, but not ones taken from the location of the radar. This is because the range coordinate in a radar image is perpendicular to the vertical-angle coordinate of an oblique photo. The apparent entrance-pupil position (or camera center) for viewing such an image is therefore not as if at the radar, but as if at a point from which the viewer's line of sight is perpendicular to the slant-range direction connecting radar and target, with slant range increasing from top to bottom of the image.

Because slant ranges to level terrain vary in vertical angle, each elevation of such terrain appears as a curved surface, specifically a hyperbolic-cosine one. Verticals at various ranges are perpendicular to those curves. The viewer's apparent looking directions are parallel to the curve's "hypcos" axis. Items directly beneath the radar appear as if optically viewed horizontally (i.e., from the side) and those at far ranges as if optically viewed from directly above. These curvatures are not evident unless large extents of near-range terrain, including steep slant ranges, are being viewed.

When viewed as specified above, fine-resolution radar images of small areas can appear most nearly like familiar optical ones, for two reasons. The first reason is easily understood by imagining a flagpole in the scene. The slant range to its upper end is less than that to its base. Therefore, the pole can appear correctly top-end up only when viewed in the above orientation. Secondly, the radar illumination then being downward, shadows are seen in their most familiar "overhead-lighting" direction. Note that the image of the pole's top will overlay that of some terrain point which is on the same slant-range arc but at a shorter horizontal range ("ground range").

Images of scene surfaces which faced both the illumination and the apparent eyepoint will have geometries that resemble those of an optical scene viewed from that eyepoint. However, slopes facing the radar will be foreshortened, and ones facing away from it will be lengthened from their horizontal (map) dimensions. The former will therefore be brightened and the latter dimmed.

Returns from slopes steeper than perpendicular to the slant range will be overlaid on those of lower-elevation terrain at a nearer ground range, both being visible but intermingled. This is especially the case for vertical surfaces like the walls of buildings. Another viewing inconvenience that arises when a surface is steeper than perpendicular to the slant range is that it is then illuminated on one face but "viewed" from the reverse face. Then one "sees", for example, the radar-facing wall of a building as if from the inside, while the building's interior and the rear wall (that nearest to, hence expected to be optically visible to, the viewer) have vanished, since they lack illumination, being in the shadow of the front wall and the roof. Some return from the roof may overlay that from the front wall, and both of those may overlay return from terrain in front of the building. The visible building shadow will include those of all illuminated items.
Long shadows may exhibit blurred edges due to the illuminating antenna's movement during the "time exposure" needed to create the image.

Surfaces that we usually consider rough will, if that roughness consists of relief less than the radar wavelength, behave as smooth mirrors, showing, beyond such a surface, additional images of items in front of it. Those mirror images will appear within the shadow of the mirroring surface, sometimes filling the entire shadow, thus preventing recognition of the shadow.

An important fact that applies to SARs but not to real-aperture radars is that the direction of overlay of any scene point is not directly toward the radar, but toward that point of the SAR's current path direction that is nearest to the target point. If the SAR is "squinting" forward or aft away from the exactly broadside direction, then the illumination direction, and hence the shadow direction, will not be opposite to the overlay direction, but slanted to right or left from it. An image will appear with the correct projection geometry when viewed so that the overlay direction is vertical, the SAR's flight path is above the image, and range increases somewhat downward.

Objects in motion within a SAR scene alter the Doppler frequencies of the returns. Such objects therefore appear in the image at locations offset in the across-range direction by amounts proportional to the range-direction component of their velocity. Road vehicles may be depicted off the roadway and therefore not recognized as road traffic items. Trains appearing away from their tracks are more easily properly recognized by their length parallel to known trackage, as well as by the absence of an equal length of railbed signature and of some adjacent terrain, both having been shadowed by the train. While images of moving vessels can be offset from the line of the earlier parts of their wakes, the more recent parts of the wake, which still partake of some of the vessel's motion, appear as curves connecting the vessel image to the relatively quiescent far-aft wake. In such identifiable cases, the speed and direction of the moving items can be determined from the amounts of their offsets. The along-track component of a target's motion causes some defocus. Random motions, such as that of wind-driven tree foliage, vehicles driven over rough terrain, or humans or other animals walking or running, generally render those items unfocusable, resulting in blurring or even effective invisibility.

These considerations, along with the speckle structure due to coherence, take some getting used to in order to correctly interpret SAR images. To assist in that, large collections of significant target signatures have been accumulated by performing many test flights over known terrains and cultural objects.

History

Carl A. Wiley, a mathematician at Goodyear Aircraft Company in Litchfield Park, Arizona, invented synthetic-aperture radar in June 1951 while working on a correlation guidance system for the Atlas ICBM program. In early 1952, Wiley, together with Fred Heisley and Bill Welty, constructed a concept-validation system known as DOUSER ("Doppler Unbeamed Search Radar"). During the 1950s and 1960s, Goodyear Aircraft (later Goodyear Aerospace) introduced numerous advancements in SAR technology, many with help from Don Beckerleg.
History

Carl A. Wiley, a mathematician at Goodyear Aircraft Company in Litchfield Park, Arizona, invented synthetic aperture radar in June 1951 while working on a correlation guidance system for the Atlas ICBM program. In early 1952, Wiley, together with Fred Heisley and Bill Welty, constructed a concept validation system known as DOUSER ("Doppler Unbeamed Search Radar"). During the 1950s and 1960s, Goodyear Aircraft (later Goodyear Aerospace) introduced numerous advancements in SAR technology, many with help from Don Beckerleg.

Independently of Wiley's work, experimental trials in early 1952 by Sherwin and others at the University of Illinois' Control Systems Laboratory showed results that they pointed out "could provide the basis for radar systems with greatly improved angular resolution" and might even lead to systems capable of focusing at all ranges simultaneously. In both of those programs, processing of the radar returns was done by electrical-circuit filtering methods. In essence, signal strength in isolated discrete bands of Doppler frequency defined image intensities that were displayed at matching angular positions within proper range locations. When only the central (zero-Doppler band) portion of the return signals was used, the effect was as if only that central part of the beam existed; that led to the term Doppler beam sharpening. Displaying returns from several adjacent non-zero Doppler frequency bands accomplished further "beam-subdividing" (sometimes called "unfocused radar", though it could have been considered "semi-focused"). Wiley's patent, applied for in 1954, still proposed similar processing; the bulkiness of the circuitry then available limited the extent to which those schemes might further improve resolution.

The principle was included in a memorandum authored by Walter Hausz of General Electric that was part of the then-secret report of a 1952 Dept. of Defense summer study conference called TEOTA ("The Eyes of the Army"), which sought to identify new techniques useful for military reconnaissance and technical gathering of intelligence. A follow-on summer program in 1953 at the University of Michigan, called Project Wolverine, identified several of the TEOTA subjects, including Doppler-assisted sub-beamwidth resolution, as research efforts to be sponsored by the Department of Defense (DoD) at various academic and industrial research laboratories. In that same year, the Illinois group produced a "strip-map" image exhibiting a considerable amount of sub-beamwidth resolution.

A more advanced focused-radar project was among several remote sensing schemes assigned in 1953 to Project Michigan, a tri-service-sponsored (Army, Navy, Air Force) program at the University of Michigan's Willow Run Research Center (WRRC), administered by the Army Signal Corps. Initially called the side-looking radar project, it was carried out by a group first known as the Radar Laboratory and later as the Radar and Optics Laboratory. It proposed to take into account, not just the short-term existence of several particular Doppler shifts, but the entire history of the steadily varying shifts from each target as the latter crossed the beam. An early analysis by Dr. Louis J. Cutrona, Weston E. Vivian, and Emmett N. Leith of that group showed that such a fully focused system should yield, at all ranges, a resolution equal to the width (or, by some criteria, the half-width) of the real antenna carried on the radar aircraft and continually pointed broadside to the aircraft's path. The required data processing amounted to calculating cross-correlations of the received signals with samples of the forms of signals to be expected from unit-amplitude sources at the various ranges. At that time, even large digital computers had capabilities somewhat near the levels of today's four-function handheld calculators, hence were nowhere near able to do such a huge amount of computation. Instead, the device for doing the correlation computations was to be an optical correlator.
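In modern digital terms, the cross-correlation that the optical correlator was to perform is azimuth matched filtering. The following idealized Python sketch (a single noiseless point target, illustrative parameters, range migration ignored) shows the principle:

```python
import numpy as np

# Idealized azimuth compression by cross-correlation (a digital stand-in
# for the optical correlator described in the text). Parameters are
# illustrative assumptions.
wavelength = 0.03          # m (X-band-ish)
R0 = 10_000.0              # closest-approach range to the point target, m
x = np.linspace(-200, 200, 2048)   # along-track antenna positions, m

# Two-way phase history of a unit point scatterer at broadside:
# phi(x) = (4*pi/lambda) * sqrt(R0**2 + x**2)
phase = (4 * np.pi / wavelength) * np.sqrt(R0**2 + x**2)
received = np.exp(-1j * phase)     # echo from one target (no noise)

reference = received.copy()        # expected reply of a unit-amplitude source
# Cross-correlate: the peak location gives the target's along-track position.
image = np.abs(np.correlate(received, reference, mode="same"))
print("peak at x =", x[np.argmax(image)], "m")  # ~0 m, the true position
```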
It was proposed that signals received by the traveling antenna and coherently detected be displayed as a single range-trace line across the diameter of the face of a cathode-ray tube, the line's successive forms being recorded as images projected onto a film traveling perpendicular to the length of that line. The information on the developed film was to be subsequently processed in the laboratory on equipment still to be devised as a principal task of the project. In the initial processor proposal, an arrangement of lenses was expected to multiply the recorded signals point-by-point with the known signal forms by passing light successively through both the signal film and another film containing the known signal pattern. The subsequent summation, or integration, step of the correlation was to be done by converging appropriate sets of multiplication products by the focusing action of one or more spherical and cylindrical lenses. The processor was to be, in effect, an optical analog computer performing large-scale scalar arithmetic calculations in many channels (with many light "rays") at once. Ultimately, two such devices would be needed, their outputs to be combined as quadrature components of the complete solution.

Fortunately (as it turned out), a desire to keep the equipment small had led to recording the reference pattern on 35 mm film. Trials promptly showed that the patterns on the film were so fine as to show pronounced diffraction effects that prevented sharp final focusing. That led Leith, a physicist who was devising the correlator, to recognize that those effects in themselves could, by natural processes, perform a significant part of the needed processing, since along-track strips of the recording operated like diametrical slices of a series of circular optical zone plates. Any such plate performs somewhat like a lens, each plate having a specific focal length for any given wavelength. The recording that had been considered as scalar became recognized as pairs of opposite-sign vector ones of many spatial frequencies, plus a zero-frequency "bias" quantity. The needed correlation summation changed from a pair of scalar ones to a single vector one.

Each zone plate strip has two equal but oppositely signed focal lengths: one real, where a beam through it converges to a focus, and one virtual, from which another beam appears to have diverged beyond the other face of the zone plate. The zero-frequency (DC bias) component has no focal point, but overlays both the converging and diverging beams. The key to obtaining, from the converging wave component, focused images that are not overlaid with unwanted haze from the other two is to block the latter, allowing only the wanted beam to pass through a properly positioned frequency-band-selecting aperture. Each radar range yields a zone plate strip with a focal length proportional to that range; this fact became a principal complication in the design of optical processors, and consequently technical journals of the time contain a large volume of material devoted to ways of coping with the variation of focus with range. For that major change in approach, the light used had to be both monochromatic and coherent, properties that were already a requirement on the radar radiation.
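The zone-plate behavior noted above follows from the nearly quadratic along-track phase history of a point target. A quick numerical check (illustrative parameters only, not values from the text) shows the quadratic phase coefficient falling inversely with range, which is the sense in which the equivalent focal length of each recorded strip grows in proportion to range:

```python
import numpy as np

# The recorded along-track phase history of a point target,
#   phi(x) = (4*pi/lam) * sqrt(R**2 + x**2) ~ const + (2*pi/lam) * x**2 / R,
# is quadratic in x, like a zone plate whose focal length grows with R.
# Illustrative parameters only.
lam = 0.03
x = np.linspace(-50, 50, 1001)

for R in (5_000.0, 10_000.0, 20_000.0):
    phi = (4 * np.pi / lam) * (np.sqrt(R**2 + x**2) - R)
    quad_coeff = np.polyfit(x, phi, 2)[0]        # coefficient of x**2
    # quad_coeff ~ 2*pi/(lam*R): it halves with each doubling of range,
    # i.e. the equivalent zone-plate focal length is proportional to R.
    print(f"R = {R:8.0f} m   quadratic phase coeff = {quad_coeff:.4e}")
```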
Lasers also then being in the future, the best then-available approximation to a coherent light source was the output of a mercury vapor lamp, passed through a color filter matched to the lamp spectrum's green band and then concentrated as well as possible onto a very small beam-limiting aperture. While the resulting amount of light was so weak that very long exposure times had to be used, a workable optical correlator was assembled in time to be used when appropriate data became available.

Although creating that radar was a more straightforward task based on already-known techniques, the work did demand the achievement of signal linearity and frequency stability that were at the extreme state of the art. An adequate instrument was designed and built by the Radar Laboratory and was installed in a C-46 (Curtiss Commando) aircraft. Because the aircraft was bailed to WRRC by the U.S. Army and was flown and maintained by WRRC's own pilots and ground personnel, it was available for many flights at times matching the Radar Laboratory's needs, a feature important for allowing frequent re-testing and "debugging" of the continually developing complex equipment. By contrast, the Illinois group had used a C-46 belonging to the Air Force and flown by AF pilots only by pre-arrangement, resulting, in the eyes of those researchers, in limitation to a less-than-desirable frequency of flight tests of their equipment, hence a low bandwidth of feedback from tests. (Later work with newer Convair aircraft continued the Michigan group's local control of flight schedules.)

Michigan's chosen 5-foot (1.5 m)-wide World War II-surplus antenna was theoretically capable of 5-foot (1.5 m) resolution, but data from only 10% of the beamwidth was used at first, the goal at that time being to demonstrate 50-foot (15 m) resolution. It was understood that finer resolution would require the added development of means for sensing departures of the aircraft from an ideal heading and flight path, and for using that information to make needed corrections to the antenna pointing and to the received signals before processing. After numerous trials in which even small atmospheric turbulence kept the aircraft from flying straight and level enough for good 50-foot (15 m) data, one pre-dawn flight in August 1957 yielded a map-like image of the Willow Run Airport area that did demonstrate 50-foot (15 m) resolution in some parts of the image, whereas the illuminated beam width there was 900 feet (270 m). Although the program had been considered for termination by DoD due to what had seemed to be a lack of results, that first success ensured further funding to continue development leading to solutions to those recognized needs.
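The arithmetic behind those early numbers can be sketched as follows, treating the fully focused resolution as roughly the antenna width and scaling it inversely with the fraction of the beamwidth processed. This is a back-of-envelope reading of the figures quoted above, not a rigorous derivation:

```python
# Back-of-envelope check of the Michigan numbers quoted above
# (assumption: resolution ~ antenna width for the fully focused case,
# scaling inversely with the fraction of beamwidth actually processed).
antenna_width_ft = 5.0
beamwidth_fraction_used = 0.10

focused_resolution_ft = antenna_width_ft                 # best case cited
achieved_resolution_ft = focused_resolution_ft / beamwidth_fraction_used
print(f"expected resolution: {achieved_resolution_ft:.0f} ft")  # ~50 ft
```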
The SAR principle was first acknowledged publicly via an April 1960 press release about the U.S. Army experimental AN/UPD-1 system, which consisted of an airborne element made by Texas Instruments and installed in a Beech L-23D aircraft and a mobile ground data-processing station made by WRRC and installed in a military van. At the time, the nature of the data processor was not revealed. A technical article in the journal of the IRE (Institute of Radio Engineers) Professional Group on Military Electronics in February 1961 described the SAR principle and both the C-46 and AN/UPD-1 versions, but did not tell how the data were processed, nor that the UPD-1's maximum resolution capability was about 50 feet (15 m). However, the June 1960 issue of the journal of the IRE Professional Group on Information Theory had contained a long article on "Optical Data Processing and Filtering Systems" by members of the Michigan group. Although it did not refer to the use of those techniques for radar, readers of both journals could quite easily understand the existence of a connection between articles sharing some authors.

An operational system to be carried in a reconnaissance version of the F-4 "Phantom" aircraft was quickly devised and was used briefly in Vietnam, where it failed to favorably impress its users, due to the combination of its low resolution (similar to the UPD-1's), the speckly nature of its coherent-wave images (similar to the speckliness of laser images), and the poorly understood dissimilarity of its range/cross-range images from the angle/angle optical ones familiar to military photo interpreters. The lessons it provided were well learned by subsequent researchers, operational system designers, image-interpreter trainers, and the DoD sponsors of further development and acquisition.

In subsequent work the technique's latent capability was eventually achieved. That work, depending on advanced radar circuit designs and precision sensing of departures from ideal straight flight, along with more sophisticated optical processors using laser light sources and specially designed very large lenses made from remarkably clear glass, allowed the Michigan group to advance system resolution, at about 5-year intervals, first to 15 feet (4.6 m), then 5 feet (1.5 m), and, by the mid-1970s, to 1 foot (the latter only over very short range intervals while processing was still being done optically). The latter levels and the associated very wide dynamic range proved suitable for identifying many objects of military concern as well as soil, water, vegetation, and ice features being studied by a variety of environmental researchers having security clearances allowing them access to what was then classified imagery. Similarly improved operational systems soon followed each of those finer-resolution steps.

Even the 5-foot (1.5 m) resolution stage had over-taxed the ability of cathode-ray tubes (limited to about 2,000 distinguishable items across the screen diameter) to deliver fine enough details to signal films while still covering wide range swaths, and taxed the optical processing systems in similar ways. However, at about the same time, digital computers finally became capable of doing the processing without similar limitation, and the consequent presentation of the images on cathode-ray tube monitors instead of film allowed for better control over tonal reproduction and for more convenient image mensuration.

Achievement of the finest resolutions at long ranges was aided by adding the capability to swing a larger airborne antenna so as to illuminate a limited target area continually while collecting data over several degrees of aspect, removing the previous limitation of resolution to the antenna width. This was referred to as the spotlight mode, which no longer produced continuous-swath images but, instead, images of isolated patches of terrain.

It was understood very early in SAR development that the extremely smooth orbital path of an out-of-the-atmosphere platform made it ideally suited to SAR operation.
Early experience with artificial earth satellites had also demonstrated that the Doppler frequency shifts of signals traveling through the ionosphere and atmosphere were stable enough to permit very fine resolution to be achievable even at ranges of hundreds of kilometers. While further experimental verification of those facts by a project now referred to as the Quill satellite (declassified in 2012) occurred within the second decade after the initial work began, several of the capabilities for creating useful classified systems did not exist for another two decades. That seemingly slow rate of advances was often paced by the progress of other inventions, such as the laser, the digital computer, circuit miniaturization, and compact data storage.

Once the laser appeared, optical data processing became a fast process because it provided many parallel analog channels, but devising optical chains suited to matching signal focal lengths to ranges proceeded by many stages and turned out to call for some novel optical components. Since the process depended on diffraction of light waves, it required anti-vibration mountings, clean rooms, and highly trained operators. Even at its best, its use of CRTs and film for data storage placed limits on the range depth of images. At several stages, attaining the frequently over-optimistic expectations for digital computation equipment proved to take far longer than anticipated. For example, the SEASAT system was ready to orbit before its digital processor became available, so a quickly assembled optical recording and processing scheme had to be used to obtain timely confirmation of system operation. In 1978, the first digital SAR processor was developed by the Canadian aerospace company MacDonald Dettwiler (MDA). When that digital processor was finally completed and used, the digital equipment of the time took many hours to create one swath of image from each run of a few seconds of data. Still, while that was a step down in speed, it was a step up in image quality. Modern methods now provide both high speed and high quality.

Although the above specifies the system development contributions of only a few organizations, many other groups had also become players as the value of SAR became more and more apparent. Especially crucial to the organization and funding of the initial long development process was the technical expertise and foresight of a number of both civilian and uniformed project managers in equipment procurement agencies in the federal government, particularly, of course, ones in the armed forces and in the intelligence agencies, as well as in some civilian space agencies.

Since a number of publications and Internet sites refer to a young MIT physics graduate named Robert Rines as having invented fine-resolution radar in the 1940s, persons who have been exposed to those may wonder why that has not been mentioned here. Actually, none of his several radar-image-related patents had that goal. Instead, they presumed that fine-resolution images of radar object fields could be accomplished by already-known "dielectric lenses", the inventive parts of those patents being ways to convert those microwave-formed images to visible ones.
However, that presumption incorrectly implied that such lenses and their images could be of sizes comparable to their optical-wave counterparts, whereas the tremendously larger wavelengths of microwaves would actually require the lenses to have apertures thousands of feet (or meters) wide, like the ones simulated by SARs, and the images would be comparably large. Apparently not only did that inventor fail to recognize that fact, but so also did the patent examiners who approved his several applications, and so also have those who have propagated the erroneous tale so widely. Persons seeking to understand SAR should not be misled by references to those patents.

Relationship to phased arrays

A technique closely related to SAR uses an array (referred to as a "phased array") of real antenna elements spatially distributed over either one or two dimensions perpendicular to the radar-range dimension. These physical arrays are truly synthetic ones, indeed being created by synthesis of a collection of subsidiary physical antennas. Their operation need not involve motion relative to targets. All elements of these arrays receive simultaneously in real time, and the signals passing through them can be individually subjected to controlled shifts of the phases of those signals. One result can be to respond most strongly to radiation received from a specific small scene area, focusing on that area to determine its contribution to the total signal received. The coherently detected set of signals received over the entire array aperture can be replicated in several data-processing channels and processed differently in each. The set of responses thus traced to different small scene areas can be displayed together as an image of the scene.

In comparison, a SAR's (commonly) single physical antenna element gathers signals at different positions at different times. When the radar is carried by an aircraft or an orbiting vehicle, those positions are functions of a single variable, distance along the vehicle's path, which is a single mathematical dimension (not necessarily the same as a linear geometric dimension). The signals are stored, thus becoming functions, no longer of time, but of recording locations along that dimension. When the stored signals are read out later and combined with specific phase shifts, the result is the same as if the recorded data had been gathered by an equally long and shaped phased array. What is thus synthesized is a set of signals equivalent to what could have been received simultaneously by such an actual large-aperture (in one dimension) phased array; the SAR simulates (rather than synthesizes) that long one-dimensional phased array. Although the term in the title of this article has thus been incorrectly derived, it is now firmly established by half a century of usage.
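A minimal sketch of the per-element phase shifting described above, as phase-shift-and-sum focusing on one candidate scene point. The geometry, the one-way propagation model, and all parameters are illustrative assumptions:

```python
import numpy as np

# Phase-shift-and-sum focusing for a small linear receive array.
# All geometry and parameters are illustrative assumptions.
lam = 0.03
k = 2 * np.pi / lam
elems = np.linspace(-2.0, 2.0, 16)          # element x-positions, m
target = np.array([1.0, 100.0])             # true scatterer (x, range), m

def one_way_range(px, point):
    return np.hypot(point[0] - px, point[1])

# Simulated received phases from the true target (one-way, unit amplitude).
signals = np.exp(-1j * k * one_way_range(elems, target))

def focus(point):
    """Counter-rotate each element by its computed phase, then sum."""
    expected = np.exp(-1j * k * one_way_range(elems, point))
    return abs(np.sum(signals * np.conj(expected))) / len(elems)

# The response is maximal at the true scene point, smaller elsewhere.
for x_test in (-1.0, 0.0, 1.0, 2.0):
    print(x_test, f"{focus(np.array([x_test, 100.0])):.3f}")
```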
While operation of a phased array is readily understood as a completely geometric technique, the fact that a synthetic aperture system gathers its data as it (or its target) moves at some speed means that phases which varied with the distance traveled originally varied with time, hence constituted temporal frequencies. Temporal frequencies being the variables commonly used by radar engineers, their analyses of SAR systems are usually (and very productively) couched in such terms; in particular, the variation of phase during flight over the length of the synthetic aperture is seen as a sequence of Doppler shifts of the received frequency from that of the transmitted frequency. It is significant, though, to realize that, once the received data have been recorded and thus have become timeless, the SAR data-processing situation is also understandable as a special type of phased array, treatable as a completely geometric process.

The core of both the SAR and the phased-array techniques is that the distances that radar waves travel to and back from each scene element consist of some integer number of wavelengths plus some fraction of a "final" wavelength. Those fractions cause differences between the phases of the re-radiation received at various SAR or array positions. Coherent detection is needed to capture the signal phase information in addition to the signal amplitude information; that type of detection requires finding the differences between the phases of the received signals and the simultaneous phase of a well-preserved sample of the transmitted illumination.

Every wave scattered from any point in the scene has a circular curvature about that point as a center. Signals from scene points at different ranges therefore arrive at a planar array with different curvatures, resulting in signal phase changes which follow different quadratic variations across a planar phased array. Additional linear variations result from points located in different directions from the center of the array. Fortunately, any one combination of these variations is unique to one scene point, and is calculable; for a SAR, the two-way travel doubles that phase change.

In reading the following two paragraphs, be particularly careful to distinguish between array elements and scene elements. Also remember that each of the latter has, of course, a matching image element.

Comparison of the array-signal phase variation across the array with the total calculated phase-variation pattern can reveal the relative portion of the total received signal that came from the only scene point that could be responsible for that pattern. One way to do the comparison is by a correlation computation: multiplying, for each scene element, the received and the calculated field-intensity values array element by array element and then summing the products for each scene element. Alternatively, one could, for each scene element, subtract each array element's calculated phase shift from the actual received phase and then vectorially sum the resulting field-intensity differences over the array. Wherever in the scene the two phases substantially cancel everywhere in the array, the difference vectors being added are in phase, yielding, for that scene point, a maximum value for the sum. The equivalence of these two methods can be seen by recognizing that multiplication of sinusoids can be done by summing phases which are complex-number exponents of e, the base of natural logarithms.
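That equivalence can be checked numerically: multiplying by the conjugate of the calculated field element by element and summing (the correlation method) gives exactly the same complex result as subtracting phases and vectorially summing the resulting unit phasors. A small sketch with arbitrary illustrative phases:

```python
import numpy as np

rng = np.random.default_rng(0)
received_phase = rng.uniform(0, 2 * np.pi, 64)    # measured at 64 array elements
calculated_phase = rng.uniform(0, 2 * np.pi, 64)  # predicted for one scene point

received = np.exp(1j * received_phase)

# Method 1: correlation -- multiply by the conjugate of the calculated
# field, element by element, then sum the products.
corr_sum = np.sum(received * np.exp(-1j * calculated_phase))

# Method 2: subtract the calculated phase from the received phase at each
# element, then vectorially sum the resulting unit phasors.
diff_sum = np.sum(np.exp(1j * (received_phase - calculated_phase)))

# The two are the same complex number, because multiplying complex
# exponentials adds their exponents: e^(ja) * e^(-jb) = e^(j(a-b)).
assert np.allclose(corr_sum, diff_sum)
print(abs(corr_sum))
```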
However it is done, the image-deriving process amounts to "backtracking" the process by which nature previously spread the scene information over the array; in each direction, the process may be viewed as a Fourier transform, which is a type of correlation process. The image-extraction process we use can then be seen as another Fourier transform, a reversal of the original natural one.

It is important to realize that only those sub-wavelength differences of successive ranges from the transmitting antenna to each target point and back, which govern signal phase, are used to refine the resolution in any geometric dimension. The central direction and the angular width of the illuminating beam do not contribute directly to creating that fine resolution; instead, they serve only to select the solid-angle region from which usable range data are received. While some distinguishing of the ranges of different scene items can be made from the forms of their sub-wavelength range variations at short ranges, the very large depth of focus that occurs at long ranges usually requires that over-all range differences (larger than a wavelength) be used to define range resolutions comparable to the achievable cross-range resolution.
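The earlier remark that a Fourier transform is a type of correlation process can be made concrete with the correlation theorem, which is also how fast digital processors implement the matched filtering sketched earlier: correlation in one domain is conjugate multiplication in the other. An illustrative check:

```python
import numpy as np

rng = np.random.default_rng(1)
a = rng.standard_normal(256) + 1j * rng.standard_normal(256)
b = rng.standard_normal(256) + 1j * rng.standard_normal(256)

# Circular cross-correlation computed directly...
direct = np.array([np.sum(a * np.conj(np.roll(b, k))) for k in range(256)])
# ...and via FFTs (the correlation theorem): corr = IFFT(FFT(a)*conj(FFT(b)))
via_fft = np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b)))

assert np.allclose(direct, via_fft)
print("circular correlation via FFT matches the direct sum")
```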
Data collection

Highly accurate data can be collected by aircraft overflying the terrain in question. In the 1980s, as a prototype for instruments to be flown on the NASA Space Shuttles, NASA operated a synthetic aperture radar on a NASA Convair 990. In 1986, this plane caught fire on takeoff. In 1988, NASA rebuilt a C-, L-, and P-band SAR to fly on the NASA DC-8 aircraft. Called AIRSAR, it flew missions at sites around the world until 2004. Another such aircraft, the Convair 580, was flown by the Canada Center for Remote Sensing until about 1996, when it was handed over to Environment Canada for budgetary reasons.

Most land-surveying applications are now carried out by satellite observation. Satellites such as ERS-1/2, JERS-1, Envisat ASAR, and RADARSAT-1 were launched explicitly to carry out this sort of observation. Their capabilities differ, particularly in their support for interferometry, but all have collected tremendous amounts of valuable data. The Space Shuttle also carried synthetic aperture radar equipment during the SIR-A and SIR-B missions during the 1980s, the Shuttle Radar Laboratory (SRL) missions in 1994, and the Shuttle Radar Topography Mission in 2000.

Synthetic aperture radar was first used by NASA on JPL's Seasat oceanographic satellite in 1978 (this mission also carried an altimeter and a scatterometer); it was later developed more extensively on the Spaceborne Imaging Radar (SIR) missions on the Space Shuttle in 1981, 1984, and 1994. The Cassini mission to Saturn used SAR to map the surface of the planet's major moon Titan, whose surface is partly hidden from direct optical inspection by atmospheric haze. The SHARAD sounding radar on the Mars Reconnaissance Orbiter and the MARSIS instrument on Mars Express have observed bedrock beneath the surface of the Martian polar ice and have also indicated the likelihood of substantial water ice in the Martian middle latitudes. The Lunar Reconnaissance Orbiter, launched in 2009, carries a SAR instrument called Mini-RF, which was designed largely to look for water ice deposits on the poles of the Moon.

The Mineseeker Project is designing a system for determining whether regions contain landmines, based on a blimp carrying ultra-wideband synthetic aperture radar. Initial trials show promise; the radar is able to detect even buried plastic mines. SAR has been used in radio astronomy for many years to simulate a large radio telescope by combining observations taken from multiple locations using a mobile antenna.

The Alaska Satellite Facility provides production, archiving, and distribution of SAR data products and tools from active and past missions to the scientific community, including the June 2013 release of newly processed, 35-year-old Seasat SAR imagery. CSTARS downlinks and processes SAR data (as well as other data) from a variety of satellites and supports the University of Miami Rosenstiel School of Marine and Atmospheric Science. CSTARS also supports disaster relief operations, oceanographic and meteorological research, and port and maritime security research projects.

See also

- Alaska Satellite Facility
- Aperture synthesis
- Earth observation satellite
- Interferometric synthetic aperture radar (InSAR)
- Inverse synthetic aperture radar (ISAR)
- Magellan space probe
- Radar MASINT
- Remote sensing
- SAR Lupe
- Speckle noise
- Synthetic aperture sonar
- Synthetic array heterodyne detection (SAHD)
- Synthetically thinned aperture radar
- Terrestrial SAR interferometry (TInSAR)
- Very-long-baseline interferometry (VLBI)
- Wave radar
"A computational Kronecker-core array algebra SAR raw data generation modeling system". Signals, Systems and Computers, 2001. Conference Record of the Thirty-Fifth Asilomar Conference on Year: 2001. 1. - T. Gough, Peter (June 1994). "A Fast Spectral Estimation Algorithm Based on the FFT". Ieee Transactions on Signal Processing. 42 (6). - Datcu, Mihai; Popescu, Anca; Gavat, Inge (2008). "Complex SAR image characterization using space variant spectral analysis". 2008 IEEE Radar Conference. - J. Capo4 (August 1969). "High resolution frequency wave-number spectrum analysis". Proc. IEEE. 57: 1408–1418. - A. Jakobsson; S. L. Marple; P. Stoica (2000). "Computationally efficient two-dimensional Capon spectrum analysis". IEEE Transactions on Signal Processing. 48 (9). Bibcode:2000ITSP...48.2651J. doi:10.1109/78.863072. - I. Yildirim; N. S. Tezel; I. Erer; B. Yazgan. "A comparison of non-parametric spectral estimators for SAR imaging". Recent Advances in Space Technologies, 2003. RAST '03. International Conference on. Proceedings of Year: 2003. - "Iterative realization of the 2-D Capon method applied in SAR image processing", IET International Radar Conference 2015. - R. Alty, Stephen; Jakobsson, Andreas; G. Larsson, Erik. "Efficient implementation of the time-recursive Capon and APES spectral estimators". Signal Processing Conference, 2004 12th European. - Li, Jian; P. Stoica (1996). "An adaptive filtering approach to spectral estimation and SAR imaging". IEEE Transactions on Signal Processing. 44 (6). Bibcode:1996ITSP...44.1469L. doi:10.1109/78.506612. - Li, Jian; E. G. Larsson; P. Stoica (2002). "Amplitude spectrum estimation for two-dimensional gapped data". IEEE Transactions on Signal Processing. 50 (6). Bibcode:2002ITSP...50.1343L. doi:10.1109/tsp.2002.1003059. - Moreira, Alberto. "Synthetic Aperture Radar: Principles and Applications" (PDF). - Duersch, Michael. "Backprojection for Synthetic Aperture Radar". BYU ScholarsArchive. - Zhuo, LI; Chungsheng, LI. "BACK PROJECTION ALGORITHM FOR HIGH RESOLUTION GEO-SAR IMAGE FORMATION". School of Electronics and Information Engineering, BeiHang University. - Xiaoling, Zhang; Chen, Cheng. "A new super-resolution 3D-SAR imaging method based on MUSIC algorithm". 2011 IEEE RadarCon (RADAR). - A. F. Yegulalp. "Fast backprojection algorithm for synthetic aperture radar". Radar Conference, 1999. the Record of the 1999 IEEE Year: 1999. - Mark T. Crockett, "An Introduction to Synthetic Aperture Radar:A High-Resolution Alternative to Optical Imaging" - C. Romero, High Resolution Simulation of Synthetic Aperture Radar Imaging. 2010. [Online]. Available: http://digitalcommons.calpoly.edu/cgi/viewcontent.cgi?article=1364&context=theses. Accessed: Nov. 14, 2016. - "Y. Yamaguchi; T. Moriyama; M. Ishido; H. Yamada,"Four-component scattering model for polarimetric SAR image decomposition"". IEEE Transactions on Geoscience and Remote Sensing Year: 2005, Volume: 43, Issue: 8. - Woodhouse, H.I. 2009. Introduction to microwave remote sensing. CRC Press, Taylor & Fancis Group, Special Indian Edition. - "A. Freeman and S. L. Durden, "A three-component scattering model for polarimetric SAR data,"". IEEE Trans. Geosci. Remote Sens. 36 (3, pp. 963–973, May 1998.). - "Gianfranco Fornaro; Diego Reale; Francesco Serafino,"Four-Dimensional SAR Imaging for Height Estimation and Monitoring of Single and Double Scatterers"". IEEE Transactions on Geoscience and Remote Sensing Year: 2009, Volume: 47, Issue: 1. 
- "Haijian Zhang; Wen Yang; Jiayu Chen; Hong Sun," Improved Classification of Polarimetric SAR Data Based on Four-component Scattering Model"". 2006 CIE International Conference on Radar. - Lombardini, Fabrizio; Viviani, Federico. "Multidimensional SAR Tomography: Advances for Urban and Prospects for Forest/Ice Applications". IEEE. - "Synthetic Aperture Radar", L. J. Cutrona, Chapter 23 (25 pp) of the McGraw Hill "Radar Handbook", 1970. (Written while optical data processing was still the only workable method, by the person who first led that development.) - "A short history of the Optics Group of the Willow Run Laboratories", Emmett N. Leith, in Trends in Optics: Research, Development, and Applications (book), Anna Consortini, Academic Press, San Diego: 1996. - "Sighted Automation and Fine Resolution Imaging", W. M. Brown, J. L. Walker, and W. R. Boario, IEEE Transactions on Aerospace and Electronic Systems, Vol. 40, No. 4, October 2004, pp 1426–1445. - "In Memory of Carl A. Wiley", A. W. Love, IEEE Antennas and Propagation Society Newsletter, pp 17–18, June 1985. - "Synthetic Aperture Radars: A Paradigm for Technology Evolution", C. A. Wiley, IEEE Transactions on Aerospace and Electronic Systems, v. AES-21, n. 3, pp 440–443, May 1985 - Gart, Jason H. "Electronics and Aerospace Industry in Cold War Arizona, 1945–1968: Motorola, Hughes Aircraft, Goodyear Aircraft." Phd diss., Arizona State University, 2006. - "Some Early Developments in Synthetic Aperture Radar Systems", C. W. Sherwin, J. P. Ruina, and R. D. Rawcliffe, IRE Transactions on Military Electronics, April 1962, pp. 111–115. - This memo was one of about 20 published as a volume subsidiary to the following reference. No unclassified copy has yet been located. Hopefully, some reader of this article may come across a still existing one. - "Problems of Battlefield Surveillance", Report of Project TEOTA (The Eyes Of The Army), 1 May 1953, Office of the Chief Signal Officer. Defense Technical Information Center (Document AD 32532) - "A Doppler Technique for Obtaining Very Fine Angular Resolution from a Side-Looking Airborne Radar" Report of Project Michigan No. 2144-5-T, The University of Michigan, Willow Run Research Center, July 1954. (No declassified copy of this historic originally confidential report has yet been located.) - "High-Resolution Radar Achievements During Preliminary Flight Tests", W. A. Blikken and G.O. Hall, Institute of Science and Technology, Univ. of Michigan, 1 September 1957. Defense Technical Information Center (Document AD148507) - "A High-Resolution Radar Combat-Intelligence System", L. J. Cutrona, W. E. Vivian, E. N. Leith, and G. O Hall; IRE Transactions on Military Electronics, April 1961, pp 127–131 - "Optical Data Processing and Filtering Systems", L. J. Cutrona, E. N. Leith, C. J. Palermo, and L. J. Porcello; IRE Transactions on Information Theory, June 1960, pp 386–400. - An experimental study of rapid phase fluctuations induced along a satellite to earth propagation path, Porcello, L.J., Univ. of Michigan, April 1964 - Quill (satellite) - "Observation of the earth and its environment: survey of missions and sensors", Herbert J. Kramer - "Principles of Synthetic Aperture Radar", S. W. McCandless and C. R. Jackson, Chapter 1 of "SAR Marine Users Manual", NOAA, 2004, p.11. - U. S. Pat. Nos. 2696522, 2711534, 2627600, 2711530, and 19 others - The first and definitive monograph on SAR is Synthetic Aperture Radar: Systems and Signal Processing (Wiley Series in Remote Sensing and Image Processing) by John C. 
Curlander and Robert N. McDonough - The development of synthetic-aperture radar (SAR) is examined in Gart, Jason H. "Electronics and Aerospace Industry in Cold War Arizona, 1945–1968: Motorola, Hughes Aircraft, Goodyear Aircraft." Phd diss., Arizona State University, 2006. - A text that includes an introduction on SAR suitable for beginners is "Introduction to Microwave Remote Sensing" by Iain H Woodhouse, CRC Press, 2006. - Moreira, A.; Prats-Iraola, P.; Younis, M.; Krieger, G.; Hajnsek, I.; Papathanassiou, K. P. (2013). "A tutorial on synthetic aperture radar". IEEE Geoscience and Remote Sensing Magazine. 1: 6. doi:10.1109/MGRS.2013.2248301. - InSAR measurements from the Space Shuttle - Images from the Space Shuttle SAR instrument - The Alaska Satellite Facility has numerous technical documents, including an introductory text on SAR theory and scientific applications - NASA radar reveals hidden remains at ancient Angkor – Jet Propulsion Laboratory
<urn:uuid:144edaae-94d4-43d1-8b9f-2b12eb9c9d56>
4.15625
19,478
Knowledge Article
Science & Tech.
29.424327
95,494,535
Philip T. Starks, associate professor of biology at the School of Arts and Sciences at Tufts University, and doctoral student Anne A. Madden published their discovery in the International Journal of Systematic and Evolutionary Microbiology. The news appeared online September 19, 2011, in advance of print.

"We found the fungus in a wasp nest near the dumpsters at Tufts University," says Madden of the discovery. The research team set out to explore a new environment for novel species of bacteria and fungi—single-celled organisms that inhabit most places in the world. Despite there being more bacterial species in the world than stars in the sky, scientists have described only approximately 10 percent of the species thought to exist, says Madden. Attempts to identify species are hindered by the difficulty scientists encounter when working with organisms so small that hundreds of thousands can fit on the period at the end of this sentence.

These wasps often build nests on houses, trash containers and other familiar structures. "Nests of the invasive species of paper wasps had never been investigated for their microbial community," says Madden. "This is despite the wasp's cosmopolitan distribution and their frequent use as a model system in the field of animal behavior. Because researchers know so much about this host wasp, we thought it would be particularly valuable to characterize the microbes of the nest."

The scientists took samples from active nests and placed them in a nutrient-based medium, as one would plant a garden with a handful of unknown seeds to see what grows. The researchers grew a number of different fungi and used genetic sequencing techniques to tease apart species identities. They found that one fungus had a unique gene sequence that suggested it had not previously been characterized.

A Fur-like Fungus

Further laboratory studies confirmed that the scientists had indeed discovered a new species: a fluffy, white and fast-growing fungus that resembled bunny fur, says Madden. The scientists named the new species of fungus Mucor nidicola. They chose the species name nidicola because the word translates from Latin to "living in another's nest." The findings will contribute to understanding the diverse world of fungi.

"When most people think of microbes, they immediately think of those bacteria or fungi that cause disease," says Madden. "While certain microbes do cause disease, many produce compounds or carry out reactions that are crucial for human society. In fact, most of the antibiotics on the market are actually produced by bacteria that live in the soil."

The researchers now plan to investigate further to see what other species are present in the nest's microbial community, says Madden. "It's shocking, but also quite exciting, that we know more about what microbes live under the sea than we do about those that associate with the insects that actually live in our houses," says Starks.

The study was funded by a Tufts Institute for the Environment fellowship, a National Science Foundation Graduate Student Research Fellowship, and Tufts University Graduate Student Research Awards. A.M. Stchigel and J. Guarro of the Universitat Rovira i Virgili, and D.A. Sutton of the University of Texas Health Science Center were co-authors on the paper.

Madden A.A., Stchigel A.M., Guarro J., Sutton D.A., Starks P.T. Mucor nidicola sp. nov., a novel fungal species isolated from an invasive paper wasp nest. Int J Syst Evol Microbiol. [Epub ahead of print] doi:10.1099/ijs.0.033050-0.
Tufts University, located on three Massachusetts campuses in Boston, Medford/Somerville, and Grafton, and in Talloires, France, is recognized among the premier research universities in the United States. Tufts enjoys a global reputation for academic excellence and for the preparation of students as leaders in a wide range of professions. A growing number of teaching and research initiatives span all Tufts campuses, and collaboration among the faculty and students in the undergraduate, graduate and professional programs across the university's schools is widely encouraged.
<urn:uuid:6077b0cf-f0df-4221-bb94-2e61d523146b>
3.875
1,451
Content Listing
Science & Tech.
41.678528
95,494,551
Earthwatch provides citizens with the opportunity to work alongside leading scientists to combat some of the planet's most pressing environmental issues. With Earthwatch, you'll experience hands-on science in some of the most astounding locations in the world. You'll meet a community of like-minded travelers and return home with stories filled with adventure.

- Help gather critical data on the sustainable use of one of Mexico City's last wetlands.
- What can we learn about the behavior, habits, and needs of wildlife of the Mongolian steppe?
- How does an active volcano shape the world around it? Peer into the crater of the Masaya Volcano to find out.
- Even in a world-class protected area, wildlife needs our support. How are giraffes, elephants, and others faring?
- When did ancient Portuguese societies shift to agriculture? Hunter-gatherers and farmers may have coexisted for a brief time here. Unearth t...
- How did the people of the Khmer Empire manage a changing climate and what can their resilience teach us today?
- What can we learn about Italy's ancient people from the ruins they left along the coast of Tuscany? Help us dust off clues.
- How much can the lowly caterpillar tell us about the world we live in? More than you might imagine.
- What does climate change mean for one of America's most famous national parks?
- Help discover and protect this delicate Alpine environment from climate change, and from ourselves.
- How is a national treasure being reshaped by the changing climate? Help scientists search for clues in Acadia National Park.
- Why have Pacific leatherback sea turtles almost disappeared? Look for answers and solutions on Costa Rica's beaches.
- How can we keep shark and ray populations strong? Find answers while exploring some of the world's most beautiful reefs.
- Help Earthwatch scientists monitor the health of the coral reefs that form Australia's Great Barrier Reef.
- Where do endangered sea turtles thrive? Help scientists find out and protect these critical habitats.
- Help conserve wildlife within the Amazon Basin, while seeking pink river dolphins, primates, macaws, caiman, giant river otters, piranha, and exotic fish.
- Scientists expect to observe the greatest effects of global warming in the Arctic. But what, exactly, will these effects be?
<urn:uuid:803a9f2f-6931-43fd-adc9-4b38cf60f262>
2.65625
619
Content Listing
Science & Tech.
56.33377
95,494,555
"As soon as those two halves came together, like puzzle pieces, you knew it," said Ted Daeschler, PhD, associate curator of vertebrate zoology and vice president for collections at the Academy of Natural Sciences of Drexel University. That surprising puzzle assembly occurred in the fall of 2012, when Jason Schein, assistant curator of natural history at the New Jersey State Museum, visited the Academy's research collections to better identify and describe a recently-unearthed fossil. The discovery linked scientists from both museums to their predecessors from the 19th century, while setting the stage to advance science today. A 3-D scan of the two broken turtle limb fossils from Atlantocheyls mortoni shows a detailed view of their surfaces. Paleontologists from the Academy of Natural Sciences of Drexel University and from the New Jersey State Museum concluded that these two fossils came from the same animal, despite being discovered separately at least 163 years apart. Credit: Credit: Jesse Pruitt, Idaho Museum of Natural History The partial fossil bone that Schein had brought to the Academy was a recent discovery by amateur paleontologist Gregory Harpel. Harpel thought the bone seemed strange and out of place when he noticed it on a grassy embankment, a bit upstream from his usual fossil-hunting haunt at a brook in Monmouth County, N.J. Visiting the brook to search for fossil shark teeth is a weekend hobby for Harpel, an analytical chemist from Oreland, Pa. "I picked it up and thought it was a rock at first – it was heavy," Harpel said. When he realized it was indeed a fossil, certainly much larger and possibly a lot more scientifically significant than shark teeth, he took it to the experts at the New Jersey State Museum, to which he ultimately donated his find. Schein and David Parris, the museum's curator of natural history, immediately recognized the fossil as a humerus – the large upper arm bone – from a turtle, but its shaft was broken so that only the distal end, or end nearest to the elbow, remained. Parris also thought the fossil looked extremely familiar. He joked with Schein that perhaps it was the missing half of a different large, partial turtle limb housed in the collections at the Academy of Natural Sciences of Drexel University. That bone also had a broken shaft, but only its proximal end, nearest to the shoulder, remained. The coincidence was striking. "I didn't think there was any chance in the world they would actually fit," Schein said. That's because the Academy's piece of the puzzle was much too old, according to the conventional wisdom of paleontology. Paleontologists expect that fossils found in exposed strata of rock will break down from exposure to the elements if they aren't collected and preserved, at least within a few years-- decades at the most. There was no reason to think a lost half of the same old bone would survive, intact and exposed, in a New Jersey streambed from at least the time of the old bone's first scientific description in 1849, until Harpel found it in 2012. The Academy's older bone was also without a match of any kind, making a perfect match seem even more farfetched: It was originally named and described by famed 19th-century naturalist Louis Agassiz as the first, or type specimen, of its genus and species, Atlantochelys mortoni. In the intervening years, it remained the only known fossil specimen from that genus and species. 
It remained so until that fateful day when Schein carried the "new" New Jersey fossil to the Academy in Philadelphia, connecting the two halves. The perfect fit between the fossils left little space for doubt. Stunned by the implications, Schein and Academy paleontology staffers Jason Poole and Ned Gilmore, who had assembled the puzzle together, called Daeschler into the room. "Sure enough, you have two halves of the same bone, the same individual of this giant sea turtle," said Daeschler. "One half was collected at least 162 years before the other half."

Now the scientists are revising their conventional wisdom to say that, sometimes, exposed fossils can survive longer than previously thought. They report their remarkable discovery in the forthcoming 2014 issue of the Proceedings of the Academy of Natural Sciences of Philadelphia. The find is also featured in the April 2014 issue of National Geographic magazine, on newsstands now. "The astounding confluence of events that had to have happened for this to be true is just unbelievable, and probably completely unprecedented in paleontology," said Schein.

The fully assembled A. mortoni humerus now gives the scientists more information about the massive sea turtle it came from as well. With a complete limb, they have calculated the animal's overall size – about 10 feet from tip to tail, making it one of the largest sea turtles ever known. The species may have resembled modern loggerhead turtles, but was much larger than any sea turtle species alive today. The scientists believe that the entire unbroken bone was originally embedded in sediment during the Cretaceous Period, 70 to 75 million years ago, when the turtle lived and died. Those sediments then eroded, and the bone fractured millions of years later, during the Pleistocene or Holocene, before the bone pieces became embedded in sediments and protected from further deterioration for perhaps a few thousand more years until their discovery.

News media contacts: Rachel Ewing, News Officer, Office of University Communications, Drexel University, 215-895-2614 (office), 215-298-4600 (cell); Carolyn Belardo, Senior Communications Manager, The Academy of Natural Sciences of Drexel University; Susan Greitz, Marketing Coordinator, NJ State Museum.

About the Academy of Natural Sciences of Drexel University: Founded in 1812, the Academy of Natural Sciences of Drexel University is a leading natural history museum dedicated to advancing research, education, and public engagement in biodiversity and environmental science. The museum, at 1900 Benjamin Franklin Pkwy., Philadelphia, is open from 10 a.m. to 4:30 p.m. Monday through Friday and 10 a.m. to 5 p.m. Saturday and Sunday. For more information, call 215-299-1000 or visit ansp.org.

About the New Jersey State Museum: Established in 1895, the New Jersey State Museum serves the life-long educational needs of residents and visitors through its collections, exhibitions, programs, publications and scholarships in science, history and the arts. Within a broad context, the Museum explores the natural and cultural diversity of New Jersey, past and present. The New Jersey State Museum, located at 205 West State Street in Trenton, is open Tuesday through Sunday from 9 am to 4:45 pm. The Museum is closed Mondays and all state holidays. The NJ State Museum has a "suggested" admission fee. Admission fee revenue supports the Museum's collections, exhibitions and programs.
For more information, please visit the Museum's website at http://www.statemuseum.nj.gov or call the recorded information line at (609) 292-6464. On weekends, free parking is available in lots adjacent to and behind the Museum. Please visit http://www.trentonparking.com for a number of options for parking in downtown Trenton during the week.
<urn:uuid:af837d0c-5b89-427d-b12a-eb1548e09e13>
3.046875
2,120
Content Listing
Science & Tech.
42.95213
95,494,566
In a breakthrough that could one day revolutionize transportation and electricity generation, scientists at a university in Kanagawa, Japan, demonstrated this month a disc that spins at over 200 rotations per minute when placed over a magnet in direct sunlight, saying the discovery could help create a wholly “new class” of solar energy. Professor Jiro Abe and Dr. Masayuki Kobayashi presented their discovery in the December issue of the Journal of the American Chemical Society. Speaking to a reporter from Phys.org, Abe said their research represents “the first time in the world” that humans have been able to achieve “real-time motion control” of inanimate objects without individual parts of the machine coming into direct contact. The study goes on to explain that the effect works because the light slightly changes the temperature of the graphite disc, which causes subtle fluctuations in the material’s “magnetic susceptibility.” Video of the discovery published to YouTube earlier this month showed scientists moving a tiny disc over an array of small magnets by firing a laser at it. Additional footage also featured that same disc levitating over a single magnet, rapidly spinning in place when placed under direct sunlight. “Because this technique is very simple and fundamental, it is expected to apply to various daily living techniques, such as transportation systems and amusement, as well as photo-actuators and light energy conversion systems,” Abe reportedly said. It is unclear how long such a discovery will take to be implemented into a practical mass-transportation system, if ever. It also remains questionable whether the discovery will lend itself to a device that generates enough electricity to become self-sustaining. This video was published to YouTube on December 19, 2012. Photo: Shutterstock.com, all rights reserved. Updated from an original version to fix a misspelling.
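As a rough illustration of the mechanism the article describes, here is a toy model. Graphite is diamagnetic, and the magnitude of its susceptibility falls slightly as it warms, so a sunlit edge of the disc is repelled less strongly than a shaded edge, producing a net imbalance. The constants below are illustrative placeholders, not values from the paper.

```python
# Toy model (not from the paper): if sunlight heats one edge of the levitating
# graphite disc, |chi| drops there, that edge is repelled less strongly, and a
# net torque results. All constants are assumed for illustration only.

CHI_0 = -4.0e-4        # assumed room-temperature volume susceptibility
K_TEMP = 1.0e-3        # assumed fractional |chi| loss per kelvin of warming

def susceptibility(delta_t: float) -> float:
    """Susceptibility after a small temperature rise delta_t (K)."""
    return CHI_0 * (1.0 - K_TEMP * delta_t)

# Relative imbalance in repulsive force between a lit edge (+5 K) and a shaded
# edge (+0 K); for a fixed field, the force scales linearly with chi.
lit, shaded = susceptibility(5.0), susceptibility(0.0)
print(f"force imbalance: {abs(lit - shaded) / abs(shaded):.3%}")  # ~0.5%
```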
<urn:uuid:4969c1db-0045-482d-862f-da9d21c5f0b2>
3.625
378
News Article
Science & Tech.
28.931939
95,494,570
Trying to convert -24 ℃ to Kelvin units? The first thing to understand about Celsius to Kelvin conversion is that a step of 1 ℃ equals a step of 1 K, but the Kelvin scale starts at absolute zero (0 K = -273.15 ℃). At this temperature matter has its minimum thermal energy, and no practical electric device can operate there. This means that all useful temperatures of elements lie on the positive side of the Kelvin scale, unlike the Celsius scale, which has important temperatures on both the negative and positive sides. What is negative (-)24 Celsius (℃) in Kelvin (K)?
-24 ℃ to K = -24 + 273.15 = 249.15 K
Why convert from Celsius to Kelvin? There are many reasons that will prompt you to convert temperature units, but the most important reasons include presentation, recording and computation. Thus, it is important to understand how to convert from Celsius to Kelvin and vice versa. You can bookmark this page for future reference.
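As a sketch of the arithmetic above, here is a minimal converter; the function name and the guard against temperatures below absolute zero are additions for illustration:

```python
# Minimal Celsius-to-Kelvin converter: K = C + 273.15.
def celsius_to_kelvin(c: float) -> float:
    if c < -273.15:
        raise ValueError("temperature below absolute zero")
    return c + 273.15

print(f"{celsius_to_kelvin(-24):.2f}")  # 249.15
```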
<urn:uuid:884ada53-b9b5-4832-af85-47cb170acdb3>
3.484375
253
Tutorial
Science & Tech.
49.584545
95,494,574
Scientists from nine nations have set sail for the Integrated Ocean Drilling Program (IODP) Tahiti Sea Level Expedition, a research expedition initiated to investigate global sea level rise since the last glacial maximum, approximately 23,000 years ago. For six weeks, aboard the DP HUNTER, the expedition science party will work on the most extensive geological research investigation ever undertaken in a coral reef area. Off the coast of Tahiti, IODP scientists will take samples of fossil corals from the ocean seafloor to analyze the environmental records that are inside them. Scientists expect the coral reefs to yield records on changes in sea surface temperature during the circumscribed period and information on climatic anomalies, including El Niño/Southern Oscillation events. Through this research expedition, IODP scientists aim to learn more about the timing and course of past global sea level changes to better understand present and future sea level rise due to global greenhouse conditions. Since the climax of the last ice age, global sea level has risen by about 120 meters, primarily because of the melting of large inland ice sheets and thermal expansion of the global body of ocean water attributable to rising temperatures. According to IODP scientists, Tahiti is well situated for these investigations because the island is located in a tectonically stable region. Consequently, changes in sea level here can be related solely to global effects. Because the corals off Tahiti have strict ecological requirements and are extremely sensitive to environmental changes, both natural and human-induced, they are accurate, sensitive recorders of past sea level and climatic change.
<urn:uuid:56727531-9963-46ee-a9ff-8e334944e589>
3.734375
892
Content Listing
Science & Tech.
33.094516
95,494,583
Train and car
A train and a car set out on a journey, each at a constant speed. While the train travels 87 km, the car travels 97 km. How many km does the train travel while the car travels 87 km? (A worked solution follows the examples list below.)
Next similar examples:
- Two cars
Two cars set out toward each other at the same time on a journey 293 km long. The first car traveled at 41 km/h and the second at 41 km/h. What distance will be between the cars 20 minutes before they meet?
- Gimli Glider
A Boeing 767 loses both engines at 45,000 feet. The captain maintains optimum gliding conditions: every minute the plane loses 1,870 feet while holding a constant speed of 212 knots. Calculate how long it takes the plane to hit the ground after the engine failure.
- Pedestrian up-down hill
A pedestrian walks first on flat ground at 4 km/h, then uphill at 3 km/h. At the midpoint of the route, the pedestrian turns back and goes downhill at 6 km/h. The whole walk took 6 hours. How many kilometers did the pedestrian cover?
How many times a day do the hands on a clock overlap?
Calculate the average gradient, in per mille, of the river Vltava, given that along a section 928 km long the water falls from 1592 m AMSL to 108 m AMSL.
From an observatory 14 m high and 32 m from the river bank, the width of the river appears in the visual angle φ = 20°. Calculate the width of the river.
An observer sees a straight fence 100 m long at a 30° viewing angle; one end of the fence is 153 m away. How far away is the other end of the fence?
A cuboid with edge a = 24 cm and body diagonal u = 50 cm has volume V = 17280 cm³. Calculate the lengths of the other edges.
- Triangle ABC
Calculate the sides of triangle ABC with area 1404 cm², given a : b : c = 12 : 7 : 18.
- Circle arc
A circle segment has a circumference of 41.89 m and an area of 251.33 m². Calculate the radius of the circle and the size of the central angle.
A mast casts a 13 m long shadow on a slope that rises from the mast's foot in the direction of the shadow at an angle of 15°. Determine the height of the mast if the sun is at an angle of 33° above the horizon.
- Cube in a sphere
A cube is inscribed in a sphere with volume 3724 cm³. Determine the length of the cube's edges.
- Eiffel Tower
The Eiffel Tower in Paris is 300 meters high and is made of steel. Its weight is 8000 tons. How tall is a model of the tower made of the same material if it weighs 2.4 kg?
At point O act three orthogonal forces: F1 = 20 N, F2 = 7 N and F3 = 19 N. Determine the resultant F and the angles between F and the forces F1, F2 and F3.
- Slope of track
Calculate the average slope (in per mille and in degrees) of the rail track between Prievidza (309 m AMSL) and Nitra (167 m AMSL), if the track is 77 km long.
A rectangle is 11 cm long and 45 cm wide. Determine the radius of the circle circumscribing the rectangle.
- Transforming cuboid
A cuboid with dimensions 10 cm, 17 cm and 17 cm is converted into a cube of the same volume. What is its edge length?
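Worked solution to the train-and-car problem above, assuming both vehicles hold constant speeds (so distances covered in the same time stay in a fixed ratio):

$$\frac{d_{\text{train}}}{d_{\text{car}}} = \frac{87}{97}
\quad\Rightarrow\quad
d_{\text{train}} = 87~\text{km}\times\frac{87}{97} = \frac{7569}{97}~\text{km} \approx 78.0~\text{km}$$

The train therefore travels about 78 km while the car travels 87 km.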
<urn:uuid:d5458be0-b2cf-49ee-b206-d8d1099dc616>
3.34375
785
Tutorial
Science & Tech.
86.379671
95,494,598
Like millions of other people, Wanda Diaz Merced plans to observe the August 21 total solar eclipse, when the moon’s shadow will sweep across the sun and, for a few brief moments, coat parts of the United States in darkness. But she won’t see it. She’ll hear it. Diaz Merced, an astrophysicist, is blind, with just 3 percent of peripheral vision in her right eye, and none in her left. She has been working with a team at Harvard University to develop a program that will convert sunlight into sound, allowing her to hear the solar eclipse. The sound will be generated in real time, changing as the dark silhouette of the moon appears over the face of the bright sun, blocking its light. Diaz Merced will listen in real time, too—with her students at the Athlone School for the Blind in Cape Town, South Africa, where she teaches astronomy. “It’s an experience of a lifetime, and they deserve the opportunity,” Diaz Merced said. To capture the auditory version of this astronomical event, the team turned to a piece of technology measuring only a couple inches long: the Arduino, a cheap microcomputer popular with tech-savvy, DIY hobbyists. With a few attachments, Arduinos can be used to create all kinds of electronic devices that interact with the physical world, from the useful, like finger scanners that unlock garage doors, to the silly, like motion-detecting squirt guns. Diaz Merced’s collaborators equipped an Arduino with a light-detecting sensor and speaker, and programmed it to convert light into a clicking noise. The pace of the clicks varies with the intensity of the sunlight hitting the sensor, speeding up as it strengthens and slowing down as it dims. In the moments of totality, when the sun’s outer atmosphere appears as a thin ring around the shadow of the moon, the clicks will be a second or more apart. Allyson Bieryla, an astronomy lab and telescope manager at Harvard, will operate the Arduino from Jackson Hole, Wyoming, inside the path of totality. She will stream the audio on a website online, which Diaz Merced will open on her computer in Cape Town. So far, Bieryla says, “the real challenge has been trying to find a light sensor that’s sensitive enough to get the variation in the eclipse.” In totality, the sun will appear about as bright as a full moon at midnight. The team has tested the Arduino at night, under the moonlight, to make sure it can pick up the faint luminosity. Diaz Merced, a postdoctoral fellow at the Office of Astronomy for Development in South Africa, was diagnosed with diabetes as a child. In her early 20s, when she was studying physics at the University of Puerto Rico, she was diagnosed with diabetic retinopathy, a complication of the disease that destroys blood vessels in the retina. Her vision began to deteriorate, and a failed laser surgery damaged her retinas further, she said. By her late 20s, she was almost completely blind. She recalls watching a partial solar eclipse in 1998 in Puerto Rico, when she still had some sight. “I was able to experience the wonderfulness—of the sun being dark, of having a black ball in the sky,” she said. “That is why it is important to use the sound in order to bring an experience that will bring that same feeling to people who do not see or are not visually oriented.” While Diaz Merced experiences the eclipse from a classroom in Cape Town, Tim Doucette will observe the event at a campground in Nebraska, smack-dab in the path of totality. Doucette is a computer programmer by day and an amateur astronomer by night. 
He runs a small observatory, Deep Sky Eye Observatory, near his home in Nova Scotia in a sparsely populated area known for low light pollution and star-studded night skies. Doucette is legally blind, and has about 10 percent of his eyesight. He had cataracts as a baby, a condition that clouds the lenses of the eye. To treat the disease, doctors surgically removed the lenses, leaving Doucette without the capacity to filter out certain wavelengths. His eyes are sensitive to ultraviolet and infrared light, and he wears sunglasses during the day to protect his retinas. Without shades, Doucette said he can’t keep his eye open in the brightness of day. But at night, his sensitivity becomes an advantage. With the help of a telescope, Doucette can see the near-infrared light coming from stars and other objects in the sky better than most people. “My whole life, I’ve always been asking people for help, saying, ‘hey, what do you see?’” Doucette said. “When I stargaze with people, the tables are reversed.” Doucette sees best at night, safe from the glare of the sun. He uses starlight to guide him during the short walk from his observatory to his home. “When I’m walking down the road, especially during the summer months, the Milky Way is just this incredible painting going from north to south,” he said. “It’s millions and millions of points of light. It’s like a tapestry of diamonds against a velvety background.” Doucette, armed with his camera equipment, will observe the eclipse with dozens of members of the Royal Astronomical Society of Canada’s Halifax Center, an association of amateur and professional astronomers. He has only witnessed partial solar eclipses in the past. “It should be quite interesting to see what the effect is because of my sensitivity,” he said. During totality, when day becomes night, some objects in the sky may become visible, thanks to his sensitivity to their light. Doucette will wear eclipse sunglasses over his regular pair. Eclipse glasses protect the eyes from sunlight so viewers can look directly at the sun without hurting their eyes, and they can be bought online for a few dollars. Doucette urged eclipse viewers to use them, citing stories he’d heard of people looking at the sun during an eclipse and waking up blind the next morning, their retinas burned. The shades are necessary before and after totality, when the sun is only partially eclipsed and a thin crescent shines with typical intensity. “Once the eclipse is in totality for about two and a half minutes, I’m told that it’s safe to take the glasses off, but I’m not willing to risk it,” Doucette said. “I’ll still keep my sunglasses on either way.”
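Returning to the Harvard team's Arduino device described earlier: the article gives only the behavior (clicks speed up in bright sun and stretch to a second or more near totality), not the firmware. The sketch below, written in Python rather than Arduino C++, is a guess at that mapping; the sensor range, interval bounds, and the read_sensor/emit_click hooks are all assumptions, not the team's code.

```python
# Sketch of the core idea: map a light reading to a click interval, so clicks
# come fast in bright sunlight and slow to >1 s near totality.
import time

def click_interval(light: float, lo: float = 0.0, hi: float = 1023.0,
                   fastest: float = 0.05, slowest: float = 1.5) -> float:
    """Interpolate: bright light -> short interval, darkness -> long interval."""
    frac = max(0.0, min(1.0, (light - lo) / (hi - lo)))
    return slowest - frac * (slowest - fastest)

def run(read_sensor, emit_click, seconds: float = 10.0) -> None:
    """Poll the sensor and click at a rate tied to the current brightness."""
    end = time.monotonic() + seconds
    while time.monotonic() < end:
        emit_click()
        time.sleep(click_interval(read_sensor()))

# Example with stand-in hooks:
# run(lambda: 512.0, lambda: print("click"), seconds=3)
```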
<urn:uuid:7af249a5-140e-4ff4-a9f5-d32f36a0f97e>
3.265625
1,449
Truncated
Science & Tech.
51.551137
95,494,618
Wherever we see a source of information, we assume that there is a planner behind it. For example, a paragraph in a book has specific content and is meaningful. Could computer codes randomly come together and operate a program or a computer system? Similarly, we know that the DNA codes bear all the information about a living being. How could a series of coincidences form a perfect genetic code? In fact, that is not possible. Random coincidences cannot generate information; this is what science has established. For example, some scientific disciplines designed to detect plagiarism, that is, the unauthorized copying of music or other copyrighted sources of information, base their in-depth analyses on the fact that a series of information cannot appear randomly or by coincidence, and that there must be a purpose, a goal, and a will to express some kind of informative content. We all know that all computers have an operating system that must be installed and kept up to date with specific updates, because otherwise they cannot organize information. These updates are not random; they are made in a very specific way and operate by intervening on the software. Therefore, we cannot expect to see any random improving modifications in an operating system. On the contrary, the tendency is toward a loss of information, because entropy is common to all existing systems. Often, evolutionists respond to this objection by saying that, contrary to the aforementioned processes, cells actually reproduce. This presents a very big problem: the reproduction process must have been present from the beginning; the loss of information still occurs in this case, and the information content cannot have appeared at random. Science argues that information does not originate by chance, but requires a purposeful action that uses intelligence to create information. The DNA is a molecule that carries, contains and organizes information and cannot have been originated by chance.
<urn:uuid:8c4962b9-3675-4ab4-813d-9d675df852f9>
3.203125
530
Truncated
Science & Tech.
41.981357
95,494,622
- Open Access © BioMed Central Ltd 2005 Published: 31 January 2005 It began at the bottom of the ocean, off the west coast of Indonesia. From there it spread outward, silently, invisible under the surface of the water. When it came ashore, in forty countries, some as far as 4,000 miles from where it started, it killed upwards of 150,000 people. There have been deadlier catastrophes - the earthquake in China in the mid-1970s killed a quarter of a million in that country, and the cyclone that devastated Bangladesh in the decade before that is thought to have caused half a million deaths - but none that involved so many different nations scattered across so much of the earth's surface. The tsunami spawned by the magnitude 9.0 earthquake of 26 December 2004 was perhaps the first truly global natural disaster in modern history: a third of the world's countries were directly affected. The worldwide scope of the destruction reminds us of something that genomics is also starting to make clear: that we are all truly one people, that national and racial differences are artificial and insignificant compared to the common bond of our humanity. Despite all our technological prowess and environmental hubris, we also have yet another grim reminder that Nature, not Man, is still boss of this planet. The word 'tsunami' comes from the Japanese words for harbor (tsu) and wave (nami). It refers to a series of giant undersea waves that travel at high velocity for very long distances, and that crest when they hit a shoreline in the form of a devastating surge, sometimes as much as 30 meters high. Tsunamis are often called 'tidal waves' but that's a misnomer: the phenomenon has nothing to do with the tides. It has its origins, like everything else that involves the earth's surface, in plate tectonics. It is hard to imagine that the theory of plate tectonics, which is at the heart of all modern geological science, is only a hundred years old and was not widely accepted until the 1970s. Schoolchildren had noticed for hundreds of years that the facing shapes of South America and Africa could be fitted neatly together like pieces of a jigsaw puzzle to make a single entity (Francis Bacon had noticed it in 1620 but drew no conclusion), but it wasn't until 1908 that the amateur American geologist Frank Bursley Taylor proposed that the continents had once slid around and that this motion might have thrust up the world's mountain chains. His theory was taken up by the German planetary astronomer-turned-meteorologist Alfred Wegener, who in 1912 proposed that all the world's continents had once been part of a single giant landmass he called Pangaea, which had split apart in a process of lateral motion that was still continuing. Traditional geologists attacked both Wegener and his ideas viciously, and it wasn't until the decade after his death (he froze to death on a scientific expedition in Greenland in 1930) that the great English geologist Arthur Holmes provided an explanation for how Wegener's motion could occur. In a textbook published in 1944 he speculated that heat caused by the decay of radioactive elements in the earth's crust could produce powerful convection currents that could slide the continents around on the earth's surface. He has probably as good a claim as anyone to be the father of the modern view of continental drift, although, curiously, he often expressed skepticism about his own theory. 
Harry Hess, a Princeton University geologist, figured out in the 1950s that there were two large plates of land under the floor of the Atlantic Ocean and that their relative motion was responsible for the topography and geology of the seafloor. Finally, in 1963, Cambridge University geophysicist Drummond Matthews and his graduate student Fred Vine used magnetic readings to prove that the seafloor and the continents were in motion. (Canadian geologist Lawrence Morley came up with the same result at the same time but his paper was rejected by the Journal of Geophysical Research.) J. Tuzo Wilson of Toronto showed at about the same time how plate tectonics could explain the behavior of the ocean floor at mid-ocean ridges. Still, even in the 1970s, many textbooks of geology continued to dismiss plate tectonics as physically impossible. Today we know that the surface of the earth is composed of about a dozen large plates and almost two dozen smaller ones, all moving in different directions. Where they grind against one another (regions geologists call 'subduction zones'), the tremendous force can be released either slowly and steadily, giving rise to thermally active regions like Iceland, or sporadically and violently, giving rise to earthquakes and volcanic eruptions. That is what happened on 27 August 1883, when subduction along the Java Trench, where the Indo-Australian plate is moving under the Indonesian Island chain, caused the explosive eruption of the volcano on the Indonesian island of Krakatoa that in turn generated waves that reached 41 meters in height, destroying 165 coastal towns and villages along the Sunda Strait between the islands of Java and Sumatra and killing 36,417 people. (Hollywood made a movie about this great disaster in 1969: Krakatoa, East of Java. In case you are ever tempted to equate Hollywood productions with history, let me point out that Krakatoa is west of Java.) That is probably what happened in 1648 B.C., when the entire Minoan civilization on the island of Crete was wiped out, in a single day, as the result of a tsunami created by the explosive eruption of the volcano on Santorini. And that is what happened on 26 December 2004, when the Indian plate slid underneath the Burma plate (a subduction zone), driving a 600-mile-long piece of the earth's crust 20 to 50 feet upwards on the floor of the Indian Ocean. This sudden rise in the seafloor displaced an enormous volume of water - exactly as if the ocean were a swimming pool and someone had just dropped a large block of concrete into it. The displacement spread outward in all directions, like the ripples that would spread from that block. But because the event occurred underwater, the displacement traveled underwater until it encountered the sharply rising seafloor on the edge of an island or continent. When the undersea waves hit this obstacle, they were pushed straight up, compressed into walls of water that surged over the landmass. The speed of a tsunami depends on the square root of the depth of the ocean: the deeper the water, the faster the displacement travels. The Indonesian tsunami formed in deep water, which meant that the wave velocity reached upwards of 800 km/h, the speed of a commercial jet aircraft. When they encounter the shallow depths of a coastline the speed of tsunami waves slows to perhaps 45 km/h, still fast enough to do tremendous damage.
At top speed it took the tsunami less than 7 hours to cross the Indian Ocean and reach the east coast of Africa, where the waves came ashore in Somalia and killed 150 people who cannot possibly have understood that the power that was destroying them had been spawned more than 3,000 miles away. Like the ripples from that dropped block of concrete, a tsunami is actually a series of waves, usually spaced about 15-20 minutes apart, with troughs in between them. To the observer on shore, the approach of a tsunami often begins with a rapid receding of the shoreline, much further out than normal. This is followed, 8-10 minutes later, by the first wave, which surges onto the shore, often traveling half a mile or more inland. As the cycle repeats, the first wave recedes, carrying anything loose back out to sea. Then the next wave hits. In Thailand, Sri Lanka and Indonesia, where the worst damage occurred, many people who survived the impact of the first wave were swept out to sea as it receded, or were killed by one of the surges that followed. But what fascinates me the most about tsunamis is that, until they reach land, they are practically unnoticeable on the surface of the ocean. Their amplitude in deep water is often only a meter or less. An ocean liner or a fishing vessel would pass right over them, completely unaware that underneath it a force was racing onward that, when it surfaced, could obliterate an entire country. I've been thinking a lot about that sort of thing recently, because it seems to me that it's a pretty good metaphor for what is going on in science. I started writing this column because I believed that genomics was like a tsunami: a force that, when it crested, would change everything, and I wanted to have an excuse to think about what that would mean. The true impact of the genomics revolution is only starting to be apparent now, and it's very different from what it was predicted to be when the Human Genome Project began in the late 1980s. It has not had a significant impact on human health yet - the disease genes that have been discovered have generally come from specific individual research programs, and pharmacogenomics has initially focused on polymorphisms in genes that were already identified before the human genome was sequenced. Genomics has produced technology, such as cDNA microarrays and mass spectroscopy-based proteomics, that is likely to play a major role in diagnostics in the near future, but not necessarily in treatment. No, the major effects, which are now rolling across biology like a series of waves, are cultural. Because of genomics, data gathering and analysis is now valued highly - in some instances above hypothesis-driven research. Because of genomics, targeted big science projects, such as structural genomics, that aim to produce easily appreciated results (usually in the form of large amounts of data), are consuming a large chunk of funding that would otherwise go to individual investigator-initiated basic research. Because of genomics, there is a perception in some quarters that when you have analyzed something you have understood it; mathematical modeling of biological processes is beginning to become a substitute for experimental probing.
Because of genomics, a kind of mysticism is creeping back into biology: we use terms, like 'systems biology' and 'emergent properties', that have echoes of vitalism in them - almost as though we are starting to believe that we cannot explain the behavior of living systems in terms of the physics and chemistry of their component parts. We can argue about whether this is good or bad for our field. We can argue about how we should react to it. But we cannot ignore it. None of this could have been appreciated in 1990. It was all moving beneath the surface, moving rapidly and inexorably, and now it is upon us. Until the next tsunami comes along, genomics, like molecular biology before it, will change our scientific world whether we like it or not.
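A numerical aside on the wave-speed relation discussed earlier in the essay: in the shallow-water limit, a tsunami's speed is the square root of gravity times ocean depth. A quick check reproduces the essay's figures; the depths chosen below are illustrative.

```python
# Shallow-water gravity-wave speed, v = sqrt(g * depth), converted to km/h.
import math

def tsunami_speed_kmh(depth_m: float, g: float = 9.81) -> float:
    return math.sqrt(g * depth_m) * 3.6   # m/s -> km/h

print(f"{tsunami_speed_kmh(4500):.0f} km/h")  # ~756 km/h in the deep ocean
print(f"{tsunami_speed_kmh(10):.0f} km/h")    # ~36 km/h approaching shore
```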
<urn:uuid:b859f826-3f47-407c-a20d-1d9e3c3069e4>
3.65625
2,226
Nonfiction Writing
Science & Tech.
36.743807
95,494,624
Flowering plants likely originated between 149 and 256 million years ago according to new UCL-led research. The study, published today in New Phytologist by researchers from the UK and China, shows that flowering plants are neither as old as suggested by previous molecular studies, nor as young as a literal interpretation of their fossil record. The findings underline the power of using complementary studies based on molecular data and the fossil record, along with different approaches to infer evolutionary timescales to establish a deeper understanding of evolutionary dynamics many millions of years ago. "The discrepancy between estimates of flowering plant evolution from molecular data and fossil records has caused much debate. Even Darwin described the origin of this group as an 'abominable mystery'", explained lead author, Dr Jose Barba-Montoya (UCL Genetics, Evolution & Environment). "To uncover the key to solving the mystery of when flowers originated, we carefully analysed the genetic make-up of flowering plants, and the rate at which mutations accumulate in their genomes." Through the lens of the fossil record, flowering plants appear to have diversified suddenly, precipitating a Cretaceous Terrestrial Revolution in which pollinators, herbivores and predators underwent explosive co-evolution. Molecular-clock dating studies, however, have suggested a much older origin for flowering plants, implying a cryptic evolution of flowers that is not documented in the fossil record. "In large part, the discrepancy between these two approaches is an artefact of false precision on both palaeontological and molecular evolutionary timescales," said Professor Philip Donoghue from the University of Bristol's School of Earth Science, and a senior author of the study. Palaeontological timescales calibrate the family tree of plants to geological time based on the oldest fossil evidence for its component branches. Molecular timescales build on this approach, using additional evidence from genomes for the genetic distances between species, aiming to overcome gaps in the fossil record. "Previous studies into molecular timescales failed to explore the implications of experimental variables and so they inaccurately estimate the probable age of flowering plants with undue precision," said Professor Ziheng Yang (UCL Genetics, Evolution & Environment) and senior author of the study. "Similarly, interpretations of the fossil record have not fully recognised its shortcomings as an archive of evolutionary history, that is, that the oldest fossil evidence of flowering plants comes from very advanced, not primitive flowering plant lineages," Professor Donoghue added. The researchers compiled a large collection of genetic data for many flowering plant groups including a dataset of 83 genes from 644 taxa, together with a comprehensive set of fossil evidence to address the timescale of flowering plant diversification. "By using Bayesian statistical methods that borrow tools from physics and mathematics to model how the evolutionary rate changes with time, we showed that there are broad uncertainties in the estimates of flowering plant age, all compatible with early to mid-Cretaceous origin for the group," said Dr Mario dos Reis (School of Biological and Chemical Sciences at Queen Mary University of London), a co-author of the study. The team involved researchers from UCL, Queen Mary University of London, the Chinese Academy of Sciences, the Natural History Museum and the University of Bristol. 
The study was kindly funded by the Biotechnology and Biological Sciences Research Council (UK), the Natural Environment Research Council, the Royal Society and the Wolfson Foundation.
<urn:uuid:e451c996-b31b-4fcf-a6b9-04638cc960bb>
3.9375
704
News (Org.)
Science & Tech.
2.684962
95,494,632
University of Oregon-led research provides a new interpretation of how one of America's great rivers got linked to the ocean amid tectonic influences and changing sea level The Colorado River's initial trip to the ocean didn't come easy, but its story has emerged from layers of sediment preserved within tectonically active stretches of the waterway's lower reaches. Bioclastic limestone and cross-bedded conglomerate are visible in exposed rocks at Marl Wash in the Bouse Formation, south of Blythe, California. The wash was named by geologists studying the deposits. The sedimentary structures preserve a record of deposition by strong tidal currents at the north end of the Gulf of California about 6 million years ago, prior to arrival of the Colorado River and its sediment load. A team led by University of Oregon geologist Rebecca Dorsey interpreted a wide range of depositional processes and environments, and their changes through time, using detailed stratigraphic analysis and micropaleontology. Photo by Rebecca Dorsey A scientific team, led by geologist Rebecca Dorsey of the University of Oregon, theorizes that the river's route off the Colorado Plateau was influenced by a combination of tectonic deformation and changing sea levels that produced a series of stops and starts between roughly 6.3 and 4.8 million years ago. Dorsey's team lays out its case in an invited-research paper in the journal Sedimentary Geology. The team's interpretation challenges long-held conventional thinking that once a river connects to the ocean it's a done deal. "The birth of the Colorado River was more punctuated and filled with more uneven behavior than we expected," Dorsey said. "We've been trying to figure this out for years. This study is a major synthesis of regional stratigraphy, sedimentology and micropaleontology. By integrating these different datasets we are able to identify the different processes that controlled the birth and early evolution of this iconic river system." The region covered in the research stretches from the southern Bouse Formation, near present-day Blythe, California, to the western Salton Trough north of where the river now trickles into the Gulf of California. The Bouse Formation and deposits in the Salton Trough have similar ages and span both sides of the San Andreas Fault, providing important clues to the river's origins. Last year, in the journal Geology, a project led by graduate student Brennan O'Connell, a co-author on the new study, concluded that laminated sediments found in exposed rock along the river near Blythe were deposited by tidal currents 5.5 million years ago. The Gulf of California, it was argued, extended into the region, but the age of the deposits and tectonic and sea level changes at work during that time were not well understood. Analyses by Kristin McDougall, a micropaleontologist with the U.S. Geological Survey and co-author on the new paper, helped the team better pinpoint the timing of the limestone deposits to about 6 million years ago, when tiny marine organisms lived in the water and were deposited at the same time. About 5.4 million years ago, conditions changed. Global sea level was falling but instead of bay water levels declining, as would be expected, the water depth increased due to tectonic subsidence of the crust, the researchers discovered. The basal carbonate material left by marine organisms was then inundated by fresh water as the river swept down into lower elevations, bringing with it clay and sand from mountain terrain, they found. 
"The bay filled up with river sediment as the sediment migrated toward the ocean," Dorsey said. "As more sediment came in, transport processes caused the delta front to move down the valley, transforming the marine bay into a delta and then the earliest through-flowing Colorado River." The river had arrived in the gulf, but only temporarily. A tug-of-war lasting for 200,000 to 300,000 years began some 5.1 million years ago, when the river stopped delivering sediments from upstream. The delta retreated and seawater returned to the lower Colorado River valley for a short time. The evidence is in the stratigraphy and fossils. Researchers found that clay and sand from the river became mixed with and then covered by marine sediment. Something, Dorsey said, apparently was happening upstream, trapping river sediment. A good bet, the researchers think, is tectonic activity, perhaps earthquakes along a fault zone in the river's northern basin that created subsidence in the riverbed or deep lakes along the river's path. At roughly 4.8 million years ago, the river resumed depositing massive amounts of sediment back into the Salton Trough and began rebuilding the delta. Today's view of the delta, however, reflects human-made modern disturbances to the river's sediment discharge and flow of water reaching the gulf. To meet agricultural demands for irrigation and drinking water for human consumption, Hoover Dam was constructed on the river to form Lake Mead during the 1930s. In 1956-1966, Glen Canyon Dam was built, forming Lake Powell. "If we could go back to 1900 before the dams that trap the sediment and water, we would see that the delta area was full of channels, islands, sand bars and moving sediment. It was a very diverse, dynamic and rich delta system. But manmade dams are trapping sediment today, eerily similar to what happened roughly 5 million years ago," Dorsey said. The bottom line of the research, she said, is that no single process controlled the Colorado River's initial route to the sea. "Different processes interacted in a surprisingly complicated sequence of events that led to the final integration of that river out to the ocean," she said. The research, Dorsey said, provides insights that help scientists understand how such systems change through time. The Colorado River is an excellent natural laboratory, she said, because sedimentary deposits that formed prior to and during river initiation are well exposed throughout the lower river valley. "This research," Dorsey said, "is very relevant to today because we have global sea level rising, climate is warming, coastlines are being inundated and submerged, and the supply of river sediment exerts a critical control on the fate of deltas where they meet the ocean. Documenting the complex interaction of these processes in the past helps us understand what is happening today." Mindy B. Homan, a former UO doctoral student and now a geologist with Devon Energy in Wyoming, was a co-author on the study. The National Science Foundation (grant EAR-1546006), Society for Sedimentary Geology and Geological Society of America supported the research. Source: Rebecca Dorsey, professor in the Department of Earth Sciences, 541-346-4431, email@example.com Note: The UO is equipped with an on-campus television studio with a point-of-origin Vyvx connection, which provides broadcast-quality video to networks worldwide via fiber optic network. 
There also is video access to satellite uplink and audio access to an ISDN codec for broadcast-quality radio interviews.
<urn:uuid:3e78f986-af90-4a13-b68a-4ac8684aceb2>
3.359375
2,163
Content Listing
Science & Tech.
40.57413
95,494,633
Harnessing quantum systems for information processing will require controlling large numbers of basic building blocks called qubits. The qubits must be isolated, and in most cases cooled. Physicists have recently demonstrated important steps towards implementing a proposed type of gate that does not rely on super-cooling the ion qubits. The nanoflasks, which have a span of several nanometers, or millionths of a millimeter, can accelerate chemical reactions for research. In the future, they might facilitate the manufacture of various industrial materials and perhaps even serve as vehicles for drug delivery. Researchers have made a chip-based device that can generate a laser signal with frequencies spaced in a comb-like fashion. Their work could be used in telecommunications applications and in chemical analysis. Scientists have discovered the universal building blocks that cells use to form initial connections with the surrounding environment. These early adhesions have a consistent size of 100 nanometres, are made up of a cluster of around 50 integrin proteins and are the same even when the surrounding surface is hard or soft. In October, an interdisciplinary group of scientists proposed forming a Unified Microbiome Initiative to explore the world of microorganisms that are central to life on Earth and yet largely remain a mystery. An article describes the tools scientists will need to understand how microbes interact with each other and with us. For years, scientists have been pursuing ways to imitate a leaf's photosynthetic power to make hydrogen fuel from water and sunlight. In a new twist, a team has come up with another kind of device that mimics two of a leaf's processes to harness solar energy to purify water.
<urn:uuid:da9d4cc8-660c-4025-9f5f-fe0918560d83>
3.40625
336
Content Listing
Science & Tech.
27.797961
95,494,634
How Are Birds Helping Scientists Rewrite Pollution History? Oct 17 2017
Monitoring pollution is important because it informs us of the main problems and current threats to the environment from our actions. It's also useful in the long term, however, because it gives us an understanding of how pollutant levels are changing over time. The problem? Pollution monitoring wasn't always as effective as it is now. As a result, there's sometimes a lack of accurate information for pollution levels in the past. However, that information may be available from an unlikely source. Read on to see how birds can help scientists study pollution in the past.
Studying black carbon
In the US, environmental scientists are studying the effects of black carbon. The substance is formed when fossil fuels and biomass are incompletely combusted, and is emitted as soot. It's one of the big problems for the mining industry, which can be made more sustainable in a variety of ways. Black carbon is known to have a disastrous impact on human health, causing hundreds of thousands – and even millions – of deaths each year. But researchers are also interested in how it contributes to climate change. When it's held in the air, black carbon absorbs light from the sun and causes atmospheric temperature to rise. Then when it reaches the ground it's thought the substance causes more snow and ice to melt, meaning increased levels of ice loss in the Arctic.
The role birds play
One challenge for studying the effects of carbon on things like health and ice loss is the lack of accurate information from past decades. However, a new study has analysed over a thousand birds from natural history museums. By photographing the birds and measuring how much light was reflected off their feathers, the researchers could calculate the levels of black carbon trapped in them. This has allowed them to create more accurate records of black carbon levels and how they have changed over time. They found a peak in the first ten years of the 1900s, followed by a slump during the US Great Depression and a subsequent boom through World War II in the 1940s. From the 50s onwards, levels seem to have fallen as gas rose to prominence as a heating fuel.
Improving into the future
As well as providing a more accurate picture of the past, the new findings could make for more accurate modelling of future events. "The big finding and implication of our study is that we are recovering relative concentrations of atmospheric black carbon that are higher than previously estimated from other methods," said University of California's Shane DuBay. "It helps constrain and inform how we understand the relative role of black carbon in past climate and by understanding that we can more accurately model future climate scenarios."
<urn:uuid:ecefeb9c-0183-400c-b741-27805b97580d>
2.859375
689
News Article
Science & Tech.
48.550125
95,494,636
Hellish atmosphere on Venus is so violent that it makes the planet spin faster and shortens its days by as much as two minutes
- Researchers used computer simulations to model the atmosphere on the planet
- They examined a bow-shaped cloud structure seen by Japan's Akatsuki satellite
- It can be explained by atmospheric waves forming over a mountainous region
- Fast-flowing winds tug on gases surrounding the mountains, producing the cloud
- The 6,200-mile-long object exerts a gravitational pull on the planet's rotation
- It explains why experts have struggled to work out the length of a day on Venus
Days on Venus could be getting shorter by two minutes each rotation thanks to the planet's hellish atmosphere, a new study suggests. Venus' spin changes speed as its dense, fast-flowing layers of gases interact with a mountainous region on its surface, researchers say. These winds tug on gases surrounding the mountains, producing a 6,200-mile (10,000 km) long cloud that exerts a gravitational pull on the planet's rotation. Experts say it explains why they have been unable to work out the precise length of a day on Venus through their observations.
Researchers from the University of California Los Angeles used computer simulations to model the circulation of the Venusian atmosphere. They found that the bow-shaped cloud structure, first seen by Japan's Akatsuki satellite, can be explained by atmospheric waves forming over the mountains. These waves only form in the afternoon and vanish by dusk. The team found that the formation of these waves causes atmospheric pressure fluctuations that actually change the rotation rate of the solid planet, depending on the time of day. 'Overall, a net force is exerted on the mountain, and the whole solid body follows,' Thomas Navarro, study author and researcher at the University of California, told Space.com. This computer-generated image shows Maat Mons, the second-highest mountain on Venus (stock image)
WHAT DO WE KNOW ABOUT VENUS' ATMOSPHERE?
Venus' atmosphere consists mainly of carbon dioxide, with clouds of sulphuric acid droplets. The thick atmosphere traps the sun's heat, resulting in surface temperatures higher than 470°C (880°F). The atmosphere has many layers with different temperatures. At the level where the clouds are, about 30 miles (50 km) up from the surface, it's about the same temperature as on the surface of the Earth. As Venus moves forward in its solar orbit while slowly rotating backwards on its axis, the top level of clouds zips around the planet every four Earth days. They are driven by hurricane-force winds travelling at about 224 miles (360 km) per hour. Atmospheric lightning bursts light up these quick-moving clouds. Speeds within the clouds decrease with cloud height, and at the surface are estimated to be just a few miles (km) per hour. On the ground, it would look like a very hazy, overcast day on Earth and the atmosphere is so heavy it would feel like you were one mile (1.6km) deep underwater.
Venus rotates slowly, with one revolution taking about 243 Earth days, but measurements by visiting spacecraft have not agreed on the precise length of a Venusian day. Despite the Venusian atmosphere moving much faster than the planet itself, completing one rotation in four Earth days, the cloud structure spotted appearing and disappearing by Akatsuki remained stationary above the mountainous region. If this structure is an atmospheric wave, caused by the lower atmosphere rising over mountain topography, the atmosphere and solid planet might be more closely linked than originally thought. The research team believes that the effect is only relatively small, a change that would alter the length of a Venusian day by only a couple of minutes. But this interplay between the solid planet and its atmosphere may explain at least a part of the discrepancies between past measurements of Venus's rotation rate. The full findings were published in the journal Nature Geoscience.
WHAT IS THE AKATSUKI VENUS CLIMATE ORBITER SATELLITE?
The Venus Climate Orbiter mission, or Akatsuki, is studying the atmospheric circulation of Venus. Meteorological information is obtained by globally mapping clouds and minor constituents successively with four cameras. These detect ultraviolet and infrared wavelengths, as well as lightning with a high-speed imager. Japan Aerospace Exploration Agency's (JAXA) Venus Climate Orbiter, known as Akatsuki (artist's impression). The satellite also observes the vertical structure of the atmosphere with radio science techniques. The equatorial elongated orbit of the satellite, with a westward revolution, matches the Venusian atmosphere, which also rotates westward. The systematic, continuous imaging observations taken by Akatsuki will provide scientists with an unprecedentedly large dataset of Venusian atmospheric dynamics. Additional targets of the mission are the exploration of the ground surface and the observation of zodiacal light, also known as false dawn. The mission complements the European Space Agency's Venus Express, which orbited Venus until 2014.
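As a quick scale check using only the figures quoted in the article (a sketch, not a calculation from the study itself):

```python
# A two-minute change in a 243-Earth-day rotation period is a tiny fractional
# change in Venus's spin rate, which is why it eluded earlier measurements.
P_days = 243.0                      # rotation period quoted in the article
dP_min = 2.0                        # day-length change quoted in the article
frac = dP_min / (P_days * 24 * 60)  # fraction of the full period
print(f"fractional change in day length: {frac:.2e}")  # ~5.7e-06
```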
<urn:uuid:8824becf-3e1b-4a78-84c3-6efd2b03cb2e>
3.140625
1,351
Truncated
Science & Tech.
31.244283
95,494,643
Estimates of atmospheric moisture are critical for understanding the links and feedbacks between atmospheric CO2 and global climate. At present, there are few quantitative moisture proxies that are applicable to deep time. We present a new proxy for atmospheric moisture derived from modern climate and leaf biomarker data from North and Central America. Plants have a direct genetic pathway to regulate the production of lipids in response to osmotic stress, which is manifested in a change in the distribution of simple aliphatic lipids such as n-alkanes. The Average Chain Length (ACL) of these lipids is therefore statistically related to mean annual vapor pressure deficit (VPDav), enabling quantitative reconstruction of VPD from sedimentary n-alkanes. We apply this transfer function to the Armantes section of the Calatayud-Daroca Basin in Central Spain, which spans the Middle Miocene Climatic Optimum (MMCO) and the Middle Miocene Climate Transition (MMCT). Reconstructed VPDav rises from 0.13 to 0.92 kPa between 16.5 and 12.4 Ma, indicating a substantial drying through the MMCT. These data are consistent with fossil assemblages and mammalian stable isotope data, highlighting the utility of this new organic molecular tool for quantifying hydrologic variability over geologic timescales. The distribution of ecosystems across the globe is strongly regulated by the availability of water1. In the face of predicted future climate shifts, reconstructions of past climate and hydrology provide critical opportunities to evaluate the relationship between atmospheric CO2 and hydrological regimes during periods when Earth was significantly warmer than today2. At present, however, reconstructions of past water availability suffer from large uncertainties, especially in regions that are prone to large fluctuations in climate and hydrology in response to variations in pCO2 (e.g. Western North America). A number of semi-quantitative geochemical methods have been used to infer paleoaridity. These include: (i) stable isotopes of fossil bones/teeth3,4, speleothems5, and soil carbonates6,7; (ii) the presence and/or provenance of dust in ice cores8; (iii) the carbon isotope composition of plants9,10; and (iv) the geochemical composition of paleosols11,12. While these methods have advanced our understanding of paleohydrological processes, they possess a number of significant mechanistic, temporal or spatial limitations. For example, dust in ice cores is strongly influenced by changes in wind strength and direction, in addition to aridity13. Ice cores also only record the last few million years of glacial conditions, placing important temporal constraints on paleoclimatic reconstructions. Conflict between paleosol and fossil bone carbonate δ18O records4,12 is likely due to the fact that the isotopic composition of ungulate tooth enamel can record the influence of a complex mixture of factors such as temperature, water availability and atmospheric circulation patterns14, as well as the ability of animals to migrate significant distances to continue residing in their ideal habitat15. Furthermore, soil carbonates only form in a discrete range of hydrologic conditions. A shift from a dry environment (characterized by soil carbonate formation) to wetter conditions is often marked by a reduction or complete absence of soil carbonate nodules. In recent decades, there has been a dramatic upsurge in the use of molecules present in the epicuticular waxes coating leaves of terrestrial and aquatic higher plants (Fig.
1) as proxies for paleoclimatic and paleoenvironmental reconstructions16. There are many advantages to this biomarker-based approach. n-Alkyl lipids from higher plants are found in both terrestrial and marine sedimentary systems10,17,18,19,20, and are relatively resistant to shallow burial diagenesis21,22,23, enabling paleoenvironmental reconstruction over hundreds of millions of years of Earth history24,25. A number of methods have been used to infer paleoclimatic information from leaf wax biomarkers. The δ13C compositions of n-alkanes in sedimentary archives are commonly used to identify the proportion of C3 and C4 plants, and hence extrapolate changes in temperature and aridity26,27. Relative humidity and the availability of moisture are also thought to be important determinants of the relationship between the hydrogen isotope composition of precipitation and the δD signal recorded by leaf waxes28. The extent to which leaf wax n-alkane δD values record this aridity/transpiration signal, however, remains controversial. Some studies of modern vegetation suggest that transpiration influences n-alkane δD values29, yet grasses grown under controlled greenhouse conditions show no n-alkane δD shifts in response to changes in relative humidity30. In addition, biochemical processes are shown to drive interspecies variation in n-alkane δD among modern plants31,32, suggesting that mechanisms controlling leaf wax lipid hydrogen isotope compositions are not yet fully constrained. The molecular distribution of leaf wax n-alkanes, commonly described by indices such as the Average Chain Length (ACL), is shown to vary along a latitudinal gradient, a phenomenon previously interpreted to be driven by changes in temperature33,34,35. However, correlation does not necessarily imply causation. ACL is also known to be related to changes in the availability of moisture, with longer chain lengths prevalent in more arid environments34,36,37,38. The trend for plants to synthesize longer alkane homologues under conditions with lower humidity has even been documented during glacial periods, where aridity is synchronous with colder climates39. There are several ways to express the moisture state of the atmosphere. Relative humidity (rH), for example, describes the ratio of the actual vapor pressure of the air, e_a, to the saturation vapor pressure of air at a given temperature, e_s(T_a), and is reported as a percentage, as shown in Equation 1:

rH = 100 × e_a / e_s(T_a)    (1)

Relative humidity is therefore not an actual measurement of the quantity of water vapor in the atmosphere, but a simple ratio of two known values40,41. The actual water holding capacity of the atmosphere, however, doubles exponentially for every ~11 °C of temperature increase, meaning the same relative humidity over a range of temperatures can reflect very different atmospheric moisture contents41. In contrast, vapor pressure deficit (VPD) reports an absolute measure of the amount of atmospheric moisture relative to e_s(T_a), which is a function of temperature. VPD is defined in Equation 2:

VPD = e_s(T_a) − e_a    (2)

VPD reflects the atmospheric moisture deficit, which controls the extent to which the atmosphere can extract moisture from land surfaces (and by extrapolation the evaporative demand on plants). If all other factors (e.g. windspeed, water availability) are constant, for a given VPD the rate of evaporation is constant regardless of air temperature40.
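To make Equations 1 and 2 concrete, here is a minimal sketch of the VPD calculation in Python. The Tetens approximation used for the saturation curve e_s(T_a) is an assumption on our part; the source does not state which saturation vapor pressure formula underlies its climate data.

```python
import math

def saturation_vapor_pressure(t_celsius: float) -> float:
    """e_s(T_a) in kPa via the Tetens approximation (an assumed formula)."""
    return 0.6108 * math.exp(17.27 * t_celsius / (t_celsius + 237.3))

def vpd_from_rh(t_celsius: float, rh_percent: float) -> float:
    """Eqs. 1-2: e_a = (rH/100) * e_s(T_a), so VPD = e_s(T_a) - e_a."""
    e_s = saturation_vapor_pressure(t_celsius)
    e_a = (rh_percent / 100.0) * e_s
    return e_s - e_a

# The same 50% relative humidity implies very different moisture deficits
# at different temperatures, which is the point made above.
print(round(vpd_from_rh(10.0, 50.0), 2))  # ~0.61 kPa
print(round(vpd_from_rh(25.0, 50.0), 2))  # ~1.58 kPa
```

The deficit more than doubling between 10 °C and 25 °C at fixed rH illustrates why VPD, not rH, is the better measure of evaporative demand.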
A VPD of 2 kPa will therefore have the same impact on the rate of evaporation from plant leaves regardless of MAT; indeed, this is why VPD is argued to be a better determinant of plant water stress than rH40,41. Just as plants respond to variation in intra- versus extracellular CO2 pressure, the gradient between atmospheric and intracellular or leaf surface concentration of specific molecules (including water) influences the flux of volatile and organic compounds from plant leaves42,43. In addition, gene regulation within plants is shown to be sensitive to specific biologic triggers related to water availability, not temperature44. Here, we show that ACL records changes in VPD at the ecosystem scale, based on the statistically significant relationship between ACL and mean annual VPD (VPDav) across a range of biomes in North and Central America. We then apply this transfer function to an organic molecular record from Miocene sediments in Central Spain that span the Middle Miocene Climatic Optimum (MMCO) and the Middle Miocene Climate Transition (MMCT). This new paleohydrological tool enables reconstruction of VPD during this global climate transition.
Sampling and Statistical Analysis
Modern calibration studies
We compiled a comprehensive dataset of 149 new and previously published soil n-alkane profiles from North and Central America (Fig. 2; Table S1). Soils span a range of Koppen climates, from Koppen number 12 to 43. We obtained climate variables from the 'PRISM' database (PRISM Climate Group, 2010), and the WorldClim 2 database46 (Supplementary Information). Both of these databases have a spatial resolution of ~800 m to 1 km grid squares, and provide 30-year normals of climate parameters such as monthly average temperature, precipitation and vapor pressure deficit. Sample sites span a range of ecosystem types with MAT from −0.2 to 26.6 °C, while ACL values from our selected soils range from 28 to 33, spanning the range of values commonly reported in sedimentary lipid biomarker studies34,47. Importantly, we focus on the ACL recorded in soils and sediments, which integrate vegetation inputs totalling hundreds or thousands of years, providing a combined ecosystem-scale signal rather than that of any individual plant species. Mean annual VPD values for all modern sites range from 0.4 kPa to 1.3 kPa. All statistical analyses describing the relationships between our data are carried out using Minitab v. 17. To establish the relationship between sedimentary leaf wax n-alkane Average Chain Length (ACL) values and annual average vapor pressure deficit (VPDav) we combined previously published soil n-alkane distribution profiles from North and Central America33,35,48, with new measurements of soils and sediments from western North America (SI Table 1). Where possible, we recalculated this index from peak area determinations provided in previous publications using Equation 3 (where C_x refers to the peak area of the alkane with x carbon atoms), to ensure consistency in the alkane chain lengths used:

ACL = Σ(x × C_x) / Σ(C_x)    (3)

We note the existence of a broad positive relationship between VPD and MAT in our sampling locations (Fig. S2). However, the presence of a genetic signalling pathway upregulating the production of leaf wax precursors in response to water deficit44, and the absence of a similar molecular response to increasing temperatures, leads us to conclude that it is VPD, rather than MAT, that is the dominant control on our ACL values.
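A minimal sketch of the Equation 3 arithmetic from gas-chromatography peak areas; the peak areas and the odd C25-C33 chain-length window below are hypothetical, chosen only to illustrate the calculation.

```python
def average_chain_length(peak_areas: dict[int, float]) -> float:
    """Eq. 3: ACL = sum(x * C_x) / sum(C_x), where C_x is the peak
    area of the n-alkane with x carbon atoms."""
    total_area = sum(peak_areas.values())
    return sum(x * area for x, area in peak_areas.items()) / total_area

# Hypothetical GC-FID peak areas for odd-numbered n-alkanes
areas = {25: 10.0, 27: 35.0, 29: 80.0, 31: 60.0, 33: 15.0}
print(average_chain_length(areas))  # 29.35, within the 28-33 range reported above
```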
Results and Discussion
Relationship between ACL and VPD in Modern Soils
We used ACL and climate information from modern soils that span North and Central America to quantify the relationship between chain length distribution and VPD (Supplementary Information Table S2). We performed regression analysis using VPD as the predictor variable and ACL as the response variable, and identified a statistically significant (p < 0.05) positive linear relationship (Fig. 3) between ACL and VPD, described by Equation 4. Our data show that half of the variation in ACL can be explained by variation in VPD. The residuals plot (Fig. 3b) displays homoscedasticity, indicating that errors associated with the model predictions are stochastic. The standard error associated with VPD values calculated using our regression model is ±0.1 kPa. This error was subsequently assumed for all VPD reconstructions from the Miocene sediments.
Proposed mechanism driving relationship between ACL and VPD
There are two primary mechanisms that could account for the observed relationship between ACL and VPD. The 'plant production' mechanism requires that plants actively shift towards preferential production of the longer-chain n-alkane homologues via the acetogenic lipid pathway (thus increasing ACL) in response to external environmental or climatic drivers49. Although the overall composition of leaf wax is genetically regulated and highly responsive to developmental and environmental factors50, previous studies show conflicting results regarding whether n-alkane distribution patterns vary dynamically and systematically in response to environmental change. A number of authors suggest that ACL is primarily linked to temperature34,51; however, there is no specific biologic mechanism linking temperature to the chain length of compounds that are produced. Water deficit and osmotic stress, by contrast, stimulate transcription of the CER6 gene, which is involved in the elongation of fatty acid acyl-CoAs longer than C2244,50. This means that, unlike for temperature, there is a specific biologic signalling system that relates leaf wax biosynthetic processes to water deficit. Further, examination of our temperature and VPD data for all sampling sites shows that these are linearly related (Supplementary Information Fig. S2), thus apparent links between ACL and MAT are most likely dominantly driven by moisture deficit effects, rather than temperature alone. The relationship between ACL and VPD does not require the invocation of a plant physiological or biochemical mechanism, however, as it can be explained entirely through physical processes. n-Alkanes are known to be released from plants alongside other volatile compounds (e.g., terpenes, isoprene) and water52,53. When VPD increases, evaporation of these volatiles from leaves rises, as plants regulate stomatal and cuticular transpiration in response to changes in the moisture availability of the atmosphere43. Longer chain length homologues have higher molecular masses than shorter-chain n-alkanes (e.g., nonacosane = 408.799 g/mol, pentacosane = 352.691 g/mol and tricosane = 324.637 g/mol), and evaporate more slowly due to stronger intermolecular bonds54. As a result, shorter homologues are likely to be preferentially lost when VPD increases, potentially contributing to the observed positive relationship between ACL and VPD. Sedimentary n-alkanes record the combined effect of VPD on many individual plants over hundreds and/or thousands of years.
Thus, while modern studies of individual plant n-alkane responses to environmental perturbations may be highly specific and display a wide scatter, sedimentary alkanes record long-term ecosystem-scale changes due to climatic and hydrologic shifts. Further, should plants that preferentially produce longer-chain alkanes (as a protection against increasing aridity) proliferate as an ecosystem becomes more moisture depleted, soils will incorporate this vegetation shift signal over long formation timescales. Thus, as we specifically focus on soils here in our modern calibration, we effectively encompass this potential 'ecosystem-change' driver of the observed relationship between ACL and VPD in our transfer function.
Paleoaridity changes during the MMCO and MMCT in Central Spain
The Miocene was a time of rapid climate shifts. The Middle Miocene Climatic Optimum (MMCO, 15-17 Ma) was the warmest period of the Neogene with temperatures some 3-8 °C warmer than present55. In contrast, the subsequent Middle Miocene Climate Transition (MMCT, ~15-13.7 Ma) saw the widespread expansion of the East Antarctic Ice Sheet, and a shift towards much cooler conditions56. On a global scale, changes in atmospheric pCO2 during the Miocene are likely to have played a dominant role in climate dynamics57. At a regional level, however, spatial heterogeneity in the nature and magnitude of environmental change suggests that more localised mechanisms such as tectonic uplift, changes in regional geology, shifts in temperature gradients or variation in freshwater inputs also influence terrestrial temperature and hydrology across Europe58. The production of new high-resolution sedimentary sequences spanning the MMCO and the MMCT, and the development of new organic molecular tools for paleoclimate reconstruction, are critical steps in the evolution of our understanding of terrestrial environmental change during the mid and late Miocene. The Armantes section is located in the Calatayud-Daroca basin of central Spain and contains expanded sections of fluvial and paleosol sequences spanning the MMCO59 and the cooling associated with the MMCT57,60,61. Sedimentation is continuous, with no observed hiatuses between 17 Ma and 12 Ma62. The section is up to 280 m thick, consisting of alternating red clay/silts and pink/white indurated silty limestones for much of its expanse59. The Calatayud-Daroca basin contains many fossil bearing sediments, making it a key location for understanding the response of ecosystems in Southwest Europe to Miocene climate shifts58. The age model for this section is based on well-established magnetostratigraphy59,62. A number of paleobiological studies establish that many parts of Spain experienced widespread cooling and drying during this time interval60,63, making it an ideal sequence to evaluate our new leaf wax biomarker-based paleoaridity proxy and expand existing paleohydrological reconstructions of the MMCO and MMCT in central Spain. Sediments from the Armantes section record a positive shift of ~3 ACL units between 16.4 and 12.4 Ma. These sediments span > 200 m of stratigraphic section, and are unlikely to have experienced differential diagenesis between the top and bottom of the section following burial. Equally, despite sediments containing both paleosol and floodplain deposits, there is no indication that the observed changes in ACL are due to an increased contribution from wetland/aquatic plants.
Indeed, aquatic plants frequently have distinctive n-alkane distribution patterns with carbon chain maxima at C23 or C2564, while all of the Armantes samples analysed here (with the exception of the sample dating from 16.4 Ma) have carbon chain maxima of C29 or C31. Rearranging Eq. 4 as shown in Eq. 5, we calculate reconstructed VPD values varying from 0.13 to 0.92 kPa during this interval, with higher values occurring at ~15.2 Ma during the MMCO, and again between 13.5 and 12.5 Ma, commencing during the MMCT and continuing into the late Miocene (Fig. 4). Comparison of these data with previously existing paleoclimate reconstructions shows good agreement, with relative decreases and increases in VPDav occurring broadly in sync with decreases in global air temperature61, atmospheric pCO257, and previous reconstructions of aridity in the region based on fossil tooth assemblages and stable isotope data60,65. We note, however, that the largest shift in reconstructed VPD occurs after 13 Ma, rather than during the MMCT interval (Fig. 4c). Reconstructions of mean annual precipitation in Europe suggest that rainfall declined throughout the late middle to late Miocene, and hence our increase in VPD (suggestive of a reduction in available moisture) is likely to reflect the broader aridification of Central and Southwest Europe previously identified between 13 and 11 Ma58. Meta-analysis of paleoprecipitation data from Europe suggests that such changes in rainfall are not directly correlated with global temperatures, but rather indicative of the influence of processes such as changes in ice-volume, shifts in hemispherical temperature gradients, and regional geography and topography66. Analysis of modern sediment n-alkane profiles shows that the distribution of homologues is strongly controlled by the moisture deficit of the environment. There are both biological and physical bases for this response, but most critically organic biomarkers in the sedimentary system record these processes, allowing for quantitative reconstruction of paleo-VPD. This new organic molecular tool for paleohydrologic investigations provides new constraints on VPD in Central Spain during a critical climate transition, highlighting the applicability of this technique over geologic timescales.
Lipid extraction and quantification
We collected materials from fresh exposed paleosol and floodplain sediments from the Armantes section of the Calatayud-Daroca Basin in Central Spain deposited from the peak of MMCO warming (~16.5 Ma) to the MMCT transition period (~12.4 Ma). Modern surface soils were collected from the top ~10 cm, with visible plant and root material removed. Soils and sediments were freeze dried as soon as possible after collection to minimise the potential for microbial alteration of sedimentary lipids67. All surface soils and Miocene sediments analysed as part of this study were solvent extracted to obtain the aliphatic fraction containing n-alkanes. Approximately 150 g of dried sediment was extracted with Soxhlet apparatus, using a 2:1 (v/v) mixture of dichloromethane and methanol. Extracts were concentrated under a stream of N2 gas, and then separated into compound fractions by silica gel chromatography using ashed Pasteur pipettes packed with activated silica gel (70-230 mesh). Aliphatic, aromatic and polar solutions were eluted by the sequential application of 2 ml hexane, 4 ml dichloromethane, and 4 ml of methanol. The hexane fraction was further purified using urea adduction.
Adducted normal alkanes were extracted into hexane, concentrated under N2, and analyzed for molecular distributions on a Thermo-Scientific Trace GC Ultra equipped with a split-splitless injector and a flame ionisation detector, using a DB-5 column (60 m × 0.25 mm i.d., 0.25 µm) with helium as a carrier gas. Alkane peaks were identified by comparing retention times to those of a laboratory standard, and the Average Chain Length (ACL) was calculated using Eq. 3; results are shown in Table S2. All data generated or analysed during this study are included in this published article (and its Supplementary Information files). The authors thank Queenie Chang and Stuart Black for helpful discussions on the regression model. This work was supported in part by NSF-EAR-1338256.
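Pulling the workflow together, the following sketch shows the calibration-and-inversion logic of Eqs. 3-5 end to end. The numbers are synthetic placeholders standing in for the 149-soil calibration set and the fitted coefficients of Eq. 4, which are not quoted above.

```python
import numpy as np
from scipy import stats

# Synthetic stand-ins for the modern calibration set (VPDav in kPa, ACL unitless)
vpd_modern = np.array([0.4, 0.5, 0.7, 0.9, 1.1, 1.3])
acl_modern = np.array([28.4, 28.9, 29.8, 30.5, 31.4, 32.1])

# Eq. 4: least-squares regression with VPDav as predictor, ACL as response
fit = stats.linregress(vpd_modern, acl_modern)
print(f"ACL = {fit.intercept:.2f} + {fit.slope:.2f} * VPDav, R^2 = {fit.rvalue**2:.2f}")

# Eq. 5: invert the fitted transfer function to reconstruct VPDav
acl_fossil = 30.0  # hypothetical down-core measurement
vpd_reconstructed = (acl_fossil - fit.intercept) / fit.slope
print(f"Reconstructed VPDav: {vpd_reconstructed:.2f} kPa")
```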
<urn:uuid:f27b4c5d-a384-4774-976b-658565cdfd59>
2.546875
4,554
Academic Writing
Science & Tech.
25.33771
95,494,667
Jupiter, along with its set of satellites and atmosphere, is a pint-sized system within our solar system. Each moon is, in effect, a floating asteroid captured by the gravitational pull of the planet. When you think about astronomy, one term may come to mind: multi-spectral. This is because imaging many of the cosmic bodies far from Earth relies on light data that goes beyond the visual spectrum. Simple visual telescopy on Earth is affected by distortion caused by the atmosphere. This is why clear images of other planets and stars, even those in our own solar system, cannot easily be obtained with high-quality scopes. Astronomers and astrophysicists have known, for over a century, that the Earth is being bombarded with cosmic rays. However, without advanced equipment and techniques, it could not be said for sure from where these rays emanated. Researchers from a multi-group project have reported the detection of particles that make up these cosmic rays. Furthermore, they have also identified a source for these rays. The next total lunar eclipse will occur at the end of this month, in 2018, according to bodies such as NASA. This event, while not the only one to occur this century, may be the most impressive one in the lifetime of most people alive today. This occurrence is also predicted to be accompanied by the 'blood moon' phenomenon. Finally, this eclipse will be visible from most of Earth for about four hours per time zone. The space stations of the immediate future are going to have to be clean. This is because they will represent the first wave of manned scientific missions to their destinations (which may include the surface of Mars). Therefore, their crews will need to avoid contamination at all costs, so that the samples of extra-terrestrial material remain pristine. This process will preserve their value to researchers and engineers after they are collected. On Earth, the best option to prevent this kind of contamination is a clean-room environment. Thanks to astrophysics and other similar disciplines, we now think of our galaxy as billions of years old. However, it never occurred to the same scientists that this may have taken its toll on the Milky Way. Europa, a moon that orbits Jupiter, looks like an inert icy ball from the outside. However, many astronomers theorize that there is an Earth-like ocean underneath the crust. This crust could also be hiding energy sources. Therefore, conditions for life may indeed exist on Europa! Some scientists are very interested in knowing what would happen if the sun went dark for good. This is reasonable enough, as nothing on our planet can live without this important star. For some time, astronomers were doubtful about what would happen to the sun at the end of its life. The death-throes observed in other stars were seen as unlikely to apply to it based on its size. "We are just an advanced breed of monkeys on a minor planet of a very average star. But we can understand the Universe. That makes us something very special." -Stephen Hawking Artificial intelligence (AI) has helped scientists and astronomers develop new methods that determine the long-term habitability of exoplanets orbiting more than one star. In 2008, in the Nubian Desert (Sudan), scientists discovered a meteorite with diamonds, surrounded by layers of graphite. This asteroid-turned-meteor that hit the Earth was 4 meters in diameter and came to be known as 2008 TC3.
This body was considered a compelling discovery because it revealed many secrets dating back to the beginning of our solar system. NASA has just announced plans for its next launch, which will propel a robotic vehicle to the red planet. The US space authority is working with United Launch Alliance to put its new InSight craft on a northern region of Mars. The team of experts driving this mission has worked out a launch window in May that will last for a short time. The launch will be conducted in a manner that will help InSight into its ideal orbit and subsequent trajectory toward its destination. A great amount of scientific work goes into finding out exactly what happens at the center of our galaxy. This research is important for numerous reasons, not least that everything in the Milky Way may be pulled into that heart. An international team of astrophysicists has reported the successful imaging of Icarus, a single star whose properties suggest that it is located approximately 9 billion light years away from Earth. This new finding indicates that Icarus is possibly the most distant body in the universe that humans have detected to date. Black holes and their activity are typically tracked with spectral telescopy. This is because the actions they are best known for (the destruction of matter that gets too close) have become associated with certain types of emissions in the optical to X-ray ranges. Falcon Heavy, manufactured by US entrepreneur Elon Musk's space technology company SpaceX, has been dubbed the world's most powerful operational rocket. It is currently the highest-capacity rocket, and could, in the near future, enable humans to travel into space, to other planets, or maybe even the moon! For decades, scientists have been using long-range probes and telescopes to discover and study the bodies that inhabit our galaxy and beyond. The results have sparked numerous reports on deep-space phenomena and bodies such as stars, nebulae and black holes. The latter features have captured the imagination of many in a particular way. This is due to telemetry that portrayed black holes as massive, powerful objects capable of such things as pulling in the matter around them and perhaps turning the same into nothing. NASA's hotly awaited teleconference last night didn't reveal the existence of alien life, as some had hoped for against all odds, but it did reveal something just as exciting: a solar system with as many planets as our own. The system in question surrounds a huge star known as Kepler 90, which is located in the Draco constellation, around 2,500 light years away from Earth. NASA and Google have been collaborating on a project using NASA's Kepler telescope and Google's powerful Artificial Intelligence technology. The findings are due to be released at a news conference on Thursday, December 14.
<urn:uuid:a1d8349a-3ff2-46f1-898b-afb68f6977b9>
3.390625
1,286
Content Listing
Science & Tech.
43.263461
95,494,674
Global declines in insects have sparked wide interest among scientists, politicians, and the general public. Loss of insect diversity and abundance is expected to provoke cascading effects on food webs and to jeopardize ecosystem services. Our understanding of the extent and underlying causes of this decline is based on the abundance of single species or taxonomic groups only, rather than changes in insect biomass, which is more relevant for ecological functioning. Here, we used a standardized protocol to measure total insect biomass using Malaise traps, deployed over 27 years in 63 nature protection areas in Germany (96 unique location-year combinations), to infer the status and trend of local entomofauna. Our analysis estimates a seasonal decline of 76% and a mid-summer decline of 82% in flying insect biomass over the 27 years of study. We show that this decline is apparent regardless of habitat type, while changes in weather, land use, and habitat characteristics cannot explain this overall decline. This as yet unrecognized loss of insect biomass must be taken into account in evaluating declines in abundance of species depending on insects as a food source, and ecosystem functioning in the European landscape. We investigated the navigational capabilities of the world's largest land-living arthropod, the giant robber crab Birgus latro (Anomura, Coenobitidae); this crab reaches 4 kg in weight and can live for up to 60 years. Populations are distributed over small Indo-Pacific islands of the tropics, including Christmas Island (Indian Ocean). Although this species has served as a crustacean model to explore anatomical, physiological, and ecological aspects of terrestrial adaptations, few behavioral analyses of it exist. We used a GPS-based telemetric system to analyze movements of freely roaming robber crabs, the first large-scale study of any arthropod using GPS technology to monitor behavior. Although female robber crabs are known to migrate to the coast for breeding, no such observations have been recorded for male animals. In total, we equipped 55 male robber crabs with GPS tags, successfully recording more than 1,500 crab days of activity, and followed some individual animals for as long as three months. Besides site fidelity with short-distance excursions, our data reveal long-distance movements (several kilometers) between the coast and the inland rainforest. These movements are likely related to mating, saltwater drinking and foraging. The tracking patterns indicate that crabs form route memories. Furthermore, translocation experiments show that robber crabs are capable of homing over large distances. We discuss whether the search behavior induced in these experiments suggests path integration as another important navigation strategy. Carbonated hydroxyapatite is the mineral found in vertebrate bones and teeth, whereas invertebrates utilize calcium carbonate in their mineralized organs. In particular, stable amorphous calcium carbonate is found in many crustaceans. Here we report on an unusual, crystalline enamel-like apatite layer found in the mandibles of the arthropod Cherax quadricarinatus (freshwater crayfish). Despite their very different thermodynamic stabilities, amorphous calcium carbonate, amorphous calcium phosphate, calcite and fluorapatite coexist in well-defined functional layers in close proximity within the mandible.
The softer amorphous minerals are found primarily in the bulk of the mandible whereas apatite, the harder and less soluble mineral, forms a wear-resistant, enamel-like coating of the molar tooth. Our findings suggest a unique case of convergent evolution, where similar functional challenges of mastication led to independent developments of structurally and mechanically similar, apatite-based layers in the teeth of genetically remote phyla: vertebrates and crustaceans. Biogenic amines, particularly serotonin, are recognised to play an important role in controlling the aggression of invertebrates, whereas the effect of neurohormones is still underexplored. The crustacean Hyperglycemic Hormone (cHH) is a multifunctional member of the eyestalk neuropeptide family. We expect that this neuropeptide influences aggression either directly, by controlling its expression, or indirectly, by mobilizing the energetic stores needed for the increased activity of an animal. Our study aims to test such an influence and the possible reversion of hierarchies in the red swamp crayfish, Procambarus clarkii, as a model organism. Three types of pairs of similarly sized males were formed: (1) 'control pairs' (CP, n = 8): both individuals were injected with a phosphate saline solution (PBS); (2) 'reinforced pairs' (RP, n = 9): the alpha alone was injected with native cHH, and the beta with PBS; (3) 'inverted pairs' (IP, n = 9): the opposite of (2). We found that, independently of the crayfish's prior social experience, cHH injections induced (i) the expression of dominance behaviour, (ii) higher glycemic levels, and (iii) less time spent motionless. In CP and RP, fight intensity decreased with the establishment of dominance. On the contrary, in IP, betas became increasingly likely to initiate and escalate fights and, consequently, increased their dominance until the hierarchy temporarily reversed. Our results demonstrate, for the first time, that, similarly to serotonin, cHH enhances individual aggression, to the point of reversing, albeit transiently, the hierarchical rank. New research perspectives are thus opened in the effort to understand the role of cHH in the modulation of agonistic behaviour in crustaceans. The amphipod crustacean Parhyale hawaiensis is a blossoming model system for studies of developmental mechanisms and more recently regeneration. We have sequenced the genome, allowing annotation of all key signaling pathways, transcription factors, and non-coding RNAs that will enhance ongoing functional studies. Parhyale is a member of the Malacostraca clade, which includes crustacean food crop species. We analysed the immunity-related genes of Parhyale as an important comparative system for these species, where immunity-related aquaculture problems have increased as farming has intensified. We also find that Parhyale and other species within Multicrustacea contain the enzyme sets necessary to perform lignocellulose digestion ('wood eating'), suggesting this ability may predate the diversification of this lineage. Our data provide an essential resource for further development of Parhyale as an experimental model. The first malacostracan genome will underpin ongoing comparative work in food crop species and research investigating lignocellulose as an energy source. The American brine shrimp Artemia franciscana is invasive in the Mediterranean region where it has displaced native species (the sexual A. salina, and the clonal A. parthenogenetica) from many salt pond complexes.
Artemia populations are parasitized by numerous avian cestodes whose effects have been studied in native species. We present a study from the Ebro Delta salterns (NE Spain), in a salt pond where both A. franciscana and native A. salina populations coexist, providing a unique opportunity to compare the parasite loads of the two sexual species in syntopy. The native species had consistently higher infection parameters, largely because the dominant cestode in A. salina adults and juveniles (Flamingolepis liguloides) was much rarer in A. franciscana. The most abundant cestodes in the alien species were Eurycestus avoceti (in adults) and Flamingolepis flamingo (in juveniles). The abundance of E. avoceti and F. liguloides was higher in the A. franciscana population syntopic with A. salina than in a population sampled at the same time in another pond where the native brine shrimp was absent, possibly because the native shrimp provides a better reservoir for parasite circulation. Infection by cestodes caused red colouration in adult and juvenile A. salina, and also led to castration in a high proportion of adult females. Both these effects were significantly stronger in the native host than in A. franciscana with the same parasite loads. However, for the first time, significant castration effects (for E. avoceti and F. liguloides) and colour change (for six cestode species) were observed in infected A. franciscana. Avian cestodes are likely to help A. franciscana outcompete native species. At the same time, they are likely to reduce the production of A. franciscana cysts in areas where they are harvested commercially. Invasive non-native species are of great concern throughout the world. The potential severity of the impacts of non-native species is assessed to support effective conservation management. However, such risk assessment is often difficult, and underestimating possible harm can cause substantial issues. Here, we document the catastrophic decline of a soil ecosystem in the Ogasawara Islands, a UNESCO World Heritage site, due to predation by the non-native land nemertine Geonemertes pelaensis, whose harm had previously gone unnoticed. This nemertine is widely distributed in tropical regions, and no study has shown that it feeds on arthropods. However, we experimentally confirmed that G. pelaensis preys on various arthropod groups. The soil fauna of Ogasawara was originally dominated by isopods and amphipods, but our surveys in the southern parts of Hahajima Island showed that these became extremely scarce in the areas invaded by G. pelaensis. Carnivorous arthropods decreased through indirect effects of its predation. The radical decline of soil arthropods on Chichijima Island since the 1980s was also caused by G. pelaensis, which was first recorded there in 1981. Thus, the soil ecosystem was already seriously damaged in Ogasawara by the nemertine. The present findings raise an issue and a limitation in recognizing the threats of non-native species. Pig carcasses, as human proxies, were placed on the seabed at a depth of 300 m in the Strait of Georgia and observed continuously by a remotely operated camera and instruments. Two carcasses were deployed in spring and two in fall utilizing Ocean Networks Canada's Victoria Experimental Network under the Sea (formerly VENUS) observatory.
A trial experiment showed that bluntnose sixgill sharks could rapidly devour a carcass, so a platform was designed that held two matched carcasses, one fully exposed, the other covered in a barred cage to protect it from sharks, while still allowing invertebrates and smaller vertebrates access. The carcasses were deployed under a frame that supported a video camera, and instruments that recorded oxygen, temperature, salinity, density, pressure, conductivity, sound speed and turbidity at one-minute intervals. The exposed spring carcass was briefly fed upon by sharks, but they were inefficient feeders and lost interest after a few bites. Immediately after deployment, all carcasses, in both spring and fall, were very rapidly covered in vast numbers of lysianassid amphipods. These skeletonized the carcasses by Day 3 in fall and Day 4 in spring. A dramatic, very localized drop in dissolved oxygen levels occurred in fall, exactly coinciding with the presence of the amphipods. Oxygen levels returned to normal once the amphipods dispersed. Either the physical presence of the amphipods or the sudden drawdown of oxygen during their tenure excluded other fauna. The amphipods fed from the inside out, removing the skin last. After the amphipods had receded, other fauna such as spot shrimp and a few Dungeness crabs colonized, but by this time all soft tissue had been removed. The amphipod activity caused major bioturbation in the local area and possible oxygen depletion. The spring deployment carcasses became covered in silt and a black film formed on them and on the silt above them, whereas the fall bones remained uncovered and hence continued to be attractive to large numbers of spot shrimp. The carcass remains were recovered after 166 and 134 days, respectively, for further study. - Proceedings of the National Academy of Sciences of the United States of America - Published about 3 years ago It has been suggested that we do not know within an order of magnitude the number of all species on Earth [May RM (1988) Science 241(4872):1441-1449]. Roughly 1.5 million valid species of all organisms have been named and described [Costello MJ, Wilson S, Houlding B (2012) Syst Biol 61(5):871-883]. Given Kingdom Animalia numerically dominates this list and virtually all terrestrial vertebrates have been described, the question of how many terrestrial species exist is all but reduced to one of how many arthropod species there are. With beetles alone accounting for about 40% of all described arthropod species, the truly pertinent question is how many beetle species exist. Here we present four new and independent estimates of beetle species richness, which produce a mean estimate of 1.5 million beetle species. We argue that the surprisingly narrow range (0.9-2.1 million) of these four autonomous estimates, derived from host-specificity relationships, ratios with other taxa, plant:beetle ratios, and a completely novel body-size approach, represents a major advance in honing in on the richness of this most significant taxon, and is thus of considerable importance to the debate on how many species exist. Using analogous approaches, we also produce independent estimates for all insects, mean: 5.5 million species (range 2.6-7.8 million), and for terrestrial arthropods, mean: 6.8 million species (range 5.9-7.8 million), which suggest that estimates for the world's insects and their relatives are narrowing considerably. How do stunning functional innovations evolve from unspecialized progenitors?
This puzzle is particularly acute for ultrafast movements of appendages in arthropods as diverse as shrimps, stomatopods, insects [3-6], and spiders. For example, the spectacular snapping claws of alpheid shrimps close so fast (∼0.5 ms) that jetted water creates a cavitation bubble and an immensely powerful snap upon bubble collapse. Such extreme movements depend on (1) an energy-storage mechanism (e.g., some kind of spring) and (2) a latching mechanism to release stored energy quickly. Clearly, rapid claw closure must have evolved before the ability to snap, but its evolutionary origins are unknown. Unearthing the functional mechanics of transitional stages is therefore essential to understand how such radical novel abilities arise [9-11]. We reconstructed the evolutionary history of shrimp claw form and function by sampling 114 species from 19 families, including two unrelated families within which snapping evolved independently (Alpheidae and Palaemonidae) [12, 13]. Our comparative analyses, using micro-computed tomography (microCT) and confocal imaging, high-speed video, and kinematic experiments with select 3D-printed scale models, revealed a previously unrecognized "slip joint" in non-snapping shrimp claws. This slip joint facilitated the parallel evolution of a novel energy-storage and cocking mechanism, a torque-reversal joint, an apparent precondition for snapping. Remarkably, these key functional transitions between ancestral (simple pinching) and derived (snapping) claws were achieved by minute differences in joint structure. Therefore, subtle changes in form appear to have facilitated wholly novel functional change in a saltational manner.
<urn:uuid:5ce831fb-b736-43d4-8f0f-a20ae0b80be8>
3.8125
3,263
Academic Writing
Science & Tech.
22.500716
95,494,681
Authors: M. J. Germuska
The Vir Theory of Particles provides a formula for the relationship between mass and spin. Using this formula, the masses of over 200 particles were calculated with such accuracy that the errors from the actual masses are entirely attributable to the mass measurement errors. The particles come from 16 families, including the lightest family N and the heaviest family Y. For each family of particles considered there is one or more Mendeleev-like table in which the columns have increasing spin and the rows increasing mass, in such a way that the diagonal cells have the same predicted mass. The empty cells should in future be filled by new particles.
Comments: 46 Pages. Title has changed to "Mendeleev-like Tables of Hadrons"
<urn:uuid:a603197f-261f-4074-93ba-2a37620e60fc>
3.078125
301
Academic Writing
Science & Tech.
42.029113
95,494,682
Electromagnetic Wave Propagation
Electromagnetic waves can be generated by a variety of methods, such as a discharging spark or by an oscillating molecular dipole. Visible light is a commonly studied form of electromagnetic radiation, and exhibits oscillating electric and magnetic fields whose amplitudes and directions are represented by vectors that undulate in phase as sinusoidal waves in two mutually perpendicular (orthogonal) planes. This tutorial explores propagation of a virtual electromagnetic wave and considers the orientation of the magnetic and electric field vectors. The tutorial initializes with an electromagnetic wave being generated by the discharging spark from a virtual capacitor. The spark current oscillates at a frequency characteristic of the circuit, and the resulting electromagnetic disturbance is propagated with the electric (E) and magnetic (B) field vectors vibrating perpendicular to each other and to the direction of propagation (Z). The wavelength emitted by the virtual capacitor discharge can be altered (within the visible light range) by using the Wavelength slider. Before discussing the phenomenon of anisotropy further, a basic review of several physical optics principles, necessary to subsequent discussions, is required. As previously mentioned, visible light is a form of electromagnetic wave. If a capacitor is charged (Figure 1) and a spark is discharged through the two electrodes, the current induced by the spark flows down for a short time, slows down, but because of the inductance of the circuit, flows back upwards, recharging the capacitor again. The propagation of an electromagnetic wave, which has been generated by a discharging capacitor or an oscillating molecular dipole, is illustrated by Figure 1. The spark current oscillates at a frequency (ν), which is a characteristic of the circuit. The electromagnetic disturbance that results is propagated with the electric (E) and magnetic (B) vectors vibrating perpendicularly to each other and also to the direction of propagation (Z). The frequency, ν, is determined by the oscillator, while the wavelength is determined by the velocity of the wave divided by the oscillation frequency. As the current oscillates up and down in the spark gap, at the characteristic circuit frequency (ν), a magnetic field is created that oscillates in a horizontal plane. The changing magnetic field, in turn, induces an electric field so that a series of electrical and magnetic oscillations combine to produce a formation that propagates as an electromagnetic wave. The electric field in an electromagnetic wave vibrates with its vectorial force growing stronger and then weaker, pointing in one direction, and then in the other direction, alternating in a sinusoidal pattern (Figure 1). At the same frequency, the magnetic field oscillates perpendicular to the electric field. The electric and magnetic vectors, reflecting the amplitude and the vibration directions of the two waves, are oriented perpendicular to each other and to the direction of wave propagation. The velocity of the resulting electromagnetic wave can be deduced from the relationships defining the electric and magnetic field interactions. Maxwell's equations prove that velocity equals the speed of light in a vacuum (c; equal to 300,000 kilometers per second) divided by the square root of the dielectric constant (ξ) of the medium times the magnetic permeability (μ) of the medium.
Thus,

υ = c / √(ξμ)    (1)

For most materials that occur in living cells (some of which are non-conducting), the magnetic permeability is equal to a value of unity, so that:

υ = c / √ξ    (2)

Empirically, the velocity of light is known to be inversely proportional to the refractive index (n) of the material through which it propagates, therefore:

υ = c / n    (3)

From equations (2) and (3), the conclusion can be drawn that the refractive index is equal to the square root of the dielectric constant of that material if the measurements are made at the same frequency:

n = √ξ    (4)

Equation (4) reveals that optical measurements are, in fact, measurements of the electrical properties of the material. The dielectric properties, in turn, directly reflect the spatial three-dimensional arrangement of atoms and molecules that define the structure of a substance. The vector describing the interaction between an electromagnetic field and a substance lies in the same direction as the electric vector. This is true regardless of whether the electric or magnetic vectors are considered, because what matters is the effect of the electric or magnetic fields on the electrons in the material medium (the magnetic field affects those electrons that move in a plane perpendicular to the magnetic field).
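As a numerical illustration of equations (1)-(4), the following short sketch computes wave velocity from the dielectric constant. The water value is a rough optical-frequency figure used here only as an example.

```python
C_VACUUM = 299_792_458.0  # speed of light in vacuum, m/s

def wave_velocity(dielectric_constant: float, permeability: float = 1.0) -> float:
    """Equation (1): v = c / sqrt(xi * mu); with mu = 1 this is Eq. (2),
    and comparison with v = c/n (Eq. 3) gives n = sqrt(xi) (Eq. 4)."""
    return C_VACUUM / (dielectric_constant * permeability) ** 0.5

n_water = 1.33                  # approximate refractive index of water at optical frequencies
xi_water = n_water ** 2         # Eq. (4), valid when measured at the same frequency
print(wave_velocity(xi_water))  # ~2.25e8 m/s, i.e. light slows to ~75% of c in water
```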
<urn:uuid:7efe36e2-5588-4e60-a925-1cbf229e4ba8>
4.125
913
Tutorial
Science & Tech.
12.08966
95,494,718
In this chapter, you will look beyond the three-letter acronyms that form ASP.NET to discover its origins and how it fits into the Internet technologies picture. We will show you the advantages that ASP.NET offers over alternative technologies. We will also compare it to its predecessor, classic ASP. This will ready you for Chapter 4, where we will delve deeper into the technical aspects of ASP.NET and provide you with a basic migration guide from classic ASP.
Keywords: Business Logic, Simple Object Access Protocol, Common Gateway Interface, Memory Leak, Internet Information Service
<urn:uuid:36700697-0fe9-47c8-867c-69a731c4bb6c>
3.015625
123
Truncated
Software Dev.
45.208137
95,494,742
Radiocarbon and stable isotope constraints on Last Glacial Maximum and Younger Dryas ventilation in the western North Atlantic
Keigwin, Lloyd D.
Foraminiferal abundance, 14C ventilation ages, and stable isotope ratios in cores from high deposition rate locations in the western subtropical North Atlantic are used to infer changes in ocean and climate during the Younger Dryas (YD) and Last Glacial Maximum (LGM). The δ18O of the surface dwelling planktonic foram Globigerinoides ruber records the present-day decrease in sea surface temperature (SST) of ∼4°C from Gulf Stream waters to the northeastern Bermuda Rise. If during the LGM the modern δ18O/salinity relationship was maintained, this SST contrast was reduced to 2°C. With LGM to interglacial δ18O changes of at least 2.2‰, SSTs in the western subtropical gyre may have been as much as 5°C colder. Above ∼2.3 km, glacial δ13C was higher than today, consistent with nutrient-depleted (younger) bottom waters, as identified previously. Below that, δ13C decreased continually to −0.5‰, about equal to the lowest LGM δ13C in the North Pacific Ocean. Seven pairs of benthic and planktonic foraminiferal 14C dates from cores >2.5 km deep differ by 1100 ± 340 years, with a maximum apparent ventilation age of ∼1500 years at 4250 m and at ∼4700 m. Apparent ventilation ages are presently unavailable for the LGM < 2.5 km because of problems with reworking on the continental slope when sea level was low. Because LGM δ13C is about the same in the deep North Atlantic and the deep North Pacific, and because the oldest apparent ventilation ages in the LGM North Atlantic are the same as the North Pacific today, it is possible that the same water mass, probably of southern origin, flowed deep within each basin during the LGM. Very early in the YD, dated here at 11.25 ± 0.25 (n = 10) conventional 14C kyr BP (equal to 12.9 calendar kyr BP), apparent ventilation ages <2.3 km water depth were about the same as North Atlantic Deep Water today. Below ∼2.3 km, four YD pairs average 1030 ± 400 years. The oldest apparent ventilation age for the YD is 1600 years at 4250 m. This strong contrast in ventilation, which indicates a front between water masses of very different origin, is similar to glacial profiles of nutrient-like proxies. This suggests that the LGM and YD modes of ocean circulation were the same.
Author Posting. © American Geophysical Union, 2004. This article is posted here by permission of American Geophysical Union for personal use, not for redistribution. The definitive version was published in Paleoceanography 19 (2004): PA4012, doi:10.1029/2004PA001029.
Suggested Citation: Keigwin, Lloyd D., "Radiocarbon and stable isotope constraints on Last Glacial Maximum and Younger Dryas ventilation in the western North Atlantic", Paleoceanography 19 (2004): PA4012, doi:10.1029/2004PA001029, https://hdl.handle.net/1912/3433
<urn:uuid:fef2569d-25be-4927-8b50-1107ce88bdda>
3.015625
995
Academic Writing
Science & Tech.
53.167037
95,494,751
The solitary wave (John Scott Russell's "Great Wave of Translation") was discovered in 1834 by Russell during investigations of the Edinburgh Union Canal. In the shallow water wave problem the solitary wave is a single hump, of large length-scale, travelling without change of form. Its existence was predicted theoretically by Rayleigh and Boussinesq in the 1870s and then definitively by Korteweg and de Vries in 1895 in the paper in which the now-famous KdV equation was first derived. The wave of permanent form can exist only because of a delicate balance between the linear mechanism of dispersion (produced partly because of finite-depth effects and partly because of surface tension) and the nonlinear one by which the higher parts of a water wave travel more quickly than the lower. Dispersion allows the different frequency or wavelength components of a disturbance to travel away at different speeds, so that a concentrated elevation would in time disperse into an oscillatory wave train along which the wavelength changes slowly and continuously. The nonlinear mechanism in isolation leads to overturning of the wave, as the higher parts overtake the lower. The dispersive and nonlinear mechanisms precisely balance in the solitary wave, and the amplitude, lengthscale and propagation speed are all related; in a certain nondimensional set of coordinates moving with the speed of propagation of waves of infinitesimal amplitude the solitary wave solution of KdV is

u(x, t) = 2a² sech²[a(x − 4a²t)]

so that if the lengthscale a⁻¹ is given, the amplitude is 2a² and the speed V = 4a²; narrow pulses are high and travel fast.
Keywords: Solitary Wave, Water Wave, Nonlinear Wave Equation, Solitary Wave Solution, Schrödinger Equation
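As a quick consistency check on the solution quoted above, the following sketch verifies it symbolically. It assumes the standard nondimensional form u_t + 6uu_x + u_xxx = 0 for the KdV equation in the moving frame, which is the normalization matching the amplitude and speed relations given in the text.

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
a = sp.symbols('a', positive=True)

# Solitary wave: amplitude 2a^2, lengthscale 1/a, speed 4a^2
u = 2*a**2 / sp.cosh(a*(x - 4*a**2*t))**2

# Substitute into the KdV equation u_t + 6*u*u_x + u_xxx
residual = sp.diff(u, t) + 6*u*sp.diff(u, x) + sp.diff(u, x, 3)
print(sp.simplify(residual))  # prints 0: dispersion and nonlinearity balance exactly
```

The vanishing residual is the "delicate balance" in symbolic form: the nonlinear steepening term exactly cancels the dispersive spreading term for this one-parameter family of pulses.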
<urn:uuid:c17852cf-64ea-4f16-99dd-7301bf78c499>
3.984375
392
Truncated
Science & Tech.
29.852165
95,494,755
Introduced megafauna are rewilding the Anthropocene
Publication Type: Journal Article
Ecography, 2018, 41 (6), pp. 857-866
© 2017 The Authors. Large herbivorous mammals, already greatly reduced by the late-Pleistocene extinctions, continue to be threatened with decline. However, many herbivorous megafauna (body mass ≥ 100 kg) have populations outside their native ranges. We evaluate the distribution, diversity and threat status of introduced terrestrial megafauna worldwide and their contribution towards lost Pleistocene species richness. Of 76 megafauna species, 22 (∼29%) have introduced populations; of these, eleven (50%) are threatened or extinct in their native ranges. Introductions have increased megafauna species richness by between 10% (Africa) and 100% (Australia). Furthermore, between 15% (Asia) and 67% (Australia) of extinct species richness, from the late Pleistocene to today, has been numerically replaced by introduced megafauna. Much remains unknown about the ecology of introduced herbivores, but evidence suggests that these populations are rewilding modern ecosystems. We propose that attitudes towards introduced megafauna should allow for broader research and management goals.
<urn:uuid:a04f1c9e-0bb3-46f7-af68-9f8dda89b396>
3.109375
320
Truncated
Science & Tech.
11.159299
95,494,770
The City of Bellingham has a management tool for the City's terrestrial and freshwater habitats available for public use. The purpose of the Habitat Restoration Technical Assessment (Restoration Assessment) is to provide a science-based framework to guide future habitat restoration, protection, and recovery. In addition, the Restoration Assessment helps to fulfill the City's Legacies and Strategic Commitments by addressing Clean, Safe Drinking Water; Healthy Environment; and Sense of Place in actionable and accessible ways. The Restoration Assessment is intended for use by City staff, consultants, citizens and natural resource organizations to screen and select restoration, mitigation and protection actions. The document can also be used to coordinate City, private and non-profit efforts.
Restoration Assessment Overview
The Restoration Assessment encompasses all areas within the city limits and urban growth area, with the exception of estuarine/marine areas and the Lake Whatcom watershed. Estuarine and marine nearshore areas are addressed in a separate document, the WRIA 1 Nearshore and Estuarine Assessment and Restoration Prioritization (CGS, 2013). Similarly, the Lake Whatcom watershed has undergone numerous evaluations for restoration priorities under the separate WRIA 1 Watershed Management Project. The Restoration Assessment provides a similar science-based assessment tool for terrestrial and freshwater habitats.
The Restoration Assessment divides habitats into four habitat types, assesses current functions for each, and then completes three prioritization exercises to identify restoration and protection actions:
- Preliminary Prioritization: prioritizes actions and locations based solely on ecological factors
- Secondary Prioritization: refines Preliminary Prioritization results to identify the probability of ecological improvement by adding factors such as feasibility, scope, and scale
- Comprehensive Prioritization: refines the Secondary Prioritization results to identify actions that would result in improvement over multiple habitat types
Prioritization results were used to rank the City's sub-watersheds into three tiers:
- Tier 1 sub-watersheds: sub-watersheds where actions have the potential for the maximum practicable improvement across multiple habitat types. The Restoration Assessment provides detailed information for the Tier 1 sub-watersheds, including a menu of recommended restoration and protection actions with associated maps.
- Tier 2 and 3 sub-watersheds: sub-watersheds where actions could benefit a specific habitat or function but are not likely to have the broad improvement that defines the Tier 1 sub-watersheds. The Restoration Assessment prioritizes general types of actions but does not provide the additional detail given for the Tier 1 sub-watersheds.
The Restoration Assessment can be used to inform restoration actions and programs in many ways:
- Characterizing existing ecological conditions and functions
- Comparing habitat functions, restoration needs, and opportunities
- Identifying which actions are most likely to be effective
- Identifying where actions are most likely to be effective
- Serving as the foundation for a future City mitigation program
Background and Contributors
The final Restoration Assessment represents a multi-year effort by the project team, citizens, and staff. Early in the project timeline, key insight was gathered from community members during an October 2013 public Open House.
The team also received ongoing support from a Technical Advisory Group (TAG) of academics, resource agency staff, consultants and citizens representing 10 different entities. The TAG helped guide the project from beginning through completion in 2015. When the project began, the goal was to create a Master Plan that combined scientific information with community values. However, during the process of assessing and documenting habitat conditions, the project team and TAG recognized the need for a science-based document. They therefore agreed the final document would be a "technical assessment," allowing users to employ the document as a scientific tool. The final Restoration Assessment is that science-based tool.
- Environmental Science Associates (ESA)
- Northwest Ecological Services (technical assistance)
- Veda Environmental (Advisory Group coordination and facilitation)
For additional information contact: Analiese Burns, Habitat and Restoration Manager, Public Works Natural Resources, firstname.lastname@example.org, (360) 778-7968.
Additional Habitat Restoration Information
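The three-pass prioritization lends itself to a simple weighted-screening pattern. The sketch below is purely illustrative; the assessment's actual criteria, scores and weights are not given in this summary, and every number and action name here is hypothetical. It only shows the shape of such a screen: score candidates on ecology first, discount by feasibility, then favor actions that help multiple habitat types:

```python
# Illustrative three-pass screen; criteria, weights and data are hypothetical.
actions = [
    # name, ecological_benefit (0-1), feasibility (0-1), habitat_types_helped
    ("riparian planting",   0.9, 0.8, 3),
    ("culvert replacement", 0.7, 0.4, 2),
    ("invasive removal",    0.5, 0.9, 1),
]

def preliminary(a):          # ecological factors only
    return a[1]

def secondary(a):            # discount by probability of success
    return preliminary(a) * a[2]

def comprehensive(a):        # reward multi-habitat-type benefit
    return secondary(a) * a[3]

for a in sorted(actions, key=comprehensive, reverse=True):
    print(f"{a[0]:20s} prelim={preliminary(a):.2f} "
          f"second={secondary(a):.2f} comp={comprehensive(a):.2f}")
```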
<urn:uuid:8fe3082b-b05d-4fae-8c35-86335f5ad595>
2.9375
865
Knowledge Article
Science & Tech.
-1.447419
95,494,790
Plastics, with their many useful physical and chemical properties, are widely used across industries and in activities of daily living. Yet the insidious effects of plastics, particularly their long-term effects on aquatic organisms, are not properly understood. Plastics have been shown to degrade into micro- and nano-sized particles, known as microplastics and nanoplastics respectively. These minute particles have been shown to cause various adverse effects on aquatic organisms, ranging from growth inhibition, developmental delay and altered feeding behaviour in aquatic animals to decreased photosynthetic efficiency and induction of oxidative stress in microalgae. This review covers the distribution of microplastics and nanoplastics in aquatic ecosystems, focusing on their effects on microalgae as well as the co-toxicity of microplastics and nanoplastics with other pollutants. The review also discusses future research directions which could be taken to gain a better understanding of the impacts of microplastics and nanoplastics on aquatic ecosystems.
<urn:uuid:c895a6c8-e0c0-4630-a65a-fbdb31e51133>
3.171875
205
Truncated
Science & Tech.
0.398846
95,494,822
Ecosystems (Book, 2007)
In examining both theory and applications, this book, through useful examples, provides a stimulating introduction to ecosystems. It examines the nature, types and characteristics of ecosystems as well as investigating the interactions between various systems and human actions. Using functional ecology as the basis for applying the ecosystem concept in contemporary environmental science and ecology, this second edition of this highly successful volume has been updated to reflect the latest research. It incorporates a strengthened theme in the use of functional ecology in explaining how ecosystems work and how the ecosystem concept may be used in science and applied science, and coverage of the interactions between humans and ecosystems has been substantially bolstered with the addition of chapters on human impacts and large scale impacts on ecosystems, and global environmental change and the consequences for ecosystems. Presented in a student-friendly format, this book features boxed definitions, examples, case studies, summary points, discussion questions and annotated further reading lists. It provides a concise and accessible synthesis of both ecosystem theory and its applications, and will be a valuable resource for students of environmental studies, ecology and geography.
<urn:uuid:11831ca8-7232-4dcf-b7a3-20a149fb2c6a>
3.0625
219
Product Page
Science & Tech.
-2.842262
95,494,873
Counting the catch of a fisherman.
Recently, a member of the SPACES team attended a regional workshop in Nosy Be, Madagascar, from 27 to 28 April 2015. The workshop was convened by the Wildlife Conservation Society (WCS) and Dr Emily Darling with the aim of bringing together various researchers working on coral reef fisheries in the Western Indian Ocean (WIO). The workshop was attended by about 17 people, representatives of the following countries and organizations: Madagascar (WCS, Blue Ventures, WWF, CNRO and HEAL/Darwin), Mozambique (UniLurio and SPACES), Tanzania (University of Dar es Salaam) and Kenya (WCS).
Gleaning on the reef flat is often a very important source of food and income. Here a SPACES team member documents what types of reef organisms are collected in Maringanha.
At the workshop the following topics were discussed: (1) the upcoming WCS program that aims to globalize monitoring efforts on coral reef fisheries; (2) which indicators for coral reef fisheries in the WIO should be monitored; and (3) the current monitoring programmes of WIO coral reef fisheries at sites in Kenya, Madagascar, Tanzania and Mozambique. Information on this last topic was provided by the representatives of each organization previously mentioned. This workshop and project aim to develop a coordinated database of monitoring indicators and outline the data sharing agreements across organizations that would be necessary for future collaborations in the WIO.
This post is authored by Vera Julien, a researcher at the University Eduardo Mondlane in Mozambique, who is working with SPACES on artisanal fisheries surveys in Pemba, Cabo Delgado province, and represented SPACES at this workshop.
New Publication: Kenyan and Mozambican coral reef 'carbonate budgets' contribute to international picture of corals under sea-level rise.
SPACES coral reef surveys have contributed to an international picture of how reefs might be able to grow to keep up with sea-level rise, recently published in Nature. The growth of coral reefs is strongly influenced by the amount and types of coral living on the reef surface, but across both regions this growth […]
Artisanal fisheries in Cabo Delgado, Mozambique: rural vs urban fishing centers
PDF link. About: This working paper investigates the relationship between gear, catch and income generated by the fishers in different seasons. SPACES researchers collected data using fish catch surveys at landing sites in Pemba town, Vamizi and Lalane. A standard questionnaire was used to collect the effort and location of the fishery. The fishery shows […]
Changing dynamics of reef framework production in the Western Indian Ocean - Fraser Januchowski-Hartley et al. (1.2 MB)
Link to PDF. About: Fraser Januchowski-Hartley's presentation at the 2015 WIOMSA symposium on carbonate budgets and current coral condition at the SPACES sites Mombasa, Shimoni, Vamizi, and Pemba.
Linking reef ecology to island building: Parrotfish identified as major producers of island-building sediment in the Maldives. Geology 2015
Link to PDF (Open Access). Abstract: Reef islands are unique landforms composed entirely of sediment produced on the surrounding coral reefs. Despite the fundamental importance of these ecological-sedimentary links for island development and future maintenance, reef island sediment production regimes remain poorly quantified. Using census and sedimentary data from Vakkaru island (Maldives), a sand-dominated atoll […]
Remote coral reefs can sustain high growth potential and may match future sea-level trends. Nature Scientific Reports 2015
Link to PDF (Open Access). Abstract: Climate-induced disturbances are contributing to rapid, global-scale changes in coral reef ecology. As a consequence, reef carbonate budgets are declining, threatening reef growth potential and thus capacity to track rising sea levels. Whether disturbed reefs can recover their growth potential, and how rapidly, are thus critical research questions. Here we […]
Similar impacts of fishing and environmental stress on calcifying organisms in Indian Ocean coral reefs. Marine Ecology Progress Series 2016
Link to PDF (Open Access). Abstract: Calcification and reef growth processes dominated by corals and calcifying algae are threatened by climate and fishing disturbances. Twenty-seven environmental, habitat, and species interaction variables were tested for their influence on coral and calcifier cover in 201 western Indian Ocean coral reefs distributed across ~20° of latitude and longitude […]
Environmental variability indicates a climate-adaptive center under threat in northern Mozambique coral reefs. Ecosphere 2017
Link to PDF (Open Access). Abstract: A priority for modern conservation is finding and managing regions with environmental and biodiversity portfolio characteristics that will promote adaptation and the persistence of species during times of rapid climate change. The latitudinal edges of high-diversity biomes are likely to provide a mixture of environmental gradients and biological diversity […]
Drivers and predictions of coral reef budget trajectories. Proceedings of the Royal Society B: Biological Sciences 2017
Link to PDF (Open Access). Abstract: Climate change is one of the greatest threats to the long-term maintenance of coral-dominated tropical ecosystems, and has received considerable attention over the past two decades. Coral bleaching and associated mortality events, which are predicted to become more frequent and intense, can alter the balance of different elements that […]
Ecological Underwater Surveys
All information, including publications, conference presentations and news items related to underwater ecological surveys, is tagged below.
New paper from SPACES team members shows the positive correlation between the orange-lined triggerfish and calcifier cover
SPACES co-investigators Tim McClanahan and Nyawira Muthiga have recently published the paper Similar impacts of fishing and environmental stress on calcifying organisms in Indian Ocean coral reefs (Open Access, free to read) in the Marine Ecology Progress Series. They investigated coral and calcifier cover in 201 western Indian Ocean reefs. McClanahan and Muthiga found that coral and calcifier cover […]
How important are parrotfish for coral reef islands?
Parrotfishes are a beautiful, colourful and ubiquitous group of fishes present on coral reefs around the world. They've received a lot of attention due to their importance in fisheries and in how they can help to maintain coral reef health by preventing outbreaks of fleshy macroalgae that can overgrow and out-compete corals. […]
<urn:uuid:056ead24-8c2f-49e8-836b-bb7e046e1ec9>
2.546875
1,335
Content Listing
Science & Tech.
13.897735
95,494,881
The main magnetic field, generated by turbulent currents within the deep mass of molten iron of the Earth's outer core, periodically flips its direction, such that a compass needle would point south rather than north. Such polarity reversals have occurred hundreds of times at irregular intervals throughout the planet's history – most recently about 780,000 years ago – but scientists are still trying to understand how and why. A new study of ancient volcanic rocks, reported in the Sept. 26 issue of the journal Science, shows that a second magnetic field source may help determine how and whether the main field reverses direction. This second field, which may originate in the shallow core just below the rocky mantle layer of the Earth, becomes important when the main north-south field weakens, as it does prior to reversing, says Brad Singer, a geology professor at the University of Wisconsin-Madison. Singer teamed up with paleomagnetist Kenneth Hoffman, who has been researching field reversals for over 30 years, to analyze ancient lava flows from Tahiti and western Germany in order to study past patterns of the Earth's magnetic field. The magnetism of iron-rich minerals in molten lava orients along the prevailing field, then becomes locked into place as the lava cools and hardens. "When the lava flows erupt and cool in the Earth's magnetic field, they acquire a memory of the magnetic field at that time," says Singer. "It's very difficult to destroy that in a lava flow once it's formed. You then have a recording of what the paleofield direction was like on Earth." Hoffman, of both California Polytechnic State University at San Luis Obispo and UW-Madison, and Singer are focusing on rocks that contain evidence of times that the main north-south field has weakened, which is one sign that the polarity may flip direction. By carefully determining the ages of these lava flows, they have mapped out the shallow core field during multiple "reversal attempts" when the main field has weakened during the past million years. During those periods of time, weakening of the main field reveals "virtual poles," regions of strong magnetism within the shallow core field. For example, Singer says, "If you were on Tahiti when those eruptions were taking place, your compass needle would point to not the North Pole, not the South Pole, but Australia." The scientists believe the shallow core field may play a role in determining whether the main field polarity flips while weakened or whether it recovers its strength without reversing. "Mapping this field during transitional states may hold the key to understanding what happens in Earth's core when the field weakens to a point where it can actually reverse," Hoffman says. Current evidence suggests we are now approaching one of these transitional states because the main magnetic field is relatively weak and rapidly decreasing, he says. While the last polarity reversal occurred several hundred thousand years ago, the next might come within only a few thousand years. "Right now, historic records show that the strength of the magnetic field is declining very rapidly. From a quick back-of-the-envelope prediction, in 1,500 years the field will be as weak as it's ever been and we could go into a state of polarity reversal," says Singer. "One broad goal of our research is to provide some predictive capability for what could happen and what could be the signs of the next reversal." Kenneth Hoffman | EurekAlert! 
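Singer's "back-of-the-envelope prediction" can be reproduced with simple extrapolation. The decay rate and threshold below are assumptions for illustration (the article gives neither; a dipole decay of roughly 5% per century is a commonly quoted order of magnitude), but with those inputs the arithmetic lands near the ~1,500-year figure he cites:

```python
# Illustrative extrapolation; decay rate and threshold are assumptions,
# not values from the article.
import math

decay_per_century = 0.05   # assumed ~5% field loss per century
threshold = 0.45           # assumed "weakest ever" fraction of today's field

# Exponential decay: f(t) = (1 - r)**(t / 100); solve f(t) = threshold.
t_years = 100 * math.log(threshold) / math.log(1 - decay_per_century)
print(f"field reaches {threshold:.0%} of today's strength "
      f"in ~{t_years:.0f} years")   # ~1,560 years with these assumptions
```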
<urn:uuid:b5462423-c0e3-448d-bd70-5261c114dd04>
3.921875
1,336
Content Listing
Science & Tech.
44.254764
95,494,883
When is a clone not a clone? According to new research from Rockefeller University’s Peter Mombaerts, creating mice by a two-step transfer of DNA does not reliably produce animals that are genetic duplicates of an original, and in some cases even creates “cloned” mice of the wrong sex. Scientists typically clone mice (and other animals) using a single-step process in which a donor cell nucleus -- containing the animal’s DNA -- is transferred into an egg, which has had its nucleus removed. The embryo is then activated, cultured and placed into the uterus of a female mouse. Recently, scientists have developed an alternative, two-step cloning procedure that may be more efficient. In the first step, after nuclear transfer the embryo is allowed to develop to a blastocyst stage, an early form of an embryo that is essentially a hollow ball of cells. The embryo is not placed into the mouse uterus, but converted into an embryonic stem cell line in a culture dish. In the second step, the embryonic stem cells are injected into another blastocyst made up of tetraploid cells -- cells with twice the DNA of normal cells. The process was believed to help ensure that the resulting embryo contains only descendants of embryonic stem cells, while allowing tissues outside of the embryo, such as the placenta, to be formed from the tetraploid cells (descendants of embryonic stem cells do not contribute to the placenta). Mombaerts and colleagues Jinsong Li, Tomohiro Ishii, and Duancheng Wen report in the September 20 issue of Current Biology that the resulting embryo is not necessarily a clone of the original mouse. They supply two pieces of evidence. In a first series of experiments the researchers applied the two-step method to 619 embryos. Their strategy was to mark the tetraploid cells with a “reporter” gene, called beta-galactosidase. They thus produced a total of 11 live born mice by injecting unmarked embryonic stem cells into marked tetraploid blastocysts. As expected, their placentas contained marked cells, but three mice also had marked cells within their bodies. In other words, some of their body cells were descendants of the tetraploid cells, meaning the mice were not pure clones but a mixture of two cell types. In a second series of experiments, Mombaerts and colleagues applied the two-step procedure to 204 embryos, three of which survived to adulthood. The embryonic stem cells injected into the blastocysts were all derived from a male cell line, but two of the offspring were female. The scientists then mated a male and a female mouse and obtained offspring. Analysis of the embryonic stem cells showed that some had lost the Y chromosome, which is necessary for an organism to display male characteristics. “Loss of the Y chromosome, a gross genetic alteration, indicates that the genome of the mouse donating the nucleus is not copied exactly in these mice,” says Mombaerts. “If something as big as the Y chromosome can get lost along the way during the two-step cloning procedure, there may be many other genetic alterations that are not as dramatic, but do alter the genetic structure of the cells in those mice.” While the majority of the cells in these mice probably do come from the embryonic stem cells, the occasional contribution by the tetraploid cells and the genetic alterations that arise spontaneously during embryonic stem cell culture are two reasons for which it is not appropriate to call the mice pure clones. 
Instead, Mombaerts now refers to these mice as "clonal" to differentiate them from true clones produced by the conventional, one-step cloning method.
Publication: Current Biology 15(18): R756-R757 (2005)
Source: Rockefeller University
<urn:uuid:b3a2f101-2173-412b-97ad-2aece8ec176f>
3.375
811
News Article
Science & Tech.
32.454228
95,494,892
The thermal behaviour of spacecraft, especially manned ones, is governed by various thermophysical effects. Heat transfer by radiation, conduction and convection influences the overall behaviour of the spacecraft. Energy production, distribution and consumption also have strong impacts on the temperature field prevailing in the spaceplane. Moreover, chemical reactions, the metabolic rates of the crew and external heat loads determine the thermal behaviour of the spacecraft. For design and optimisation of the system and its subsystems, a mathematical model is required which describes these effects. Many tools are available with which parts of the above-mentioned aspects can be covered. However, a comprehensive model of a spacecraft reflecting all aspects can only be established with a combination of different simulation tools. In the present paper, methods are described for how different simulation tools can be used to set up such a model. Some are used during the definition and set-up of the models; others are required during the execution of the calculations. As an example, the procedure to establish the Thermal Mathematical Models for the Pressurized Volumes of HERMES is described. During set-up and establishment of the ESATAN model, the radiative heat transfer between the inner surfaces was studied. At the same time, a CFD study of the flow field in the volumes was carried out. The resulting node model is then connected with a modular system simulation of the ECLSS to obtain a higher-level model in which both the lumped-parameter model of the pressurized volumes and the component-based model are combined and iteratively processed. Steady-state and transient load cases have been calculated and gave excellent results.
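As a toy illustration of the lumped-parameter ("node") modelling mentioned above (not the ESATAN model itself; all values here are hypothetical), consider two thermal nodes coupled by conduction, one dissipating equipment heat and one radiating to space, integrated with an explicit Euler step:

```python
# Minimal two-node thermal model; every parameter value is hypothetical.
SIGMA = 5.670e-8          # Stefan-Boltzmann constant, W/(m^2 K^4)

C = [5.0e4, 8.0e4]        # node heat capacities, J/K
T = [290.0, 300.0]        # initial temperatures, K
G = 15.0                  # conductive coupling node0 <-> node1, W/K
eps_A = [0.3, 0.0]        # emissivity*area radiating to space, m^2
Q = [0.0, 450.0]          # internal dissipation, W (e.g. equipment)

dt, t_end = 1.0, 3600.0   # 1 s explicit Euler steps for one hour
t = 0.0
while t < t_end:
    q01 = G * (T[1] - T[0])                              # conduction into node 0
    dT0 = (Q[0] + q01 - eps_A[0] * SIGMA * T[0]**4) / C[0]
    dT1 = (Q[1] - q01 - eps_A[1] * SIGMA * T[1]**4) / C[1]
    T[0] += dT0 * dt
    T[1] += dT1 * dt
    t += dt

print(f"T after 1 h: node0 = {T[0]:.1f} K, node1 = {T[1]:.1f} K")
```

A real thermal mathematical model uses hundreds of such nodes with radiative exchange factors between all surface pairs, which is where dedicated tools like ESATAN come in.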
<urn:uuid:67db5155-6eea-4898-8e57-fcee923b3264>
2.890625
324
Academic Writing
Science & Tech.
23.72572
95,494,895
Dormant: something that is inactive; not growing or developing.
Fossil: the remains, or an impression of the remains, of a plant or animal that existed in a past geological age and that has been removed from the soil.
Paleontology: a science dealing with the life of past geological periods. Often, people think of paleontology as only the study of fossils, like dinosaur bone hunting. Paleontologists study much more than bone fossils, including plants, pollen, spores, and seeds, both living and fossilized. Scientists who study pollen, spores and seeds are called palynologists.
Radiocarbon: a chemical analysis tool used by scientists to find out how old organic materials are, based on their content of the radioisotope carbon-14. Radiocarbon dating is believed to reliably tell the age of organic materials up to 40,000 years old. Also called carbon dating and carbon-14 dating.
You might be asking yourself, how long can a seed stay dormant? This is a question that scientists have tried to answer in many different ways. Scientists have found they can still germinate seeds found in preserved plant samples in an herbarium. An herbarium is a collection of pressed plants. The oldest living seed they have found this way is 90 years old. That's probably older than your grandmother!
Another scientist, William James Beal, started an experiment in 1880 - that's more than 100 years ago! He put seeds into 20 time capsules, and he dug one up every 5 years. He died 25 years after he started the experiment, so he was only able to dig up 5 time capsules. He found that some of the 25-year-old seeds could still sprout. Another scientist knew of his experiment and decided to keep it going. For the next 15 years he dug up another time capsule every 5 years. He was still finding seeds that could germinate, so he decided to change it to every 10 years. That way he could make the experiment last longer. The latest time capsule was unearthed in April of the year 2000, by yet another scientist. The seeds inside had been buried for 120 years, and some of them could still germinate. The next time capsule is scheduled to come up in the year 2020, when the seeds will be 140 years old. How old will you be?
To try to find the oldest living seed, some scientists have been collecting and trying to germinate seeds found in archeological and prehistoric sites. They thought they had found a 10,000-year-old living seed, because they found it in an animal burrow in a layer of soil that was that old. But some scientists argued that they couldn't really be sure the seeds were as old as the burrow they were found in. How could they be sure a seed didn't fall in while the paleontologists explored the site? So a scientist named Jane Shen-Miller decided to radiocarbon date old seed pods and germinate the seeds inside. She now holds the (so far) undisputed record for the oldest living seed to germinate: it was radiocarbon dated to 1288 years old (with a possible error of ±271 years).
Dr. Biology. (2009, October 07). How Long Can Seeds Live Underground? ASU - Ask A Biologist. Retrieved July 20, 2018 from https://askabiologist.asu.edu/content/how-long-can-seeds-live-underground
Celery seeds from a jar. How long could they be stored and still be planted and grow? A long-running experiment is letting us know the answer.
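The "1288 ± 271 years" figure comes from radiocarbon dating. As a worked illustration (the measured carbon-14 fraction below is back-calculated to match the quoted age, not taken from the study), a conventional radiocarbon age follows from the Libby mean life of 8,033 years:

```python
import math

LIBBY_MEAN_LIFE = 8033.0   # years, by radiocarbon-dating convention

def radiocarbon_age(fraction_modern):
    """Conventional 14C age from the measured 14C fraction (vs. modern)."""
    return -LIBBY_MEAN_LIFE * math.log(fraction_modern)

# Hypothetical measurement chosen to reproduce the quoted age:
f = math.exp(-1288.0 / LIBBY_MEAN_LIFE)   # ~0.852 of modern 14C remains
print(f"fraction modern = {f:.3f} -> age = {radiocarbon_age(f):.0f} yr")
```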
<urn:uuid:9c98515c-937f-49c1-90b6-c9eed06b809e>
3.625
888
Q&A Forum
Science & Tech.
68.449482
95,494,903
PhD Student, Pennsylvania State University
Climate change is widely recognized as a leading challenge for agriculture and natural ecosystems. Temperature and other environmental factors can strongly influence plants and thus have implications for other trophic levels. Temperature is predicted to rise by 2-5°C by the end of the century. Elevated temperature may affect both insect and plant performance and so alter the outcome of plant-herbivore interactions. Despite its importance, limited work has investigated biotic interactions under elevated temperature. Here we explored how increasing temperature affects plant-herbivore interactions in a model crop species, tomato (Solanum lycopersicum), and the tomato fruitworm, Helicoverpa zea. In particular, two questions were addressed: (a) How will elevated temperature affect development and defense strategies in plants? (b) How will this affect (both directly and indirectly) herbivore performance?
Results:
a) Plant Performance:
• Elevated temperature decreased shoot and root biomass by 18% and 14% at 30°C, and by 65% and 38% at 35°C, respectively.
• Higher temperature significantly reduced plant height and the rate of photosynthesis.
b) Defensive Proteins and Glandular Trichome Density:
• Constitutive PPO activity was 23% and 63% higher in plants grown at 30°C than in those grown at 25°C and 35°C, respectively.
• In response to caterpillar feeding, induced PPO activity was 1% and 31% higher at 30°C than at 25°C and 35°C.
• Constitutive TPI activity did not differ significantly between treatments; however, induced TPI activity fell by 7.4% and 24.4% at 25°C and 35°C relative to the highest value, recorded at 30°C.
• Trichome density was highest among plants grown at 30°C, followed by 25°C and 35°C.
c) Herbivore Performance:
• Direct effect: prolonged development and increased body mass at lower temperatures.
• Indirect effects (mediated via host-plant quality): plants grown at elevated temperature hindered the growth of larvae.
Conclusions:
• Elevated temperature affects plant growth rates, which alters the concentration of defensive compounds and the density of glandular trichomes.
• At 35°C, a temperature well above the optimal growth temperature of 25°C, plant growth, PPO, TPI and trichome density were lowest, indicative of super-optimal thermal stress.
• Growth rate increased linearly as a direct effect of temperature, but host-plant quality also affected growth, resulting from an interactive effect of primary/secondary metabolites and trichome density.
Abstract: Climate warming will fundamentally alter basic life history strategies of many ectothermic insects. In the lab, rising temperatures increase growth rates of lepidopteran larvae but also reduce final pupal mass and increase mortality. Using in situ field warming experiments on their natural host plants, we assessed the impact of climate warming on the development of monarch (Danaus plexippus) larvae. Monarchs were reared on Asclepias tuberosa grown under 'Ambient' and 'Warmed' conditions. We quantified time to pupation, final pupal mass, and survivorship. Warming significantly decreased time to pupation, such that an increase of 1 °C corresponded to a 0.5 day decrease in pupation time. In contrast, survivorship and pupal mass were not affected by warming. Our results indicate that climate warming will speed the developmental rate of monarchs, influencing their ecological and evolutionary dynamics.
However, the effects of climate warming on larval development in other monarch populations and at different times of year should be investigated. Pub.: 04 Nov '15, Pinned: 02 Nov '17 Abstract: Rising temperatures can influence the top-down control of plant biomass by increasing herbivore metabolic demands. Unfortunately, we know relatively little about the effects of temperature on herbivory rates for most insect herbivores in a given community. Evolutionary history, adaptation to local environments, and dietary factors may lead to variable thermal response curves across different species. Here we characterized the effect of temperature on herbivory rates for 21 herbivore-plant pairs, encompassing 14 herbivore and 12 plant species. We show that overall consumption rates increase with temperature between 20 and 30 °C but do not increase further with increasing temperature. However, there is substantial variation in thermal responses among individual herbivore-plant pairs at the highest temperatures. Over one third of the herbivore-plant pairs showed declining consumption rates at high temperatures, while an approximately equal number showed increasing consumption rates. Such variation existed even within herbivore species, as some species exhibited idiosyncratic thermal response curves on different host plants. Thus, rising temperatures, particularly with respect to climate change, may have highly variable effects on plant-herbivore interactions and, ultimately, top-down control of plant biomass. Pub.: 27 May '14, Pinned: 02 Nov '17 Abstract: Climate change can profoundly alter species' distributions due to changes in temperature, precipitation, or seasonality. Migratory monarch butterflies (Danaus plexippus) may be particularly susceptible to climate-driven changes in host plant abundance or reduced overwintering habitat. For example, climate change may significantly reduce the availability of overwintering habitat by restricting the amount of area with suitable microclimate conditions. However, potential effects of climate change on monarch northward migrations remain largely unknown, particularly with respect to their milkweed (Asclepias spp.) host plants. Given that monarchs largely depend on the genus Asclepias as larval host plants, the effects of climate change on monarch northward migrations will most likely be mediated by climate change effects on Asclepias. Here, I used MaxEnt species distribution modeling to assess potential changes in Asclepias and monarch distributions under moderate and severe climate change scenarios. First, Asclepias distributions were projected to extend northward throughout much of Canada despite considerable variability in the environmental drivers of each individual species. Second, Asclepias distributions were an important predictor of current monarch distributions, indicating that monarchs may be constrained as much by the availability of Asclepias host plants as environmental variables per se. Accordingly, modeling future distributions of monarchs, and indeed any tightly coupled plant-insect system, should incorporate the effects of climate change on host plant distributions. Finally, MaxEnt predictions of Asclepias and monarch distributions were remarkably consistent among general circulation models. Nearly all models predicted that the current monarch summer breeding range will become slightly less suitable for Asclepias and monarchs in the future. 
Asclepias, and consequently monarchs, should therefore undergo expanded northern range limits in summer months while encountering reduced habitat suitability throughout the northern migration. Pub.: 24 Feb '15, Pinned: 02 Nov '17 Abstract: For insect herbivores, rising temperatures lead to exponentially higher metabolic rates. As a result, basic nutritional demands for protein and carbohydrates can be altered at high temperatures. It is hypothesized that temperature‐driven increases in metabolic nitrogen turnover will exacerbate protein limitation by increasing metabolic nitrogen demand. To test this hypothesis, the present study examines whether metabolic nitrogen turnover at higher temperatures causes protein limitation of a generalist herbivore, the beet armyworm Spodoptera exigua Hübner (Lepidoptera : Noctuidae). Third‐instar S. exigua larvae were reared at 25 and 30 °C on three artificial diets of varying protein : carbohydrate ratios (23 : 26, 17 : 26 and 6 : 26 %P : %C, respectively) and their growth rates, metabolic nitrogen demand and consumption rates were measured. Warming was found to lead to temperature‐induced protein limitation of the S. exigua larvae by increasing metabolic nitrogen demand at the same time as reducing nitrogen digestion efficiency. Because climate change is increasing atmospheric temperatures rapidly worldwide, it is suggested that a better understanding of how temperature change can influence metabolic demands will provide key information for predicting herbivore growth rates and foraging strategies in the future. Pub.: 09 Apr '16, Pinned: 02 Nov '17
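Two quantitative claims in the abstracts above invite a quick worked example: metabolic rates rising exponentially with temperature (commonly summarized by a Q10 factor) and the monarch result that each 1 °C of warming shortened pupation by about 0.5 days. The sketch below illustrates both; the Q10 value and the baseline pupation time are typical assumptions for illustration, not numbers reported in these abstracts:

```python
# Illustrative only. Q10 = 2 is a common rule of thumb for ectotherm
# metabolism, not a value reported in the abstracts above.
def metabolic_scale(t_c, t_ref=25.0, q10=2.0):
    """Relative metabolic rate at t_c versus the reference temperature."""
    return q10 ** ((t_c - t_ref) / 10.0)

for t in (25, 30, 35):
    print(f"{t} C: metabolism x{metabolic_scale(t):.2f}")

# Monarch pupation: ~0.5 day shorter per +1 C (linear relation from the
# study); the 12-day baseline at 25 C is an assumed placeholder.
def pupation_days(t_c, t_ref=25.0, days_ref=12.0):
    return days_ref - 0.5 * (t_c - t_ref)

print(f"pupation at 28 C: ~{pupation_days(28):.1f} days "
      f"(vs {pupation_days(25):.1f} at 25 C)")
```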
<urn:uuid:fc7ef5e7-3d26-4ece-a765-b83d5d1e3673>
3.453125
1,684
Academic Writing
Science & Tech.
15.274562
95,494,911
According to The New Daily, scientists accidentally discovered that a waste product from the petroleum industry, when mixed with a waste product from the citrus industry, produces a polymer capable of absorbing mercury out of water. "We take sulphur, which is a by-product of the petroleum industry, and we take limonene, which is the main component of orange oil, so is produced in large quantities by the citrus industry, and we're able to react them together to form a type of soft red rubber, and what this material does is that it can grab mercury out of the water," Dr Chalker said. He said they conducted toxicity studies to make sure that the polymer itself was not harmful to the environment. "That gives us hope that we'll be able to commercialise and actually use this in the environment," Dr Chalker said.
<urn:uuid:d0afdb63-6a0d-41fb-abba-230c162580f0>
2.578125
477
Content Listing
Science & Tech.
38.952222
95,494,913
Authors: Nainan K. Varghese
The discovery of gravitational attraction necessitated a cause for the even distribution of macro bodies throughout the universe. Assumed mutual attraction due to gravitation defies their even distribution unless it is counteracted by repulsion between them, at least in the case of large-scale groups of macro bodies. None of the current concepts supplies a rational theory. An alternative concept, presented in the book 'MATTER (Re-examined)', proposes a logical explanation that describes how neighbouring galaxies overcome gravitational attraction to settle at a stable distance from each other during the major part of their life. The halo at the outer periphery of a spinning galaxy is formed by independent primary matter-particles. Their primary electric fields are mechanically oriented to create sufficient electromagnetic repulsion between neighbouring galaxies. Additionally, outward expansions of the universal medium from the central regions of galaxies, or from novae within them, cause gravitational repulsion between them. These are natural processes originating in the universal medium, which encompasses the entire universe. Since galaxies are able to maintain their relative positions in space, the universe (as a whole) is able to have a perpetual steady state of existence, except for local recycling of matter.
Comments: 10 Pages. Unique-IP document downloads: 331 times
<urn:uuid:3cee0fc8-dce7-489c-b7f3-1c94165dab23>
3.140625
389
Knowledge Article
Science & Tech.
19.164151
95,494,918
Sea level rise predictions may change
New research suggests that some of the early data on sea level rise as a result of climate change may be skewed. The 1991 eruption of Mt. Pinatubo may have led to biased results, but researchers still believe sea levels could rise 20 feet by 2100.
<urn:uuid:af6c070d-d5b0-49d4-a55a-da4d7eb958d1>
3.15625
137
Spam / Ads
Science & Tech.
65.037817
95,494,950
30 years to divert asteroid that is threatening the Earth
By GEORGE GORDAN, Daily Mail
Last updated at 10:11 26 June 2006
The asteroid - named Apophis - could hit with a force equal to 880 million tons of TNT, 65,000 times greater than the atomic bomb dropped on Hiroshima. It is feared the damage would exceed the combined effects of Hurricane Katrina, the Asian tsunami and the San Francisco earthquake. The odds of the chunk of rock actually striking the planet on April 13, 2036, are only one in 6,250. But experts believe the risk is too great to ignore. The Americans have ordered their space agency NASA to investigate how to nudge or blast Apophis into a different orbit within 20 years. The big fear is that the 1,000ft-wide asteroid will slip through a gravitational keyhole as it comes closer to the Earth at that time. Ideas for the "encounter" range from a laser cannon fired from the Moon or from the top of a spacecraft to actually smashing a spacecraft into the asteroid itself. A third option, proposed by astronauts Ed Lu and Stanley Love, is to use a gravitational tractor: a nuclear-powered spacecraft would hover near Apophis, using its gravitational pull to change the asteroid's orbit.
Blasted off course
If all else fails, Apophis - named after the Egyptian god known as The Destroyer - could still be blasted off course with a nuclear device. "For all practical purposes the mission would have to be done before the 2029 flyby to take advantage of the leverage afforded by an encounter," said Steve Chesley, an astronomer with the multi-million-pound Near Earth Object Programme at NASA's Jet Propulsion Laboratory in Pasadena, California. America's B612 Foundation, which aims to defend Earth from asteroid impacts, last year warned the U.S. Congress about the threat. Its founder, former NASA astronaut Rusty Schweickart, wants to land a radio transponder on the asteroid's back so the slightest deviation of its path will be instantly relayed to Earth. Mr Schweickart is also presenting a paper to the United Nations discussing how courses of action in space may become a global decision as opposed to one taken in Washington.
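The article's two energy figures can be cross-checked with simple arithmetic. Taking the Hiroshima yield as roughly 13-15 kilotons of TNT (a standard estimate, not stated in the article), 880 million tons divided by that yield lands right in the quoted range:

```python
impact_yield_mt = 880.0   # article figure, millions of tons (megatons) of TNT
hiroshima_kt = 13.5       # assumed ~13-15 kt; not given in the article

ratio = impact_yield_mt * 1000.0 / hiroshima_kt
print(f"impact / Hiroshima ~ {ratio:,.0f}x")   # ~65,000x, matching the article

# Quoted odds of impact: 1 in 6,250
print(f"impact probability ~ {1 / 6250:.4%}")  # about 0.016%
```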
<urn:uuid:d095a756-82c7-4c96-ad6e-0d8a562e26a6>
2.671875
657
Truncated
Science & Tech.
37.810595
95,494,951
Imagine this: your fuel gauge is hovering near empty. You stop by the nearest store, turn in your empty hydrogen cartridge, buy a full one and pop it into your car. Presto, you’re on your hydrogen-powered way again, emitting just the faintest traces of water out the tailpipe. Researchers at SSRL and Stanford have taken a step closer to this futuristic vision by adding hydrogen to tiny cylinders made entirely out of carbon. Carbon nanotubes, 50,000 times narrower than a human hair, have excited the imaginations of scientists hoping to make nano-electronics. Recent experiments at SSRL and the Advanced Light Source in Berkeley have shown that the tubes are also a promising material for storing hydrogen safely, efficiently and compactly. The basic idea is this: use electricity to split water into hydrogen (and oxygen) atoms, put the hydrogen into a fuel cell, which strips the electron from the hydrogen atom and forces it across a membrane, generating an electrical current which can power your car. The hydrogen ions are reunited with oxygen, making a watery exhaust. In their attempt to store hydrogen, the researchers bombarded a film of carbon nanotubes with a hydrogen beam. Then they studied the film with different x-ray spectroscopy techniques to see if any hydrogen atoms had formed chemical bonds with the carbon. To their delight, they found that about 65 percent of the carbon atoms had bonded to hydrogen atoms. “It was a surprise that we could get so many carbon-hydrogen bonds. It gives us hope it can be used as a material for storing hydrogen,” said Anders Nilsson (Materials Research). Single-walled carbon nanotubes are essentially a one-atom-thick layer of carbon rolled into a tube. All the carbon atoms are on the surface, allowing easy access for bonding. The carbon atoms have double bonds with each other. The incoming hydrogens break the double bonds, allowing a hydrogen to attach to a carbon while the carbon atoms renew their grip on each other with single bonds. The carbon nanotubes offer safe storage because the hydrogen atoms are bonded to other atoms, rather than freely floating as a potentially explosive gas. The researchers estimated that five percent of the total weight of the hydrogenated nanotubes came from the hydrogen atoms, and they are already working to boost that number. For its FreedomCAR program, the Department of Energy has set the goal of developing a material that can hold six percent of the total weight in hydrogen by the year 2010. Because hydrogen is the lightest element, the storage material also needs to be light—as is carbon—to hold a high percentage of hydrogen by weight. In addition to upping the weight percent of hydrogen, researchers also need to overcome challenges in releasing the stored hydrogen so it can be used in a fuel cell. Currently the hydrogen-carbon bonds break above 600 °C, but two cycles of hydrogenating the carbon nanotubes and then breaking the hydrogen-carbon bonds appears to cause defects in the tubes. Ideally, the hydrogen would be released at 50 to 100 °C. Adding metal catalysts and adjusting the radius of the tubes are potential solutions. This was the first experiment conducted on the new SPEAR3 beamline 5-1. The work was supported by the Global Climate Energy Project as well as the DOE. Source: Stanford Linear Accelerator Center, by Heather Rock Woods Explore further: Why gold-palladium alloys are better than palladium for hydrogen storage
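The "five percent of the total weight" figure is consistent with the reported 65 percent carbon-hydrogen bonding fraction; a quick check with atomic masses (my arithmetic, not a calculation from the article):

```python
M_C, M_H = 12.011, 1.008   # atomic masses, g/mol
frac_bonded = 0.65         # fraction of carbons carrying one hydrogen

h_mass = frac_bonded * M_H
total = M_C + h_mass
print(f"hydrogen weight fraction ~ {h_mass / total:.1%}")   # ~5.2%

# DOE FreedomCAR target quoted in the article: 6 wt% hydrogen.
# Fraction of carbons that would need a hydrogen to reach 6 wt%:
needed = 0.06
f = needed * M_C / (M_H * (1 - needed))
print(f"bonding fraction needed for 6 wt%: ~{f:.0%}")       # ~76%
```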
<urn:uuid:28411964-b2c5-4fc1-aeff-3ba66461f990>
4.09375
715
News Article
Science & Tech.
39.159204
95,494,956
That's the latest startling evidence to emerge from research into the likely fate of reefs under climate change and rising sea levels, at the ARC Centre of Excellence for Coral Reef Studies (CoECRS). "We've known for a while that having a lot of sediment in the water is bad for corals and can smother them. What we didn't realize is how permanent this state of affairs can become, to the point where it may prevent the corals ever re-establishing," says Professor David Bellwood of CoECRS and James Cook University. The killer blow for a degraded coral reef is a thick mat of sand and weeds that shrouds the rocky surfaces on which the corals would normally grow, preventing them from re-establishing. This gritty algal 'turf' has shown itself to be remarkably hardy and, once in place, makes it almost impossible for the corals to return. If sea levels rise, then the smothered reef 'drowns' and never recovers, Prof. Bellwood says. "We know this from geological history, at the time of previous sea level rises. The reason we are doing the work is to see whether or not coral reefs will be able to keep up with rising sea levels under climate change." But Prof. Bellwood and colleague Dr Chris Fulton from the Australian National University have also uncovered a remarkable link in the chain which explains why the algal turf can win in its 'turf war' with the corals. When the water is thick with sediment and it settles on the seaweeds, herbivorous reef fish turn up their noses at the gritty food, much as humans disdain a sandwich that has been dropped on a sandy beach. "Remarkably, we found that when there is little sediment around and plenty of fish, the fish 'mowed' the weeds very fast, eating two thirds of their length in about 4 hours. This action by fish in keeping the algal turf down gives the corals a chance to re-establish," said Dr Fulton. "But if there is a lot of sediment in the water, the fish go off their feed, the weeds grow, more sand settles - and the murky shroud that smothers the reef becomes more stable, often permanent. Then, when sea levels rise, the reef drowns." Prof. Bellwood says that in many cases the sediment is generated naturally by the reef itself: particles are swept into its back lagoon and then stirred up by wind, tide and wave to settle on the turf-covered flats. "In those cases it is almost like the reef defecating onto itself," he adds. In other cases the sediment is released from the land, often as a result of human activity such as farming, grazing, land clearing or construction. In either case, if there is enough sediment in the water to settle on the seaweed, it turns the weed-eating fish off their meal. "We're not entirely sure why this is - it may be that the sediment acts as an antacid and prevents the fish's stomach acids from digesting their food. Or it may simply be that fish, like people, don't appreciate a mouthful of sand and mud." There is not a lot that humans can do to disrupt the natural processes that cause reefs to smother under stable algal turfs, then drown as sea levels rise, Prof. Bellwood says. However, he adds, there is plenty we can do to reduce our own impact on the process by checking the flow of erosion off the land onto coral reefs, and by ensuring that populations of weed-eating fish are maintained at levels high enough to control the weeds - and give the corals an even chance of making a comeback.
David Bellwood | EurekAlert!
<urn:uuid:c7f79b5f-6f28-4c34-9f0f-ccb6b31f5617>
3.5
1,475
Content Listing
Science & Tech.
51.638415
95,494,959
The Science of Power Generation
eBook - 2013

Power generation is a relatively recent concern because humans had little need for sustained power until the dawn of the Industrial Revolution. Today, modern civilization is wholly dependent on the production and distribution of power. Without it, our way of life would be extinguished.

In Lights On!, Mark Denny reveals the mysterious world of power generation. He takes us on a fun tour, examining the nature of energy, tracing the history of power generation, explaining the processes from production through transmission to use, and addressing questions that are currently in the headlines, such as:

* Is natural gas the best alternative energy source in the near term?
* Could solar power be the answer to all our problems?
* Why is nuclear power such a hard sell, and are the concerns valid?

Devoting individual chapters to each of the forms of power in use today--electrical, coal, oil and natural gas, hydro, nuclear, and solar--Denny explains the pros and cons of each, their availability worldwide, and which are in dwindling supply. Making clear that his approach is that of "a scientist and engineer, not a politician or businessman," Denny addresses environmental concerns by providing information to help readers understand the science and engineering of power generation so they can discuss contemporary energy issues from an informed perspective. For those who wish to delve deeper into the science, a technical appendix provides estimations for a variety of power generators. Anyone who is interested in how energy works and how it is transformed to power our lives will get a charge out of Lights On!
<urn:uuid:3b74a2b9-bfa4-4148-9b36-ab474eac4095>
3.328125
321
Product Page
Science & Tech.
31.210801
95,494,975
Please see the attached file for the fully formatted problems.

10. A compound C10H13NO (A) has the following 1H NMR spectrum:

delta (ppm)   Integration   Multiplicity
7.4-7.2       4H            multiplet
3.2           3H            singlet
2.3           3H            singlet
1.8           3H            singlet

Compound A was insoluble in 5% aqueous HCl, but dissolved slowly on heating with aqueous sodium hydroxide. This reaction mixture was cooled and carefully acidified. Distillation of this mixture gave a volatile acidic liquid (B) which smelled like vinegar. This material was titrated and found to have a molecular weight of 60 ± 1. The residue from the distillation of B was made alkaline, and from this solution a basic substance, Compound C (C8H11N), was obtained. Treatment of C with excess benzenesulphonyl chloride in alkaline solution gave a solid (D), which was insoluble in acid. Compound C was subjected to high-pressure hydrogenation to give Compound E (C8H17N). Treatment of E with excess methyl iodide followed by heating with silver oxide gave trimethylamine and 3-methylcyclohexene. Give structures for A, B, C, D, and E. Show your reasoning and reaction equations.

12. Show plausible mechanisms for the following....

A stepwise explanation elucidating the structures of the compounds from the given spectroscopic data, as well as the reaction mechanisms.
<urn:uuid:134ec1e0-b616-4cbe-88c6-5d2dbdcd75aa>
3.03125
346
Tutorial
Science & Tech.
57.132848
95,494,977
posted by Siddhartha

When electricity is passed through water, hydrogen and oxygen gas are formed. A change in the state of matter occurs. Is this a chemical or physical change?

This is a physical change. All changes of state are physical changes.

You are right that changes of state are physical changes; however, this is the decomposition of H2O into two different substances, and that is a chemical change. Water changed from a liquid to a vapor is a change of state and a physical change, but the liquid water is H2O and the water vapor is H2O. In the problem, the electrolysis of water changes the H2O to H2 and O2, and the fact that you have changed a liquid to two gases is irrelevant.

Yes, that's right. After electrolysis the hydrogen and oxygen are separate, and many people will call it a physical change, but that is wrong. When water is vaporised it is still H2O; after electrolysis the hydrogen becomes H2 and the oxygen becomes O2, and they cannot simply re-form water on their own, so it is a chemical change.
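For reference, the two processes contrasted in this thread can be written as balanced equations. These are standard chemistry, added here for illustration; they are not part of the original thread:

```latex
% Electrolysis: one substance is converted into two different
% substances -- the hallmark of a chemical change.
2\,\mathrm{H_2O(l)} \longrightarrow 2\,\mathrm{H_2(g)} + \mathrm{O_2(g)}

% Vaporisation: the substance itself is unchanged -- a physical change.
\mathrm{H_2O(l)} \longrightarrow \mathrm{H_2O(g)}
```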
<urn:uuid:4b64211b-76a1-4531-9f1a-4522ee725790>
2.9375
210
Comment Section
Science & Tech.
45.217371
95,494,979
Astronomy Picture of the Day -- Comet Lovejoy Before Galaxy M63

Posted on 12/02/2013 8:49:44 PM PST by SunkenCiv

Explanation: Comet Lovejoy was captured last week passing well in front of spiral galaxy M63. Discovered only three months ago and currently near its maximum brightness, Comet Lovejoy can be seen near the Big Dipper from dark northerly locations before dawn with the unaided eye. An unexpected rival to Comet ISON, C/2013 R1 (Lovejoy), pictured above, is currently sporting a large green coma and a beautifully textured ion tail. Comet Lovejoy is now headed back to the outer Solar System but should remain a good sight in binoculars for another few weeks. Conversely, spiral galaxy M63 lies far in the distance and is expected to remain stationary on the sky and hold its relative brightness for at least the next few million years.

(Excerpt) Read more at 184.108.40.206 ...

[Credit & Copyright: Damian Peach]

They should use that for the cover of an astronomy textbook.

Disclaimer: Opinions posted on Free Republic are those of the individual posters and do not necessarily represent the opinion of Free Republic or its management. All materials posted herein are protected by copyright law and the exemption for fair use of copyrighted works.
<urn:uuid:2d2c81c2-cb3f-496a-b41f-9a26a929b5a8>
2.828125
285
Comment Section
Science & Tech.
52.991783
95,494,988
- Open Access

Do birds in flight respond to (ultra)violet lighting?

© The Author(s) 2017
Received: 29 September 2017
Accepted: 3 December 2017
Published: 19 December 2017

Concerns for bird collisions with wind turbines affect the deployment of onshore and offshore wind-power plants. To avoid delays in consenting processes and to streamline the construction and operation phase, functional mitigation measures are required which efficiently reduce bird mortality. Vision is the primary sensory system in birds, which for a number of species also includes the ultraviolet spectrum. Many bird species that are known to collide with offshore wind turbines are sensitive in the violet or ultraviolet spectrum. For species that are mainly active at lower ambient light levels, lighting may deter birds from the lit area. (Ultra)violet lights may, in addition, not disturb humans. However, we do not know whether UV-sensitive birds in flight actually respond behaviourally to UV lights. We therefore tested the efficacy of two types of lights within the violet (400 nm) and ultraviolet (365 nm) spectrum to deter birds from the lit area. These lights were placed vertically and monitored continuously between dusk and dawn using an avian radar system. Relative to control nights, bird flight activity (abundance) was 27% lower when the ultraviolet light was on. Violet light resulted in a 12% decrease in overall abundance and, in addition, a vertical displacement was seen, increasing the average flight altitude by 7 m. Although temporal changes occurred, this effect persisted over the season below 40 m above sea level. Although the results from this pilot study are promising, we argue there is still a long way to go before a potentially functional design to mitigate collisions, proven to be effective in situ, is in place.

To satisfy the energy demand and to increase the share of renewable resources, wind energy deployment has increased (Intergovernmental Panel on Climate Change 2011). Concerns about the impact of wind energy on birdlife, through collisions, disturbance and habitat loss, have at the same time become more acute (Gove et al. 2013). Because of the fast rate of deployment, it has become challenging to verify impacts on birdlife and develop ways of mitigating these (Gove et al. 2013). These concerns can be an economic problem for the energy industry and for society as a whole (Cole 2011), and reduce the predictability of planning and consenting processes. Collision with wind turbines is the main cause of direct bird mortality at wind-power plants. However, mitigating wind-turbine-induced bird mortality is particularly complicated because it may originate from collision, disturbance and barrier effects that are site- and species-specific (Marques et al. 2014; May et al. 2015). Concerns regarding collision risk have stimulated research to quantify these effects post-construction, and to predict the extent of effects pre-construction in connection with the planning of new wind-power plants. Techniques for mitigating collision risk, however, still need to be developed, tested and improved (Intergovernmental Panel on Climate Change 2011). It is paramount to develop practical and functional tools, products and other measures that reduce bird mortality related to offshore and onshore wind energy production, in order to avoid delay in consenting processes and to streamline the construction and operation phase while preserving bird populations at those sites.
In practice, it is normally a very long step from documenting the extent of the impact caused by the construction and operation of wind-power plants to successful mitigation (Lehman et al. 2007). The main reason why mortality-reducing tools are not already developed is the challenge of assessing the effectiveness of such tools in situ (May et al. 2015). The detected number of birds killed at a wind-power plant is often too low, due to removal by scavengers or observer bias, to be used as the only criterion to assess a measure's effectiveness within a limited timeframe. Studies of behavioural responses, such as avoidance and/or reduced activity in the vicinity of the turbines due to the measures tested, are therefore necessary (May et al. 2017). Due to the relatively short history of wind energy production, there is a lack of comparative studies of scientific quality (May et al. 2015). Investigations in this field are connected with several uncertainties such as statistical significance, time required to get reliable results, number of trials/turbines, costs and practicalities, especially in connection with offshore installations. Still, in situ testing of promising pathways to mitigate impacts is important to increase our knowledge incrementally.

One promising approach is that of utilizing the sensitivity of many bird species within the (ultra)violet spectrum to deter birds from turbines using ultraviolet or violet (UV) lighting (May et al. 2015). Visual deterrence using UV lights will be most effective at low light levels, and may therefore mainly help to mitigate collisions of nocturnal and crepuscular flight activity between dusk and dawn. Still, today there exists no proof of concept of whether birds are in fact deterred in flight from areas lit up by (ultra)violet light (but see Hunt and McClure 2015).

Vision is the primary sensory system of birds, with a higher acuity and a generally lower temporal resolution than humans. It will therefore be important to take into account avian visual ecology when designing mitigation measures (Martin 2011). Blackwell and Bernhardt (2004) and Blackwell et al. (2009, 2012) empirically showed that birds' visual systems (visual acuity and visual fields) enable them to respond behaviourally to lighting regimes (no, constant or pulsating lights) in approaching objects (aircraft and vehicle). Birds have tetrachromatic colour vision and spectral sensitivities of photoreceptors between 320 and 700 nm (Osorio and Vorobyev 2008). Birds with highly sensitive UVS-cones and birds with the less UV-sensitive VS-cone variant are all sensitive to UV light (Lind et al. 2014). Both gulls and passerines are sensitive within the ultraviolet spectrum (355–380 nm: Ödeen et al. 2010; Lind et al. 2014). Gulls are species of concern for collision with offshore wind turbines given their ecology (Furness et al. 2013). Raptors and owls, including Burrowing Owl (Athene cunicularia) and Golden Eagle (Aquila chrysaetos), seabirds and waders, and gallinaceous birds have (ultra)violet single cones with sensitivity functions that peak in the violet spectrum (402–426 nm: Håstad et al. 2005; Ödeen and Håstad 2013; Doyle et al. 2014; Lind et al. 2014). At onshore wind-power plants, raptors and owls are often perceived to be vulnerable to collision mortality (Drewitt and Langston 2008). Typically, the lower limit of the human visible spectrum is 390 nm. Given these differences, it may be possible to develop mitigation measures without disturbing humans.
Here we have to keep in mind that objects reflecting or emitting ultraviolet light may, besides having an illumination effect, be viewed as a different colour to the avian eye (Cook et al. 2011). Although the avian eye is better than ours in avoiding obstacles or chasing prey in fast flight, its performance decreases with falling light levels more so than for humans (Jarvis et al. 2002), perhaps even during rain or cloudy weather.

The function of ultraviolet vision in birds is thought to be related to orientation, foraging and signalling (Bennett and Cuthill 1994). Ultraviolet cones in the eyes of birds have been shown to be receptive to both visual and magnetic information (Bischof et al. 2011). Although monochromatic ultraviolet light (373 nm) could disrupt natural orientation behaviour in European Robins (Erithacus rubecula) (Wiltschko et al. 2014), these two mechanisms (vision and magnetoreception) were found to be independent of each other. Birds may be able to separate visual and magnetic information derived from ultraviolet cones through light at other wavelengths, magnetite-based receptors in the beak, and/or optic flow during movement (Bischof et al. 2011; Wiltschko et al. 2014). In fact, Gauthreaux and Belser (2006) summarize how filtering out all wavelengths but ultraviolet essentially eliminated migratory bird mortality at ceilometers. Also, Poot et al. (2008) showed that nocturnally migrating birds were least disoriented by blue lights (455 nm).

Although it might be tempting to operationalize innovative mitigation tools solely based on theory, pre-development ex situ feasibility studies as well as post-development in situ testing are important prior to deployment. Such assessments ensure that unsuccessful techniques can be eliminated in an early phase and the effectiveness of innovations can be documented. Lights within the UV spectrum (≤ 400 nm) with low power input are now available. Martin (2011, 2012) argues that the sensory ecology in birds is more attuned to observing and responding to impulses from the ground. We therefore envision utilizing UV lights that sweep upwards during nighttime, encircling the rotor-swept zone to exclude birds from this risky area (i.e. a light-fence). UV lights are invisible to the human eye but may deter nocturnal birds from entering the rotor-swept zone without creating visual nuisance for humans. However, there are potential health hazards connected with UV light, which can be harmful to people's (and birds') eyesight, and these will have to be taken into account. Such a system must therefore be constructed in such a way that it will be safe to the public, which means that it must be mounted out of reach of people (well above the base), with the light beam pointing upwards, perhaps in the form of pulsating beams. However, before commencing with intricate engineering to conceive optical deterring devices for birds, it will be important to know beforehand whether bird species that are sensitive within the (ultra)violet spectrum actually respond behaviourally to UV light. As a feasibility study, we therefore explored whether birds in flight respond to ultraviolet/violet lights. In theory, UV lights may either lead to behavioural evasion of the birds in flight (partial avoidance, with a shift in their flight path) or displacement from the lighted area (leading to reduced activity) (May 2015). Alternatively, birds might also be attracted to the UV lights, leading to opposite results.
The main hypothesis for testing the possible efficacy of UV lights was that birds would refrain from flying through the area of light. This hypothesis leads to the following research questions: (1) to which extent does UV light lead to a proportional reduction in the number of recorded flights within pre-specified distances and altitudes around the lighted area, and (2) to which extent do recorded flights verifiably change altitude, and at which distance? These effects may in addition change over time due to positive (getting used to the lights) or negative (avoiding the area altogether) habituation.

Study site and experimental design

This feasibility study was done ex situ, i.e. outside the wind-power plant, testing for possible responses in birds to UV light. To ensure adequate confidence in the outcomes of this test, two UV LED lights (Type EXT400; Martin, Denmark) were placed vertically on top of a 2.5 m mast located near Veiholmen on the island of Smøla (63.50961°N, 7.9761°E) during spring (March–May 2014). Veiholmen is a fishing village located on a group of tiny islands in the northern part of Smøla Municipality in Møre og Romsdal county, Norway. From the outset, the lights, one within the violet (400 nm) and one within the ultraviolet (365 nm) wavelength spectrum (see Additional file 1: Figure S1 for spectrograms), were used with maximized irradiance and a beam width of 62°. Prior to operation, the nominal output of both lights was increased from 500 to 700 mA, resulting in irradiance increases of 10% and 24% for 365 and 400 nm, respectively (0.169 and 0.646 W/m2). The nominal output gives a safety distance threshold for humans of circa 8 m (Martin pers. comm.). The experimental design was approved by the Civil Aviation Authority, the municipality of Smøla and the private landowner.

The lighting regime alternated between the two lights (365 nm: Tuesdays, Saturdays; 400 nm: Thursdays, Sundays) with intermediate control days without any lighting (Mondays, Wednesdays, Fridays). The lights were thus sequentially on every other day for 2 months during dawn/dusk and nighttime (17:00–08:00). During daytime, UV lights were expected to be ineffective due to exceeding levels of UV light from the sun and were therefore not activated. Background UV light levels were measured continuously at the avian radar van (see below) using a UVA sensor (315–400 nm; Type SKU 421/I; Skye Instruments, United Kingdom).

Avian radar monitoring

Bird flight movements in the vicinity of the UV lights were recorded continuously (24/7) using a vertical Frequency Modulated Continuous Wave (FMCW) radar (coherent X-band radar with 20° beam width and 1° beam height) to obtain the best coverage of the area of interest. The FMCW radar forms an integral part of the ROBIN avian radar system (ROBIN B.V., the Netherlands), enabling automatic tracking and storing of all bird movement information in a database (including e.g. timestamp, georeference, radar cross section). The avian radar system was stationed circa 370 m from the UV lights, scanning the sector in the same azimuth (17°). At this distance, the beam width covers an area of circa 65 m on either side of the lights, and detectability is not hampered by detection loss over range. Utilization of this avian radar followed the permit given by the Norwegian Communications Authority, and was locally approved by the municipality and the private landowner.
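One small check is possible from the numbers quoted above. Assuming (our assumption, not stated in the paper) that the circa 65 m of coverage comes from half of the 20° beam width projected over the circa 370 m range, the geometry works out:

```r
# Half of the radar's 20 degree beam width, projected at the ~370 m range
# of the lights; tan() works in radians, hence the pi/180 conversion.
370 * tan(10 * pi / 180)  # ~65.2 m on either side of the lights
```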
Radar monitoring commenced in the period 18 March 2014 through to and including 31 May 2014 (75 days). Due to technical complications, the radar was malfunctioning in five distinct periods (20–25 March (6 days); 6–15 April (10 days); 3–5 May (3 days); 24–27 May (4 days); 5 June (1 day)). Horizontal radar data could not be recorded in the areas surrounding the UV lights due to extensive amounts of ground clutter. Weather permitting and when practically feasible, radar tracks were groundtruthed during dawn and dusk by visually observing bird species. Prior to each groundtruthing session, the lights were checked for full functionality (via an indicator LED underneath the lamps).

Behavioural changes in (log-transformed) flight altitude were assessed by comparing five a priori defined models using the lmer function of the lme4 library in the statistical software programme R version 3.2.2 (R Core Team 2015). All models included random effects to control for temporal autocorrelation by clustering over each unique track, for spatial patterns in altitude selection by including distance from the radar (i.e. range) as a random slope, and for daily varying environmental conditions by clustering over night-of-year (NOY). NOY was defined as 24-h periods from noon to noon. Spatial patterns in altitude selection, unrelated to the actual test, could occur due to adjusted flight altitudes above land masses versus over sea and the presence of a fish-landing site close to the radar. Also, the lights produce cones covering an increasing circular area with increasing altitude. However, dependent on the way birds see when in flight (Martin 2011), an effect does not necessarily have to be limited to this cone alone. The a priori models were defined to capture various responses: (1) no effect, including only the intercept; (2) behavioural response to the UV lights irrespective of distance, when the bird perceives the light ahead (UV_TYPE); (3) increasing behavioural response towards the UV lights, when the bird enters the light cone (UV_TYPE*Distance); and (4) and (5) including potential habituation to the UV lights for the last two models (*NOY). These models were compared for overall likelihood of fit using log-likelihood tests.

Effects of the UV lights on flight activity were assessed within the volume directly surrounding the UV lights, including only data within 50 m in both distance and altitude. Flight activity, defined as the number of radar track points per 10 m interval in both distance and altitude per night, represented the response variable in the glmer function with a Poisson distribution. Due to the block design, both distance and altitude were included in the models as factors to control for spatial patterns in flight activity. All models included random effects to control for daily varying environmental conditions by clustering over night-of-year (NOY), as well as clustering by record to control for overdispersion. Five a priori defined contrasting models were assessed: (1) no effect, including only a possible spatial pattern (Distance and Altitude) in activity; (2) UV light adjusted flight activity irrespective of both distance and altitude (UV_TYPE; no interaction); (3) UV light adjusted flight activity in distance (UV_TYPE*Distance); (4) UV light adjusted flight activity in altitude (UV_TYPE*Altitude); (5) UV light adjusted flight activity both horizontally (UV_TYPE*Distance) and vertically (UV_TYPE*Altitude).
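To make the model structures concrete, the sketch below shows one plausible way such fits could be specified with lme4. It is a hypothetical illustration, not the authors' code (their actual analysis is the published UV_analysis.R script): every column and object name is invented, the data are simulated placeholders, and the random-effect structure is one reading of the description above.

```r
library(lme4)
set.seed(42)

## Placeholder data standing in for the published uvdata.xlsx; all columns
## and values here are invented for illustration only.
uv <- data.frame(
  altitude = rlnorm(600, meanlog = 3, sdlog = 0.5),        # flight altitude (m)
  uv_type  = factor(sample(c("control", "uv365", "uv400"), 600, TRUE)),
  distance = runif(600, 0, 50),                            # distance to lights (m)
  range    = runif(600, 50, 370),                          # distance to radar (m)
  track    = factor(sample(1:80, 600, TRUE)),              # unique radar track
  noy      = factor(sample(1:40, 600, TRUE))               # night-of-year
)

## Flight altitude: Gaussian models on log altitude, with a random
## intercept per track and a random slope of range per night-of-year.
m_null <- lmer(log(altitude) ~ 1 + (1 | track) + (range | noy),
               data = uv, REML = FALSE)
m_uv   <- update(m_null, . ~ . + uv_type)             # light type only
m_cone <- update(m_null, . ~ . + uv_type * distance)  # response in the cone
anova(m_null, m_uv, m_cone)                           # log-likelihood tests

## Flight activity: Poisson counts per 10 m distance x altitude cell, with
## an observation-level random effect (obs) absorbing overdispersion.
uv_counts <- data.frame(
  n_points = rpois(450, lambda = 5),
  dist_f   = factor(sample(seq(0, 40, 10), 450, TRUE)),
  alt_f    = factor(sample(seq(0, 40, 10), 450, TRUE)),
  uv_type  = factor(sample(c("control", "uv365", "uv400"), 450, TRUE)),
  noy      = factor(sample(1:40, 450, TRUE))
)
uv_counts$obs <- factor(seq_len(nrow(uv_counts)))

a_null  <- glmer(n_points ~ dist_f + alt_f + (1 | noy) + (1 | obs),
                 family = poisson, data = uv_counts)
a_full  <- update(a_null, . ~ . + uv_type)                  # full displacement
a_horiz <- update(a_null, . ~ . + uv_type:dist_f + uv_type) # horizontal
a_vert  <- update(a_null, . ~ . + uv_type:alt_f + uv_type)  # vertical
anova(a_null, a_full, a_horiz, a_vert)
```

The anova() calls mirror the log-likelihood comparisons of model parsimony summarized in the tables below.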
While model 2 relates to full displacement, models 3 and 4 relate to partial displacement, horizontally and vertically respectively. Model 5 relates to responses when birds enter the light cone. To further assess potential temporal changes in effects over the season, we also ran the same models for each separate month. All data collected during this study, as well as the R scripts to run the analyses and produce the figures, are included in this published article (Additional file 2: uvdata.xlsx; Additional file 3: UV_analysis.R).

[Table: Groundtruthed avian radar tracks, to species (group), during the testing period, with the number of individuals per group. The groups were: large gull species; White-tailed Eagle (Haliaeetus albicilla); Eider Duck (Somateria mollissima); Eurasian Curlew (Numenius arquata); Eurasian Oystercatcher (Haematopus ostralegus); small gull species; and diving ducks (Aythyinae spp.). The counts were lost in extraction.]

[Tables: Comparison of model parsimony for contrasting models explaining effects of UV light (365 and 400 nm) on flight altitude and on flight activity. Candidate model structures included: Distance + Altitude; Distance + Altitude + UV_TYPE; Distance + Altitude*UV_TYPE; Altitude + Distance*UV_TYPE; and (Distance + Altitude)*UV_TYPE. The fit statistics were lost in extraction.]

This pilot study indicated that UV light displaced birds in the vicinity of the light source. While birds were partially displaced by the emitted light at 365 nm, birds mostly adjusted their flight altitude when subjected to light at 400 nm. This displacement effect persisted over the season below 20–30 m a.s.l., but with increasing use of higher altitudes over time. This may indicate habituation over time, but could also just occur at the periphery of the lit area. Verification of this would require long-term studies and could not be determined in the current study. The test design consisted of UV lights placed 2.5 m off the ground and pointing upwards; the observed displacement is therefore also relative to ground level. However, when implemented, the UV lights at a wind turbine would have to be placed such that they encompass the rotor-swept zone, i.e. 50–100 m above ground level. To which extent displacement will occur at higher altitudes above ground level could not be answered by this study. Also, in order to be effective in deterring birds from the rotor-swept zone of a wind turbine, most birds need to display avoidance along the entire rotor blade length (40–50 m). This study indicated that, given the lights used, only a proportion of birds avoided the lights, up to a maximum of circa 30 m. However, when the light is allowed to reflect off the surface of the turbine blades, e.g. using a UV-reflective coating (Young et al. 2003), it is possible that this would extend the range of the deterring effect compared to this experiment, which used an isolated lamp. To increase the efficacy of the UV lights in deterring birds from the area, the energy emitted from the lights must be diffused as little as possible, using either a higher output or a beam that is as narrow as possible. However, with increased irradiance (W/m2) and/or a narrower beam width, the minimum safety distances within which eye damage may occur also increase. The efficacy of UV light as a deterrence measure likely also depends on solar irradiance levels, since the lights need to reach above-solar threshold values. The period during which the lights emit more UV light than the sun diminishes towards midday and varies seasonally.
This likely also varies at different latitudes, with higher solar irradiance towards the equator but more variable daytime periods at higher latitudes. Lastly, in practical applications, potentially harmful ecological effects of UV light pollution should be considered, such as the attraction of insects (van Langevelde et al. 2011). Attraction of insects may in turn attract birds, and especially bats, to the wind turbines, resulting in increased collision risk for these species. Based on this pilot study, we recommend further research investigating physiological vision damage distance thresholds in birds and humans (dependent on how the lights will be directed), and testing behavioural responses in birds to other lighting regimes (e.g. narrower beam width or laser, moving or flickering beam) in combination with potential side effects (e.g. insect attraction, light pollution). This pilot study has shown that birds were partially displaced by UV light within a limited distance range, which may allow the development of mitigation designs encompassing only the rotor-swept zone, potentially combined with a triggering system to minimize habituation (May et al. 2015). This may in turn, when tested successfully in situ, help reduce collision risk while maintaining utilization of the area in-between the wind turbines by birds.

Vision is the primary sensory system in birds, which for a number of species also includes the ultraviolet spectrum. Many bird species that are known to collide with offshore wind turbines are sensitive in the violet or ultraviolet spectrum. For species that are mainly active at lower ambient light levels, UV lighting may deter birds from the lit area. However, we do not know whether UV-sensitive birds in flight actually respond behaviourally to UV lights. In this study we found that, relative to control nights, bird flight activity was reduced within the lit area when the (ultra)violet light was on. In addition, the avian radar data showed a limited vertical displacement in flight altitude which persisted over the season below 40 m above sea level. Still, with regard to implementation, we argue there is still a long way to go before a potentially functional design to mitigate collisions at offshore wind turbines, proven to be effective in situ, is in place.

RM conceived the study design, contributed to the statistical analyses of the data and wrote the manuscript. JÅ contributed to the analyses of the avian radar data and contributed to the manuscript. ØH and ELD carried out the field experiment, setting up and monitoring the avian radar system, and also helped with data preparation and the manuscript. All authors read and approved the final manuscript.

We would like to thank Martin Professional for supplying us with the UV lights free of charge. We would also like to thank Bjarke Laubek for supporting us in arranging the UV lights and shipping them to us. We would also like to thank the INTACT steering committee for their input concerning study design and outcome. The authors declare that they have no competing interests.

Availability of data and materials
All data analyzed during this study are included in this published article in supplementary information files.

Consent for publication

Ethics approval and consent to participate
This study was executed as part of the Innovative Mitigation Tools for Avian Conflicts with wind Turbines (INTACT) project.
This project was financed by a consortium consisting of the Research Council of Norway (Grant 226241), Vattenfall, Statkraft, Statoil, Energy Norway, TrønderEnergi Kraft, the Norwegian Water Resources and Energy Directorate and NINA.

Open Access: This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

- Bennett ATD, Cuthill I. Ultraviolet vision in birds: what is its function? Vis Res. 1994;34:1471–8.
- Bevanger K, Berntsen F, Clausen S, Dahl EL, Flagstad Ø, Follestad A, Halley D, Hanssen F, Johnsen L, Kvaløy P, Lund-Hoel P, May RF, Nygård T, Pedersen H-C, Reitan O, Røskaft E, Steinheim Y, Stokke BG, Vang R. Pre- and post-construction studies of conflicts between birds and wind turbines in coastal Norway (BirdWind). Report on findings 2007–2010. Trondheim: Norwegian Institute for Nature Research (NINA); 2010.
- Bischof H-J, Niessner C, Peichl L, Wiltschko R, Wiltschko W. Avian UV/violet cones as magnetoreceptors. The problem of separating visual and magnetic information. Commun Integr Biol. 2011;4:713–6.
- Blackwell BF, Bernhardt GE. Efficacy of aircraft landing lights in stimulating avoidance behavior in birds. J Wildl Manag. 2004;68:725–32.
- Blackwell BF, DeVault TL, Seamans TW, Lima SL, Baumhardt P, Fernandez-Juricic E. Exploiting avian vision with aircraft lighting to reduce bird strikes. J Appl Ecol. 2012;49:758–66.
- Blackwell BF, Fernandez-Juricic E, Seamans TW, Dolan T. Avian visual system configuration and behavioural response to object approach. Anim Behav. 2009;77:673–84.
- Cole SG. Wind power compensation is not for the birds: an opinion from an environmental economist. Restor Ecol. 2011;19:147–53.
- Cook ASCP, Ross-Smith VH, Roos S, Burton NHK, Beale N, Coleman C, Daniel H, Fitzpatrick S, Rankin E, Norman K, Martin G. Identifying a range of options to prevent or reduce avian collision with offshore wind farms using a UK-based case study. Thetford: British Trust for Ornithology; 2011.
- Doyle JM, Katzner TE, Bloom PH, Ji Y, Wijayawardena BK, DeWoody JA. The genome sequence of a widespread apex predator, the golden eagle (Aquila chrysaetos). PLoS ONE. 2014;9:e95599.
- Drewitt AL, Langston RH. Collision effects of wind-power generators and other obstacles on birds. Ann NY Acad Sci. 2008;1134:233–66.
- Furness RW, Wade HM, Masden EA. Assessing vulnerability of marine bird populations to offshore wind farms. J Environ Manag. 2013;119:56–66.
- Gauthreaux SA, Belser CG. Effects of artificial night lighting on migrating birds. In: Rich C, Longcore T, editors. Ecological consequences of artificial night lighting. Washington: Island Press; 2006. p. 67–93.
- Gove B, Langston RHW, McCluskie A, Pullan JD, Scrase I. Wind farms and birds: an updated analysis of the effects of wind farms on birds, and best practice guidance on integrated planning and impact assessment. Strasbourg: Council of Europe; 2013.
- Hunt WG, McClure CJW. Do raptors react to ultraviolet light? J Raptor Res. 2015;49:342–3.
- Håstad O, Ernstdotter E, Ödeen A. Ultraviolet vision and foraging in dip and plunge diving birds. Biol Lett. 2005;1:306–9.
- Intergovernmental Panel on Climate Change. IPCC special report on renewable energy sources and climate change mitigation. Cambridge: Cambridge University Press; 2011.
- Jarvis JR, Taylor NR, Prescott NB, Meeks I, Wathes CM. Measuring and modelling the photopic flicker sensitivity of the chicken (Gallus g. domesticus). Vis Res. 2002;42:99–106.
- Lehman RN, Kennedy PL, Savidge JA. The state of the art in raptor electrocution research: a global review. Biol Conserv. 2007;136:159–74.
- Lind O, Mitkus M, Olsson P, Kelber A. Ultraviolet vision in birds: the importance of transparent eye media. Proc R Soc Lond B. 2014;281:20132209.
- Marques AT, Batalha H, Rodrigues S, Costa H, Pereira MJR, Fonseca C, Mascarenhas M, Bernardino J. Understanding bird collisions at wind farms: an updated review on the causes and possible mitigation strategies. Biol Conserv. 2014;179:40–52.
- Martin GR. Understanding bird collisions with man-made objects: a sensory ecology approach. Ibis. 2011;153:239–54.
- Martin GR. Through birds’ eyes: insights into avian sensory ecology. J Ornithol. 2012;153:S23–48.
- May RF. A unifying framework for the underlying mechanisms of avian avoidance of wind turbines. Biol Conserv. 2015;190:179–87.
- May R, Reitan O, Bevanger K, Lorentsen SH, Nygard T. Mitigating wind-turbine induced avian mortality: sensory, aerodynamic and cognitive constraints and options. Renew Sustain Energy Rev. 2015;42:170–81.
- May R, Gill AB, Köppel J, Langston RHW, Reichenbach M, Scheidat M, Smallwood S, Voigt CC, Hüppop O, Portman M. Future research directions to reconcile wind turbine–wildlife interactions. In: Köppel J, editor. Wind energy and wildlife interactions: presentations from the CWW2015 conference. Cham: Springer; 2017. p. 255–76.
- Osorio D, Vorobyev M. A review of the evolution of animal colour vision and visual communication signals. Vision Res. 2008;48:2042–51.
- Ödeen A, Håstad O. The phylogenetic distribution of ultraviolet sensitivity in birds. BMC Evol Biol. 2013;13:36.
- Ödeen A, Håstad O, Alström P. Evolution of ultraviolet vision in shorebirds (Charadriiformes). Biol Lett. 2010;6:370–4.
- Ödeen A, Håstad O, Alström P. Evolution of ultraviolet vision in the largest avian radiation—the passerines. BMC Evol Biol. 2011;11:313.
- Poot H, Ens BJ, de Vries H, Donners MAH, Wernand MR, Marquenie JM. Green light for nocturnally migrating birds. Ecol Soc. 2008;13:47.
- R Core Team. R: a language and environment for statistical computing. Vienna: R Foundation for Statistical Computing; 2015.
- van Langevelde F, Ettema JA, Donners M, WallisDeVries MF, Groenendijk D. Effect of spectral composition of artificial light on the attraction of moths. Biol Conserv. 2011;144:2274–81.
- Wiltschko R, Munro U, Ford H, Stapput K, Thalau P, Wiltschko W. Orientation of migratory birds under ultraviolet light. J Comp Physiol A Neuroethol Sens Neural Behav Physiol. 2014;200:399–407.
- Young DP Jr, Erickson WP, Strickland MD, Good RE, Sernka KJ. Comparison of avian responses to UV-light-reflective paint on wind turbines. Subcontract report July 1999–December 2000. Golden: National Renewable Energy Laboratory; 2003.
<urn:uuid:712e3141-b63d-4532-8635-3125c96a6732>
3.234375
7,139
Truncated
Science & Tech.
45.153529
95,495,037
Coastal salt marshes are highly sensitive wetland ecosystems that can sustain long-term impacts from anthropogenic events such as oil spills. In this study, we examined the microbial communities of a Gulf of Mexico coastal salt marsh during and after the influx of petroleum hydrocarbons following the Deepwater Horizon oil spill. Total hydrocarbon concentrations in salt marsh sediments were highest in June and July 2010 and decreased in September 2010. Coupled PhyloChip and GeoChip microarray analyses demonstrated that the microbial community structure and function of the extant salt marsh hydrocarbon-degrading microbial populations changed significantly during the study. The relative richness and abundance of phyla containing previously described hydrocarbon-degrading bacteria (Proteobacteria, Bacteroidetes, and Actinobacteria) increased in hydrocarbon-contaminated sediments and then decreased once hydrocarbons were below detection. Firmicutes, however, continued to increase in relative richness and abundance after hydrocarbon concentrations were below detection. Functional genes involved in hydrocarbon degradation were enriched in hydrocarbon-contaminated sediments then declined significantly (p<0.05) once hydrocarbon concentrations decreased. A greater decrease in hydrocarbon concentrations among marsh grass sediments compared to inlet sediments (lacking marsh grass) suggests that the marsh rhizosphere microbial communities could also be contributing to hydrocarbon degradation. The results of this study provide a comprehensive view of microbial community structural and functional dynamics within perturbed salt marsh ecosystems.

Ecosystem boundary retreat due to human-induced pressure is a generally observed phenomenon. However, studies that document thresholds beyond which internal resistance mechanisms are overwhelmed are uncommon. Following the Deepwater Horizon (DWH) oil spill, field studies from a few sites suggested that oiling of salt marshes could lead to a biogeomorphic feedback where plant death resulted in increased marsh erosion. We tested for spatial generality of and thresholds in this effect across 103 salt marsh sites spanning ~430 kilometers of shoreline in coastal Louisiana, Alabama, and Mississippi, using data collected as part of the natural resource damage assessment (NRDA). Our analyses revealed a threshold for oil impacts on marsh edge erosion, with higher erosion rates occurring for ~1-2 years after the spill at sites with the highest amounts of plant stem oiling (90-100%). These results provide compelling evidence showing large-scale ecosystem loss following the Deepwater Horizon oil spill. More broadly, these findings provide rare empirical evidence identifying a geomorphologic threshold in the resistance of an ecosystem to increasing intensity of human-induced disturbance.

We used a first-of-its-kind comprehensive scenario approach to evaluate both the vertical and horizontal response of tidal wetlands to projected changes in the rate of sea-level rise (SLR) across 14 estuaries along the Pacific coast of the continental United States. Throughout the U.S. Pacific region, we found that tidal wetlands are highly vulnerable to end-of-century submergence, with resulting extensive loss of habitat. Using higher-range SLR scenarios, all high and middle marsh habitats were lost, with 83% of current tidal wetlands transitioning to unvegetated habitats by 2110.
The wetland area lost was greater in California and Oregon (100%) but still severe in Washington, with 68% submerged by the end of the century. The only wetland habitat remaining at the end of the century was low marsh under higher-range SLR rates. Tidal wetland loss was also likely under more conservative SLR scenarios, including loss of 95% of high marsh and 60% of middle marsh habitats by the end of the century. Horizontal migration of most wetlands was constrained by coastal development or steep topography, with just two wetland sites having sufficient upland space for migration and the possibility for nearly 1:1 replacement, making SLR threats in this region particularly high and, until now, generally undocumented. With low vertical accretion rates and little upland migration space, Pacific coast tidal wetlands are at imminent risk of submergence with projected rates of rapid SLR.

Salt marshes are valued for their ecosystem services, and their vulnerability is typically assessed through biotic and abiotic measurements at individual points on the landscape. However, lateral erosion can lead to rapid marsh loss even as marshes build vertically. Marsh sediment budgets represent a spatially integrated measure of competing constructive and destructive forces: a sediment surplus may result in vertical growth and/or lateral expansion, while a sediment deficit may result in drowning and/or lateral contraction. Here we show that sediment budgets of eight microtidal marsh complexes consistently scale with areal unvegetated/vegetated marsh ratios (UVVR), suggesting these metrics are broadly applicable indicators of microtidal marsh vulnerability. All sites are exhibiting a sediment deficit, with half the sites having projected lifespans of less than 350 years at current rates of sea-level rise and sediment availability. These results demonstrate that open-water conversion and sediment deficits are holistic and sensitive indicators of salt marsh vulnerability.

Wetlands are the largest natural source of atmospheric methane. Here, we assess controls on methane flux using a database of approximately 19 000 instantaneous measurements from 71 wetland sites located across subtropical, temperate, and northern high latitude regions. Our analyses confirm general controls on wetland methane emissions from soil temperature, water table, and vegetation, but also show that these relationships are modified depending on wetland type (bog, fen, or swamp), region (subarctic to temperate), and disturbance. Fen methane flux was more sensitive to vegetation and less sensitive to temperature than bog or swamp fluxes. The optimal water table for methane flux was consistently below the peat surface in bogs, close to the peat surface in poor fens, and above the peat surface in rich fens. However, the largest flux in bogs occurred when dry 30-day averaged antecedent conditions were followed by wet conditions, while in fens and swamps, the largest flux occurred when both 30-day averaged antecedent and current conditions were wet. Drained wetlands exhibited distinct characteristics, e.g. the absence of large flux following wet and warm conditions, suggesting that the same functional relationships between methane flux and environmental conditions cannot be used across pristine and disturbed wetlands.
Together, our results suggest that water table and temperature are dominant controls on methane flux in pristine bogs and swamps, while other processes, such as vascular transport in pristine fens, have the potential to partially override the effect of these controls in other wetland types. Because wetland types vary in methane emissions and have distinct controls, these ecosystems need to be considered separately to yield reliable estimates of global wetland methane release.

Salt marsh habitat loss to vegetation die-offs has accelerated throughout the western Atlantic in the last four decades. Recent studies have suggested that eutrophication, pollution and/or disease may contribute to the loss of marsh habitat. In light of recent evidence that predators are important determinants of marsh health in New England, we performed a total predator exclusion experiment. Here, we provide the first experimental evidence that predator depletion can cause salt marsh die-off by releasing the herbivorous crab Sesarma reticulatum from predator control. Excluding predators from a marsh ecosystem for a single growing season resulted in a >100% increase in herbivory and a >150% increase in unvegetated bare space compared to plots with predators. Our results confirm that marshes in this region face multiple, potentially synergistic threats.

Predator depletion on Cape Cod (USA) has released the herbivorous crab Sesarma reticulatum from predator control, leading to the loss of cordgrass from salt marsh creek banks. After more than three decades of die-off, cordgrass is recovering at heavily damaged sites, coincident with the invasion of green crabs (Carcinus maenas) into intertidal Sesarma burrows. We hypothesized that Carcinus is dependent on Sesarma burrows for refuge from physical and biotic stress in the salt marsh intertidal and reduces Sesarma functional density and herbivory through consumptive and non-consumptive effects, mediated by both visual and olfactory cues. Our results reveal that in the intertidal zone of New England salt marshes, Carcinus are burrow dependent, Carcinus reduce Sesarma functional density and herbivory in die-off areas, and Sesarma exhibit a generic avoidance response to large, predatory crustaceans. These results support recent suggestions that invasive Carcinus are playing a role in the recovery of New England salt marshes, and assertions that invasive species can play positive roles outside of their native ranges.

Drought has many consequences in the tidally dominated Spartina sp. salt marshes of the southeastern US, including major dieback events, changes in sediment chemistry and obvious changes in the landscape. These coastal systems tend to be highly productive, yet many salt marshes are also nitrogen limited and depend on plant-associated diazotrophs as their source of ‘new’ nitrogen. A 4-year study was conducted to investigate the structure and composition of the rhizosphere diazotroph assemblages associated with 5 distinct plant zones in one such salt marsh. A period of greatly restricted tidal inundation and precipitation, as well as two periods of drought (June–July 2004 and May 2007), occurred during the study. DGGE of nifH PCR amplicons from rhizosphere samples, Principal Components Analysis of the resulting banding patterns, and unconstrained ordination analysis of taxonomic data and environmental parameters were conducted.
Diazotroph assemblages were organized into 5 distinct groups (R² = 0.41, p < 0.001) whose presence varied with the environmental conditions of the marsh. Diazotroph assemblage group detection differed during and after the drought event, indicating that persistent diazotrophs maintained populations that provided reduced supplies of new nitrogen for vegetation during the periods of drought.

Landscape-level shifts in plant species distribution and abundance can fundamentally change the ecology of an ecosystem. Such shifts are occurring within mangrove-marsh ecotones, where over the last few decades, relatively mild winters have led to mangrove expansion into areas previously occupied by salt marsh plants. On the Texas (USA) coast of the western Gulf of Mexico, most cases of mangrove expansion have been documented within specific bays or watersheds. Based on this body of relatively small-scale work and broader global patterns of mangrove expansion, we hypothesized that there has been a recent regional-level displacement of salt marshes by mangroves. We classified Landsat-5 Thematic Mapper images using artificial neural networks to quantify black mangrove (Avicennia germinans) expansion and salt marsh (Spartina alterniflora and other grass and forb species) loss over 20 years across the entire Texas coast. Between 1990 and 2010, mangrove area grew by 16.1 km2, a 74% increase. Concurrently, salt marsh area decreased by 77.8 km2, a 24% net loss. Only 6% of that loss was attributable to mangrove expansion; most salt marsh was lost due to conversion to tidal flats or water, likely a result of relative sea level rise. Our research confirmed that mangroves are expanding and, in some instances, displacing salt marshes at certain locations. However, this shift is not widespread when analyzed at a larger, regional level. Rather, local, relative sea level rise was indirectly implicated as another important driver causing regional-level salt marsh loss. Climate change is expected to accelerate both sea level rise and mangrove expansion; these mechanisms are likely to interact synergistically and contribute to salt marsh loss.

The greenhead horse fly, Tabanus nigrovittatus Macquart, is frequently found in coastal marshes of the Eastern United States. The greenhead horse fly larvae are top predators in the marsh and thus vulnerable to changes in the environment, and the adults potentially are attracted to polarized surfaces like oil. Therefore, horse fly populations could serve as bioindicators of marsh health and toxic effects of oil intrusion. In this study, we describe the impact of the April 2010 Deep Water Horizon oil spill in the Gulf of Mexico on tabanid population abundance and genetics as well as mating structure. Horse fly populations were sampled biweekly from oiled and unaffected locations immediately after the oil spill in June 2010 until October 2011. Horse fly abundance estimates showed severe crashes of tabanid populations in oiled areas. Microsatellite genotyping of six pristine and seven oiled populations at ten polymorphic loci detected genetic bottlenecks in six of the oiled populations in association with fewer breeding parents, reduced effective population size, lower number of family clusters and fewer migrants among populations. This is the first study assessing the impact of oil contamination at the level of a top arthropod predator of the invertebrate community in salt marshes.
<urn:uuid:7e68c340-3945-4f9f-bb96-b306aeb61fc3>
2.8125
2,648
Academic Writing
Science & Tech.
15.342033
95,495,059
When hurricanes Harvey, Irma, and Maria recently pounded Texas, Florida, and Puerto Rico, the severity of the storms was often attributed to climate change. “Four Underappreciated Ways That Climate Change Could Make Hurricanes Even Worse” was the headline in the Washington Post; “Hurricane Harvey’s Size and Impact Points to Climate Change,” noted NPR.

But five years ago, when Superstorm Sandy took its famous left turn to barrel into the East Coast, there was no such certainty. Instead, a debate played out in the media and on Twitter about whether it was fair to blame climate change for the storm’s intensity.

One reason why the conversation has changed is that the science has become much more advanced, especially in the field of attribution science, a relatively new discipline within climate science that looks at how climate change factors into individual weather events. During Hurricane Katrina in 2005, attribution science was in its “infancy,” one scientist told the Post, and even when Sandy hit, the dominant narrative was that climate change doesn’t necessarily affect individual extreme weather events. But today, both the computer models and how scientists have learned to communicate the lessons of climate science have become more sophisticated, and there is a growing body of peer-reviewed published literature on the subject, though not without some controversy. No one suggests that climate change caused a single weather event, but scientists have grown a lot more comfortable talking about the ways rising temperatures affect extreme weather, creating the right conditions to fuel wetter storms and bigger, more dangerous storm surges.

One issue that’s received more attention is that, in the past, attribution studies started “with the assumption that a given event is ‘natural,’ and the burden of proof is on the claim that the event was caused, exacerbated, or made more likely” by human causes, social scientists Lisa Lloyd and Naomi Oreskes and climate scientist Michael Mann wrote in an article for Climatic Change that was published in 2017. “The null hypothesis is that there is no human contribution.”

But then came Superstorm Sandy. Researchers used data from the storm to try to get a handle on the effects of climate change on extreme weather. One of the major papers was from the National Center for Atmospheric Research’s Kevin Trenberth and two colleagues in 2015. They argued that it was necessary to come up with a new way of approaching attribution. Even though computer models weren’t yet sophisticated enough to account for all the atmospheric dynamics in climate change, it was still possible to account for other dynamics. The oceans are warmer and higher, and the atmosphere holds more moisture than it used to, which is linked to heavier rainfall. Using Sandy’s weather models, which were very accurate, the researchers could alter the heat and the moisture levels of those models to account for climate change.

The conclusion? “We find that indeed Superstorm Sandy is more intense, it is bigger, the rainfall is heavier, and so there’s a climate change component to the strength of that storm,” Trenberth tells Mother Jones. “The environment in which all these storms, including Superstorm Sandy, is occurring is fundamentally different than it used to be.”

That was not the only challenge to conventional thinking.
Starting in 2011, a year before Sandy, the Bulletin of the American Meteorological Society began publishing an annual state-of-the-science report titled “Attribution of Extreme Weather Events in the Context of Climate Change.” Last year, Heidi Cullen, a chief scientist with Climate Central studying weather variability, wrote that the annual work had the potential to have the same impact on the conversation around weather and global warming as the surgeon general’s 1964 report on smoking and lung cancer. “Scientists are now able to assess, in some cases within days, whether and how much the risk of such an extreme weather event has changed compared to the past—that is, before heat-trapping greenhouse gases altered our climate,” she wrote in the New York Times.

Even if the science has become more precise in understanding the nuances and connections between climate change and extreme weather, many politicians aren’t listening. EPA Administrator Scott Pruitt has said that to “have any kind of focus on the cause and effect of the storm, versus helping people, or actually facing the effect of the storm, is misplaced.”

Climate Desk is a journalistic collaboration dedicated to exploring the impact—human, environmental, economic, political—of a changing climate. The partners are The Atlantic, Atlas Obscura, CityLab, Grist, The Guardian, High Country News, HuffPost, Medium, Mother Jones, the National Observer, New Republic, Newsweek, Reveal, Slate, Undark, Wired.
<urn:uuid:7bab4860-6064-46cc-9753-d229e911f162>
3.484375
989
Knowledge Article
Science & Tech.
27.641179
95,495,061
Benzyl Chloride (redirected from Benzylchloride)

benzyl chloride [′ben·zəl ′klȯr‚īd]

an organic compound, C6H5CH2Cl; a colorless liquid with a sharp odor; Tb = 179.3°C. Benzyl chloride is not water-soluble; it is miscible with alcohol, chloroform, and other organic solvents. When heated with water, benzyl chloride gradually hydrolyzes to form benzyl alcohol. Benzyl chloride is obtained industrially by passing chlorine through toluene that has been heated to 90–100°C and contains 1 percent PCl3. Benzyl chloride is used to obtain benzyl alcohol and, more importantly, benzyl cellulose, which is widely used in the production of plastics, films, electrical insulating materials, and lacquers.
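The preparation and hydrolysis described in the entry correspond to the following balanced equations; these are standard chemistry, added for illustration rather than taken from the entry:

```latex
% Industrial preparation: chlorination of hot toluene
% (the entry notes that ~1% PCl3 is present):
\mathrm{C_6H_5CH_3} + \mathrm{Cl_2} \longrightarrow \mathrm{C_6H_5CH_2Cl} + \mathrm{HCl}

% Gradual hydrolysis in hot water to benzyl alcohol:
\mathrm{C_6H_5CH_2Cl} + \mathrm{H_2O} \longrightarrow \mathrm{C_6H_5CH_2OH} + \mathrm{HCl}
```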
<urn:uuid:f6334e85-3b1a-41e6-b395-e8596cd70e04>
2.921875
193
Knowledge Article
Science & Tech.
36.7925
95,495,072
Understanding soil water dynamics and accurately estimating groundwater recharge are essential steps in achieving efficient and sustainable management of groundwater resources in regions with deep vadose zones. The objective of this study was to understand transient data and the dynamic nature of water in deep sections of a thick vadose zone, and to estimate groundwater recharge by applying Darcy's law to unsaturated water fluxes. The study was conducted during 2009–2013 at Luancheng Agro-ecosystem Experimental Station of the Chinese Academy of Sciences, located in the North China Plain. Water contents were measured with water probes, and matric suctions with pressure transducers, at depths of 9 and 11 m; these were combined with laboratory measurements of unsaturated hydraulic conductivity to estimate groundwater recharge. The results indicated that the soil water content at the 9- and 11-m depths increased following the rainy season and then gradually stabilized, and that the intensity and continuity of precipitation events played an important role in soil water changes. The soil water dynamics between the two depths (9 and 11 m) showed a time lag of approximately 5–11 days. Groundwater recharge, which reflected this hysteresis, ranged from 7.60 to 19.75 mm over the study period. Research Article | November 18 2015 Water dynamics and groundwater recharge in a deep vadose zone 1Key Laboratory of Agricultural Water Resources, Center for Agricultural Resources Research, Institute of Genetics and Developmental Biology, Chinese Academy of Sciences, 286 Huaizhong Road, Shijiazhuang, Hebei 050021, China E-mail: email@example.com Zhaoqiang Ju, Xiaoxin Li, Chunsheng Hu; Water dynamics and groundwater recharge in a deep vadose zone. Water Science and Technology: Water Supply 1 June 2016; 16 (3): 579–586. doi: https://doi.org/10.2166/ws.2015.165
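The recharge estimate described here rests on Darcy's law for unsaturated flow, q = -K(h) dH/dz, with total head H = h - z. The following is a minimal sketch of that calculation; the conductivity function is a hypothetical Gardner-type model and the matric heads are invented, whereas in the study itself K was measured in the laboratory and the resulting fluxes were far smaller.

```python
import math

# Minimal sketch of a Darcy-type recharge estimate (not the authors' code).
# Depth z is positive downward, so total head is H = h - z and the head
# gradient includes gravity. Positive q means downward (recharging) flux.
def hydraulic_conductivity(h_m: float, k_sat: float = 0.001, alpha: float = 0.8) -> float:
    """Gardner-type K(h) in m/day; h_m is matric head in metres (negative)."""
    return k_sat * math.exp(alpha * h_m)

def darcy_flux(h_upper: float, h_lower: float, z_upper: float, z_lower: float) -> float:
    """Downward flux (m/day) between two tensiometer depths."""
    h_mid = 0.5 * (h_upper + h_lower)
    dH_dz = (h_lower - h_upper) / (z_lower - z_upper) - 1.0  # gravity term
    return -hydraulic_conductivity(h_mid) * dH_dz

# Invented matric heads at the two instrumented depths (9 m and 11 m):
q = darcy_flux(h_upper=-2.1, h_lower=-2.0, z_upper=9.0, z_lower=11.0)
print(f"recharge flux: {q * 1000:.3f} mm/day")  # ~0.18 mm/day for these values
```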
<urn:uuid:32f415a6-8a7d-440d-9d8f-0a934651847f>
2.71875
440
Academic Writing
Science & Tech.
37.903111
95,495,081
The tiny, dim object, called UDFj-39546284, is a compact galaxy of blue stars that existed 480 million years after the Big Bang. More than 100 such mini-galaxies would be needed to make up our Milky Way. The new research offers surprising evidence that the rate of star birth in the early universe grew dramatically, increasing by about a factor of 10 from 480 million years to 650 million years after the Big Bang. "NASA continues to reach for new heights, and this latest Hubble discovery will deepen our understanding of the universe and benefit generations to come," said NASA Administrator Charles Bolden, who was the pilot of the space shuttle mission that carried Hubble to orbit. "We could only dream when we launched Hubble more than 20 years ago that it would have the ability to make these types of groundbreaking discoveries and rewrite textbooks." Astronomers don't know exactly when the first stars appeared in the universe, but every step farther from Earth takes them deeper into the early formative years when stars and galaxies began to emerge in the aftermath of the Big Bang. "These observations provide us with our best insights yet into the earlier primeval objects that have yet to be found," said Rychard Bouwens of the University of Leiden in the Netherlands. Bouwens and Illingworth report the discovery in the Jan. 27 issue of the British science journal Nature. This observation was made with the Wide Field Camera 3 starting just a few months after it was installed in the observatory in May 2009, during the last NASA space shuttle servicing mission to Hubble. After more than a year of detailed observations and analysis, the object was positively identified in the camera's Hubble Ultra Deep Field-Infrared data taken in the late summers of 2009 and 2010. The object appears as a faint dot of starlight in the Hubble exposures. It is too young and too small to have the familiar spiral shape that is characteristic of galaxies in the local universe. Although its individual stars can't be resolved by Hubble, the evidence suggests this is a compact galaxy of hot stars formed more than 100-to-200 million years earlier from gas trapped in a pocket of dark matter. "We're peering into an era where big changes are afoot," said Garth Illingworth of the University of California at Santa Cruz. "The rapid rate at which the star birth is changing tells us if we go a little further back in time we're going to see even more dramatic changes, closer to when the first galaxies were just starting to form." The proto-galaxy is only visible at the farthest infrared wavelengths observable by Hubble. Observations of earlier times, when the first stars and galaxies were forming, will require Hubble's successor, the James Webb Space Telescope (JWST). The hypothesized hierarchical growth of galaxies -- from stellar clumps to majestic spirals and ellipticals -- didn't become evident until the Hubble deep-field exposures. The first 500 million years of the universe's existence, from z of 1000 to 10, is the missing chapter in the hierarchical growth of galaxies. It's not clear how the universe assembled structure out of a darkening, cooling fireball of the Big Bang. As with a developing embryo, astronomers know there must have been an early period of rapid changes that would set the initial conditions to make the universe of galaxies what it is today. 
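To connect redshift to "years after the Big Bang" as quoted in this story, one can use a standard cosmology calculator. The sketch below relies on the astropy library's Planck18 parameters (which differ slightly from the cosmology behind the article's quoted ages, so the numbers do not match exactly); z ≈ 10.3 is the photometric redshift reported for UDFj-39546284 at the time, and z ≈ 8.0 stands in for the later comparison epoch around 650 million years.

```python
# Sketch (not from the article): converting redshift to cosmic age with a
# standard flat Lambda-CDM cosmology, using the astropy library.
from astropy.cosmology import Planck18

for z in (10.3, 8.0):
    age_myr = Planck18.age(z).to("Myr").value
    print(f"z = {z:4.1f} -> ~{age_myr:.0f} Myr after the Big Bang")
```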
"After 20 years of opening our eyes to the universe around us, Hubble continues to awe and surprise astronomers," said Jon Morse, NASA's Astrophysics Division director at the agency's headquarters in Washington. "It now offers a tantalizing look at the very edge of the known universe -- a frontier NASA strives to explore." Hubble is a project of international cooperation between NASA and the European Space Agency. NASA's Goddard Space Flight Center in Greenbelt, Md., manages the telescope. The Space Telescope Science Institute (STScI) conducts Hubble science operations. STScI is operated for NASA by the Association of Universities for Research in Astronomy, Inc., in Washington, D.C. For more information about object UDFj-39546284 and Hubble, visit:http://www.nasa.gov/hubble Trent J. Perrotto | Newswise Science News What happens when we heat the atomic lattice of a magnet all of a sudden? 17.07.2018 | Forschungsverbund Berlin Subaru Telescope helps pinpoint origin of ultra-high energy neutrino 16.07.2018 | National Institutes of Natural Sciences For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth. To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength... For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications. Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar... Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction. A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in in the synthesis of new chemical... Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy. "Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy.... Ultra-short, high-intensity X-ray flashes open the door to the foundations of chemical reactions. Free-electron lasers generate these kinds of pulses, but there is a catch: the pulses vary in duration and energy. An international research team has now presented a solution: Using a ring of 16 detectors and a circularly polarized laser beam, they can determine both factors with attosecond accuracy. 
Free-electron lasers (FELs) generate extremely short and intense X-ray flashes. Researchers can use these flashes to resolve structures with diameters on the...
<urn:uuid:bcc3858a-e6b7-4f2e-a0cb-9cb833a9de35>
3.578125
1,503
Content Listing
Science & Tech.
44.724077
95,495,086
Heterogeneity of species interactions in food webs can result from characteristics of substrates as well as attributes of top consumers. We performed a streamside channel experiment to evaluate the impact of crayfish on lower trophic levels in detritus-based (leaf packs) and algal-based food webs (hard bottoms). After 43 days, both male and female crayfish had dramatically promoted leaf decomposition, with males processing material at a faster rate. However, the difference in leaf processing rates was not related to a greater level of male activity. Despite the sex-related difference in residual leaf dry mass, densities of invertebrates in leaf packs were similarly low in the presence of crayfish of either sex, due to resource consumption, physical dislodgment (bioturbation) and/or predation. No trophic cascade was evident in the leaf pack assemblage. In the hard-bottom assemblage, the results confirmed circumstantial field evidence that crayfish reduce predatory Tanypodinae and indirectly increase collector-gatherer Chironominae, following the prediction of a trophic cascade. However, no other taxa were indirectly facilitated, because of strong direct effects of crayfish on algal abundance (through direct consumption and bioturbation). Overall, impacts of crayfish on lower trophic levels were more pronounced in the structurally complex, detritus-based assemblage than in its hard-bottom, algal-based counterpart. This conflicts with the expectation that net predation effects should be weaker where structural complexity is greater, but is mainly a consequence of the profound engineering effects of crayfish in reducing colonisable substrate when they shred and disturb detrital material. Effects of crayfish may therefore propagate differently and with varying strength depending on substrate. Moreover, engineering activities and predation by crayfish appear to have been of overwhelming significance, with subtle sex differences in leaf processing rates failing to lead to differences in invertebrate densities.
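Leaf processing rates of the kind compared here are conventionally summarised as the decay coefficient k of an exponential breakdown model, m(t) = m0 * exp(-k t). A minimal sketch of estimating k from leaf-pack dry mass follows, using invented numbers rather than the study's data.

```python
import math

# Sketch (hypothetical data): estimate the leaf processing rate k by linear
# regression of log(residual dry mass) against time, since
# m(t) = m0 * exp(-k t) implies log m(t) = log m0 - k t.
days = [0, 10, 21, 32, 43]
dry_mass_g = [5.0, 3.9, 2.9, 2.2, 1.6]  # residual leaf dry mass (made up)

n = len(days)
x_mean = sum(days) / n
y = [math.log(m) for m in dry_mass_g]
y_mean = sum(y) / n
num = sum((x - x_mean) * (yi - y_mean) for x, yi in zip(days, y))
den = sum((x - x_mean) ** 2 for x in days)
k = -num / den
print(f"processing rate k = {k:.4f} per day")  # ~0.026/day for these numbers
```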
<urn:uuid:9f692927-1df4-428b-bf4c-d902b756d93e>
2.734375
434
Academic Writing
Science & Tech.
7.320618
95,495,100
Study: Earth Has A ‘Natural Thermostat’ To Regulate Climate During Extreme Temperature Swings

PARIS — The possibility of controlling the Earth’s temperature has long led to various experiments by inquisitive scientists, but without great results. Now a recent study has found proof for the first time of a natural thermostat that helps regulate the planet during extreme temperature swings. British scientists say they’ve discovered that the preeminent mechanism that allows the Earth to recover from global cooling events is linked to the weathering of rocks. Rocks are dissolved by rain and river water during the weathering process, and carbon dioxide is drawn from the atmosphere into carbon-rich rocks in nearby waterways. When weathering runs its course, there’s a decrease in carbon dioxide on our planet.

The researchers examined rocks from about 445 million years ago, which corresponds to the second-largest extinction event in the planet’s history. Using samples from Canada and Scotland, the researchers showed that the global chemical weathering rate declined, which meant less carbon dioxide was removed and the climate was able to recover from the cold.

“From looking at the relative abundance of lithium isotopes in ocean-derived rocks, we were able to confirm that chemical weathering is the driver of the Earth’s natural thermostat,” explains lead scientist Dr. Philip Pogge von Strandmann in a news release. “When there is a warmer climate, there is more weathering, and when it is cooler there is less weathering: this is what you would expect, given that chemical reactions go faster with increasing temperature.”

The researchers had discovered evidence in earlier studies that showed weathering played a significant part in the Earth cooling down during periods of extreme heat, but the latest study proved just the opposite — when the planet experiences major cold spells, weathering slows and the “natural thermostat” allows the world to warm back up. “This is the process that has allowed life to survive on Earth for around 4 billion years,” says Pogge von Strandmann.

- Study Reveals Which Species Is Likely To Be Last Survivor On Earth
- 100 Million Years in Darkness: Ancient Flowers Trapped In Amber Come To Light
- Evolutionary Enigma: Study Finds Traces of ‘Ghost Species’ In Human Saliva
- Climate Change Could Widen Gap Between Rich and Poor, Study Finds
- Meteorites May Have Been Key Ingredient To Creation Of Life On Earth, Study Finds
- Study: Ship Exhaust Responsible For Stronger Storms, More Lightning Strikes Out At Sea
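Returning to the mechanism described in the article above: the feedback loop (warmer climate, faster weathering, more CO2 drawn down, cooling, and the reverse when cold) can be illustrated with a toy relaxation model. This is not the study's analysis; the feedback strength is arbitrary and each step stands in for a long geologic interval, since real silicate weathering acts over hundreds of thousands of years.

```python
# Toy negative-feedback model of the weathering thermostat (illustration only):
# weathering speeds up when the planet is warm and slows when it is cold,
# pulling temperature back toward equilibrium.
def simulate(offset_c: float, feedback: float = 0.05, steps: int = 100) -> list:
    """Temperature offset from equilibrium, stepped forward with Euler updates."""
    temps, t = [], offset_c
    for _ in range(steps):
        t -= feedback * t  # CO2 drawdown (or buildup) nudges T toward zero offset
        temps.append(t)
    return temps

cold_snap = simulate(-4.0)  # start 4 degrees below equilibrium
print(f"step 50: {cold_snap[49]:+.2f} C, step 100: {cold_snap[99]:+.2f} C")
```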
<urn:uuid:10c6b28d-c38d-47b4-8d4e-8cde09edab9f>
3.953125
549
News Article
Science & Tech.
25.793473
95,495,144
- Research article - Open Access

From Africa to Europe and back: refugia and range shifts cause high genetic differentiation in the Marbled White butterfly Melanargia galathea

© Habel et al; licensee BioMed Central Ltd. 2011
Received: 27 January 2011 Accepted: 21 July 2011 Published: 21 July 2011

The glacial-interglacial oscillations caused severe range modifications of biota. Thermophilic species became extinct in the North and survived in southern retreats, e.g. the Mediterranean Basin. These repeated extinction and (re)colonisation events led to long-term isolation and intermixing of populations and thus resulted in strong genetic imprints in many European species, which are therefore composed of several genetic lineages. To better understand these cycles of repeated expansion and retraction, we selected the Marbled White butterfly Melanargia galathea. Forty-one populations scattered over Europe and the Maghreb and one population of the sibling taxon M. lachesis were analysed using allozyme electrophoresis.

We obtained seven distinct lineages applying neighbour-joining and STRUCTURE analyses: (i) Morocco, (ii) Tunisia, (iii) Sicily, (iv) Italy and southern France, (v) eastern Balkans extending to Central Europe, (vi) western Balkans with the western Carpathian Basin, as well as (vii) south-western Alps. The hierarchy of these splits matches the chronology of glacial and interglacial cycles since the Günz ice age, starting with an initial split between the galathea group in North Africa and the lachesis group in Iberia. These genetic structures were compared with past distribution patterns during the last glacial stage calculated with distribution models. Both methods suggest climatically suitable areas in the Maghreb and the southern European peninsulas, with distinct refugia during the last glacial period, and underpin strong range expansions to the North during the Postglacial. However, the allozyme patterns reveal biogeographical structures not detected by distribution modelling, such as two distinct refugia in the Maghreb, two or more distinct refugia in the Balkans and a close link between the eastern Maghreb and Sicily. Furthermore, the genetically highly diverse western Maghreb might have acted as source or speciation centre of this taxon, while the eastern, genetically impoverished Maghreb population might result from a relatively recent recolonisation from Europe via Sicily.

The impacts of climatic oscillations on the earth's biota have been intensively studied. In the western Palaearctic, thermophilic organisms went extinct over major parts of Central and North Europe during cold stages and survived in the lowlands of lower latitudes in often distinct refugia [2–6]. Molecular studies revealed that most of these taxa exclusively survived glacial periods south of the European high mountain chains in the Iberian, Italian and Balkan peninsulas, and some even in additional extra-Mediterranean refugia [7, 8]. The long-term isolation of populations in these retreats over many thousands of years resulted in genetic differentiation. During the warmer interglacial periods, species expanded their distribution ranges northwards and extended their different genetic lineages over more northern areas [9, 10].
In contrast to the three more intensively studied Mediterranean refugia of southern Europe (Iberia, peninsular Italy and the Balkans), little is known about North African refugia and the biogeographical relation between the Maghreb and southern Europe, separated by the two narrow sea straits of Gibraltar and Sicily. It has been shown that the Maghreb is often sub-structured following an east-west [e.g. [11–13]] or south-north differentiation pattern [e.g. [5, 14]]; in some cases, genetic continuity was demonstrated between the Maghreb and Sicily [e.g. [15, 16]]. Other studies underline the important role of Sicily as a diversification centre for European taxa, unravelling deep genetic splits between this island and peninsular Italy (e.g. Erinaceus europaeus: ; Pseudepidalea viridis: [18, 19]). Few molecular analyses also reveal the outstanding importance of North Africa as a refugium for thermophilic species during glacial periods [e.g. [12, 14, 20, 21]]. However, most studies focus either on the Maghreb or on the southern European refugia and do not combine the distribution of species over north-western Africa and throughout Europe.

To study the biogeographical importance of the Maghreb region and its connection with Europe, we selected the Marbled White butterfly species complex Melanargia galathea (Linnaeus, 1758) and Melanargia lachesis (Hübner, 1790) as a model system, using two analytical tools (allozyme polymorphisms and distribution modelling). Today, M. galathea is widely distributed from the Maghreb region (mountain ranges of Morocco, Algeria and Tunisia) [22, 23] to the English Midlands, and from the Pyrenees to the Baltic Sea in Poland. On the Iberian Peninsula, M. galathea is replaced by its sibling species M. lachesis. Thus, the Italian peninsula is the only possible link between North Africa and Europe for M. galathea. Which refugia were of importance for the glacial survival of the M. galathea/lachesis species complex during the subsequent glacial periods? Is there any evidence of genetic structuring within the North African and Italian refugia? Which routes of expansion and retraction did the butterfly follow throughout time?

All enzyme loci had banding patterns consistent with known quaternary structures. While most loci were inherited autosomally, 6PGDH and ME were located on the Z chromosome, so that hemizygous females (but not males) had a single copy. No general linkage disequilibrium was observed for any locus (all p > 0.05 after Bonferroni correction). A total of 13 analysed loci were polymorphic, but two loci (FUM, GPDH) were monomorphic throughout all samples. Allele frequencies for each enzyme and population are given in additional file 1.

Table 1. Sampling location and five parameters of genetic diversity for 41 populations of Melanargia galathea from its western Palaearctic distribution area and one population of M. lachesis from the Pyrenees: number of individuals analysed (N), mean number of alleles per locus (A), percentage of expected (He) and observed (Ho) heterozygosity, percentage of polymorphic loci not exceeding 95% (P95) and total number of polymorphic loci (Ptot). [Table body lost in extraction; surviving fragments: the column "Date of sampling" and the site labels T-Table de Yagurta, F-Col de Tende, RO-Porta di Fier Transilvanici, E-Col de Perbes*.]

Table 2. Means of sample sizes and genetic diversities of the different genetic groups of Melanargia galathea and M. lachesis; p values of Kruskal-Wallis ANOVAs among groups are given. Columns as in Table 1: N | A | He (%) | Ho (%) | P95 (%) | Ptot (%); several row labels and the p values were lost in extraction.
[group label lost] | 36.0 ± 7.9 | 2.10 ± 0.19 | 17.0 ± 2.2 | 15.3 ± 2.8 | 34.5 ± 5.4 | 45.2 ± 9.4
[group label lost] | 36.0 ± 0.0 | 2.38 ± 0.10 | 18.4 ± 1.6 | 17.6 ± 3.0 | 36.7 ± 6.7 | 53.3 ± 5.4
[group label lost] | 37.4 ± 7.8 | 2.00 ± 0.17 | 13.9 ± 2.1 | 11.6 ± 2.4 | 33.3 ± 6.7 | 49.3 ± 8.9
[group label lost] | 37.7 ± 5.8 | 2.13 ± 0.13 | 15.8 ± 1.8 | 13.4 ± 1.3 | 33.3 ± 6.6 | 55.6 ± 10.2
Italy + SE France | 35.3 ± 8.3 | 2.19 ± 0.24 | 17.9 ± 2.3 | 15.1 ± 2.6 | 38.1 ± 8.3 | 52.4 ± 11.2
eastern Balkans + Central Europe | 36.8 ± 7.4 | 2.04 ± 0.15 | 17.8 ± 1.7 | 16.9 ± 2.0 | 33.7 ± 2.9 | 38.7 ± 3.6
[group label lost] | 26.5 ± 15.6 | 2.01 ± 0.22 | 15.3 ± 1.3 | 14.4 ± 1.1 | 31.6 ± 3.3 | 40.0 ± 5.5
p (Kruskal-Wallis ANOVA) | [values lost]
east Balkans + Romania | 35.7 ± 8.4 | 2.05 ± 0.15 | 17.1 ± 2.2 | 15.5 ± 1.8 | 33.3 ± 3.8 | 39.0 ± 4.6
[group label lost] | 37.7 ± 7.0 | 2.03 ± 0.16 | 18.5 ± 1.1 | 18.0 ± 1.4 | 34.0 ± 2.2 | 38.5 ± 2.9
p (Kruskal-Wallis ANOVA) | [values lost]

Table 3. Analyses of molecular variance for all Melanargia galathea populations and one population of M. lachesis, among individuals within populations. [Variance values lost in extraction.] Groupings analysed: All galathea and lachesis; Sicily and Tunisia; Italy, Balkans, Central Europe; Italy and S France; west Balkan group; east Balkan and Central Europe.

Table 4. Hierarchical variance analyses of Melanargia galathea and M. lachesis among genetic groups: proportion of among-groups variance of the total variance among populations. [Values lost in extraction.] Comparisons: galathea vs lachesis; Tunisia + Sicily vs rest of Europe; Tunisia vs Morocco; Tunisia vs Sicily; Sicily vs Italy; Italy vs S France (Condat); Italy vs SW Alps (Col de Tende); east Balkans + Central Europe vs SW Alps (Col de Tende); Italy vs SW Alps vs west Balkans vs east Balkans + Central Europe; east Balkans + Romania vs Central Europe; Morocco vs continental Europe.

The genetic diversities among these genetic lineages showed significant differences (Table 2). The Morocco group showed the highest values for A, He and Ho, and its means for P95 and Ptot were above average. At the other extreme, Tunisia had the lowest means for A, He and Ho, and its mean for P95 was well below average; the genetic diversities of Tunisia were lower than (A, He, Ho, Ptot) or equal to those of the otherwise rather similar populations from Sicily. The four groups from mainland Europe all had mostly intermediate genetic diversities, scattered around the respective mean values.

Species distribution modelling
According to the classification of Swets, we received 'excellent' AUC values in our 100 models (average training AUC = 0.927, average test AUC = 0.902). On average, the 'temperature annual range' had the highest explanatory power (30.3%), followed by the 'minimum temperature of the coldest month' (16.8%), the 'precipitation of the warmest quarter' (14.8%), the 'annual precipitation' (10.2%), the 'maximum temperature of the warmest month' (8.2%) and the 'precipitation of the driest quarter' (7.2%). All other variables contributed less than 5% each. The average minimum training presence was 0.05, and the lowest 10 percentile training omission threshold was 0.36.
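The population comparisons above, and the phenograms discussed below, rest on Nei's (1972) standard genetic distance computed from allele frequencies (see Methods). A minimal sketch of that calculation for two populations at three loci; the frequencies are hypothetical, not the paper's data.

```python
import math

# Nei's (1972) standard genetic distance: D = -ln( Jxy / sqrt(Jx * Jy) ),
# where Jxy, Jx, Jy are means over loci of the summed allele-frequency
# products within and between the two populations.
pop_x = {"locus1": [0.70, 0.30], "locus2": [0.55, 0.45], "locus3": [1.00, 0.00]}
pop_y = {"locus1": [0.40, 0.60], "locus2": [0.50, 0.50], "locus3": [0.90, 0.10]}

jxy = jx = jy = 0.0
for locus in pop_x:
    x, y = pop_x[locus], pop_y[locus]
    jxy += sum(a * b for a, b in zip(x, y))
    jx += sum(a * a for a in x)
    jy += sum(b * b for b in y)

n = len(pop_x)
d = -math.log((jxy / n) / math.sqrt((jx / n) * (jy / n)))
print(f"Nei's standard distance D = {d:.4f}")  # ~0.05 for these frequencies
```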
The obtained allozyme data, displayed in neighbour-joining phenograms, STRUCTURE plots and hierarchical variance analyses, indicate a profound genetic split between the two taxa, M. galathea and M. lachesis. Nazari et al. supported this pattern with three lines of evidence: (i) differences in the male genitalia between M. lachesis and M. galathea, (ii) a stronger difference in wing patterns between these two taxa than between M. galathea populations in Europe and the Maghreb and (iii) remarkable differences in DNA sequences of the nuclear wg gene between M. lachesis and M. galathea, but no major differentiation between M. galathea samples from Europe and the Maghreb. However, the sequences of the two mtDNA genes cox1 and 16S contradict the common pattern of allozymes, genital structures, wing patterns and nuclear DNA sequences: this marker does not clearly distinguish M. galathea from Europe and M. lachesis, but shows remarkable differences between Europe and the Maghreb, with this split dated back to the Messinian Salinity Crisis more than 5 My ago. Bearing in mind the differentiation patterns in all known marker systems, we believe that these two mtDNA lineages in the entire species complex might have originated at that time horizon, but were distributed to different geographical regions only much later by lineage sorting, hereby maybe exemplifying one case of the often observed difference between mtDNA on the one hand and nuclear DNA sequences, morphological characteristics and allozyme patterns on the other.

Our allozyme data further show strong differentiation within M. galathea into two major groups with respective subgroups: (i) Sicily - Tunisia, with (i-a) Sicily and (i-b) Tunisia, as well as (ii) all other M. galathea, with (ii-a) Morocco, (ii-b) Italy with parts of southern France, (ii-c) the western Balkans including the western Carpathian Basin, (ii-d) the eastern Balkans with Romania and Central Europe, and (ii-e) the south-western Alps.

Atlantic-Mediterranean origin of the M. galathea/lachesis species complex
From the Maghreb to Europe
The deepest split in the M. galathea populations is between the Sicily - Tunisia group and all the other populations. As this split is about twice the genetic differentiation among their subgroups and less than half of the distance to M. lachesis, the onset of the Riss glaciation (about 310 ky BP) might have triggered the vicariance and thus the beginning of this differentiation. As (i) Iberia was continuously blocked for the expansion of M. galathea to Europe by M. lachesis [cf. 27] and (ii) all European M. galathea populations except Sicily are more similar to populations from Morocco than from Tunisia, a scenario with this split taking place in the Maghreb is unlikely. This assumption is further supported by SDMs for ice age conditions, which predict mostly continuous distributions over North Africa (Figure 3b) and thus allow vicariance in this region only during the relatively short interglacial stages. For these reasons, M. galathea must have reached Europe before the Riss glaciation. As the region of the eastern Sahara in Egypt apparently has always been too dry for an expansion of M. galathea, this first expansion of M. galathea to Europe must have been from Tunisia to Sicily (Figure 4a), across a sea strait known for biogeographical connections in many taxa [e.g. ; and references therein]. As the Strait of Sicily was considerably narrower during glacial periods due to eustatic sea level lowering, the transition from the Mindel glaciation to the Holstein interglacial, with a still low sea level but already higher temperatures, might have been a suitable time period for this dispersal. After arrival in Sicily, the Holstein interglacial might have provided suitable conditions for the expansion of M. galathea over most parts of Europe, including the Balkans but excluding Iberia, as this peninsula was already populated by M. lachesis (Figure 4a). With the climatic cooling of the Riss ice age, which was considerably longer than the following Würm glaciation and had longer durations of minimum temperatures [33, 37], M. galathea most probably became nearly extinct in Europe, surviving only in the southernmost possible retreats in Sicily and the southern Balkans (Peloponnesos), but also in the Maghreb; M. lachesis could survive in southern Iberia (Figure 4b).
This vicariance might be the origin of the two major European lineages of M. galathea, with the eastern one by chance evolving allele frequencies similar to those of the Morocco lineage; this similarity therefore does not represent a recent biogeographical connection between them. Riss vicariance events were most likely also responsible for other differentiation processes, as e.g. in the Polyommatus coridon/hispana complex [e.g. ].

...and back to the Maghreb
As the time for differentiation between the four M. galathea lineages from continental Europe is assumed to be the result of one glacial cycle (see above), and as the differentiation between populations from Sicily and Tunisia is of the same order of magnitude, we assume that the onset of this differentiation lies in the same time frame. As the genetic diversity is significantly higher in Sicily than in Tunisia, and as the warm and dry interglacial climatic conditions in Tunisia are generally unsuitable for the survival of M. galathea, we assume that colonisation most likely took place from Sicily to Tunisia. As the sea level was still considerably lowered at the transition from Riss to Eem, thus facilitating dispersal between these two areas, this time period is the most likely one for this expansion event. During the following Eem interglacial, the Balkan refuge of M. galathea most probably could colonise most parts of Europe apart from Iberia and Sicily, which were occupied by other genetic lineages of this species complex (Figure 4b).

The existence of extra-Mediterranean refugia for thermophilic taxa
During the Würm ice age, which was not more severe than the two previous glaciations but had a shorter maximum, the Marbled White butterflies were not pushed as far to the South as in the previous cases. This matches well with the remarkable differentiation of the species in Europe, which allows five lineages to be distinguished (see above) and which most likely is the result of survival of the Würm ice age in a larger number of different refugia. This pattern implies at least two different refugia in the Balkan Peninsula, on the western and the eastern flank; more detailed analyses also support a third Balkan centre in the peninsula's southern parts (Figure 4c). This pattern of multiple refugia in the Balkans was already proposed by Reinig, who postulated different centres of survival in the western, southern and eastern Balkans, and was later supported by genetic analyses showing genetic divergences between these areas for a variety of animal species [e.g. [18, 40–42]]. Furthermore, different Würm refugia have to be postulated for Sicily and peninsular Italy, a pattern also found in other genetic analyses [e.g. [17, 43]], and other genetic studies show a remarkable genetic differentiation in the southernmost parts of peninsular Italy [e.g. [34, 44, 45]]. The last remaining lineage of M. galathea, in the south-western Alps, most likely does not represent a Mediterranean refuge of this species, but an extra-Mediterranean refuge area at the southern slopes of the glaciated Alps (Figure 4c).
As already shown by Stewart and Lister, glacial survival of temperate species in Europe was possible not only in the classical Mediterranean refugia sensu de Lattin, but also in small, climatically buffered pockets in more northern regions [8, 48, 49]. Recent works highlight especially the southern and south-eastern parts of the Alps as being of particular importance as additional Würm ice age refugia for temperate species [e.g. [42, 50, 51]], and even for species formerly thought to be of exclusively Mediterranean origin [e.g. [52, 53]]. This apparently was also the case for the Marbled White.

During the Postglacial, several lineages of M. galathea were mostly blocked in their expansion by other lineages representing the respective leading edges [cf. 54]. In the case of M. galathea in Morocco, its northwards expansion was blocked by M. lachesis distributed in Iberia. The lineage surviving in the eastern Balkans apparently had the greatest impact on the recolonisation of more northern parts of Europe, as its dispersal was not hampered by any major mountain obstacle [cf. 9], so that this lineage could expand throughout Central Europe to the western parts of Germany (Figure 4c). However, the samples from north-eastern France and southern Germany show a genetic structure intermediate between this lineage and the south-western Alps lineage, making a hybrid origin of these populations rather likely and thus implying an expansion of the southern Alps lineage over the chains of the Alps. The entire region of northern France and southern Germany might therefore be a zone of mixing between these three lineages. Hybrid zones between different taxa are frequently observed in this region [e.g. [9, 55]]. Furthermore, the southernmost population in Calabria (southern Italy) has a genetic structure intermediate between the Italian and the Sicily group, speaking for postglacial contact and intermixing between these two groups in this region.

The hierarchical structure of our allozyme data set on M. galathea and M. lachesis is consistent with the chronology of the last four glacial-interglacial cycles. Based on this consistency, we derive the following scenario, which in our opinion is the most likely one: (i) The beginning of the Günz ice age might have triggered the vicariance between the two species. (ii) M. galathea might have crossed from Tunisia to Sicily at the transition from the Mindel ice age to the Holstein interglacial and (iii) subsequently spread all over Europe, but retreated in the Maghreb to the higher elevations of the Atlas Mountains. (iv) The members of this species complex survived the coldest periods of the Riss glaciation only in southern Iberia, Morocco, Sicily and the southern Balkans (Peloponnesos). (v) At the transition from the Riss ice age to the Eem interglacial, Tunisia was recolonised from Sicily. (vi) The southern Balkan group might have colonised major parts of Europe during the Eem interglacial, including Italy and Central Europe. (vii) Populations of this group survived the Würm ice age in Italy, at the southern margin of the Alps and on the western and eastern flanks of the Balkan Peninsula; members of other lineages survived in Sicily, Tunisia, Morocco and Iberia. (viii) During the Postglacial, only the eastern Balkan and the Italian lineages showed major northwards range expansion. (ix) Hybridisation between lineages most probably occurred in western Central Europe and southern Calabria.
Alleles were labelled according to their relative mobility, starting with "1" for the slowest. All allozyme analyses were run on cellulose acetate plates. The banding patterns were (re)analysed by one person (JCH). Allele frequencies, Nei's standard genetic distances and parameters of genetic diversity (i.e. mean number of alleles per locus, A; expected heterozygosity, He; observed heterozygosity, Ho; total percentage of polymorphic loci, Ptot; and percentage of polymorphic loci with the most common allele not exceeding 95%, P95) were computed with G-Stat. As sample sizes did not differ significantly, a calculation of allelic richness correcting for population sizes was not necessary. For detecting differences in mean genetic diversities among genetic lineages and sublineages, we calculated U-tests using STATISTICA. Conventional F statistics, AMOVAs, hierarchical genetic variance analyses, and tests of Hardy-Weinberg equilibrium and linkage disequilibrium were calculated with ARLEQUIN 3.1. Phenograms using the neighbour-joining algorithm were constructed with PHYLIP, including bootstrap values (calculated from 1,000 iterations). To define individual-based genetic clusters, we performed STRUCTURE analyses. As burn-in and simulation lengths we used 100,000 and 300,000 iterations per run, based on the admixture model with correlated gene frequencies, comparing different groupings (from K = 2 to K = 10).

Species Distribution Modelling
Over the last few decades, Geographic Information System (GIS) based Species Distribution Models (SDMs) have become vital tools to predict the potential distribution of species under current conditions and climate change scenarios [62–64]. In combination with palaeoclimatological data, SDMs have been suggested as a means of inferring species' past distributions [65, 66], especially when combined with phylogeographic techniques. We compiled a set of 3,483 species records of M. galathea from online data bases (Global Biodiversity Information Facility - GBIF; http://www.gbif.org) and our own field surveys. The accuracy of all records was checked in DIVA-GIS 5.4, and only those which could be unambiguously assigned to a single grid cell with a resolution of 2.5 arc min (ca. 4 km in the study area) were used for further processing. Since unequal spatial clumping of species records may cause problems when computing SDMs, the species records were filtered in geographic space, leaving only one record per 10 arc min. The final data set comprised 535 records (Figure 3a) scattered all over the known range of the species in Europe and North Africa. We obtained information on current and past climate as described by the Community Climate System Model (CCSM; http://www.ccsm.ucar.edu) with a spatial resolution of 2.5 arc min from the Worldclim data base (http://www.worldclim.org). Original palaeoclimatological data were previously processed as described by Peterson and Nyári (2007). A total of 19 BIOCLIM variables have previously been suggested as suitable for SDM computation [71, 72]. However, inclusion of too many inter-correlated variables or biologically irrelevant predictors may hamper the transferability of SDMs through space and time [73–76]. Therefore, we first computed a pair-wise correlation matrix based on Pearson's correlation coefficients among all 19 predictor variables and excluded those with R² > 0.75. Subsequently, we chose a final set of eleven predictors describing biologically relevant climate conditions for the long-term persistence of M. galathea populations (i.e. annual mean temperature, maximum temperature of warmest month, minimum temperature of coldest month, temperature annual range, mean temperature of wettest quarter, mean temperature of driest quarter, annual precipitation, precipitation of wettest quarter, precipitation of driest quarter, precipitation of warmest quarter, precipitation of coldest quarter).
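The collinearity screen just described is straightforward to reproduce. Below is a sketch with synthetic data standing in for the 19 BIOCLIM predictors; only the R² > 0.75 exclusion rule is taken from the text.

```python
import numpy as np

# Sketch of a pairwise-correlation collinearity screen (hypothetical data):
# flag predictor pairs with R^2 > 0.75 so that one of each pair can be dropped.
rng = np.random.default_rng(0)
n_cells = 500
names = ["bio1", "bio5", "bio6", "bio7", "bio12"]  # stand-ins for BIOCLIM layers
X = rng.normal(size=(n_cells, len(names)))
X[:, 1] = X[:, 0] * 0.95 + rng.normal(scale=0.2, size=n_cells)  # force collinearity

r = np.corrcoef(X, rowvar=False)  # columns are variables
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        if r[i, j] ** 2 > 0.75:
            print(f"{names[i]} vs {names[j]}: R^2 = {r[i, j]**2:.2f} -> drop one")
```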
For SDM computation, Maxent 3.3.2 was applied [77, 78] using the default program settings. Random background records were automatically sampled by Maxent within the study area. Species records were split 100 times into 70% used for model training and 30% for model evaluation via the area under the receiver operating characteristic curve (AUC). Subsequently, the average predictions for current and past conditions of the logistic output of the 100 models were computed and transformed into presence/absence maps, applying the average minimum training presence and average 10% training omission as thresholds.

We acknowledge financial support by the German Academic Exchange Service (PostDoc Programme) and the Musée national d'histoire naturelle Luxembourg to JCH, as well as from the Ministry of Education, Science, Youth and Culture of the Rhineland-Palatinate state of Germany to DR (project: 'Implications of global change for biological resources, law and standards'). We thank Claas Damken (Auckland, New Zealand) and Marc Meyer (Luxembourg) for field assistance. We thank the Fonds National de la Recherche Luxembourg for covering the publication fees.

- Lomolino MV: Biogeography. Sinauer Associates. 2005, 465.
- Reinig WF: Die Holarktis. Gustav-Fischer-Verlag. 1937, Jena.
- De Lattin G: Grundriß der Zoogeographie. Verlag Gustav Fischer. 1967, Jena.
- Dennis RLH, Williams WR, Shreeve TG: A multivariate approach to the determination of faunal units among European butterfly species (Lepidoptera: Papilionoidea, Hesperioidea). Zoological Journal of the Linnean Society. 1991, 101: 1-49. 10.1111/j.1096-3642.1991.tb00884.x.
- Hewitt GM: Some genetic consequences of ice ages, and their role in divergence and speciation. Biological Journal of the Linnean Society. 1996, 58: 247-276.
- Taberlet P, Fumagalli L, Wust-Saucy A-G, Cosson J-F: Comparative phylogeography and postglacial colonization routes in Europe. Molecular Ecology. 1998, 7: 453-464. 10.1046/j.1365-294x.1998.00289.x.
- Hewitt GM: Genetic consequences of climatic oscillation in the Quaternary. Phil Trans R Soc Lond B. 2004, 359: 183-195. 10.1098/rstb.2003.1388.
- Schmitt T: Molecular biogeography of Europe: Pleistocene cycles and postglacial trends. Frontiers in Zoology. 2007, 4: 11. 10.1186/1742-9994-4-11.
- Hewitt GM: Post-glacial re-colonization of European biota. Biological Journal of the Linnean Society. 1999, 68: 87-112. 10.1111/j.1095-8312.1999.tb01160.x.
- Hewitt GM: The genetic legacy of the Quaternary ice ages. Nature. 2000, 405: 907-913. 10.1038/35016000.
- Cosson J-F, Hutterer R, Libois R, Sara M, Taberlet P, Vogel P: Phylogeographical footprint of the Strait of Gibraltar and Quaternary climatic fluctuations in the western Mediterranean: a case study with the greater white-toothed shrew, Crocidura russula (Mammalia: Soricidae). Molecular Ecology. 2005, 14: 1151-1162. 10.1111/j.1365-294X.2005.02476.x.
- Carranza S, Arnold EN, Pleguezuelos JM: Phylogeny, biogeography, and evolution of two Mediterranean snakes, Malpolon monspessulanus and Hemorrhois hippocrepis (Squamata, Colubridae), using mtDNA sequences. Molecular Phylogenetics and Evolution. 2006, 40: 532-546. 10.1016/j.ympev.2006.03.028.
- Carranza S, Harris DJ, Arnold EN, Batista V, Gonzalez de la Vega JP: Phylogeography of the lacertid lizard, Psammodromus algirus, in Iberia and across the Strait of Gibraltar. Journal of Biogeography. 2006, 33: 1279-1288. 10.1111/j.1365-2699.2006.01491.x.
- Fritz U, Barata M, Busack SD, Fritsch G, Castilho R: Impact of mountain chains, sea straits and peripheral populations on genetic and taxonomic structure of a freshwater turtle, Mauremys leprosa (Reptilia, Testudines, Geoemydidae). Zoologica Scripta. 2006, 35: 97-108. 10.1111/j.1463-6409.2005.00218.x.
- Franck P, Garnery L, Loiseau A, Oldroyd BP, Hepburn HR, Solignac M, Cornuet J-M: Genetic diversity of the honeybee in Africa: microsatellite and mitochondrial data. Heredity. 2001, 86: 420-430. 10.1046/j.1365-2540.2001.00842.x.
- Habel JC, Rödder D, Scalercio S, Meyer M, Schmitt T: Strong genetic cohesiveness between Italy and the Maghreb in four butterfly species. Biological Journal of the Linnean Society. 2010, 99: 818-830. 10.1111/j.1095-8312.2010.01394.x.
- Santucci F, Emerson B, Hewitt GM: Mitochondrial DNA phylogeography of European hedgehogs. Molecular Ecology. 1998, 7: 1163-1172. 10.1046/j.1365-294x.1998.00436.x.
- Seddon JM, Santucci F, Reeve NJ, Hewitt GM: DNA footprints of European hedgehogs, Erinaceus europaeus and E. concolor: Pleistocene refugia, postglacial expansion and colonization routes. Molecular Ecology. 2001, 10: 2187-2198. 10.1046/j.0962-1083.2001.01357.x.
- Colliard C, Sicilia A, Turrisi GF, Arculeo M, Perrin N, Stöck M: Strong reproductive barriers in a narrow hybrid zone of West-Mediterranean green toads (Bufo viridis subgroup) with Plio-Pleistocene divergence. BMC Evolutionary Biology. 2010, 10: 232. 10.1186/1471-2148-10-232.
- Veith M, Mayer C, Samraoui B, Barrosso DD, Bogaerts S: From Europe to Africa and vice versa: evidence for multiple intercontinental dispersal in ribbed salamanders (Genus Pleurodeles). Journal of Biogeography. 2004, 31: 159-171. 10.1111/j.1365-2699.2004.00957.x.
- Paulo OS, Pinto I, Bruford MW, Jordan WC, Nichols RA: The double origin of Iberian peninsular chameleons. Biological Journal of the Linnean Society. 2008, 75: 1-7.
- Tennent J: The butterflies of Morocco, Algeria and Tunisia. Gem Publishing Company. 1996, Wallingford.
- Tolman T, Lewington R: Field guide butterflies of Britain and Europe. Harper Collins Publishers. 1997, London.
- Asher J, Warren M, Fox R, Harding P, Jeffcoate G, Jeffcoate S: The millennium atlas of butterflies in Britain and Ireland. 2001, Oxford University Press, Oxford.
- García-Barros E, Munguira ML, Martín Cano J, Romo Benito H, Garcia-Pereira P, Maravalhas ES: Atlas de las mariposas diurnas de la Península Ibérica e islas Baleares (Lepidoptera: Papilionoidea and Hesperioidea). Monografias Sociedad Entomológica Aragonesa. 2004, 11.
- Buszko J: Atlas rozmieszczenia motyli dziennych w Polsce (Lepidoptera: Papilionidae, Hesperiidae). Edycja Turpress. 1997, Torun.
- Habel JC, Schmitt T, Müller P: The fourth paradigm pattern of postglacial range expansion of European terrestrial species: the phylogeography of the Marbled White butterfly (Satyrinae, Lepidoptera). Journal of Biogeography. 2005, 32: 1489-1497. 10.1111/j.1365-2699.2005.01273.x.
- Habel JC, Meyer M, El Mousadik A, Schmitt T: Africa goes Europa: the complete phylogeography of the Marbled White butterfly species complex Melanargia galathea/lachesis. Organisms, Diversity and Evolution. 2008, 8: 121-129. 10.1016/j.ode.2007.04.002.
- Schmitt T, Habel JC, Zimmermann M, Müller P: Genetic differentiation of the marbled white butterfly, Melanargia galathea, accounts for glacial distribution patterns and postglacial range expansion in southeastern Europe. Molecular Ecology. 2006, 15: 1889-1901. 10.1111/j.1365-294X.2006.02900.x.
- Swets K: Measuring the accuracy of diagnostic systems. Science. 1988, 240: 1285-1293. 10.1126/science.3287615.
- Nazari V, Hagen WT, Bozano GC: Molecular systematics and phylogeny of the Marbled Whites (Lepidoptera: Nymphalidae, Satyrinae, Melanargia Meigen). Systematic Entomology. 2009, 35: 132-147.
- Hein J, Schierup MH, Wiuf C: Gene genealogies, variation and evolution - a primer in coalescent theory. 2005, Oxford University Press, Oxford.
- Gibbard P, van Kolfschoten T: The Pleistocene and Holocene epochs. A geologic time scale. Edited by: Gradstein FM, Ogg JG, Smith AG. 2004, Cambridge University Press, Cambridge, 98: 441-452.
- Steinfartz S, Veith M, Tautz D: Mitochondrial sequence analysis of Salamandra taxa suggests old splits of major lineages and postglacial recolonizations of Central Europe from distinct source populations of Salamandra salamandra. Molecular Ecology. 2000, 9: 397-410. 10.1046/j.1365-294x.2000.00870.x.
- Carranza S, Wade E: Taxonomic revision of Algero-Tunisian Pleurodeles (Caudata: Salamandridae) using molecular and morphological data. Revalidation of the taxon Pleurodeles nebulosus (Guichenot, 1850). Zootaxa. 2004, 488: 1-24.
- Fromhage L, Vences M, Veith M: Testing alternative vicariance scenarios in Western Mediterranean discoglossid frogs. Molecular Phylogenetics and Evolution. 2003, 31: 308-322.
- Quante M: The changing climate: future. Relict species - phylogeography and conservation biology. Edited by: Habel JC, Assmann T. 2010, Springer, Heidelberg, 9-56.
- Schmitt T, Varga Z, Seitz A: Are Polyommatus hispana and Polyommatus slovacus bivoltine Polyommatus coridon (Lepidoptera: Lycaenidae)? The discriminatory value of genetics in the taxonomy. Organisms, Diversity and Evolution. 2005, 5: 297-307. 10.1016/j.ode.2005.01.001.
- Reinig WF: Chorologische Voraussetzungen für die Analyse von Formenkreisen. Syllegomena Biologica, Festschrift für O Kleinschmidt. 1950, 346-378.
- Lenk P, Fritz U, Joger U, Winks M: Mitochondrial phylogeography of the European pond turtle, Emys orbicularis (Linnaeus 1758). Molecular Ecology. 1999, 8: 1911-1922. 10.1046/j.1365-294x.1999.00791.x.
- Seddon JM, Reeve N, Hewitt GM: Caucasus Mountains divide postulated postglacial colonization routes in the white-breasted hedgehog, Erinaceus concolor. Journal of Evolutionary Biology. 2002, 15: 463-467. 10.1046/j.1420-9101.2002.00408.x.
- Pinceel J, Jordaens K, Pfenninger M, Backeljau T: Rangewide phylogeography of a terrestrial slug in Europe: evidence for Alpine refugia and rapid colonization after the Pleistocene glaciations. Molecular Ecology. 2005, 14: 1133-1150. 10.1111/j.1365-294X.2005.02479.x.
- Cooper SJ, Ibrahim KM, Hewitt GM: Postglacial expansion and genome subdivision in the European grasshopper Chorthippus parallelus. Molecular Ecology. 1995, 4: 49-60. 10.1111/j.1365-294X.1995.tb00191.x.
- Podnar M, Mayer W, Tvrtkovic N: Phylogeography of the Italian wall lizard, Podarcis sicula, as revealed by mitochondrial DNA sequences. Molecular Ecology. 2005, 14: 575-588. 10.1111/j.1365-294X.2005.02427.x.
- Canestrelli D, Cimmaruta R, Costantini V, Nascetti G: Genetic diversity and phylogeography of the Apennine yellow-bellied toad Bombina pachypus, with implications for conservation. Molecular Ecology. 2006, 15: 3741-3754. 10.1111/j.1365-294X.2006.03055.x.
- Stewart JR, Lister AM: Cryptic northern refugia and the origins of the modern biota. Trends in Ecology and Evolution. 2001, 16: 608-613. 10.1016/S0169-5347(01)02338-2.
- de Lattin G: Beiträge zur Zoogeographie des Mittelmeergebietes. Verhandlungen der deutschen Zoologischen Gesellschaft, Kiel. 1949, 143-151.
- Nève G, Verlaque R: Genetic differentiation between and among refugia. Relict species - phylogeography and conservation biology. Edited by: Habel JC, Assmann T. 2010, Springer, Heidelberg, 277-294.
- Habel JC, Augenstein B, Nève G, Rödder D, Assmann T: Population genetics and ecological niche modelling reveal high fragmentation and potential future extinction of the endangered relict butterfly Lycaena helle. Relict species - phylogeography and conservation biology. Edited by: Habel JC, Assmann T. 2010, Springer, Heidelberg, 417-440.
- Schmitt T, Seitz A: Allozyme variation in Polyommatus coridon (Lepidoptera: Lycaenidae): identification of ice-age refugia and reconstruction of post-glacial expansion. Journal of Biogeography. 2001, 28: 1129-1136. 10.1046/j.1365-2699.2001.00621.x.
- Gratton P, Konopinski MK, Sbordoni V: Pleistocene evolutionary history of the Clouded Apollo (Parnassius mnemosyne): genetic signatures of climate cycles and a 'time-dependent' mitochondrial substitution rate. Molecular Ecology. 2008, 17: 4248-4262. 10.1111/j.1365-294X.2008.03901.x.
- Magri D: Patterns of post-glacial spread and the extent of glacial refugia of European beech (Fagus sylvatica). Journal of Biogeography. 2008, 35: 450-463. 10.1111/j.1365-2699.2007.01803.x.
- Magri D, Vendramin GG, Comps B, Dupanloup I, Geburek T, Gomory D, Latalowa M, Litt T, Paule L, Roure JM, Tantau I, van der Knaap WO, Petit RJ, de Beaulieu JL: A new scenario for the Quaternary history of European beech populations: palaeobotanical evidence and genetic consequences. New Phytologist. 2006, 171: 199-221.
- Hampe A, Petit RJ: Conserving biodiversity under climate change: the rear edge matters. Ecology Letters. 2005, 8: 461-467. 10.1111/j.1461-0248.2005.00739.x.
- Habel JC, Dieker P, Schmitt T: Biogeographical connections between the Maghreb and the Mediterranean peninsulas of southern Europe. Biological Journal of the Linnean Society. 2009, 98: 693-703. 10.1111/j.1095-8312.2009.01300.x.
- Nei M: Genetic distances between populations. The American Naturalist. 1972, 106: 283-291. 10.1086/282771.
- Siegismund HR, Müller J: Genetic structure of Gammarus fossarum populations. Heredity. 1991, 66: 419-436. 10.1038/hdy.1991.52.
- Excoffier L, Laval G, Schneider S: Arlequin ver. 3.0: an integrated software package for population genetics data analysis. Evolutionary Bioinformatics Online. 2005, 1: 47-50.
- Saitou N, Nei M: The neighbor-joining method: a new method for reconstructing phylogenetic trees. Molecular Biology and Evolution. 1987, 4: 406-425.
- Felsenstein J: PHYLIP (Phylogeny Inference Package) Ver. 3.5.c. Department of Genetics, University of Washington. 1993, Seattle, Washington.
- Pritchard JK, Stephens M, Donnelly P: Inference of population structure using multilocus genotype data. Genetics. 2000, 155: 945-955.
- Guisan A, Zimmermann N: Predictive habitat distribution models in ecology. Ecological Modelling. 2000, 135: 147-186. 10.1016/S0304-3800(00)00354-9.
- Jeschke JM, Strayer DL: Usefulness of bioclimatic models for studying climate change and invasive species. Annals of the New York Academy of Sciences. 2008, 1134: 1-24. 10.1196/annals.1439.002.
- Elith J, Leathwick JR: Species distribution models: ecological explanation and prediction across space and time. Annual Reviews in Ecology, Evolution and Systematics. 2009, 40: 677-697. 10.1146/annurev.ecolsys.110308.120159.
- Waltari E, Hijmans RJ, Peterson AT, Nyári AS, Perkins SL, Guralnick RP: Locating Pleistocene refugia: comparing phylogeographic and ecological niche model predictions. PLoS ONE. 2007, 7: 1-11.
- Hijmans RJ, Guarino L, Jarvis A, O'Brien R, Mathur P, Bussink C, Cruz M, Barrantes I, Rojas E: DIVA-GIS version 5.2 manual. 2005.
- Nogués-Bravo D: Predicting the past distributions of species climatic niches. Global Ecology and Biogeography. 2009, 18: 521-531. 10.1111/j.1466-8238.2009.00476.x.
- Rödder D, Weinsheimer F, Lötters S: Molecules meet macroecology - combining species distribution models and phylogeographic studies. Zootaxa. 2010, 2426: 54-60.
- Hijmans RJ, Cameron SE, Parra JL, Jones PG, Jarvis A: Very high resolution interpolated climate surfaces for global land areas. International Journal of Climatology. 2005, 25: 1965-1978. 10.1002/joc.1276.
- Peterson AT, Nyári ÁS: Ecological niche conservatism and Pleistocene refugia in the thrush-like mourner, Schiffornis sp., in the Neotropics. Evolution. 2007, 62: 173-183.
- Busby JR: BIOCLIM - a bioclimatic analysis and prediction system. Nature conservation: cost effective biological surveys and data analysis. Edited by: Margules CR, Austin MP. 1991, CSIRO, Melbourne, 64-68.
- Beaumont LJ, Hughes L, Poulsen M: Predicting species distributions: use of climatic parameters in BIOCLIM and its impact on predictions of species' current and future distributions. Ecological Modelling. 2005, 186: 250-269.
- Heikkinen RK, Luoto M, Araújo MB, Virkkala R, Thuiller W, Sykes MT: Methods and uncertainties in bioclimatic envelope modeling under climate change. Progress in Physical Geography. 2006, 30: 751-777. 10.1177/0309133306071957.
- Rödder D, Lötters S: Niche shift versus niche conservatism? Climatic characteristics of the native and invasive ranges of the Mediterranean house gecko (Hemidactylus turcicus). Global Ecology and Biogeography. 2009, 18: 674-687. 10.1111/j.1466-8238.2009.00477.x.
- Rödder D, Schmidtlein S, Veith M, Lötters S: Alien invasive slider turtle in unpredicted habitat: a matter of niche shift or of predictors studied? PLoS ONE. 2009, 4: e7843. 10.1371/journal.pone.0007843.
- Rödder D, Lötters S: Explanative power of variables used in species distribution modelling: an issue of general model transferability or niche shift in the invasive greenhouse frog (Eleutherodactylus planirostris). Naturwissenschaften. 2010, 97: 781-796. 10.1007/s00114-010-0694-7.
- Phillips SJ, Anderson RP, Schapire RE: Maximum entropy modeling of species geographic distributions. Ecological Modelling. 2006, 190: 231-259. 10.1016/j.ecolmodel.2005.03.026.
- Phillips SJ, Dudík M: Modeling of species distributions with Maxent: new extensions and a comprehensive evaluation. Ecography. 2010, 31: 161-175.
- Pearce J, Ferrier S: An evaluation of alternative algorithms for fitting species distribution models using logistic regression. Ecological Modelling. 2000, 128: 128-147.
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Viruses infect methane-eating archaea beneath the seafloor

The intraterrestrials, they might be called. Strange creatures live in the deep sea, but few are odder than the viruses that inhabit deep ocean methane seeps and prey on single-celled microorganisms called archaea. The least understood of life's three primary domains, archaea thrive in the most extreme environments on the planet: near hot ocean rift vents, in acid mine drainage, in the saltiest of evaporation ponds and in petroleum deposits deep underground.

Virus in the deep blue sea

While searching the ocean's depths for evidence of viruses, scientists have found a remarkable new one, a virus that seemingly infects archaea that live beneath the ocean floor. The researchers were surprised to discover that the virus selectively targets one of its own genes for mutation, and that this capacity is also shared by archaea themselves. The findings appear today in a paper in the journal Nature Communications.

The project was supported by a National Science Foundation (NSF) Dimensions of Biodiversity grant to characterize microbial diversity in methane seep ecosystems. Dimensions of Biodiversity is supported by NSF's Directorates for Biological Sciences and Geosciences.

New information about life in ocean depths

"Life far beneath the Earth's subsurface is an enigma," said Matt Kane, program director in NSF's Division of Environmental Biology. "By probing deep into our planet, these scientists have discovered new information about Earth's microbes and how they evolve."

"Our study uncovers mechanisms by which viruses and archaea can adapt in this hostile environment," said David Valentine, a geoscientist at the University of California Santa Barbara (UCSB) and co-author of the paper. The results, he said, raise new questions about the evolution and interaction of the microbes that call the planet's interior home.

"It's now thought that there's more biomass inside the Earth than anywhere else, just living very slowly in this dark, energy-limited environment," said paper co-author Sarah Bagby of UCSB.

Using the submersible Alvin, Valentine and colleagues collected samples from a deep-ocean methane seep by pushing tubes into the ocean floor and retrieving sediments. The contents were brought back to the lab and fed methane gas, helping the methane-eating archaea in the samples to grow. When the team assayed the samples for viral infection, they discovered a new virus with a distinctive genetic fingerprint that suggested its likely host was methane-eating archaea.

Genetic sequence of new virus holds the key

The researchers used the genetic sequence of the new virus to chart other occurrences in global databases. "We found a partial genetic match from methane seeps off Norway and California," said lead author Blair Paul of UCSB. "The evidence suggests that this viral type is distributed around the globe in deep ocean methane seeps."

Further investigation revealed another unexpected finding: a small genetic element, known as a diversity-generating retroelement, that accelerates mutation of a specific section of the virus's genome. Such elements had been previously identified in bacteria and their viruses, but never among archaea or the viruses that infect them.

"These researchers have shown that cutting-edge genomic approaches can help us understand how microbes function in remote and poorly known environments such as ocean depths," said David Garrison, program director in NSF's Division of Ocean Sciences.
While the self-guided mutation element in the archaea virus resembles known bacterial elements, the researchers found that it has a divergent evolutionary history. "The target of guided mutation--the tips of the virus that make first contact when infecting a cell--is similar," said Paul. "But the ability to mutate those tips is an offensive countermeasure against the cell's defenses, a move that resembles a molecular arms race."

Unusual genetic adaptations

Having found guided mutation in a virus that infects archaea, the scientists reasoned that archaea themselves might use the same mechanism for genetic adaptation. In an exhaustive search, they identified parallel features in the genomes of a subterranean group of archaea known as nanoarchaea. Unlike the deep-ocean virus that uses guided mutation to alter a single gene, the nanoarchaea target at least four distinct genes.

"It's a new record," said Bagby. "Bacteria had been observed to target two genes with this mechanism. That may not seem like a huge difference, but targeting four is extraordinary."

According to Valentine, the genetic mutation that fosters these potential variations may be key to the survival of archaea beneath the Earth's surface. "The cell is choosing to modify certain proteins," he said. "It's doing its own protein engineering. While we don't yet know what those proteins are being used for, learning about the process can tell us something about the environment in which these organisms thrive."

Viral DNA sequencing was provided through a Gordon and Betty Moore Foundation grant. The research team also included scientists from the University of California, Los Angeles; the University of California, San Diego; and the U.S. Department of Energy's Joint Genome Institute.

Cheryl Dybas | EurekAlert!
<urn:uuid:cd1ba073-239d-43c7-b930-789e78830597>
3.875
1,665
Content Listing
Science & Tech.
35.614691
95,495,157
An astrophysics researcher, Billy Quarles, has identified the possible compositions of the seven planets in the TRAPPIST-1 system. By studying "thousands of numerical simulations to identify the planets stable for millions of years," Quarles concluded that six of the seven planets are consistent with an Earth-like composition, the science news site Phys.org reported. The exception is TRAPPIST-1f, whose mass is roughly 25 percent water, which suggests that TRAPPIST-1e may be the best candidate for future habitability studies.

In May last year, astronomers from MIT announced the discovery of an extremely unusual star system near Earth: TRAPPIST-1, only 40 light-years away in the direction of the constellation Aquarius. It was reported then that three planets revolving around this red dwarf, each presumably with a mass comparable to Earth's, lie inside the so-called "zone of life," where water can exist in liquid form.

The scientists were studying the spectrum of TRAPPIST-1's light, trying to understand the composition of its planets' atmospheres, when they unexpectedly discovered that there were actually not three but seven planets, six of them within the zone of life. All of these planets have almost "terrestrial" dimensions and a Martian or terrestrial climate, except the first planet, TRAPPIST-1b, which is more like Venus than Mars or Earth.

However, further investigation revealed that one planet, TRAPPIST-1f, located in the center of the zone of life and considered one of the main candidates for the role of an Earth twin, is actually an "ocean planet." Around 20-25 percent of its mass is water; due to the planet's proximity to the star, this water would be heated to very high temperatures and would cover the planet with a dense cloud of vapor, making the existence of life there impossible.

All the other six planets are more similar to Earth: the proportion of water in their mass should not exceed a few percent, and their interiors should be composed of rocks similar in composition and density to terrestrial minerals. The most suitable for life, therefore, is not TRAPPIST-1f but its smaller neighbor TRAPPIST-1e, located slightly closer to the star. In addition, life could also exist on TRAPPIST-1g, which completes one revolution around the star in just under 13 days.

The scientists plan to focus their efforts on these two planets and hope to obtain more accurate data on their composition and suitability for life.
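As a rough illustration of how researchers place planets in or out of the "zone of life," one can estimate each planet's equilibrium temperature from the star's size and temperature and the orbital distance. In the sketch below, the stellar parameters and semi-major axes are approximate literature values, not numbers from the article, and the albedo is a guess:

```python
# Equilibrium-temperature sketch for the TRAPPIST-1 planets.
# Stellar parameters and orbits are approximate literature values
# (assumptions, not taken from the article above).
import math

T_STAR = 2550.0              # effective temperature of TRAPPIST-1 [K]
R_STAR = 0.117 * 6.957e8     # stellar radius [m] (0.117 solar radii)
AU = 1.496e11                # astronomical unit [m]

# Approximate semi-major axes [AU] for planets b-h.
ORBITS = {"b": 0.0115, "c": 0.0158, "d": 0.0223, "e": 0.0293,
          "f": 0.0385, "g": 0.0469, "h": 0.0619}

def t_equilibrium(a_au: float, albedo: float = 0.3) -> float:
    """Blackbody equilibrium temperature at orbital distance a."""
    a = a_au * AU
    return T_STAR * math.sqrt(R_STAR / (2.0 * a)) * (1.0 - albedo) ** 0.25

for name, a in ORBITS.items():
    print(f"TRAPPIST-1{name}: T_eq ~ {t_equilibrium(a):.0f} K")
```

Under these assumptions the estimates run from roughly 360 K for TRAPPIST-1b down to roughly 155 K for h, with the middle planets bracketing Earth's comparable figure of about 255 K, which is the usual back-of-the-envelope reason several of them are considered temperate.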
<urn:uuid:9841bb67-7bbc-4979-bb5a-1c769c7918d1>
3.6875
561
Personal Blog
Science & Tech.
33.460323
95,495,171
There are times when you can literally hear the screech of millions of mosquitoes caught in this eerie spider web.

Officials at Lake Tawakoni State Park say the sprawling spider web is a big attraction for some visitors, while others will not go anywhere near it. Now entomologists are debating the origin and rarity of the web that blankets several trees, shrubs and the ground along a 200-yard (182-metre) stretch of trail in a North Texas park. The webs bring to mind the terrifyingly large spiders featured in the Harry Potter movies.

"At first, it was so white it looked like fairyland," said Donna Garde, superintendent of the park about 45 miles (72 kilometres) east of Dallas. "Now it's filled with so many mosquitoes that it's turned a little brown. There are times you can literally hear the screech of millions of mosquitoes caught in those webs."

Spider experts say the web may have been constructed by social cobweb spiders, which work together, or could be the result of a mass dispersal in which the arachnids spin webs to spread out from one another.

"I've been hearing from entomologists from Ohio, Kansas, British Columbia - all over the place," said Mike Quinn, an invertebrate biologist with the Texas Parks and Wildlife Department who first posted photos online.

Herbert A. "Joe" Pase, a Texas Forest Service entomologist, said the massive web is very unusual. "From what I'm hearing it could be a once-in-a-lifetime event," he said.

But John Jackman, a professor and extension entomologist for Texas A&M University, said he hears reports of similar webs every couple of years. "There are a lot of folks that don't realise spiders do that," said Jackman, author of "A Field Guide to the Spiders and Scorpions of Texas." "Until we get some samples sent to us, we really won't know what species of spider we're talking about," Jackman said.

Garde invited the entomologists out to the park to get a firsthand look at the giant web. "Somebody needs to come out that's an expert. I would love to see some entomology intern come out and study this," she said.

Park rangers said they expect the web to last until fall, when the spiders will start dying off.
<urn:uuid:6b04fa3a-4560-4cf9-84bd-421c1d0091e0>
2.75
500
Personal Blog
Science & Tech.
54.00464
95,495,172
By: George Thomson
343 pages, 85 b/w illustrations, tables

The Meadow Brown butterfly (Maniola jurtina) has been the subject of research since the middle of last century. Aspects of its genetics, morphology and ecology have fascinated scientists and naturalists alike. It has been drawn and painted by artists since the Middle Ages. Its relatives in the tribe Maniolini (the genera Maniola, Pyronia, Aphantopus, Hyponephele and Cercyonis) have not been given as much attention, and differences between these insects are not well understood.

The main part of this book is a PhD thesis, written in 1987, the outcome of a six-year study of, and lifelong interest in, the butterflies of the tribe Maniolini from genetic and morphological perspectives, leading to some suggestions on the relationships between the genera and species. Although we live in a different world now, especially in the light of more recent DNA techniques, the thesis still attracts a great deal of attention and forms a useful basis for further study.
<urn:uuid:248a8631-cec3-4eaa-923a-aad6a4d99b6d>
2.546875
309
Product Page
Science & Tech.
41.024279
95,495,193
Makhlouf M. Makhlouf
Richard D. Sisson Jr.
Danielle Lynn Cote

A castable alloy, i.e., one that flows easily to fill the entire mold cavity and also has resistance to hot tearing during solidification, must invariably contain a sufficient amount of a eutectic structure. For this reason, most traditional aluminum casting alloys contain silicon, because the aluminum-silicon eutectic imparts to the alloy excellent casting characteristics. However, the solidus temperature in the Al-Si system does not exceed 577°C, and the major alloying elements (i.e., zinc, magnesium, and copper) used with silicon in these alloys further lower the solidus temperature. Also, these elements have high diffusivity in aluminum and so, while they enhance the room-temperature strength of the alloy, they are not useful at elevated temperatures. Considering nickel-base superalloys, whose mechanical properties are retained up to temperatures that approach 75% of their melting point, it is conceivable that castable aluminum alloys can be developed on the same basis so that they are useful at temperatures approaching 350°C.

A castable aluminum alloy intended for high-temperature applications must contain a eutectic structure that is stable at temperatures higher than 600°C, and must contain second-phase precipitate particles that are thermodynamically stable at the service temperature. Transition metal trialuminides with the general chemical formula AlxTMy, in which TM is a transition metal, are excellent candidates for both the eutectic structure and the precipitate particles. In this research, the use of transition metals in the constitution of aluminum casting alloys is investigated with emphasis on the morphology, crystallography, and mechanisms of formation of the various phases.

Worcester Polytechnic Institute
Materials Science & Engineering

Fan, Y. (2015). Precipitation Strengthening of Aluminum by Transition Metal Aluminides. Retrieved from https://digitalcommons.wpi.edu/etd-dissertations/209

Keywords: Al-Mn, Al-Zr-V, Al-Zr, Al-Ni, Transition Metal, Precipitation hardening
<urn:uuid:e9e5529a-edf8-4749-baa6-724000653816>
2.71875
513
Academic Writing
Science & Tech.
23.06354
95,495,206
Weighing twice as much as the Sun, PSR J0348+0432 is the most massive neutron star measured to date. Together with a short orbital period of only 2.5 hours, the system provides insight into binary stellar evolution and the emission of gravitational radiation.

[Artist's impression of the PSR J0348+0432 binary system. The pulsar (with radio beams) is extremely compact, leading to a strong distortion of space-time (illustrated by the green mesh). The white-dwarf companion is shown in light blue. Credit: Science / J. Antoniadis (MPIfR)]

The energy loss through this radiation has already been detected in the radio observations of the pulsar, making it a laboratory for General Relativity in extreme conditions. The findings are in excellent agreement with Einstein's theory.

Imagine half a million Earths packed into a sphere 20 kilometers in diameter, spinning faster than an industrial kitchen blender. These extreme conditions, almost unimaginable by human standards, are met in a neutron star – a type of stellar remnant formed in the aftermath of a supernova explosion. Neutron stars often catch the attention of astronomers because they offer the opportunity to test physics under unique conditions. They were first discovered almost half a century ago as pulsars, which emit radio pulses like a lighthouse. Pulsar research has been honored with two Nobel prizes, one for their discovery (1974) and one for the first indirect detection of gravitational waves (1993) – a consequence of Einstein's theory of General Relativity.

With these masses at hand, one can calculate the amount of energy taken away from the system by gravitational waves, causing the orbital period to shrink. The team immediately realized that this change in the orbital period should be visible in the radio signals of the pulsar and turned its full attention to PSR J0348+0432, using the three largest single-dish radio telescopes on Earth (Fig. 2). "Our radio observations with the Effelsberg and Arecibo telescopes were so precise that by the end of 2012 we could already measure a change in the orbital period of 8 microseconds per year, exactly what Einstein's theory predicts," states Paulo Freire, scientist at MPIfR. "Such measurements are so important that the European Research Council has recently funded BEACON, a new state-of-the-art system for the Effelsberg radio telescope."

In terms of gravity, PSR J0348+0432 is a truly extreme object, even compared to other pulsars which have been used in high-precision tests of Einstein's general relativity. At its surface, for example, it has a gravitational strength that is more than 300 billion times stronger than that on Earth. In the center of the pulsar, more than one billion tons of matter is squeezed into a volume the size of a sugar cube. These numbers nearly double the ones found in other 'pulsar gravity labs'. In the language of general relativity, astronomers were able for the first time to precisely investigate the motion of an object with such a strong space-time curvature (see Fig. 1). "The most exciting result for us was that general relativity still holds true for such an extreme object," says Norbert Wex, a theoretical astrophysicist in MPIfR's fundamental physics research group. In fact, there are alternative theories that make different predictions and are therefore now ruled out. In this sense, PSR J0348+0432 is taking our understanding of gravity even beyond the famous 'Double Pulsar', J0737-3039A/B, which was voted one of the top ten scientific breakthroughs of 2004 by the journal 'Science'.
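The quoted decay rate can be sanity-checked against the textbook quadrupole formula for gravitational-wave emission (Peters 1964). The sketch below is a back-of-the-envelope check, not the MPIfR analysis itself; it uses the published masses and orbital period, rounded, and treats the orbit as circular (it is very nearly so):

```python
# Rough check of the GR-predicted orbital decay of PSR J0348+0432,
# using Peters' (1964) quadrupole formula for a circular orbit.
import math

G = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
C = 2.998e8          # speed of light [m/s]
M_SUN = 1.989e30     # solar mass [kg]

m_p = 2.01 * M_SUN   # pulsar mass (published value, rounded)
m_c = 0.172 * M_SUN  # white-dwarf companion mass
p_b = 2.46 * 3600.0  # orbital period [s] (~2.5 hours)

# dP/dt for eccentricity e = 0:
p_dot = (-192.0 * math.pi / 5.0
         * (2.0 * math.pi * G / p_b) ** (5.0 / 3.0)
         * m_p * m_c / (m_p + m_c) ** (1.0 / 3.0)
         / C ** 5)

print(f"dP/dt ~ {p_dot:.2e} s/s")                    # about -2.6e-13
print(f"      ~ {p_dot * 3.156e7 * 1e6:.1f} us/yr")  # about -8 us/yr
```

With these inputs the formula lands at roughly minus 8 microseconds per year, matching the measured shrinkage Freire describes above.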
BEACON: The Effelsberg observations were part of "BEACON", a 1.9-million-Euro project funded by the European Research Council aimed at pushing tests of gravity theories into new territories. Paulo Freire/MPIfR is the principal investigator of BEACON. The project has funded a state-of-the-art instrument to be installed at Effelsberg in the coming months that will target the pulsar with the aim of substantially improving the accuracy of the published results.

Norbert Junkes | Max-Planck-Institut
<urn:uuid:6947c501-7212-41f7-99a0-184dea332144>
3.671875
1,621
Content Listing
Science & Tech.
39.195021
95,495,236
Jupiter boasts some of the most spectacular auroras in the solar system—vast, super-energetic fields of light that are permanently on display and bigger than planet Earth. Lately, scientists say, the Jovian lights have been even more magnificent than usual.

Researchers are matching up ultraviolet imagery taken by the Hubble telescope with data from spacecraft Juno, which is set to enter Jupiter's orbit next week. Their goal is to better understand how the solar wind affects the planet's auroras.

"These auroras are very dramatic and among the most active I have ever seen," said Jonathan Nichols, an astronomer at the University of Leicester, in a statement on Thursday. "It almost seems as if Jupiter is throwing a firework party for the imminent arrival of Juno."

The latest images build on an earlier collection of photos taken since Jupiter's auroras were first discovered in 1979. "Some months ago, we thought we had some idea of what planets were like," the planetary scientist Laurence Soderblom told newspapers at the time, "and we discovered how narrow our vision really was."

Scientists already know that Jupiter's auroras are caused by more than just solar storms. The planet's gigantic magnetosphere also brightens the otherworldly lights with a constant stream of mega-intense charged particles. And all this is happening on a huge scale. Think of it this way: If Jupiter's enormous magnetic field were visible to the eye, scientists say, it would appear from Earth to be the same size as the sun—even though it is five times farther away.

Jupiter also snags some additional charged particles from Io, one of its volcano-strewn moons. The gravitational tension between Jupiter and Io causes volcanic reactions on the moon, which then spews bursts of electrically charged atoms into space, further feeding Jupiter's auroras.

If all goes as planned in the coming months, Juno will soon transmit a trove of information back to Earth that may help reveal more detail about the mechanics of Jupiter's dazzling lights. If it were possible to stand on the surface of the planet and look up, Jupiter's auroras would ignite the entire sky. That is, of course, if you could see in ultraviolet. Mathias Jäger, a spokesman for ESA and Hubble, put it to me this way: "With the naked eye, the auroras would barely be visible—if at all."

Then again, Juno may be able to catch a rare glimpse of Jupiter's lights without ultraviolet assistance. If it does, they're likely to appear red in color. "We can't see the visible auroras from Earth as the planet's disc is too bright, but they've been observed in the night side by Galileo," Nichols told me, referring to the spacecraft that traveled to Jupiter in the 1990s. "Hopefully Juno will get a great view from a vantage point above the poles!"
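That apparent-size claim is easy to check with small-angle geometry. The numbers in the sketch below are rough round figures chosen for illustration, not values from the article:

```python
# Sanity check: how big would Jupiter's magnetosphere have to be to
# look as large as the Sun from Earth? Rough round numbers throughout.
SUN_DIAMETER_KM = 1.39e6
EARTH_SUN_KM = 1.496e8                 # 1 AU
JUPITER_DIST_KM = 5 * EARTH_SUN_KM     # "five times farther away"

sun_angle = SUN_DIAMETER_KM / EARTH_SUN_KM   # radians, small-angle approx.
required_km = sun_angle * JUPITER_DIST_KM    # diameter for equal angle

print(f"Sun's apparent size: {sun_angle:.4f} rad (~0.53 deg)")
print(f"Required magnetosphere diameter: {required_km:.2e} km")  # ~7e6 km
```

That works out to about seven million kilometres, which is indeed the order of magnitude usually quoted for the sunward extent of Jupiter's magnetosphere.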
<urn:uuid:4480e60e-f190-48cd-aa55-c330ccd762ea>
3.34375
661
News Article
Science & Tech.
48.058314
95,495,237
By adding a few modifications to their successful wastewater fuel cell, researchers have coaxed common bacteria to produce hydrogen in a new, efficient way.

Bruce Logan and colleagues at Penn State University had already shown success at using microbes to produce electricity. Now, using starter material that could theoretically be sourced from a salad bar, the researchers have coaxed those same microbes to generate hydrogen. By tweaking their design, improving conditions for the bacteria, and adding a small jolt of electricity, they increased the hydrogen yield to a new record for this type of system.

"We achieved the highest hydrogen yields ever obtained with this approach from different sources of organic matter, such as yields of 91 percent using vinegar (acetic acid) and 68 percent using cellulose," said Logan. In certain configurations, nearly all of the hydrogen contained in the molecules of source material converted to useable hydrogen gas, an efficiency that could eventually open the door to bacterial hydrogen production on a larger scale. Logan and lead author Shaoan Cheng announced their results in the Nov. 12, 2007, online version of Proceedings of the National Academy of Sciences.

"Bruce Logan is a clear leader in this area of research on sustainable energy," said Bruce Hamilton, director of NSF's environmental sustainability program and the officer overseeing Logan's research grant. "Advances in sustainable energy capabilities are of paramount importance to our nation's security and economic well-being. We have been supporting his cutting-edge research on microbial fuel cells for a number of years and it is wonderful to see the outstanding results that he continues to produce."

Other systems produce hydrogen on a larger scale, but few if any match the new system for energy efficiency. Even with the small amount of electricity applied, the hydrogen ultimately provides more energy as a fuel than the electricity needed to drive the reactor. Incorporating all energy inputs and outputs, the overall efficiency of the vinegar-fueled system is better than 80 percent, far better than the efficiency for generation of the leading alternative fuel, ethanol. Even most electrolysis techniques, methods to extract hydrogen from water using electricity, pale in comparison to the new method.

"We can do that by using the bacteria to efficiently extract energy from the organic matter," said Logan. By perfecting the environment for the bacteria to do what they already do in nature, the new approach can be three to ten times more efficient than standard electrolysis.

Additional information about the new technology and how it works can be found in the Penn State press release at http://

Josh Chamot | EurekAlert!
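The "overall efficiency" figure folds together two inputs: the electrical energy applied to the cell and the chemical energy of the organic feedstock. A minimal sketch of that bookkeeping, with illustrative placeholder numbers rather than the study's actual measurements:

```python
# Energy-balance sketch for a microbial electrolysis cell of this
# kind. All numbers are illustrative placeholders, not data from the
# Penn State study; the point is how "overall efficiency" is defined.
def overall_efficiency(e_h2_out: float, e_electric_in: float,
                       e_substrate_in: float) -> float:
    """Energy in the harvested H2 divided by all energy inputs."""
    return e_h2_out / (e_electric_in + e_substrate_in)

# Hypothetical example: 10 kJ of hydrogen recovered from 2 kJ of
# applied electricity plus 10 kJ of chemical energy in the acetate.
eff = overall_efficiency(e_h2_out=10.0, e_electric_in=2.0,
                         e_substrate_in=10.0)
print(f"overall efficiency ~ {eff:.0%}")  # ~83% with these inputs
```

The point of the design is visible in the ratio: because the hydrogen carries out more energy than the electricity put in, the cell acts as a net energy gain on the electrical side, unlike conventional water electrolysis.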
<urn:uuid:6b1f2fd0-2772-46b3-8bda-8c38b3ab619c>
3.5625
1,105
Content Listing
Science & Tech.
34.023161
95,495,252
A study of human–mammal interaction across the globe found animals are more prone to take to the night around humans. Jason G. Goldman reports.

Some moth species have evolved long wing tails that flutter and twist as the moth flies, which distract hungry bats. Christopher Intagliata reports.

Are consciousness, free will and God insoluble mysteries?

His mummified remains show he ate meat and a poisonous fern before his demise.

Animals' inner lives are stranger than we can imagine.

Subtle mutations can undermine our ability to fend off a specific bug.

The coffin, discovered in Alexandria, Egypt, is a rare example of an unopened tomb.

Microscopic wear patterns on fossil teeth reveal what our ancestors ate—and provide insights into how climate change shaped human evolution.

2.1-million-year-old stone tools suggest hominins reached East Asia much earlier than thought.

Wine book author Kevin Begos explains that just a few varieties of wine grapes dominate the industry, which leaves them vulnerable to potentially catastrophic disease outbreaks.

North America's first domesticated dogs died out after European colonization, but they share a genetic link to a transmissible tumor spread globally.

Iridescence appears to break up the recognizable shape of objects—making them harder to spot. Karen Hopkin reports.

By analyzing 200 surgeries, anthropologists found mixed-gender operating room teams exhibited the highest levels of cooperation. Christopher Intagliata reports.

Most invertebrates get smaller on average in cities, although a few very mobile species respond to urbanization by growing.

Listeners to a person letting loose with a roar can accurately estimate the size and formidability of the human noise maker. Christopher Intagliata reports.

With its tongue attached to the bottom of its mouth, the dinosaur probably ate like modern crocodiles.

Certain motifs in swamp sparrow songs can last hundreds, even thousands of years—evidence of a cultural tradition in the birds. Christopher Intagliata reports.

Blood relations may be the key factor for mole rats, meerkats and others. But how do humans fit in?

Elastic springs help tiny animals stay fast and strong. New work is finding what size critters must be to benefit from the springs.

Herbicides are under evolutionary threat. Can modern agriculture find a new way to fight back?
<urn:uuid:0fa683fc-df56-439c-a83a-07872a10fe17>
2.796875
483
Content Listing
Science & Tech.
33.140221
95,495,257
The discovery of the single top confirms important parameters of particle physics, including the total number of quarks, and has significance for the ongoing search for the Higgs particle at Fermilab's Tevatron, currently the world's most powerful operating particle accelerator.

Previously, top quarks had only been observed when produced by the strong nuclear force. That interaction leads to the production of pairs of top quarks. The production of single top quarks, which involves the weak nuclear force and is harder to identify experimentally, has now been observed, almost 14 years to the day after the discovery of the top quark in 1995.

Searching for single-top production makes finding a needle in a haystack look easy. Only one in every 20 billion proton-antiproton collisions produces a single top quark. Even worse, the signal of these rare occurrences is easily mimicked by other "background" processes that occur at much higher rates.

"Observation of the single top quark production is an important milestone for the Tevatron program," said Dr. Dennis Kovar, Associate Director of the Office of Science for High Energy Physics at the U.S. Department of Energy. "Furthermore, the highly sensitive and successful analysis is an important step in the search for the Higgs."

Discovering the single top quark production presents challenges similar to the Higgs boson search in the need to extract an extremely small signal from a very large background. Advanced analysis techniques pioneered for the single top discovery are now in use for the Higgs boson search. In addition, the single top and the Higgs signals have backgrounds in common, and the single top is itself a background for the Higgs particle.

To make the single-top discovery, physicists of the CDF and DZero collaborations spent years independently combing through the results of proton-antiproton collisions recorded by their respective experiments. Each team identified several thousand collision events that looked the way experimenters expect single top events to appear. Sophisticated statistical analysis and detailed background modeling showed that a few hundred collision events produced the real thing.

On March 4, the two teams submitted their independent results to Physical Review Letters. The two collaborations had earlier reported preliminary results on the search for the single top. Since then, experimenters have more than doubled the amount of data analyzed and sharpened selection and analysis techniques, making the discovery possible. For each experiment, the probability that background events have faked the signal is now only one in nearly four million, allowing both collaborations to claim a bona fide discovery that paves the way to more discoveries.

"I am thrilled that CDF and DZero achieved this goal," said Fermilab Director Pier Oddone. "The two collaborations have been searching for this rare process for the last fifteen years, starting before the discovery of the top quark in 1995. Investigating these subatomic processes in more detail may open a window onto physics phenomena beyond the Standard Model."

Kurt Riesselmann | EurekAlert!
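A note on the "one in nearly four million" figure: it is essentially the conventional five-sigma discovery threshold of particle physics. A minimal check, assuming SciPy is available (the threshold is a community convention, not a number taken from this article):

```python
# Relate the quoted background-fluctuation probability to "sigmas".
from scipy.stats import norm

p_value = 1.0 / 3.7e6   # "one in nearly four million"
print(f"significance ~ {norm.isf(p_value):.2f} sigma")  # ~5.0 sigma

# And the other way around: the one-sided Gaussian tail beyond 5 sigma.
print(f"P(>5 sigma) = {norm.sf(5.0):.2e}")  # ~2.9e-7, ~1 in 3.5 million
```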
<urn:uuid:062bf2dc-295c-4b97-8afb-f8eca677216b>
3.546875
1,205
Content Listing
Science & Tech.
36.252077
95,495,269
Jun 28 2018

Tautomers are formed by an interconvertible reaction called tautomerization, whereby there is a formal migration of a hydrogen atom along with a switch of a single bond and an adjacent double bond. A common example is keto-enol tautomerism.

During tautomerization a chemical equilibrium of the tautomers is reached, which depends on several factors, including pH, temperature and solvent. Tautomerizations are catalyzed by bases (deprotonation, formation of a delocalized anion, and protonation at a different position of the anion) and by acids (protonation, formation of a delocalized cation, and deprotonation at a different position adjacent to the cation).

ICM will only generate energetically favorable tautomers. Generally, tautomers that involve a change in hybridization state are less stable, so ICM will not generate these, reducing the number of scaffolds generated. For example, the keto form is more stable than the enol by ~14 kcal/mol, therefore ICM will not generate the enol form.

The tautomer-generation heat-of-formation model (MoldHf) was trained on data from http://webbook.nist.gov (e.g., http://webbook.nist.gov/cgi/cbook.cgi?ID=000050-00-0&Units=SI&cTG=on). We used PLS regression and our custom fingerprints to train the model.

To generate tautomeric conformations of your compound:
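The ICM commands themselves are not included in this excerpt. As a sketch of the same idea with open-source tooling (RDKit here is purely an illustration, not part of ICM), a rule-based enumerator generates the accessible tautomers and picks a canonical, typically energetically preferred, form:

```python
# Tautomer enumeration with RDKit, as an open-source illustration of
# the concept described above (not the ICM workflow itself).
from rdkit import Chem
from rdkit.Chem.MolStandardize import rdMolStandardize

mol = Chem.MolFromSmiles("CC(=O)C")   # acetone, a simple ketone
enumerator = rdMolStandardize.TautomerEnumerator()

for taut in enumerator.Enumerate(mol):          # accessible tautomers
    print(Chem.MolToSmiles(taut))

# The canonical (typically the energetically preferred) tautomer:
print("canonical:", Chem.MolToSmiles(enumerator.Canonicalize(mol)))
```

For a simple ketone this should yield both the keto and enol forms, with the canonicalizer returning the keto form, mirroring the stability argument above.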
<urn:uuid:06981ba7-1fb3-4928-9ef1-2172f965570f>
2.84375
403
Documentation
Science & Tech.
38.713219
95,495,270
A View from Emerging Technology from the arXiv

New Image Database Could Help Explain Evolution of Human Eye

Images from the birthplace of the human race could help explain some mysteries of human vision, say scientists.

The human eye is an amazing piece of machinery. It can distinguish some ten million colours thanks to the remarkable light-sensitive rod and cone cells that populate the back of the eye. These cells neatly divide up the process of vision. The rods, some 90 million of them, have a peak sensitivity to reddish light and work best in low light, providing our night vision. The cones, on the other hand, some 5 million of them, come in three types. These are sensitive to long wavelengths (ie red), medium wavelengths (green) and short wavelengths (blue), producing colour vision. They are designated L, M and S cones respectively.

But here's the puzzle: S cones are rare, making up less than 10 per cent of the total. The L and M cones are much more common, but their ratio can vary dramatically. People with otherwise normal colour vision can have L:M ratios of between 1:4 and 15:1. (Other primates have different ratios, although the ratio in new world monkeys is similar to ours.) The question that leaves biologists scratching their heads is why.

One idea is that this distribution of cone cell types is the result of an adaptation to the environment in which the human eye evolved. So if we can work out what that environment was like, we could get a handle on the forces that shaped our visual system.

Today, Gasper Tkacik at the University of Pennsylvania in Philadelphia and several pals reveal an interesting approach to solving this problem. Their idea is to find a place like the one in which humans evolved and to measure the lighting conditions found there. And by comparing a big enough sample with measurements from elsewhere, it should be possible to work out how and why the human eye evolved with its curious ratios of cone cell types.

So where to look? The consensus view is that humans diverged from other hominids about 3 million years ago in Africa. One place thought to be representative of the conditions that existed then is the Okavango Delta in Botswana, where the Okavango River empties into a swamp at the edge of the Kalahari desert, forming the world's largest inland river delta. (Most of the water evaporates.) If humans evolved in conditions like this, then it's possible that the lighting conditions there might give us a clue about mysteries like the cone cell ratio.

So Tkacik and buddies travelled to Botswana and took 5000 six-megapixel images of the area using a Nikon D70 digital SLR. They then carefully calibrated them to accurately capture the statistics of the light reaching the camera sensors and put them all on the web.

Today, they describe the various ways in which they've built this database and announce that it is publicly available under a creative commons license for research in computer vision, the psychophysics of perception and visual neuroscience. The image above is one example.

The idea is that comparing the statistics associated with these images with those from other areas will produce some insight into the evolution of the visual system.

There is another possibility of course: that the peculiar characteristics of the human eye are the result of some much more dramatic incident, such as the Toba supervolcano eruption 70,000 years ago, which may have reduced the global human population to less than 15,000.
Other bottlenecks at other times are thought to have reduced the number of humans to less than 2000. Perhaps the human visual system is optimised to survive during one of these catastrophes. Finding those lighting conditions on Earth today might be significantly harder.

Whatever the cause, the images from Botswana are an interesting first step in rediscovering the conditions in which we evolved and working out why we are the way we are.

Ref: arxiv.org/abs/1102.0817: Natural Images From The Birthplace Of The Human Eye
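As an illustration of the kind of low-level statistics such a calibrated database supports, the sketch below computes per-channel means and the channel covariance of a single RGB image. Here 'scene.png' is a hypothetical file name, and any image-reading library would do:

```python
# Per-channel means and RGB channel covariance of one image.
import numpy as np
from imageio.v3 import imread  # assumed available; any reader works

img = imread("scene.png").astype(np.float64)  # H x W x 3 pixel array
pixels = img.reshape(-1, 3)                   # one row per pixel

print("channel means:", pixels.mean(axis=0))
print("channel covariance:\n", np.cov(pixels, rowvar=False))
```

Aggregating statistics like these over thousands of scenes is, in essence, how one would compare the Okavango lighting environment against images taken elsewhere.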
<urn:uuid:e9ae83fc-4657-43e8-8cf7-72bcd461d097>
3.890625
869
Truncated
Science & Tech.
45.147617
95,495,286
Software Engineering } 007 } Agile versus Spiral } This lecture discusses the key ideas of the agile framework and its differences from the traditional waterfall and spiral software development methodologies.

The waterfall model is a very early model of software development. As software systems became complex, new techniques were needed to manage the software development process. In the waterfall model, costs increase dramatically if we try to implement design changes in the very last phase of the design cycle, and it is often impossible to accommodate design changes in later phases of development. Hence the concept of iterative software engineering models came into the picture. These too struggled with large, complex software developments, e.g., operating systems. Later the spiral model came into the picture; it is a combination of the waterfall and iterative methodologies, and it significantly improved quality and project cost. In fact, spiral is so far the best technique out there. Many other models have been proposed along the same lines, but they turned out to be versions of the spiral methodology.

In recent years, as software systems and IT infrastructure needs grew rapidly, more and more service companies popped up like mushrooms, making quick bucks by providing fast food to organizations that have almost no knowledge of IT systems. These organizations needed a framework in which they could accommodate changes from the customer more quickly without focusing on quality, and so the agile framework came into the picture.

But do not go by the fancy name of the agile framework. The focus of such a framework is to make the customer happy by accommodating customer demands quickly during the design. This can lead to a greater danger to the maintenance and sustainability of the deployed system. But think about the business of service companies: they charge more money for the maintenance of the software they deployed quickly under the agile framework while making quick bucks. As time goes on, they make more money from the customer through maintenance. It is like billing the customer throughout the life cycle of the software, costing the customer more in the long run. Does that make sense?

Agile methodologies do not, in fact, focus on quality, and almost no planning is required. But is that the proper way of doing things? I say not. The goal should be: "Design once and it keeps working throughout its life without touching it," which is very similar to the mindset of German engineering – the best in the world!

Software developed and deployed under the agile framework is analogous to giving fast food to the customer, which is not healthy: the customer will get sick and will have to spend money on medication and doctors. Quality suffers!! Software developed under the spiral framework is the best so far; it is like serving healthy food with delicacy. It is made once and works forever. Spiral is the best choice!
<urn:uuid:f5b76124-bc2a-4e8e-b059-ce825f46d144>
2.984375
565
Personal Blog
Software Dev.
35.596507
95,495,288