Dataset columns:
- text: string, length 174 to 655k characters
- id: string, length 47
- score: float64, range 2.52 to 5.25
- tokens: int64, range 39 to 148k
- format: string, 24 classes
- topic: string, 2 classes
- fr_ease: float64, range -483.68 to 157
- __index__: int64, range 0 to 1.48M
Applications and Projects: Atmosphere According to the current plans of space agencies and satellite operators, in the course of the next two decades over 100 satellite-based instruments will be providing a wealth of data about the earth's atmosphere from ground level up to the edge of space. The spectrum extends from detecting trace gases and aerosols to clouds, precipitation and radiation to global measurement of temperature and wind. In parallel, powerful data communication and analysis systems will be established in order to make these data available in a standardized format and ready to use worldwide in near-real-time. ENDORSE is involved in the user-driven creation of so-called 'downstream services' for the renewable energies sector of the European Union's Global Monitoring for Environment and Security (GMES) Programme. Based on the GMES services for land, atmosphere and security, specific applications will be developed together with users from the fields of solar energy, wind energy, distributed power networks, bioenergy and daylighting for buildings. The goal of the ESA project aerosol_cci, a climate change initiative, is to design consistent prototype algorithms for the production of long-term aerosol data sets from several European earth observation sensors. The project starts with an in-depth analysis and comparison of the retrieval results for several existing algorithms. Based on this analysis, elements of community algorithms and harmonized retrieval are then developed. Topics of investigation are the different assumptions and approaches for modeling optical aerosol properties, how to deal with surface reflectance and its anisotropy, cloud masking, and the use of auxiliary data sets. Within Europe's family of GMES (Global Monitoring for Environment and Security) projects, MACC-II (Monitoring Atmospheric Composition and Climate, phase 2) deals with the atmosphere. DLR is in charge of the MACC solar radiation service and the interface to MACC-II users. Furthermore, DLR contributes satellite-based information on stratospheric ozone chemistry, tropospheric trace gases and aerosols. The Sahara is a huge reservoir for the introduction of dust into the atmosphere, which can be carried as far as the Americas in air currents. In the SAMUM (Saharan Mineral Dust Experiment) project, the optical effects of mineral dust will be investigated in detail. A team of eight national institutes will take in-situ aerial as well as remote sensing measurements and combine them with models to derive information on the spatial distribution and transport of these dust layers as well as their physical and chemical composition. - Volcano Monitoring (PROMOTE and EXUPERY) Volcanic eruptions not only endanger the local population, they can also have a negative effect on air traffic. Because the eruption clouds of volcanoes threaten the functioning of airplane engines, aircraft operators need to be promptly informed about volcanic eruptions so that flight paths can be modified accordingly. Virtual Institute "Environmental Research Station Schneefernerhaus" (UFS) The Environmental Research Station Schneefernerhaus (2,650 m a.s.l.)
on the Zugspitze is an internationally linked center of competence for high-altitude, climate and environmental research, with a focus on the development, demonstration and operation of innovative technologies in the context of atmospheric and climate research, satellite validation, model verification, quality assurance for value-added products (e.g. in the framework of GMES), analyses for the understanding of climate system processes, environmental and high-altitude medicine, early detection of natural hazards, cosmic radiation and radioactivity. The UFS has the status of a global station within the Global Atmosphere Watch Programme (GAW) of the World Meteorological Organization (WMO). It is also part of the NDACC program and linked with the ICSU/WMO World Data Center for Remote Sensing of the Atmosphere (WDC-RSAT), which is hosted by DFD. As part of the EU Seventh Framework Programme, the EnerGEO project was established to devise a strategy for estimating the influence of the exploitation and use of energy resources on the environment and various ecosystems. This assessment will be based on models and remote sensing data. The strategy is to be demonstrated through pilot projects involving a variety of energy sources, including fossil fuels, biomass, solar energy and wind power. Helmholtz Alliance Planetary Evolution and Life The Helmholtz Alliance is using an interdisciplinary approach to investigate the relationships between the formation of planets and the evolution of life. Entire planets are included in the study, from their outer envelopes of magnetosphere and atmosphere to their core. Beginning with Earth, other planetary bodies in our solar system will also be studied, such as the earthlike planets Venus and Mars. Moons on which life is, according to present knowledge, at least theoretically possible are also a subject of investigation. The wide-ranging methodologies being developed may even extend to planets outside our solar system.
<urn:uuid:57e6522f-bc3a-42e6-a685-b6cf9e9fe3ae>
2.84375
1,063
Content Listing
Science & Tech.
13.613275
800
North American Dipteran Pollinators: Assessing Their Value and Conservation Status Carol Ann Kearns, University of Colorado at Boulder Recent attention to pollinator declines has focused largely on bees and vertebrates. However, few pollination systems are obligate, and pollinators that complement the role of bees may respond differently to environmental disturbance. The conservation status of North American fly pollinators remains undocumented. In this paper, methods for monitoring shifts in dipteran pollinator abundance are discussed. The need for further basic research into pollination by flies is addressed, and the significance of dipteran conservation is considered. Keywords: anthophilous flies, anthropogenic disturbance, conservation, Diptera, Dipteran conservation, generalist pollinators, North America, pollination, pollinator declines, population fluctuation, redundant pollination systems. Ecology and Society. ISSN: 1708-3087
<urn:uuid:8e123884-c467-4f6b-ab9d-bbf9180b1741>
2.859375
187
Academic Writing
Science & Tech.
-14.901694
801
Earth from Space: Bloom-filled Baltic This Envisat image captures blue-green algae blooms filling the Baltic Sea, which is roughly 1600 km long, 190 km wide and has a surface area of about 377 000 sq km. 'Algae bloom' is the term used to describe the rapid multiplication of phytoplankton, microscopic marine plants that drift on or near the surface of the sea. Floating freely in the water, phytoplankton are sensitive to sunlight and local environmental variations such as nutrient levels, temperature, currents and winds. The blooms seen here are due to favourable conditions – lots of sunshine, little wind and an increase of nutrients from run-off following the ice season – in the area over the past weeks. Although algae blooms are a normal and essential phenomenon, they can be harmful to humans and animals when they produce toxic substances, occur too often or last too long, depleting the concentration of oxygen in the water. Due to the toxicity of some phytoplankton and marine algae species, it is important to monitor blooms so that fishermen, fish farmers and public health officials know about such events as soon as possible. While individually microscopic, the chlorophyll that phytoplankton use for photosynthesis collectively tints the surrounding ocean waters, providing a means of detecting these tiny organisms from space with dedicated 'ocean colour' sensors, like Envisat's Medium Resolution Imaging Spectrometer (MERIS). Algae blooms impact the ability of radar sensors, such as the Advanced Synthetic Aperture Radar on Envisat, to detect oil spills because their presence produces a similar dampening effect on the water's surface. It is important, therefore, for agencies such as the European Maritime Safety Agency, which monitors European waters for oil spills, to know when algae blooms appear to warn satellite image analysts. Visible in the image (clockwise from bottom left) are parts of Germany, Sweden, Estonia (top right), Latvia, Lithuania, the Russian territory of Kaliningrad and Poland. Also visible are the Swedish islands of Gotland and Öland (middle) and the Danish island of Bornholm (lower left). MERIS acquired this image on 11 July 2010 at a resolution of 300 m.
<urn:uuid:7bd2d003-3bb9-408d-9d21-2b65e00262c3>
3.6875
473
Knowledge Article
Science & Tech.
25.646894
802
GNU MIX Development Kit (MDK) MIX is Donald Knuth's mythical computer as described in his monumental work The Art Of Computer Programming. As any of its real counterparts, the MIX features registers, memory cells, an overflow toggle, comparison flags, input-output devices, and a set of binary instructions executable by its virtual CPU. You can program the MIX using an assembly language called MIXAL, the MIX Assembly Language. So, what's the use of learning MIXAL? The MIX computer is a simplified version of real CISC computers, and its assembly language closely resembles real ones. You can learn MIX/MIXAL as an introduction to computer architecture and assembly programming: see the MDK documentation for a tutorial on MIX and MIXAL. MDK (MIX Development Kit) offers an emulation of MIX and MIXAL. The current version of MDK includes the following applications: - mixasm A MIXAL compiler, which translates your source files into binary ones, executable by the MIX virtual machine. - mixvm A MIX virtual machine which is able to run and debug compiled MIXAL programs, using a command line interface with readline's line editing capabilities. - gmixvm A MIX virtual machine with a GTK+ GUI which lets you run and debug your MIXAL programs through a nice graphical interface (see screenshots). - mixguile A Guile interpreter with an embedded MIX virtual machine, manipulable through a library of Scheme functions. - mixal-mode.el An Emacs major mode for MIXAL source file editing, providing syntax highlighting, documentation lookup and invocation of mixvm within Emacs (since version 22, mixal-mode is part of the standard Emacs distribution). - mixvm.el An elisp program which allows you to run mixvm within an Emacs GUD window, simultaneously viewing your MIXAL source file in another buffer. Using the MDK tools, you'll be able to - write, compile and execute MIXAL programs, - set breakpoints and run your programs step by step, - set conditional breakpoints (register change, memory change, etc.), - collect execution timing statistics, - trace executed instructions, - inspect and modify the MIX registers, flags and memory contents at any step, - simulate MIX input-output devices using the standard output and your file system. The user's manual is distributed with the source tarball in texinfo format, which is converted to info files during the installation process. It is also available in a variety of formats in the documentation section. - Repository: git://git.sv.gnu.org/mdk.git - Development branch: master - Online access here You can get the sources using the following incantation: git clone git://git.sv.gnu.org/mdk.git or, for those of you behind a firewall, git clone http://git.sv.gnu.org/r/mdk.git
<urn:uuid:d4ccbbe1-504a-46e6-9d96-c6fc967b3d11>
3.15625
643
Product Page
Software Dev.
43.188553
803
How to use BitMap in Turbo C++ This simple program explains how to use a BitMap in Turbo C++. NOTE: Before running this program you have to copy one file into your "bgi" directory. You either have to download that file from the net or I will provide it. The file name is "SVGA256.BGI". Copy this file to " tc\bgi\ " under your TC path. I am not able to attach this file because the file extension is not supported. If you need this file, I will provide it. 1) You have to use the BitMap structures; there is no need to change anything in them. Just copy these structures into one of your .cpp files, for example "BitMap.cpp". struct A { char type; /* Magic identifier */ unsigned long size; /* File size in bytes */ unsigned short int reserved1, reserved2; unsigned long offset; /* Offset to image data, bytes */ }; extern A HEADER, HEADER1; struct B { unsigned long size; /* Header size in bytes */ unsigned long width, height; /* Width and height of image */ unsigned short int planes; /* Number of colour planes */ unsigned short int bits; /* Bits per pixel */ unsigned long compression; /* Compression type */ unsigned long imagesize; /* Image size in bytes */ unsigned long xresolution, yresolution; /* Pixels per meter */ unsigned long ncolours; /* Number of colours */ unsigned long importantcolours; /* Important colours */ }; extern B INFOHEADER, INFOHEADER1; 2) In the main file, i.e. the file which contains the "main" function, write this code. // This global function is used for the resolution of the bitmap. You can set the return value to 1, 2 or 3. For me 3 is the best combination. int gd = DETECT, md, a; initgraph(&gd,&md,"c:\\tc\\bgi"); // The path may be different on your computer. // Suppose you have one show function which reads the bitmap from the disk. Then this show function looks like this. // Here you have to define the path of the bitmap file. According to this example I have to open one Board1.bmp file, so write your bitmap file path here. unsigned char Ch; File.read((char*)&HEADER,14); // This is the header part of the bitmap. It always looks the same. Don't change the content here; the value remains 14. File.read((char*)&INFOHEADER,40); // This is another part of the bitmap; here also the value remains the same, 40. unsigned int i; if(PaletteData) // if memory allocated successfully // read color data // Don't change the code here because I have done some shifting here. It's working fine. outp(0x03c8,0); // tell DAC that data is coming for(i=0;i<256*3;i++) // send data to SVGA DAC for(i=0;i<INFOHEADER.height;i++) // This for loop is used to display the bitmap. File.read(&Ch,1); // Here Ch reads the color of your bitmap. putpixel(XCor+j++,YCor+INFOHEADER.height-i-1,Ch); // XCor and YCor are the X and Y coordinates. They depend on you. // Another way to display the bitmap: suppose I have another Show1() function. It is simpler than the previous show function. File.seekg(54,ios::beg); // It remains the same; the value is always 54. File.seekg(256*4,ios::cur); // It remains the same; the value is always 256*4. for(int i=0;i<40;i++) // Here 40 is the height of the bitmap. It may differ; it depends upon the size of your bitmap. for(int j=0;j<36;j++) // Here 36 is the width of the bitmap. It may differ; it depends upon the size of your bitmap. File.read(&Ch,1); // Here Ch is the character which reads the color of your bitmap.
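For readers who want to check the two headers described above without Turbo C++ or the BGI driver, the following is a minimal, self-contained sketch in portable modern C++ that only reads and prints the 14-byte file header and the 40-byte info header; it deliberately omits the palette and putpixel display code. The field layout and the example file name "Board1.bmp" follow the tutorial, while the two-character magic field, the fixed-width integer types and the packing pragma are assumptions made so the sketch compiles and runs on current compilers.

// Minimal BMP header reader (portable C++, no BGI graphics).
#include <cstdint>
#include <fstream>
#include <iostream>

#pragma pack(push, 1)              // BMP headers are packed; no padding allowed
struct BmpFileHeader {
    char     type[2];              // Magic identifier, must be "BM"
    uint32_t size;                 // File size in bytes
    uint16_t reserved1, reserved2;
    uint32_t offset;               // Offset to image data, bytes
};
struct BmpInfoHeader {
    uint32_t size;                 // Header size in bytes (40)
    int32_t  width, height;        // Width and height of image
    uint16_t planes;               // Number of colour planes
    uint16_t bits;                 // Bits per pixel
    uint32_t compression;          // Compression type
    uint32_t imagesize;            // Image size in bytes
    int32_t  xresolution, yresolution;  // Pixels per metre
    uint32_t ncolours;             // Number of colours
    uint32_t importantcolours;     // Important colours
};
#pragma pack(pop)

int main() {
    std::ifstream file("Board1.bmp", std::ios::binary);   // example file name from the tutorial
    if (!file) { std::cerr << "cannot open Board1.bmp\n"; return 1; }

    BmpFileHeader header;
    BmpInfoHeader info;
    file.read(reinterpret_cast<char*>(&header), 14);      // same 14 bytes as in the tutorial
    file.read(reinterpret_cast<char*>(&info), 40);        // same 40 bytes as in the tutorial

    if (header.type[0] != 'B' || header.type[1] != 'M') {
        std::cerr << "not a BMP file\n";
        return 1;
    }
    std::cout << "width: " << info.width << "\n"
              << "height: " << info.height << "\n"
              << "bits per pixel: " << info.bits << "\n"
              << "pixel data offset: " << header.offset << "\n";
    return 0;
}

The 54 in File.seekg(54,ios::beg) above is simply 14 + 40, i.e. the two headers read here, and the 256*4 skip is the 256-entry, 4-bytes-per-entry colour palette of an 8-bit bitmap.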
<urn:uuid:42406c72-efc8-4327-affe-94c3b061f01f>
2.625
937
Tutorial
Software Dev.
71.10182
804
Greenhouse gas emissions in the 1990s could have been underestimated by billions of tonnes, throwing doubt on some of the maths behind the Kyoto Protocol, research by Australian and international scientists suggests. The research team measured real-world changes in the amount of CO2 building up in the atmosphere against the amount of gases that each country said it emitted. And, like a jigsaw puzzle with one or two missing pieces, the picture did not quite match. "The simplest explanation is there has been an underestimate in the accounting of about 7 per cent through that early period in the 1990s," said the lead researcher, Roger Francey, an honorary fellow at the CSIRO. "The increase in CO2 in the atmosphere doesn't reflect the reported emissions. This may be because the methodology for getting national emissions was far less developed than today, and only really developed for a few countries, so they were relying much more on estimates." If confirmed, the findings would carry some potentially good news about the rate of climate change: if emissions were higher in the 1990s, then they have not been increasing at quite such a steep rate to reach today's level. It would mean emissions have been rising more steadily for the past three decades, at the middle range of predictions by the Intergovernmental Panel on Climate Change, rather than surging up since 2000. The group's findings are contained in a paper published on Monday in the journal Nature Climate Change. Although an error of 7 per cent in estimated emissions is within the stated level of uncertainty, it still means that emissions equivalent to about four times the size of Australia's annual greenhouse output had somehow been "lost". The accounting method is made even more complicated by the performance of "carbon sinks", which absorb large but varying amounts of carbon out of the atmosphere. "When they were adding up all the emissions from around the world back then … it's understandable that there might have been significant errors," said the head of the CSIRO's Changing Atmosphere group, Paul Fraser. "Exactly how that may have happened, it's hard to know … What it shows is that the IPCC estimates of more recent times have got it about right."
<urn:uuid:90fd43b1-633c-4b55-8c3b-0e8fe0a3912e>
3.546875
438
News Article
Science & Tech.
37.596817
805
Caroline is learning to swim; she is taking lessons in In a coordinate plane, the points (2,4) and (3,-1) are on a line. Which of the following must be true? 1. The line crosses the x-axis. 2. The line passes through (0, 0). 3. The line stays above the x-axis at all times. 4. The line rises from the lower left to the upper right. ... X=2 Is that right Ms. Sue? I will spell Algebra correctly from now on, thanks for your help. Solve the equation 15(x+3)=75 Sorry Mr. Reiny, I could not find the page where I had asked the question on Sunday when I went back to look; thanks for the link and the answer. Please show me step by step how to make a table of solutions for the equation, and then use the table to graph the equation. y = 2x -1 Who was the best president Make a table of solutions for the equation, and then use the table to graph the equation. Just graph one of them. y = 2x -1 How do I make one, may I use Microsoft Excel? Sorry Mr. Reiny, I guess I should have figured that out since you are so smart at doing the math problems. I do not have an option key on my Windows 7 keyboard, but I bet there is another way I can do the underline thing. Thanks again for taking time out of your day to help us Ma... Thanks Reiny, you assumed correct, how did you get the line under the greater than sign? You are a very smart and kind woman to have been such a great help, Thanks!
<urn:uuid:b893e824-8132-4f01-92ab-ecd84c5ffbe8>
3.203125
364
Comment Section
Science & Tech.
88.598527
806
Yes - of course it does. Without "random" in front, choice is an attribute with no object. I didn't actually do it, but that's what would happen - "choice is not defined blahblah". Is that the same as saying it's not a "global namespace"? I'd have to consult a reference book to be 100% sure. I'm not certain how a pure object oriented language treats namespaces compared to a procedural language (i.e., C++ is both procedural and OOP - I need to do a review). For example, in C++ you explicitly state your namespaces - in 99% of cases students do this by adding a line 'using namespace std;' (std = standard) near the top of their file, which is frowned upon in most real projects. By doing this they don't have to put the namespace std in front of functions defined in std. cout << "hello" << endl; //prints hello std::cout << "hello" << std::endl; //prints hello Now say you have a special cout function that prints ASCII numbers instead of the letters to the console. You can define a namespace in your file called Manta and do this.... Manta::cout << "hello" << std::endl; //prints 90 88 96 96 99 (just guessing the ASCII values) In practice, namespaces are used in procedural languages to avoid name clashes. When a project gets large enough, you start running out of good descriptive variable names, so it is better to create separate namespaces and reuse these descriptive names instead of resorting to complicated naming gyrations. "It looks to me like random might be a static class with static methods, hence, no need to instantiate anything." Yes - a very good way to say it. How come the texts don't say that? Got me - maybe I should write a book. This is just what I think is happening... I'd have to consult python.org to be sure. Do Java and C++ have Modules? Please describe or give a definition to me for that. No. Java has the following.... packages - groups of related classes form a package. example: javax.swing is the package for the swing classes example: java.lang contains the core classes of the Java language classes - you know what these are.... Math is a class containing fields and methods related to math JButton is a class for instantiating a button in swing And you can create your own packages.... there are a few rules for doing this. In C++, which supports both procedural and OOP, the main library is called the STL - standard template library, which uses the namespace std like I showed you above. Instead of using a package, C++ has a keyword called friend - imo, friends are the most unfriendly thing I've seen in any language and I much prefer Java's use of packages. My language class didn't cover Python - C++, Java, Ada, LISP, Fortran, Prolog, Cobol and some others - here is what one of the tutorials says... You can use a module to organize a number of Python definitions in a single file. <snip> A package is a way to organize a number of modules together as a unit. Python packages can also contain other packages. So Python has both modules and packages, where it looks like a module is a related group of classes and functions, and a package is a related group of modules and other packages. Here is a link that I think will explain it in detail.... I plan on reading it later tonight.
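To make the namespace mechanics described above concrete, here is a small, self-contained C++ sketch. The namespace names geometry and finance are invented purely for illustration (just as the Manta example above is hypothetical), but the qualified-call and using-directive pattern is exactly the std:: usage discussed in the post.

// Two functions with the same name live in different namespaces, so they never clash.
#include <iostream>

namespace geometry {
    double area(double radius) { return 3.14159265358979 * radius * radius; }  // circle area
}

namespace finance {
    // Same name, entirely different meaning -- no conflict with geometry::area.
    double area(double exposure, double factor) { return exposure * factor; }
}

int main() {
    std::cout << geometry::area(2.0) << "\n";    // fully qualified, like std::cout
    std::cout << finance::area(100.0, 0.2) << "\n";

    using namespace geometry;                    // the shortcut students use for std
    std::cout << area(2.0) << "\n";              // the qualifier can now be dropped
    return 0;
}

Python's import random followed by random.choice(...) plays the same role: the module name acts as the qualifier, which is why a bare choice is undefined until it is either qualified or explicitly imported into the current namespace.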
<urn:uuid:62be6a2b-62af-4b65-b55d-fa907187d43b>
2.78125
764
Comment Section
Software Dev.
66.57796
807
Coleps Video No. 1 This barrel-shaped ciliate is covered by a layer of protective, calcareous plates and is commonly found in freshwater. Coleps is a rapid swimmer, revolving as it travels and using this motion to bore out chunks of other protozoans it is feeding upon.
<urn:uuid:75fede60-aaaa-43a3-8aa0-1ddcab28b8a4>
2.703125
147
Truncated
Science & Tech.
56.859891
808
Tiny phytoplankton in the oceans of the Northern Hemisphere had far larger extinction rates during the mass extinction event 65 million years ago than those living in the Southern Hemisphere, according to a paper published online this week in Nature Geoscience. The recovery of the Northern Hemisphere phytoplankton also occurred significantly later than recovery in the southern oceans. Populations of phytoplankton smaller than 20 micrometres were decimated during the Cretaceous/Tertiary mass extinction, which is linked to an impact event. Timothy Bralower and colleagues propose that the clouds of debris from the impact ― which would have blocked the sunlight that these phytoplankton needed to grow, and poisoned them as the metal-laden dust fell to the ocean surface ― were concentrated in the Northern Hemisphere, leading to the higher extinction rates. The team also suggests that the recovery of marine diversity in the north may have been hindered by the phytoplankton's slower start in this region.
<urn:uuid:f5fa14c6-eceb-476d-a47f-2b67315b738c>
4.03125
209
News Article
Science & Tech.
16.813154
809
Anybody who has ever opened a guidebook to birds or plants is familiar with the range maps showing where a particular species lives. Precise ranges are well known for organisms that are well-studied, such as birds and trees in North America and Europe. In the tropics and for harder-to-observe organisms, in contrast, we don't always know exactly where the organisms live. We understand what limits species ranges and what defines range edges for only a small handful of organisms. We do know climate can play a large role in determining range limits, but other factors such as soils and competing organisms also play roles. Due to the computer revolution in biology, it has now become commonplace to build correlational models that link the range of a species to commonly measured (and easily available) climate variables like mean annual temperature. These models are used extensively to provide predictions about where poorly known species live. They are also increasingly being used to make predictions about where species will live in the future under a rapidly changing climate. Yet there is awareness that these models could be greatly improved if we did some hard thinking (using our biological knowledge) and some hard work (using GIS skills) to develop a state-of-the-art set of environmental indices/metrics that affect organisms. This working group proposes to develop such a state-of-the-art compilation of environmental factors likely to have a strong impact on species ranges. The goal is to share this compilation with the research community at large, and to use the information to explore what factors are actually the most important and most predictive in determining where a species lives. We will integrate state-of-the-art data and tools including weather station data, satellite remote sensing data and marine buoys in order to create a set of freely available, consistently formatted and scaled environmental data "layers" easily usable in GIS-based analyses. We will also develop a guide book for practitioners documenting best practices. In this way we hope to greatly improve our ability to know where poorly studied species live, and to predict where species will live in the future under climate change. More information about this research project, and participants.
<urn:uuid:c75dbb44-feb7-4406-b090-3465a3d21c0e>
3.53125
437
Academic Writing
Science & Tech.
26.3825
810
Press Release 09-241 Waterworld Discovered Transiting a Nearby Star Charbonneau team realizes major advance in discovering habitable planets December 16, 2009 Astronomers announced today that they have discovered a "super-Earth" orbiting a red dwarf star only 40 light-years from Earth. They found this nearby planet with a small fleet of ground-based telescopes no larger than those many amateur astronomers have in their backyards. Although the super-Earth is too hot to sustain life, the discovery shows that current, ground-based technologies are capable of finding almost-Earth-sized planets in warm, life-friendly orbits. The discovery is being published in the December 17 issue of the journal Nature. A super-Earth is defined as a planet between one and ten times the mass of the Earth. The new-found world, GJ1214b, is about 6.5 times as massive as the Earth. Its host star, GJ1214, is a small, red type M star about one-fifth the size of the Sun. It has a surface temperature of only about 4,900 degrees F and a luminosity only three-thousandths as bright as the Sun. GJ1214b orbits its star once every 38 hours at a distance of only 1.3 million miles. Astronomers estimate the planet's temperature to be about 400 degrees Fahrenheit. Although warm as an oven, it is still cooler than any other known transiting planet because it orbits a very dim star that emits only about three-thousandths as much energy per second as does the Sun. Since GJ1214b crosses in front of its star, astronomers were able to measure its radius, which is about 2.7 times that of Earth. This makes GJ1214b one of the two smallest transiting worlds astronomers have discovered, the other being CoRoT-7-b. The resulting density suggests that GJ1214b is composed of about three-fourths water and other ices, and one-fourth rock. There are also tantalizing hints that the planet has a gaseous atmosphere. "Despite its hot temperature, this appears to be a waterworld," said Zachory Berta, a graduate student at the Harvard-Smithsonian Center for Astrophysics (CfA) who first spotted the hint of the planet among the data. "It is much smaller, cooler, and more Earthlike than any other known exoplanet." Berta added that some of the planet's water should be in the form of exotic materials like Ice VII (seven)--a crystalline form of water that exists at pressures greater than 20,000 times Earth's sea-level atmosphere. Astronomers found the new planet using the MEarth (pronounced "mirth") Project--an array of eight identical 16-inch-diameter RC Optical Systems telescopes that monitor a pre-selected list of 2,000 red dwarf stars. Each telescope perches on a highly accurate Software Bisque Paramount and funnels light to an Apogee U42 charge-coupled device (CCD) chip, which many amateurs also use. "Since we found the super-earth using a small ground-based telescope, this means that anyone else with a similar telescope and a good CCD camera can detect it too. Students around the world can now study this super-earth!" said David Charbonneau of CfA, lead author and head of the MEarth project. MEarth looks for stars that periodically decrease in brightness because planets cross in front of, or transit, their stars. During such a mini-eclipse, the planet blocks a small portion of the star's light, making it dimmer. Using innovative data processing techniques, astronomers can tease out the telltale signal of a transiting planet and distinguish it from "false positives" such as eclipsing double stars.
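To put a rough number on that telltale dimming, here is a back-of-the-envelope transit-depth estimate added for illustration (it is not part of the original release); it uses only figures quoted above plus the standard value of roughly 109 Earth radii for the Sun's radius:

\[
\delta \approx \left(\frac{R_p}{R_\star}\right)^2,\qquad
\delta_{\mathrm{GJ1214b}} \approx \left(\frac{2.7\,R_\oplus}{0.2\times 109\,R_\oplus}\right)^2 \approx 1.5\times 10^{-2},\qquad
\delta_{\mathrm{Earth\;transiting\;Sun}} \approx \left(\frac{1}{109}\right)^2 \approx 8\times 10^{-5}.
\]

A dip of roughly 1.5 per cent for GJ1214b against its small star, versus about one part in ten thousand for an Earth-Sun analogue, is why this signal can be caught with small ground-based telescopes while, as noted below, Kepler-class precision is needed for Earth-sized planets around Sun-like stars.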
NASA's Kepler mission also uses transits to look for Earth-sized planets orbiting Sun-like stars. However, such systems dim by only one part in ten thousand. The higher precision required to detect these transits means that such worlds can only be found from space. In contrast, a super-Earth transiting a small, red dwarf star yields a more pronounced decrease in brightness that can be detected from the ground. Astronomers then use instruments like the HARPS (High Accuracy Radial Velocity Planet Searcher) spectrograph at the European Southern Observatory to measure the companion's mass and confirm it is a planet, as they did with this discovery. When astronomers compared the measured radius of GJ1214b to theoretical models, they found that the observed radius exceeds the model's prediction, even assuming a pure water planet. Something more than the planet's solid surface may be blocking the star's light--specifically, a surrounding atmosphere. The team also notes that, if it has an atmosphere, those gases are almost certainly not primordial. The star's heat is gradually boiling off the atmosphere. Over the planet's lifetime, several billion years, much of the original atmosphere may have been lost. The next step for astronomers is to try to directly detect and characterize the atmosphere, which will require a space-based instrument like NASA's Hubble Space Telescope. GJ1214b is only 40 light-years from Earth, within the reach of current observatories. "Since this planet is so close to Earth, Hubble should be able to detect the atmosphere and determine what it's made of," said Charbonneau. "That will make it the first super-Earth with a confirmed atmosphere--even though that atmosphere probably won't be hospitable to life as we know it." "The future for further discovery is bright," said Donald Terndrup, program manager in NSF's Division of Astronomical Sciences, "Because this discovery was made early in the MEarth project, there may be many super-Earths around cool stars. Dr. Charbonneau's team may also discover strange or unexpected worlds, and this will show us how common or rare earth-like planets truly are, further expanding the frontiers of astronomical science." Lisa-Joy Zgorski, NSF (703) 292-8311 firstname.lastname@example.org Christine Pulliam, Harvard-Smithsonian Center for Astrophysics (617) 495-7463 email@example.com Donald Terndrup, NSF (703) 292-4901 firstname.lastname@example.org David Charbonneau, Harvard-Smithsonian Center for Astrophysics (617) 496-6515 email@example.com Images and movies: /news/longurl.cfm?id=187 Research Channel interview with Charbonneau about his life's work: /news/longurl.cfm?id=188 David Charbonneau: DISCOVER Magazine's Scientist of the Year: http://discovermagazine.com/2007/dec/scientist/?searchterm=charbonneau David Charbonneau, NSF's 2009 Alan T. Waterman Awardee Release: http://www.nsf.gov/news/news_summ.jsp?cntn_id=114304&org=NSF&from=news NSF and NSB Pay Tribute To Top American Scientists, including brief Charbonneau statement: http://www.nsf.gov/nsb/news/news_summ.jsp?cntn_id=114819&org=NSF The National Science Foundation (NSF) is an independent federal agency that supports fundamental research and education across all fields of science and engineering. In fiscal year (FY) 2012, its budget was $7.0 billion. NSF funds reach all 50 states through grants to nearly 2,000 colleges, universities and other institutions. Each year, NSF receives about 50,000 competitive requests for funding, and makes about 11,500 new funding awards. 
NSF also awards about $593 million in professional and service contracts yearly. Useful NSF Web Sites: NSF Home Page: http://www.nsf.gov NSF News: http://www.nsf.gov/news/ For the News Media: http://www.nsf.gov/news/newsroom.jsp Science and Engineering Statistics: http://www.nsf.gov/statistics/ Awards Searches: http://www.nsf.gov/awardsearch/
<urn:uuid:8d0724d1-f45b-4e43-8baf-b1be50052bfc>
3.171875
1,791
News (Org.)
Science & Tech.
57.767004
811
Query: classification "42.83"
Title: Temporal and spatial patterns of laying in the Moluccan megapode Eulipoa wallacei (G.R. Gray)
Keywords: Moluccan megapode; Eulipoa wallacei; egg production; lunar synchrony
Abstract: The Moluccan megapode Eulipoa wallacei (G.R. Gray, 1860) lays its eggs at night in the sand at communal nesting beaches. The majority of the world's Moluccan megapode population relies on only two nesting grounds, on the islands of Halmahera and Haruku, Indonesia. An understanding of the ecological characteristics of these breeding sites is thus important in terms of conservation. Studies of the larger of the two nesting grounds, in Halmahera, have shown specific temporal and spatial patterns of egg laying. In this paper I discuss the adaptive significance and conservation implications of these laying patterns.
Download paper: http://www.repository.naturalis.nl/document/46262
<urn:uuid:98b4fecc-fb15-4189-8e5b-661392044a09>
2.734375
228
Structured Data
Science & Tech.
37.591538
812
June 8, 1999 Built in record time in just 12 months, QuikScat, NASA's new ocean-observing satellite, will be launched on a Titan II rocket from California's Vandenberg Air Force Base at 7:15 p.m. Pacific Daylight Time on June 18. This satellite will be NASA's next "El Niño watcher" and will be used to better understand global weather abnormalities. The Quick Scatterometer, or QuikScat, will provide climatologists, meteorologists and oceanographers with daily, detailed snapshots of ocean winds as they swirl above the world's oceans. The mission will greatly improve weather forecasting. Winds play a major role in every aspect of weather on Earth. They directly affect the turbulent exchanges of heat, moisture and greenhouse gases between Earth's atmosphere and the ocean. To better understand their impact on oceans and improve weather forecasting, the satellite carries a state-of-the-art radar instrument called a scatterometer for a two-year science mission. "Knowledge about which way the wind blows and how hard is it blowing may seem simple, but this kind of information is actually a critical tool in improved weather forecasting, early storm detection and identifying subtle changes in global climate," said Dr. Ghassem Asrar, associate administrator of NASA's Office of Earth Science, Washington, DC. The mission will help Earth scientists determine the location, structure and strength of severe marine storms - hurricanes in the Atlantic, typhoons near Asia and mid-latitude cyclones worldwide - which are among the most destructive of all natural phenomena. The National Oceanic and Atmospheric Administration (NOAA), a chief partner in the QuikScat mission, will use mission data for improved weather forecasting and storm warning, helping forecasters to more accurately determine the paths and intensities of tropical storms and hurricanes. As NASA's next "El Niño watcher," QuikScat will be used to better understand global El Niño and La Niña weather abnormalities. Changes in the winds over the equatorial Pacific Ocean are a key component of the El Niño/La Niña phenomenon. QuikScat will be able to track changes in the trade winds along the equator. Scatterometers operate by transmitting high-frequency microwave pulses to the ocean surface and measuring the "backscattered" or echoed radar pulses bounced back to the satellite. The instrument senses ripples caused by winds near the ocean's surface, from which scientists can compute the winds' speed and direction. The instruments can acquire hundreds of times more observations of surface wind velocity each day than can ships and buoys, and are the only remote-sensing systems able to provide continuous, accurate and high-resolution measurements of both wind speeds and direction regardless of weather conditions. The satellite is the first obtained under NASA's Indefinite Delivery/Indefinite Quantity program for rapid delivery of satellite core systems. The procurement method provides NASA with a faster, better and cheaper method for the purchase of satellite systems through a "catalog," allowing for shorter turnaround time from mission conception to launch. Total mission cost for QuikScat is $93 million. Fifteen times a day, the satellite will beam down collected science data to NASA ground stations, which will relay them to scientists and weather forecasters. 
SeaWinds will provide ocean wind coverage to an international team of climate specialists, oceanographers and meteorologists interested in discovering the secrets of climate patterns and improving the speed with which emergency preparedness agencies can respond to fast-moving weather fronts, floods, hurricanes, tsunamis and other natural disasters. By combining QuikScat's wind data with information on ocean height from another ocean-observing satellite, the joint NASA-French TOPEX/Poseidon mission, scientists will be able to obtain a more complete, near-real-time look at wind patterns and their effects on ocean waves and currents, said Dr. Timothy Liu, QuikScat project scientist at NASA's Jet Propulsion Laboratory, Pasadena, CA. He added that QuikScat will complement data being collected by other Earth-monitoring satellites such as NASA's currently orbiting Tropical Rainfall Measuring Mission (TRMM) and Terra, which will be launched later this year. The 870-kilogram (1,910-pound) QuikScat satellite, provided by Ball Aerospace & Technologies Corp., Boulder, CO, with its 200-kilogram (450-pound) radar instrument, called SeaWinds, will be placed in a circular, near-polar orbit with a ground speed of 6.6 kilometers per second (14,750 miles per hour). The satellite will circle Earth every 101 minutes at an altitude of 800 kilometers (500 miles). A press kit with detailed information on the QuikScat launch and mission is available on the Internet at http://www.jpl.nasa.gov/files/misc/qslaunch.pdf. QuikScat is managed for NASA's Office of Earth Science, Washington, DC, by the Jet Propulsion Laboratory, which also built the SeaWinds radar instrument and will provide ground science processing systems. NASA's Goddard Space Flight Center, Greenbelt, MD, managed development of the satellite, designed and built by Ball Aerospace & Technologies Corp., Boulder, CO. NASA's Earth Sciences Enterprise is a long-term research and technology program designed to examine Earth's land, oceans, atmosphere, ice and life as a total integrated system. JPL is a division of the California Institute of Technology, Pasadena, CA. The above story is reprinted from materials provided by NASA/Jet Propulsion Laboratory.
<urn:uuid:1e6dd016-85de-49a9-b816-7451951860c9>
3.1875
1,195
News Article
Science & Tech.
33.528406
813
During the Star Wars years of the 1980s, Tom Paterson worked at a defense think tank creating elaborate mathematical models to help military commanders quickly decide which weapons to deploy to counter incoming missiles. Inputs from hundreds of sensors had to be combined to generate a consummate picture of events that would be unfolding in a matter of minutes, enabling the fateful choice about when to launch. When the cold war ended, Paterson, like many defense engineers, tried to find a way to apply his skills elsewhere. He ultimately took on a task that made shooting down missiles seem pedestrian. A challenge faced by engineers in the Star Wars program--designing software to pick out critical targets despite an overload of data--carried over to simulations of how drugs work in the metabolic and immune systems that drive the most complex machine we know. This article was originally published with the title Reverse-Engineering Clinical Biology.
<urn:uuid:8c5268a3-fb7e-4809-b768-86b4ebe3e60c>
2.984375
180
Truncated
Science & Tech.
25.887308
814
(Sen) - Inventive techniques using observations with some of the world's biggest telescopes have allowed astronomers to discover five new planets orbiting one of the closest stars to the Earth. Tau Ceti, which lies only 12 light-years away and can easily be spotted on a clear night, is a single star similar to the Sun. One of the new worlds around it is in its habitable zone, so named because water could exist on it in liquid form. The new planets all have sizes, or masses, between two and six times that of the Earth, making the new solar system one of the least massive yet found. But that is not altogether surprising because smaller worlds are bound to be easier to detect when they are closer to us. The discoveries, which come just two months after the announcement of an Earth-sized planet in the nearest star system to us, Alpha Centauri, were made by an international team from the UK, Chile, the USA and Australia. They examined the starlight from Tau Ceti using spectrographs on three telescopes - HARPS on the 3.6m telescope at the European Southern Observatory in La Silla, Chile, UCLES on the Anglo-Australian Telescope in Siding Spring, Australia, and HIRES on the 10m Keck telescope on Mauna Kea, Hawaii (567 data points). After making more than 6,000 measurements, the team used computer modelling techniques to improve the sensitivity of their observations so that smaller planets than normal revealed themselves in the data. The team's leader, Mikko Tuomi from the University of Hertfordshire in the UK, explained: "We pioneered new data modelling techniques by adding artificial signals to the data and testing our recovery of the signals with a variety of different approaches. This significantly improved our noise modelling techniques and increased our ability to find low mass planets." Hugh Jones, also from the University of Hertfordshire, said: "We chose Tau Ceti for this noise modelling study because we had thought it contained no signals. And as it is so bright and similar to our Sun it is an ideal benchmark system to test out our methods for the detection of small planets." Where to spot Tau Ceti in the night sky. Credit: University of Hertfordshire The new worlds add to a harvest of more than 800 exoplanets that have been discovered around other stars since 1995. Most of those found have been "hot Jupiters" - gas giants zipping round close to their host stars in just days. Team member Steve Vogt, from University of California Santa Cruz, said "This discovery is in keeping with our emerging view that virtually every star has planets, and that the galaxy must have many such potentially habitable Earth-sized planets. They are everywhere, even right next door! "We are now beginning to understand that Nature seems to overwhelmingly prefer systems that have multiple planets with orbits of less than one hundred days. "This is quite unlike our own Solar System where there is nothing with an orbit inside that of Mercury. So our Solar System is, in some sense, a bit of a freak and not the most typical kind of system that Nature cooks up."
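The "adding artificial signals to the data" step that Tuomi describes is commonly called an injection-recovery test. The sketch below is a generic, minimal illustration of that idea in C++, not the Tau Ceti team's actual pipeline or noise model: it injects a weak sinusoid into synthetic, irregularly sampled, noisy radial-velocity-like data and then checks whether a simple least-squares period scan recovers the injected period. Every name, number and sampling choice here is an illustrative assumption.

// Generic injection-recovery sketch: inject a weak sinusoid into noisy data,
// then scan trial periods and report the best-fitting one.
#include <cmath>
#include <iostream>
#include <random>
#include <vector>

int main() {
    const double PI = 3.141592653589793;
    const double injectedPeriod = 35.0;   // days (illustrative)
    const double injectedAmp    = 1.0;    // metres per second (illustrative)
    const double noiseSigma     = 2.0;    // metres per second (illustrative)

    // Simulate irregularly sampled observation times and noisy measurements.
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> gap(0.0, 3.0);
    std::normal_distribution<double> noise(0.0, noiseSigma);

    std::vector<double> t, v;
    double day = 0.0;
    for (int i = 0; i < 400; ++i) {
        day += 1.0 + gap(rng);                                   // uneven cadence
        double signal = injectedAmp * std::sin(2.0 * PI * day / injectedPeriod);
        t.push_back(day);
        v.push_back(signal + noise(rng));                        // injected signal plus noise
    }

    // Recovery: least-squares fit of a sinusoid at each trial period,
    // keeping the period that removes the most variance.
    double bestPeriod = 0.0, bestPower = -1.0;
    for (double P = 5.0; P <= 100.0; P += 0.05) {
        double cc = 0, ss = 0, cs = 0, yc = 0, ys = 0;
        for (std::size_t i = 0; i < t.size(); ++i) {
            double c = std::cos(2.0 * PI * t[i] / P);
            double s = std::sin(2.0 * PI * t[i] / P);
            cc += c * c; ss += s * s; cs += c * s;
            yc += v[i] * c; ys += v[i] * s;
        }
        double det = cc * ss - cs * cs;
        if (std::fabs(det) < 1e-12) continue;
        double a = (yc * ss - ys * cs) / det;                    // cosine coefficient
        double b = (ys * cc - yc * cs) / det;                    // sine coefficient
        double power = a * yc + b * ys;                          // variance explained by this fit
        if (power > bestPower) { bestPower = power; bestPeriod = P; }
    }
    std::cout << "injected period: " << injectedPeriod
              << " d, recovered period: " << bestPeriod << " d\n";
    return 0;
}

In a real analysis this loop would be repeated over many injected periods and amplitudes to calibrate how small a signal the data and noise model can reliably recover, which is the sense in which such tests improve the sensitivity quoted above.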
<urn:uuid:df11def8-d7f5-431f-b0d9-35d6ab9e6b13>
3.390625
661
News Article
Science & Tech.
40.714611
815
According to Jerry Ostriker (Plumian Professor, Cambridge; Professor of Astrophysics, Princeton; Provost, Princeton), "Surveys aren't just something that astronomers do, they are the only thing astronomers do." These words are understandable, given Prof. Ostriker's intimate association with the Sloan Digital Sky Survey that is presently transforming our view of the optical universe. The ability to systematically survey one quarter of the sky, with the dynamic range and spatial resolution to zoom in to study individual objects, is providing us with the first truly 3-dimensional map of the nearby cosmos. The optical portion of the spectrum unveils the moderately energetic and hot components of the universe, but the physics of the cool constituents is probed at radio wavelengths.

The Allen Telescope Array (ATA) of 350 telescopes, each 6.1 m in diameter, will do for the radio sky what the Sloan Digital Sky Survey has done for the optical sky. And it will do it so rapidly that it will also provide the first systematic look at the transient radio universe. The ATA provides simultaneous access to any frequency between 500 MHz and 11.2 GHz, with four separate frequency channels feeding a suite of signal processing backends that can produce wide-angle radio images of the sky in 1024 colors, and at the same time, study up to 32 point sources of interest within its large field of view. This new approach to commensally sharing the sky allows SETI (the Search for ExtraTerrestrial Intelligence) and traditional radio astronomical science to both use the telescope nearly all the time: our tools are beginning to be commensurate with the size of the vast explorations of the radio sky that we wish to undertake.

This talk will put the ATA into context with the rest of the SETI activities around the world and describe the initial SETI observations we intend to conduct.
<urn:uuid:80101575-69ba-46e0-b340-1aca4f28fe13>
2.765625
389
News (Org.)
Science & Tech.
32.056667
816
Extraordinary claims require extraordinary evidence, as astronomer Carl Sagan once said. Sagan was talking about UFOs and aliens, but his words now stand as a watchword for skepticism in science. But how do we know when a claim is extraordinary? Say, maybe when the aliens don't arrive from space? Consider the controversial "arseniclife" (short for arsenic-based life) bacteria study. Rather than arriving on a UFO, the microbe was unveiled at NASA headquarters, announced at an "astrobiology" news briefing on Dec. 2, 2010, as "the first known microorganism on Earth able to thrive and reproduce using the toxic chemical arsenic." Arsenic is a poison. How did the finding that the bacteria called GFAJ-1 replaced phosphorus, a basic chemical constituent of biochemistry, with arsenic even in its DNA make its way into the hallowed journal Science, and onto the stage at NASA?
Looking at the reviewer comments, roughly half were requests for added explanations or spelling fixes. Another half-dozen were questions on the chemistry or biology of the microbe. The rest involved inquiries about further experiments, which original lead study author Felisa Wolfe-Simon often suggested were "beyond the scope of the paper." Kruglyak says such requests for new experiments are not uncommon in reviews, nor is it unusual for study authors to demur, wanting to save something for a future study and leaving it up to the journal editor to adjudicate the dispute. Peer review expert Elizabeth Wager, former chair of the Committee on Publication Ethics, says the review looked entirely typical for scientific journals. "The only thing that is surprising is how strongly positive the reviewers are," Wager says in an e-mail. "They clearly think this is an important piece of work and also comment that it is clearly presented." "Judging from the quotes, the three reviewers were enthusiastic about the paper. Indeed, these reviews would be described by most scientists as 'glowing,' " says biologist Patricia Foster of Indiana University in Bloomington. In particular, she notes at least three striking things about the reviews lost in these glowing comments. "First, there is little biology mentioned - the reviewers questioned only mildly the authors' assumption that there was too little phosphorus in the medium to support growth. This assumption was a major point of disagreement voiced by scientists after the paper appeared." Second, the reviewers didn't question wide variation in chemistry analysis of the medium that the bugs grew in (specifically, how much contaminating phosphorous was in it), something that later critics saw as a major shortcoming. Finally, they only commented, rather than inquiring, on the biochemistry implied by the results, the central extraordinary claim that arsenic was acting in place of phosphorus in the metabolism of the GFAJ-1 bacteria. "In conclusion, I believe that NASA would have had no reason to doubt the results of the paper based on these reviews. In fact, NASA officials would have felt encouraged to publicize the paper," Foster says. Foster does note that the study authors added a figure (and a few more authors) to the paper, suggesting the GFAJ-1 bug had arsenic in its DNA, as a result of the review wanting to see more evidence of arsenic inside the bacteria. The figure became another point of controversy in the debate over the study. (Wager says that adding a new figure and authors is "perfectly acceptable" during peer review and wouldn't necessarily trigger a re-review of a study.) Basically, the reviewers took at face value the fundamental claim by the study authors that the GFAJ-1 bug was growing without any phosphorus, says microbial ecologist Norman Pace of the University of Colorado. "Once you accept that, everything else follows," Pace says. "You just have to have a certain expertise to know that is nearly impossible; removing phosphorus is just very hard." In general, Pace says that he believes peer review improves studies and works as a "pretty strong" corrective to error in science. "There is lots of poor science out there, but important claims like this one are checked up on and proven true or false, as this (arseniclife) one was, so I think things are not actually so suspect out there in science. In essence, this was all found out in due course. My belief is the peer-review system is fundamentally sound." 
Regardless, Pace says there was "poor judgment at multiple levels" in the arseniclife case, from an "overly exuberant" interpretation of the study results by the authors to the peer reviewers missing "the big crux of the results: the claim of absence of phosphorus," to NASA repeating some of the mistakes that caused the agency trouble in 1996, when it publicized results suggesting a Martian meteorite contained microscopic signs of life. Still, Pace had actually recommended the publication of that 1996 paper, he says. "I do think it is important to get noteworthy results out there." In 2011, Science editor-in-chief Bruce Alberts echoed that comment in a statement on the arseniclife study. "We hope that the study and the subsequent exchange being published today will stimulate further experiments - whether they support or overturn this conclusion. In either case, the overall result will advance our knowledge about conditions that support life, an important outcome for science and education," Alberts says. Worth noting is that NASA scientist Michael New and original study lead author Wolfe-Simon, now of the Lawrence Berkeley (Calif.) National Laboratory, both still supported the original 2010 study's findings this summer when the refuting studies were published. "Science is continuously evaluating its peer-review policies and procedures with the goal of a rigorous and fair process," magazine spokeswoman Ginger Pinholster said in response to questions about whether the journal has changed its peer-review practices since 2010. This year, the journal added an additional step to the review process. Once all reviews are in on a manuscript, all the reviewers are invited to read them and comment. "This step allows the reviewers to react to the comments of the other reviewers and may help the editor to calibrate the reviewer comments," Pinholster says by e-mail. Kruglyak cautions against looking back too harshly at the reviewers of the arseniclife study. "In hindsight we can see what went on, but that's how hindsight works," Kruglyak says. "It was a pretty spectacular claim. In the big picture, I'm not surprised about it not working out." Update: One other scientist asked to review the reviews, microbial ecologist James Cotner of the University of Minnesota in St. Paul, makes the point that it is up to the journal to ensure that peer reviewers are the most appropriate experts to review a study. "Part of the problem with this paper may have been that it is a very interdisciplinary topic (molecular biology, microbial ecology, physical chemistry, etc.) so it may have been hard to make sure all fields were appropriately represented," Cotner says, by email. "Without knowing who the reviewers were, it's hard to say if the editor insured that the best and most appropriate folks were reviewing it. But when there are only three reviewers, it can be very difficult." Copyright 2013 USATODAY.com Read the original story: Glowing reviews on 'arseniclife' spurred NASA's embrace
PI: Amy Bower (WHOI) Deep ocean convection is limited to a small number of isolated regions worldwide, including the Labrador Sea, but it has a profound impact on the ocean’s thermohaline circulation and climate. While the convection process itself has been studied intensively over the last decade, the restratification of the water column after convection, which will directly impact convection during subsequent winters, is not as well-studied. It has recently been suggested that the decay of coherent, long-lived, anticyclonic eddies shed from a surrounding warm boundary current are potentially important in restratifying convection regions. This idea is most developed in the Labrador Sea, where anticyclonic eddies containing a core of warm, salty water from the Irminger Current (a remnant of the Gulf Stream) have been observed. The goal of the proposed research is to advance our understanding of the role of Irminger Rings in deep convection by collecting new information on their initial structure and on the evolution of their core properties as they propagate across the Labrador Sea. To meet this goal, we plan to deploy one densely instrumented mooring in the northeastern Labrador Sea near, but offshore of the eddy formation site to document the full water column hydrographic and velocity structure of about 12 new rings where they detach from the boundary and enter the interior. The mooring will also serve as the “launch pad” for the automatic release of a profiling float each time an eddy sweeps by the mooring. Trapped within the eddies by the strong azimuthal velocities, the floats will track the eddy trajectories and measure changes in eddy core properties as they move from the formation site toward the convection region. When this research program is completed, we will have unprecedented information on the structure and heat and salt content of nascent Irminger Rings that have separated from the boundary, improved estimates of the heat and freshwater fluxes associated with rings, and new information on where and how their anomalous core properties are spread within the Labrador Sea. OceanInsight: Irminger Rings Project Overview Link to OceanInsight Irminger Rings Project Overview Popular Science: Fieldwork The Unseen Currents On the Labrador Sea, the scientific crew of the research vessel Knorr hunts for underwater storms, sinks a two-mile mooring—and gathers clues to the planet’s fate. March, 2011. Furey, H., A. Bower, and T. McKee. An Irminger Ring Mooring in the Labrador Sea, Preliminary Results. Ocean Sciences Meeting, 2010. Bower, A. S., H. H. Furey, and T. McKee. An Irminger Ring Mooring in the Labrador Sea. Poster presented at the 2009 Spring EGU and AMOC Conferences: project overview and early results.
Interior of Io
This is a drawing of the interior of Io. When the Galileo spacecraft flew by Io it took measurements which showed that Io is separated into two layers, as shown in the diagram. Thus scientists think that Io has a large core, covered with a rocky material. There is no ice within Io.
You might also be interested in:
Differentiation is a scientific term which really means "to separate". In their earliest history, elements which made the planets would part into separate regions, if the planet were warm enough. This...more
Galileo is a spacecraft that has been orbiting Jupiter for eight years. On September 21, 2003, Galileo will crash into Jupiter. It will burn up in Jupiter's atmosphere. The crash is not an accident! The...more
Amalthea was discovered by E. E. Barnard in 1892. Of the 17 moons it is the 3rd closest to Jupiter. Amalthea is about the size of a county or small state. Amalthea is named after the goat in Greek mythology...more
Callisto was first discovered by Galileo in 1610. It is the third-largest moon in the solar system, and is larger than the Earth's moon. It is about as big as the distance across the United States. Callisto...more
Measurements by the Galileo spacecraft have shown that Callisto is the same inside from the center to the surface. This means that Callisto does not have a core at the center. This means that, unlike...more
Many different types of surface are shown in this picture. In the front is a huge crater, which goes for a long way over the surface. This crater could be compared to that of Mimas. They both show that...more
The surface of Callisto is deeply marked with craters. Craters are the little white marks in the picture. It looks like it might be the most heavily cratered body in the whole solar system. And some of...more
Given that only half of global warming is due to CO2, while another half is caused mainly by methane, world-leading scientists such as Professor of Global Environmental Health Kirk Smith and NASA’s Prof. James Hansen call for methane reduction strategies, for instance through reducing livestock, to be implemented rather than risky and untried geoengineering carbon sequestration strategies. Dr Smith writes: “One tonne of methane is responsible for nearly 100 times more warming over the first five years of its lifetime in the atmosphere than a tonne of CO2. Methane is removed from the atmosphere much more rapidly than CO2, with a half-life of 8.5 years compared with many decades for CO2.” According to NASA article entitled “Global warming in the twenty-first century: An alternative scenario“, co-authored by Professor Hansen: “Rapid warming in recent decades has been driven mainly by non-CO2 greenhouse gases (GHGs), such as chlorofluorocarbons (CFCs), Methane (CH4), and Nitrous Oxide (N2O), not by the products of fossil fuel burning. If sources of Methane and Ozone (O3) precursors were reduced in the future, the change in climate forcing by non-CO2 GHGs in the next 50 years could be near zero.” Source: Global warming in the twenty-first century: An alternative scenario NASA site – abbreviated version – Proceedings of the National Academy of Sciences (PNAS) – full article Date: 25 June 2009
A video and paper in Current Biology about a veined octopus, Amphioctopus marginatus, that carries coconut halves to deploy as a shelter has gotten a lot of play in the popular press. The story is usually accompanied by the claim that this is the first reported case of invertebrate tool use. Maybe this is true amongst the squishies (cephalopods), but I think that arthropods accomplish much more exciting feats of tool use every day. Coconut octopus, meet coconut crab. Earlier, I talked a little about coconut crabs, mentioning that they use mollusk shells when they are small, and eventually discard them as they grow. They also have an intermediate size behavior where they use hollowed out coconut shells as a shelter. Photos: Finn et al., 2009 and Nancy and Neil. First a disclaimer: I think cephalopods are awesome. They are probably the second coolest animal group behind mantis shrimp. Also, this video and paper represent a really interesting finding, and any antagonism in this post is meant to be humorous. I only take exception to the tool use claim. I, of course, realize that any assessment of “tool use” is completely dependent on how you define “tool use.” However, even by the researchers own exclusive definition, arthropods still beat their motile mollusk to the punch. Let’s see how they define tool use in order to exclude the numerous arthropod examples: …simple behaviours, such as the use of an object (or objects) as shelter, are not generally regarded as tool use, because the shelter is effectively in use all the time, whereas a tool provides no benefit until it is used for a specific purpose. This rules out examples such as the use of gastropod shells by hermit crabs, but includes situations where there is an immediate cost, but a deferred benefit, such as dolphins carrying sponges to protect against abrasion during foraging and where an object is carried around in a non-functional form to be deployed when required. Actually, I don’t see how this definition even negates hermit crabs from tool use. There is no benefit for the crab in dragging around a heavy shell or coconut on its back while it forages. It is only beneficial later, when the animal wants to rest or block an attack. I would say that is a fairly specific deferred purpose with an immediate cost. Regardless, there are a bunch of other examples of arthropod tool use that I can pull off the top of my head. - Spiders construct structurally elaborate webs as methods of defense as well as prey capture. In addition to the common trap webs, some Gladiator Spiders also make web nets that they hold in their front arms and use to pin prey to the ground. - Gonodactyloid mantis shrimp are capable of complex masonry work, chiseling into form and stacking walls of rubble and shells around their lairs. They strongly suggest planning capacities with this behavior. - The amphipod Phronima hollows out tunicate carcasses to live inside and drive around in the deep-sea in search of more prey. Again, calling any of this “tool use” is extremely definition dependent. However, under the above interpretation, I think many arthropods have just as strong a claim to tool use as the octopus. Regardless, squishies and crunchies need to get along so that they can join their tool-using forces to protect us from those goddamn dolphins.
Last month, I had the opportunity to attend a sea level rise forum in Raleigh, NC organized by the North Carolina Department of Environment and Natural Resources; Division of Coastal Management. The purpose of this forum was to discuss the latest sea level rise science, what it means for North Carolina’s coastal communities and how the state can begin to prepare for the changes to come. The two day event boasted an impressive line-up of expert speakers from around the country. Over 200 local and state decision makers, scientists, planners, engineers and environmental advocates participated in this event that seemed to foster true collaboration between all involved. This forum was just one step in a series that the Division of Coastal Management (DCM) plans to take to understand and plan for future sea level rise. Public Perceptions of Rising Seas Last year the DCM issued a 10-question scoping survey to gauge public perception about rising seas. During the forum Tancred Miller, Coastal Policy Analyst with the DCM, shared with the audience their results. Interestingly, 75% of participants believe sea level rise is happening and 66% believe that the state should take action now to plan and prepare for future sea level rise. The results of this survey will be used as a communications tool to help the DCM design public outreach opportunities and to address the gaps in the public’s understanding of this issue. This sea level rise forum was the second step in the DCM’s “sea level rise roadmap” (see diagram); their clear path forward to address this issue in the state. To read the full report and analysis click here. “Potential Death Sentence” for North Carolina Beaches According to Dr. Stanley Riggs, Distinguished Professor of Geology at East Carolina University, most North Carolina beaches are eroding at a long term average of 15 feet/year. This rapid rate of erosion coupled with rising seas and a limited offshore sand supply to replenish the beaches creates a “potential death sentence” for the future of the state’s barrier island communities. “Humans are just as impactful as storms”, Dr. Riggs explained. Because we have heavily urbanized our coastlines we have essentially stopped the natural migration of barrier island ecosystems; the environments that naturally protect human development from storm surges. Shocking Statistics from Dr. Riggs: * Somewhere between 350 and 500 houses on NC beaches are sandbagged and/or in danger of washing away. * Twenty-four miles of coastal highway are collapsing and 100+ miles are threatened. Dr. Riggs made a big push for responsible coastal planning and development by ending his presentation posing the following question: “We have a choice”, said Dr. Riggs. “Should we engineer our dynamic coastal system to keep up with the ongoing rise in sea level or should we begin adapting to these changes now to maintain a sustainable coastal system and associated economy?” Future Sea Level Rise Estimates for North Carolina There were more than a few scientists at this forum that addressed past, present and future sea level rise changes in North Carolina. Estimates were taken from numerous resources including Intergovernmental Panel on Climate Change (IPCC) assessments and various peer-reviewed studies. All of the most recent studies conclude that sea level is rising much faster than predicted. Scientists like Dr. 
Gordon Hamilton, Research Professor at the University of Maine, explained that the range in estimates that are out there on sea level rise depend on how the model accounts for changes in ice sheet dynamics. Ice sheets are extremely sensitive; they can decay rapidly in non-linear ways which leads to uncertainty in sea level rise estimates. “The general consensus since the IPCC’s latest assessment in 2007 is that we can expect at the very least, 1 meter of global sea level rise by the end of the century.” - Dr. Gordon Hamilton The Dialogue Continues What will ultimately determine the fate of North Carolina’s coastal communities, ecosystems and associated economies will depend on how the state and her residents choose to mitigate and adapt to rapidly rising seas and a changing climate. The Division of Coastal Management along with all of their partners should be applauded by the dialogue that they have started and have committed to continuing in the state. There is a lot of activity going on in North Carolina on the topic of sea level rise, adaptation and mitigation and the state is positioning itself to be a leader in our region in forward-thinking planning that accounts for global warming impacts to our treasured coastal places. * Workshop on March 2nd and 3rd, 2010: Planning for North Carolina’s Future; Ask the Climate Question * SACE Archive Webinar: “Planning to Protect: Helping SE Communities think about Adaptation” * Video: “Treasured Places in Peril: The Outer Banks” * New Resource: NOAA Climate Services Leave a comment
The Center is registered in EPA's Acid Rain Program (since 2003). Some of the major accomplishments of the program through 2009 include:
· Power plants have decreased emissions of SO2, a precursor to acid rain, to 5.7 million tons in 2009, a 67 percent decrease from 1980 levels and a 64 percent decrease from 1990 levels. The Acid Rain Program was established under the 1990 Clean Air Act Amendments and requires significant emission reductions of SO2 and nitrogen oxides (NOx) from the electric power industry. The program sets a permanent cap on the total amount of SO2 that may be emitted by electric generating units in the United States, and includes provisions for trading and banking emission allowances. The program is phased in, with this year phasing in the final 2010 SO2 cap set at 8.95 million tons, a level of about one-half of the emissions from the power sector in 1980. More information on the Acid Rain Program (EPA)
· Air quality has improved; the average amount of ambient SO2 decreased 76 percent between 1980 and 2009. The largest single-year reduction in SO2 since the start of the Acid Rain Program occurred between 2008 and 2009.
· Reductions in fine particle levels yielded benefits including about 20,000-50,000 lives saved annually.
· Many lakes and streams affected by acid rain in the east are exhibiting signs of recovery.
In a typical year, it is not uncommon for a dozen or so comets to come within range of amateur telescopes. During the month of October, 2010, a small comet will pass unusually close to Earth. On Oct. 20, Comet Hartley 2 will pass just over 11 million miles (18 million km) from Earth. That is close enough for the comet to be seen through binoculars or even, in the darkest skies, with the naked eye. Amateur stargazers aren’t the only ones looking out for Hartley 2 this month. In September 2007, NASA woke up its hibernating DI spacecraft and, in November, sent it the maneuvering instructions to intercept Hartley 2. The spacecraft is precisely on schedule to rendezvous with the comet on November 4 as it approaches the sun. This week’s online current events activity is a study of comets, the Hartley 2 Comet, and NASA’s attempt to study it. Begin your investigation into comets by visiting Worldbook@NASA, which features excellent overviews on many topics related to space and astronomy, including Comets. As you read this page, look for answers to the following questions: - What are some of the ingredients that make up a comet? - How big are most comets? - How does a comet tail form, and which direction does it always point? - What is the relationship between comets and meteor showers? - What have scientists learned about the nucleus of a comet? More information about comets can be found at the Nine Planets site. As you read, look for answers to these questions: - Name two examples of comet appearances in antiquity (ancient human history) - How many comets have been cataloged? - What are the five parts of a comet? - How do comets “die”? Comet Hartley 2 Now that you have some solid background information about comets in general, let’s see what we can learn about Comet Hartley 2. Start by going to the web site of Sky and Telescope magazine, and read Comet Hartley 2 At Its Best, written by Greg Bryant. This first half of the page is an ongoing blog with dated status updates, followed by the original article. As you read, just understand that you are reading backwards in time. When you get to the October 8 update, watch the wide-field animation created by Ernesto Guido and Giovanni Sostero. Can you see the slight movement of the comet against the stationary star field? As you read the original article, look for answers to the following questions: - In what year was Hartley 2 discovered? - Why does the moon play a factor in viewing comets? - What does the EPOXI spacecraft’s name stand for? - How close will the spacecraft get to the comet? - What does the number 2 mean in “Hartley 2”? - Why was this comet beyond visual discovery until after 1982? Learn more about the EXOXI mission by visiting the official mission web site. From the home page, click Mission on the left and read the 10 phases of the mission. What is the purpose of the Earth fly-bys? What happens during the Comet Approach Phase? What will scientists be looking for during the Encounter Phase? What data will be gathered? Finish your online study of comets this week by Comparing Comets. This is a student activity developed by NASA in which students can make their own observations based on photos of two different comet nuclei. Print this worksheet or follow along online and record your answers separately. Follow the directions on each page. On page 2, as you are looking at the two photos of comet surfaces, listen to this audio recording of students making their own observations about the comets in a teacher-led discussion. 
Comets are not easy to study. Because of their speed and orbit, it is (currently) impossible for humans to travel to comets to make firsthand observations. Instead, scientists send up remotely controllable probes to intercept comets, take photographs, and make a variety of different measurements. This practice is not limited to astronomers. For centuries humans have been creating tools used to measure, weigh, count, or in many other ways analyze things that are beyond our physical senses. In a current or recent issue of the e-edition, look for news stories that cite examples of people using tools to measure or analyze. A good example might be DNA testing for criminal evidence, but you will find many others. Based on your findings, how important have these tools become in our daily lives? Why is it becoming increasingly important to measure, collect, and analyze information?
posted by Dr. Amber Jenkins Mimicry is the sincerest form of flattery. At least that’s how it goes in the technology world. Today’s researchers are developing solar-powered cells that look and behave exactly like plant leaves — they're green, they blow in the wind, and they absorb light and convert it into electrons. Straight from Mother Nature’s textbook. Now an illuminating new design has come to the table. It’s less a case of mimicking a plant and more a case of harnessing real-life fruit — in this case, tomatoes. Yep, those acidic little yummies are being used to run an LED table lamp called “Still Light.” The lamp, which comes out of d-VISION, an Israeli internship program for Product Development and Industrial Design, hooks up tomatoes to copper and zinc electrodes. The tomatoes act as electrolytes for the current to pass through and help power the LED. Like all good things, it comes to an end — when the tomatoes eventually rot. This isn’t nature’s solution to our energy problems. But it is a cool new design and might help spur on the next generation of tomato technology.
API for probabilities.finite-distributions Full namespace name: clojure.contrib.probabilities.finite-distributions Finite probability distributions This library defines a monad for combining finite probability Public Variables and Functions Usage: (certainly v) Returns a distribution in which the single value v has probability 1. Usage: (choose & choices) Construct a distribution from an explicit list of probabilities and values. They are given in the form of a vector of probability-value pairs. In the last pair, the probability can be given by the keyword :else, which stands for 1 minus the total of the other probabilities. Variant of the dist monad that can handle undefined values. Usage: (cond-prob pred dist) Returns the conditional probability for the values in dist that satisfy the predicate pred. Monad describing computations on fuzzy quantities, represented by a finite probability distribution for the possible values. A distribution is represented by a map from values to probabilities. Usage: (join-with f dist1 dist2) Returns the distribution of (f x y) with x from dist1 and y from dist2. Usage: (make-distribution coll f) Returns the distribution in which each element x of the collection has a probability proportional to (f x) Usage: (normalize weights) Convert a weight map (e.g. a map of counter values) to a distribution by multiplying with a normalization factor. If the map has a key :total, its value is assumed to be the sum over all the other values and it is used for normalization. Otherwise, the sum is calculated explicitly. The :total key is removed from the resulting distribution. Usage: (prob pred dist) Return the probability that the predicate pred is satisfied in the distribution dist, i.e. the sum of the probabilities of the values that satisfy pred. Usage: (uniform coll) Return a distribution in which each of the elements of coll has the same probability. Usage: (zipf s n) Returns the Zipf distribution in which the numbers k=1..n have probabilities proportional to 1/k^s.
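As a language-neutral illustration of these operations (this is my own rough Python sketch, not part of the Clojure library, and it omits the monad machinery), a finite distribution can be represented as a plain map from values to probabilities:

from collections import defaultdict

def normalize(weights):
    # Convert a weight map (e.g. counter values) into a probability distribution.
    total = sum(weights.values())
    return {value: weight / total for value, weight in weights.items()}

def uniform(values):
    # Distribution in which each element of the collection has the same probability.
    values = list(values)
    p = 1.0 / len(values)
    return {value: p for value in values}

def prob(pred, dist):
    # Probability that the predicate is satisfied under the distribution.
    return sum(p for value, p in dist.items() if pred(value))

def join_with(f, dist1, dist2):
    # Distribution of f(x, y) with x drawn from dist1 and y from dist2.
    out = defaultdict(float)
    for x, px in dist1.items():
        for y, py in dist2.items():
            out[f(x, y)] += px * py
    return dict(out)

# Example: the distribution of the sum of two fair dice, and P(sum >= 10)
die = uniform(range(1, 7))
two_dice = join_with(lambda a, b: a + b, die, die)
print(prob(lambda s: s >= 10, two_dice))  # 6/36, about 0.167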
Line-of-sight propagation refers to electromagnetic radiation or acoustic wave propagation. Electromagnetic transmission includes light emissions traveling in a straight line. The rays or waves may be diffracted, refracted, reflected, or absorbed by the atmosphere and by material obstructions, and generally cannot travel over the horizon or behind obstacles. At low frequencies (below approximately 2 MHz or so) radio signals travel as ground waves, which follow the Earth's curvature due to diffraction in the layers of the atmosphere. This enables AM radio signals in low-noise environments to be received well after the transmitting antenna has dropped below the horizon. Additionally, frequencies between approximately 1 and 30 MHz can be reflected by the ionospheric F1/F2 layers, thus giving radio transmissions in this range a potentially global reach (see shortwave radio), again along multiple deflected straight lines. The effects of multiple diffraction or reflection lead to macroscopically "quasi-curved paths". However, at higher frequencies and in lower levels of the atmosphere, neither of these effects is significant. Thus any obstruction between the transmitting antenna and the receiving antenna will block the signal, just like the light that the eye may sense. Therefore, since the ability to visually see a transmitting antenna (disregarding the limitations of the eye's resolution) roughly corresponds to the ability to receive a radio signal from it, the propagation characteristic of high-frequency radio is called "line-of-sight". The farthest possible point of propagation is referred to as the "radio horizon". In practice, the propagation characteristics of these radio waves vary substantially depending on the exact frequency and the strength of the transmitted signal (a function of both the transmitter and the antenna characteristics). Broadcast FM radio, at comparatively low frequencies of around 100 MHz, is less affected by the presence of buildings and forests.
Radio horizon
The radio horizon is the locus of points at which direct rays from an antenna are tangential to the surface of the Earth. If the Earth were a perfect sphere and there were no atmosphere, the radio horizon would be a circle. The radio horizons of the transmitting and receiving antennas can be added together to increase the effective communication range. Antenna heights above 1,000,000 feet (189 miles; 305 kilometres) will cover the entire hemisphere and not increase the radio horizon. Radio wave propagation is affected by atmospheric conditions, ionospheric absorption, and the presence of obstructions, for example mountains or trees. A simple formula that includes the effect of the atmosphere gives the range as d ≈ 4.12 √h, with d the radio horizon in kilometres and h the antenna height in metres (the derivation is given below). Such simple formulas give a best-case approximation of the maximum propagation distance but are not sufficient to estimate the quality of service at any location.
Earth bulge and atmosphere effect
Earth bulge is a term used in telecommunications. It refers to the circular segment of the Earth's profile which blocks off long-distance communications. Since the geometric line of sight passes at varying heights over the Earth, the propagating radio wave encounters slightly different propagation conditions over the path. The usual effect of the declining pressure of the atmosphere with height is to bend radio waves down toward the surface of the Earth, effectively increasing the Earth's radius, and the distance to the radio horizon, by a factor around 4/3. This k-factor can change from its average value depending on weather.
Geometric distance to horizon
Assuming a perfect sphere with no terrain irregularity, the distance to the horizon from a high-altitude transmitter (i.e., the line of sight) can readily be calculated. Let R be the radius of the Earth and h be the altitude of a telecommunication station. The line-of-sight distance d of this station is given by the Pythagorean theorem:
d = √((R + h)² − R²) = √(2Rh + h²)
Since the altitude of the station is much less than the radius of the Earth, this simplifies to
d ≈ √(2Rh)
If the height is given in metres, and the distance in kilometres,
d ≈ 3.57 √h
If the height is given in feet, and the distance in miles,
d ≈ 1.23 √h
The actual service range
The above analysis doesn't take the effect of the atmosphere on the propagation path of the RF signals into consideration. In fact, RF signals don't propagate in straight lines: because of the refractive effects of atmospheric layers, the propagation paths are somewhat curved. Thus, the maximum service range of the station is not equal to the geometric line-of-sight distance. Usually a factor k is used in the equation above:
d ≈ √(2kRh)
k > 1 means a geometrically reduced bulge and a longer service range. On the other hand, k < 1 means a shorter service range. Under normal weather conditions k is usually chosen to be 4/3, which means that the maximum service range increases by about 15%:
d ≈ 4.12 √h for h in metres and d in km
d ≈ 1.41 √h for h in feet and d in miles
But in stormy weather, k may decrease and cause fading in transmission. (In extreme cases k can be less than 1.) That is equivalent to a hypothetical decrease of the Earth's radius and an increase of the Earth bulge. In normal weather conditions, the service range of a station at an altitude of 1500 m with respect to receivers at sea level can be found as d ≈ 4.12 √1500 ≈ 160 km.
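The formulas above are easy to check numerically. The following short Python sketch is an added illustration, not part of the original article; it computes the geometric horizon and the effective service range with the 4/3 k-factor for a given antenna height:

import math

EARTH_RADIUS_M = 6.37e6  # mean Earth radius in metres

def horizon_km(height_m, k=1.0):
    # Distance to the radio horizon in km for an antenna height in metres.
    # k is the effective-Earth-radius factor; k = 4/3 models standard refraction.
    effective_radius = k * EARTH_RADIUS_M
    d = math.sqrt(2.0 * effective_radius * height_m + height_m ** 2)
    return d / 1000.0

geometric = horizon_km(1500)             # about 138 km, no atmosphere
service   = horizon_km(1500, k=4.0/3.0)  # about 160 km, matching the example above
print(round(geometric), round(service))

The radio horizons of the transmitter and the receiver computed this way can be added together to estimate the maximum range of a link.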
Line-of-sight propagation as a prerequisite for radio distance measurements
The travel time of radio waves between transmitters and receivers can be measured regardless of the type of propagation. Generally, however, the travel time represents the distance between transmitter and receiver only when line-of-sight propagation is the basis for the measurement. This applies to RADAR, to real-time locating and to LIDAR. The rule is this: travel-time measurements for determining the distance between pairs of transmitters and receivers generally require line-of-sight propagation for proper results. Whereas having just any type of propagation may suffice to enable communication, it never coincides with the requirement of having strictly line of sight, at least temporarily, as the means to obtain properly measured distances. The travel-time measurement may always be biased by multipath propagation, which can include line-of-sight as well as non-line-of-sight propagation in any random share. A qualified system for measuring the distance between transmitters and receivers must take this phenomenon into account; filtering the signals traveling along the various paths is what makes the approach either operationally sound or just tediously irritating.
Impairments to line-of-sight propagation
Low-powered microwave transmitters can be foiled by tree branches, or even heavy rain or snow. If a direct visual fix cannot be taken, it is important to take into account the curvature of the Earth when calculating line of sight from maps. The presence of objects not in the direct visual line of sight can interfere with radio transmission. This is caused by diffraction effects: for the best propagation, a volume known as the first Fresnel zone should be kept free of obstructions. Radiation reflected from the ground plane also acts to cancel out the direct signal; this cancellation, combined with the free-space r−2 propagation loss, leads to an overall r−4 propagation loss. The effect can be reduced by raising either or both antennas further from the ground: the reduction in loss achieved is known as height gain.
Mobile telephones
Although the frequencies used by mobile phones (cell phones) are in the line-of-sight range, they still function in cities. This is made possible by a combination of the following effects:
- r−4 propagation over the rooftop landscape
- diffraction into the "street canyon" below
- multipath reflection along the street
- diffraction through windows, and attenuated passage through walls, into the building
- reflection, diffraction, and attenuated passage through internal walls, floors and ceilings within the building
The combination of all these effects makes the mobile phone propagation environment highly complex, with multipath effects and extensive Rayleigh fading. For mobile phone services these problems are tackled using:
- rooftop or hilltop positioning of base stations
- many base stations (a phone can typically see six at any given time)
- rapid handoff between base stations (roaming)
- extensive error correction and detection in the radio link
- operation of mobile phones in tunnels supported by split cable antennas
- local repeaters inside complex vehicles or buildings
Other conditions may physically disrupt the connection without prior notice:
- local failure when using the mobile phone in buildings with steel-reinforced concrete
- temporary failure inside metal constructions such as elevator cabins, trains, cars and ships
See also
- Anomalous propagation
- Field strength in free space
- Knife-edge effect
- Non-line-of-sight propagation
- Over-the-horizon radar
- Radial (radio)
- Rician fading, stochastic model of line-of-sight propagation
References and notes
- Christopher Haslett, Essentials of Radio Wave Propagation, Cambridge University Press, 2008, ISBN 052187565X, pages 119-120
- The mean radius of the Earth is ≈6.37 × 10^6 metres = 6370 km. See Earth radius.
- R. Busi: Technical Monograph 3108-1967, High Altitude VHF and UHF Broadcasting Stations, European Broadcasting Union, Brussels, 1967
- This analysis is for high-altitude to sea-level reception. In microwave radio link chains, both stations are at high altitudes.
- Article on the importance of Line Of Sight for UHF reception
- Attenuation Levels Through Roofs
- Approximating 2-Ray Model by using Binomial series, by Matthew Bazajian
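The r−4 behaviour described under "Impairments to line-of-sight propagation" is often illustrated with the two-ray ground-reflection model (see the 2-Ray Model reference above). As a rough sketch, with made-up example values rather than figures from this article, the far-field two-ray approximation for received power can be evaluated like this:

def two_ray_received_power(p_t, g_t, g_r, h_t, h_r, d):
    # Far-field two-ray ground-reflection approximation: received power
    # falls off as d**-4 instead of the free-space d**-2, and grows with
    # the squares of the antenna heights (the "height gain" mentioned above).
    return p_t * g_t * g_r * (h_t ** 2) * (h_r ** 2) / (d ** 4)

p_near = two_ray_received_power(p_t=10.0, g_t=1.0, g_r=1.0, h_t=30.0, h_r=1.5, d=1000.0)
p_far  = two_ray_received_power(p_t=10.0, g_t=1.0, g_r=1.0, h_t=30.0, h_r=1.5, d=2000.0)
print(p_near / p_far)  # 16: doubling the distance costs a factor of 16 (about 12 dB)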
Posted Aug 24, 2003 by Joe Otten
The thing that has always puzzled me about black holes is what happens to the entropy of objects that fall into them. By its description, a single infinitely dense point seems to have a very low entropy. But if we then let a high-entropy object fall into a black hole, we appear to have a contradiction of the second law of thermodynamics.
This topic is an active one in the field of astrophysics and quantum gravitation. In general, however, it is required of a black hole that its event horizon always increase, much like the total entropy of a closed system (i.e. the universe). This thought led to the hypothesis that a black hole's entropy is proportional to its event horizon's surface area. This came to be the Bekenstein-Hawking formula: S = k c^3 A / (4 G ħ), where A is the area of the event horizon. If a black hole has an entropy, then it follows all the other laws of thermodynamics and has a temperature, also. So the black hole will radiate energy. This is where things start getting fuzzy. How can something that is impossible to escape radiate anything? I'm afraid I don't know much about what's new in that field of thought.
Thanks for that. There is Hawking radiation, but I guess that is not what you are talking about. Could it be a mistake to consider a black hole demarcated by its event horizon to be an object, and thus to apply thermodynamic principles to that object? After all, the event horizon is not a physical structure and need not be in the same place from one moment to the next. (That episode of Voyager where the ship was stuck inside the event horizon of a black hole, looking for a crack to get out, would have been hilarious if it had been slightly less obtuse.)
The natural answer is that the laws of physics break down in a black hole. The entropy just vanishes. Entropy is a property of the universe, and all properties of the universe break down at the event horizon of a black hole.
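To put numbers to the idea that a black hole's entropy is proportional to its horizon area, here is a small Python sketch (an added illustration, not part of the original thread) that evaluates the Bekenstein-Hawking entropy for a Schwarzschild black hole of a given mass, using standard physical constants:

import math

G    = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c    = 2.998e8     # speed of light, m/s
hbar = 1.055e-34   # reduced Planck constant, J s
k_B  = 1.381e-23   # Boltzmann constant, J/K

def bekenstein_hawking_entropy(mass_kg):
    # Entropy (J/K) of a non-rotating black hole of the given mass.
    r_s = 2.0 * G * mass_kg / c**2      # Schwarzschild radius
    area = 4.0 * math.pi * r_s**2       # horizon area
    return k_B * c**3 * area / (4.0 * G * hbar)

solar_mass = 1.989e30  # kg
S = bekenstein_hawking_entropy(solar_mass)
print(S)          # on the order of 1e54 J/K
print(S / k_B)    # on the order of 1e77 in units of k_B - enormous compared to an ordinary star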
As shown in the previous section, the table expression in the SELECT command constructs an intermediate virtual table by possibly combining tables, views, eliminating rows, grouping, etc. This table is finally passed on to processing by the select list. The select list determines which columns of the intermediate table are actually output. The simplest kind of select list is * which emits all columns that the table expression produces. Otherwise, a select list is a comma-separated list of value expressions (as defined in Section 4.2). For instance, it could be a list of column names: SELECT a, b, c FROM ... The columns names a, b, and c are either the actual names of the columns of tables referenced in the FROM clause, or the aliases given to them as explained in Section 188.8.131.52. The name space available in the select list is the same as in the WHERE clause, unless grouping is used, in which case it is the same as in the HAVING clause. If more than one table has a column of the same name, the table name must also be given, as in: SELECT tbl1.a, tbl2.a, tbl1.b FROM ... When working with multiple tables, it can also be useful to ask for all the columns of a particular table: SELECT tbl1.*, tbl2.a FROM ... (See also Section 7.2.2.) If an arbitrary value expression is used in the select list, it conceptually adds a new virtual column to the returned table. The value expression is evaluated once for each result row, with the row's values substituted for any column references. But the expressions in the select list do not have to reference any columns in the table expression of the FROM clause; they can be constant arithmetic expressions, for instance. The entries in the select list can be assigned names for subsequent processing, such as for use in an ORDER BY clause or for display by the client application. For example: SELECT a AS value, b + c AS sum FROM ... If no output column name is specified using AS, the system assigns a default column name. For simple column references, this is the name of the referenced column. For function calls, this is the name of the function. For complex expressions, the system will generate a generic name. The AS keyword is optional, but only if the new column name does not match any PostgreSQL keyword (see Appendix C). To avoid an accidental match to a keyword, you can double-quote the column name. For example, VALUE is a keyword, so this does not work: SELECT a value, b + c AS sum FROM ... but this does: SELECT a "value", b + c AS sum FROM ... For protection against possible future keyword additions, it is recommended that you always either write AS or double-quote the output column name. Note: The naming of output columns here is different from that done in the FROM clause (see Section 184.108.40.206). It is possible to rename the same column twice, but the name assigned in the select list is the one that will be passed on. After the select list has been processed, the result table can optionally be subject to the elimination of duplicate rows. The DISTINCT key word is written directly after SELECT to specify this: SELECT DISTINCT select_list ... (Instead of DISTINCT the key word ALL can be used to specify the default behavior of retaining all rows.) Obviously, two rows are considered distinct if they differ in at least one column value. Null values are considered equal in this comparison. Alternatively, an arbitrary expression can determine what rows are to be considered distinct: SELECT DISTINCT ON (expression [, expression ...]) select_list ... 
Here expression is an arbitrary value expression that is evaluated for all rows. A set of rows for which all the expressions are equal are considered duplicates, and only the first row of the set is kept in the output. Note that the "first row" of a set is unpredictable unless the query is sorted on enough columns to guarantee a unique ordering of the rows arriving at the DISTINCT filter. (DISTINCT ON processing occurs after ORDER BY sorting.) The DISTINCT ON clause is not part of the SQL standard and is sometimes considered bad style because of the potentially indeterminate nature of its results. With judicious use of GROUP BY and subqueries in FROM, this construct can be avoided, but it is often the most convenient alternative.
As soon as they lay their eggs, the female cichlids scoop them up into their mouths and incubate them until they hatch. R. Buckminster Fuller was a twentieth century scientist, philosopher, inventor, and was also named a great architect. Frogs also aren’t fussy eaters: any live prey will do. Some large species of frogs can gulp up a mouse, bat, or small snake in one mouthful, which is fortunate, because frogs can’t chew. If they have any teeth at all, they’re usually only good for holding onto the prey.
Is Copy and Paste Programming Really a Problem? But it’s also a natural way to get stuff done – find something that already works, something that looks close to what you want to do, take a copy and use it as a starting point. Almost everybody has done in at some point. This is because there are times when copy and paste programming is not only convenient, but it might also be the right thing to do. First of all, let’s be clear what I mean by copy and paste. This is not copying code examples off of the Internet, a practice that comes with its own advantages and problems. By copy and paste I mean when programmers take a shortcut in reuse – when they need to solve a problem that is similar to another problem in the system, they’ll start by taking a copy of existing code and changing what they need to. Early in design and development, copy and paste programming has no real advantage. The code and design are still plastic, this is your chance to come up with the right set of abstractions, routines and libraries to do what the system needs to do. And there’s not a lot to copy from anyways. It’s late in development when you already have a lot of code in place, and especially when you are maintaining large, long-lived systems, that the copy and paste argument gets much more complicated. Why Copy and Paste? Programmers copy and paste because it saves time. First, you have a starting point, code that you know works. All you have to do is figure out what needs to be changed or added. You can focus on the problem you are trying to solve, on what is different, and you only need to understand what you are going to actually use. You are more free to iterate and make changes to fit the problem in front of you – you can cleanup code when you need to, delete code that you don’t need. All of this is important, because you may not know what you will need to keep, what you need to change, and what you don’t need at all until you are deeper into solving the problem. Copy and paste programming also reduces risk. If you have to go back and change and extend existing code to do what it does today as well as to solve your new problem, you run the risk of breaking something that is already working. It is usually safer and less expensive (in the short term at least) to take a copy and work from there. What if you are building a new B2B customer interface that will be used by a new set of customers? It probably makes sense to take an existing interface as a starting point, reuse the scaffolding and plumbing and wiring at least and as much of the the business code as makes sense, and then see what you need to change. In the end, there will be common code used by both interfaces (after all, that’s why you are taking a copy), but it could take a while before you know what this code is. Finding a common design, the right abstractions and variations to support different implementations and to handle exceptions can be difficult and time consuming. You may end up with code that is harder to understand and harder to maintain and change in the future – because the original design didn’t anticipate the different exceptions and extensions, and refactoring can only take you so far. You may need a new design and implementation. Changing the existing code, refactoring or rewriting some of it to be general-purpose, shared and extendable, will add cost and risk to the work in front of you. You can’t afford to create problems for existing customers and partners just because you want to bring some new customers online. 
You’ll need to be extra careful, and you’ll have to understand not only the details of what you are trying to do now (the new interface), but all of the details of the existing interface, its behavior and assumptions, so that you can preserve all of it. It’s naïve to think that all of this behavior will be captured in your automated tests – assuming that you have a good set of automated tests. You’ll need to go back and redo integration testing on the existing interface. Getting customers and partners who may have already spent weeks or months to test the software to retest it is difficult and expensive. They (justifiably) won’t see the need to go through this time and expense because what they have is already working fine. Copying and pasting now, and making a plan to come back later to refactor or even redesign if necessary towards a common solution, is the right approach here. When Copy and Paste makes sense In Making Software’s chapter on “Copy-Paste as a Principled Engineering Tool”, Michael Godfrey and Cory Kapser explore the costs of copy and paste programming, and the cases where copy and paste make sense: - Forking – purposely creating variants for hardware or platform variation, or for exploratory reasons. - Templating –some languages don’t support libraries and shared functions well so it may be necessary to copy and paste to share code. Somewhere back in the beginning of time, the first COBOL programmer wrote a complete COBOL program – everybody else after that copied and pasted from each other. - Customizing – creating temporary workarounds – as long as it is temporary. practice of “clone and own” to solve problems in big development organizations. One team takes code from another group and customizes it or adapts it to their own purposes – now they own their copy. This is a common approach with open source code that is used as a foundation and needs to be extended to solve a proprietary problem. When Copy and Paste becomes a Problem When to copy and paste, and how much of a problem it will become over time, depends on a few important factors. First, the quality of what you are copying – how understandable the code is, how stable it is, how many bugs it has in it. You don’t want to start off by inheriting somebody else’s problems. How many copies have been made. A common rule of thumb from Fowler and Beck`s Refactoring book is “three strikes and you refactor”. This rule comes from recognizing that by making a copy of something that is already working and changing the copy, you’ve created a small maintenance problem. It may not be clear what this maintenance problem is yet or how best to clean it up, because only two cases are not always enough to understand what is common and what is special. But the more copies that you make, the more of a maintenance problem that you create – the cost of making changes and fixes to multiple copies, and the risk of missing making a change or fix to all of the copies increases. By the time that you make a third copy, you should be able to see patterns – what’s common between the code, and what isn’t. And if you have to do something in three similar but different ways, there is a good chance that there will be a fourth implementation, and a fifth. By the third time, it’s worthwhile to go back and restructure the code and come up with a more general-purpose solution. How often you have to change the copied code and keep it in sync – particularly, how often you have to change or fix the same code in more than one place. 
How well you know the code, do you know that there are clones and where to find them? How long it takes to find the copies, and how sure you are that you found them all. Tools can help with this. Source code analysis tools like clone detectors can help you find copy and paste code – outright copies and code that is not the same but similar (fuzzier matching with fuzzier results). Copied code is often fiddled with over time by different programmers, which makes it harder for tools to find all of the copies. Some programmers recommend leaving comments as markers in the code when you make a copy, highlighting where the copy was taken from, so that a maintenance programmer in the future making a fix will know to look for and check the other code. Copy and Paste programming doesn’t come for free. But like a lot of other ideas and practices in software development, copy and paste programming isn’t right or wrong. It’s a tool that can be used properly, or abused. Brian Foote, one of the people who first recognized the Big Ball of Mud problem in software design, says that copy and paste programming is the one form of reuse that programmers actually follow, because it works. It’s important to recognize this. If we’re going to Copy and Paste, let's do a responsible job of it. (Note: Opinions expressed in this article and its replies are the opinions of their respective authors and not those of DZone, Inc.)
Podcasts & RSS Feeds Fri November 18, 2011 Climate Panel: More Extreme Weather On The Way Brace yourself for more extreme weather. A group of more than 200 scientists convened by the United Nations says in a new report that climate change will bring more heat waves, more intense rainfall and more expensive natural disasters. These conclusions are from the latest effort of the Intergovernmental Panel on Climate Change — a consensus statement from researchers around the world. And since this is a consensus, the conclusions are carefully couched. Take, for example, the issue of rainfall. "It is likely that the frequency of heavy precipitation will increase in the 21st century over many regions," says the report, which defines "likely" as more than a 66 percent chance. You might expect heavy rainfall would lead to flooding, but the report is reluctant to make that strong a link. One of the authors, David Easterling from the National Climatic Data Center in Asheville, N.C., says dams and flood-control projects in some places can handle the heavier bursts of rain, so it's not automatically the case that downpours lead to flooding. Another example of this deliberate, cautious approach is evident in the report's conclusions about hurricanes and typhoons (hurricanes' equivalent in the Western Pacific). The science suggests strongly that hurricanes will eventually become more powerful, "but we won't really be able to detect an increase for another 30 to 40 years," Easterling tells NPR. "It may be there, but the increases are not huge. It's not like you'll see a doubling of wind speeds, but [storms are] expected to become more intense." That said, the report finds no compelling evidence that hurricanes and typhoons will become more frequent as a result of climate change. Much clearer is the trend about who is vulnerable to disaster. The report finds that 95 percent of the lives lost to natural disasters are in the developing world, where people often lack the infrastructure and resources to cope with calamity. Most of the financial costs are borne by the developed world, where valuable property is in harm's way. And that's not simply because of natural disasters — it's principally because of the deliberate choices we've made to develop along vulnerable coastlines and floodplains. You don't have to look far for evidence of those costs. This year "has been one of the most costly from extreme weather events, with more billion-dollar events than ever before," says Mindy Lubber, president of Ceres, a nonprofit that helps businesses and investors adapt to climate change. "The drought and wildfires in the Southwest and Southern Plains, for example, cost more than $9 billion in direct damage to cattle, agriculture and infrastructure." And Hurricane Irene, which killed at least 45 people, cost an additiional $7 billion. "Perhaps these multibillion-dollar events that are coming at us fast and furious will be enough to get policymakers to sit up and listen and realize we've got to change — and we have to move quickly," Lubber says. And perhaps a report that trips over itself to be extra cautious — as the IPCC's tangled jargon frequently does — will also garner more credibility for the statements it does make with confidence.
About this Base Converter
Base-2 to base-62 are accepted. "A" stands for 10, "Z" for 35, "a" (lower-case) for 36 and "z" (lower-case) for 61. Decimals are supported. This is a custom function because PHP's base_convert() doesn't accept decimals and only goes up to base-36. It's only as precise as PHP is, so don't blindly copy the smallest decimal thinking it will always be correct. Is there any standard for displaying numbers higher than base-36? I've used lowercase letters to go up to base-62, but I couldn't find out if that's what is commonly done. (Then again, I guess nothing is commonly done, since anything beyond base-16 doesn't really have much use, to my knowledge.) Fun game: enter your name and supply base-36 (or higher) as the starting base and see what number you get in another base. My first name in base-38, for instance, returns EPKCO in base-42.
What's this about?
A base is the system with which numbers are displayed. If we talk about base-n, the system has n characters (including 0) available to display a number. Numbers are represented with digits which are smaller than n. Therefore, 3 in base-3 is 10: because that system doesn't have a "3", it starts over (1, 2, 10, 11, 12, 20, 21, 22, 100, etc.). The base we usually use is base-10, because we have 10 digits (when including 0) until we start over again (..., 8, 9, 10). In base-2 (binary), we only have 2 characters, i.e. 0 and 1, until we start over again. Following this example, the binary number 10 is 2 in our (base-10) system.
Does it make sense that a finite fraction ("decimal") is infinite in another base? It totally does. If you want to convert 645 from base-8 to base-10, you do 6*8^2 + 4*8^1 + 5*8^0 = 421. After the decimal point you keep on decrementing the exponent, meaning that if you have 21.35 in base-7 you get to its base-10 equivalent by doing 2*7^1 + 1*7^0 + 3*7^-1 + 5*7^-2. 7^-1 (= 1/7), however, is 0.142857... in base-10, while it's simply written as 0.1 in base-7.
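As a rough illustration of the algorithm described above (this is not the PHP code behind the converter; the digit set and the rounding choices are my own assumptions), a base-to-base conversion with fractional support can be sketched in Python like this:

import string

DIGITS = string.digits + string.ascii_uppercase + string.ascii_lowercase  # 0-9, A-Z, a-z: up to base-62

def to_decimal(number, base):
    # Convert a string such as '21.35' in the given base to a base-10 float.
    int_part, _, frac_part = number.partition('.')
    value = 0.0
    for exponent, digit in enumerate(reversed(int_part)):
        value += DIGITS.index(digit) * base ** exponent
    for exponent, digit in enumerate(frac_part, start=1):
        value += DIGITS.index(digit) * base ** -exponent
    return value

def from_decimal(value, base, places=10):
    # Convert a non-negative base-10 number to a string in the given base
    # (the fractional part is truncated after `places` digits).
    int_part = int(value)
    frac = value - int_part
    digits = ''
    while True:
        digits = DIGITS[int_part % base] + digits
        int_part //= base
        if int_part == 0:
            break
    if frac > 0:
        digits += '.'
        for _ in range(places):
            frac *= base
            digit, frac = int(frac), frac - int(frac)
            digits += DIGITS[digit]
    return digits

print(to_decimal('645', 8))                        # 421.0
print(from_decimal(to_decimal('21.35', 7), 10))    # roughly 15.5306122448, limited by float precision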
Decimals: Terminating or Repeating? Date: 01/26/2001 at 15:10:31 From: Seegee Subject: How to tell if decimals are terminating or repeating? How can you tell just by looking at a fraction whether, in decimal form, it will terminate or repeat? My math teacher said there was a way, but I don't see how. Please help. Date: 01/26/2001 at 15:49:23 From: Doctor Greenie Subject: Re: How to tell if decimals are terminating or repeating? Hi, Seegee - If a decimal fraction terminates, then it has a name like one of the following: " ____ tenths" " ____ hundredths" " ____ thousandths" " ____ ten-thousandths" ... " ____ millionths" ... " ____ ten-billionths" ... etc., etc. When you write these numbers as common fractions, what is special about the denominators? The answer to that question should be a big hint toward the answer to your question, but it won't give the complete answer. For example, here are a couple of fractions whose decimal representations terminate but that don't have names from the "infinite" list above: 3/4 (= .75) and 5/8 (= .625). So why do these two have terminating decimals, while a fraction like 1/3 does not? It is because the first two can be written as equivalent fractions with names from the list above, while the fraction 1/3 cannot: 3/4 = 75/100 = seventy-five hundredths 5/8 = 625/1000 = six hundred twenty-five thousandths but you can't write 1/3 = a/10 or = b/100 or = c/1000 or .... where a, b, c, or any other of the numerators are integers. I have still only hinted at the precise answer to your question. If you can't quite figure out the whole answer after studying what I've written, you can find the complete answer in the Dr. Math archives. Click on the "Search the Archives" link on the main Dr. Math page and use "repeating decimal" or "terminating decimal" as the phrase to search for (do not use quotation marks, but be sure to click on the button that makes the search engine look for the entire phrase instead of the individual words). The search will provide you with links to several pages where this question is discussed. - Doctor Greenie, The Math Forum http://mathforum.org/dr.math/ Date: 01/26/2001 at 15:32:14 From: Doctor Rob Subject: Re: How to tell if decimals are terminating or repeating? Thanks for writing to Ask Dr. Math, Seegee. The fraction will terminate if and only if the denominator has for prime divisors only 2 and 5, that is, if and only if the denominator has the form 2^a * 5^b for some exponents a >= 0 and b >= 0. The number of decimal places until it terminates is the larger of a and b. The proof of this lies in the fact that every terminating decimal has the form n/10^e, for some e >= 0 (e is the number of places to the right that the decimal point must be moved to give you an integer, and n is that integer), and every fraction of that form has a terminating decimal found by writing down n and moving the decimal point e places to the left. Now when you cancel common factors from n/10^e = n/(2*5)^e = n/(2^e*5^e), it may reduce the exponents in the denominator, but that is all that can happen. - Doctor Rob, The Math Forum http://mathforum.org/dr.math/ Search the Dr. Math Library: Ask Dr. MathTM © 1994-2013 The Math Forum
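Dr. Rob's rule is easy to turn into code. The following short Python sketch, added here as an illustration (it is not part of the original answer), reduces a fraction to lowest terms, strips the factors of 2 and 5 from the denominator, and reports whether the decimal terminates and, if so, after how many places:

from math import gcd

def decimal_terminates(numerator, denominator):
    # The decimal form of numerator/denominator terminates if and only if the
    # reduced denominator has only 2 and 5 as prime factors; the number of
    # decimal places is then max(a, b) for denominator = 2^a * 5^b.
    denominator //= gcd(numerator, denominator)   # reduce to lowest terms
    a = b = 0
    while denominator % 2 == 0:
        denominator //= 2
        a += 1
    while denominator % 5 == 0:
        denominator //= 5
        b += 1
    if denominator == 1:
        return True, max(a, b)
    return False, None

print(decimal_terminates(3, 4))    # (True, 2)  -> 0.75
print(decimal_terminates(5, 8))    # (True, 3)  -> 0.625
print(decimal_terminates(1, 3))    # (False, None) -> 0.333...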
<urn:uuid:0b59b90c-c94d-4c23-a630-2488e207f922>
2.5625
825
Q&A Forum
Science & Tech.
79.667553
834
The ICM and the IGM show metal lines in the X-ray spectra. These metals cannot have been produced in the gas, but must have been produced in the galaxies and subsequently transported from the galaxies into the ICM/IGM by processes such as ram-pressure stripping, galactic winds, galaxy-galaxy interaction or jets from active galaxies. The metallicity is the best indicator for finding out which of these processes are most important. Of special interest is the distribution of metals. So far there are only a few examples of measured metallicity variations in real 2D maps rather than only profiles. In CL0939+4713 we find different metallicities in the different subclusters (De Filippis et al. 2002). Clear metallicity variations were also found in the Perseus cluster (Schmidt et al. 2002). 1D profiles are not very useful in this context because photons from regions in the cluster which are very far apart are accumulated in the same spectrum. Apart from the metallicity distribution, the evolution of the metallicity is also interesting. As soon as enough XMM and CHANDRA observations of distant clusters are available, we can compare the metallicities in these clusters with those of nearby clusters. This is another way of distinguishing between the enrichment processes, as different processes have different time dependences. In addition, element ratios can be derived, e.g. of Fe to α-elements, to get information on the different types of supernovae that have contributed to the metal enrichment.

Various processes have been suggested for the transport of gas from the galaxies to the ICM/IGM. Thirty years ago Gunn & Gott (1972) suggested ram-pressure stripping: as the galaxy moves through the cluster and approaches the cluster centre, it feels the increasing pressure of the intra-cluster gas. At some point the galaxy is no longer able to retain its ISM. The ISM is stripped off and lost to the ICM, and with it all its metals. Many numerical simulations have been performed to investigate this process, first 2D models (Takeda et al. 1984; Gaetz et al. 1987; Portnoy et al. 1993; Balsara et al. 1994). With increasing computing power, more detailed 3D models could also be calculated (Abadi et al. 1999; Quilis et al. 2000; Vollmer et al. 2001; Schulz & Struck 2001; Toniazzo & Schindler 2001). In Fig. 6 such a simulated stripping process is shown for an elliptical galaxy.

Figure 6. Gas density (grey scale) and pressure (contours) of a galaxy moving downwards towards the cluster centre. The arrows show the Mach vectors (white when M > 1, black otherwise). The gas of the galaxy is stripped due to ram pressure (from Toniazzo & Schindler 2001).

Another possible process is galactic winds, e.g. driven by supernovae (De Young 1978). Simulations have also been performed for this process, in order to see whether winds alone can account for the observed metallicities. The results were quite discordant, as the following two examples show. Metzler & Evrard (1994, 1997) found that winds can account for the metals, while Murakami & Babul (1999) concluded that winds are not very efficient for the metal enrichment. In the simulations of Metzler & Evrard, quite steep metallicity gradients showed up which are not in agreement with observations.

A third possible process is galaxy-galaxy interactions, like tidal stripping or galaxy harassment. During these interactions, too, a lot of ISM can be lost to the ICM and IGM.
This process is very likely more efficient in groups of galaxies, because in these systems the relative velocities are smaller and therefore the interaction timescales are longer. Ram-pressure stripping, on the other hand, is probably less efficient in groups because not only is the pressure of the IGM lower than that of the ICM, but the velocities are also smaller. This is also very important, as the ram pressure responsible for the stripping is roughly proportional to ρ_gas · v^2.

A fourth possible mechanism is jets emitted by active galaxies. These jets can also carry metals. Fig. 7 shows the interaction of jets with the ICM as it was discovered by X-ray observations. In the cluster RBS797, minima in the X-ray emission have been detected in a CHANDRA observation (Schindler et al. 2001). The X-ray depressions are arranged opposite each other with respect to the cluster centre. It is very likely that the pressure of the relativistic particles in the jets pushes away the X-ray gas. Preliminary radio observations with the VLA confirm this hypothesis.

Figure 7. CHANDRA image of the central part of the cluster RBS797 (from Schindler et al. 2001). There are depressions in the X-ray emission which are located opposite to each other with respect to the cluster centre (see arrows). These depressions can be explained by an active galaxy in the centre of the cluster, which has two jets. The pressure of the relativistic particles in the jets pushes away the X-ray gas, resulting in minima in the X-ray emission.

Simulations with different enrichment processes were also performed on cosmological scales. Here, too, quite discordant results have been found, as the following two examples show. Gnedin (1998) found that galactic winds play only a minor role, while galaxy mergers eject most of the gas. In contrast to these results, Aguirre et al. (2001) concluded that winds are most important and ram-pressure stripping is not very efficient. The reason for these differences is probably the large range in scales covered by these simulations, from cosmological scales down to galaxy scales. Therefore only a small number of particles are left for each single galaxy, and hence galaxies are not well resolved. This can be the reason for the discordant results. In order to clarify this, we are currently performing comprehensive simulations which include the different enrichment processes.
<urn:uuid:b0a9cfe3-eed9-450e-b1d8-fa1847ed42a6>
3.109375
1,246
Academic Writing
Science & Tech.
47.962867
835
The factors behind the calving process were not well understood US researchers have come up with a way to predict the rate at which ice shelves break apart into icebergs. These sometimes spectacular occurrences, called calving events, are a key step in the process by which climate change drives sea level rise. Computer models that simulate how ice sheets might behave in a warmer world do not describe the calving process in much detail, Science journal reports. Until now, the factors controlling this process have not been well understood. Ice sheets, such as those in Antarctica and Greenland, spread under their own weight and flow off land over the ocean water. Ice shelves are the thick, floating lips of ice sheets or glaciers that extend out past the coastline. Timelapse footage of an iceberg breaking away from a glacier in July 2008. The event took approximately 15 minutes (Video: Fahnestock/UNH) The Ross Ice Shelf in Antarctica floats for as much as 800km (500 miles) over the ocean before the edges begin to break and create icebergs. But other ice shelves may only edge over the water for a few kilometres. A team led by Richard Alley at Pennsylvania State University, US, analysed factors such as thickness, calving rate and strain rate for 20 different ice shelves. "The problem of when things break is a really hard problem because there is so much variability," said Professor Alley. "Anyone who has dropped a coffee cup knows this. Sometimes the coffee cup breaks and sometimes it bounces." The team's results show that the calving rate of an ice shelf is primarily determined by the rate at which the ice shelf is spreading away from the continent. The researchers were also able to show that narrower shelves should calve more slowly than wider ones. Ice cracking off into the ocean from Antarctica and Greenland could play a significant role in future sea level rise. Floating ice that melts does not of itself contribute to the height of waters (because it has already displaced its volume), but the shelf from which it comes acts as a brake to the land-ice behind. Removal of the shelf will allow glaciers heading to the ocean to accelerate - a phenomenon documented when the Larsen B shelf on the Antarctic Peninsula shattered in spectacular style in 2002. This would speed sea level rise. The UN Intergovernmental Panel on Climate Change in its 2007 assessment forecast that seas could rise by 18 to 59 cm (7-23ins) this century. However, in giving those figures, it conceded that ice behaviour was poorly understood.
<urn:uuid:345a4045-6f1f-4b6d-b5c0-385afebb5719>
4.1875
592
News Article
Science & Tech.
47.018817
836
The two-week expedition in January encountered new species of fish, seaweed and other ocean life at little-studied Saba Bank Atoll, a coral-crowned seamount 250 kilometers southeast of Puerto Rico in the Dutch Windward Islands. In a series of dives buffeted by high winds and strong currents, scientists from Conservation International (CI), the Netherlands Antilles government and Smithsonian Institution's Museum of Natural History found scores more fish species than previously known in the region and vast beds of diverse seaweed, including a dozen or more possible new species. "We discovered a new species literally every day we were there," said Michael Smith, director of CI's Caribbean Biodiversity Initiative. Among the apparent new fish species found were two types of gobi, while the total number of fish species recorded reached 200, compared to fewer than 50 before the expedition. The unprecedented richness of marine life and vulnerable status of the atoll's coral beds make Saba Bank a prime candidate for designation as a Particularly Sensitive Sea Area (PSSA) under the International Maritime Organization (IMO). Mark Littler, marine botanist of the Smithsonian Institution's National Museum of Natural History, declared Saba Bank the richest area for seaweeds in the Caribbean basin, including as many as a dozen new species along with commercially valuable species that will facilitate the creation of economic activity zones under PSSA designation. Paul Hoetjes, marine biologist with the Ministry of Nature Affairs for the Netherlands Antilles (MINA), called the expedition crucial to getting the area protected to benefit local populations.
<urn:uuid:aa1e7887-11e4-4c22-8351-2603e2629595>
3.265625
323
News Article
Science & Tech.
5.966109
837
This might be a rare case about which Einstein was wrong. More than 60 years ago, the great physicist scoffed at the idea that anything could travel faster than light, even though quantum mechanics had suggested such a condition. Now four Swiss researchers have brought the possibility closer to reality. Testing a concept called "spooky action at a distance"--a phrase used by Einstein in criticizing the phenomenon--they have shown that two subatomic particles can communicate nearly instantaneously, even if they are separated by cosmic distances. Alice's Wonderland had nothing on quantum physics, which describes a bizarre state of matter and energy. Not only can the same atom exist in two locations at once, but merely attempting to observe a particle will alter its properties. Perhaps least intuitive is the characteristic called entanglement. As described by quantum mechanics, it means that two entangled particles can keep tabs on each other no matter how far apart they are. Physicists have been trying for decades to determine whether this property is real and what might cause it. In the process, they've uncovered evidence for it but not much about its properties. Physicist Nicolas Gisin and colleagues at the University of Geneva in Switzerland split off pairs of quantum-entangled photons and sent them from the university's campus through two fiber-optic cables to two Swiss villages located 18 kilometers apart. Thinking of the photons like traffic lights, each passed through specially designed detectors that determined what "color" they were when entering the cable and what color they appeared to be when they reached the terminus. The experiments revealed two things: First, the physical properties of the photons changed identically during their journey, just as predicted by quantum theory--when one turned "red," so did the other. Second, there was no detectable time difference between when those changes occurred in the photons, as though an imaginary traffic controller had signaled them both. The result, the team reports in tomorrow's issue of Nature, is that whatever was affecting the photons seems to have happened nearly instantaneously and that according to their calculations, the phenomenon influencing the particles had to be traveling at least 10,000 times faster than light. Given Einstein's standard speed limit on light traveling within conventional spacetime, the experiments show that entanglement might be controlled by something existing beyond it. Gisin says that once the scientific community "accepts that nature has this ability, we should try to create models that explain it." Although the research doesn't demonstrate spooky action at a distance directly, it does provide "a lower boundary for the speed" necessary for the phenomenon, says theoretical physicist Martin Bojowald of Pennsylvania State University in State College. Cosmologist Sean Carroll of the California Institute of Technology in Pasadena says that it's "yet another experiment that tells us quantum mechanics is right" and that there "really is an intrinsic connection between entangled particles, not that some signal passes quickly between them when an observation is performed." And physicist Lorenza Viola of Dartmouth College says there's much more to be determined. "I am sure we are not finished unveiling what the quantum [effects] due to entanglement really are and how powerful they can be."
<urn:uuid:46289f9b-357a-4cc5-84f9-63d88b754926>
3.53125
638
News Article
Science & Tech.
27.437129
838
By examining the frequency of extreme storm surges in the past, previous research has shown that there was an increasing tendency for storm hurricane surges when the climate was warmer. But how much worse will it get as temperatures rise in the future? How many extreme storm surges like that from Hurricane Katrina, which hit the U.S. coast in 2005, will there be as a result of global warming? New research from the Niels Bohr Institute shows that there will be a tenfold increase in frequency if the climate becomes two degrees Celsius warmer. The results are published in the scientific journal Proceedings of the National Academy of Sciences (PNAS). Tropical cyclones arise over warm ocean surfaces with strong evaporation and warming of the air. They typically form in the Atlantic Ocean and move towards the U.S. East Coast and the Gulf of Mexico. To calculate the frequency of tropical cyclones in a future with a warmer global climate, researchers have developed various models. One is based on the regional sea temperatures, while another is based on differences between the regional sea temperatures and the average temperatures in the tropical oceans. There is considerable disagreement among researchers about which is best. New model for predicting cyclones "Instead of choosing between the two methods, I have chosen to use temperatures from all around the world and combine them into a single model," explains climate scientist Aslak Grinsted, Centre for Ice and Climate at the Niels Bohr Institute at the University of Copenhagen. He takes into account the individual statistical models and weights them according to how good they are at explaining past storm surges. In this way, he sees that the model reflects the known physical relationships, for example, how the El Niño phenomenon affects the formation of cyclones. The research was performed in collaboration with colleagues from China and England. The statistical models are used to predict the number of hurricane surges 100 years into the future. How much worse will it be per degree of global warming? How many 'Katrinas' will there be per decade? Since 1923, there has been a 'Katrina' magnitude storm surge every 20 years. 10 times as many 'Katrinas' "We find that 0.4 degrees Celsius warming of the climate corresponds to a doubling of the frequency of extreme storm surges like the one following Hurricane Katrina. With the global warming we have had during the 20th century, we have already crossed the threshold where more than half of all 'Katrinas' are due to global warming," explains Aslak Grinsted. "If the temperature rises an additional degree, the frequency will increase by 3-4 times and if the global climate becomes two degrees warmer, there will be about 10 times as many extreme storm surges. This means that there will be a 'Katrina' magnitude storm surge every other year," says Aslak Grinsted, and he points out that in addition to there being more extreme storm surges, the sea will also rise due to global warming. As a result, the storm surges will become worse and potentially more destructive. More information: "Projected Atlantic hurricane surge threat from rising temperatures," by Aslak Grinsted, John C. Moore, and Svetlana Jevrejeva
<urn:uuid:2f131a1f-b5d6-4927-a819-681b2dc26ec8>
3.5
681
News Article
Science & Tech.
37.22
839
One of the biggest questions in science has always been how the universe formed. Over the last century, we've made a lot of progress in understanding how it has developed: the Big Bang model. With the most sophisticated telescopes ever built, scientists have now made measurements confirming the accuracy of this model to within just moments of when the universe began. Despite all this evidence, the question remains: what actually started the whole process? Physics actually does have an answer to this question - one which doesn't require the intervention of a creator deity - and in his newest book physicist Lawrence Krauss lays the explanation out in language that is accessible to non-scientists. If this sounds interesting, check out our review of Krauss's A Universe From Nothing. If you've read the book, then be sure to let us know what you thought of it. Could the laws of quantum physics and relativity have created the universe as we know it? Is God a necessary component of the universe?
<urn:uuid:00b71dff-bd72-4193-b5dd-967dea413b6c>
3.359375
198
Personal Blog
Science & Tech.
46.765129
840
A theoretical analysis of recent experiments suggests that a key feature of a topological quantum computer—the unusual statistics of quasiparticles in the quantum Hall effect—may finally have been observed. By exploiting the concept of particle-hole duality, one can realize a point junction between integer and fractional quantum Hall phases, which constitutes a crucial building block towards possible applications of the quantum Hall effect. The fractional quantum Hall effect, thought to be special to two dimensions, may also flourish in three, providing a possible explanation for anomalies observed in certain 3D materials in high magnetic fields. Physics 2, 24 (2009) – Published March 30, 2009 The surprising prediction that currents can flow forever in small normal metal rings was confirmed almost twenty years ago. Highly precise new experiments find good agreement with theory that was not seen till now. H. A. Fertig, Physics 2, 15 (2009) – Published February 23, 2009 Measurements of the heat transport at the edges of two-dimensional electron systems appear to provide explanations about the quantum Hall state that have not been forthcoming via charge transport experiments. Crystalline structures have been observed in nanoislands of electrons floating above superfluid helium. The energy required to add or subtract an electron from these quantum-dot-like islands agrees well with theory. Physics 1, 36 (2008) – Published November 24, 2008 The esoteric concept of “axions” was born thirty years ago to describe the strong interaction between quarks. It appears that the same physics—though in a much different context—applies to an unusual class of insulators. Graphene has been idealized as a two-dimensional electron system in which the electrons behave like massless fermions, but how “perfect” is it? Scientists now show they can prepare free-standing sheets of graphene that have some of the highest electron mobilities of any inorganic semiconductor. A decade ago, experimentalists showed that persistent currents can flow in nonsuperconducting mesoscopic metal rings, but there was no theory that correctly explained the magnitude or direction of the unexpectedly large currents. Theorists are now proposing a simple idea that may at last explain these results. Electrons in graphene can be described by the relativistic Dirac equation for massless fermions and exhibit a host of unusual properties. The surfaces of certain band insulators—called topological insulators—can be described in a similar way, leading to an exotic metallic surface on an otherwise “ordinary” insulator.
<urn:uuid:a7f014e1-dfcd-46b7-926a-0554c521be0c>
3
528
Content Listing
Science & Tech.
26.498604
841
glob, globfree - generate pathnames matching a pattern int glob(const char *restrict pattern, int flags, int(*errfunc)(const char *epath, int eerrno), glob_t *restrict pglob); void globfree(glob_t *pglob); The glob() function is a pathname generator that shall implement the rules defined in XCU Pattern Matching Notation, with optional support for rule 3 in XCU Patterns Used for Filename Expansion. The structure type glob_t is defined in <glob.h> and includes at least the following members: gl_pathc (count of paths matched by pattern), gl_pathv (pointer to a list of matched pathnames), and gl_offs (slots to reserve at the beginning of gl_pathv). The argument pattern is a pointer to a pathname pattern to be expanded. The glob() function shall match all accessible pathnames against this pattern and develop a list of all pathnames that match. In order to have access to a pathname, glob() requires search permission on every component of a path except the last, and read permission on each directory of any filename component of pattern that contains any of the following special characters: '*', '?', and '['. The glob() function shall store the number of matched pathnames into pglob->gl_pathc and a pointer to a list of pointers to pathnames into pglob->gl_pathv. The pathnames shall be in sort order as defined by the current setting of the LC_COLLATE category; see XBD LC_COLLATE. The first pointer after the last pathname shall be a null pointer. If the pattern does not match any pathnames, the returned number of matched paths is set to 0, and the contents of pglob->gl_pathv are implementation-defined. It is the caller's responsibility to create the structure pointed to by pglob. The glob() function shall allocate other space as needed, including the memory pointed to by gl_pathv. The globfree() function shall free any space associated with pglob from a previous call to glob(). The flags argument is used to control the behavior of glob(). The value of flags is a bitwise-inclusive OR of zero or more of the following constants, which are defined in <glob.h>:
- GLOB_APPEND: Append pathnames generated to the ones from a previous call to glob().
- GLOB_DOOFFS: Make use of pglob->gl_offs. If this flag is set, pglob->gl_offs is used to specify how many null pointers to add to the beginning of pglob->gl_pathv. In other words, pglob->gl_pathv shall point to pglob->gl_offs null pointers, followed by pglob->gl_pathc pathname pointers, followed by a null pointer.
- GLOB_ERR: Cause glob() to return when it encounters a directory that it cannot open or read. Ordinarily, glob() continues to find matches.
- GLOB_MARK: Each pathname that is a directory that matches pattern shall have a <slash> appended.
- GLOB_NOCHECK: Supports rule 3 in XCU Patterns Used for Filename Expansion. If pattern does not match any pathname, then glob() shall return a list consisting of only pattern, and the number of matched pathnames is 1.
- GLOB_NOESCAPE: Disable backslash escaping.
- GLOB_NOSORT: Ordinarily, glob() sorts the matching pathnames according to the current setting of the LC_COLLATE category; see XBD LC_COLLATE. When this flag is used, the order of pathnames returned is unspecified.
The GLOB_APPEND flag can be used to append a new set of pathnames to those found in a previous call to glob(). The following rules apply to applications when two or more calls to glob() are made with the same value of pglob and without intervening calls to globfree(): The first such call shall not set GLOB_APPEND. All subsequent calls shall set it. All the calls shall set GLOB_DOOFFS, or all shall not set it.
After the second call, pglob->gl_pathv points to a list containing the following: Zero or more null pointers, as specified by GLOB_DOOFFS and pglob->gl_offs. Pointers to the pathnames that were in the pglob->gl_pathv list before the call, in the same order as before. Pointers to the new pathnames generated by the second call, in the specified order. The count returned in pglob->gl_pathc shall be the total number of pathnames from the two calls. The application can change any of the fields after a call to glob(). If it does, the application shall reset them to the original value before a subsequent call, using the same pglob value, to globfree() or glob() with the GLOB_APPEND flag. If, during the search, a directory is encountered that cannot be opened or read and errfunc is not a null pointer, glob() calls (*errfunc()) with two arguments: The epath argument is a pointer to the path that failed. The eerrno argument is the value of errno from the failure, as set by opendir(), readdir(), or stat(). (Other values may be used to report other errors not explicitly documented for those functions.) If (*errfunc()) is called and returns non-zero, or if the GLOB_ERR flag is set in flags, glob() shall stop the scan and return GLOB_ABORTED after setting gl_pathc and gl_pathv in pglob to reflect the paths already scanned. If GLOB_ERR is not set and either errfunc is a null pointer or (*errfunc()) returns 0, the error shall be ignored. The glob() function shall not fail because of large files. Upon successful completion, glob() shall return 0. The argument pglob->gl_pathc shall return the number of matched pathnames and the argument pglob->gl_pathv shall contain a pointer to a null-terminated list of matched and sorted pathnames. However, if pglob->gl_pathc is 0, the content of pglob->gl_pathv is undefined. The globfree() function shall not return a value. If glob() terminates due to an error, it shall return one of the non-zero constants defined in <glob.h>. The arguments pglob->gl_pathc and pglob->gl_pathv are still set as defined above. The glob() function shall fail and return the corresponding value if:
- GLOB_ABORTED: The scan was stopped because GLOB_ERR was set or (*errfunc()) returned non-zero.
- GLOB_NOMATCH: The pattern does not match any existing pathname, and GLOB_NOCHECK was not set in flags.
- GLOB_NOSPACE: An attempt to allocate memory failed.
One use of the GLOB_DOOFFS flag is by applications that build an argument list for use with execv(), execve(), or execvp(). Suppose, for example, that an application wants to do the equivalent of: ls -l *.c but for some reason: system("ls -l *.c") is not acceptable. The application could obtain approximately the same result using the sequence: globbuf.gl_offs = 2; glob("*.c", GLOB_DOOFFS, NULL, &globbuf); globbuf.gl_pathv[0] = "ls"; globbuf.gl_pathv[1] = "-l"; execvp("ls", &globbuf.gl_pathv[0]); Using the same example: ls -l *.c *.h could be approximately simulated using GLOB_APPEND as follows: globbuf.gl_offs = 2; glob("*.c", GLOB_DOOFFS, NULL, &globbuf); glob("*.h", GLOB_DOOFFS|GLOB_APPEND, NULL, &globbuf); ... This function is not provided for the purpose of enabling utilities to perform pathname expansion on their arguments, as this operation is performed by the shell, and utilities are explicitly not expected to redo this. Instead, it is provided for applications that need to do pathname expansion on strings obtained from other sources, such as a pattern typed by a user or read from a file. If a utility needs to see if a pathname matches a given pattern, it can use fnmatch().
Note that gl_pathc and gl_pathv have meaning even if glob() fails. This allows glob() to report partial results in the event of an error. However, if gl_pathc is 0, gl_pathv is unspecified even if glob() did not return an error. The GLOB_NOCHECK option could be used when an application wants to expand a pathname if wildcards are specified, but wants to treat the pattern as just a string otherwise. The sh utility might use this for option-arguments, for example. The new pathnames generated by a subsequent call with GLOB_APPEND are not sorted together with the previous pathnames. This mirrors the way that the shell handles pathname expansion when multiple expansions are done on a command line. Applications that need tilde and parameter expansion should use wordexp(). It was claimed that the GLOB_DOOFFS flag is unnecessary because it could be simulated using:new = (char **)malloc((n + pglob->gl_pathc + 1) * sizeof(char *)); (void) memcpy(new+n, pglob->gl_pathv, pglob->gl_pathc * sizeof(char *)); (void) memset(new, 0, n * sizeof(char *)); free(pglob->gl_pathv); pglob->gl_pathv = new; However, this assumes that the memory pointed to by gl_pathv is a block that was separately created using malloc(). This is not necessarily the case. An application should make no assumptions about how the memory referenced by fields in pglob was allocated. It might have been obtained from malloc() in a large chunk and then carved up within glob(), or it might have been created using a different memory allocator. It is not the intent of the standard developers to specify or imply how the memory used by glob() is managed. The GLOB_APPEND flag would be used when an application wants to expand several different patterns into a single list. exec, fdopendir, fnmatch, fstatat, readdir, wordexp XBD LC_COLLATE, <glob.h> First released in Issue 4. Derived from the ISO POSIX-2 standard. Moved from POSIX2 C-language Binding to BASE. The normative text is updated to avoid use of the term "must" for application requirements. The restrict keyword is added to the glob() prototype for alignment with the ISO/IEC 9899:1999 standard. return to top of page
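For readers who want a runnable starting point, the following short C program (an illustrative sketch, not part of the specification text) expands a single pattern, distinguishes the documented return values, and releases the results with globfree():

#include <glob.h>
#include <stdio.h>

int main(void) {
    glob_t gl;
    /* GLOB_MARK appends a slash to matched directories, per the flag list above. */
    int rc = glob("*.c", GLOB_MARK, NULL, &gl);

    if (rc == 0) {
        for (size_t i = 0; i < gl.gl_pathc; i++)
            printf("%s\n", gl.gl_pathv[i]);   /* sorted per LC_COLLATE */
    } else if (rc == GLOB_NOMATCH) {
        printf("no matches\n");
    } else {
        fprintf(stderr, "glob() failed: %s\n",
                rc == GLOB_ABORTED ? "GLOB_ABORTED" : "GLOB_NOSPACE");
    }
    globfree(&gl);   /* frees gl_pathv and any associated storage */
    return rc == 0 ? 0 : 1;
}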
<urn:uuid:6d8ae403-8926-4ae9-8f1d-fbb1012e153b>
2.625
2,358
Documentation
Software Dev.
57.032625
842
Spawning, larval abundance and growth rate of Sardinops sagax off southwestern Australia: influence of an anomalous eastern boundary current
Muhling, B.A., Beckley, L.E., Gaughan, D.J., Jones, C.M., Miskiewicz, A.G. and Hesp, S.A. (2008) Spawning, larval abundance and growth rate of Sardinops sagax off southwestern Australia: influence of an anomalous eastern boundary current. Marine Ecology Progress Series, 364, pp. 157-167.
The temporal and spatial distributions of sardine Sardinops sagax eggs and larvae off the oligotrophic southwestern coast of Australia were examined and related to gonadosomatic index, daily growth rates of larvae and regional biological oceanography. Seasonal environmental cycles were established from remotely sensed sea surface temperature and chlorophyll concentration, wind and sea surface height data. Sardine egg and larval distributions were determined from regular transect surveys and annual grid surveys. Sardine eggs and larvae were common across the continental shelf throughout the year between Two Rocks and Cape Naturaliste (∼32 to 34°S), and gonadosomatic index data suggested a distinct winter peak in spawning activity. Surface chlorophyll concentrations were highest during winter, coincident with the seasonal peak in the southward flow of the Leeuwin Current along the continental shelf break. Retention conditions on the mid-outer shelf for pelagic eggs and larvae were therefore poor during this time. Egg and larval concentrations were lower than expected in winter and higher in summer when retention conditions were more favourable. Larval sardine growth rates were unexpectedly high, averaging 0.82 mm d^-1. Fisheries for clupeoid species off southwestern Australia are insignificant compared to other eastern boundary current systems. Our data suggest that this may be due to a combination of low primary productivity caused by suppression of large-scale upwelling by the Leeuwin Current and the modest seasonal maximum in primary productivity occurring during the time least favourable for pelagic larval retention.
Publication Type: Journal Article
Murdoch Affiliation: School of Biological Sciences and Biotechnology; School of Environmental Science
Copyright: © Inter-Research 2008
<urn:uuid:533fb574-5afc-4789-abc4-48eefb006a8b>
2.59375
494
Academic Writing
Science & Tech.
28.585871
843
Studying PHP's (5.3.1 and below) LCG (linear congruential generator, a pseudorandom number generator), I discovered that there are weaknesses that reduce the complexity of determining the sequence of pseudorandom numbers. What this means is that PHP is severely deficient in producing random session IDs or random numbers, leading to the possibility of stealing sessions or other sensitive information. The initial seed can be reduced from 64 bits to 35 bits, and with PHP code execution, can be reduced further down to just under 20 bits, which means it takes only seconds to recreate the initial seed. You can test with the sources available below. Mad hax0r pr0pz to Arshan "DHS-most-wanted" Dabirsiaghi (bless you) and Amit "smartypants" Klein for pointing me in the right direction with the LCG. Other tools to work out the LCG in forward and reverse, as well as determine session IDs, are found below.

Demo output:
To test breaking the seed, run the following (after compiling s1s2.c): time ./s1s2 11484 0.82548251995711
Can you guess my next lcg_value based off the above? (hint: it's 0.86290409858717).
Test by running: time ./lcg-state-forward [s1] [s2] 100
Your session_id is mfbu1v8qjnp003ob1pt6bbkft4 (or just look at your cookie)

The PHP source that generates the demo page above:
session_start();
echo "Hi $_SERVER[REMOTE_ADDR]! The time is " . time() . "<p>";
echo "To test breaking the seed, run the following (after compiling <a href='s1s2.c'>s1s2.c</a>)<br>";
echo "<code>time <a href='s1s2.c'>./s1s2</a> " . getmypid() . " " . lcg_value() . "</code><p>";
echo "Can you guess my next lcg_value based off the above? (hint: it's " . lcg_value() . ").<br>";
echo "Test by running: <code>time <a href='lcg-state-forward.c'>./lcg-state-forward</a> [s1] [s2] 100</code><p>";
echo "Your <a href='http://www.test.com/search?q=" . session_id() . "'>session_id</a> is " . session_id() . " (or just look at your cookie)";
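For context, here is a small C sketch of the kind of combined LCG that PHP's lcg_value() followed at the time: two Schrage-style LCGs whose difference is scaled into [0, 1). The multipliers and moduli below are the classic L'Ecuyer (1988) constants and are shown for illustration; treat them as an assumption rather than a byte-for-byte copy of PHP's lcg.c. Once s1 and s2 are known, rolling the sequence forward (the job of lcg-state-forward above) is just repeated calls to the step function.

#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>

/* One Schrage step: computes (a * s) mod m without 32-bit overflow,
 * given m = a*q + r with q = m/a and r = m%a. */
static int32_t step(int32_t s, int32_t q, int32_t a, int32_t r, int32_t m) {
    int32_t k = s / q;
    s = a * (s - k * q) - r * k;
    return s < 0 ? s + m : s;
}

/* Combined generator: advance both states and map the difference to (0,1). */
static double combined_lcg(int32_t *s1, int32_t *s2) {
    *s1 = step(*s1, 53668, 40014, 12211, 2147483563);
    *s2 = step(*s2, 52774, 40692,  3791, 2147483399);
    int32_t z = *s1 - *s2;
    if (z < 1) z += 2147483562;
    return z * 4.656613e-10;
}

int main(int argc, char **argv) {
    int32_t s1 = 1, s2 = 1;            /* replace with the recovered state */
    if (argc == 3) { s1 = atoi(argv[1]); s2 = atoi(argv[2]); }
    for (int i = 0; i < 5; i++)        /* print the next few outputs */
        printf("%.14f\n", combined_lcg(&s1, &s2));
    return 0;
}

The point of the attack described above is that once the initial s1 and s2 are recovered (by brute-forcing the reduced seed space), every future lcg_value(), and hence every value derived from it, is fully predictable.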
<urn:uuid:3772de06-94c3-405c-9dcc-6657134209ce>
2.609375
585
Personal Blog
Software Dev.
80.587308
844
Seasonal Recharge Components in an Urban/Agricultural Mountain Front Aquifer System Using Noble Gas Thermometry Thirteen noble gas samples were collected from eleven wells and two mountain springs in the Treasure Valley, Idaho, USA to derive recharge temperatures using noble gas thermometry. One common assumption with noble gas thermometry is that recharge temperatures are roughly equal to the mean annual surface temperature. When water table depths are shallow or variable, or infiltration is seasonal, recharge temperatures may be significantly different from the mean annual surface temperature. Water table depths throughout the study area were used to estimate recharge source temperatures using an infiltration-weighted recharge temperature model which takes into account a time-variable water table. This model was applied to six different seasonally-dependent recharge scenarios. The modeled recharge temperatures for all scenarios showed a strong dependence of recharge temperature on mean annual depth to water. Temperature results from the different recharge scenarios ranged from near the mean annual surface temperature to as much as 6 °C warmer. This compared well to noble gas derived recharge temperatures from the valley wells, which ranged from 5 °C below to 7.4 °C above the mean annual surface temperature of the valley. Cooler temperatures suggest an influence of recharge through the adjacent mountain block while warmer temperatures suggest an influence from summer irrigation. Thoma, Michael J.; McNamara, James P.; and Benner, Shawn G. (2011). "Seasonal Recharge Components in an Urban/Agricultural Mountain Front Aquifer System Using Noble Gas Thermometry". Journal of Hydrology, 409(1-2), 118-127. http://dx.doi.org/10.1016/j.jhydrol.2011.08.003
<urn:uuid:a0313fb3-8834-42d5-a533-ffdf813c2f40>
2.984375
342
Academic Writing
Science & Tech.
31.412788
845
After a nasty blizzard temporarily disabled Washington this winter, conservative politicians and pundits jammed the airwaves with claims the storm “contradicted Al Gore’s hysterical global warming theories.” Their myopia was derided by climate scientists and Stephen Colbert alike, who said on his show: “Now folks, that is simple observational research: Whatever just happened is the only thing that is happening. Ask any peekaboo-ologist.” Still, in the wake of the storm, the National Oceanic and Atmospheric Administration decided to cancel its scheduled announcement of a Climate Service office, which would keep the public up-to-date on global warming. Timing matters. Now, with the East Coast woozy from a humid heat wave, it’s the climate scientists’ turn to spread their message. A press release today highlights a Stanford study showing heat waves and extremely high temperatures could be commonplace in the U.S. by 2039: According to the climate models, an intense heat wave – equal to the longest on record from 1951 to 1999 – is likely to occur as many as five times between 2020 and 2029 over areas of the western and central United States. The 2030s are projected to be even hotter… The Stanford team also forecast a dramatic spike in extreme seasonal temperatures during the current decade. Temperatures equaling the hottest season on record from 1951 to 1999 could occur four times between now and 2019 over much of the U.S., according to the researchers. The 2020s and 2030s could be even hotter, particularly in the American West. From 2030 to 2039, most areas of Utah, Colorado, Arizona and New Mexico could endure at least seven seasons equally as intense as the hottest season ever recorded between 1951 and 1999, the researchers concluded. “Frankly, I was expecting that we’d see large temperature increases later this century with higher greenhouse gas levels and global warming,” [Noah] Diffenbaugh said. “I did not expect to see anything this large within the next three decades. This was definitely a surprise.” Photo by leeroy09481
<urn:uuid:430af643-55ba-48af-bddf-c87b35d775fa>
2.5625
440
News (Org.)
Science & Tech.
43.484763
846
Wind Could Power the World Studies: Wind potentially could power the world AP: By Seth Borenstein, AP Science Writer Earth has more than enough wind to power the entire world, at least technically, two new studies find. But the research looks only at physics, not finances. Other experts note it would be too costly to put up all the necessary wind turbines and build a system that could transmit energy to all consumers. The studies are by two different U.S. science teams and were published in separate journals on Sunday and Monday. They calculate that existing wind turbine technology could produce hundreds of trillions of watts of power. That’s more than 10 times what the world now consumes. Wind power doesn’t emit heat-trapping gases like burning coal, oil and natural gas. But there have been questions, raised in earlier studies, about whether physical limits would prevent the world from being powered by wind. The new studies, done independently, showed potential wind energy limits wouldn’t be an issue. Money would be. “It’s really a question about economics and engineering and not a question of fundamental resource availability,” said Ken Caldeira, a climate scientist at the Palo Alto, Calif., campus of the Washington-based Carnegie Institution for Science. He is a co-author of one of the studies; that one appeared Sunday in the journal Nature Climate Change. Caldeira’s study finds wind has the potential to produce more than 20 times the amount of energy the world now consumes. Right now, wind accounts for just a tiny fraction of the energy the world consumes. So to get to the levels these studies say are possible, wind production would have to increase dramatically. If there were 100 new wind turbines for every existing one, that could do the trick, says Mark Jacobson, a Stanford University professor of civil and environmental engineering. Jacobson wrote the other study, published in the Proceedings of the National Academy of Sciences. It shows a slightly lower potential in the amount of wind power than Caldeira’s study. But he said it still would amount to far more power than the world now uses or is likely to use in the near future. Jacobson said startup costs and fossil fuel subsidies prevent wind from taking off. The cheap price of natural gas, for one thing, hurts wind development, he added. Henry Lee, a Harvard University environment and energy professor who used to be energy chief for the state of Massachusetts, said there are a few problems with the idea of wind powering the world. The first is the cost is too high. Furthermore, all the necessary wind turbines would take up too much land and require dramatic increases in power transmission lines, he said. Jerry Taylor, an energy and environmental analyst at the conservative Cato Institute, said the lack of economic reality in the studies made them “utterly irrelevant.” Caldeira acknowledged that the world would need to change dramatically to shift to wind. “To power civilization with wind turbines, I think you’re talking about a couple wind turbines every square mile,” Caldeira said. “It’s not a small undertaking.” - The journal Proceedings of the National Academy of Sciences: http://www.pnas.org - Nature Climate Change: http://www.nature.com/nclimate
<urn:uuid:1eb5a5a8-58d8-48e6-aa94-164b72fc7910>
3.046875
697
News Article
Science & Tech.
48.218918
847
The sun is an incredibly powerful source of energy. That’s why TVA is using photovoltaic (PV) panels to transform solar energy into usable electricity. When rays of sunshine strike a solar panel, they give some of the electrons inside it more energy, a process that creates an electrical current. TVA has set up photovoltaic systems at various solar sites throughout the region. Can PV systems produce power on cloudy days? See the Solar FAQ for the answer to this question and others. Properly placed wind turbines can generate electric power anywhere the wind blows steady and strong. Wind turbines use the momentum of moving air to quietly turn large blades that are attached to the shaft of an efficient electric generator. Wind energy is a major contributor to Green Power Switch. TVA has one wind power site, located on Buffalo Mountain near Oak Ridge, Tenn. The site has 15 very large and three smaller turbines. How large are the blades on the wind turbines? See the Wind Q&A for the answer to this question and others. Wastewater methane collection is a potential source of renewable energy. Biogas energy is produced when organic wastes decay – in wastewater treatment plants or landfills, for example. TVA collects methane gas, a by-product of organic decay, from the city of Memphis wastewater treatment plant and co-fires it with coal at Allen Fossil Plant. How does firing methane with coal help the environment? Read the answer to this and other questions in the Biogas FAQ.
<urn:uuid:400eb44e-fdd0-469d-baa4-fa5637d16c63>
3.828125
317
Customer Support
Science & Tech.
48.447662
848
A major thrust of the effort is to establish the potential of this technology for calibration and validation of satellite-based ocean-color measurements. The new floats are enabled with a two-way communication system that allows researchers to control when the floats descend and ascend, and when they take measurements. “Radiometers allow us to do a better job in modeling primary production,” says Boss. “We’re trying to see if we can use them to calibrate satellites, and plan on having other sensors measure for scattering. That allows us to get more information on what’s in the water.” Most of the existing floats are programmed to descend and ascend for specific periods of time to take a predetermined number of measurements. Using wireless communication and data dissemination created by CLS America, researchers will provide the floats with commands during missions, including changes in response to events such as hurricanes. The data collected will be sent to a centralized Web site for all researchers to analyze and for future input into assimilating ocean ecosystem models. With more advanced communications systems, it may also be possible to increase the life of profiling floats. Currently, researchers can record about 300 profiles from one float. The devices are limited by battery life, and once the batteries die, it’s not possible to recapture the devices. One of Boss’ goals is to test recovery possibilities, so that floats can be reused. Scientists from the NASA Goddard Earth Sciences Data and Information Services Center, partners in this project, are building a tool that will provide crucial remotely sensed information around the float surfacing location for measurement context. Every time a float reports its location, NASA will provide real-time data on weather, temperature and events in a 50-kilometer radius. “We have the opportunity to make a huge difference in the future of our field and its ability to provide much-needed information on how carbon and other material are processed globally,” says Boss.
<urn:uuid:86a4d0af-43ea-427a-b6b4-0ae157b49020>
3.359375
399
News (Org.)
Science & Tech.
27.610455
849
Sensors flag environmental damage to art at the Met NEW YORK It will take a good eye to spot them, but dozens of tiny, very modern works of art have been installed near the 15th-century unicorn tapestries and other medieval masterpieces at a New York City museum. The Metropolitan Museum of Art announced this week that a network of wireless environmental sensors designed to prevent damage to the collection is being tested at its Cloisters branch. The IBM sensors — each housed with a radio and a microcontroller in a case about the size of a pack of cigarettes — can measure temperature, humidity, air flow, light levels, contaminants and more. They are inexpensive and run on low power, and several can be positioned in a room, scientists said. The information collected goes into a three-dimensional "climate map" that can be accessed on a computer, and the data can then be analyzed to adjust the climate, spot trends and even make predictions. "Nobody in the world at this moment has this kind of information, not at this level of detail," said Paolo Dionosi Vici, associate research scientist at the Metropolitan. "It's the analytics that will keep us one step ahead technologically." The network now covers about a third of the Cloisters, which houses 3,000 medieval works in several ancient buildings that were disassembled in Europe and rebuilt in northern Manhattan. The Met expects to expand the network throughout the Cloisters and eventually to the main museum on Fifth Avenue. The climate at museums like the Cloisters is already tightly controlled, with especially fragile items kept in sealed cases. Curators don't have to worry about the ravages that might happen to a fresco in an open Italian church, for example. But the artwork is sensitive to small climate variations. "A window in a museum, in summer, that can be a hot spot," Vici said. "And the light from the window on the floor can increase the temperature of the floor. Until now, that is a variation we might not know about because we were not taking so many measurements." Another factor that can influence the climate in a museum is the number of visitors — and where the visitors have been. "If it's raining outside the Cloisters and the tourists that come in are wet, that has an effect," Vici said. The idea is to keep the effects from causing any damage, even slow damage, to the art. "Whenever we have to act on an object to repair it, it's a loss of memory of what it was in the past," Vici said. "Restoration can be very useful but if we can prevent (deterioration), it's better." Hendrik Hamann, an IBM research manager working on the project, said the 100-year-old company has had a long relationship with the Met and found the art world a good test for its sensor technology, which can also be used in ordinary buildings to measure energy efficiency and other details. "The conservation of art and our cultural heritage is obviously one of the grand challenges of our time," Hamann said. Vici and Hamann both said the sensors — which they called low-power motes — could eventually be adapted to measure how a painting on wood, for example, reacts to minor climate fluctuations. "We'd like to be able to monitor how much the wood swells, even a tiny amount," said Vici, who said he worked on the preservation of the Mona Lisa. Hamann said as data pours in, trends will appear, "and we can use those trends to understand what will happen in the future." "We will know that certain things happen in the museum environment on certain days," he said. 
Those trends can then be correlated with information about the best way to protect a tapestry or a wooden statue, for example. Hamann said the Cloisters was chosen for the test because "It is a historic building. It has high ceilings. It has famous glass windows. It has tapestries, wood paintings, stonework, it has indoors and outdoors sections. It's very interesting from a monitoring perspective." The Cloisters had temperature and humidity monitors but lacked the analytic capabilities of the new program, he said. About 100 of the new sensors have been spread through seven adjacent rooms, including the one housing the priceless tapestries that portray a unicorn hunt. They are inconspicuous, but not hidden entirely. "If you know where the motes are you can see them," Hamann said. But Vici said, "The visual impact of the sensors is so small compared to the quality of the information. … For every object in the room we can know in real time how the climate evolves in that particular point."
<urn:uuid:e3defdde-49cc-4009-bac0-90499eea86eb>
3
1,006
News Article
Science & Tech.
48.372814
850
Environmental Monitoring in Antarctica Environmental monitoring is essential in Antarctica to allow assessment of the impacts of human activities. Examples of this monitoring work carried out by BAS are given below. Sewage outfall monitoring, Rothera Station Until 2003, untreated sewage was discharged into the sea from Rothera Station. Monitoring of the receiving water before and after installation of a biological treatment plant showed a dramatic reduction in the sewage plume as indicated by faecal coliforms. Concentrations of heavy metals in lichens and marine bivalves around Rothera Research Station Concentrations of lead, zinc, cadmium and other heavy metals in lichens and marine bivalves are measured. The results are used to assess whether any observed pollution is due to station activities, and to determine the area of contamination. Skua population breeding success The impact of Rothera Research Station on the local South Polar Skua (Stercorarius maccormicki) population has been monitored for over 12 years. The monitoring programme has recently been expanded to include measurements such as chick weight, survival and egg dimensions.
<urn:uuid:57464506-f84d-40c7-9971-acbb34e72820>
3.3125
232
Knowledge Article
Science & Tech.
12.907895
851
Study foretold a consequence of oil leak It wasn’t until seven weeks after the But an unusual experiment conducted in 2000 off the coast of Norway, a trial run of a deep-water oil and gas spill that BP helped pay for, showed that oil could remain underwater for some time. The North Atlantic exercise was designed to understand how a spill would behave as the drilling industry plumbed new depths to extract oil and gas. The federal Minerals Management Service and 22 companies took part in the test, at about half the depth of the gulf disaster. Oil and gas were released close to the sea floor and only a fraction of it was spotted on the surface after about seven hours, defying conventional wisdom that oil would almost immediately surface because it’s lighter than water. “Over time, it will rise, but the rate of rise can be quite slow,’’ said Eric Adams, senior research engineer at the Massachusetts Institute of Technology, who helped perform several analyses on Project Deep Spill. “A lot of the oil [in the experiment] most likely rose after they went home.’’ A BP spokesman last week acknowledged the company participated in the Deep Spill experiment but refused to answer why officials flatly denied the possibility there could be significant amounts of oil trapped underwater — even the day after the federal government confirmed the existence of plumes on June 8. “There is certainly oil in the water; we believe [it is] in low and small concentrations,’’ Toby Odone, the BP spokesman, said in an interview. Tony Hayward, BP chief executive, also downplayed the plumes last week during a congressional hearing. Under questioning from Edward J. Markey of Massachusetts, he said the concentrations were low, and some of the oil was not from the ongoing spill. Tests so far show that most of the underwater plumes are diluted: less than 1 part per million, according to the National Oceanic and Atmospheric Administration. In those concentrations the water would not look polluted to the naked eye. “But even though they are low concentrations, the biological community is very sensitive that far down,’’ said Steve Murawski, chief scientist for NOAA Fisheries Service. Little is known about the plumes, which are largely separated from the surface slick. It is not clear how much oil they hold, how many and where there are, how quickly the oil will break down, and whether the plumes are harming marine life. NOAA officials have confirmed the existence of underwater oil as far down as 3,300 feet below the surface and several miles from the wellhead. Other scientists have found evidence of plumes more than 10 miles long. A research cruise headed by Woods Hole Oceanographic Institution researchers and funded by the National Science Foundation set out Thursday to map one of the cloudy mixtures and dissect its components to better understand its toxicity, where it might travel, and how quickly it is being broken down by oil-eating microbes. There are other concerns. Scientists say the high pressure of the gushing leak is creating tiny oil droplets, some that can become suspended in water. The use of large amounts of underwater chemical dispersants are making those droplets smaller still. All those droplets, suspended in the water and trapped by dense ocean layers, are probably forming the plumes. Scientists are also concerned about how much gas from the gushing well remains trapped underwater. 
The use of dispersants is continuing, because scientists believe they help natural forces more easily break down the oil into less toxic compounds. Oil-eating microbes will do that under water. On the surface, they will be aided by sunlight, waves, and weather. But it’s a careful balance. If there are too many oil droplets in the water, the microbes, which need oxygen, could grow out of control and suck up so much oxygen they create a dead zone. Samantha Joye, a University of Georgia scientist who was one of the first to discover the undersea oil, said oxygen levels were down 30 percent or more in some areas she sampled. Federal officials say ongoing testing shows oxygen levels in the gulf are normal. “But there is a tipping point . . . that’s why you need continuous monitoring,’’ and NOAA is doing that, said Murawski, of the Fisheries Service. The Deep Spill experiment, some 200 miles off Norway’s coast, was celebrated at the time as a forward-looking exercise for the next generation of drilling. In all, four mixtures were released, including a combination of diesel oil and natural gas that, while not a perfect parallel to the crude oil leaking in the gulf, gives some comparison. Flow rates were comparable to the Gulf spill rate. No dispersants were used. According to a 2005 analysis by MIT’s Adams and Scott Socolofsky, now a professor at Texas A&M University, between 2 and 17 cubic meters of the 60 cubic meters of released diesel made it to the surface before overflights looking for oil on the sea surface ended, about seven hours after the oil release stopped. The scientists suggested that a significant portion of the oil rose slowly because it was comprised of very small droplets. Another colleague, Stephen Masutani of the University of Hawaii, found that one-third of the oil in Deep Spill would probably be found in droplets of half a millimeter or smaller, which could have taken a day or more to surface. In the gulf spill, the dispersants probably made droplets much smaller still, Adams said, which means they could take much longer to surface. Those droplets can get trapped by currents and dense water layers deep in the sea. “These small droplets have no choice but to follow the water once it leaves the upward-rising oil-and-gas mixture,’’ Socolofsky said. After Deep Spill, there was no effort by the industry — or government — to better understand these submerged oil droplets. Beth Daley can be reached at firstname.lastname@example.org. CLARIFICATION: An earlier version of this story incorrectly said that NOAA had confirmed the existence of oil underwater 142 miles from the wellhead. While NOAA did find evidence of oil, tests showed it did not come from the leaking BP wellhead.
<urn:uuid:01e003e8-e47e-4de5-9e8b-64a6f8bace08>
3.5625
1,314
News Article
Science & Tech.
44.988904
852
Although researchers can't be exactly sure how old the bacteria are — or how they reproduce — microbiologists at Aarhus University in Denmark posit that the ancient organisms could be anywhere from several thousand to millions of years old, The Washington Post's Joel Achenbach reports. The bacteria, found living in sediments that formed 86 million years ago, have extremely slow metabolisms and are able to survive by living on very small amounts of energy. "The slow rate of reproduction means that they cannot evolve at the same speed as bacteria in friendlier, energy-rich, nutrient-thick settings. That means, in turn, that they may preserve more primitive genetic features than other bacteria," scientist Robert Hazen told Achenbach. The ability of bacteria to stay alive in such a nutrient-starved environment supports the idea that similar organisms could live on other planets, requiring very little to sustain life.
<urn:uuid:dcd7cbcc-ad75-4944-973e-9572ff78a9ba>
4.0625
183
News Article
Science & Tech.
23.908286
853
Facing an uncertain future: How forests and people can adapt to climate change. Center for International Forestry Research (CIFOR), Bogor, Indonesia. The most prominent international responses to climate change focus on mitigation (reducing the accumulation of greenhouse gases) rather than adaptation (reducing the vulnerability of society and ecosystems). However, with climate change now inevitable, adaptation is gaining importance in the policy arena, and is an integral part of ongoing negotiations towards an international framework. This report presents the case for adaptation for tropical forests (reducing the impacts of climate change on forests and their ecosystem services) and tropical forests for adaptation (using forests to help local people and society in general to adapt to inevitable changes). Policies in the forest, climate change and other sectors need to address these issues and be integrated with each other—such a cross-sectoral approach is essential if the benefits derived in one area are not to be lost or counteracted in another. Moreover, the institutions involved in policy development and implementation need themselves to be flexible and able to learn in the context of dynamic human and environmental systems. And all this needs to be done at all levels from the local community to the national government and international institutions. The report includes an appendix covering climate scenarios, concepts, and international policies and funds.
<urn:uuid:57106d23-5399-4426-a853-4136284a3a19>
2.859375
272
Truncated
Science & Tech.
-0.745517
854
In Northeast Ohio, coyotes trot across runways at Burke Lakefront Airport within sight of Cleveland's skyscrapers, and dodge traffic on busy highways like U.S. 422 near Solon. In Chicago a couple of years ago, one sauntered into a sandwich shop in the downtown Loop and plopped down in the beverage cooler. In New York, a frisky 35-pound male led cops, reporters and TV news helicopters on an hours-long romp around Central Park before being shot with a tranquilizer dart. Once the denizens of southwestern deserts and Great Plains prairies, mobile and highly adaptable coyotes have been on a relentless eastward march for much of the 20th century, aided by suburban land-clearing and the elimination of their chief competitor, the gray wolf. Beginning in the 1990s, their presence escalated in Cleveland and other big Midwestern and Eastern metropolises, not just in fringe parklands, but in neighborhoods and the urban core. City-dwellers who spot them in parks and back yards are startled to find that a formidable tracker and hunter at the top of the food chain – what ecologists call an apex predator – is in their midst. "People are walking within a few feet or yards from coyotes every day and they don't know it," said Ohio State University biologist Stan Gehrt, whose decade-long study of the Chicago area's estimated 2,000 coyotes prompted him to dub them "ghosts of the city." "There are a whole lot more out there than what we see." Much remains unknown about these elusive creatures. They're one of the largest animals to dramatically expand, rather than shrink, their range in response to humans, though researchers still haven't figured out if they've settled in urban realms in spite of us, or because of us. "For an animal that's lived around humans for this long, we don't know much about it," said Cleveland State University biologist Robert Krebs. But new research, including a DNA study by Krebs and several CSU colleagues, is gradually revealing more about the habits of urban coyotes. The investigations are unearthing some surprises about how the animals got here, what they're doing now, how much of a threat they pose (hint: not a lot, although that may change) and their role in an ongoing evolutionary experiment that could play out in and around Ohio during the next few decades. The urban coyotes among us now are the descendants of an expanding Western coyote population that spilled from the grasslands of Iowa, Missouri and Indiana into Ohio in the early 1900s. The animals' eastward progress across the state was slow; they didn't spread to western Pennsylvania until 1947. A separate wave of Western coyotes took another expansion route and followed a different genetic strategy that gave them a big advantage over the slow-moving Buckeye group, according to a study led by New York State Museum biologist Roland Kays. The coyotes skirted northeastward, moving above and around the Great Lakes. In the Canadian woods, they encountered remnant populations of wolves, which hadn't been eradicated like their American counterparts. Although natural rivals, a few coyotes and wolves cross-bred, DNA testing shows. "It may have been both sides trying to make the best of things," Kays said. The resulting Northeastern coyote-wolf hybrids have big bodies, broad skulls and strong jaw muscles, making them better suited to take down deer than Ohio's mainly rodent- and roadkill-eating coyotes. 
Kays' research shows the hybrid coyote-wolves spread across Ontario and southward into upstate New York five times faster than the purebred Western coyotes that colonized Ohio and the lower Midwest. Like advancing armies, the two expansion fronts, one composed of smaller non-hybrid coyotes and the other of larger coy-wolves, are just starting to encounter each other in western New York and western Pennsylvania. What happens next – further genetic mixing, one side out-competing the other and taking over its territory, or some kind of peaceful but separate co-existence – isn't clear. "That's playing out right now," Kays said. Whatever their future, Ohio's purebred coyotes have become an enduring, if still mostly stealthy, fixture on the urban landscape. When Lake Erie freezes, U.S. Department of Agriculture wildlife biologist Randy Outward sees coyotes using the icepack to move east and west along the downtown Cleveland shoreline. "It's like a highway for them," he said. Suburban police departments, especially those near the Cleveland Metroparks and the Cuyahoga Valley National Park, get complaints from residents awakened by coyote howling, or alarmed to see coyotes jogging through their yards. The parks also hear from people demanding they do something about "their" coyotes. "A lot of folks think we're a refuge for them. We're not," said Rick Tyler, the Metroparks' senior natural resource manager. "Their home ranges use a lot of our parks, but some cross our boundaries freely. They're well-entrenched in municipalities around them. They're doing pretty well in the suburbs." The federal and regional park districts keep loose track of the coyotes within their boundaries using "howling surveys," remote trail cameras triggered by the animals' movement, and contact reports from park visitors. The annual howling surveys, where wildlife specialists and volunteers play amplified recordings of coyote calls and count the number of responses, are an inexact measure. But they indicate that between 100 and 150 coyotes are present in the 33,000-acre national park. The population climbed during the 1990s but seems to have leveled off, said Lisa Petit, the national park's chief of science and resources management. "Anecdotally, I'd say there is greater pressure on the [coyote] population," Petit said. That's possibly due to activities outside the park, such as the trapping of nuisance coyotes in neighborhoods and the culling of deer whose carrion otherwise would be a coyote food source. In the Metroparks, the Brecksville and Bedford reservations both appear to have resident coyote family groups – a dominant "alpha" male and female, their pups and several subordinate animals, Tyler said. Other coyote family groups seem to be based outside the Rocky River, Mill Stream Run, West Creek, and North and South Chagrin reservations, but include parts of those parks in their "home range," the area within which they hunt for food, and shelter and raise pups. Collars equipped with radio transmitters and GPS locators would enable researchers to precisely track the coyotes' movements, a major aid in managing them. But funding isn't currently available for the devices, which cost several thousand dollars apiece. So CSU graduate student Beth Judy is using a year's worth of Metroparks howling survey results and computer mapping software to try to figure out the animals' habitat and range. She's still crunching data but already knows "they're not just staying in the parks." 
Though mobile, Greater Cleveland's coyotes aren't mixing and freely inter-breeding as one big group. Instead, DNA collected from coyote droppings, or "scat," shows three clusters of animals, each genetically distinct from the others. Something is isolating the coyote groups, preventing a more wholesale blending of genes. The locations of the genetic clusters – one east of the Cuyahoga River, one west of the river but still in the national park, and the third in the Rocky River watershed – suggest that big north-south multi-lane roads, not rivers, are to blame, acting as physical barriers. "There's no problem with a coyote crossing the Cuyahoga [River]," said Krebs, the CSU biologist who led the analysis. "They don't have the east-west issue except for the highways. [Interstates] 71 and 77 form far stronger boundaries for their movement than the natural features." The Cleveland-area coyote DNA samples also show a surprising amount of genetic differences from individual to individual. To Kays, that indicates the Ohio population arose from a large, diverse original wave of coyote pioneers arriving from the west, not just a few stalwart explorers. Krebs thinks the genetic diversity shows that new coyote immigrants are continuously arriving in the area. Greater Cleveland may be what biologists call a "sink habitat," he said – an area where animals can survive, but don't reproduce well enough to sustain a population without replenishment from nearby "source habitats" where coyotes thrive. In that regard, Cleveland is like a kitchen sink beneath a running faucet. The steady pipeline of rural coyotes that resupply the urban population means that trapping or killing is only a temporary solution to nuisance complaints. "Solitary animals float around the landscape and have huge home ranges," Gehrt said. "They're looking for gaps and will fill them quickly. If a city wants to hire a trapper, he's got a permanent job." While wildlife officials acknowledge that removing aggressive coyotes is necessary, their overall strategy is to teach people how to co-exist with the animals. They aim to prevent coyotes from becoming dependent on people, which should reduce the chance of conflicts. That means keeping cats indoors, not leaving small dogs unattended outside, and securing potential food sources such as garbage and pet food. It also means accepting that urban coyotes are here to stay. Some evidence indicates they gravitate to urban sites. When they're captured and relocated to the country, they inevitably try to return. And yet in the city, coyotes avoid humans as much as possible, even altering their behavior and restricting their activity to nighttime to minimize contact. "They are a walking paradox," Gehrt said. "Every time you say they're a certain way, they do the exact opposite. All these things work against each other. But they work."
<urn:uuid:b121f276-473f-46bf-a1d4-06e864e8ab71>
2.796875
2,079
News Article
Science & Tech.
43.324073
855
Cascading Importance: Wolves, Yellowstone, and the World Beyond. A talk with William Ripple. Jonathan Batchelor Winter 2013. Large Predators and Ecological Health WAMC Northeast Public Radio August 23. Top Predators Protect Forests The Wildlife Professional Summer 2012. Cougars Encourage Lizards in Zion Year of the Lizard News July 2012. Predators and Plants Science Update April 26. Herbivores take toll on ecosystem The Register Guard April 10. Loss of predators affecting ecosystem health OSU Press Release April 9. Wolves to the Rescue Defenders of Wildlife Defenders Magazine Winter 2012. Wolves help Yellowstone, researchers say Local 10, CNN January 5, 2012. How Wolves Are Saving Trees in Yellowstone Good Environment January 4, 2012. Study says that with more wolves and fewer elk, trees rebounding in portions of Yellowstone The Washington Post January 3, 2012. Yellowstone transformed 15 years after the return of wolves OSU Press Release Dec 21, 2011. Lopped Off Science News November 2011. The Crucial Role of Predators: A New Perspective on Ecology Yale Environment 360 September 15, 2011. For Want of a Wolf, the Lynx Was Lost? Science Magazine September 9, 2011. Red wolf comeback in N.C. helps other animals thrive The Charlotte Observer August 13, 2011. The case for large predators The Oregonian July 23, 2011. Study tracks effects of declining predator numbers The Register-Guard July 17, 2011. Loss of top predators causes chaos, including fires and disease The Vancouver Sun July 15, 2011. Loss of large predators disrupting multiple plant, animal and human ecosystems OSU Press Release July 14, 2011. Loss of Top Predators Has Far-Reaching Effects PBS Newshour July 14, 2011. Oregon State researchers: Predators Important To Ecosystems OPB Earthfix July 14, 2011. Using Wolves and Other Predators to Restore Western Ecosystems Eugene Natural History Society November 2010. Sharks and Wolves: Predator, Prey Interactions Similar on Land and in Oceans US News Nov. 15, 2010. New Theory for Megafaunal Extinction American Archaeology Fall 2010. New theory on what killed off the woolly mammoths Science Fair, USA Today July 2, 2010. Study probes role of key predators in ecosystem disruption Corvallis Gazette-Times July 1, 2010. Ripple Marks: The Story Behind the Story Oceanography June, 2010. Destination Science 2010: The reintroduction of wolves has helped bring a severely damaged ecosystem back from the brink Discover Magazine April, 2010. Mess O' Predators The Discovery Files January 20, 2010. Top predators' decline disrupts ecosystems, says study The Epoch Times October 14-20, 2009. Ripple receives Spirit of Defenders Award for Science The Barometer October 7, 2009. Wolves, jaguars are out, coyotes, foxes are in: New global study The Arizona Daily Star - Blogging in the desert October 2, 2009. Decline in big predators wreaking havoc on ecosystems, OSU researchers say The Oregonian October 1, 2009. Where Tasty Morsels Fear to Tread The New York Times: The Wild Side September 29, 2009. Wolves to the Rescue in Scotland ScienceNOW Daily News (Science Magazine) July 22, 2009. Can wolves restore an ecosystem? Seattle Times January 25, 2009. Wolf Loss and Ecosystem Disruption at Olympic National Park Island Geoscience Fall 2008. The Silence of the Wild William Stolzenburg essay, Powell's Books 2008. Century without the wolf The Oregonian July 30, 2008. Monitoring cougar in Yosemite Valley Difficult San Mateo County Times June 22, 2008. Lack of predators harms wild lands San Mateo County Times June 21, 2008. 
Cougar decline results in critical changes to Yosemite ecosystem Land Letter - E&E Publishing Service May 8, 2008. Yosemite: Protected but Not Preserved. Science Magazine May 2, 2008. How humans, vanishing cougars changed Yosemite San Francisco Chronicle May 2, 2008. Wolves and Elk Shape Aspen Forests CurrentResults.com 2007. Return of the Wolves. Weekly Reader December 2007. Oregon State is No. 1 in conservation biology. The Oregonian via OregonLive.com September 6, 2007. Yellowstone's Wolves Save Its Aspens. The New York Times August 5, 2007. Presence Of Wolves Allows Aspen Recovery In Yellowstone. Science Daily (ScienceDaily.com) July 31, 2007. Aspens Return to Yellowstone, With Help From Some Wolves. www.sciencemag.org July 27, 2007. Yellowstone trees get help from wolves. MSNBC.com July 27, 2007. It All Falls Down: A plummeting cougar population alters the ecosystem at Zion National Park. Smithsonian Magazine/Smithsonian.com December, 2006. Cougar Predation Key To Ecosystem Health. ScienceDaily.com / University of Toronto October 25, 2006. The Ecology of Fear. emagazine.com March 2006. Hunting Habits of Yellowstone Wolves Change Ecological Balance in Park. The New York Times Oct. 18, 2005. Episode 3 "Predators", Strange Days on Planet Earth. National Geographic April 2005. Ecological changes linked to wolves. The Seattle Times Jan. 12, 2005. Mystery in Yellowstone: wolves, wapiti, and the case of the disappearing aspen. Notable Notes, Oregon State University 2004. A Top Predator Roars Back. On Earth Summer 2004. Research Shows Wolves Play Key Role in Ecosystems. ABC News Dec. 15, 2004. Who's Afraid of the Big Bad Wolf? The Yellowstone Wolves Controversy. Journal of Young Investigators Nov. 2004. Lessons from the Wolf. Scientific American Jun. 2004. Wolves linked to vegetation improvements. Wyoming Tribune-Eagle Mar. 18, 2004. Endangered Wolves Make a Comeback. National Public Radio Feb. 20, 2004. Wolves' Leftovers Are Yellowstone's Gain, Study Says. National Geographic News Dec. 4, 2003. Wolves enhance biodiversity in Yellowstone, report says. Oregonian Oct. 29, 2003. Wolves linked to tree recovery. Billings Gazette Oct 29, 2003. A top dog takes over. National Wildlife Federation Oct./Nov. 2003. OSU student maps L&C wildlife observations. Corvallis Gazette-Times Mar. 28, 2003. Aspens wither without wolves. Herald and News Nov. 19, 2000. Observatory: Fates of wolf and aspen. New York Times Sep. 26, 2000. Quiet Decline: Fewer wolves and wildfires may have led to aspen's decline. ABC News Sep. 21, 2000. Support for the Leopold site is provided by: Dept. of Forest Resources, OSU, 280 Peavy Hall, Corvallis, OR 97331.
<urn:uuid:8d55d791-f938-4846-9347-17c697c329a0>
3.390625
1,508
Content Listing
Science & Tech.
56.17466
856
Some researchers believe that the solar cycle influences global climate changes. They attribute recent warming trends to cyclic variation. Skeptics, though, argue that there's little hard evidence of a solar hand in recent climate changes. Now, a new research report from a surprising source may help to lay this skepticism to rest. A study from NASA’s Goddard Space Flight Center in Greenbelt, Maryland, looking at climate data over the past century, has concluded that solar variation has made a significant impact on the Earth's climate. The report concludes that evidence for climate changes based on solar radiation can be traced back as far as the Industrial Revolution. Past research has shown that the sun goes through eleven-year cycles. At the cycle's peak, solar activity occurring near sunspots is particularly intense, bathing the Earth in solar heat. According to Robert Cahalan, a climatologist at the Goddard Space Flight Center, "Right now, we are in between major ice ages, in a period that has been called the Holocene." Thomas Woods, solar scientist at the University of Colorado in Boulder, concludes, "The fluctuations in the solar cycle impacts Earth's global temperature by about 0.1 degree Celsius, slightly hotter during solar maximum and cooler during solar minimum. The sun is currently at its minimum, and the next solar maximum is expected in 2012." According to the study, during periods of solar quiet, 1,361 watts per square meter of solar energy reaches Earth's outermost atmosphere. Periods of more intense activity brought 1.4 watts per square meter (0.1 percent) more energy. While the NASA study acknowledged the sun's influence on warming and cooling patterns, it then went badly off the tracks. Ignoring its own evidence, it returned to an argument that man had replaced the sun as the cause of current warming patterns. Like many studies, this conclusion was based less on hard data and more on questionable correlations and inaccurate modeling techniques. The incontrovertible fact here is that even NASA's own study acknowledges that solar variation has caused climate change in the past. And even the study's members, mostly ardent supporters of AGW theory, acknowledge that the sun may play a significant role in future climate changes.
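As a quick check on the figures quoted above, using only the numbers already given: the extra 1.4 watts per square meter at solar maximum is about one part in a thousand of the quiet-Sun value,
\[ \frac{1.4\ \mathrm{W\,m^{-2}}}{1361\ \mathrm{W\,m^{-2}}} \approx 1.0\times10^{-3} \approx 0.1\%, \]
which matches the 0.1 percent figure cited in the study.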
<urn:uuid:589e8396-cc38-487e-ba70-b4542b127704>
3.84375
448
News Article
Science & Tech.
42.7468
857
This Dawn FC (framing camera) image shows some of the undulating terrain in Vesta’s southern hemisphere. This undulating terrain consists of linear, curving hills and depressions, which are most distinct in the right of the image. Many narrow, linear grooves run in various directions across this undulating terrain. There are some small, less than 1 kilometer (0.6 mile) diameter, craters in the bottom of the image. These contain bright material and have bright material surrounding them. There are fewer craters in this image than in images from Vesta’s northern hemisphere; this is because Vesta’s northern hemisphere is generally more cratered than the southern hemisphere. This image is located in Vesta’s Urbinia quadrangle and the center of the image is 63.0 degrees south latitude, 332.2 degrees east longitude. NASA’s Dawn spacecraft obtained this image with its framing camera on Oct. 25, 2011. This image was taken through the camera’s clear filter. The distance to the surface of Vesta is 700 kilometers (435 miles) and the image has a resolution of about 70 meters (230 feet) per pixel. This image was acquired during the HAMO (high-altitude mapping orbit) phase of the mission. The Dawn mission to Vesta and Ceres is managed by NASA’s Jet Propulsion Laboratory, a division of the California Institute of Technology in Pasadena, for NASA’s Science Mission Directorate, Washington D.C. UCLA is responsible for overall Dawn mission science. The Dawn framing cameras have been developed and built under the leadership of the Max Planck Institute for Solar System Research, Katlenburg-Lindau, Germany, with significant contributions by DLR German Aerospace Center, Institute of Planetary Research, Berlin, and in coordination with the Institute of Computer and Communication Network Engineering, Braunschweig. The Framing Camera project is funded by the Max Planck Society, DLR, and NASA/JPL. More information about Dawn is online at http://dawn.jpl.nasa.gov. Image credit: NASA/JPL-Caltech/UCLA/MPS/DLR/IDA
<urn:uuid:070933fb-a8a3-406a-8b89-95e2d798f517>
3.265625
459
Knowledge Article
Science & Tech.
44.139553
858
Working in collaboration with the CAO, SANParks and the University of the Witwatersrand, the CSIR's Earth observations research groups have achieved several milestones, changing the way large areas like the KNP and surrounding areas can be managed. According to Prof Greg Asner, professor at the Department for Global Ecology at Stanford University and in charge of the CAO, their relationship with South Africa is quite unique: "It is one of the only places in the world where we work directly with local scientists on issues of management conservation. Working in South Africa with the Kruger National Park and the CSIR gives us the chance to have real impact," he said during an interview at the time of the CAO's third mission to the country in April 2012. This sentiment is echoed by SANParks' research manager for GIS and remote sensing, Dr Izak Smit: "We do not have the infrastructure, technology or expertise to deal with a project of this magnitude. Yet, working with external partners, we can leverage the expertise and funding, thereby enriching our work in transforming the science into management decisions and practices. "We find ourselves at the interface between the science and the management of the parks. Collaboration with external partners like the CSIR, universities and the CAO is essential to the successful management of the parks, and has had impacts on how we manage the park when it comes to the provision of water holes and prescribed burning, for example." For Dr Renaud Mathieu, a CSIR principal scientist, the collaboration is also about building technical skills and capacity in South Africa to process large sets of data and developing remote sensing technologies well suited to the South African savannah landscape. "Historically, especially in Africa, most remote sensing-based approaches focused on tropical deforestation. However, more than half of the southern African subcontinent is covered with savannah with about 10 to 50% tree cover and undergoing mostly gradual changes such as bush encroachment or tree logging for fuelwood. Techniques developed for assessing woody biomass in tropical forests with dense canopies cannot simply be transferred to savannas and woodlands," he explains. Furthermore, the long-term vision is to develop the whole LiDAR value chain, including the local capacity to operationally collect LiDAR data for environmental management and vegetation applications using local airborne survey companies. In this regard, the CSIR and SANParks are already working with a South African company to test the viability, as SANParks is considering using LiDAR surveys for long-term monitoring. Research milestone: Sustainability of fuelwood for rural energy needs The LiDAR data from the 2008 flight campaign have enabled researchers to map and measure woody biomass in rural areas such as Bushbuckridge, where harvesting of live wood is still the primary source of fuel for cooking and heating even when electricity is available. Researchers combined the LiDAR data with socio-economic data collected from the area over the past 20 years by the Wits Rural Public Health and Health Transitions Research Unit, and the WITS programme for Sustaining Natural Resources in African Ecosystems. This shows that at the current rate of fuelwood consumption - three to four tons per year per household - the woodland resources for some rural villages in Bushbuckridge may only last another 12 years. 
With the help of the LiDAR data and fieldwork, researchers have also found evidence of illegal commercial cutting of fuelwood in the communal rangelands. "There is great concern that the current levels of utilisation are not sustainable, with direct negative impacts on the poor, as well as for biodiversity loss and conservation. Our findings to date regarding the sustainability of this ecosystem service warrant further investigation," says Dr Mathieu. In all instances, improved estimates will be instrumental to poverty alleviation. Research milestones: Loss of big trees in conserved areas Another significant finding is that large herbivores and fires may have a bigger impact on the loss of big trees in conserved areas than in communal areas, where large trees like the Marula are valued for their fruits. Over five metres high, many of these trees have taken over 50 years to grow. Says Dr Mathieu: "We have detected a 20% loss of big trees from research sites in a private game reserve next to the KNP in just two years, compared to a 10% loss of big trees from research sites on communal land over the same time period." This was also the first time in his remote sensing career that he found a 100% correlation between the predictions of a remote sensing system (the LiDAR) and ground verification. But researchers are still puzzled about why and how this is happening. "At the moment, we think it is because of different reasons," explains Dr Mathieu: "In the case of the private game reserve, field work shows that a combination of elephants and fire damage is involved. For instance, the elephants push and debark the big trees. The trees are weakened, and then burn more easily in veld fires." In the communal areas, trees are cut for building posts for field fencing and fuel wood; however, it is a big taboo to cut big fruit-bearing trees like the Marula, and people mostly cut the lower-growing trees and bushes. Again, this finding needs to be further investigated and interrogated. For the remote sensing specialists in South Africa, the recent 2012 CAO campaign will be useful to confirm or refute these results over a wider area and a longer time span.
<urn:uuid:99c23dc7-b725-4ad6-8a8b-c23e4b5b4416>
2.96875
1,223
News Article
Science & Tech.
26.964535
859
Pacific Southwest Research Station Tahoe Science Projects supported by SNPLMA Remote sensing of Lake Tahoe's nearshore environment Erin Lee Hestir, University of California, Davis The goal of this research is to use remotely sensed data to retrieve fine sediment, chlorophyll, and colored dissolved organic matter (CDOM) concentrations from the water column in the near shore, and to map the distribution of periphyton (attached algae), aquatic macrophytes (submerged plants), clam beds in the nearshore of Lake Tahoe and variations in sediment type. High spatial resolution multispectral satellite imagery, moderate spatial resolution multispectral satellite imagery, and airborne hyperspectral imagery will be used. We will investigate both empirical and model-driven methods to map fine sediment, chlorophyll, and CDOM concentration, macrophyte communities, clam beds, periphyton, and substrate type. The empirical approach will first classify the optically shallow near shore into the different bottom classes, using the field data and spectral library to train and then (independently) validate the classifier. This analysis allows the development of statistical correlations (e.g., regression modeling) whereby reflectance information can be used to predict the probability of the concentration of water quality constituents above a particular bottom type. Upon successful development, the statistical model can then be used to predict water quality in each image pixel given the reflectance value of that pixel. The second approach will use a radiative transfer model that simulates remote sensing reflectance of water given inputs of different aquatic optical properties. One of the key deliverables of the project is a cost-benefit analysis of remote sensing approaches for monitoring the nearshore environment and a manual for implementing remote sensing analysis for monitoring the nearshore environment. Relation to Other Research Including SNPLMA Science Projects The value of remote sensing technologies for evaluation and monitoring in the Lake Tahoe Basin has also been widely recognized, and an increasing number of remote sensing datasets are being acquired over the basin. NASA has been operating a remote sensing validation site at Lake Tahoe for over a decade and recently the Tahoe Regional Planning Agency (TRPA) and U.S. Geological Survey have purchased high spatial resolution satellite imagery of the Tahoe Basin. Dr. Schladow and Dr. Steissberg (UC Davis TERC) and Dr. Hook (NASA-JPL) have completed a SNPLMA Round 7-funded project, "Monitoring past, present, and future water quality using remote sensing (RS)," aimed at using remote sensing to quantify changes in lake-wide distributions of Secchi depth and chlorophyll distribution. The large pixel size has limited the application to areas outside the nearshore; however, there will be considerable benefit to this project based on what was learned. 
Several previous SNPLMA science studies will inform this project, including: 1) "Predicting and managing changes in near-shore water quality," 2) "Natural and human limitations to Asian clam distribution and recolonization-factors that impact the management and control in Lake Tahoe," and 3) "Development of a risk model to determine the expansion and potential environmental impacts of Asian clams in Lake Tahoe," as well as a project funded by the US Army Corps of Engineers that is conducting a baseline assessment of benthic species and developing recommendations for future assessments.
<urn:uuid:b75db238-5845-4d8f-8c9c-2a479659e8ca>
2.578125
724
Knowledge Article
Science & Tech.
9.61561
860
"This paper has very strong implications for the United States staying ahead in magnet technology, which would bring great dividends in research and improvements in medical imaging." A scientific surprise greets FSU researchers at higher magnetic fields by Susan Ray Research performed by a team at Florida State University's National High Magnetic Field Laboratory suggests that the benefits of building higher-field superconducting magnets likely will far outweigh the costs of building them. FSU researchers Riqiang Fu, Ozge Gunaydin-Sen and Naresh Dalal discovered something they weren't expecting while trying to improve the resolution, or quality of image, in the magnet lab's unique 900-megahertz, 21.1-tesla magnet. While experimenting with the giant magnet, the three noted an exponential increase in the ease of detecting the "fingerprint" of the chemical compound they were studying as they exposed it to ever-higher magnetic fields. A paper describing their research was published recently in the Journal of the American Chemical Society, a top-tier chemistry journal. The paper can be accessed here. "This paper has very strong implications for the United States staying ahead in magnet technology, which would bring great dividends in research and improvements in medical imaging," said Tim Cross, director of the magnet lab's NMR User Program and a professor of chemistry and biochemistry at FSU. "We need—and are working on—additional fundamental studies that show the benefits of going to higher fields." Nuclear magnetic resonance, or NMR, generates a true-to-life fingerprint—a unique pattern indicating the presence of specific molecules—for a research sample that is being analyzed. As a technique, NMR is very accurate as long as one can detect the sample in the first place. The ease or difficulty of detecting a sample is known as "sensitivity." Low sensitivity has been one of NMR's biggest liabilities, because the lower the sensitivity, the longer the experiment takes. Such slowness has limited NMR's potential applications. "Poor signal is like a faint picture in the darkness," said Dalal, the Dirac Professor of Chemistry and Biochemistry at FSU. "We've shown that the '900' (magnet) increases the picture's brightness by a factor of about 10 relative to low-field images. Think of how much more you can see in a room that is that much brighter…and imagine what you'd see at even higher fields." Theorists had predicted a linear increase in both resolution and sensitivity at higher magnetic fields, moving from 14.1 tesla to 21.1 tesla, the current state of the art in superconducting magnets. In their experiment, the FSU team members observed an exponential increase—with the sensitivity increasing by a factor of three over what had been predicted. Higher sensitivity in a magnet means it takes far less time—or much less of a sample—to conduct an experiment. "The reduction in time is like going from one hour to a couple of minutes," said Fu, an associate scholar/scientist at the magnet lab and the FSU chemistry department. "Many experiments take weeks, and such a reduction in time will allow for far more studies to be conducted on a single instrument." Dalal said the shortening of experimental time increases scientists' ability to fingerprint materials, opening up new areas of scientific investigation in NMR, including the study of materials useful in nanotechnology and medical imaging. 
The need for less of a sample—up to 18 times less—will open up high-field NMR to the study of enzymes and purified proteins, an area in which samples typically are of limited size. The National High Magnetic Field Laboratory (www.magnet.fsu.edu) develops and operates state-of-the-art, high-magnetic-field facilities that faculty and visiting scientists and engineers use for interdisciplinary research. The laboratory is sponsored by the National Science Foundation and the state of Florida and is the only facility of its kind in the United States.
<urn:uuid:c26c1d37-b8a7-47b0-bcc1-3c4cbdbd7343>
2.984375
823
News (Org.)
Science & Tech.
36.790437
861
|Freshwater Mussels of the Upper Mississippi River System| Mussel Conservation Activities 2005 Highlights: Possible fish predation of subadult Higgins eye was observed in the Upper Mississippi River, Pools 2 and 4. Subadult Higgins eye pearlymussels (Lampsilis higginsii) from the Upper Mississippi River, Pools 2 and 4. Shell damage may be due to predation by fish (i.e. common carp or freshwater drum). Top photo by Mike Davis, Minnesota Department of Natural Resources; bottom photo by Gary Wege, U.S. Fish and Wildlife Service.
<urn:uuid:931a73fa-5694-4b94-93fe-4dedcf039dcd>
3.015625
227
Knowledge Article
Science & Tech.
30.7425
862
HELCOM Indicator Fact Sheets for 2005 As the environmental focal point of the Baltic Sea, HELCOM has been assessing the sources and inputs of nutrients and hazardous substances and their effects on ecosystems in the Baltic Sea for almost 30 years. The resulting indicators are based on scientific research carried out around the Baltic Sea under the HELCOM PLC and COMBINE monitoring programmes. During the past few years, HELCOM Indicator Fact Sheets have been compiled by responsible institutions and approved by the HELCOM Monitoring and Assessment Group. The Indicator Fact Sheets for 2005 are listed in the navigation menu on the left and older ones can be found in the Indicator Fact Sheet archive. The development of sea surface temperature in the Baltic Sea in 2004 was characterised by rather cold months of June and July and by a warm August. The wave climate in the northern Baltic Sea in 2004 was characterised by a spring season that was calmer than usual and by a storm in December during which the significant wave height in the northern Baltic Proper reached a record value of 7.7 meters. The following ice winter was, by the extent of the ice cover, classified as normal. The break up of ice was in most waters earlier than normal and on the 23rd of May the Baltic Sea was ice free. Life pulsates according to water inflows The present state of the Baltic Sea is not only the result of the anthropogenic pressures but is also influenced by hydrographic forces, such as water exchange between the Baltic Sea and the North Sea. After the major Baltic inflow in January 2003, which renewed most of the deep water in the Baltic Sea, no new major inflow has taken place and the near-bottom water in the Bornholm and eastern Gotland Basin returned back to anoxic conditions in the middle of 2004. The Baltic Sea continues to suffer the impacts of human activities Baltic Sea habitats and species are threatened by eutrophication and elevated amounts of hazardous substances as a result of decades of human activities in the surrounding catchment area and in the sea. Eutrophication is the result of excessive nutrient inputs resulting from a range of anthropogenic activities. Nutrients enter the sea either via runoff and riverine input or through direct discharges. Although nutrient inputs from point sources such as industries and municipalities have been cut significantly, the total input of nitrogen to the Baltic Sea is still over 1 million tonnes per year, of which 25 % enters as atmospheric deposition on the Baltic Sea and 75 % as waterborne inputs. The total input of phosphorus to the Baltic Sea is ca. 35 thousand tonnes and enters the Baltic Sea mainly as waterborne input with the contribution of atmospheric deposition being only 1-5 % of the total. The main source of nutrient inputs is agriculture. (Please note that Indicator Fact Sheets on nutrient inputs to the Baltic Sea will be published in the near future). The inputs of some hazardous substances to the Baltic Sea have been reduced considerably over the past 20 to 30 years. In particular discharges of heavy metals have decreased. The large majority of heavy metals enter the Baltic Sea via rivers or as direct discharges: 50 % for mercury, 60-70 % for lead and 75-85 % for cadmium. The remaining share of inputs is mainly from atmospheric deposition of these heavy metals. 
Eutrophication intensifies phytoplankton blooms The waterborne loads for nitrogen and phosphorus were significantly higher in 2004 compared to the previous year, partly due to the natural fluctuations in inputs caused by varying hydrographical conditions. Annual emissions of nitrogen from the HELCOM Contracting Parties were lower in 2003 than in 1995. Mainly because of interannual changes in meteorology, no significant temporal pattern in nitrogen depositions to the Baltic Sea and its sub-basins can be detected; however, deposition in 2003 was 11% lower than in 1995. Eutrophication is an issue of major concern almost everywhere around the Baltic Sea area. The satellite-derived chlorophyll-like pigments in the Baltic Sea are clearly higher than in the Skagerrak and North Sea. The average biomass production has increased by a factor of 2.5 leading to decreased water clarity, exceptionally intense algal blooms, more extensive areas of oxygen-depleted sea beds as well as degraded habitats and changes in species abundance and distribution. Annual integrated rates for sedimentation of organic matter in the Gotland Sea have not shown significant trends between 1995 and 2003. However, a decrease in water clarity has been observed in all Baltic Sea sub-regions over the last one hundred years, with it being most pronounced in the Northern Baltic Proper and the Gulf of Finland. Although no rising trend can be detected in spring blooms from 1992 to 2005, the 2005 spring bloom in the Gulf of Finland was more intense than in the previous year while negligible in the Arkona Basin. Due to the poor weather during the summer of 2004, there were no major cyanobacteria blooms that year. As a result, levels of dissolved inorganic nutrients in the winter nutrient pool remained extremely high throughout the Baltic Proper and meant that the risk for severe cyanobacterial blooms remained. The average concentrations of dissolved inorganic nitrogen were lower in all regions except at the entrance to and within the Gulf of Finland throughout the year 2004 when compared to the reference (the average of the years 1993-2003). This was confirmed by the 2005 summer blooms of cyanobacteria being amongst the most intense and widespread ever encountered in the Northern and Central Baltic Proper. High surface water temperatures are a prerequisite for intensive blooms of toxic Nodularia species. In 2004, the abundance of the nitrogen fixing cyanobacteria as well as the ratio between the toxic Nodularia spumigena and the non-toxic Aphanizomenon flos-aquae were almost at the same level as in the previous four years. Heavy metals and organic pollutants still persistent in marine environment The inputs of some hazardous substances to the Baltic Sea have been reduced considerably over the past 20 to 30 years. However, the concentrations of heavy metals and organic pollutants in sea water are still several times higher in the Baltic Sea compared to waters of the North Atlantic. As a result of efforts to reduce pollution, annual emissions of heavy metals to the air have decreased since 1990 and consequently their annual deposition onto the Baltic Sea has also halved since 1990. Riverine heavy metal loads (notably cadmium and lead) have also decreased for most of the coastal states. Concentrations of contaminants in fish vary according to substance, species and location, but in general, the concentrations of cadmium, lead and PCBs have decreased. 
Still, the content of dioxins in fish muscle may exceed the authorized limits set by the European Commission. Overall, the levels of radioactivity in the Baltic Sea water and biota have shown declining trends since the Chernobyl accident in 1986, which caused significant fallout over the area. Radioactivity is now slowly transported from the Baltic Sea to the North Sea via Kattegat. The amount of caesium-137 in Baltic Sea sediments, however, has remained largely unchanged, with highest concentrations in the Bothnian Sea and the Gulf of Finland. Habitats and species under threat This year HELCOM introduces its first biodiversity indicators. The deteriorating state of the Baltic Sea affects marine life in many ways. Macrobenthic communities have been severely degraded by increased eutrophication throughout the Baltic Proper and the Gulf of Finland and are below the long-term averages. Populations of the amphipod Monoporeia affinis have crashed in the Gulf of Bothnia and the invasive polychaete Marenzelleria viridis has spread. The lack of salt water inflows has diminished the habitat layer for heterotrophic organisms in general and those of marine origin, such as copepods, in particular. Although the total number of copepods has not changed dramatically, the ratio between different species has been affected, which in turn has had consequences at higher trophic levels. Herring, for instance, has suffered from a decline in its favoured diet and now competes with sprat for other species of copepods. Decrease in observed illegal oil spills The growth of maritime transport during the past decade has increased the potential for illegal oil discharges. Since the late 1990s ships have been required to deliver oil or oily water from the machinery spaces as well as from ballast or cargo tanks to reception facilities in ports. Since 1999, the number of observed illegal oil discharges has gradually decreased every year, but in 2004 almost 300 illegal spills were still detected. Information on the long-term variations in the Baltic marine environment can be found in: Fourth Periodic Assessment of the State of the Marine Environment of the Baltic Sea, 1994-1998; Executive Summary (2001) List of 2005 Indicator Fact Sheets
<urn:uuid:7c6ebf00-01dd-49e4-b7ae-1c894499c4ed>
3.15625
1,838
Structured Data
Science & Tech.
32.555379
863
Look out for Asteroid 2012 DA14. It is heading toward Earth at 17,450 miles per hour, according to NASA, and the tug of our planet's gravitational field will cause it to accelerate when it gets here. But it's not going to strike us when it passes by on Feb. 15. NASA is adamant about this. "Its orbit is very well-known," said Dr. Don Yeomans, NASA specialist for near-Earth objects. "We know exactly where it's going to go, and it cannot hit the Earth." But it will give the Blue Planet the closest shave by any object its size in known history, Yeomans said. Gravity will cause it to fly a curved path, tugging it closer to Earth's surface than most GPS or television satellites. While the asteroid is moving at a good clip, space rockets have to accelerate to an even higher speed to escape Earth's gravity and make it into space. Though 2012 DA14 will be flying more slowly, its trajectory will keep it from falling to Earth. Getting a look at 2012 DA14 Stargazers in Eastern Europe, Asia or Australia might be able to see it with binoculars or consumer telescopes. It will not be visible to the naked eye, because it's small, "about half the size of a football field," Yeomans said. There are millions of asteroids in our solar system, and they come in all dimensions -- from the size of a beach ball to a large mountain, NASA said. Researchers are looking forward to getting such a close look at an asteroid, as it flies from south to north past Earth, coming as close as 17,200 miles to our planet's surface. NASA will ping it with a signal from a satellite dish for a few days to get a better idea of its makeup. Astronomers think there are about half a million asteroids the size of this one near Earth, NASA said, but less than one percent have been detected. Twenty years ago, no one would likely have discovered 2012 DA14, Yeomans said. Scientists spotted it nearly a year ago from an observatory in the south of Spain. Today, specialists track asteroids' paths 100 years into the future. They do so less to assess any possible threat of impact with Earth and more to explore what opportunities they offer. "These objects are important for science. They're important for our future resources," Yeomans said. Asteroids are potential gold mines Asteroids can be chock full of metals and other materials, which could be mined for use on earth or on space stations. NASA has discussed the possibility of capturing near-Earth asteroids and placing them into Earth's orbit to study them and extract their resources. At least two start-up companies, Planetary Resources and Deep Space Industries, plan to mine asteroids and sell the acquired bounty on Earth and in space. Being able to exploit asteroids' resources would allow humans to fly farther out into the solar system, build stations a long way from Earth and supply them with materials gathered out in space. Some asteroids, for example, are made of ice, NASA said, which could be used as drinking water for a distant space platform. What if one like this did hit us? An asteroid this size passes this close to Earth only every 40 years and collides with it only once every 1,200 years. If NASA turns out to be wrong about this one not hitting the planet -- and they won't be -- then Asteroid 2012 DA14 would not destroy the world in any case, Yeomans said. An asteroid made of metal that was about the same size collided with Earth 50,000 years ago, creating the mile-wide "Meteor Crater" in Arizona and obliterating everything for 50 miles around, he said. 
2012 DA14 is likely made of stone, which would do much less damage. In 1908 a similar type of asteroid entered the atmosphere and exploded over Tunguska, Russia, leveling trees over an area of 820 square miles -- about two thirds the size of Rhode Island. Not Earth shattering, but you still wouldn't want to live nearby.
<urn:uuid:a72bb2dd-4a07-4f8e-ae29-473bfc427530>
3.25
856
News Article
Science & Tech.
59.00901
864
Syntax: slot-missing class object slot-name operation &optional new-value => result*
Method Signatures: slot-missing (class t) object slot-name operation &optional new-value
Arguments and Values: class---the class of object. slot-name---a symbol (the name of a would-be slot). operation---one of the symbols setf, slot-boundp, slot-makunbound, or slot-value.
Description: The generic function slot-missing is invoked when an attempt is made to access a slot in an object whose metaclass is standard-class and the slot of the name slot-name is not a name of a slot in that class. The default method signals an error. The generic function slot-missing is not intended to be called by programmers. Programmers may write methods for it. The generic function slot-missing may be called during evaluation of slot-value, (setf slot-value), slot-boundp, and slot-makunbound. For each of these operations the corresponding symbol for the operation argument is slot-value, setf, slot-boundp, and slot-makunbound respectively. The optional new-value argument to slot-missing is used when the operation is attempting to set the value of the slot. If slot-missing returns, its values will be treated as follows:
Affected By: None.
Exceptional Situations: The default method on slot-missing signals an error of type error.
See Also: defclass, slot-exists-p, slot-value
Notes: The set of arguments (including the class of the instance) facilitates defining methods on the metaclass for slot-missing.
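As an illustration of the kind of method a programmer might write, the sketch below makes unknown slot names on one class fall back to a property list instead of signaling an error. It is only a sketch: the class property-bag, its extra slot, and this fallback behaviour are hypothetical examples, not part of the specification.

;; Illustrative sketch only: PROPERTY-BAG and its EXTRA slot are made up.
;; Unknown slot names are stored on a plist held in the real slot EXTRA.
(defclass property-bag ()
  ((extra :initform nil :accessor extra)))

(defmethod slot-missing (class (object property-bag) slot-name
                         operation &optional new-value)
  (declare (ignorable class))
  (ecase operation
    (slot-value      (getf (extra object) slot-name))
    (setf            (setf (getf (extra object) slot-name) new-value))
    (slot-boundp     (not (eq (getf (extra object) slot-name 'unbound) 'unbound)))
    (slot-makunbound (remf (extra object) slot-name)
                     object)))

;; Example: (setf (slot-value (make-instance 'property-bag) 'color) 'red)
;; now records COLOR on the EXTRA plist rather than signaling an error.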
<urn:uuid:b143a54e-cab0-40a5-9564-dd175c5caccb>
3.015625
348
Documentation
Software Dev.
51.614314
865
At the bottom of the code page is a text area where you can enter the URL for a Web accessible dataset and a browse button for selecting a dataset on your computer. Either way, the dataset will be read in using read.table with header=T and stored in a dataframe called X. The dataframe, X, will then be attached so you can use the variable names. After your code has been executed, three more browser windows will open to display the results. Once all of the windows are open you can keep typing code into the code window, edit what's there, or erase everything and start over. You can cut and paste between an editor window and the code window. You can also cut text or images out of the Rweb windows and paste them into documents (if the document editor supports pasting images). Here is some R code you can use to test things out.
# A little Regression
x <- rnorm(100)            # 100 random numbers from a normal(0,1) distribution
y <- exp(x) + rnorm(100)   # an exponential function with error
result <- lsfit(x,y)       # regress x on y and store the results
ls.print(result)           # print the regression results
plot(x,y)                  # pretty obvious what this does
abline(result)             # add the regression line to the plot
lines(lowess(x,y), col=2)  # add a nonparametric regression line (a smoother)
hist(result$residuals)     # histogram of the residuals from the regression
## Boxplots
n <- 10
g <- gl(n, 100, n * 100)
x <- rnorm(n * 100) + sqrt(codes(g))  # note: codes() is from older versions of R; current R uses as.integer() or unclass()
boxplot(split(x, g), col = "lavender", notch = TRUE)
# Scatter plot matrix
data("iris")
pairs(iris[1:4], main = "Edgar Anderson's Iris Data", font.main = 4, pch = 19)
pairs(iris[1:4], main = "Edgar Anderson's Iris Data", pch = 21,
      bg = c("red", "green3", "blue")[codes(iris$Species)])
#Coplots
data(quakes)
coplot(long ~ lat | depth, data = quakes, pch = 21, bg = "green3")
#Image and contour plots (These are Owww-Ahhh plots)
opar <- par(ask = interactive() && .Device == "X11")
data(volcano)
x <- 10 * (1:nrow(volcano))
x.at <- seq(100, 800, by = 100)
y <- 10 * (1:ncol(volcano))
y.at <- seq(100, 600, by = 100)
image(x, y, volcano, col = terrain.colors(100), axes = FALSE)
rx <- range(x <- 10*1:nrow(volcano))
ry <- range(y <- 10*1:ncol(volcano))
ry <- ry + c(-1,1) * (diff(rx) - diff(ry))/2
tcol <- terrain.colors(12)
par(opar); par(mfrow=c(1,1)); opar <- par(pty = "s", bg = "lightcyan")
plot(x = 0, y = 0, type = "n", xlim = rx, ylim = ry, xlab = "", ylab = "")
u <- par("usr")
rect(u[1], u[3], u[2], u[4], col = tcol[8], border = "red")  # shade the full plotting region
contour(x, y, volcano, col = tcol, lty = "solid", add = TRUE)
title("A Topographic Map of Maunga Whau", font = 4)
abline(h = 200*0:4, v = 200*0:4, col = "lightgray", lty = 2, lwd = 0.1)
par(opar)
<urn:uuid:6bcfbbcc-7a9d-4db7-b92b-6bc57541cd9d>
2.5625
865
Documentation
Software Dev.
71.088842
866
Posted: Jun 28th, 2010 Novel maskless e-beam technique a promising tool for engineering metallic nanostructures (Nanowerk Spotlight) The manufacture of certain types of nanostructures – nanotubes, graphene, nanoparticles, etc. – has already entered industrial-scale mass production. However, the controlled fabrication of nanostructures with arbitrary shape and defined chemical composition is still a major challenge in nanotechnology applications. It appears that electron beams from electron microscopes (EM) – nowadays routinely focused down to the nanometer regime – are ideal candidates for versatile tools for nanotechnology (see our recent Nanowerk Spotlight: "Direct-write process brings nanotechnology fabrication closer to mass production"). However, their usage is mostly restricted by the conditions in the corresponding electron microscopes: since most EMs are housed in high vacuum chambers, the unintended electron-beam-induced deposition of residual gases is a problem, as is the maintenance of well-defined sample conditions. Researchers in Germany have now presented a novel way to use a highly focused electron beam to lithographically fabricate clean iron nanostructures. This new technique expands the application field for focused electron beams in nanotechnology. "We have developed a novel two-step process to locally generate iron nanostructures on a commercial 300 nm silicon oxide substrate at room temperature," Hubertus Marbach, a researcher at the Universität Erlangen-Nürnberg, tells Nanowerk. "In the first step, the surface is locally activated by a 3 nm wide electron beam. The second step comprises the development of the activated structures by dosing an organometallic precursor, which then decomposes and grows autocatalytically to form pure iron nanocrystals until the precursor supply is stopped." Using a more vivid picture, Marbach says that one might think of the whole process as writing with invisible ink in the irradiation step, which is then made visible by the development step. "Besides the fantasy-stimulating application to write secret nanomessages in ultrahigh vacuum, the described effect might be the starting point for a whole new way to generate nanostructures." Electrons as Invisible Ink. A SiOx surface can be locally activated with a focused electron beam (1) such that subsequently dosed [Fe(CO)5] decomposes (2) and autocatalytically grows to pure Fe nanocrystals (3) at predefined positions until the precursor supply is stopped. A 3D representation of the SEM data is in the background. (Reprinted with permission from Wiley-VCH Verlag) The major new aspect of this work is the local chemical activation, i.e. catalytic activation of an oxidic surface. The researchers use this process to locally dissociate adsorbed precursor molecules and then generate nanostructures with an electron beam (a process that can be categorized as focused electron beam induced processing or FEBIP, where the injection or removal of electrons can be used to trigger chemical processes, such as bond formation or dissociation). The starting point of the present investigations was the so-called electron beam induced deposition or EBID technique, a special case of FEBIP, where already adsorbed precursor molecules are locally dissociated with a focused electron beam, leaving a deposit of the nonvolatile dissociation products. 
To minimize the complications of unintended EBID of residual gases, the team followed a 'surface science approach' and worked under ultrahigh vacuum (UHV) conditions. This resulted in deposits with high purity. The cleanliness of the whole process – UHV conditions plus a well-defined surface – was identified as the key factor for the purity of the metallic nanostructures. Marbach and his team described this technique in a previous paper ("Electron-Beam-Induced Deposition in Ultrahigh Vacuum: Lithographic Fabrication of Clean Iron Nanostructures"). Marbach explains that, in conventional applications, the high-energy primary electrons of the EM beam are scattered in the sample. Eventually, scattered electrons exit the surface again close to the impact point of the electron beam. "In EBID, this effectively leads to a widening of the deposit compared to the size of the beam," he says. "This (proximity) effect increases with an increase of the local electron dose. Since our fabrication technique relies on catalytic and autocatalytic effects, the electron dose needed as a 'seed' for the growth of the iron nanostructures can be minimized, thus reducing the mentioned proximity effect. In other words, our approach might be suitable to produce smaller structures." EBID allows almost every combination of deposit material and substrate to be targeted, since there is a large variety of precursor molecules and there are nearly no restrictions with regard to the substrate. In this specific work, the researchers' aim was to generate clean iron nanostructures with potential applications in the field of data storage, sensor or information-processing devices, or as seeds for the localized growth of other nanostructures such as carbon nanotubes or silicon wires. With their novel FEBIP process they are now moving on to explore other oxide materials and precursor molecules. "We propose our technique to pre-structure the surface by a local chemical modification as a general route to fabricate nanostructures, e.g. to locally anchor or activate functional molecules," says Marbach. One challenge of the novel process is the rather low writing speed. Marbach points out, though, that there are considerable efforts underway to develop multibeam instruments which would boost the throughput of electron-beam-based techniques, e.g. at the TU Delft (Mapper lithography) and the European CHARPAN project located in Vienna.
<urn:uuid:ef3f121e-0523-4e2c-99bf-7396c4f065d5>
3.0625
1,207
News Article
Science & Tech.
13.430458
867
AN ILLUSION device that makes one object look like another could one day be used to camouflage military planes or create "holes" in solid walls. The idea builds on the optical properties of so-called metamaterials, which can bend light in almost any direction. In 2006, researchers used this idea to create an "invisibility cloak" that bent microwaves around a central cavity, like water flowing around a stone. Any object in this cavity is effectively invisible. Now a group of researchers has gone a step further. "Invisibility is just an illusion of free space, of air," says Che Ting Chan, a physicist at the Hong Kong University of Science and Technology and a co-author of the study. "We are extending that concept. We can make it look like not just air but anything we want." Instead of bending light around a central cavity, the team has worked out ...
<urn:uuid:1a023e00-0243-4afa-8be5-70df19c7bc36>
3.4375
215
Truncated
Science & Tech.
47.467219
868
11/27/2010 - International delegates Saturday adopted new protections for seven species of shark in the Atlantic Ocean but rejected restrictions for bluefin tuna and swordfish, leaving the future of some of the world's most imperiled marine predators uncertain. Matt Rand, who directs global shark conservation for the Washington-based Pew Environment Group, said in a phone interview from Paris that the decisions show policymakers are responding to the criticism they received this spring after the Convention on International Trade in Endangered Species of Wild Fauna and Flora failed to adopt a single measure restricting the global trade of species such as oceanic whitetip and various types of hammerhead. Rand said the votes demonstrate "fisheries managers around the world are paying attention to shark issues," although he added that it still means only a tiny fraction of the sharks that swim in the Atlantic now are protected from fishing vessels. "It's a good step forward but far short of what is needed to save the world's sharks," Rand said. Read the full article Sharks Get New Protections Amid Severe Declines; Bluefin Tuna Safeguards Rejected on The Washington Post's Web site.
<urn:uuid:857127d8-b3b6-4250-acf2-1154a693ab48>
2.859375
235
News Article
Science & Tech.
23.109701
869
Tapping the Liquid Gold WATER BATTLE SITES Central Plains Water scheme: involves taking water from the Rakaia and Waimakariri rivers and storing it in a reservoir in the dammed Waianiwaniwa Valley. The water will be used to irrigate 60,000ha bounded by the rivers, SH1 and the Malvern foothills. Red Zones: Environment Canterbury has labelled many blocks of land as red zones – areas that have no more groundwater allocation to spare for irrigation. But farmers keep fighting for more water through legal challenges. Project Aqua: The lower Waitaki River was the site where Meridian abandoned a bid to build six new power stations by diverting two-thirds of the river through a channel alongside the river. Canterbury has 78,162 kilometres of rivers and 4753 lakes with a surface area of 702 square kilometres, an area the size of greater Christchurch. The Canterbury region has 70 per cent of the country’s irrigated land and generates 24 per cent of the nation’s power through hydroelectricity. The aquifers on the Central Plains are fed by seepage from the rivers and by rainfall. They supply almost all the water for irrigation and for human use. The groundwater of the Canterbury Plains is a large continuously flowing body of water within layers of silts, sands and gravel down to a depth of 500m. The Waimakariri River is the source of Christchurch’s pristine untreated drinking water through the underground aquifer system. It’s also one of the largest and best examples of a braided river in New Zealand. (c) 2008 Press, The; Christchurch, New Zealand. Provided by ProQuest LLC. All rights reserved.
<urn:uuid:07fc4255-e897-4380-91d9-8c1e8a7687e0>
2.921875
358
Truncated
Science & Tech.
49.35227
870
June 9, 2010 Three-dimensional imaging is dramatically expanding the ability of researchers to examine biological specimens, enabling a peek into their internal structures. And recent advances in X-ray diffraction methods have helped extend the limit of this approach. While significant progress has been made in optical microscopy to break the diffraction barrier, such techniques rely on fluorescent labeling technologies, which prohibit the quantitative 3-D imaging of the entire contents of cells. Cryo-electron microscopy can image structures at a resolution of 3 to 5 nanometers, but this only works with thin or sectioned specimens. And although X-ray protein crystallography is currently the primary method used for determining the 3-D structure of protein molecules, many biological specimens -- such as whole cells, cellular organelles, some viruses and many important protein molecules -- are difficult or impossible to crystallize, making their structures inaccessible. Overcoming these limitations requires the employment of different techniques. Now, in a paper published May 31 in the Proceedings of the National Academy of Sciences, UCLA researchers and their collaborators demonstrate the use of a unique X-ray diffraction microscope that enabled them to reveal the internal structure of yeast spores. The team reports the quantitative 3-D imaging of a whole, unstained cell at a resolution of 50 to 60 nanometers using X-ray diffraction microscopy, also known as lensless imaging. Researchers identified the 3-D morphology and structure of cellular organelles, including the cell wall, vacuole, endoplasmic reticulum, mitochondria, granules and nucleolus. The work may open a door to identifying the individual protein molecules inside whole cells using labeling technologies. The lead authors on the paper are Huaidong Jiang, a UCLA assistant researcher in physics and astronomy, and John Miao, a UCLA professor of physics and astronomy. The work is a culmination of a collaboration started three years ago with Fuyu Tamanoi, UCLA professor of microbiology, immunology and molecular genetics. Miao and Tamanoi are both researchers at UCLA's California NanoSystems Institute. Other collaborators include teams at RIKEN SPring-8 in Japan and the Institute of Physics, Academia Sinica, in Taiwan. "This is the first time that people have been able to peek into the 3-D internal structure of a biological specimen, without cutting it into sections, using X-ray diffraction microscopy," Miao said. "By avoiding use of X-ray lenses, the resolution of X-ray diffraction microscopy is ultimately limited by radiation damage to biological specimens. Using cryogenic technologies, 3-D imaging of whole biological cells at a resolution of 5 to 10 nanometers should be achievable," Miao said. "Our work hence paves a way for quantitative 3-D imaging of a wide range of biological specimens at nanometer-scale resolutions that are too thick for electron microscopy." Tamanoi prepared the yeast spore samples analyzed in this study. Spores are specialized cells that yeast form when placed under nutrient-starved conditions. Cells use this survival strategy to cope with harsh conditions. "Biologists wanted to examine internal structures of the spore, but previous microscopic studies provided information on only the surface features. We are very excited to be able to view the spore in 3-D," Tamanoi said. "We can now look into the structure of other spores, such as Anthrax spores and many other fungal spores. 
It is also important to point out that yeast spores are of similar size to many intracellular organelles in human cells. These can be examined in the future." Since its first experimental demonstration by Miao and collaborators in 1999, coherent diffraction microscopy has been applied to imaging a wide range of materials science and biological specimens, such as nanoparticles, nanocrystals, biomaterials, cells, cellular organelles, viruses and carbon nanotubes using X-ray, electron and laser facilities worldwide. Until now, however, the radiation-damage problem and the difficulty of acquiring high-quality 3-D diffraction patterns from individual whole cells have prevented the successful high-resolution 3-D imaging of biological cells by X-ray diffraction. - H. Jiang, C. Song, C.-C. Chen, R. Xu, K. S. Raines, B. P. Fahimian, C.-H. Lu, T.-K. Lee, A. Nakashima, J. Urano, T. Ishikawa, F. Tamanoi, J. Miao. Quantitative 3D imaging of whole, unstained cells by using X-ray diffraction microscopy. Proceedings of the National Academy of Sciences, 2010; DOI: 10.1073/pnas.1000156107
<urn:uuid:147d7363-857e-4221-9731-f3d1f731081a>
3.25
1,009
News Article
Science & Tech.
32.543814
871
Web edition: May 27, 2011 It took no time in December for critics to cast doubt on the remarkable claim that a bacterium force-fed with arsenic incorporated some of the poison into its cells and even its DNA. As soon as the Science paper by Felisa Wolfe-Simon and colleagues was published, critics took to the airwaves and the blogosphere; there was even coverage of the coverage. The debate over whether an organism could substitute a normally toxic substance, arsenic, for phosphorus, one of life’s six elemental building blocks — and whether Wolfe-Simon and her colleagues demonstrated an instance of that substitution — continues with a formal octet of criticisms published online today in Science. And you can take a look for yourself at the whole thing, because the journal has made the papers freely available. Some of the objections are with the science done (or not done) by the research team. The growth medium, for example, in which the microbes known as strain GFAJ-1 were cultured, contained some phosphorus, one researcher notes. Others take issue with the math with which the data was analyzed. Other scientists cast doubt on the claims not because of the nitty-gritty of the lab work, but because the report challenges a lot of known science, such as the chemical reactions that go into building a DNA molecule and how that might work — or not work — if arsenic were swapped for phosphorus. Science also published a response by Wolfe-Simon and her colleagues, in which they address the presented critiques. And so this scientific version of Clue continues: Was it GFAJ-1, in the lab, with the arsenic? That still isn’t clear, but as Science notes in its introduction to the debate, science is proceeding as it should.
<urn:uuid:87e7b736-923c-4fe6-97ac-1e3f2e149342>
2.875
363
Knowledge Article
Science & Tech.
40.332408
872
Screenshot via MBARI video The newly discovered carnivorous "harp sponge." Candelabra-shaped and carnivorous, a newly discovered sea creature — the "harp sponge" — has been found clinging to the ocean floor off Monterey Bay. Armed with "Velcro-like barbed hooks," the sponge — first observed via remote vehicle by a Monterey Bay Aquarium Research Institute (MBARI) team in 2000 — lives about two miles under the sea and feeds on crustaceans, reports Our Amazing Planet. Velcro-like barbed hooks cover the sponge's branching limbs, snaring crustaceans as they are swept into its branches by deep-sea currents. Once the harp sponge has its meal, it envelops the animal in a thin membrane, and then slowly begins to digest its prey. Two sponges were collected off the coast of California by researchers; 10 others were filmed. The largest observed specimen was about 14 inches tall, and the sponges varied in structure. Scientists believe the sponge evolved a multi-pronged shape, in part, to cover more food-grabbing surface area.
<urn:uuid:07937f17-c5c0-4aaf-9395-95bc4d9889c0>
3.625
236
News Article
Science & Tech.
41.546039
873
Observing the rock 2012 DA14 flying past the Earth on 16 February 2013 The path of asteroid 2012 DA14 across the south-west sky as seen from Sydney on the morning of Saturday 16 February 2013. The times indicated are in AEDT while the positions with relation to the horizon are calculated for 5:00 am. Diagram: Nick Lomb On the morning of 16 February 2013 (Australian time) 2012 DA14, a piece of space rock the size of a large city building, will hurtle past the Earth at a speed of about 28,000 km per hour. Its closest distance to the surface of the Earth will be about 27,700 km, which is closer than any other similar object in modern times. That closest approach is within the paths of the geosynchronous communication satellites that circle at 35,800 km above the equator. However, there is no likelihood of 2012 DA14 hitting the Earth and little chance of a collision with a satellite. An illustration showing how 2012 DA14 will pass by the Earth and its system of artificial satellites. Courtesy NASA It will be possible to see and photograph this rare close approach, but from Sydney it will be a little tricky. As the rock is heading for its closest approach rendezvous at 6:26 am AEDT and brightening as it comes closer, the Sydney sky is also brightening with the coming of dawn and sunrise. Any view of the space rock or asteroid is likely to be lost after nautical twilight at 5:34 am when the object’s predicted brightness is 8.2 mag (see discussion on magnitudes below). At closest approach, which almost coincides with sunrise in Sydney, the prediction is for a relatively bright 6.9 mag. Those who fancy a trip to Adelaide or even to Perth will have a better opportunity to see the flypast at its closest, for the Sun rises later there. Of course, as usual with astronomical events, the best viewing is from a dark sky site, away from city lights. For those not familiar with the magnitude scale used by astronomers, it is a measure of the brightness of stars and other objects in the sky. It works in reverse to what you may expect in that the fainter a star the greater its magnitude. Venus, for example, can be magnitude -4, the brightest star has a magnitude of about -1, the faintest star visible from a suburban location may be magnitude 4, the faintest star visible from a dark location may be magnitude 6, and with binoculars from a dark sky stars of magnitude 9 may be visible. Those in a dark sky should be able to see 2012 DA14 with a pair of binoculars just before dawn. From Sydney suburbs a Go To telescope could be sent to the exact celestial coordinates of the object courtesy of JPL’s Horizons service:
4:00 am AEST – RA 10 08 34.75, Dec -76 18 35.2
4:30 am AEST – RA 10 29 03.10, Dec -69 26 18.5
5:00 am AEST – RA 10 43 14.02, Dec -59 11 15.1
5:30 am AEST – RA 10 53 41.53, Dec -43 38 41.4
6:00 am AEST – RA 11 01 43.36, Dec -21 21 32.2
For most people, though, the best way to attempt observation is to set up a camera on a tripod, or better still, a tracking mount pointing in the region of the sky below the Southern Cross and take time exposures during the period between 5:00 and 5:30 am from Sydney (or until local nautical twilight at places to the west of Sydney). If the exposures are long enough the space rock may appear as a faint streak longer than the shorter streaks from stars. The observations and imaging may not work, but it is still worth trying if the sky is clear. It is a long wait until the next such close pass that we know about, which is that of the asteroid Apophis on 14 April 2029, again in the morning sky. 
What have you to lose? Only a little bit of sleep! Update 12 February 2013: the time of closest approach to Sydney is at 6:14 am AEDT when the asteroid is 30,678 km away from the city.
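The magnitude scale described above is logarithmic: a difference of 5 magnitudes corresponds to a factor of 100 in brightness, so a difference of Δm corresponds to a ratio of 10^(0.4·Δm). As a minimal illustration (not part of the original article — the helper function name is ours, and the magnitude values are simply those quoted above), the snippet below converts the predicted brightnesses of 2012 DA14 into ratios relative to a rough magnitude-6 naked-eye limit:

```javascript
// Brightness ratio between two apparent magnitudes:
// a source at magnitude m1 is 10^(0.4 * (m2 - m1)) times as bright
// as a source at magnitude m2 (smaller magnitude = brighter).
function brightnessRatio(m1, m2) {
  return Math.pow(10, 0.4 * (m2 - m1));
}

const nakedEyeLimit = 6.0; // approximate dark-sky naked-eye limit quoted above
const atTwilight = 8.2;    // predicted magnitude at nautical twilight (Sydney)
const atClosest = 6.9;     // predicted magnitude at closest approach

console.log(brightnessRatio(atTwilight, nakedEyeLimit).toFixed(2));
// ~0.13 -> at 8.2 mag the asteroid is roughly 8x fainter than the naked-eye limit
console.log(brightnessRatio(atClosest, nakedEyeLimit).toFixed(2));
// ~0.44 -> even at its brightest it remains a little over 2x too faint for the
// naked eye, which is why binoculars or a camera are recommended
```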
<urn:uuid:cc5b1486-e3cb-49eb-89f1-8fd7db7b83f3>
3.046875
871
Knowledge Article
Science & Tech.
72.726366
874
The Geological Perspective On Global Warming: A Debate Dr Colin P. Summerhayes, Vice-President of the Geological Society of London Dear Dr Peiser, In the interest of contributing to the evidence-based debate on climate change I thought it would be constructive to draw to your attention the geological evidence regarding climate change, and what it means for the future. This evidence was published in November 2010 by the Geological Society of London in a document entitled “Climate Change: Evidence from the Geological Record”, which can be found on the Society’s web page. A variety of techniques is now available to document past levels of CO2 in the atmosphere, past global temperatures, past sea levels, and past levels of acidity in the ocean. What the record shows is this. The Earth’s climate has been cooling for the past 50 million years from 6-7°C above today’s global average temperatures to what we see now. That cooling led to the formation of ice caps on Antarctica 34 million years ago and in the northern hemisphere around 2.6 million years ago. The cooling was directly associated with a decline in the amount of CO2 in the atmosphere. In effect we moved from a warm “greenhouse climate” when CO2, temperature and sea level were high, and there were no ice caps, to an “icehouse climate” in which CO2, temperature and sea level are low, and there are ice caps. The driver of that change is the balance between the emission of CO2 into the atmosphere from volcanoes, and the mopping up of CO2 from the atmosphere by the weathering of rocks, especially in mountains. There was more volcanic activity in the past and there are more mountains now. Superimposed on this broad decline in CO2 and temperature are certain events. Around 55 million years ago there was a massive additional input of carbon into the atmosphere – about 4 times what humans have put there. It caused temperatures to rise by a further 6°C globally and 10°C at the poles. Sea level rose by some 15 metres. Deep ocean bottom waters became acid enough to dissolve carbonate sediments and kill off calcareous bottom dwelling organisms. It took over 100,000 years for the Earth to recover from this event. More recently, during the Pliocene, around 3 million years ago, CO2 rose to levels a little higher than today’s, global temperature rose to 2-3°C above today’s level, Antarctica’s Ross Ice Shelf melted, and sea level rose by 10-25 metres. The icehouse climate that characterised the past 2.6 million years averaged 9°C colder in the polar regions and 5°C colder globally. It was punctuated by short warm interglacial periods. We are living in one of these warm periods now – the Holocene – which started around 11,000 years ago. The glacial to interglacial variations are responses to slight changes in solar energy meeting the Earth’s surface with changes in: our planet’s orbit from circular to elliptical and back; the position of the Earth relative to the sun around the Earth’s orbit; and the tilt of the Earth’s axis. These changes recur on time scales of tens to hundreds of thousands of years. CO2 plays a key role in these changes. As the Earth begins to warm after a cold period, sea ice melts allowing CO2 to emerge from the ocean into the atmosphere. There it acts to further warm the planet through a process known as positive feedback. The same goes for another greenhouse gas, methane, which is given off from wetlands that grow as the world warms. As a result the Earth moves much more rapidly from cold to warm than it does from warm to cold. 
We are currently in a cooling phase of this cycle, so the Earth should be cooling slightly. Evidently it is not. The Geological Society deduced that by adding CO2 to the atmosphere as we are now doing, we would be likely to replicate the conditions of those past times when natural emissions of CO2 warmed the world, melted ice in the polar regions, and caused sea level to rise and the oceans to become more acid. The numerical models of the climate system that are used by the meteorological community to predict the future give much the same result by considering modern climate variation alone. Thus we arrive at the same solution by two entirely independent methods. Under the circumstances the Society concluded that “emitting further large amounts of CO2 into the atmosphere over time is likely to be unwise, uncomfortable though that fact may be.” Dr Colin P. Summerhayes Vice-President Geological Society of London and Emeritus Associate Scott Polar Research Institute, Cambridge. 8 February 2013 Professor Robert Carter and Professor Vincent Courtillot respond: Dear Dr Peiser, Thank you for your invitation on behalf of the Foundation to reply to Dr Summerhayes’ letter about geological evidence in relation to the hypothesis of dangerous anthropogenic global warming (DAGW) that is favoured by the Intergovernmental Panel on Climate Change (IPCC). We are in agreement with many of Dr Summerhayes’ preliminary remarks about the geological context of climate change. This reflects that a large measure of scientific agreement and shared interpretation exists amongst most scientists who consider the global warming issue. Points of commonality in the climate discussion include: * that climate has always changed and always will, * that Earth has often been warmer than it is today, and that its present climatic condition is that of a warm interglacial during a punctuated icehouse world, * that carbon dioxide is a greenhouse gas and warms the lower atmosphere (though debate remains as to the magnitude and timescale of the warming), * that a portion of human emissions are accumulating in the atmosphere, * that a global warming of around 0.5°C occurred in the 20th century, but that there has been no global temperature rise over the last 16 years. The first two points are rooted in geological evidence (as discussed in more detail by Dr Summerhayes), the third is based upon physical principle and the last three are mostly matters of instrumental measurement (i.e. observation). Despite the disparate scientific disciplines involved, all these points are relevant to achieving a quantitative understanding of climate change, together with several other disputed scientific matters such as those that we discuss below. One of the disputed scientific matters is represented by Dr Summerhayes’ assertion that cooling over the last 34 million years “was directly associated with a decline in the amount of CO2 in the atmosphere”. The word “associated” is ambiguous. It may simply mean that temperature and CO2 were correlated, in the sense that their trends were parallel. But as everyone knows correlation is not causality and whether one drives the other, or the two are driven by a third forcing factor, or the correlation is the result of chance, requires careful analysis and argument. 
Though it may be true that a broad correlation exists between atmospheric CO2 content and global temperature, at least on some timescales, it remains unclear whether the primary effect is one of increasing CO2 causing warming (via the greenhouse effect) or of warming causing CO2 increase (via outgassing from the ocean). We are familiar with the argument that the currently decreasing carbon isotope ratio in the atmosphere is consistent with a fossil fuel source for incremental CO2 increases, and therefore with the first of these two possibilities, but do not find it compelling because other natural sources (soil carbon, vegetation) also contribute isotopically negative carbon to the atmosphere. A second area of uncertainty, related to the point just discussed, is the rate, scope and direction of the various feedbacks that apply during a natural glacial-interglacial climatic cycle. Dr Summerhayes provides a confident, and perhaps plausible, account as to how changing insolation (controlled by orbital change), melting sea-ice and increasing CO2 and CH4 jointly drive the asymmetrical glacial-interglacial cycles that have characterised recent planetary history. However, our knowledge of the climate system and its history currently remains incomplete; some of the forcing mechanisms and feedbacks may not be known accurately, or even at all. For example, we do not yet know whether clouds exert a net warming or cooling effect on the climate. Similarly, variations in ultraviolet radiation and high-energy particle emission from the Sun, in atmospheric electricity and in galactic cosmic rays may all play larger roles in controlling climate change than is currently assumed, yet these effects are absent from most of the current generation of deterministic computer models of the future climate. The temperature projections made by these models may well be affected by our ignorance of the magnitude, the sign, or even the existence of some of the forcings and feedbacks that are actually involved. Thirdly, Dr Summerhayes also briefly discusses the issue of sea level change. He quotes an estimated increase of 15 m in sea level associated with a temperature increase of 6–10°C 55 million years ago. He then quotes a range of 10–25 m rise for a 2–3°C warming 3 million years ago. To this we might add the further examples of the 125 m sea level rise that has accompanied the 6°C temperature rise since the last glacial maximum, and the 0.2-m rise associated with the ~0.5°C 20th century warming. It appears from these examples that a 1°C temperature rise can be associated with a sea level rise of as little as 0.4 m or as much as 8 m, and all values in between! This indicates an uncertainty in our understanding of the temperature/CO2/sea-level connection that surely lessens its value for contributing to policy formulation. Figure 1. Temperature curve reconstructed from oxygen isotope measurements in a Greenland ice core over the last 10,000 years (Lappi 2010 after Alley,2000). Fourth, and last, Dr Summerhayes says that because orbitally-forced climate periodicity is currently in a cooling phase “the Earth should be cooling slightly. Evidently it is not”. The statement is tendentious, because whether Earth is seen to be cooling or warming depends upon the length of climate record that is considered. Trends over 1, 10, 100 or 1000 years are not the same thing, and their differences must be taken into account carefully. 
We reproduce two figures that may be used to demonstrate that Earth is currently not warming on either the longer-term millennial timescale (Figure 1) or the short-term decadal/meteorological timescale (Figure 2). We note also that on the intermediate centennial timescale (1850–2010) the temperature trend has been one of a slight (0.5°C) rise. In assessing which of these timescales is the “proper” one to consider in formulating climate policy, we observe that the results conveyed in Figure 2 have little scientific (and therefore policy) meaning unless they are assessed in the context of the data in Figure 1. Figure 2. Mean temperature of lower atmosphere: HadCRUT4 annual means 1997-2011 We acknowledge that the data in Figure 1, which are drawn from a Greenland ice core, represent regional rather than global climate. But a similar pattern of Holocene long-term cooling is seen in many other records from around the world, including from Antarctic ice cores. Also, evidence for a millennial solar cycle has been accumulating over the past years, and, representing that rhythm, the Medieval Warming (also called Medieval Climatic Optimum) appears to have been both global and warmer than today’s climate. Regarding Figure 2, the data demonstrate that no warming has occurred since 1997. In response, some leading IPCC scientists have already acknowledged that should the temperature plateau continue, or turn into a statistically significant cooling trend, then the mainstream IPCC view will need revision. It is noteworthy, too, that over the 16 years during which global temperature has remained unchanged (1997-2012), atmospheric carbon dioxide levels have increased by 8%, from 364 ppm to c.394 ppm. Given a mixing time for the atmosphere of about 1 year, these data would invalidate the hypothesis that human-related carbon dioxide emissions are causing dangerous global warming. In any case, observed global temperatures are currently more remote than ever from the most recent predictions set out in IPCC AR4. The areas of uncertainty in the prevailing argument over DAGW are therefore not only geological but also instrumental and physical. Current debate, which needs to be resolved before climate policy is set, centres on the following three issues: * whether any definite evidence exists for dangerous warming of human causation over the last 50 years, * the amount of net warming that is, or will be, produced by human-related emissions (the climate sensitivity issue), and * whether the IPCC’s computer models can provide accurate climate predictions 100 years into the future. In assessing these issues, our null hypothesis is that the global climate changes that have occurred over the last 150 years (and continue to occur today) are mainly natural in origin. As summarised in the reports of the Nongovernmental International Panel on Climate Change (NIPCC), literally thousands of papers published in refereed journals contain facts or writings consistent with this null hypothesis, and plausible natural explanations exist for all the post-1850 global climatic changes that have been described so far. In contrast, no direct evidence exists, and nor does the Geological Society point to any, that a measurable part of the mild late 20th century warming was definitely caused by human-related carbon dioxide emissions. The possibility of human-caused global warming nonetheless remains, because carbon dioxide is indubitably a greenhouse gas. The major unknown is the actual value of climate sensitivity, i.e. 
the amount of temperature increase that would result from doubling the atmospheric concentration of CO2 compared to pre-industrial levels. IPCC models estimate that water vapour increases the 1°C effect that would be seen in a dry atmosphere to 2.5-4.5°C, whereas widely cited papers by Lindzen & Choi (2011) and Spencer & Braswell (2010) both describe empirical data that is consistent with negative feedback, i.e. sensitivity less than 1°C. The conclusion that climate sensitivity is significantly less than argued by the IPCC is also supported by a range of other empirical or semi-empirical studies (e.g., Forster & Gregory, 2006; Aldrin et al., 2012; Ring et al., 2012). Gathering these various thoughts together, we conclude that the risk of occurrence of damaging human-caused global warming is but a small one within the much greater and proven risks of dangerous natural climate-related events (not to mention earthquakes, volcanic eruptions, tsunamis and landslides, since we are dealing here with geological topics). Moreover, the property damage and loss of life that occurred in the floods in the UK in 2007; in the 2005 Katrina and 2012 Sandy storms in the USA; and in deadly bushfires in Australia in 2009 and 2013 all attest that even wealthy and technologically sophisticated nations are often inadequately prepared to deal with climate-related hazard. The appropriate response to climate hazard is to treat it in the same way as other geological hazards. Which is to say that national policies are needed that are based on preparing for and adapting to all climate events as and when they happen, and irrespective of their presumed cause. Every country needs to develop its own understanding of, and plans to cope with, the unique combination of climate hazards that apply within its own boundaries. The planned responses should be based upon adaptation, with mitigation where appropriate to cushion citizens who are affected in an undesirable way. The idea that there can be a one-size-fits-all global solution to deal with just one possible aspect of future climate hazard, as recommended by the IPCC, and apparently supported by Dr Summerhayes on behalf of the Geological Society, fails to deal with the real climate and climate-related hazards to which all parts of the world are episodically exposed. Professor Robert (Bob) Carter Professor Vincent Courtillot 14 February 2013 Aldrin, M. et al. 2012. Bayesian estimation of climate sensitivity based on a simple climate model fitted to observations on hemispheric temperature and global ocean heat content. Environmetrics, doi:10.1002/env.2140. Alley, R.B. 2000. The Younger Dryas cold interval as viewed from central Greenland. Quaternary Science Reviews 19: 213–226 Forster, P.M. & Gregory, J.M. 2006. The climate sensitivity and its components diagnosed from Earth radiation budget data. Journal of Climate 19, 39-52. Lappi, D. 2010. 65 million years of cooling Lindzen, R.S. & Choi, Y-S. 2011. On the observational determination of climate sensitivity and its implications. Asia-Pacific Journal of Atmospheric Sciences 47, 377-390. Ring, M.J. et al. 2012. Causes of the global warming observed since the 19th century. Atmospheric and Climate Sciences 2, 401-415. Spencer R. W. & Braswell, W.D. 2010. On the diagnosis of radiative feedback in the presence of unknown radiative forcing. Journal of Geophysical Research 115, D16109.
<urn:uuid:f0965eb8-455d-443b-9cf9-720d790d4628>
3.578125
3,605
Audio Transcript
Science & Tech.
40.039931
875
A Hot Paper for Warm Weather: Research on Fire and Climate Change Honored When the weather warms up in the American West, most people are just enjoying the blooms and greenery. But fire-watchers know the lush landscape will turn into tinder when temperatures soar. And the more plant material there is, the more flammable the countryside will become. One researcher watching the fire seasons with particular attention is UC Merced’s Tony Westerling, jointly appointed in the School of Engineering and the School of Social Sciences, Humanities and Arts. Westerling’s 2006 paper about large wildfires and climate change, published in the journal “Science,” was cited by Thomson Scientific’s ScienceWatch.com as the Fast-breaking Paper of the Month in geosciences for February 2008. The paper was in the top 1 percent of papers in the geosciences in terms of being cited by other scientists in their publications. It also showed the largest percentage increase in citations in the past few months of any highly-cited paper in the geosciences. Westerling’s paper gained national attention on its publication, with articles appearing in newspapers nationwide. The new assistant professor even found himself a guest on National Public Radio’s Science Friday before starting work at UC Merced in July 2006. He and his colleagues at the University of Arizona published a statement in 2007, following severe fires in Southern California, clarifying what kinds of fires their research had successfully linked to global climate change. The Southern California chaparral fires were not among the group – their research focused on forest fires. This 2007 statement also gained widespread notice in the media. ScienceWatch.com has published commentary by Westerling about the 2006 paper and his own background and experience. The site also plans a podcast interview with him. In the commentary, Westerling noted that the increase in forest fires “has been very large, in terms of the number of large forest wildfires, total area burned in these fires, the length of the fire season, and the length of time individual fires burn.” “This research is the first to conclusively link the increase in wildfire to trends toward warming and earlier springs, implying that global warming will tend to increase wildfire in forest ecosystems where snow plays an important role in the hydrology,” he said.
<urn:uuid:a401102a-5c55-4c3a-a8d7-68e42585b971>
3.1875
485
News (Org.)
Science & Tech.
38.894167
876
The Array object is used to store multiple values in a single variable. Create an array, and assign values to it: You will find more examples at the bottom of this page. An array is a special variable, which can hold more than one value at a time. If you have a list of items (a list of car names, for example), storing the cars in single variables could look like this: However, what if you want to loop through the cars and find a specific one? And what if you had not 3 cars, but 300? The solution is an array! An array can hold many values under a single name, and you can access the values by referring to an index number. An array can be created in three ways. The following code creates an Array object called myCars: You refer to an element in an array by referring to the index number. This statement accesses the value of the first element in myCars: This statement modifies the first element in myCars: [0] is the first element in an array, [1] is the second, and so on (indexes start with 0). You can have variables of different types in the same Array. You can have objects in an Array. You can have functions in an Array. You can have arrays in an Array: The Array object has predefined properties and methods: For a complete reference of all properties and methods, go to our complete Array object reference. The reference contains a description (and more examples) of all Array properties and methods. The example above makes a new array method that transforms array values into upper case.
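The code examples the text above refers to did not survive extraction. The JavaScript sketch below reconstructs what such examples typically look like; the variable name myCars comes from the text, while the car names and the method name myUcase are illustrative assumptions:

```javascript
// Three ways to create an array (car names are placeholders):
var myCars = new Array();          // 1. empty Array object, filled afterwards
myCars[0] = "Saab";
myCars[1] = "Volvo";
myCars[2] = "BMW";

var myCars2 = new Array("Saab", "Volvo", "BMW");  // 2. condensed Array constructor
var myCars3 = ["Saab", "Volvo", "BMW"];           // 3. array literal

// Accessing an element by its index number (indexes start at 0):
var firstCar = myCars[0];          // "Saab" -- the first element

// Modifying the first element:
myCars[0] = "Opel";

// Values of different types -- even other arrays or functions -- can share one array:
var mixed = [42, "text", ["nested", "array"], function () { return "hello"; }];

// A user-defined array method that transforms all values to upper case,
// as mentioned at the end of the text above:
Array.prototype.myUcase = function () {
  for (var i = 0; i < this.length; i++) {
    this[i] = String(this[i]).toUpperCase();
  }
};
myCars.myUcase();                  // myCars is now ["OPEL", "VOLVO", "BMW"]
```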
<urn:uuid:ce9fe621-e329-44db-acf6-98f01c2ee7b3>
4.28125
360
Documentation
Software Dev.
60.750141
877
New Hybrid Deep-sea Vehicle Is Christened Nereus Unique underwater vehicle is named in nationwide student contest Nereus—a mythical god with a fish tail and a man’s torso—was chosen Sunday (June 25) in a nationwide contest as the name of a first-of-its-kind deep-sea vehicle under construction at the Woods Hole Oceanographic Institution. The vehicle, known until now as the Hybrid Remotely Operated Vehicle, or HROV, will be able to work in the deepest parts of the ocean, from 6,500 meters to 11,000 meters (21,500 feet to 36,000 feet), a depth currently unreachable for routine ocean research. Scientists also plan to use it to explore remote, difficult-to-reach areas, including under the Arctic ice cap. [Editor's note: On May 31, 2009, Nereus dove to the deepest part of the ocean—Challenger Deep in the Mariana Trench. Read the article and interviews with engineers who built Nereus.] Engineers and ship's crew will be able to transform Nereus from a free-swimming vehicle for wide-area ocean surveys to a vehicle tethered by a cable to a surface ship that can be used for close-up investigation and sampling of seafloor rocks and organisms. The transformation will take 6 to 8 hours and happen on the ship's deck. “Nereus best fits the image of our vehicle, which engineers can change shape at sea for various science needs,” said Andy Bowen, the WHOI engineer overseeing the vehicle’s design and development. Bowen was among a panel of judges from WHOI and engineering consulting groups that selected Nereus from 22 entries in a naming contest open to junior high, high school, and college students who participate in the California-based Marine Advanced Technology Education (MATE) Center. The program provides students in the U.S. and Canada the opportunity to explore marine-related careers through internship programs, and it sponsors an annual remotely operated vehicle design competition. “The students thought it would be appropriate to name it after a Greek god who combined two forms,” said Kelly Miller, an oceanography and chemistry teacher at Monterey High School in California who coordinated the winning name submission for a team of six sophomores, juniors, and seniors. A vehicle that switches modes Nereus (rhymes with “serious”) keeps with a tradition in the WHOI Deep Submergence Laboratory of naming vehicles for mythical Greek figures. Among others in the WHOI-operated fleet of vehicles are Jason (a fabled adventurer and ocean explorer), Argo (a ship used by Jason), and Medea (the mythical wife of Jason). Several teams proposed names for the new vehicle taken from mythology, including the Japanese dragon god Ryujin, the Greek god of wind Aiolos, and the Greek god of the sea Poseidon. Ultimately, Bowen said Nereus was selected because “the name most appropriately represented the vehicle’s ability to switch modes as needed by scientists.” The $5-million, battery-operated vehicle will be the first ever designed to transform from a guided, tethered robot to a free-swimming vehicle. Each capability offers advantages to deep-sea researchers. In its autonomous mode, the vehicle will be able to fly on pre-programmed missions over swaths of ocean bottom to map the seafloor, to gather remote data, or to search for scientific targets such as hydrothermal vents. In its tethered mode, it will remain connected via a hair-thin, 25-mile-long cable that will enable scientists on the surface ship to send instant commands to the mechanical arm, used for gathering samples of interesting undersea rocks and organisms. 
Scheduled for launch in 2007 Sea trials will take place offshore Woods Hole in early 2007, and scientists will plan to use it for research later that year at Challenger Deep, a trench in the Pacific Ocean southwest of Guam. It is the deepest area of any ocean, deeper than Mount Everest is high, extending almost 11,000 meters (36,000 feet) beneath the sea surface. The panel of judges involved in the name selection included engineers from WHOI as well as engineering consultants working on the vehicle at The Johns Hopkins University in Baltimore and the Space and Naval Warfare Systems Center in San Diego. Several teams suggested names inspired by wildlife, including the color-switching lizard Chameleon, the aquatic salamander Siren, the Hawaiian owl Pueo, and the scientific name for lobster, Homarus. Others proposed people names. A Newfoundland team suggested Jacques, after famed underwater explorer Jacques Cousteau. Harvey, proposed by a Florida team, acknowledged marine artist Guy Harvey. Audrey, the only female name, came from a California team honoring the late Audrey Mestre, who died in 2002 attempting to set a deep-sea diving record. Nereus was announced the winner during a June 25 awards banquet at the NASA Johnson Space Center in Houston. The prize for the winning team is a trip this September to see the HROV in Woods Hole, where Bowen said engineers expect to be concluding tests on the vehicle’s manipulator arm, thruster, pressure housings, and electronic components. Funding for the development of Nereus comes from the National Science Foundation, the Office of Naval Research, and the National Oceanic and Atmospheric Administration. Originally published: June 26, 2006
<urn:uuid:4a1406d4-54ab-4282-b555-e51a18bc1888>
2.609375
1,137
News (Org.)
Science & Tech.
33.6309
878
Karner Blue Butterfly and Concord (NH) Pine Barrens Project Photo: Lindsay Webb/NHFG Project Goal: To reintroduce the Karner blue butterfly (Lycaeides melissa samuelis) in Concord, NH and to maintain the Concord Pine Barrens through habitat management. Timeline: NH Fish and Game's Nongame Program began restoring the Concord Pine Barrens in 2000 and began releasing captive reared Karner blue butterflies in 2001. Periodic habitat management will always be necessary to maintain the Pine Barrens as a suitable habitat for Karner blue butterflies to survive. Captive rearing of the butterflies will end when the federal recovery goals have been met. Location: Approximately 300 acres of Pine Barrens habitat in Concord, NH. Description: In 1999, the Karner blue butterfly was thought to be extirpated from New Hampshire. The last place it was observed was in a power line corridor in Concord. Working closely with the U.S. Fish and Wildlife Service, New Hampshire Fish and Game biologists began collecting Karner blue butterfly eggs at the next closest and stable population in New York. Almost every year New York provides adult butterflies to help maintain genetic diversity in the New Hampshire Karner blue population. Over many years, captive rearing techniques were established and flying adults were released into the wild. In order to tell which butterflies were released into the wild from the captive rearing lab, and which butterflies make it through the life cycle in the wild, biologists write a number on the butterfly's wing before releasing it into the wild. A mark-recapture survey during the two flying periods allows biologists to estimate the population. In 2005, a new partnership with Roger Williams Park Zoo in Providence, Rhode Island, was established allowing the Zoo to help raise larvae through the pupae stage. The pupae are transported back to New Hampshire where they are either released into the wild or held in captivity for breeding. In 2008, concerns over the New York Karner blue population arose so biologists in NH began captive rearing more larvae in order to give New York pupae to be released back into the wild to augment the wild population in New York at the Albany Pine Bush Preserve. Habitat management is performed on the Concord Pine Barrens to mimic the historic natural disturbance regimes that maintain Pine Barrens vegetation. Some of the techniques used are controlled burning, brush cutting and planting of native vegetation. NHFG biologists adhere to adaptive management which allows the management techniques to change over time as specific outcomes due to timing and intensity change the result. Controlled burning is performed to reduce leaf litter and duff, reduce non-native vegetative species, and promote sunny and sandy openings for native Pine Barrens vegetation to grow. "Kids For Karners" started in 2000 as a way to engage area school children in the Karner blue butterfly and Concord Pine Barrens project. Every winter, biologists go into classrooms where they talk to kids from pre-K up through high school age about the project. The students then plant wild lupine seeds and take care of the plants until May when they come to the Concord Pine Barrens to plant their wild lupine plants. In some years, students also try growing other essential nectar plants such as New Jersey tea and blunt-leaved milkweed. High school students have also helped by cutting and piling brush and planting wild lupine and other plants at a nearby business. 
The New England Zoo and Aquarium Conservation Collaborative, a conservation group initiated by Roger Williams Park Zoo, began volunteering in 2000. This group has helped to grow wild lupine and plant it in the Concord Pine Barrens, volunteered in the captive rearing lab and in the field to cut brush, pick wild lupine seed, and help with trail work.
Partners: Albany Pine Bush Preserve Commission; City of Concord; National Wildlife Federation; New England Wildflower Society; New England Zoo and Aquarium Conservation Collaborative; New Hampshire Army National Guard; NH Department of Resources and Economic Development: Division of Forests and Lands; New York Department of Conservation; Roger Williams Park Zoo; U.S. Fish and Wildlife Service - Northeast Region.
Funding: Private donations have provided the foundation for the Nongame and Endangered Wildlife Program since its inception in 1988. Contributions support the on-the-ground work and also enable the Nongame Program to qualify for additional funding through grants from both the State of New Hampshire and the U.S. Fish and Wildlife Service. Donations made to the Nongame Program are matched dollar-for-dollar by the State of New Hampshire up to $50,000 annually. Please help keep this project going by donating to the Nongame and Endangered Wildlife Program. The Nongame Program also receives a portion of proceeds from the sale of the NH Conservation License plate (moose plate) each year. To learn more, please visit the NH Moose Plate Program online at www.mooseplate.com. The habitat management of the Concord Pine Barrens is funded through a mitigation agreement with the New Hampshire Army National Guard until 2010.
Volunteering: Volunteer opportunities to help on this project, including planting wild lupine, brush piling, and helping in the captive rearing lab, are usually made available every spring and summer on specified dates. Check your Spring Wildlines newsletter or contact the Wildlife Division at email@example.com or (603) 271-2461.
[Table in original: number of butterflies released (both broods), number of plants Kids For Karners planted, and number of acres burned in the Concord Pine Barrens, by year.]
- Karner blue butterfly and Concord Pine Barrens and other Nongame Program news
- Karner blue butterfly fact sheet
- Wildlife Action Plan Karner blue butterfly profile
- U.S. Fish and Wildlife Service - Karner Blue Butterfly
- New! "Propagation Handbook for the Karner blue butterfly"
<urn:uuid:16bb6ee8-7314-4dbd-b1d3-333b8c1fe6c8>
3.09375
1,248
Knowledge Article
Science & Tech.
37.043905
879
Scientists have identified a never-before-seen type of meteorite from Mars that has 10 times more water and far more oxygen in it than any previous Martian sample. The meteorite was found in the Sahara Desert in 2011 and has the official name of Northwest Africa 7034. It is a small basaltic rock — nicknamed “Black Beauty” – which means it formed from rapidly cooling lava. The meteorite is about 2.1 billion years old, from a period known as the Martian Amazonian epoch, and provides scientists with their first hands-on glimpse of this era. Around 110 Martian meteorites have been found on Earth. Most were probably blown off the Red Planet during a large asteroid impact and subsequently crashed on our own world. The majority are relatively young, though the famous Allan Hills 84001, which some scientists believe contains traces of ancient Martian bacteria, is more than 4 billion years old.
<urn:uuid:76c66113-f06e-418b-94a8-9f28e8378eac>
3.640625
186
News Article
Science & Tech.
51.188158
880
Life depends on an essentially continuous exchange of mass and energy between living organisms and their environment. Human impact on this vital exchange has occurred on a global or macroclimate scale. Understanding the physical principles involved in heat transfer and absorption in the atmosphere is critical to understanding how these physical factors affect living organisms. The specific objectives of this section are to explain the properties of heat transfer, and to describe laboratory activities that can be used at a variety of academic levels with only slight modifications. Described below are three series of experiments performed in the laboratory to address questions that emphasize the underlying principles of heat transfer. These hands-on experiments focused on principles that relate to conduction and convection. The object was to identify the mode of heat transfer through solids, liquids, gases, and between boundaries. Understanding these concepts gave us a better understanding of how heat is transferred between our environment and living organisms. These experiments were used as an integral part of the workshop, which consisted of reflections on redesigning or modifying lab exercises to fit the personal needs of workshop teachers. These exercises could be adapted for middle school, high school, and college level courses. The methods utilized for the three experiments involved increasing or decreasing the temperature of a solid or liquid and, where applicable, observing the motion of a dye caused by the changes in temperature and density of the medium.
Modes of Heat Transfer:
- Conduction: heat transfer resulting from direct contact between substances of different temperatures; heat is transferred from the high-temperature substance to the low by direct molecular interaction.
- Convection: heat transport by a moving fluid (gas or liquid). The heat is first transferred to the fluid by conduction, but the fluid motion carries the heat away.
- Radiative exchange: heat transfer via electromagnetic waves; the amount of radiant energy emitted, transmitted, or absorbed.
(Figure from Microsoft Encarta)
Laboratory Apparatus for Labs 1-3
Lab 1: Heating from Below: Convection
In this experiment, water was heated from below to produce convection. Although the atmosphere is composed of air, this experiment was relevant to atmospheric motion as well. The lower atmosphere (troposphere) is mostly heated from below because the oceans and continents absorb radiation from the sun and then transfer some of the resulting heat energy to the lower atmosphere. In Lab 1, a beaker was heated (see figure below). Thermometers were placed 1/2 cm below the water surface and 1/2 cm above the bottom of the beaker. The temperature was recorded at 30 second intervals. Drops of dye were added to the bottom of the beaker between intervals. After three minutes the beaker was removed from the hot plate and temperature readings were recorded for another five minutes. Convection was visualized by observing the motion of the dye. The motion of the dye was circular from bottom to top and returning to the bottom of the beaker. The energy from heating created a less dense liquid at the bottom, thus causing the upward motion of the dye. Upon reaching the surface, the dye was now in the denser medium and therefore returned to the bottom. This motion is an example of convection. This phenomenon is evident in the motion of wind. 
The same temperature-driven differences in density and in the kinetic motion of molecules, acting on air rather than water, produce the movement of air masses we experience as wind. This lab can be used at lower levels to demonstrate simple properties of heat transfer and convection. At higher levels, this lab illustrates these basic principles, and could be extended to address more complex applications related to convection such as the Coriolis effect.
1. Explain the process by which the water is heated.
2. Describe the motion of the water as made visible by the dye.
3. Why does convection occur?
4. Did convection cease? When? Why?
Environmental Applications of Principles of Radiative Exchange, Conduction and Convection (Figure from E. Zerba, Princeton University; email@example.com)
Lab 2: Conduction
Comparison of this experiment with the first illustrated the difference between the rate of heat transfer by conduction and that of convection. It also illustrated the difference in heat capacities between water and the solid materials of the earth's surface. Lab 2 was configured similarly to Lab 1, but looked at the heating and cooling temperature differences using sand of the same weight as the water used in experiment 1. No dye was used in this experiment, as convection was not a factor. The temperature difference between the top and bottom layers of sand indicated that sand heats and cools at a faster rate compared to water. When the beaker was removed from the heat, the temperature continued to increase via conduction from the bottom of the beaker. This lab exercise is useful for demonstrating the concept of conduction to lower level students. Upper level students can use this lab to make the connections between conduction and the heat capacity of various substances, related to the heat transfer that occurs between the earth's surfaces and the surface of living organisms.
1. Is there any convection in the sand? Explain.
2. Why did the temperature recorded by the lower thermometer continue to rise dramatically after the heating ceased?
3. On the basis of heat capacity, explain why the temperature changes for the sand and water were different.
4. Using what you have observed in the two experiments, predict whether a cold front will lower temperatures more at inland locations or on the coast. Explain your answer.
Lab 3: Cooling From Above
In lakes and oceans, convection is generally the result of cooling from above rather than heating from below. This was demonstrated by adding ice to the water. Using an experimental setup that allowed measurement of temperature at the top and the bottom of a beaker of water, ice was added to the top of the beaker. This experiment illustrated the concept that water is densest near 4 °C and therefore sinks. Convection was visualized by the movement of dye added to the bottom of the beaker, which was displaced by the cooler, denser water. This lab demonstrates several physical principles associated with heat transfer, including density, kinetic molecular theory, and convection. On a larger scale, this laboratory exercise demonstrates the process by which seasonal turnovers occur in ponds and lakes. At lower levels, teachers may choose to discuss physical principles of heat transfer only, while at upper levels, teachers may choose to integrate this small-scale investigation with the study of climate processes and lake nutrient stratification and mixing.
1. Why does ice float?
2. Is there any evidence of convection? Why does or does it not occur?
Draw a diagram to explain how seasonal turnover occurs in a lake.
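To put rough numbers on the heat-capacity question from Lab 2, the sketch below estimates the temperature rise of equal masses of sand and water for the same heat input, using Q = m·c·ΔT. This is not part of the original workshop; the specific heats are typical textbook values, and the heat input and mass are assumed round numbers rather than measurements from these labs.

```python
# Rough comparison of temperature rise in equal masses of sand and water
# for the same heat input, using Q = m * c * dT  =>  dT = Q / (m * c).
# Values are typical textbook figures, not measurements from the labs above.

SPECIFIC_HEAT = {          # J / (kg * K), approximate
    "water": 4186.0,
    "dry sand": 830.0,
}

def temperature_rise(q_joules, mass_kg, c):
    """Temperature increase of a substance absorbing q_joules of heat."""
    return q_joules / (mass_kg * c)

if __name__ == "__main__":
    q = 5000.0      # assumed heat delivered by the hot plate, in joules
    mass = 0.25     # assumed 250 g of material in the beaker
    for name, c in SPECIFIC_HEAT.items():
        dt = temperature_rise(q, mass, c)
        print(f"{name:8s}: about {dt:.1f} K rise for {q:.0f} J into {mass*1000:.0f} g")
    # Sand warms (and cools) roughly five times more than water for the same
    # heat flow, which is why the sand lab shows larger temperature swings.
```

The same factor-of-five difference in specific heat is the key to question 4 of Lab 2: large bodies of water warm and cool far more slowly than soil and rock, so coastal temperatures swing less than inland ones.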
<urn:uuid:77831d47-ed83-47e1-aa8f-16bc431d8619>
4
1,507
Tutorial
Science & Tech.
37.059793
881
Introduction to Enzymes

The following has been excerpted from a very popular Worthington publication which was originally published in 1972 as the Manual of Clinical Enzyme Measurements. While some of the presentation may seem somewhat dated, the basic concepts are still helpful for researchers who must use enzymes but who have little background in enzymology.

Enzyme Kinetics: Energy Levels

Chemists have known for almost a century that for most chemical reactions to proceed, some form of energy is needed. They have termed this quantity of energy "the energy of activation." It is the magnitude of the activation energy which determines just how fast the reaction will proceed. It is believed that enzymes lower the activation energy for the reaction they are catalyzing. Figure 3 illustrates this concept.

The enzyme is thought to reduce the "path" of the reaction. This shortened path would require less energy for each molecule of substrate converted to product. Given a total amount of available energy, more molecules of substrate would be converted when the enzyme is present (the shortened "path") than when it is absent. Hence, the reaction is said to go faster in a given period of time.
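One way to make the "lower activation energy means faster reaction" idea concrete is the Arrhenius relation, k = A exp(−Ea/RT). This is not part of the original Worthington text, but it is the standard quantitative statement of the same concept. In the sketch below the two activation energies and the pre-exponential factor are invented round numbers chosen only for illustration.

```python
import math

R = 8.314  # gas constant, J / (mol * K)

def rate_constant(a, ea_j_per_mol, temp_k):
    """Arrhenius rate constant k = A * exp(-Ea / (R * T))."""
    return a * math.exp(-ea_j_per_mol / (R * temp_k))

if __name__ == "__main__":
    T = 310.0                   # around body temperature, in kelvin
    A = 1.0e13                  # pre-exponential factor (arbitrary, shared by both cases)
    ea_uncatalysed = 75_000.0   # J/mol, invented round number
    ea_catalysed = 50_000.0     # J/mol, invented: the enzyme lowers the barrier

    k1 = rate_constant(A, ea_uncatalysed, T)
    k2 = rate_constant(A, ea_catalysed, T)
    print(f"uncatalysed k = {k1:.3e}")
    print(f"catalysed   k = {k2:.3e}")
    print(f"rate enhancement ~ {k2 / k1:.1e}x")
    # Lowering Ea by 25 kJ/mol at 310 K speeds the reaction by roughly
    # exp(25000 / (8.314 * 310)) ~ 1.6e4 -- the "shortened path" idea in numbers.
```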
<urn:uuid:cb68bdbc-cf40-4d1b-ba86-741aa42fc245>
4.09375
234
Tutorial
Science & Tech.
32.380483
882
In Journey into the Cell, we looked at the structure of the two major types of cells: prokaryotic and eukaryotic cells. Now we turn our attention to the "power houses" of a eukaryotic cell, the mitochondria.

Mitochondria are the cell's power producers. They convert energy into forms that are usable by the cell. Located in the cytoplasm, they are the sites of cellular respiration which ultimately generates fuel for the cell's activities. Mitochondria are also involved in other cell processes such as cell division and growth, as well as cell death.

Mitochondria: Distinguishing Characteristics

Mitochondria are bounded by a double membrane. Each of these membranes is a phospholipid bilayer with embedded proteins. The outermost membrane is smooth while the inner membrane has many folds. These folds are called cristae. The folds enhance the "productivity" of cellular respiration by increasing the available surface area.

The double membranes divide the mitochondrion into two distinct parts: the intermembrane space and the mitochondrial matrix. The intermembrane space is the narrow part between the two membranes while the mitochondrial matrix is the part enclosed by the innermost membrane. Several of the steps in cellular respiration occur in the matrix due to its high concentration of enzymes.

Mitochondria are semi-autonomous in that they are only partially dependent on the cell to replicate and grow. They have their own DNA, ribosomes and can make their own proteins. Similar to bacteria, mitochondria have circular DNA and replicate by a reproductive process called fission.
<urn:uuid:725b19ff-601b-4c5b-9e4d-2293b902f895>
4.0625
351
Knowledge Article
Science & Tech.
31.740185
883
February 2012 La Niña Drought Tracker
February 08, 2012 / Vol. 2 / Issue 3 / Drought Tracker / A Publication by CLIMAS

After a wet December, more typical, dry La Niña conditions returned in January. Across Arizona and New Mexico precipitation generally was less than 50 percent of average, with large swaths of both states experiencing less than 25 percent of average (top figure). Most of the West also experienced scant rain and snow, including the mountains of the Upper Colorado River Basin, where about 70 percent of the water in the Colorado River originates. In many La Niña winters, the impacts of dry conditions are minimized by average or above-average snow in these mountains, which was the case last winter. This year, however, storms have been pushed farther north than typical by a dome of high pressure off the northwestern coast. The Pacific Northwest, for example, which typically bears the brunt of winter storms during La Niña, was exceptionally dry for most of December and January.

Warm conditions also accompanied January's scarce precipitation. January temperatures were between 2 and 6 degrees F above average (Supplemental Figure 1), which helped drive a precipitous decline in mountain snow. Most of the country also experienced unseasonably mild temperatures, and many scientists point to the Arctic Oscillation (AO) as part of the cause. The AO describes changes in surface pressure in and around the Arctic (Supplemental Figure 2) that intensify or slacken the winds circling the polar regions. In the positive phase of the AO, fierce winds prevent the frigid air from flowing south, while the reverse occurs during the negative phase. Up until mid-January this winter, the AO was positive (Supplemental Figure 3). Historically, the confluence of a positive AO and La Niña tends to bring warmer conditions to the Southwest (Supplemental Figure 4), jibing with temperatures in the region in the past month. The AO recently switched to negative and may help bring colder conditions in coming weeks; the AO was negative during February 2011, when several cold snaps froze the region.

Drought conditions are still widespread and extend into Mexico (Supplemental Figure 5). The soggy December spurred only minimal drought improvements because wet conditions did not persist. With a recent return of dry weather, moderate drought expanded in Arizona by about 13 percent since January 3, most notably in central Arizona (bottom figure). Abnormally dry conditions or a more severe drought category currently cover more than 92 percent of both Arizona and New Mexico. Forecasts also suggest La Niña will continue through the February–April period (Supplemental Figure 6), likely bringing more dry weather.

Source: Natural Resources Conservation Service

- The amount of water contained in the snowpack, or snow water equivalent (SWE), was largely below average in Arizona and New Mexico on February 6 (left); SWE in southern mountains dropped by more than 50 percent from one month ago.
- Winter storms were few and far between in the Upper Colorado and Rio Grande basins in January. As of February 8, SWE in these basins was less than 80 percent of average (Supplemental Figure 7).
- Early streamflow forecasts suggest only a 50 percent chance that the April–June flow into Lake Powell will be above 64 percent of average (Supplemental Figure 8); streamflow forecasts progressively become more accurate as the winter advances.
- The precipitation outlook for February–April calls for increased chances for below-average precipitation in all of Arizona and New Mexico (right). Odds for below-average precipitation are 50–60 percent in the southern tier of Arizona and New Mexico (right). There is greater than a 40 percent chance for below-average precipitation in all of Arizona and New Mexico for the February–April period. - The February–April outlook calls for increased odds of above-average temperature in all of Arizona and New Mexico; odds for above-average temperatures are greater than 40 percent in all of New Mexico and in eastern Arizona (Supplemental Figure 9). La Niña conditions were present 16 times between 1950 and 2008. In this period, precipitation during the February–April period was often 0.2–2.7 inches below average in most of Arizona and northern New Mexico; central Arizona experienced the most precipitation deficits (Supplemental Figure 10). Two inches is about 25 percent of the total winter precipitation in many areas. - The Seasonal Drought Outlook calls for drought to persist or intensify in all of the Southwest during the February–April period (Supplemental Figure 11). This forecast is influenced by expectations for below-average precipitation and the continuation of La Niña. - A looping jet stream, which often accompanies La Niña events, combined with a negative Arctic Oscillation that allows cold polar air to waft south, could begin to ferry colder air into the region in coming weeks. - While it is too early to reliably forecast the 2011–2012 winter, it is worth noting that there have been 10 back-to-back La Niña events since 1900. In four of those cases, a La Nina developed for a third consecutive winter, while an El Niño developed in the third winter in the other six cases. ENSO-neutral conditions have never followed a two-year La Niña. - This winter has evolved similarly to the last, as a dry January followed a wet December. However, this January delivered dry conditions to the Upper Colorado and Rio Grande basins (Supplemental Figure 12), which was not the case last winter. - Spring streamflow forecasts in Arizona call for high probabilities for below-average flows in all river basins. In New Mexico, flow in the Rio Grande measured at Otowi Bridge has a 30–50 percent chance of being above average; most other basins have lower odds for above-average flows.
<urn:uuid:4d01727a-1f35-48cc-bf09-97d8ab052898>
2.84375
1,202
Knowledge Article
Science & Tech.
30.820836
884
Nebraska Public Power District (NPPD) has monitored water quality since 1989 and fish populations since 1993 on the Niobrara River in Nebraska in the vicinity of Spencer Hydro during "flushing" or "sluicing" activities. These sluicing activities alter water quality in the river downstream, which can negatively impact fish populations. Higher numbers of fish were sampled in 1995 when compared to 1993 and 1994. Of the 6,187 fish and 22 total species sampled above and below the hydro, six species comprised approximately 93 percent of the total sample. The most common species sampled were sand shiner, Notropis ludibundus (35.4%), red shiner, Cyprinella lutrensis (22.9%), flathead chub, Hybopsis gracilis (14.4%), carpsucker spp., Carpiodes sp. (10.9%), bigmouth shiner, Hybopsis dorsalis (5.1%), and channel catfish, Ictalurus punctatus (4.0%). Operational modifications instituted since 1989, such as opening the flood gates more slowly and dropping the pond at a slower rate, have reduced sluicing impacts, and the hydro structure may not be limiting species diversity to the extent originally thought.
<urn:uuid:942a696e-3c5f-47bb-a446-1ed17907725a>
3.421875
265
Knowledge Article
Science & Tech.
47.09828
885
The number of hurricanes occurring annually on a global basis varies widely from ocean to ocean. Globally, about 80 tropical cyclones occur annually, one-third of which achieve hurricane status. The most active area is the western Pacific Ocean, which contains a wide expanse of warm ocean water. In contrast, the Atlantic Ocean averages about ten storms annually, of which six reach hurricane status. Compared to the Pacific Ocean, the Atlantic is a much smaller area, and therefore supports a smaller expanse of warm ocean water to fuel storms. The Pacific waters also tend to be warmer, and the layer of warm surface waters tends to be deeper than in the Atlantic. The frequency and intensity of hurricanes varies significantly from year to year, and scientists haven’t yet figured out all the reasons for the variability. Hurricanes and El Niño Scientists continue to investigate the interactions between hurricane frequency and El Niño. El Niño is a phenomenon where ocean surface temperatures become warmer than normal in the equatorial East Pacific Ocean. In general, El Niño events are characterized by an increase in hurricane activity in the eastern Pacific and a decrease in activity in the Atlantic, Gulf of Mexico, and the Caribbean Sea. During El Niño years, the wind patterns are aligned in such a way that there is an increase in vertical wind shear (upper level winds) over the Caribbean and Atlantic. The increased wind shear helps to prevent tropical disturbances from developing into hurricanes. Oppositely, in the eastern Pacific, El Niño alters wind patterns in a way that reduces wind shear, contributing to more storms. Hurricanes and Global Warming Since warm ocean waters and warm, moist air fuel storms, theory predicts that global warming should increase the number and intensity of tropical cyclones. As the oceans soak up extra heat from the atmosphere, ocean surface temperatures rise, increasing the extent of warm water that can support a hurricane. Not only should this mean that more hurricanes can form, but increased ocean surface temperatures could also increase a storm’s maximum potential intensity, the strongest a storm can get in ideal conditions. Models based on scientists’ current understanding of hurricanes suggest that if ocean temperatures increased by 2-2.5 degrees, the average intensity of hurricanes would increase by 6 to 10 percent. Since 1970, the average ocean temperature has warmed about half a degree, which means that theoretically, storms could be one to three percent stronger. Such an increase translates to a few knots in wind speed, too small a change to accurately measure. Hurricane wind speeds have historically been measured in increments of five knots, so any increase in intensity that has already occurred as a result of global warming would, in theory, be too small to detect yet. However, in 2005 and 2006, several studies suggested that global warming may be impacting hurricanes more than theory predicts. In an analysis of the historical record, there appeared to be an increase in the number of intense (Category 4 and 5) storms in recent years. Another analysis charted sea surface temperatures and the number of tropical cyclones. It revealed that as sea surface temperatures went up, the number of cyclones went up. Was the increase in sea surface temperatures responsible for the increased number of storms or did some outside factor drive both? The studies triggered many questions. 
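Before turning to those questions, the scaling quoted above is easy to sanity-check. The sketch below simply interpolates the modelled 6 to 10 percent increase per 2-2.5 degrees of warming down to the roughly half-degree of warming observed since 1970; the 130-knot storm used at the end is an arbitrary illustrative value, not a figure from the article, and the assumption of linear scaling is itself a simplification.

```python
# Linear scaling of hurricane intensity with sea-surface warming, using the
# modelled figures quoted above: 2-2.5 degrees of warming -> 6-10 percent
# stronger storms.  Assumes the relationship is roughly linear.

def intensity_increase_percent(warming_deg, sensitivity_pct_per_deg):
    return warming_deg * sensitivity_pct_per_deg

if __name__ == "__main__":
    # Implied sensitivities from the quoted model range (percent per degree).
    low_sens = 6.0 / 2.5    # 2.4 % per degree
    high_sens = 10.0 / 2.0  # 5.0 % per degree

    observed_warming = 0.5  # degrees since 1970, as quoted above
    lo = intensity_increase_percent(observed_warming, low_sens)
    hi = intensity_increase_percent(observed_warming, high_sens)
    print(f"implied intensity increase: {lo:.1f}% to {hi:.1f}%")  # roughly 1-3%

    # For an example 130-knot storm (arbitrary Category 4 value), that is only
    # a few knots -- smaller than the 5-knot increment used in the records.
    example_wind_kt = 130.0
    print(f"wind-speed change: {example_wind_kt * lo / 100:.1f} to "
          f"{example_wind_kt * hi / 100:.1f} knots")
```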
Both theory and the studies suggested that there should be a link between global warming and hurricanes, but the studies showed a much greater increase in storm frequency and intensity than theory predicted. What caused the discrepancy? Is humanity’s current understanding of hurricanes flawed? Can the theory be adjusted to explain why hurricanes would have a stronger reaction to warming than previously predicted? One theory put forth to explain the recent increase in storm intensity and frequency in the Atlantic basin is the multi-decadal oscillation. Storms in the Atlantic may go through a natural cycle of 20-30 years of increased activity followed by a quieter period. The record seems to show such a cycle, with more intense hurricanes in the 1950s and 1960s followed by two decades of relative quiet, and then increased intensity from the mid-1990s to the present. Some scientists argue that this natural cycle may actually be a product of global warming and atmospheric aerosols. In the 1970s and 1980s, aerosol pollution may have “shaded” the Earth, keeping temperatures cooler than they had been in previous decades. This cooling would have suppressed hurricane formation. In the 1990s, global warming may have increased enough to overcome aerosol cooling and allowed hurricane intensity and frequency to climb again. Other scientists argued that the flaw isn’t necessarily in the theory, but in the historical records. Satellite data used to estimate hurricane intensity only goes back to the 1970s for the Atlantic basin, and other basins have a shorter record. A thirty-year record may not be long enough to coax out real trends. Further, satellite technology and the methods used to estimate a storm’s intensity have improved, so a storm that may have been classified a Category 1 or 2 in the 1970s through the mid-1980s would be classified as a much stronger storm today. The change in intensity-predicting methods could skew the record to show fewer intense storms in the 1970s and 1980s than there are today. From the 1940s to the 1970s, hurricane intensity estimates were based on aircraft and ship data. This means that fewer storms were recorded than probably actually occurred. The intensity records may also be skewed because the early flights did not go directly over the eye of the hurricane, but measured winds in safer flying areas farther from the center of the storm. From those measurements, wind speeds at the center of the storm and thus the storm’s intensity were estimated. As a result, many storms may have been stronger than they were estimated to have been. Before the 1940s, intensity estimates were made based on surviving ship’s records. It is likely that any ship at the center of a Category 4 or 5 storm didn’t survive, so the record probably contains fewer big storms than actually occurred. From changes in the methods used to estimate hurricane intensity to spotty ship records, the historical record may well be skewed towards weaker storms, argue many scientists. If all these factors were accounted for, the trend toward greater hurricane frequency and intensity could disappear. Regardless of their position, scientists need a longer and more accurate data record to fully understand the connection between global warming and other factors that may influence hurricane intensity and frequency. A longer, more accurate record will help improve theory and models, and it will amplify or flatten the currently observed trends.
<urn:uuid:0788c2d8-b4c3-485e-875b-d43d7b2f6669>
4.09375
1,333
Knowledge Article
Science & Tech.
35.155389
886
How strong is the strong force?

I bet you think you asked a simple question. The simple answer is that the strength depends on the range over which it is acting. At short distances the strong force is weak and at long distances it is strong. That is completely different from the other three forces, and it arises because the force's transmitters, called gluons, are massless and carry strong-force charge. I hope that you are still interested in the more complicated answer given below, in which I try to explain how this can be so.

The strong force attraction between two protons has a complicated shape which depends on the distance between the protons. The strong force between two protons is partially offset by the repelling electromagnetic forces. The strong force binds the protons with about 25 MeV of energy. The electromagnetic forces repel them with slightly less. The result is that about 1 MeV of energy would be required to split the two protons apart. In the rest of this reply I discuss the fundamental forces in more detail so you can get an idea why the strong force is different from the others.

The four forces of nature are the strong force, the electromagnetic force, the weak force, and the gravitational force. We study the first three (and experience the last) at Fermilab. We are most familiar with gravity and second-most familiar with the electromagnetic force in our daily routine. So I will start by comparing the strength of those two and then show how they compare to the weak and strong forces.

First of all, the strength of a force depends on the distance over which it is acting. For gravity, the force exerted by one object on another drops according to the square of the distance between the two objects. The equation for the force exerted by gravity is:

    F(gravity) = -GMm/r^2

where G is a small constant, M and m are the masses of the two objects, and r is the distance between them. The minus sign merely indicates the force is attractive. We say the "range" of the gravitational force is "unlimited" because it can be exerted over an arbitrarily large distance. It just gets smaller the further the two objects are from each other.

The electromagnetic force has a similar formula. The repulsive force between two electrons is:

    F(EM) = Cee/r^2

where C is a big constant, and e (typed in once for each of the two charges) is the charge of the electron. Notice the strength of the force drops with the distance between the charges in a way identical to gravity. Also, if we were talking about an electron and an anti-electron (which has the opposite charge), then there would be a minus sign indicating the force between opposite charges is attractive.

We can compare the strength of the gravitational force to the electromagnetic force on two electrons by taking the ratio between the two forces. The distance-squared cancels out and we are left with:

    F(gravity)/F(EM) = Gmm/Cee.

I intentionally dropped the minus sign; I will simply remember that the gravitational force between the electrons is attractive and the electromagnetic force between the two electrons is repulsive. Anyway, when I plug in the values for G, m, C, and e, the ratio is 2.4x10^(-43). In words that is pronounced two-point-four times ten to the minus forty-three. That is a very small number. In other words, the gravitational force between two electrons is feeble compared to the electromagnetic force. The reason that you feel the force of gravity, even though it is so weak, is that every atom in the Earth is attracting every one of your atoms, and there are a lot of atoms in both you and the Earth.
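The 2.4x10^(-43) figure is easy to reproduce from tabulated constants, taking C to be the Coulomb constant. A minimal sketch of that calculation:

```python
# Ratio of the gravitational to the electromagnetic force between two
# electrons.  The distance cancels, leaving F(gravity)/F(EM) = G*m*m / (C*e*e).

G = 6.674e-11            # gravitational constant, N m^2 / kg^2
C = 8.988e9              # Coulomb constant, N m^2 / C^2
M_ELECTRON = 9.109e-31   # electron mass, kg
E_CHARGE = 1.602e-19     # elementary charge, C

ratio = (G * M_ELECTRON ** 2) / (C * E_CHARGE ** 2)
print(f"F(gravity) / F(EM) for two electrons ~ {ratio:.1e}")
# Prints roughly 2.4e-43, matching the number quoted above: gravity between
# two electrons is utterly negligible next to their electric repulsion.
```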
The reason you aren't buffeted around by electromagnetic forces is that you have almost the same number of positive charges as negative ones, so you are (essentially) electrically neutral.

The weak force is misnamed. It's thought to be just as strong as the EM force but, unlike the EM force, it's a short-ranged force. In fact, the range is only about 1/100 the size of an atomic nucleus. The weak force is outside the realm of our everyday experience. We study it at Fermilab by using the accelerator to produce the particles which transmit the force. These are real particles called the W-boson and the Z-boson. Because they are very massive, we need a high-energy accelerator to produce them. The large mass of the W-boson and the Z-boson is also the reason the force has a short range. Incidentally, the particle which carries the EM force is called the photon (yes, light). Because photons are massless, the EM force has a long range as I described above.

The weak force and the EM force have been found to be linked at high energy or, equivalently, short range. They both can be described by one set of equations which we call the "electro-weak" theory. This was discovered in 1967-1971 by Steven Weinberg, Sheldon Glashow, and Abdus Salam. They got the Nobel Prize in physics for unifying those forces.

Finally I am ready to talk about the strong force. This is well outside the experience we get in everyday life (not that it doesn't have everyday-life consequences), so I will be a little more long-winded in describing it. Remember that a proton or neutron is composed of three quarks? These quarks have strong charge and are bound together by the strong force. Unlike the case of the EM force, where there is one electric charge and one anti-charge (plus and minus charges), there are three strong-force charges and three anti-charges. We call the strong force charges "red", "blue", and "yellow", and the anti-charges are called "anti-red" and so forth. The particles which transmit the force are called gluons. Gluons are massless, like the photon. But unlike the photon, which is electrically neutral, the gluons carry a strong charge and a different strong anti-charge. A gluon could be "red-anti-blue", for example, and there are eight kinds of gluons. We call the three charges "colors" even though they have nothing to do with how we see.

Because the gluon is massless, at first you might think the range of the strong force is infinite, like the EM force. But if you study the behavior of the strong force, you find that the three quarks in a proton or neutron behave almost as if they were bouncing around freely in a relaxed, elastic spherical container. None of the quarks can escape the container, because when a quark reaches the boundary of the proton or neutron, the force begins to act and gets stronger and stronger the further away that quark gets from the others. That is very different from the other forces, which get weaker at longer distances, and it occurs because the gluons have the color and anti-color charge.

The strong force also acts between protons and neutrons in an atomic nucleus much in the same way that simple chemicals are held together by the electric force. A nucleus such as helium, which has two (positively EM-charged) protons, is stable because the strong force overcomes the electromagnetic forces. The strong force binds the two protons with about 25-35 MeV of energy. The electromagnetic forces try to push the protons apart.
The net result is that approximately 1 million electron-volts of energy are needed to separate the two protons. In contrast, an electron is bound to a proton in a hydrogen atom by only a few electron-volts. By now you know enough to consider the size of the nucleus in comparison to the size of an atom to judge if this is truly a fair comparison! The strong force is, indeed, strong.

We think that if we could study the electroweak and strong forces at high enough energy we would find out they were linked together somehow, like electricity and magnetism are to form EM, and like EM and the weak force are to form the electro-weak force. Such a theory would be called a grand-unified theory. And we also think that it may be possible to include gravity with the other three. Such a theory would be called a super-grand-unified theory, and there is a candidate for that called "superstrings".

So you asked a simple question: "How strong is the strong force?" The answer is that it depends on the range. At short distances it is weak and at long distances it is strong. That effect is completely different from the other three forces and arises because the force's transmitters, called gluons, are massless and carry a strong charge and a different strong anti-charge.

If you want to learn more about particle physics and the work we do at Fermilab, the book "The God Particle" by Leon Lederman and Dick Teresi gives a very good and readable explanation.

last modified 1/11/1999 firstname.lastname@example.org
<urn:uuid:68df6dec-0c9d-44aa-8f93-eb17e7711579>
3.625
1,867
Q&A Forum
Science & Tech.
54.305438
887
USGS Multimedia Gallery
Title: Deep-Sea Cold Water Coral
Description: Fish like this Atlantic Roughy (Hoplostethus occidentalis) congregate near deep-sea corals (background is Lophelia pertusa coral).
Usage: This image is public domain/of free use unless otherwise stated. Please refer to the USGS Copyright section for how to credit the photo.
<urn:uuid:ca29d10c-b682-40e2-ab5b-292c61c3893f>
2.671875
93
Truncated
Science & Tech.
26.965445
888
I have to determine all values of h for which A is invertible, and I really don't know what my first step should be. If anyone could guide me through this, that would be awesome. Here's the matrix: 1 1 0 1 1 0 0 1 0 1 2h + 1 0 1 1 h

You mean, then, the matrix given above. I see two ways to do that. One is to use the fact that a matrix is invertible if and only if its determinant is non-zero. The other is to row-reduce this to triangular form and use the fact that a matrix is invertible if and only if, reduced to triangular form, it has no zeros on its main diagonal. Since a simple way of determining the determinant of a matrix is to reduce it to triangular form, those are essentially the same. That will give you the triangular form. Now you also need to note that:
1) if you "add a multiple of one row to another", the determinant of the matrix does not change;
2) if you "multiply one row by a number", the determinant of the matrix is multiplied by that number;
3) if you "swap two rows", the determinant of the matrix is multiplied by -1.
Since you have not "multiplied one row by a number", the determinant of your original matrix must be the determinant of this matrix: that is, the product of the diagonal entries of the triangular form. The determinant of your original matrix is non-zero if and only if h is non-zero.
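If you want to check this kind of problem by machine, a short SymPy sketch does the whole computation. Note that the matrix below is only a stand-in: the layout of the matrix in the original post did not survive copying, so this 4x4 example containing the parameter h is hypothetical and serves only to illustrate the determinant criterion.

```python
# Illustration of the determinant criterion for invertibility with SymPy.
# The matrix here is hypothetical; substitute the actual matrix from the
# problem once its row layout is known.
import sympy as sp

h = sp.symbols('h')
A = sp.Matrix([
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 2*h + 1, 0],
    [1, 1, h, 0],
])

det = sp.simplify(A.det())
bad_values = sp.solve(sp.Eq(det, 0), h)
print("det(A) =", det)
print("A is NOT invertible exactly for h in", bad_values)
# For every other value of h the matrix is invertible, since a square
# matrix is invertible if and only if its determinant is non-zero.
```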
<urn:uuid:e55dfbfc-3621-4afe-a9fe-90f7b2eedff1>
2.578125
317
Q&A Forum
Science & Tech.
60.877356
889
A bag contains n discs, made up of red and blue colours. Two discs are removed from the bag. If the probability of selecting two discs of the same colour is 1/2, what can you say about the number of discs in the bag?

Let there be r red discs, so P(RB) = (r/n)((n-r)/(n-1)); similarly, P(BR) = ((n-r)/n)(r/(n-1)). Therefore, P(different) = 2r(n-r)/(n(n-1)) = 1/2. This gives the quadratic 4r^2 - 4nr + n^2 - n = 0. Solving, r = (n ± √n)/2. If n is an odd square, √n will be odd, and similarly, when n is an even square, √n will be even. Hence their sum/difference will be even, and divisible by 2. In other words, n being a perfect square is both a sufficient and necessary condition for r to be an integer and the probability of the discs being the same colour to be 1/2.

Prove that n(n+1)/2 (a triangle number) must be square for the probability of the discs being the same colour to be 3/4, and find the smallest n for which this is true. What does this tell us about n and n(n+1)/2 both being square? Can you prove this result directly?
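A few lines of code confirm the algebra numerically. The sketch below checks that perfect-square values of n give an integer r with P(same colour) = 1/2, and includes a brute-force search as a starting point for the 3/4 extension; the search loop and its bounds are arbitrary choices, not part of the original solution.

```python
from fractions import Fraction

def p_same(r, n):
    """Probability that two discs drawn without replacement share a colour."""
    b = n - r
    return Fraction(r * (r - 1) + b * (b - 1), n * (n - 1))

# For n a perfect square, r = (n + sqrt(n)) / 2 is an integer and P(same) = 1/2.
for root in range(2, 8):
    n = root * root
    r = (n + root) // 2
    assert p_same(r, n) == Fraction(1, 2)
    print(f"n = {n:2d}, r = {r:2d}: P(same colour) = {p_same(r, n)}")

# Brute-force starting point for the extension: which small n admit an r
# with P(same colour) = 3/4?  (Loop bounds are arbitrary.)
for n in range(4, 200):
    hits = [r for r in range(1, n) if p_same(r, n) == Fraction(3, 4)]
    if hits:
        print(f"P(same) = 3/4 possible for n = {n} "
              f"(n(n+1)/2 = {n * (n + 1) // 2}), r in {hits}")
```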
<urn:uuid:c329b7b4-cde7-4cf9-96f3-2a0ef6253d5c>
2.9375
304
Q&A Forum
Science & Tech.
94.961703
890
US space agency Nasa has launched two spacecraft that are expected to make the first 3D movies of the Sun. CMEs will typically throw a billion tonnes of matter into space The Stereo mission will study violent eruptions from our parent star known as coronal mass ejections (CMEs). The eruptions create huge clouds of energetic particles that can trigger magnetic storms, disrupting power grids and air and satellite communications. The mission is expected to help researchers forecast magnetic storms - the worst aspects of "space weather". "Coronal mass ejections are a main thrust of solar physics today," said Mike Kaiser, the Stereo project scientist at the US space agency's (Nasa) Goddard Space Flight Center. "With Stereo, we want to understand how CMEs get started and how they move through the Solar System." The mission comprises two spacecraft, lofted on a Delta-2 rocket from Cape Canaveral, Florida. The two near-identical satellites will orbit the Sun, but one of them will move slightly ahead of the other, to provide stereo vision. Technical hitches have delayed previous attempts at launching. Coronal mass ejections erupt when "loops" of solar material lifting off the Sun suddenly snap, hurling a high-temperature (hundreds of thousands of degrees) plasma into space. The plasma is formed of electrons and ions of hydrogen and helium. A CME will contain typically a billion tonnes of matter and move away from the Sun at about 400km/s. Much of the time, these outbursts are directed away from the Earth, but some inevitably come our way. When they do, the particles, and the magnetic fields they carry, can have highly undesirable effects. "When a big storm hits and the conditions are just right, you can get disturbances on power grids and on spacecraft - they are susceptible to high-energy electrons and protons hitting them," Dr Kaiser told BBC News. "These particles are hazardous to astronauts; and even airline companies that fly polar routes are concerned about this because CMEs can black out plane communications, and you can get increased radiation doses on the crew and passengers. "If we know when these storms are going to hit, we can take preventive action." At the moment, solar observatories, because they look at the Sun straight on, have great difficulty in determining the precise direction of a CME. By placing two spacecraft in orbit to look at the Sun-Earth system from two widely spaced locations, scientists will be able look at the storms from the side - to work out very rapidly if a cloud of plasma is going to hit our planet. "In solar physics, we make a remarkable leap in understanding either by producing new instruments that have better resolution, so you can probe deeper into the Sun or see structures you've never seen before; or by going to a different vantage point," said Stereo program scientist Dr Lika Guhathakurta. "This is where Stereo comes in; it is not that its instrumentation is a breakthrough in terms of resolution, but it will see the Sun in all its 3D glory for the first time - all the way from the surface of our star out to the Earth. It's going to be spectacular." The Stereo spacecraft each carry 16 instruments. These include telescopes, to image the Sun at different wavelengths, and technologies that will sample particles in CMEs. The UK has a significant role on the mission, having provided all the camera systems on board the spacecraft. It has also delivered a Heliospheric Imager (HI) for each platform. 
The spacecraft are identical apart from a few structural details This instrument will follow the progress through space of a bubble of plasma by tracing its reflected light. The engineering demands on the British team have been exacting. "The reflected light from these coronal mass ejections is extremely faint," explained Dr Chris Eyles of the University of Birmingham. "It is typically a [100 trillion] times fainter than the direct light from the Sun's disc, so we have to use a sophisticated system of baffles to reject that direct light. "Critical to the HI's operation has been cleanliness of assembly. If we get dust particles, fibres of even hairs on critical surfaces inside the instrument, they would scatter sunlight and destroy the performance of the instrument." The Stereo spacecraft will send their data straight to the US National Oceanic and Atmospheric Administration (Noaa), the agency which makes the space weather forecasts used worldwide by satellite and airline operators. The new information is expected to lengthen the advance warning forecasters are able to give - from the current few hours to a couple of days. With our ever increasing dependence on spacecraft in orbit - for communications and navigation - the Stereo mission comes not a moment too soon. Cleanliness is paramount in the instruments' preparation Earth's magnetic field gives the planet and its inhabitants a good measure of protection, but with space agencies seemingly intent on sending astronauts to the Moon and even to Mars in the next few decades, there is a pressing need for a fuller understanding of the Sun's activity. Moon or Mars bases will have to be carefully designed shelters, and astronauts will need very good advice before deciding to venture too far from such protection. August 1972 saw a solar storm that is legendary at Nasa. It occurred between two Apollo missions, with one crew just returned from the Moon and another preparing for launch. If an astronaut had been on the Moon at the time, they might have received a 400 rem (Roentgen Equivalent Man) radiation dose. Not only would this have caused radiation sickness, but without rapid medical treatment such a sudden dose could have been fatal. Dr Chris Davis from the UK's Rutherford Appleton Laboratory underlined the power of CMEs. "The energy in a CME is typically about 10-to-the-power-of-24 joules. That is the same as a bus hitting a wall at 25mph a billion, billion times. It's 100 times the energy stored in the world's nuclear arsenal," he said. The spacecraft launched on a trajectory that goes past the Moon The lunar swingby will position the spacecraft in widely spaced orbits One will lead the Earth in its orbit, the other will lag behind Over the course of their mission, the twins will continue to separate Their different views will be combined to make 3D movies of CMEs
<urn:uuid:6f2403d8-90af-4141-8e0a-3d439e28dc68>
3.515625
1,334
News Article
Science & Tech.
42.3394
891
Call it the fish version of instant messaging. When a fish is injured, it secretes a compound that makes other fish dart away (as seen in the latter half of the sped up video above, when the red light flashes). The substance, named Schreckstoff (German for "scary stuff"), protects the entire community of fish, but no one knew how it worked. Now they do, thanks to an analysis of fish mucus reported today in Current Biology. The key ingredient in Schreckstoff is a sugar called chondroitin sulfate, which is found in abundance in fish skin. When the skin is torn, enzymes break the compound down into sugar fragments that activate an unusual class of sensory neurons known as crypt cells in other fish. And the fish take off.
<urn:uuid:f62ada12-ac3e-4219-927e-689986852153>
2.953125
168
Truncated
Science & Tech.
59.762467
892
viyh writes with coverage on MSNBC of the discovery of ancient microbes fossilized in the gut of a termite. "One hundred million years ago a termite was wounded and its abdomen split open. The resin of a pine tree slowly enveloped its body and the contents of its gut. In what is now the Hukawng Valley in Myanmar, the resin fossilized and was buried until it was chipped out of an amber mine. The resin had seeped into the termite's wound and preserved even the microscopic organisms in its gut. These microbes are the forebears of the microbes that live in the guts of today's termites and help them digest wood. ... The amber preserved the microbes with exquisite detail, including internal features like the nuclei. ... Termites are related to cockroaches and split from them in evolutionary time at about the same time the termite in the amber was trapped."
<urn:uuid:0c0fbe56-5503-4d58-ba33-62129727e0f4>
3.1875
186
News Article
Science & Tech.
52.459685
893
After Higgs Boson, scientists prepare for next quantum leap
February 13th, 2013 in Physics / General Physics

A graphic distributed on July 4, 2012 by CERN in Geneva shows a representation of traces of a proton-proton collision measured in the search for the Higgs boson.

Seven months after its scientists made a landmark discovery that may explain the mysteries of mass, Europe's top physics lab will take a break from smashing invisible particles to recharge for the next leap into the unknown.

From Thursday, the cutting-edge facilities at the European Organisation for Nuclear Research (CERN) will begin winding down, then go offline on Saturday for an 18-month upgrade. A vast underground lab straddling the border between France and Switzerland, CERN's Large Hadron Collider (LHC) was the scene of an extraordinary discovery announced in July 2012. Its scientists said they were 99.9 percent certain they had found the elusive Higgs Boson, an invisible particle without which, theorists say, humans and all the other joined-up atoms in the Universe would not exist.

The upgrade will boost the LHC's energy capacity, essential for CERN to confirm definitively that its boson is the Higgs, and allow it to probe new dimensions such as supersymmetry and dark matter. "The aim is to open the discovery domain," said Frederick Bordry, head of CERN's technology department. "We have what we think is the Higgs, and now we have all the theories about supersymmetry and so on. We need to increase the energy to look at more physics. It's about going into terra incognita (unknown territory)," he told AFP.

Theorised back in 1964, the boson also known as the God Particle carries the name of a British physicist, Peter Higgs. He calculated that a field of bosons could explain a nagging anomaly: Why do some particles have mass while others, such as light, have none? That question was a gaping hole in the Standard Model of particle physics, a conceptual framework for understanding the nuts-and-bolts of the cosmos. One idea is that the Higgs was born when the new Universe cooled after the Big Bang some 14 billion years ago. It is believed to act like a fork dipped in honey and held up in dusty air. Most of the dust particles interact with the honey, acquiring some of its mass to varying degrees, but a few slip through and do not acquire any. With mass comes gravity—and its pulling power brings particles together.

Supersymmetry, meanwhile, is the notion that there are novel particles which are the opposite number of each of the known particle actors in the Standard Model. This may, in turn, explain the existence of dark matter—a hypothetical construct that can only be perceived indirectly via its gravitational pull, yet is thought to make up around 25 percent of the Universe.

At a cost of 6.03 billion Swiss francs (4.9 billion euros, $6.56 billion dollars), the LHC was constructed in a 26.6-kilometre (16.5-mile) circular tunnel originally occupied by its predecessor, the Large Electron Positron (LEP). That had run in cycles of about seven months followed by a five-month shutdown, but the LHC, opened in 2008, has been pushed well beyond. "We've had full operations for three years, 2010, 2011 and 2012," said Bordry.
"Initially we thought we'd have the long shutdown in 2012, but in 2011, with some good results and with the perspective of discovering the boson, we pushed the long shutdown back by a year. But we said that in 2013 we must do it." Unlike the LEP, which was used to accelerate electrons or positrons, the LHC crashes together protons, which are part of the hadron family. "The game is about smashing the particles together to transform this energy into mass. With high energy, they are transformed into new particles and we observe these new particles and try to understand things," Bordry explained. "It's about recreating the first microsecond of the universe, the Big Bang. We are reproducing in a lab the conditions we had at the start of the Big Bang." Over the past three years, CERN has slammed protons together more than six million billion times. Five billion collisions yielded results deemed worthy of further research and data from only 400 threw up data that paved the road to the Higgs Boson. Despite the shutdown, CERN's researchers won't be taking a breather, as they must trawl through a vast mound of data. "I think a year from now, we'll have more information on the data accumulated over the past three years," said Bordry. "Maybe the conclusion will be that we need more data!" Last year, the LHC achieved a collision energy level of eight teraelectron volts, an energy measure used in particle physics—up from seven in 2011. After it comes back online in 2015, the goal is to take that level to 13 or even 14, with the LHC expected to run for three or four years before another shutdown. The net cost of the upgrade is expected to be up to 50 million Swiss francs. CERN's member states are European, but the prestigious organisation has global reach. India, Japan, Russia and the United States participate as observers. (c) 2013 AFP "After Higgs Boson, scientists prepare for next quantum leap." February 13th, 2013. http://phys.org/news/2013-02-higgs-boson-scientists-quantum.html
<urn:uuid:d348472a-a0ad-45b4-ac28-b6f820112b17>
3.1875
1,222
News Article
Science & Tech.
56.197368
894
June 22, 1976. North Atlantic. At 21:13 GMT a pale orange glow behind a bank of towering cumulus to the west was observed. Two minutes later a white disc was observed while the glow from behind the cloud persisted. High probability that this may have been caused by interferometry using 3-dimensional artificial scalar wave Fourier expansions as the interferers. Marine Observer. 47(256), Apr. 1977. p. 66-68.

"Unidentified phenomenon, off Barbados, West Indies." August 22, 1969. West Indies. Luminous area bearing 310 degrees grew in size and rose in altitude, then turned into an arch or crescent. High probability that this may have been caused by interferometry using artificial scalar wave Fourier expansions. Marine Observer. 40(229), July, 1970. p. 107-108.

"Optical phenomenon: Caribbean Sea; Western North Atlantic." Mar. 20, 1969. Caribbean Sea and Western North Atlantic. At 23:15 GMT, a semicircle of bright, milky-white light became visible in the western sky and rapidly expanded upward and outward during the next 10 minutes, dimming as it expanded. High probability that this may have been caused by interferometry using artificial scalar wave Fourier expansions. Marine Observer, 40(227), Jan. 1970. p. 17; p. 17-18.

7B.21 - Electricity
13.06 - Triple Currents of Electricity
14.35 - Teslas 3 6 and 9
16.04 - Nikola Tesla describing what electricity is
16.07 - Electricity is a Polar Exchange
16.10 - Positive Electricity
16.16 - Negative Electricity - Russell
16.17 - Negative Electricity - Tesla
16.29 - Triple Currents of Electricity
Figure 16.04.05 and Figure 16.04.06 - Nikola Tesla and Lord Kelvin
Part 16 - Electricity and Magnetism
Tesla - Electricity from Space
What Electricity Is - Bloomfield Moore
<urn:uuid:5ff0ec8a-893b-4fc8-b51a-53a907c3402b>
2.921875
452
Knowledge Article
Science & Tech.
62.622889
895
Saturn's largest moon, Titan, pictured to the right of the gas giant in the Cassini spacecraft view. Scientists have discovered methane lakes in the tropical areas of Saturn's moon Titan, one of which is about half the size of Utah's Great Salt Lake, with a depth of at least one meter. The longstanding bodies of liquid were detected by NASA's Cassini spacecraft, which has been orbiting Saturn since its arrival at the ringed planet in 2004. It was previously believed that such bodies of liquid only existed at the polar regions of Titan. According to a report published in the journal Nature , the liquid for the lakes could come from an underground aquifer. "An aquifer could explain one of the puzzling questions about the existence of methane, which is continually depleted," said the lead author Caitlin Griffith. "Methane is a progenitor of Titan's organic chemistry, which likely produces interesting molecules like amino acids, the building blocks of life," Griffith noted. The lakes have remained since they were detected by Cassini’s visual and infrared mapping spectrometer in 2004. Only one rainfall has been recorded which shows that the lakes could not be replenished by rain. According to the theories regarding the circulation models of Titan, liquid methane in the moon's equatorial region evaporates and is then carried by wind to the polar regions. Methane is then condensed due to the colder temperatures and forms the polar lakes after it falls to the surface. "We had thought that Titan simply had extensive dunes at the equator and lakes at the poles, but now we know that Titan is more complex than we previously thought," said Linda Spilker, a Cassini project scientist. She further added that, "Cassini still has multiple opportunities to fly by this moon going forward, so we can't wait to see how the details of this story fill out." NASA launched the Cassini spacecraft in 1997 and its mission has been extended several times, most recently until 2017.
<urn:uuid:feb8a43b-30f8-45e2-a4b8-ba618234e936>
3.921875
411
News Article
Science & Tech.
41.223214
896
Google has published a paper (PDF) comparing the performance of four programming languages: C++, its own language Go, Java and Scala. A team at Google created a "simple and compact" benchmark that didn't take advantage of language-specific features. An algorithm was implemented using each language's "idiomatic container classes, looping constructs, and memory/object allocation schemes." However, the paper notes: "While the benchmark itself is simple and compact, it employs many language features, in particular, higher-level data structures (lists, maps, lists and arrays of sets and lists), a few algorithms (union/find, dfs/deep recursion, and loop recognition based on Tarjan), iterations over collection types, some object oriented features, and interesting memory allocation patterns."

Above: Run-time measurements, including a few optimizations.

After the benchmark tests were published within Google, various employees took a stab at optimizing the code for specific languages. The findings:
- C++ provides the best performance by far, but it requires the most extensive language-specific tuning.
- Scala provides the most concise notation and optimization of code complexity.
- The algorithm was simplest to implement in Java, but garbage collection settings make both Java and Scala difficult to benchmark accurately.
- Go offers concise notation and very fast compile time, but is still immature.

The phrase "lies, damn lies and benchmarks" is by now a cliche. Suffice it to say, benchmarks never tell the full story, and there are many factors to consider when choosing a programming language. That said, you may find parts of this paper enlightening, especially with regards to Scala performance.
<urn:uuid:81e0c93e-bacb-49f3-b304-265ce00899c8>
2.65625
334
Personal Blog
Software Dev.
26.796375
897
Herschel Space Observatory
Launch Date: May 14, 2009
Mission Project Home Page - http://herschel.jpl.nasa.gov/

Herschel's infrared image of the Andromeda Galaxy shows rings of dust that trace gaseous reservoirs where new stars are forming, and XMM-Newton's X-ray image shows stars approaching the ends of their lives. Both infrared and X-ray images convey information impossible to collect from the ground because these wavelengths are absorbed by Earth's atmosphere. Credits: ESA/Herschel/PACS/SPIRE/J.Fritz, U.Gent/XMM-Newton/EPIC/W. Pietsch, MPE

The Herschel Space Observatory is a space-based telescope that is studying the light of the Universe in the far-infrared and submillimeter portions of the spectrum. It is revealing new information about the earliest, most distant stars and galaxies, as well as those closer to home in space and time. It is also taking a unique look at our own Solar System.

Herschel is the fourth Cornerstone mission in the European Space Agency's Horizon 2000 program. Ten countries, including the United States, participated in its design and implementation. Launched on May 14, 2009, the mission will operate until the cryostat runs out of helium, expected during the first half of 2013, roughly four years after launch.

Originally called "FIRST," for "Far InfraRed and Submillimeter Telescope," the spacecraft was renamed for Britain's Sir William Herschel, who discovered in 1800 that the spectrum extends beyond visible light into the region we today call "infrared." Herschel's namesake will give scientists their most complete look so far at the large portion of the Universe that radiates in far-infrared and submillimeter wavelengths. With a primary mirror 3.5 meters (approximately 11.5 feet) in diameter, Herschel is the largest infrared telescope sent into space as of its launch date. Its three instruments, HIFI, SPIRE, and PACS, use detectors cooled to temperatures very close to absolute zero (0 kelvin), enabling Herschel to be the first spacecraft to observe across the full 60-670 micron range.

The far-infrared and submillimeter wavelengths at which Herschel observes are considerably longer than the familiar rainbow of colors that the human eye can perceive. Yet this is a critically important portion of the spectrum to scientists because it is the frequency range at which a large part of the universe radiates. Much of the Universe consists of gas and dust that is far too cold to radiate in visible light or at shorter wavelengths such as x-rays. However, even at temperatures well below the most frigid spot on Earth, they do radiate at far-infrared and submillimeter wavelengths. Stars and other cosmic objects that are hot enough to shine at optical wavelengths are often hidden behind vast dust clouds that absorb the visible light and re-radiate it in the far-infrared and submillimeter range.

Last updated: October 26, 2012

- ESA Herschel Website - http://www.esa.int/SPECIALS/Herschel/index.html
- More about Herschel - http://www.nasa.gov/mission_pages/herschel/index.html
- Science@ESA - ISO/Herschel video - http://astronomy2009.esa.int/science-e/www/object/index.cfm?fobjectid=44698&fattributeid=885
<urn:uuid:1ce46b18-ab5f-48ab-aacd-8ef745e71dd4>
3.671875
768
Knowledge Article
Science & Tech.
50.385784
898
One winter morning about two years ago, marine mammal scientist Anton van Helden was driving hell-for-leather up to the far north of New Zealand. A thirteen-and-a-half hour haul would take him to the beach where the first pygmy killer whale to strand itself in New Zealand was lying, and he wasn’t going to miss the chance to collect the skeleton and perhaps tissue samples for the Museum of New Zealand, where he worked. Partway through the drive, he got a call. Two beaked whales had been found dead at a place called Opape Beach. “They sounded for all the world like Gray’s beaked whales,” he recalls, a species that washes up relatively frequently on New Zealand’s shores, which sometimes see several hundred whale beachings per year thanks in part to the nation’s long coastline. “Maybe you can bury them,” he said to the conservation department employee at the other end of the line, “and we’ll have a look later, just to be sure.” Van Helden gave the matter little thought until a few months had passed and his phone rang one morning before he had even gotten out of bed. It was Rochelle Constantine, a marine biologist at the University of Auckland, and her graduate student Kirsten Thompson, who had conducted routine DNA analyses on the beached whales. “I hope you’re sitting down,” Constantine said. Those animals stranded in December were not Gray’s. They were instead a pair of spade-toothed beaked whales. It was a name to make a certain kind of scientist weak in the knees: the most elusive species of whale in the world, known only from several bone fragments washed up over the course of 140 years. It had never been seen in the flesh before. Van Helden looked up at the ceiling and swore. (Listen: Beluga Whale Mimics Human Sound) There are a lot of reasons to care about seeing the remains of so elusive a species — beyond simply the thrill of the chase and the brass-ring quality of an actual discovery. Humans have made a hash of the oceans — depleting fish stocks, slaughtering dolphins and whales, pouring pollutants into coastal waters. Part of the reason we are so cavalier is that so much of the ocean is invisible to us on a day-to-day basis; if you don’t know what’s there, you don’t know what you’re destroying. The spade-toothed beak whale, which was written up in the new issue of Current Biology, is a lesson both in the diversity of the fragile oceans and the painstaking sleuthing that is required to study it. After van Helden received his early-morning call, he and his colleagues got to work. They gathered all the information they could about the beached pair, including measurements and a set of photographs taken at the scene of the beaching. They worked with the Whakatohea Iwi, the Maori tribe in the area where the whales had been found, and the New Zealand Department of Conservation, to get the skeletons exhumed, and Van Helden produced a detailed anatomical illustration of the animal, which also appears in Current Biology. The pair on the beach were an adult female, measuring just over 17 ft. (5.3 m), and a juvenile male, 11.5 ft. (3.5 m) from beak to tail fin. They lack the species’ signature protruding teeth, which occur only in adult males. The shape and coloration of both specimens are subtly different from other beaked whales — subtle enough, in fact, that they would be easy to confuse with the more commonly seen Gray’s. 
(Video: Dolphins Chased By Killer Whale) What allowed the team to make a firm identification was the genetic information collected from the three bones that until now have been our only evidence of the spade-toothed’s existence. The first sign of the animal was a lower jaw found on New Zealand’s Chatham Islands in 1872, bearing two jutting, triangular teeth. Later on, two skulls without lower jaws, one found in 1950 in New Zealand, the other found all the way across the Pacific on Robinson Crusoe Island in 1986, were proven by DNA analysis to be from the same distinct species. But while scientists made educated guesses about what an intact spade-toothed would look like, extrapolating from its relatives could take them only so far. That’s why this find had the team so excited. “It was the first time ever that anybody had ever had even a hint of what these things looked like,” van Helden says. The new discovery is a big step forward for scientists interested in beaked whales, but there is still much to learn about both this species and its close relatives. Thompson is working on a project that uses genetic information from beached beaked whales to glean insight into the elusive creatures’ familial relationships. And seeing the animal alive in the wild remains a tantalizing goal of many scientists. Scott Baker, a marine biologist at Oregon State University and a co-author of the paper who has studied whales and dolphins for 30 years, saw his first live, open-ocean beaked whale just last month, in Samoa. The sighting lasted about 4 seconds before the animal dove — too brief to tell if it was a spade-toothed. “Their environment is very remote,” he says. “It’s deep water, and they’re submerged for maybe 96% of their lives.” It’s one of the sad ironies of the marine biologist’s work that the only way to get a more lingering look at animals like these is when they wash up on beaches — like ornithologists studying birds that are invisible until they slam into a window. “Here we have an animal which is over 5 meters long, the size of a big car, that has to come to the surface to breathe, and yet no one has ever seen one alive,” says van Helden. “It typifies how little we know about the ocean.” The new find sheds at least a tiny bit of light on that often dark world, reminding us of an environment we have the power to destroy — and the responsibility to preserve.
<urn:uuid:859bf0af-c2f3-4a82-88d1-775e7aebd9fd>
3.078125
1,340
News Article
Science & Tech.
53.274264
899