Dataset columns:
text: large_string, lengths 148 to 17k
id: large_string, lengths 47 to 47
score: float64, 2.69 to 5.31
tokens: int64, 36 to 7.79k
format: large_string, 13 classes
topic: large_string, 2 classes
fr_ease: float64, 20 to 157
We say one integer divides another if it does so evenly, that is, with a remainder of zero (we sometimes say "with no remainder," but that is not technically correct). More formally, mathematicians write: if a and b are integers (with a not zero), we say a divides b if there is an integer c such that b = ac. We use this concept enough that it has its own symbols: a | b means a divides b, and a ∤ b means a does not divide b. The integers that divide a are called the divisors of a. You might want to try your hand at proving the following basic properties, which hold for all integers a, b, c and d:
- a | 0, 1 | a, a | a.
- a | 1 if and only if a = ±1.
- If a | b and c | d, then ac | bd.
- If a | b and b | c, then a | c.
- a | b and b | a if and only if a = ±b.
- If a | b and b is not zero, then |a| ≤ |b|.
- If a | b and a | c, then a | (bx + cy) for all integers x and y.
Finally, suppose p is a prime and k is a positive integer. A notation that is quickly gaining acceptance is to write p^k || a to indicate that p^k divides a, but p^(k+1) does not. See Also: GCD, Prime, RelativelyPrime
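These definitions translate directly into code. A minimal Python sketch (the function names are mine, for illustration only) that spot-checks a few of the listed properties and the p^k || a notation:

```python
def divides(a, b):
    """True if a | b, i.e. b = a*c for some integer c (requires a != 0)."""
    return b % a == 0

def exactly_divides(p, k, a):
    """True if p**k || a: p**k divides a but p**(k+1) does not."""
    return a % p**k == 0 and a % p**(k + 1) != 0

# Spot-check some of the properties listed above:
assert divides(3, 12) and divides(3, 9)
assert divides(3, 12 * 5 + 9 * (-7))      # a | b and a | c  =>  a | (bx + cy)
assert divides(4, 12) and divides(12, 60) and divides(4, 60)  # transitivity
assert exactly_divides(2, 3, 40)          # 2^3 || 40: 8 divides 40, 16 does not
```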
<urn:uuid:f2aaad95-8881-45f1-b28d-8e4979d8963f>
4.59375
342
Structured Data
Science & Tech.
84.794631
U.S. Geological Survey Open-File Report 2013–1026 Though volcanic eruptions are comparatively rare in the American Southwest, the States of Arizona, Colorado, New Mexico, Nevada, and Utah host Holocene volcanic eruption deposits and are vulnerable to future volcanic activity. Compared with other parts of the western United States, relatively little research has focused on this area, and eruption probabilities are poorly constrained. Monitoring infrastructure consists of a variety of local seismic networks and "backbone" geodetic networks, with little integration. Emergency response planning for volcanic unrest has received little attention from either Federal or State agencies. On October 18–20, 2012, 90 people met at the U.S. Geological Survey campus in Flagstaff, Arizona, providing an opportunity for volcanologists, land managers, and emergency responders to meet, converse, and begin to plan protocols for any future activity. Geologists contributed data on recent findings of eruptive ages, eruption probabilities, and hazard extents (plume heights, ash dispersal). Geophysicists discussed evidence for magma intrusions from seismic, geodetic, and other geophysical techniques. Network operators publicized their recent work and the relevance of their equipment to volcanic regions. Land managers and emergency responders shared their experiences with emergency planning for earthquakes. The meeting was organized out of the recognition that little attention had been paid to planning for or mitigation of volcanic hazards in the American Southwest. Moreover, few geological meetings have hosted a session specifically devoted to this topic. This volume represents one official outcome of the meeting—a collection of abstracts related to talks and poster presentations shared during the first two days of the meeting. In addition, this report includes the meeting agenda as a record of the proceedings. One additional intended outcome will be greater discussion and coordination among emergency responders, geologists, geophysicists, and land managers regarding geologic hazards in the Southwest. First posted January 25, 2013. Volcano Science Center - Menlo Park, U.S. Geological Survey, 345 Middlefield Road, MS 910, Menlo Park, CA 94025. This report is presented in Portable Document Format (PDF); the latest version of Adobe Reader or similar software is required to view it. Lowenstern, Jacob B., ed., 2013, Abstracts for the October 2012 meeting on Volcanism in the American Southwest, Flagstaff, Arizona: U.S. Geological Survey Open-File Report 2013–1026, 39 p. (Available at http://pubs.usgs.gov/of/2013/1026/.) Contents: List of Organizers; List of Attendees; List of Abstracts.
<urn:uuid:0fcab570-83d9-4608-a4d8-03044dd2655a>
2.875
555
Academic Writing
Science & Tech.
27.441031
Evolving Dispersal: Where to Go Next? Habitat destruction and global climate change are two major threats to the persistence of ecosystems. The probability that a species survives such changes depends on its ability to track environmental shifts, either by moving between patches of habitat or by rapidly adapting to local conditions. This explains why the evolution of dispersal has become an integrative topic of paramount importance in evolutionary and behavioral ecology, as demonstrated by a recent conference*. A wide panel of researchers was present at this meeting, highlighting the recent major advances and the most promising lines of future research. Ferriere, Regis; Belthoff, James R.; Olivieri, Isabelle; and Krackow, Sven. (2000). "Evolving Dispersal: Where to Go Next?". Trends in Ecology & Evolution, 15(1), 5-7. http://dx.doi.org/10.1016/S0169-5347(99)01757-7
<urn:uuid:8e7cbd6c-dcd3-4071-9e6e-c32eb15d1bb3>
3
203
Academic Writing
Science & Tech.
42.522938
Sun-Moon-Earth Magnetic Relationship For most people, the moon is a lump of barren rock that provides us with moonlight and is the cause of tides. Along with that, we know that gravity is less on the moon, allowing astronauts to take floating steps. Few people know that our moon has a magnetic field, just as the Sun and Earth have – meaning it isn't as dead as it appears. These snippets are from Wikipedia, which states that the information is still speculative: - Roughly once every lunar orbit, the Moon passes through Earth's magnetotail for approximately 6 days around full moon. - Interaction with the plasma sheet causes the Moon's surface to become negatively charged. - The moon's dayside is lightly charged, while the nightside is strongly charged. The relationship is interesting. We have the Sun, which has a magnetic reversal every 11 years or so, and in between cycles the magnetic field varies quite a lot. We have the Earth with its wandering magnetic poles and various magnetic anomalies. And our Moon is dynamic, varying between day/night sides and whether it is passing through our magnetosphere. These variations in all three bodies allow for change to occur, and it has been suggested recently that there is a possible connection between eclipses and earthquakes. The combination of gravitational and magnetic forces, when all three bodies are in an alignment (an eclipse), currently seems to make a difference. If this trend is growing, then all sorts of earth changes could be heading our way, especially around eclipses. A possible extra dimension is electricity. While I admire the large body of serious discussions regarding the Electric Universe (see the work of McCanney, Thornhill and Talbot), I have yet to spend enough time studying it to fully grasp the concepts. But it feels like a solid alternative theory of how the mechanics of the universe work. And it could be that the relationship between the Sun/Moon/Earth involves magnetic fields, gravitational fields and electrical forces. James McCanney said: The [New] Moon moves in front of Earth, breaks that electrical flow [between the sun and Earth], and then moves out of the way. It gives us tremendous bombardment after that Moon moves out of the way, the first and second day after the New Moon. That's the condition that has been identified as being one of the leading causes of kicking off major hurricanes and storms. What it does is: The Moon is interacting with the solar electric field. It's that CHANGE which causes the storms, and causes the environment around Earth to change, and thus affects Earth weather. The proof could be seen soon. On June 1st and July 1st we will have solar eclipses. Splitting them in two will be a lunar eclipse on June 15. There are so many factors in play that I cannot be certain of increased earthquakes during that month, but I think it is a strong possibility. If there are 7.0+ earthquakes at that time, plus possibly volcanic eruptions and cyclones, then I will be suggesting that earth changes are here, and that their source has been determined.
<urn:uuid:2ec86904-b3f2-4437-846d-e7ebd9121a33>
3.28125
651
Personal Blog
Science & Tech.
52.581745
Science Fair Project Encyclopedia The gram or gramme, symbol g, is a unit of mass, and is defined in the SI system of units as one one-thousandth of a kilogram (i.e., 1×10⁻³ kg). The kilogram, in turn, is defined as the mass of a specific platinum-iridium cylinder maintained at the International Bureau of Weights and Measures in Paris, France. Although the gram is not an SI base unit, it is derived from the kilogram, which is a base unit. However, the gram is a base unit of the older cgs system of measurement, a system which is no longer widely used. Regardless, the gram is an essential measurement in scientific endeavors worldwide. A gram was originally defined as the weight of one cubic centimeter of water at its densest, which occurs at a temperature near four degrees Celsius. However, its definition was changed to the mass of a metal artifact in the eighteenth century. The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License.
<urn:uuid:f7c90447-d8f7-4d1f-867c-c99c25beb639>
3.78125
233
Knowledge Article
Science & Tech.
50.631407
The background image for these pages is created from satellite-derived free air gravity anomalies (gray-scale) and color contours of the Earth's geomagnetic inclination, declination and field strength. Satellite-derived free air gravity anomalies Ocean surfaces represent a gravitational equipotential surface. Free air gravity anomalies are then derived by calculating the rate of change in elevation of this surface. The area of the image is the central Atlantic Ocean with the Lesser Antilles Islands and Venezuela located to the west, and Africa to the east. The Mid-Atlantic Ridge (short north-south segments) and related transform faults and fracture zones (generally east-west) are well defined through the center of the area. The Earth's gravity field is monopolar, so it always points down towards the Earth's center. But the magnetic field is dipolar, which means that the direction of the field changes with geographic position. At the geomagnetic poles the field is vertical, or perpendicular to the Earth's surface. At the geomagnetic equator it is horizontal, or parallel to the Earth's surface, and oriented north-south. Geomagnetic inclination and declination describe the magnetic field vector with respect to geographic location. Geomagnetic field strength generally is strongest near the poles and weakest over the equator. Below is an "unwashed" version of the background image. - Geomagnetic inclination (10° red contours) increases from -20° in the south to 50° in the north. - Geomagnetic declination (5° blue contours) decreases to -15° through the center of the area and increases westward and eastward to -10° and -5° respectively. - Geomagnetic field strength (5,000 nT magenta contours) increases from 30,000 nT in the south to 45,000 nT in the north.
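The vertical-at-the-poles, horizontal-at-the-equator pattern described above is captured by the standard dipole approximation, in which inclination I relates to magnetic latitude by tan(I) = 2 tan(latitude). A minimal Python sketch (the latitudes below are illustrative, not taken from the image):

```python
import math

def dipole_inclination(mag_lat_deg):
    """Inclination I of a geocentric axial dipole field, from
    tan(I) = 2 * tan(magnetic latitude). The real field, like the
    contours described above, deviates from this ideal."""
    return math.degrees(math.atan(2.0 * math.tan(math.radians(mag_lat_deg))))

for lat in (-20, 0, 30, 60, 90):
    print(f"magnetic latitude {lat:4d} deg -> inclination {dipole_inclination(lat):6.1f} deg")
```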
<urn:uuid:064d0f7b-17ea-405f-9bac-b72fc68bae26>
3.53125
426
Knowledge Article
Science & Tech.
42.062592
Young sharks learn from friends (LiveScience) Sharks might be able to learn new skills just by watching their friends' behavior, a new study finds. In experiments at the Bimini Biological Field Station in the Bahamas, a group of researchers corralled 18 juvenile lemon sharks in a large holding pen and trained some to learn a reward-based task. If the sharks entered a certain area of the pen -- called the indicator zone -- a target would be exposed on the other side of the pen. Then, if the sharks swam to the target and bumped it, they were given a piece of fish. In the next phase of the experiments, some untrained sharks were paired with those that had learned how to get the reward, while another naive set was paired with sharks that had not learned the task. The researchers then tested to see if the inexperienced sharks had picked up their peers' behavior. They made the task slightly easier, exposing the fish reward once the sharks entered the indicator zone. The researchers found that the sharks that were paired with trained peers completed the task more quickly and successfully than those that had inexperienced partners. The researchers, who detailed their findings in an Aug. 30 paper in the journal Animal Cognition, said the results indicate that the sharks -- like some primates, birds, insects and other animals -- are capable of using social information to learn about their environment, which could affect how they find food, travel and avoid predators. "The general perception of sharks as solitary, mindless, feeding machines with pea-brains couldn't be further from the truth," researcher Culum Brown, a professor at Australia's Macquarie University, said in a statement. Brown added that he hopes the study will burnish sharks' reputation and help change public perception and conservation attitudes towards sharks. Copyright 2012 LiveScience, a TechMediaNetwork company. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.
<urn:uuid:5998dd6f-01d3-4995-9fc5-41c145eae529>
3.640625
522
Truncated
Science & Tech.
38.262813
Could paint particles protect the planet? New geoengineering concept for a climate change 'insurance policy'. Five tethered balloons would loft 1.5 million tonnes of titanium dioxide particles into the stratosphere each year. The balloon size is far larger than any launched to date, to avoid 'blow over' from the fierce winds that the tether will experience 10 km above the Earth. The cost of the technology is significantly cheaper than other proposed stratospheric particle injection systems such as aircraft, artillery, and even tall towers. 15-05-2012: Dispersing fine (sub-micron) light-scattering particles into the upper atmosphere could help to combat climate change, suggests a former UK government advisor and chemical engineer. The technology concept, developed in the UK and first revealed in this month's tce magazine, advocates dispersing benign titanium dioxide particles, as used in paint, inks and sunscreens, into the stratosphere to deflect the sun's rays. In a tce webinar, Peter Davidson, a Chartered Chemical Engineer, Fellow of the Institution of Chemical Engineers (IChemE) and the Royal Academy of Engineering, and a former senior innovation advisor to a number of government departments, will call for this geoengineering concept to be researched as an insurance policy to cope with possible catastrophic effects of global warming if we don't manage to reduce CO2 emissions fast enough. "While it's essential that we work to reduce carbon dioxide emissions now, it would be wise to have a well-researched emergency system in reserve as a Plan B," says Davidson. The idea may sound like science fiction; but the concept in fact mimics the earth-cooling effects of large volcanic eruptions, which occur several times a century. When Mount Pinatubo erupted in the Philippines in 1991, it caused temperatures to drop by around 0.5 °C around the globe for two years, ending most talk of global warming during this period. The eruption threw 20 million tons of sulphur dioxide into the stratosphere, forming a fine mist of sulphuric acid particles that spread over the globe in a matter of months. As the size of volcanic aerosol particles is similar to the wavelength of sunlight, they scattered a small proportion of the light (~1%), and hence its heat, back into space. The Earth cooled. Adding sulphuric acid to the stratosphere degrades the ozone layer and may cause regional changes in precipitation. We need a benign but similarly sized particle; Davidson suggests titanium dioxide (TiO2), mankind's most commonly used pigment. It is stable in air, non-toxic and seven times more effective at scattering light than sulphuric acid. Titanium is abundant in the earth's crust, and five million tonnes of pigment are produced each year, so supply appears feasible. If you are reading this on a printed page, the ink and the paper probably both have a TiO2 pigment in them. With a candidate particle identified, the next challenge is devising a system to effectively and economically lift and disperse millions of tons of particles some 20 km (~65,000 feet) up into the stratosphere, so they stay up for a couple of years and do not immediately get rained out.
Davidson says: "The impact of global warming is predicted to be most severe on the world's poorest peoples, both because of their lack of resources and because of where they happen to be living. I would hope we could ensure that these peoples have a stake in decision-making and the opportunity to have their voice heard, alongside the richer countries, and appropriate NGOs (for example environmentalists), as well as other bodies. "Ideally an independent charitable trust funded by a variety of stakeholders from around the world would research not only the technology but suitable governance, legal and ethical frameworks," adds Davidson. The total capital cost of the balloon, tethers, ultra-high-pressure pumps, and the production and transport of the particles is estimated to be £500m, plus £600m in annual operating costs, in a paper to be published by the Royal Society. These costs are perhaps thirty times lower than the next best technologies considered, such as large numbers of very sophisticated jet aircraft, and do not have the same carbon footprint. "Space mirrors on the scale needed and 20 km tall towers are likely to be for the 22nd century, not this one." Very approximate estimates are that we'd need to disperse over a million tonnes of titanium dioxide per year to keep planetary temperatures constant if CO2 levels in the atmosphere double. If such an insurance policy were needed, we would have to do this for 50 to 150 years. Ocean acidification would be a worry, but this might be still worse if such temperature control did not keep methane emissions from melting arctic tundra or seas under control. At current prices, supplying these particles would cost around £3bn per year, or around 50p per person per year. Davidson says: "Creating a suitable insurance policy for climate remediation is a vital task. It will not do to underestimate the challenges. Much research and work on governance is still needed, but a vision is now on offer for debate, and development where potential means of solving some of the most difficult technical challenges have been identified. It would be short-sighted to put off research of such a safety device – like trying to develop a life-jacket when you're swept out to sea and struggling in the water."
<urn:uuid:031801b2-c312-468a-b281-9286e079b2fc>
2.921875
1,436
Content Listing
Science & Tech.
31.837476
Solar cells are capable of producing a reasonable amount of electric energy from sunlight. This has stimulated the imagination of many creative people around the world. One of the most attractive uses for photoelectricity – electricity produced by light – is vehicles powered by solar energy. So far, tests on airplanes, boats and cars have been performed with mixed results. A light airplane whose entire roof was covered by solar cells, producing enough energy to drive an electric engine, was able to fly over the English Channel. Each year in Australia an unusual race is held in which only solar-powered cars run. The participants have to cross the Australian continent from one side to the other, and most of them don't finish. The average speed achieved by these cars is between 50 and 80 km per hour. The disadvantage of vehicles directly powered by photovoltaic cells is that they only work well in places with exceptionally clear and sunny weather. As usually happens in the high-technology business, Japan is one of the pioneers in applying solar energy to transportation. The main drawback of using solar energy in vehicles is the need for heavy accumulators or batteries, which occupy too much space. They are required to store energy for the vehicle when no sunlight is available; the batteries charge when solar light is abundant.
A possible dream
In the 1960s, experts around the world worked on a project that could pass for science fiction. It involved launching an enormous space station to convert solar energy into electricity and send it to Earth in the form of microwaves. The space station would have three main parts: a solar collector 6 km long by 2.5 km wide, built of mirrors and photovoltaic cells; a power plant; and a microwave transmitter. Despite its cost, an installation of this magnitude could be profitable in the long run, because the energy efficiency of solar cells is higher outside the Earth's atmosphere. The microwaves could be transformed back into electricity on Earth. The microwave-transmitting space station project never went further than paper; many hoped it could have meant the end of fossil fuels. Nevertheless, some day we may have to give these exotic and ambitious methods a chance to obtain efficient energy.
<urn:uuid:414c2c7c-f17a-46c6-8b93-4534a4e93110>
3.734375
468
Personal Blog
Science & Tech.
41.037778
M-Theory And The Higgs Boson Energetic pieces of threads may finally explain all four fundamental forces of nature and our perceived reality of space, time, matter and motion. This is the new physics. The basic elements of this so far purely mathematical concept are so-called 'strings' and 'membranes' — subatomic one-dimensional energy threads and built areas. The mere vibrations of tiny strings and membranes, only about a hundredth of a billionth of a billionth of the size of an atomic nucleus, generate everything: all elements of the periodic system, the vacuum of space and progressive time. Acceptance of this 'theory of everything' relies on the super-symmetry of forces and matter. Particle physicists need proof of super-symmetry, as well as to explain their contemporary model of weakly interacting massive particles (WIMPs), which are currently supposed to form the extensive, galaxy-stabilising dark matter halos that apparently provide the majority of matter in the universe. The announcement by CERN last week that there is a high probability that the new particle they've found is the Higgs boson is an important step toward doing this. Enter the 11th dimension String theories, which emerged in the 1980s, postulate that 10 dimensions exist in nature. Only Einstein's three-dimensional space and one-dimensional time are 'rolled out'; the other six spatial dimensions are 'curled up' and invisible. Varying approaches led to different mathematical solutions and descriptions. Five variants seemed promising, but did not yet produce suitable solutions for all existing elementary particles, space, time, and quantum gravity. Then in 1994, the so-called M-Theory caused a second superstring revolution. It attempts to unify all five previously developed theories, introducing an 11th dimension and a staggering number of mathematical solutions. The M-Theory considers those five set-ups to describe the same thing, but from different perspectives. M-Theory formulates relationships between each of the five previous theories, calling those relationships 'dualities'. Each duality provides a mathematical solution to convert one string theory into another. The 11th dimension is supposed to acquire sufficient energy to infinitely expand. String specialists ponder a 'floating membrane' and consider the existence of our universe along such a membrane. Infinite parallel universes accompany our universe with their own floating membranes. Leakages between those universes lead to a mathematically feasible concept of gravity. One distinctive feature of the M-Theory is the assumed existence of multidimensional spaces within any single point of space and time. Endless string solutions are the result, creating far too many variations to find the suitable ones randomly; but powerful computers may help scientists find feasible results. All elementary particles that have been observed are either fermions or bosons; fermions are supposed to build all known types of matter, and elementary bosons are either photons, W- and Z-bosons, or gluons. Photons carry the forces of the electromagnetic fields. W- and Z-bosons mediate the weak force of radioactive decay and neutrino interactions, and gluons the strong force in the atomic nuclei. A feasible solution for quantum gravity would be necessary to cover all four fundamental forces of nature. The bosons challenge string physicists most; currently they need 26 dimensions for a bosonic string theory, meaning 15 dimensions on top of the M-Theory.
The Higgs quantum field and the Higgs boson play a crucial role in providing proof of super-symmetry because they give elementary particles a mass by spontaneous breaking of electroweak symmetry; the Higgs boson is an excitation of the Higgs quantum background field above its ground state. The basic theories for all elementary particles take getting accustomed to, because each material particle is described as a distinguishable excitation state of basic energy strings and areas with quantum mechanical aspects. The classical observations of nature completely fade in the imaginations of theoretical string physicists. The quantised approach to all forces and energies of nature challenges these scientists from the very beginning. For example, look at a simple electron: like any photon, any electron behaves either as a concentrated particle or as a spreading wave, depending only on the set-up of the experiment. This peculiarity has been called the dualism of wave and particle. Quantum physicists handle this remaining inexplicable contradiction by the superposition of several possible states and conditions. There is only a probability that one of these states and conditions takes place. The whole of possible states is mathematically expressed by so-called 'wave' functions. Any single result of an observation appears accidentally. This way, quantum physics can predict atomic processes with extraordinarily high precision. Rotational space-time symmetry allows circular exchange impacts, for example of a length into time, time into energy density, energy density into time compression and, closing the circle, back into a space length. Einstein described these rotational features in his theory of relativity by energy tensors and rotary functions. This circular exchange chain of physical quantities and states has been proven experimentally, but now needs completion with additional dimensions. Quantum physics enters this picture by innovative time compression, representing the opposite function of time dilation. Cosmology will strongly influence further development of string theory and the theory of everything. We postulate that the accelerating expansion of the universe, explained by dark energy, is being driven by scalar fields. Fields of this kind serve as a description of changing super-symmetries that have their origin in one single type of initial force. These fields determine the development of the hierarchy of today's fundamental forces of nature. Rotational space-time symmetry accommodates the types of scalar fields that are needed to explain the peculiar negative pressure and adiabatic nature of dark energy. It explains the location and nature of the Higgs quantum field as well. The M-Theory may soon culminate in the successful programming of a powerful computer, but only the experimental proofs of super-symmetry and the identification of the circular exchange chain of parameters will open a new chapter in the contemporary standard model of physics. Source: Dr Henryk Frystacki c/o ABC Science.
<urn:uuid:2fb3eef2-bbfc-4999-b26b-66b00418b067>
2.828125
1,303
Personal Blog
Science & Tech.
22.88562
The most important invention in the past two thousand years? In my opinion it is the invention by Otto von Guericke in 1660 of a machine which produced static electricity. This device was the primitive tool which unlocked our understanding and application of electricity. Modern power generation, communication, computation, and almost all of our most important analytic devices stand on the foundation of von Guericke's machine. A long line of basic intellectual formulations from electromagnetism to the bioelectric properties of brain mechanisms owe a debt to this invention. When we discover how the human brain creates the covert models of its own inventions, the structure and dynamics of the brain's own electrical activity will undoubtedly be an essential aspect of the explanation.
<urn:uuid:89b41019-568b-43fb-9bd6-613a58bf773a>
2.796875
144
Knowledge Article
Science & Tech.
26.209946
Contact: Francis Reddy NASA/Goddard Space Flight Center Caption: Swift's UVOT acquired this image of Comet Garradd (C/2009 P1) on April 1, 2012, when the comet was 142 million miles away, or 636 times farther than the moon. Red shows sunlight reflected from the comet's dust; violet shows ultraviolet light produced by hydroxyl (OH), a fragment of water. NGC 2895 is a barred spiral galaxy located 400 million light-years away in the constellation Ursa Major. The UVOT image (outlined) is placed within a wider visible image of the region from the Digital Sky Survey. Credit: NASA/Swift/D. Bodewits (UMD) and S. Immler (GSFC) and DSS/STScI/AURA Usage Restrictions: None Related news release: NASA's Swift monitors departing Comet Garradd
<urn:uuid:8572e676-3194-4f0a-acda-49dd43d5916a>
3.078125
187
Truncated
Science & Tech.
52.818011
Figure 12.3: Latitude-month plot of radiative forcing and model equilibrium response for surface temperature. (a) Radiative forcing (W m⁻²) due to increased sulphate aerosol loading at the time of CO2 doubling. (b) Change in temperature due to the increase in aerosol loading. (c) Change in temperature due to CO2 doubling. Note that the patterns of radiative forcing and temperature response are quite different in (a) and (b), but that the patterns of large-scale temperature responses to different forcings are similar in (b) and (c). The experiments used to compute these fields are described by Reader and Boer (1998).
<urn:uuid:316e5972-b039-4796-bb2e-4805484eff6c>
3.046875
152
Academic Writing
Science & Tech.
40.262647
Alt text / descriptions for slideshow images. SlideShow: Welcome to the Earth Sciences Department webpage. The slideshow consists of 18 images (numbered 0 through 17) showing various geologic and atmospheric events and features.
Slide 0: One of the most impressive geological formations in the eastern United States, the Whaleback Anticline near Shamokin, Pennsylvania, is a superb natural laboratory for geologists and geology students studying the dynamic forces that shaped the planet.
Slide 1: Most people wouldn't think to vacation in eastern Nevada, but you can find air as cool as any Las Vegas casino in the Caves sections of Cathedral Gorge State Park. The Caves aren't true caves but rather narrow canyons eroded from a Pliocene Epoch lake bed.
Slide 2: Wave: Welcome to Oceanography.
Slide 3: Big wave crashing, west coast of California.
Slide 4: 40-foot wave in Hawaii.
Slide 5: Welcome to Geology: volcano and lightning.
Slide 6: Poas Volcano Crater, shown above, is located in Poas Volcano National Park in Costa Rica and contains an acid lake.
Slide 7: California Cavern: how stalactites form.
Slide 8: Globe: Welcome to Geography.
Slide 9: NASA scientists use remote sensing to observe a section of the earth each orbit, and then combine the data from many orbits to recreate the whole earth at once. Sensors in space observe the earth in many different wavelengths of light. Combinations of these images can be used to determine what is growing in each patch of the earth, and even if it is healthy or not. As sensors get better and more sensitive, the size of the smallest patch that can be observed from space gets smaller and smaller - to within a few meters now. Remote sensing data can be used in estimating biomass, soil moisture, changes in elevation, or even animal populations. The photo above, showing lenticular wave clouds seemingly emanating from the snow-covered summit of Mount St. Helens in southwestern Washington, was taken on October 22, 2007. Lenticular clouds are observed when stable air near saturation is forced to flow over mountain ranges, elevated plateaus or high hills. The lifting cools the air to the saturation point, thus forming clouds. Lenticulars appear stationary because they're confined to wave crests, which often migrate quite slowly. Photo taken from the U.S. Forest Service.
Slide 10: Volcano exploding.
Slide 11: Welcome to Meteorology: cloud.
Slide 12: Image of a fabulous solar corona, captured while traveling south from Aberdeen to Glasgow, Scotland, on September 8, 2007. A thick, low-level covering of clouds had just cleared, leaving a thinner veil of mid-level clouds, composed chiefly of water droplets. Minute but very uniform water droplets near the edges of these clouds deflected the Sun's light by the process of diffraction in such a way as to produce the vivid metallic colors.
Slide 13: Waves breaking over a wall on the shore.
Slide 14: This is a photo of Logan Pass in Glacier National Park, Montana. Many of the glaciers in this beautiful park have been noticeably receding, but the effects of the glacial ice that nearly covered this corner of northwestern Montana are quite obvious.
Slide 15: Natural wonders: balancing rocks.
Slide 16: This is a fascinating photo of the "bathtub ring" that serves as a water dipstick for the southwestern U.S. and northern Mexico. The boaters are motoring south on the Overton Arm of Lake Mead.
The white calcium deposit you see is the 65-year-old water line that is considered the lake's normal water level. At this particular spot, the water level is almost 50 feet (15 m) below the lake's capacity. As of October 12, 2007, Lake Mead storage was 48% of capacity.
Slide 17: Glaciers are abundant in south-central Alaska's Prince William Sound. One showcase glacier is Surprise Glacier, shown above. Note its bluish ice formations. Is glacier ice really blue? No, it just appears to be blue because the longer wavelengths of light (reds and yellows) are more readily absorbed by thick ice than are the shorter wavelengths (blues and greens). Thus, the longer light travels in ice, the bluer it appears. In contrast to thick ice, sunlight does not penetrate very far into snow, therefore it appears white. However, when you poke a hole into deep, fresh snow (more than about 0.3 m or 1 ft in depth), bluish light will emerge.
<urn:uuid:1d83f5c1-b8d4-4724-9ced-8589d65a54e6>
3.484375
1,036
Content Listing
Science & Tech.
48.881604
Corals around the world, already threatened by pollution, destructive fishing practices and other problems, are also widely regarded as among the ecosystems likely to be first — and most — threatened with destruction as earth's climate warms. But there is reason to hope, researchers are reporting. The scientists, from Penn State University and elsewhere, have produced new evidence that some algae that live in partnership with corals are resilient to higher ocean temperatures. One species, Symbiodinium trenchi, is particularly abundant – "a generalist organism," the researchers call it, able to live with a variety of coral hosts. Also watch the MicrobeWorld Video episode about the disappearance of coral reefs: http://bit.ly/MWV16coral
<urn:uuid:55b50ed1-9862-45d3-9354-d5b704d43178>
3.765625
166
Truncated
Science & Tech.
21.058789
Early this century, a box with a few wires sticking out and flickering needles that jumped across a chart recorder would probably have impressed the average person. Electric gadgets were newish inventions back then. An uncritical enthusiasm for science was in the air. These are the most charitable explanations for the origin of the lie detector. No doubt William Moulton Marston would have disagreed. For it was Marston who in 1915 claimed to have discovered that blood pressure often goes up when people lie. That finding paved the way for the invention of the polygraph in the 1920s. As for Marston, he went on to invent the comic strip character Wonder Woman, giving her a lasso with special powers that forced villains to tell the truth. And aren't comic strips where the entire pseudoscience of lie detection properly belongs, too? Probably. So why is the US Department of Energy to begin carrying ...
<urn:uuid:87a31a06-d28f-4af8-a1da-859d72dd323d>
3.21875
208
Truncated
Science & Tech.
51.953289
IT'S FRIDAY evening, and you're about to head out of the city. But the traffic news tells you that there's a jam on your normal route. Is it worth taking an alternative? And how far out of your way do you need to go to avoid the snarl-up? If you lived in Duisburg in Germany, you need never wrestle with this kind of dilemma. Instead, you could plan your journey using an up-to-the-minute map of traffic flow for the entire city. These maps are constructed from data taken by vehicle detectors dispersed throughout the city's streets. But there is no way that the entire city could be staked out this way. Data recorded at a few key points provide the input to a real-time on-line computer simulation of the traffic flow, pegging the model to reality. This system, developed by Michael Schreckenberg and his colleagues at the University of Duisburg, ...
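The article names Michael Schreckenberg but is cut off before describing the model. His name is best known from the Nagel-Schreckenberg cellular automaton, the classic minimal traffic-flow simulation that systems like Duisburg's build on; the Python sketch below implements its standard update rule, with the road length, car count, speed limit and dawdling probability all chosen for illustration (they are not from the article):

```python
import random

def nasch_step(road, length=100, v_max=5, p_slow=0.3):
    """One update of the Nagel-Schreckenberg cellular automaton.
    `road` maps cell index -> current speed of the car in that cell;
    the road is a ring of `length` cells."""
    cars = sorted(road)
    new_road = {}
    for i, pos in enumerate(cars):
        v = min(road[pos] + 1, v_max)                     # 1. accelerate
        gap = (cars[(i + 1) % len(cars)] - pos - 1) % length
        v = min(v, gap)                                   # 2. brake to keep a safe gap
        if v > 0 and random.random() < p_slow:            # 3. random dawdling
            v -= 1
        new_road[(pos + v) % length] = v                  # 4. advance
    return new_road

# 20 cars on a 100-cell ring road, all initially stopped.
road = {pos: 0 for pos in random.sample(range(100), 20)}
for _ in range(50):
    road = nasch_step(road)
print(sorted(road.items()))  # (cell, speed) pairs after 50 ticks
```

Even this toy rule reproduces spontaneous "phantom" jams once the car density rises, which is what makes pegging such a model to a few real detectors so effective.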
<urn:uuid:68dd2b32-d7d2-4e3f-8aba-d86d665c0827>
3.25
223
Truncated
Science & Tech.
64.065
THE precursor of life may have learned how to copy itself thanks to simple convection at the bottom of the ocean. Lab experiments reveal how DNA replication could have occurred in tiny pores around undersea vents. One of the initial steps towards life was the first molecule capable of copying itself. In the open ocean of early Earth, strands of DNA and loose nucleotides would have been too diluted for replication to occur. So how did they do it? Inside many undersea hydrothermal vents, magnesium-rich rocks react with sea water. Such reactions create a heat source that could drive miniature convection currents in nearby pores in the rock, claim Christof Mast and Dieter Braun of Ludwig Maximilian University of Munich, Germany. They propose that such convection could concentrate nucleotides, strands of DNA, and polymerase, providing a setting that would promote replication. Sea water inside pores on ...
<urn:uuid:bd54c8cf-9c93-4da9-a154-bbe9f99c88a2>
4.375
210
Truncated
Science & Tech.
42.83328
Explore general as well as scientific information about the movement, chemistry and biology of area surface water environments. Water levels typically follow rainfall patterns during periods of wet weather and drought. From these data, one can get a picture of how recent flood or drought events compare to historical data. Historic range: 80.0 - 80.00 ft; graph(s) unavailable - insufficient data. The Lake Region Classification System is a tool used for grouping lakes based on similarities in physiography, geology, soils, hydrology, water chemistry, vegetation, and climate. It was created from a cooperative effort involving the United States Environmental Protection Agency, the Florida Department of Environmental Protection, and researchers at the University of Florida's Department of Fisheries and Aquatic Sciences. There are a total of 47 Lake Region groups. These are used to provide a framework of the different types of lakes in the state so that management plans can be developed for groups of lakes with similar characteristics. The lake region this lake is located in is: Orlando Ridge (Region 7521). This is an urbanized karst area of low relief, with elevations from 75-120 feet. Phosphatic sands and clayey sand are at a shallow depth. Lakes in this region can be characterized as clear, alkaline, hard-water lakes of moderate mineral content. They are mesotrophic to eutrophic, but it is difficult to distinguish between the effects of urbanization and natural phosphate levels. Lakes here are more phosphatic and green than in the Crescent City/Deland Ridges located to the north, and only slightly more so than in the Apopka Upland located to the west.
<urn:uuid:29d0d123-3186-4905-bad0-46016f47037d>
3.78125
390
Knowledge Article
Science & Tech.
28.763759
Just So You Know Newts are salamanders that live much or all of their adult lives under water, although not all aquatic salamanders are considered newts. Newts are classified in the subfamily Pleurodelinae of the family Salamandridae, and are found in North America, Europe and Asia. Newts metamorphose through three distinct developmental life stages: aquatic larva, terrestrial juvenile (called an eft), and adult. Adult newts have lizard-like bodies and may be either fully aquatic, living permanently in the water, or semi-aquatic, living terrestrially but returning to the water each year to breed.
<urn:uuid:6ec4a1a8-e8d0-433c-90b5-553c3955b498>
3.015625
136
Personal Blog
Science & Tech.
20.0825
Pulsed laser deposition (PLD) is a thin film deposition technique (specifically a physical vapor deposition, PVD) in which a high-power pulsed laser beam is focused inside a vacuum chamber to strike a target of the desired composition. Material is then vaporized from the target and deposited as a thin film on a substrate, such as a silicon wafer facing the target. This process can occur in ultra-high vacuum or in the presence of a background gas, such as the oxygen commonly used when depositing oxides to fully oxygenate the deposited films. While the basic set-up is simple relative to many other deposition techniques, the physical phenomena of laser-target interaction and film growth are quite complex (see Process below). When the laser pulse is absorbed by the target, energy is first converted to electronic excitation and then into thermal, chemical and mechanical energy, resulting in evaporation, ablation, plasma formation and even exfoliation. The ejected species expand into the surrounding vacuum in the form of a plume containing many energetic species, including atoms, molecules, electrons, ions, clusters, particulates and molten globules, before depositing on the typically hot substrate. The detailed mechanisms of PLD are very complex, including the ablation of the target material by the laser irradiation, the development of a plasma plume with highly energetic ions, electrons and neutrals, and the crystalline growth of the film itself on the heated substrate. The process of PLD can generally be divided into four stages: laser ablation of the target material and creation of a plasma; expansion of the plasma plume; deposition of the ablated material on the substrate; and nucleation and growth of the film on the substrate surface. Each of these stages is crucial for the crystallinity, uniformity and stoichiometry of the resulting film. The ablation of the target material upon laser irradiation and the creation of plasma are very complex processes. The removal of atoms from the bulk material is done by vaporization of the bulk at the surface region in a state of non-equilibrium and is caused by a Coulomb explosion. In this, the incident laser pulse penetrates into the surface of the material within the penetration depth. This dimension depends on the laser wavelength and the index of refraction of the target material at the applied laser wavelength, and is typically in the region of 10 nm for most materials. The strong electrical field generated by the laser light is sufficiently strong to remove electrons from the bulk material of the penetrated volume. This process occurs within 10 ps of a ns laser pulse and is caused by non-linear processes such as multiphoton ionization, which are enhanced by microscopic cracks, voids, and nodules at the surface, which increase the electric field. The free electrons oscillate within the electromagnetic field of the laser light and can collide with the atoms of the bulk material, thus transferring some of their energy to the lattice of the target material within the surface region. The surface of the target is then heated up and the material is vaporized. The temperature of the generated plasma plume is typically 10,000 K. In the second stage the material expands as a plasma parallel to the normal vector of the target surface towards the substrate, due to Coulomb repulsion and recoil from the target surface. The spatial distribution of the plume depends on the background pressure inside the PLD chamber. The density of the plume can be described by a cos^n(x) law, with a shape similar to a Gaussian curve.
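To get a feel for what the cos^n(x) law means for plume shape, the short Python sketch below computes how far off the target normal the plume density falls to half its peak; the exponent n depends on material and laser conditions, and the values used here are only illustrative:

```python
import math

def plume_density(theta_deg, n):
    """Relative plume density under the cos^n law quoted above;
    theta is measured from the target-surface normal."""
    return math.cos(math.radians(theta_deg)) ** n

def half_width(n):
    """Angle at which the density falls to half its on-axis value:
    cos(theta)^n = 0.5  =>  theta = acos(0.5**(1/n))."""
    return math.degrees(math.acos(0.5 ** (1.0 / n)))

for n in (1, 4, 8, 14):   # illustrative exponents, not from the text
    print(f"n = {n:2d}: density halves at {half_width(n):4.1f} deg off-axis")
```

Larger n gives a narrower half-width, i.e. a more forward-peaked plume, which is the Gaussian-like shape the text describes.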
The dependency of the plume shape on the background pressure can be described in three stages: at very low, vacuum-like pressures the plume is narrow and forward-directed, with almost no scattering by the background gas; at intermediate pressures the highly energetic ions split from the less energetic species; and at high pressures the expansion of the ablated material becomes more diffusion-like. The most important consequence of increasing the background pressure is the slowing down of the highly energetic species in the expanding plasma plume. It has been shown that particles with kinetic energies around 50 eV can resputter the film already deposited on the substrate. This results in a lower deposition rate and can furthermore result in a change in the stoichiometry of the film. The third stage is important for determining the quality of the deposited films. The highly energetic species ablated from the target bombard the substrate surface and may cause damage, both by sputtering atoms off the surface and by causing defect formation in the deposited film. The species sputtered from the substrate and the particles emitted from the target form a collision region, which serves as a source for the condensation of particles. When the condensation rate is high enough, a thermal equilibrium can be reached, and the film grows on the substrate surface at the expense of the direct flow of ablation particles. The nucleation process and growth of a crystalline film on a substrate depend on several factors, such as the density, energy and ionization degree of the ablated material, and the temperature, roughness and crystalline properties of the substrate. Although PLD has a much lower average deposition rate than other deposition techniques such as Molecular Beam Epitaxy (MBE) and Sputter Deposition (SD), depending on the repetition rate of the laser, it can be demonstrated that the fraction of stable nucleation sites is orders of magnitude higher for PLD than for SD and MBE. This has several implications for the growth mechanism in PLD: the critical nucleus radius is smaller, in the range of one or two atoms. Since the nucleation rate is proportional to the nucleation site density and the rate of impingement, it is much higher in the case of PLD. Furthermore, the high density of nucleation sites increases the smoothness of the deposited film. This is why PLD is such an outstanding method for the growth of thin films. Pulsed laser deposition is only one of many thin film deposition techniques. Other methods include molecular beam epitaxy (MBE), chemical vapor deposition (CVD), and sputter deposition (RF, magnetron, and ion beam). The history of laser-assisted film growth started soon after the technical realization of the first laser in 1960 by Maiman. Smith and Turner utilized a ruby laser to deposit the first thin films in 1965, three years after Breech and Cross studied the laser-vaporization and excitation of atoms from solid surfaces. However, the deposited films were still inferior to those obtained by other techniques such as chemical vapor deposition and molecular beam epitaxy. In the early 1980s, a few research groups (mainly in the former USSR) achieved remarkable results on the manufacturing of thin film structures utilizing laser technology. The breakthrough came in 1987 when Dijkkamp and Venkatesan were able to laser deposit a thin film of YBa2Cu3O7, a high-temperature superconducting material, of higher quality than films deposited with alternative techniques. Since then, the technique of pulsed laser deposition has been utilized to fabricate high-quality crystalline films. The deposition of ceramic oxides, nitride films, metallic multilayers and various superlattices has been demonstrated.
In the 1990s the development of new laser technology, such as lasers with high repetition rates and short pulse durations, made PLD a very competitive tool for the growth of thin, well-defined films with complex stoichiometry. There are many different arrangements for building a deposition chamber for PLD. The target material which is evaporated by the laser is normally found as a rotating disc attached to a support. However, it can also be sintered into a cylindrical rod with rotational motion and a translational up-and-down movement along its axis. This special configuration allows not only the utilization of a synchronized reactive gas pulse but also of a multicomponent target rod, with which films of different multilayers can be created.
<urn:uuid:3194e6ba-6798-4856-893e-4dc9a61a6361>
3.0625
1,522
Knowledge Article
Science & Tech.
25.867321
Source: "Inverse Square Law" This animation, originally created for a KET distance learning physics course, explains the mathematical formula for the Inverse Square Law by demonstrating how the brightness of light changes with the distance from a source in one, two, and three dimensions. This animation can be viewed in segments or as a whole. Have you ever noticed that light from a flashlight seems much brighter when it shines on something nearby than when focused on something far away? Maybe you've also noticed that when you use spray paint the thickness of the coat is related to how close you hold the can to the wall. These effects are due to a physical property known as the Inverse Square Law, which states that the strength of a given physical quantity is inversely proportional to the square of the distance from the source, or I ∝ 1/d² (I = intensity). Inversely proportional means that as distance increases, intensity decreases. The most well-known story about the Inverse Square Law comes from one of the most famous stories in all of science—Sir Isaac Newton and the apple. According to the story, Newton was trying to understand the planetary motion of the heavens when an apple fell from a tree, hit him on the head, and let loose a stream of ideas that would change physics forever. Newton wondered how the gravity that caused the apple to fall was related to the gravity that acted on much more distant objects like the Moon. He made some calculations and found that the Moon was falling toward Earth with an acceleration that was 3,600 times smaller than the apple's acceleration. He knew the Moon was about 60 times as far from Earth's center as the falling apple was. Newton realized the inverse square of the distance was correlated to the change in the effects of gravity: 1/60² = 1/3600. Newton also saw that because he was squaring the inverse distance, the relationship was non-linear. For example, a satellite half as far away as the Moon experiences gravity at 1/30² = 1/900, not 1/1800. Note that the distance is one-half the distance to the Moon, yet the intensity is four times as strong as the intensity at the Moon—not twice as strong as you might expect. The Inverse Square Law applies to anything that radiates in all directions. Gravity obviously follows this law because our spherical planet tugs on objects all across Earth's surface. The light intensity from a bulb, the radiation escaping a brick of uranium, the force on an electron in an electric field, and the screeching sounds of a passing ambulance are all under the command of the Inverse Square Law. From a far distance these things seem to be weak—a faint light on your face or the subtle sounds of the sirens. But as you get closer to the emanating source, the effect grows larger, and the closer you get, the more rapidly the effect increases. Why do gravity, sound, electric force, and radiation all follow the law? Because they can radiate in all directions. The intensity of a source of energy depends on the square of the distance from the source, and since the intensity diminishes with distance it must depend on the inverse of that quantity, so that when the distance increases, the intensity decreases. Does a laser beam follow the law? No. A laser beam of light does not spread vertically or horizontally, so there is no diminishing of its intensity with distance. Which move reduces a light's intensity by a larger fraction: a. moving it from 1 meter away to 2 meters away, or b. moving it from 10 meters away to 11 meters away? The answer is "a." The intensity would go from I/1² = I to I/2² = I/4, or a change of 400%.
In "b" the intensity falls from I/10² = I/100 to I/11² = I/121, or a change of 21%. How many speakers 100 meters away would it take to match the loudness of one speaker 1 meter away? Approximately 10,000 speakers. If the initial intensity was I/1² = I, and the new intensity is I/100² = I/10,000, you'd need (10,000 speakers)×(I/10,000) = I to have the same intensity as you originally had with 1 speaker at 1 meter. Would fewer apples be needed to weigh out a pound at a higher or lower elevation? Lower elevation. At a lower elevation the distance from the center of the earth is smaller, so the intensity of gravity would be greater, and thus fewer apples would be needed for a pound. Have you ever noticed that a campfire illuminates the faces of all who sit around it quite well, but when you go off to your tent for more marshmallows the light dwindles down to almost nothing? You've probably observed the same with sound—your headphones seem to be loud enough when up against your ears, yet you can only barely hear them when you rest them against your neck. This is all due to an effect known as the Inverse Square Law, and it applies to many natural phenomena including light, sound, radiation, and gravitation. The Inverse Square Law can be explained in several ways, but it is probably best to take a step-by-step approach with something simple like a light bulb. You can try this with your class. Cover a light source with some opaque material that lets no light escape. Then, put a small pinhole in the material to let out a one-dimensional beam of light. Now, whether you put your hand an inch from the light source or a few yards away, the light intensity in your palm from the small beam escaping from the pinhole is about the same. Add a second dimension by cutting a slit in the material so that a fan of light escapes. The light now has another dimension to spread out into. If you put your hand a few inches from the source and slowly back away, you'll see that the intensity changes quite a bit. While you will see the entire slit on your hand when it is very close to the source, at a large distance you will see that your hand only covers a small part of the arc of light and therefore only receives a small part of the energy. The light's intensity is inversely proportional to the distance: I = I0/d. Finally, if you cut a square from the material to allow a third dimension, you'll see even more spread. Now the light, and thus its energy, can spread in two directions. Again, if you place your hand near the cutout you will see the full bright square in your palm, but as you move away, the intensity drops quickly and the square no longer fits in your hand. Since it is spreading in two directions, and thus over a square area, the intensity falls in inverse proportion to the distance squared: I = I0/d².
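The worked answers above all reduce to a single function. A minimal Python sketch that reproduces the arithmetic from the text (the numbers are the ones used in the examples):

```python
def intensity(i0, d):
    """Inverse Square Law: I = I0 / d**2, with I0 the intensity
    at unit distance from the source."""
    return i0 / d ** 2

# Newton's Moon check: 60 times farther => 1/3600 the pull.
print(intensity(1.0, 60))                      # 0.000277... = 1/3600

# The two moves compared above:
print(intensity(1.0, 1), intensity(1.0, 2))    # 1.0 -> 0.25 (a 4x drop)
print(intensity(1.0, 10), intensity(1.0, 11))  # 0.0100 -> 0.00826...

# The speaker question: matching one speaker heard from 1 m
# with speakers heard from 100 m takes about 10,000 of them.
print(intensity(1.0, 1) / intensity(1.0, 100))  # 10000.0
```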
<urn:uuid:22e459a4-53ef-4510-bbf2-3c77cc0f98fd>
3.90625
1,494
Knowledge Article
Science & Tech.
55.294272
There was a recent article at weather.com called Classic Plains Tornado Outbreak Ingredients. It gives an excellent overview, along with weather map examples, of what it takes to generate a tornado. This is absolutely worth a read for any chaser. The tornado season this year has been absolutely HUGE so far! So, here's an infographic dedicated to teaching you more about this amazingly powerful weather phenomenon! This is one of those images that just makes you stop and gawk. I can't wait to shoot a picture like this myself someday. Seems fitting that the word for today should be Blizzard! Last night a massive snowstorm blanketed much of the Midwest United States, and has left many without power or stuck in places they don't want to be. Flights have been canceled, schools closed, and general mayhem was experienced by all! A blizzard, according to Wikipedia, is "a severe storm condition characterized by strong winds and reduced visibility. By definition, the difference between a blizzard and a snowstorm is the strength of the wind. To be a blizzard, a snow storm must have winds in excess of 56 km/h (35 mph) with blowing or drifting snow which reduces visibility to 400 meters or ¼ mile or less and must last for a prolonged period of time — typically three hours or more." Last night certainly counted as a blizzard. Just ask the people who experienced 25-foot waves crashing on the shores of Lake Michigan in Chicago, or the literally thousands of motorists stranded on the roads. This was one of the worst blizzards in history, and the full impact is yet to be realized. As I got to thinking about the word I wanted to discuss today, it occurred to me that the very base of what we're talking about here, Weather, would be the perfect word to delve into! Weather, according to Wikipedia, "is the state of the atmosphere, to the degree that it is hot or cold, wet or dry, calm or stormy, clear or cloudy." Seems pretty basic when you look at it like that! Most weather tends to occur in the troposphere, which is the layer of the atmosphere that we live in. Wikipedia goes on to say that "weather refers, generally, to day-to-day temperature and precipitation activity, whereas climate is the term for the average atmospheric conditions over longer periods of time." And, unless you're being specific, the term generally applies to weather on Earth. Weather basically occurs because of density differences caused by temperature and moisture varying between different points on the earth. These differences are generally caused by the varying sun angle at any point on the earth, which changes with latitude and distance from the tropics. The strong temperature contrast between the tropics and the poles is what causes the jet stream, which is a fast-flowing, narrow air current. Weather systems in the mid-latitudes, like the United States and much of Europe, are caused by instabilities of the jet stream flow. Sometimes the jet stream dips down from the poles, bringing with it cold air. Sometimes it causes a ridge that brings up warm, moist air from the tropics. All this combines to create the weather systems that get reported on each day. On Earth, "common weather phenomena include wind, cloud, rain, snow, fog and dust storms." Less common events include "tornadoes, hurricanes, typhoons and ice storms." Wind is created by differences in air pressure on the planet, with air flowing from regions of high pressure to low pressure. 
Pressure itself is tied to varying temperatures across the planet, with cold, dense air generally associated with surface high pressure, and warm, rising air associated with surface lows. The atmosphere is a hugely complex and chaotic system, where changes to one variable (temperature, pressure, moisture) can have huge effects on the weather in a particular location, or in a location remote from where the variable has changed. The dynamics of weather are in many cases poorly understood, which is why long-term weather forecasts are so difficult to create. As the years go by and we are able to gather more data about actual weather events, our understanding of weather and climate increases, and that leads to an increased ability to make accurate forecasts. But the weather will always surprise us, and we should be prepared for anything at any time. If you tend to venture outside your home, you should take time to understand what weather events might be headed your way! Have you ever been out hiking and looked down on a waterfall, surprised to see a round, rainbow-colored halo? Or perhaps you've looked off towards a bank of clouds, with the sun at your back, and seen the same colored halo? If so, you've seen what meteorologists call a Glory. Essentially, a Glory is "one or more sequences of faintly colored rings of light that can be seen by an observer around his own shadow cast on a water cloud (a cloud consisting mainly of small, uniform sized water droplets). It can also be seen on fog and exceptionally on dew." The glory can only be seen when the observer is directly between the sun and the cloud of refracting water droplets. Glories are not completely understood. The colored rings of the glory are caused by two-ray interference between "short" and "long" path surface waves, which are generated by light rays entering the droplets at diametrically opposite points (both rays suffer one internal reflection). Glories are often seen in association with a Brocken spectre, the apparently enormously magnified shadow of an observer, cast (when the Sun is low) upon the upper surfaces of clouds that are below the mountain upon which he or she stands. The name derives from the Brocken, the tallest peak of the Harz mountain range in Germany. Because the peak is above the cloud level, and the area is frequently misty, the condition of a shadow cast onto a cloud layer is relatively favored. The appearance of giant shadows that seemed to move by themselves due to the movement of the cloud layer (this movement is another part of the definition of the Brocken Spectre), and which were surrounded by optical glory halos, may have contributed to the reputation the Harz mountains hold as a refuge for witches and evil spirits. The next time you have your back to the sun and clouds at your front, or are up high with clouds below, try to find a Glory on your shadow. If you have a camera with you, and see a Glory, send us a snapshot! Source: Wikipedia I very much want to take pictures like this. I'm hoping to hit the Plains states this next storm season! I just read a Twitter post from @reedtimmerTVN that said another Nor'easter was bearing down on the East Coast of the United States today. It occurred to me that I really don't know what a Nor'easter is, so I figured that would be the perfect word for Weather Word Wednesday. 
According to Wikipedia, a Nor'easter "is a type of macro-scale storm along the East Coast of the United States and Atlantic Canada, so named because the storm travels to the northeast from the south and the winds come from the northeast, especially in the coastal areas of the Northeastern United States and Atlantic Canada." Specifically, a Nor'easter describes a low pressure area whose center of rotation is just off the East Coast and whose leading winds in the left forward quadrant rotate onto land from the northeast. Nor'easters can cause coastal flooding, coastal erosion, hurricane-force winds and heavy snow. While they can occur at any time of the year, they are most frequent in the winter months. These systems are known for bringing down extremely cold arctic air from the north. Nor'easters generally affect the United States, from Virginia to the New England coast, as well as Quebec and Atlantic Canada. They tend to bring massive amounts of precipitation, high winds, large waves, and marginal storm surges to coastal areas. In general, though, people tend to call any strong rain or snow storm in the Northeast a Nor'easter. The very name brings feelings of dread and anticipation to people in the region! For more information, visit this Wikipedia article on Nor'easters. This is the first of the Weather Word Wednesday series, where each Wednesday I take a word or set of words found in weather reporting or meteorology, and let you know what they mean! My aim is to educate and help you understand what's really being talked about on your favorite weather report, blog, or publication. Today we are going to talk briefly about Winter Storm advisories, watches and warnings! Winter Storm Advisory: An advisory is issued by the National Weather Service (NWS) when a significant winter storm or hazardous winter weather is occurring or imminent, and is mainly an inconvenience. This means bad weather is happening right now or will happen soon, but it won't be as bad as what you get with a Winter Storm Warning. Winter Storm Watch: A watch is issued when significant winter weather, like heavy snow, sleet, significant freezing rain, or a combination of events, is expected, but not imminent. A watch provides about 12 to 36 hours' notice of the possibility of severe winter weather. Winter Storm Warning: This is issued when significant winter weather is occurring, imminent, or likely, and is a threat to life or property. If you hear a Winter Storm Warning for your area, you should probably remain inside with a nice cup of hot chocolate! But be prepared to dig yourself out later! During the course of any given winter, you are likely to hear all three of these weather statements issued at one time or another. Even here in Arizona, we get Winter Storm Warnings when the occasional super heavy snowstorm rolls through. It's good to have a basic understanding of the urgency associated with each of these weather statements, as it ultimately helps you get prepared for the weather ahead. Keep an eye on the TV, or an ear to the radio, and always know what's happening outside! Sources: The Weather Channel
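Since the blizzard definition quoted earlier boils down to a three-part test, it translates directly into code. A minimal sketch, with thresholds taken from that Wikipedia definition and an invented function name and sample values:

def is_blizzard(wind_kmh, visibility_m, duration_h):
    # Winds over 56 km/h (35 mph), visibility reduced to 400 m (1/4 mile)
    # or less, lasting a prolonged period (typically three hours or more).
    return wind_kmh > 56 and visibility_m <= 400 and duration_h >= 3

print(is_blizzard(wind_kmh=70, visibility_m=200, duration_h=5))   # True: a blizzard
print(is_blizzard(wind_kmh=40, visibility_m=200, duration_h=5))   # False: just a snowstorm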
<urn:uuid:5b388cac-5c02-4ffd-9102-2e40c422e606>
2.796875
2,113
Personal Blog
Science & Tech.
48.248234
Carbon dioxide (CO2) is the most important greenhouse gas produced by human activities, primarily through the combustion of fossil fuels. It accounts for the largest proportion of the 'trace gases' and is currently responsible for 60% of the 'enhanced greenhouse effect'. It is also thought to have been present in the atmosphere for over 4 billion years of the Earth's 4.6-billion-year geological history, and in much larger proportions (up to 80%) than today. Most of the carbon dioxide was removed from the atmosphere as early organisms evolved photosynthesis. This locked away carbon dioxide as carbonate minerals, oil shale, coal, and petroleum in the Earth's crust when the organisms died, leaving about 0.03% in the atmosphere today. The natural carbon dioxide cycle - Atmospheric carbon dioxide comes from a number of natural sources, mainly the decay of plants, volcanic eruptions and as a waste product of animal respiration. - It is removed from the atmosphere by photosynthesis in plants and by dissolving in water, especially on the surface of oceans. - Carbon dioxide stays in the atmosphere for approximately 100 years. - The amount of carbon dioxide taken out of the atmosphere by plants is almost perfectly balanced with the amount put back into the atmosphere by respiration and decay. Small changes as a result of human activities can have a large impact on this delicate balance. The impact of human activities Burning fossil fuels releases the carbon dioxide stored millions of years ago. Fossil fuels are used to run vehicles (petrol, diesel and kerosene), heat homes and businesses, and power factories. Deforestation releases the carbon stored in trees and also results in less carbon dioxide being removed from the atmosphere. Since the Industrial Revolution in the 1700s, human activities, such as the burning of oil, coal and gas, and deforestation, have increased CO2 concentrations in the atmosphere. In 2005, global atmospheric concentrations of CO2 were 35% higher than they were before the Industrial Revolution. The best-case scenario for the growth in carbon dioxide emissions predicts that the concentration of carbon dioxide in the atmosphere will reach double its pre-industrial level in 2100; the worst-case scenario brings this forward to 2045. Reducing carbon emissions Promote the use of carbon-free or reduced-carbon sources of energy. Carbon-free sources of energy have their own associated impacts, but in general, these technologies generate energy without producing and emitting carbon dioxide to the atmosphere. Carbon-free energy sources include solar power, wind power, geothermal energy, low-head hydropower, hydrokinetics and nuclear power. Alternatively, switching from high-carbon fuels like coal and oil to reduced-carbon fuels such as natural gas will also result in reduced carbon dioxide emissions. Another method of reducing emissions is carbon sequestration, which involves the capture and storage of carbon dioxide that would otherwise be present in the atmosphere, contributing to the greenhouse effect.
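To see how the "doubling in 2100 versus 2045" scenarios above can arise, here is a rough sketch assuming simple exponential growth. The 280 ppm pre-industrial level is a commonly cited figure, the 35%-higher 2005 value comes from the text, and the two growth rates are back-solved illustrative assumptions rather than published scenario parameters:

import math

PREINDUSTRIAL = 280.0            # ppm; commonly cited pre-industrial level
C2005 = PREINDUSTRIAL * 1.35     # "35% higher" in 2005, per the text
TARGET = 2 * PREINDUSTRIAL       # double the pre-industrial level

for rate in (0.0041, 0.0099):    # assumed constant annual growth rates
    years = math.log(TARGET / C2005) / math.log(1 + rate)
    print(f"growth of {rate:.2%} per year -> doubling around {2005 + years:.0f}")

# Prints roughly 2101 and 2045, bracketing the best- and worst-case scenarios.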
<urn:uuid:bc8ba587-ba5b-4054-84cf-93720da63ddf>
4.21875
596
Knowledge Article
Science & Tech.
30.41825
Halfway through the entry for Lineodes integra you'll see a character that looks like a crosshair, followed by "Solanum spp. 4-5,8, S. radula4-5, S. jasminifolium4-5, S. tuberosum (=Potato)8." According to Wolfram Mey, the leading lepidopterist of the Museum of Natural History (MfN), Berlin: The symbol means that the species has been reared from/on the particular plant. The symbol has been in use particularly by the old British authors, particularly Lord Walsingham, and is also used on the labels attached to the specimens. (translation by Dr. Michael Ohl) What this tells us is that Lineodes integra (Eggplant Leafroller Moth) is reared on a variety of Solanum species, including Solanum tuberosum (Potato). This example was uncovered during a Name search for Solanum tuberosum; the resulting bibliography included a link to this volume on insects from the Biologia Centrali-Americana, which seemed unusual given the search was for a plant species. This demonstrates why we'd want to facilitate proximity searches, so that users could find pages where both Lineodes integra and Solanum tuberosum occur, to aid in the discovery of predator-prey, plant-pollinator, or other coevolutionary relationships. This example also suggests that our OCR algorithms are woefully inadequate to infer these kinds of relationships through automated means; the crosshair symbol was identified as ©.
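A first pass at the proximity search described above is easy to prototype once page text is tokenized. This is only a sketch under assumed simplifications (it matches on the first word of each name, e.g. the genus, and uses an arbitrary 50-word window); it is not the project's actual search implementation:

import re

def near(page_text, term_a, term_b, window=50):
    # Tokenize to lowercase words, then check whether the two terms
    # occur within `window` words of each other on the page.
    words = re.findall(r"\w+", page_text.lower())
    def positions(term):
        head = term.lower().split()[0]
        return [i for i, w in enumerate(words) if w == head]
    return any(abs(i - j) <= window
               for i in positions(term_a) for j in positions(term_b))

page = "Lineodes integra has been reared on Solanum tuberosum, the potato."
print(near(page, "Lineodes integra", "Solanum tuberosum"))   # True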
<urn:uuid:d1a381ef-8324-4f50-95fc-305dcf5d1713>
3.03125
332
Personal Blog
Science & Tech.
33.942622
In 1917, a year after his general theory of relativity was published, Einstein tried to extend his field equation of gravitation to the universe as a whole. The universe as known at the time was simply our galaxy; the neighboring Andromeda, visible to the naked eye from very dark locations, was thought to be a nebula within our own Milky Way home. Einstein's equation told him that the universe was expanding, but astronomers assured him otherwise (even today, no expansion is evident within the 2-million-light-year range to Andromeda; in fact, that galaxy is moving toward us). So Einstein inserted into his equation a constant now known as "lambda," for the Greek letter that denoted it. Lambda, also called "the cosmological constant," supplied a kind of force to keep the universe from expanding and hold it stable within its range. Then in 1929, Hubble, Humason, and Slipher, using the 100-inch Mount Wilson telescope in California, made their monumental discovery of very distant galaxies and the fact that they were receding from us, implying that the universe was indeed expanding, just as Einstein's original equation had indicated! When Einstein visited California some time later, Hubble showed him his findings and Einstein famously exclaimed "Then away with the cosmological constant!" and never mentioned it again, considering lambda his greatest "blunder": it had, after all, prevented him from theoretically predicting the expansion of the universe. Fast forward six decades to the 1990s. Saul Perlmutter, a young astrophysicist at the Lawrence Berkeley Laboratory in California, had a brilliant idea. He knew that Hubble's results were derived using the Doppler shift in light. Light from a galaxy that is receding from us is shifted to the red end of the visible spectrum, while a galaxy that is approaching us has its light shifted to the blue end of the spectrum, from our vantage point. The degree of the shift is measured by a quantity astronomers call Z, which is then used to determine a galaxy's speed of recession away from us (when Z is positive, the shift is to the red).
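To make Z concrete: for small shifts it is the fractional change in wavelength, and the recession speed follows from it. A minimal sketch; the non-relativistic approximation v ≈ cZ and the sample wavelengths (the hydrogen H-alpha line at 656.3 nm, shifted by an invented amount) are my own illustrative assumptions:

C_KM_S = 299_792.458   # speed of light, km/s

def redshift(observed_nm, emitted_nm):
    # Z = (lambda_observed - lambda_emitted) / lambda_emitted
    return (observed_nm - emitted_nm) / emitted_nm

def recession_speed(z):
    # Valid only for small Z; larger shifts need the relativistic formula.
    return C_KM_S * z

z = redshift(observed_nm=662.8, emitted_nm=656.3)   # illustrative shift
print(f"Z = {z:.4f} -> v of roughly {recession_speed(z):,.0f} km/s (receding)")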
<urn:uuid:a7288559-c7c8-4df5-a993-b94a0e0a6087>
3.84375
446
Personal Blog
Science & Tech.
34.998492
What is biological integrity? Supporting and maintaining a balanced community of organisms. What are indicator species? Groups of biological resources that can be used to tell if water is being polluted. How are macroinvertebrates good indicator species? Because most of the bugs are VERY sensitive to pollution and temperature changes, which makes them ideal for pollution testing. Why are salmon and trout bad indicator species? Because they are tolerant of pollution.
<urn:uuid:3353dfee-bb72-4c27-b217-9025b9c7eeaa>
2.875
98
Personal Blog
Science & Tech.
26.63
A molecule is an electrically neutral group of two or more atoms held together by covalent chemical bonds. Molecules are distinguished from ions by their lack of electrical charge. However, in quantum physics, organic chemistry, and biochemistry, the term molecule is often used less strictly, also being applied to polyatomic ions. In the kinetic theory of gases, the term molecule is often used for any gaseous particle regardless of its composition. According to this definition, noble gas atoms are considered molecules despite being composed of a single non-bonded atom. A molecule may be homonuclear, that is, it consists of atoms of a single chemical element, as with oxygen (O2); or it may be a chemical compound composed of more than one element, as with water (H2O). Atoms and complexes connected by non-covalent bonds such as hydrogen bonds or ionic bonds are generally not considered single molecules. Molecules as components of matter are common in organic substances (and therefore biochemistry). They also make up most of the oceans and atmosphere. However, the majority of familiar solid substances on Earth, including most of the minerals that make up the crust, mantle, and core of the Earth, contain many chemical bonds, but are not made of identifiable molecules. Also, no typical molecule can be defined for ionic crystals (salts) and covalent crystals (network solids), although these are often composed of repeating unit cells that extend either in a plane (such as in graphene) or three-dimensionally (such as in diamond, quartz, or sodium chloride). The theme of repeated unit-cellular-structure also holds for most condensed phases with metallic bonding, which means that solid metals are also not made of molecules. In glasses (solids that exist in a vitreous disordered state), atoms may also be held together by chemical bonds without the presence of any definable molecule, but also without any of the regularity of repeating units that characterises crystals. Molecular science The science of molecules is called molecular chemistry or molecular physics, depending on whether the focus is on chemistry or physics. Molecular chemistry deals with the laws governing the interaction between molecules that results in the formation and breakage of chemical bonds, while molecular physics deals with the laws governing their structure and properties. In practice, however, this distinction is vague. In molecular sciences, a molecule consists of a stable system (bound state) composed of two or more atoms. Polyatomic ions may sometimes be usefully thought of as electrically charged molecules. The term unstable molecule is used for very reactive species, i.e., short-lived assemblies (resonances) of electrons and nuclei, such as radicals, molecular ions, Rydberg molecules, transition states, van der Waals complexes, or systems of colliding atoms as in a Bose–Einstein condensate. History and etymology - Molecule (1794) – "extremely minute particle," from Fr. molécule (1678), from modern Latin molecula, diminutive of Latin moles "mass, barrier". A vague meaning at first; the vogue for the word (used until the late 18th century only in Latin form) can be traced to the philosophy of Descartes. 
Although the existence of molecules has been accepted by many chemists since the early 19th century as a result of Dalton's laws of Definite and Multiple Proportions (1803–1808) and Avogadro's law (1811), there was some resistance among positivists and physicists such as Mach, Boltzmann, Maxwell, and Gibbs, who saw molecules merely as convenient mathematical constructs. The work of Perrin on Brownian motion (1911) is considered to be the final proof of the existence of molecules. The definition of the molecule has evolved as knowledge of the structure of molecules has increased. Earlier definitions were less precise, defining molecules as the smallest particles of pure chemical substances that still retain their composition and chemical properties. This definition often breaks down since many substances in ordinary experience, such as rocks, salts, and metals, are composed of large networks of chemically bonded atoms or ions, but are not made of discrete molecules. Molecular size Most molecules are far too small to be seen with the naked eye, but there are exceptions. DNA, a macromolecule, can reach macroscopic sizes, as can molecules of many polymers. Molecules commonly used as building blocks for organic synthesis have a dimension of a few Å to several dozen Å. Single molecules cannot usually be observed by light (as noted above), but small molecules and even the outlines of individual atoms may be traced in some circumstances by use of an atomic force microscope. Some of the largest molecules are macromolecules or supermolecules. Molecular formula A compound's empirical formula is the simplest integer ratio of the chemical elements that constitute it. For example, water is always composed of a 2:1 ratio of hydrogen to oxygen atoms, and ethyl alcohol or ethanol is always composed of carbon, hydrogen, and oxygen in a 2:6:1 ratio. However, this does not determine the kind of molecule uniquely – dimethyl ether has the same ratios as ethanol, for instance. Molecules with the same atoms in different arrangements are called isomers. Carbohydrates, for example, have the same ratio (carbon:hydrogen:oxygen = 1:2:1), and thus the same empirical formula, but different total numbers of atoms in the molecule. The molecular formula reflects the exact number of atoms that compose the molecule and so characterizes different molecules; different isomers, however, can share the same atomic composition while being different molecules. The empirical formula is often the same as the molecular formula but not always. For example, the molecule acetylene has molecular formula C2H2, but the simplest integer ratio of elements is CH. The molecular mass can be calculated from the chemical formula and is expressed in conventional atomic mass units equal to 1/12 of the mass of a neutral carbon-12 (12C isotope) atom. For network solids, the term formula unit is used in stoichiometric calculations. Molecular geometry Molecules have fixed equilibrium geometries (bond lengths and angles) about which they continuously oscillate through vibrational and rotational motions. A pure substance is composed of molecules with the same average geometrical structure. The chemical formula and the structure of a molecule are the two important factors that determine its properties, particularly its reactivity. Isomers share a chemical formula but normally have very different properties because of their different structures. 
Stereoisomers, a particular type of isomer, may have very similar physico-chemical properties and at the same time different biochemical activities. Molecular spectroscopy Molecular spectroscopy deals with the response (spectrum) of molecules interacting with probing signals of known energy (or frequency, according to Planck's formula). Molecules have quantized energy levels that can be analyzed by detecting the molecule's energy exchange through absorbance or emission. Spectroscopy does not generally refer to diffraction studies where particles such as neutrons, electrons, or high-energy X-rays interact with a regular arrangement of molecules (as in a crystal). Theoretical aspects The study of molecules by molecular physics and theoretical chemistry is largely based on quantum mechanics and is essential for the understanding of the chemical bond. The simplest of molecules is the hydrogen molecule-ion, H2+, and the simplest of all the chemical bonds is the one-electron bond. H2+ is composed of two positively charged protons and one negatively charged electron, which means that the Schrödinger equation for the system can be solved more easily due to the lack of electron–electron repulsion. With the development of fast digital computers, approximate solutions for more complicated molecules became possible and are one of the main aspects of computational chemistry. When trying to define rigorously whether an arrangement of atoms is "sufficiently stable" to be considered a molecule, IUPAC suggests that it "must correspond to a depression on the potential energy surface that is deep enough to confine at least one vibrational state". This definition does not depend on the nature of the interaction between the atoms, but only on the strength of the interaction. In fact, it includes weakly bound species that would not traditionally be considered molecules, such as the helium dimer, He2, which has one vibrational bound state and is so loosely bound that it is only likely to be observed at very low temperatures. Whether or not an arrangement of atoms is "sufficiently stable" to be considered a molecule is inherently an operational definition. Philosophically, therefore, a molecule is not a fundamental entity (in contrast, for instance, to an elementary particle); rather, the concept of a molecule is the chemist's way of making a useful statement about the strengths of atomic-scale interactions in the world that we observe. See also - Van der Waals molecule - Diatomic molecule - Small molecule - Chemical polarity - Molecular geometry - Covalent bond - Noncovalent bonding - List of compounds (for a list of chemical compounds) - List of molecules in interstellar space - Software for molecular mechanics modeling - Molecular Hamiltonian - Molecular ion - Molecular orbital - Molecular modelling - Molecular design software - WorldWide Molecular Matrix - Periodic Systems of Small Molecules - IUPAC, Compendium of Chemical Terminology, 2nd ed. (the "Gold Book") (1997). Online corrected version: (1994) "molecule". - Pauling, Linus (1970). General Chemistry. New York: Dover Publications, Inc. ISBN 0-486-65622-5. - Ebbin, Darrell D. (1990). General Chemistry, 3rd Ed. Boston: Houghton Mifflin Co. ISBN 0-395-43302-9. - Brown, Theodore L.; Kemp, Kenneth C.; LeMay, Harold Eugene; Bursten, Bruce Edward (2003). Chemistry – the Central Science, 9th Ed. New Jersey: Prentice Hall. ISBN 0-13-066997-0. - Chang, Raymond (1998). Chemistry, 6th Ed. 
New York: McGraw Hill. ISBN 0-07-115221-0. - Zumdahl, Steven S. (1997). Chemistry, 4th Ed. Boston: Houghton Mifflin. ISBN 0-669-41794-7. - Chandra, Sulekh (2005). Comprehensive Inorganic Chemistry. New Age Publishers. ISBN 81-224-1512-1. - Molecule, Encyclopaedia Britannica on-line - Molecule Definition (Frostburg State University) - DeKock, Roger L.; Gray, Harry B. (1989). Chemical structure and bonding. University Science Books. p. 199. ISBN 0-935702-61-X. - Chang RL, Deen WM, Robertson CR, Brenner BM. (1975). "Permselectivity of the glomerular capillary wall: III. Restricted transport of polyanions". Kidney Int. 8 (4): 212–218. doi:10.1038/ki.1975.104. PMID 1202253. - Chang RL, Ueki IF, Troy JL, Deen WM, Robertson CR, Brenner BM. (1975). "Permselectivity of the glomerular capillary wall to macromolecules. II. Experimental studies in rats using neutral dextran". Biophys J. 15 (9): 887–906. Bibcode:1975BpJ....15..887C. doi:10.1016/S0006-3495(75)85863-2. PMC 1334749. PMID 1182263. - IUPAC, Compendium of Chemical Terminology, 2nd ed. (the "Gold Book") (1997). Online corrected version: (1997, 2006) "spectroscopy". - Anderson JB (May 2004). "Comment on "An exact quantum Monte Carlo calculation of the helium-helium intermolecular potential" [J. Chem. Phys. 115, 4546 (2001)]". J Chem Phys 120 (20): 9886–7. Bibcode:2004JChPh.120.9886A. doi:10.1063/1.1704638. PMID 15268005. - Molecule of the Month – School of Chemistry, University of Bristol
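As a worked companion to the molecular-formula discussion above, here is a minimal sketch that computes a molecular mass from a formula string. The three-element mass table and the flat parser (no parentheses, charges, or isotopes) are simplifying assumptions for illustration only:

import re

ATOMIC_MASS = {"H": 1.008, "C": 12.011, "O": 15.999}   # u; only the elements used above

def molecular_mass(formula):
    # Sum atomic masses for a flat formula such as "H2O" or "C2H2".
    total = 0.0
    for symbol, count in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        total += ATOMIC_MASS[symbol] * (int(count) if count else 1)
    return total

print(molecular_mass("H2O"))    # ~18.02 u (water)
print(molecular_mass("C2H2"))   # ~26.04 u (acetylene: empirical formula CH)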
<urn:uuid:a97e0d63-1cb9-4510-b23f-b98e7593e19a>
3.734375
2,676
Knowledge Article
Science & Tech.
36.222574
Types of Moths Ranchman's Tiger Moth Polka Dot Wasp Moth Texas Wasp Moth Black Witch Moth Corn Earworm Moth Polyphemus (Silkworm Moth) Western Sheep Moth White-lined Sphinx Moth Types of Moths Like butterflies, moths belong to the order Lepidoptera. Often they are described by comparing them with butterflies. Here are some of the most common comparisons. While it does not hold for every example, moths are normally considered nocturnal Lepidoptera species, meaning they are active during the evening and night. Butterflies, on the other hand, are considered daytime Lepidoptera species, meaning they are active during the daylight hours. In terms of physical features, moths are often characterized as having thicker bodies than butterflies. The absence of a club (or ball) at the end of the antenna also characterizes moths. Another general rule of thumb is that butterflies have colorful wings, while moths have dull, brown wings. That generalization holds for many, but not all, moth and butterfly species. A high percentage of butterfly species in the Hesperiidae family (skippers) and Riodinidae family (metalmarks), for example, have brown wings. The moth pictures in this album show a representative sample of moth species with brightly colored wings and fit into seven different moth families. - Clearwing Moths: Family Sesiidae - Inchworm Moths: Family Geometridae - Erastria decrepitaria - Owlet Moths, Miller Moths: Family Noctuidae - Black Witch Moth, Corn Earworm Moth, Underwing Moths - Tiger Moths: Arctiidae - Cinnabar Moth, Tiger Moths, Wasp Moths - Giant Silkworm Moths: Saturniinae - Polyphemus, Western Sheep Moth - Sphinx Moths: Sphingidae - They are often called hummingbird moths because many species have robust bodies and can hover around flowers like hummingbirds. White-lined Sphinx Moth - Prominent Moths: Notodontidae - White-dotted Prominent © 2006-2011 Patricia A. Michaels
<urn:uuid:b4b7bbfc-f9f9-4c32-b0af-13b14009e4b9>
3.546875
479
Knowledge Article
Science & Tech.
28.094708
Discovering Warrior Wasps Lynn S. Kimsey is an entomologist, and has been one for most of her life. So begins the National Science Foundation's recent LiveScience feature on the UC Davis entomologist. It's an interesting piece. Kimsey, director of the Bohart Museum of Entomology and professor of entomology at UC Davis, traces her interest in entomology to age 5, when she received her first butterfly net. "I've pretty much had a burning passion for insects ever since, except for a brief foray into marine biology as an undergraduate," she told LiveScience. Kimsey recently drew international attention with her discovery of gigantic "warrior wasps" on the Indonesian island of Sulawesi. (The male measures about two and a half inches long, Kimsey says. "Its jaws are so large that they wrap up either side of the head when closed. When the jaws are open they are actually longer than the male's front legs.") And what is "the most important characteristic a researcher must demonstrate in order to be an effective researcher?" "A burning curiosity and the need to know." Kimsey is also quick to point out the societal benefits of her research. "Understanding insects, where they occur and the ecosystem services they provide, is critical to understanding how important insects are to us. They are our principal competitors: they feed on us and our animals, they make us sick, and yet they provide critical pollination, recycling and nutritional services." We're glad to see LiveScience singling out scientists for a "behind-the-scenes" look. It humanizes the scientists who do such intriguing research. We remember when apiculturist Marla Spivak, a 2010 MacArthur Fellow and Distinguished McKnight Professor and Extension entomologist with the University of Minnesota, shared some of her thoughts with LiveScience. When asked "If you could only rescue one thing from your burning office or lab, what would it be?" Spivak answered "My students." Then, showing a trademark sense of humor, she added "If there were bees in the lab, I would grab them, too." Kimsey, too, has a honed sense of humor. The Bohart Museum is the home of a global collection of seven million insect specimens and what she calls "the live petting zoo": insects you can touch and handle. They include Madagascar hissing cockroaches, a rose-haired tarantula, and walking sticks. We thought she might gleefully answer "walking sticks" when she was asked what she would RUN out of a burning building with, but no. Kimsey replied: "My external hard drive: My entire research life, my brain, is in that drive." Lynn Kimsey with a gigantic "warrior wasp" she discovered on the island of Sulawesi, Indonesia. (Photo by Kathy Keatley Garvey)
<urn:uuid:42edfa11-6502-42f5-9347-d721a47bc8ed>
2.6875
613
Personal Blog
Science & Tech.
44.480229
Java is a programming language that is quite similar to C and C++. Since its introduction by Sun Microsystems in 1995, Java's popularity has greatly increased throughout the years. Java can create programs that can be run not only on one computer, but also on multiple computers through a network! Java can also be used to create what is known as "applets." These applets can be placed on webpages to increase user interaction. Some of these applets might include games, quizzes, and other applications! There are some important things to know about Java. 1. Programs you create with Java are portable. Java programs can be run on any system and anywhere in a network, as long as the computer has a Java Virtual Machine. The Java Virtual Machine is a Java interpreter. Java programs are compiled into byte-codes. The Java Virtual Machine interprets the byte-codes into a code that your computer hardware can understand. 2. Java can be used to write very reliable software. The objects in a Java program cannot make any references to data external to themselves. This makes sure that data in another application or in the operating system are not called upon by the instructions in a Java program. If such references were allowed, the applications and operating system might crash! Thankfully, Java eliminates these problems. 3. Java is very secure. This is good, because many Java applets are downloaded from the Internet every day. Now, who would want to download something that isn't secure? When a Java Virtual Machine loads a Java program to interpret for your computer, it might run a byte-code verification program if it does not trust the code. This helps to keep Java safe from illegal byte-codes. 4. Java is simple, at least as far as programming languages go. It is similar to C and C++, so programmers who are already familiar with these two languages will find Java fairly simple. Also, Java has removed some of the excessive features in C and C++. It replaces features that result in poor programming practices with different, easier features! As you can see, Java is a really interesting language. It's no wonder that it has enthralled programmers for years! If you would like to learn Java, I would suggest taking a trip up to your local library, or visiting http://www.htmlgoodies.com, and checking out their Java tutorial! © 1995-2001 ThinkQuest Inc. All rights reserved.
<urn:uuid:15028047-1f35-4afd-80c4-91b90be0ea00>
3.265625
495
Knowledge Article
Software Dev.
50.478
Building Your Own Website from the Ground Up What is a Website? - A website consists of one or more "pages" that may be accessed by a web browser. - A web browser is a piece of software that runs on your local computer but shows content from a website hosted on a web server. - A web server is a computer located at a particular address on the internet that is configured to respond to requests from web browsers and provide "pages" of content. - A web server consists of hardware that provides data storage, connection to the internet, and computing power (Central Processing Unit [CPU] cycles plus RAM). - Most web servers are "virtual machines." They share hard disks, internet connections, CPU cycles, and RAM with many other websites. The real hardware is split up into pieces of the hard disk, CPU time, and memory so that websites take turns using the system resources. - The two most popular flavors of web server operating systems are Linux and Windows Server. - The Apache HTTP Server is one of the most popular pieces of software used on Linux systems to receive browser requests and provide data ("pages") in response. - An "IP address" is an "Internet Protocol address," a unique number that identifies a computer on a network; the whole internet is nothing but a set of uniquely identified computers that can send packets of information to each other by means of their IP addresses. - A page is a handy metaphor for "everything we see on our computer screen in response to a request we made of a server through our browser." The metaphor must not be pressed too closely; the content displayed in a browser by a server can be much more subtle and dynamic than a piece of paper or a page in a book. - A browser sends requests to websites. - Websites return "pages" to browsers. What is a Domain Name? - A Domain Name is a human-readable alias for an IP address that is used in a URL. - You may build a website without a domain name. You would then distribute the IP address in your URLs for the site. If you want to have a domain name, you must purchase the rights to a name from a Domain Name Registrar. - A URL is a "Universal Resource Locator" (or "Uniform Resource Locator") that allows a browser to connect to a particular part ("page") of a website. A URL breaks down into: Protocol | Separator | Domain Name or IP Address | Specific "page" - http: Hypertext Transfer Protocol - https: HTTP secure - ftp: File Transfer Protocol - mailto: Simple Mail Transfer Protocol (SMTP) interface. - IP address - IPv4: 32 bits. - IPv6: 128 bits. - static vs. dynamic "While IPv4 allows 32 bits for an IP address, and therefore has 2³² (4,294,967,296) possible addresses, IPv6 uses 128-bit addresses, for an address space of 2¹²⁸ (approximately 340 undecillion or 3.4×10³⁸) addresses." - The Domain Name System provides lists of domain names and their corresponding IP addresses through nameservers. - Note well: In order to have your domain name point to your web server, you must correctly fill out the records for your ISP's name server! How do I design a web page? - HTML editors: - FTP Clients: - Filezilla, a free File Transfer Protocol (FTP) client. - PHP: Hypertext Preprocessor, a server-side scripting language. - CGI: Common Gateway Interface - Perl: A general-purpose Unix scripting language. - XML: Extensible Markup Language, used to manage datasets. Content management systems Once you control a domain name, you can create or configure an e-mail server to send and receive e-mail from that domain. This takes us far afield from the primary function of websites. 
I'm not going to go into the details now. Most ISPs will have a pre-configured e-mail system. All you need to do is to create accounts or aliases for those who will have e-mail accounts. If you are building a site from scratch, then you must install your own e-mail server, configure the MX records properly to point to the server, and deal with all the kinds of hassles that go with being your own postmaster. To develop your own website, you need to purchase storage and bandwidth from an Internet Service Provider (ISP).
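To make the browser-request/server-response cycle described in this tutorial concrete, here is a minimal sketch using Python's standard library; the URL is a placeholder to replace with your own domain name or IP address:

from urllib.request import urlopen

# A browser does essentially this: resolve the domain name to an IP address
# through DNS, connect to the web server, send an HTTP request, and read
# the "page" that comes back in the response.
with urlopen("http://example.com/") as response:        # placeholder URL
    print(response.status)                              # 200 = request succeeded
    page = response.read().decode("utf-8", errors="replace")

print(page[:200])   # the first part of the returned HTML page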
<urn:uuid:a6739dea-f778-4897-9b4b-a8b7634d3dc0>
3.71875
940
Tutorial
Software Dev.
53.251283
The best way to reduce data from passive ranging is suggested Electromagnetic waves and sound waves are widely used for the determination of the location of objects. Familiar examples are radar, sonar, GPS positioning, Polaroid cameras, police speed meters, and many others. Most are echo devices, generating a wave and interpreting its echo from the object of interest. GPS is a cooperative system, in which the receiver observes timing signals from sources of known location, and locates itself in reference to them. In what follows, we will consider systems based on sound waves in water, locating in two dimensions only, and in the simple case of a uniform, isotropic speed of propagation. This will allow us to clarify the fundamentals free of unnecessary complication. For any practical system, all the complicating factors must be taken into consideration. The problem of determining the location of an object will be called ranging here. There are three kinds of ranging systems, which we will call echo, cooperative and passive. Echo ranging, illustrated in the Figure, is by far the most common method in practice. The observer at point 1 emits a wave at a certain time. When the outgoing wavefront strikes the object 0, a scattered wavefront is launched, which is detected at point 1 a certain time interval Δt later. The distance r from 1 to 0 is then r = cΔt/2, where c is the speed of sound. Two such stations can locate the point 0 by comparing the distances they obtain. More commonly, the observer is able to determine the direction of the received wave, and in that case the object can be located from the location 1 alone. This, of course, is the principle of radar. The object observed is passive, and need take no part in the procedure. In cooperative ranging, the object takes an active part. In one form, the object emits waves that contain timing information. Two fixed stations with clocks synchronized with that of the object note the time delays of the received wave, and from the two distances thereby determined can locate the object, and perhaps radio back its position. In the second form, the object receives waves from two fixed stations, and determines its position from the two distances. Of course, in three dimensions, three distances are necessary. The GPS system uses the second plan, with the remarkable circumstance that the 'fixed' stations are actually moving in orbits, but their positions can be accurately predicted at any time. Cooperative ranging places great demands on clocks, especially when carried out with electromagnetic waves. Timing sound waves is much less critical. Passive ranging makes use of a wave generated by the object to be located, but timing information is not necessary. Therefore, the wave used may be generated in the normal actions of the object, and not necessarily for location purposes. An example might be the location of a boat from its propeller noise, or a whale from its singing. A minimum of three fixed stations is necessary, which we assume can detect a sound only, not determine its direction. The fundamental data is the relative times, the time delays, at which the signals are received at the stations. Let us assume all delays have been expressed as distances by multiplying them by the speed of sound. In the Figure, b and c are the delays at stations 2 and 3, relative to station 1. A circular wavefront must pass through point 1, and be tangent to the circles drawn from stations 2 and 3. The centre of this wavefront is then the location of the object. 
It was this diagram that caught my attention in the Reference, since the problem of constructing a circle passing through a given point and tangent to two given circles is a challenging one that I have not seen before. It happens that the two circles cannot be given arbitrarily, and the wavefront must be convex at the points of tangency. A means of solution with compass and straightedge did not suggest itself, so I turned to algebra. The triangle 023 can be solved by the law of cosines when a guess for a is used. Then, the distance 01 can be computed and compared with the guess for a. If they are not equal, a new guess for a is chosen in the direction making them more equal, and soon a consistent solution arises. Of course, a computer is necessary for the tedious calculations. It became obvious that certain input values were inconsistent, and that various cases existed that would complicate the matter. However, this could be worked into a practical means of solution. A different attack was based on the familiar problem of different times of arrival at two stations. If the wavefront arrives simultaneously, the source is somewhere on the perpendicular bisector of the line joining the observation points. If the delay d in arrival is equal to the distance between the two points 2a, then the source is located somewhere on the line through the two points, excepting the segment joining them. For intermediate values of delay, the point must be on a branch of a hyperbola whose foci are the observation points, since the hyperbola is the locus of points whose distances to two fixed points, the foci, have a constant difference. The problem now is to take the three observation points as two pairs, determine the hyperbolas, and find their point of intersection. This is also possible, but extremely tedious. One notices that the hyperbolas lie relatively close to their asymptotes at even moderate distances from the foci. If the hyperbolas are replaced by their asymptotes, the intersections of these straight lines are relatively easy to determine. The intersection of the asymptotes for two pairs will give an approximate location of the point. Now we apply a procedure to correct this approximate location to a more accurate one. To do this, the time delays predicted from the approximate location are subtracted from the observed time delays to obtain an error at each observation point, and a small displacement of the approximate location is made that reduces the sum of the squares of the errors to a minimum. Least squares also handles the possibility of errors of measurement, and the use of redundant additional observations. Iteration will reduce the error to as small a figure as is consistent with the accuracy of the observations. This, I believe, is the best way to reduce the data for passive ranging. C. V. Drysdale, Submarine Signalling and the Transmission of Sound Through Water, in C. V. Drysdale, et. al., The Mechanical Properties of Fluids (London: Blackie and Son, 1925). Composed by J. B. Calvert Created 30 July 2000
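As an addendum, the iterative least-squares refinement described above can be sketched numerically. The station layout, source position, and starting guess below are invented for the example, and with only three stations the two delay residuals determine the point exactly, so a plain Newton step stands in for the general least-squares machinery:

import math

stations = [(0.0, 0.0), (10.0, 0.0), (0.0, 8.0)]   # observation points 1, 2, 3
true_src = (7.0, 5.0)                              # assumed source, used to generate delays

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

# Observed delays relative to station 1 (the b and c of the figure),
# already expressed as distances.
delays = [dist(true_src, s) - dist(true_src, stations[0]) for s in stations[1:]]

def refine(guess, iterations=20):
    x, y = guess                                   # must not coincide with a station
    for _ in range(iterations):
        d = [dist((x, y), s) for s in stations]
        # Residuals: predicted minus observed relative delays.
        r = [(d[k] - d[0]) - delays[k - 1] for k in (1, 2)]
        # Analytic Jacobian of the residuals with respect to (x, y).
        J = [[(x - stations[k][0]) / d[k] - (x - stations[0][0]) / d[0],
              (y - stations[k][1]) / d[k] - (y - stations[0][1]) / d[0]]
             for k in (1, 2)]
        # Newton step: solve the 2x2 system J * step = -r by Cramer's rule.
        det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
        dx = (-r[0] * J[1][1] + r[1] * J[0][1]) / det
        dy = (-r[1] * J[0][0] + r[0] * J[1][0]) / det
        x, y = x + dx, y + dy
    return x, y

x, y = refine((5.0, 5.0))
print(f"estimated source: ({x:.3f}, {y:.3f})")     # converges to (7.000, 5.000)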
<urn:uuid:e00bd1bc-ef6f-4ce5-b0df-8434a6018a1e>
3.84375
1,349
Academic Writing
Science & Tech.
44.092435
Effects of red tide Name: Mrs. Corwin's 5th grade class My 5th grade class would like to know why it is that red tide only affects humans and not lobsters, fish, etc. Why is the micro-organism so toxic to humans? Waiting to learn.... Without going into too much detail, the microorganisms responsible for causing red tide can live within the shellfish without killing them because the chemicals they produce, which imbue everything with the characteristic red color, aren't toxic to the shellfish. It's just our bad luck that those very same chemicals happen to interact with our body chemistries in ways that can't occur in shellfish. As it turns out, red tides do affect other vertebrates (animals with backbones); in fact, they are responsible for huge, stinky die-offs of fish that wash up on shore during a "red tide". The microorganisms responsible for the occurrence of a red tide are "dinoflagellates". There are different types of dinoflagellates, and as I understand it, they produce different types of toxins, but usually the toxin responsible for the die-offs is what is called a "neurotoxin", which affects the heart, slowing it down. This reduces blood circulation, and the reduced blood circulation to the gills results in oxygen starvation, and the fish dies. As far as I know, however, this toxin only affects vertebrates, and not invertebrates (animals without backbones) like the clam and lobster. Tom F. Ihde Update: June 2012
<urn:uuid:69129efc-4714-4fcb-83de-fb3500a079bc>
3.28125
360
Knowledge Article
Science & Tech.
47.779839
In former papers it has been shown that the partial impact of cosmical bodies may not unfrequently produce a central mass and attendant bodies, which I have called respectively a sun or nebula, and planets. The sun is at a high temperature and rotates. The planets, in a solid, liquid, or gaseous state, revolve round in one general plane with orbits of varying area and of high eccentricity. All the motions, whether of sun or planets, have one common direction. Further it was shown that the planetary path is due to a portion of the original proper motion escaping conversion into heat at impact. For the same reason the temperature of the planet is lower than that of the sun, whose high molecular velocity, due to its temperature and comparatively small mass, may cause it to expand into a nebula. The present paper requires that the central mass shall become a nebula, and shall expand beyond aphelion distance of the most remote planet. The forces acting on the planet will be the attraction of the nebula, gaseous adhesion while traversing the nebula, and at the same time exchange of molecules with those of the nebula. The heavier molecules will generally be attracted to the planet, while the lighter ones will leave it. The probability of such a system being formed, or the possibility of gaseous planets moving in a nebula, with its attendant effects on the size of the orbit and the change of apsides, is not treated in this paper. It is solely occupied with the change of eccentricity. The following are five causes which are calculated to result in such a change:— 1st. An alteration in the amount of the attractive force exerted on the planet by the nebula. 2nd. The varying resistance and interchange of molecules incurred by the planet in its path. 3rd. The gaseous adhesion to the planet revolving on its axis within a nebula. 4th. The accretion of some of the vast number of small bodies which would exist in the nebula. 5th. Some others which are too dependent upon the special character of the impact to be discussed at present. In compliance with the wishes of several members, I have inserted in this paper the solutions of the dynamical problems involved, whose truth I had before assumed. The agency of lessened attraction as affecting any one planet applies only to the period which elapses while the central mass is expanding to a nebula, and it will appear that the first revolution will especially be productive of altered eccentricity on this count. The following shows the action of these forces reduced to geometrical problems:— Problem 1. Suppose a planet to be at that part of its orbit most distant from the sun, and, while in this position, suppose the mass of the sun suddenly diminished to a given extent,—required to trace the effect of this diminution of the sun's mass upon the orbit of the planet. At present let the sun's mass be considered constant. Let the line ax, fig. 1, be tangent to the curve at aphelion, and aa, ab, bc infinitesimals along ax in the direction of the planet's course; let aa′, bb′, cc′ be infinitesimals representing the fall of the planet during the times contained respectively in aa, ab, bc; then aa′b′c′ will be the path of the planet. Now suppose the mass of the sun to be decreased; the infinitesimals aa, ab, bc will remain unaltered, but aa′, bb′, cc′, etc., will each be diminished, to aa″, bb″, cc″. Then the curve aa″b″c″ represents the new orbit. It falls without the old orbit, except at a where it coincides with it. 
Perihelion distance is therefore increased, as represented in fig. 2, by virtue of diminished attraction. The amount of the lessening of the attractive force will depend upon the quantity of the sun's matter which expands beyond aphelion distance. The portion which so expands ceases to affect the path of the planet. As this increases the orbit will assume variously the forms of the ellipse, circle, ellipse (the foci being reversed), parabola and hyperbola. If the attraction towards the centre entirely ceased, the path would coincide with the line ax. These orbits are respectively shown in fig. 2. In fig. 3 let p′ represent the orbit with perihelion distance increased beyond that of p, this latter representing the orbit if the sun were not to expand into a nebula. Let the dotted circle c represent the limits to which the nebula has expanded when the planet passes aphelion. As the planet is entirely in the nebula it will be subject to constantly and rapidly diminishing attraction as it approaches the centre, s; hence it will not pass along p′, but will move more slowly inwards (in agreement with the first problem), and will pass along the second dotted line p″, which shows great increase in perihelion distance. The two actions which have now been discussed scarcely affect aphelion distance, but render the orbit more circular by increasing perihelion distance. I have now to notice gaseous resistance and interchange of molecules, whose action will be found chiefly to diminish aphelion distance. The following problem demonstrates decrease of aphelion distance by a resistance at perihelion. Problem 2. Suppose a planet to be at that part of its orbit nearest to the sun, and, when in that position, suppose a retarding force to act upon it,—required to trace the effect of this upon the orbit of the planet. Let px represent a tangent to perihelion, and pa, ab, bc be components in direction px, passed over in three successive infinitesimals of time. Let aα, bβ, cγ represent the total fall towards the sun in the same intervals. Then pαβγ represents the orbit. Now let the velocity in the direction px be diminished by the retarding force, and let the spaces pa′, a′b′, b′c′ represent the components in the direction px in the same infinitesimals of time. The components towards the sun remaining the same, draw αα′, ββ′, γγ′ parallel to px; then α′, β′, γ′ are points in the new orbit. This curve lies entirely within the other. Thus, by a retardation at perihelion, aphelion distance is diminished, as shown in fig. 5. If this retardation is great enough, the orbit may become a circle or an ellipse with foci reversed, as shown in fig. 5. The general action of gaseous resistance is to convert the energy of the system into heat by gradually drawing the planet into the sun, or to the centre of attraction. It is maximum at perihelion, for there the density of the nebula is greater than at any other part of the orbit. Molecular exchange results from the varying densities of the different parts of the system. The planets are cooler than the central parts of the nebula, and will most likely be denser than the matter surrounding them in their path, and have sufficient attractive power to collect the heavy molecules in their vicinity. The temperature of the surface of the planet will be raised to an unknown extent by its immersion in the nebula and its progress towards perihelion. 
Its light molecules have their velocity so increased as to escape the planet, while the heavier molecules of the vicinity, with their lower velocity (though equal temperature), will be attracted, picked up, and become permanently part of the planet. A greater proportion of heavy molecules will be found towards perihelion, for at the centre of the nebula will probably be its greatest density, and the original expansion of the central mass into a nebula will result in the more rapid outward escape of the light molecules compared with the heavy, in obedience to the laws of gaseous diffusion. Thus the accretion of molecules to the planet will be maximum at perihelion distance. Its effect will be to retard the motion of the planet, as, in order to give its own velocity to a molecule, it will impart some of its energy. The escape of the light molecules will not affect the planet's orbit. We find therefore that gaseous resistance and molecular exchange act as resistances to planetary motion and are both maximum at perihelion, thereby decreasing aphelion distance and rendering the orbit more circular.
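Problem 1 can be checked numerically with standard two-body formulas. Below is a minimal sketch in units where the gravitational parameter GM = 1 and the original semi-major axis is 1; the starting eccentricity of 0.5 and the sampled mass fractions are illustrative assumptions, not values from the paper:

import math

GM, a, e = 1.0, 1.0, 0.5                   # assumed units and starting orbit
r_a = a * (1 + e)                          # aphelion distance
v_a = math.sqrt(GM * (2 / r_a - 1 / a))    # vis-viva speed at aphelion
h = r_a * v_a                              # specific angular momentum, unchanged by the mass loss

def eccentricity_after(f):
    # New eccentricity when the sun's mass suddenly drops to a fraction f
    # while the planet sits at aphelion with speed v_a.
    mu = f * GM
    energy = v_a**2 / 2 - mu / r_a         # new specific orbital energy
    return math.sqrt(max(0.0, 1 + 2 * energy * h**2 / mu**2))

for f in (1.0, 0.75, 0.5, 0.35, 0.25, 0.1):
    e_new = eccentricity_after(f)
    kind = ("circle" if e_new < 1e-6 else
            "ellipse" if e_new < 1 - 1e-6 else
            "parabola" if e_new < 1 + 1e-6 else "hyperbola")
    print(f"mass fraction {f:.2f}: e = {e_new:.3f} ({kind})")

# The output walks through the paper's sequence: ellipse, rounder ellipse,
# circle, ellipse again (between f = 0.25 and 0.5 the foci are reversed and
# the old aphelion becomes the new perihelion), parabola, and hyperbola.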
<urn:uuid:3f7c712d-8ba7-4ec3-aa48-0ded6603f089>
3.03125
1,803
Academic Writing
Science & Tech.
44.448078
For the mathematically minded: There are a few different methods of calculating the SOI. The method used by the Australian Bureau of Meteorology is the Troup SOI, which is the standardised anomaly of the Mean Sea Level Pressure (MSLP) difference between Tahiti and Darwin. It is calculated as follows:

SOI = 10 × [ Pdiff − Pdiffav ] / SD(Pdiff)

where Pdiff = (average Tahiti MSLP for the month) − (average Darwin MSLP for the month), Pdiffav = long-term average of Pdiff for the month in question, and SD(Pdiff) = long-term standard deviation of Pdiff for the month in question. The multiplication by 10 is a convention. Using this convention, the SOI ranges from about −35 to about +35, and the value of the SOI can be quoted as a whole number. The SOI is usually computed on a monthly basis, with values over longer periods, such as a year, sometimes used. Daily or weekly values of the SOI do not convey much in the way of useful information about the current state of the climate, and accordingly the Bureau of Meteorology does not issue them. Daily values in particular can fluctuate markedly because of daily weather patterns, and should not be used for climate purposes.
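For readers who want to try the numbers, here is a minimal sketch of the Troup formula above. The monthly pressures and the long-term statistics in the example are placeholder values, not real climatology.

    public class TroupSoi {
        // SOI = 10 * (Pdiff - PdiffAv) / SD(Pdiff), quoted as a whole number by convention
        static long soi(double tahitiMslp, double darwinMslp, double pdiffAv, double pdiffSd) {
            double pdiff = tahitiMslp - darwinMslp;
            return Math.round(10.0 * (pdiff - pdiffAv) / pdiffSd);
        }

        public static void main(String[] args) {
            // Hypothetical month: Tahiti 1012.1 hPa, Darwin 1008.4 hPa, with an assumed
            // long-term mean difference of 1.0 hPa and standard deviation of 2.5 hPa.
            System.out.println("SOI = " + soi(1012.1, 1008.4, 1.0, 2.5)); // prints SOI = 11
        }
    }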
<urn:uuid:7e3e43c2-9654-4b49-ae72-554a9f5e16bc>
3.265625
280
Knowledge Article
Science & Tech.
42.850114
dynamics of genetic change
Genetic variation is present throughout natural populations of organisms. This variation is sorted out in new ways in each generation by the process of sexual reproduction, which recombines the chromosomes inherited from the two parents during the formation of the gametes that produce the following generation. But heredity by itself does not change gene frequencies. This principle is stated by...
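The truncated sentence above is leading into the principle usually associated with Hardy and Weinberg: random mating reshuffles genotypes every generation, but by itself it leaves allele (gene) frequencies essentially unchanged. A small simulation sketch illustrates the point; the population size and starting frequency are arbitrary assumptions.

    import java.util.Random;

    public class RandomMating {
        public static void main(String[] args) {
            Random rng = new Random(42);
            int n = 10_000;       // diploid individuals
            double p = 0.3;       // frequency of allele A
            for (int gen = 1; gen <= 5; gen++) {
                int aCount = 0;
                for (int i = 0; i < n; i++) {
                    // each offspring draws two gametes at random from the gene pool
                    if (rng.nextDouble() < p) aCount++;
                    if (rng.nextDouble() < p) aCount++;
                }
                p = aCount / (2.0 * n);  // allele frequency in the next generation
                System.out.printf("generation %d: freq(A) = %.4f%n", gen, p);
            }
            // The printed frequency hovers around 0.30: reshuffling alone is not evolution.
        }
    }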
<urn:uuid:c016c3ed-d56d-4c19-912b-eed91475b3c9>
3.59375
134
Knowledge Article
Science & Tech.
40.636288
Spring Comes Early Across the U.S.
Over the past several decades, with the exception of the Southeast, spring weather has been arriving earlier in most parts of the United States. This shift affects all sorts of biological processes that are triggered by warmer temperatures — not just flowering, but animal migration and giving birth and the shedding of winter coats and the emergence from cocoons. The calendar above uses an index for the onset of spring developed by Mark D. Schwartz (University of Wisconsin-Milwaukee) and USA National Phenology Network colleagues. This index, based on temperature variables measured at individual weather stations, estimates the first day that leaves appear on plants in a given state. To come up with a U.S. estimate, we took the average change across 716 weather stations spread across the lower 48 states. For the U.S., the average shift of "first leaf" was approximately 3 days earlier, moving from March 20 (1951-1980 average) to March 17 now (1981-2010 average).
<urn:uuid:5d327b8f-3037-4f46-ac20-1eaed3069e7d>
3.71875
205
Knowledge Article
Science & Tech.
52.965016
Kayley Rachel (5th grade) from Texas wanted to know if temperature made a difference in the strength of magnets. She researched her question and came up with an experiment to answer her question. What do you think? Is a hot magnet stronger or weaker than a cold magnet? Read her science project "Hot on Magnets" to find the answer.
Question: Does temperature affect the strength of magnets?
The hypothesis of my experiment is that temperature extremes (0 degrees Fahrenheit and 210 degrees Fahrenheit) will reduce magnetic strength, when compared to magnetic strength at room temperature (about 72 degrees Fahrenheit).
Materials: 3 ceramic magnets, 300 BB pellets, a small glass, a cardboard box, 2 screws.
Count out 300 BB pellets and put them into the glass. First, bake a magnet in the oven (210 degrees Fahrenheit) for one hour. Then, put the second magnet in the freezer (0 degrees Fahrenheit) for one hour. Don't do anything to the third magnet so it remains at room temperature (about 72 degrees Fahrenheit). Attach the screws magnetically to the flat surface of the third magnet with the sharp part facing upwards, and dip the magnet into the glass, holding the screw by the sharp part. After five seconds, slowly pull the magnet straight up and count the BB pellets that are left in the glass. Subtract the number from 300 and record the data. When the first and second magnets are done baking and freezing, perform step three with them. After thirty minutes, repeat the experiment. Find the average of the number of BB pellets each magnet picked up.
The unchanged magnet picked up the most BB pellets, with an average of 215 of them picked up. The frozen magnet picked up the second most, with an average of 192.5 BB pellets, about 10% less than the unchanged magnet. The baked magnet picked up the least, with an average of 174 BB pellets. That is about 20% less than the unchanged magnet, a reduction almost twice as large as the frozen magnet's. The unchanged magnet was the strongest of the three magnets, leading to the conclusion that temperature does affect the strength of magnets. I found that freezing and baking the two magnets reduced their strength by about 10 to 20%. Heating the magnet decreased its strength the most, since the baked magnet's reduction was almost twice that of the frozen magnet. I found that adding the two screws to the magnets as handles seemed to increase the magnets' strength. The screws and the BB pellets the magnet picked up became magnetized, so I had to wait a little while before conducting the experiment with another magnet. When I did the experiment I found out that I had to change some things in the procedure. I removed some materials that I didn't need, and added the ones that I needed. I had to do the experiment twice to find the right information that was needed. After performing the experiment again, I found the average of BB pellets each magnet picked up.
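To make the arithmetic in the analysis explicit, here is a small sketch using the averages reported above (215, 192.5 and 174 BBs picked up); the percentages it prints match the "about 10%" and "about 20%" figures in the results.

    public class MagnetStats {
        public static void main(String[] args) {
            double room = 215.0, frozen = 192.5, baked = 174.0;  // average BBs picked up
            System.out.printf("frozen vs room: %.1f%% weaker%n", 100 * (room - frozen) / room);
            System.out.printf("baked vs room:  %.1f%% weaker%n", 100 * (room - baked) / room);
        }
    }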
I learned that it takes a lot of time and patience to conduct a successful experiment.
References:
Levaren, Maxine. 2003. Science Fair Projects for Dummies. Wiley Publishing, Inc., Indianapolis, Indiana. Pp. 212-213.
Kay, Toni, et al. 1998. Unlimited Visual Dictionary. DK Publishing, Inc., New York, New York. Pp. 316-317.
Dream Makers Software. 2004.
World Book 2005 (computer disk). World Book, Inc., Chicago, Illinois.
<urn:uuid:e4cf6415-20bb-420f-9470-cc2c0b5e7e70>
3.578125
1,058
Tutorial
Science & Tech.
63.16344
java.lang.Object
  java.nio.charset.CoderResult

A description of the result state of a coder. A charset coder, that is, either a decoder or an encoder, consumes bytes (or characters) from an input buffer, translates them, and writes the resulting characters (or bytes) to an output buffer. A coding process terminates for one of four categories of reasons, which are described by instances of this class:

Underflow is reported when there is no more input to be processed, or there is insufficient input and additional input is required. This condition is represented by the unique result object UNDERFLOW, whose isUnderflow method returns true.

Overflow is reported when there is insufficient room in the output buffer. This condition is represented by the unique result object OVERFLOW, whose isOverflow method returns true.

A malformed-input error is reported when a sequence of input units is not well-formed. Such errors are described by instances of this class whose isMalformed method returns true and whose length method returns the length of the malformed sequence. There is one unique instance of this class for all malformed-input errors of a given length.

An unmappable-character error is reported when a sequence of input units denotes a character that cannot be represented in the output charset. Such errors are described by instances of this class whose isUnmappable method returns true and whose length method returns the length of the input sequence denoting the unmappable character. There is one unique instance of this class for all unmappable-character errors of a given length.

Author: JSR-51 Expert Group

Fields:
- public static final CoderResult UNDERFLOW: result object indicating underflow, meaning that either the input buffer has been completely consumed or, if the input buffer is not yet empty, that additional input is required.
- public static final CoderResult OVERFLOW: result object indicating overflow, meaning that there is insufficient room in the output buffer.

Methods from java.nio.charset.CoderResult: isError, isMalformed, isOverflow, isUnderflow, isUnmappable, length, malformedForLength, throwException, toString, unmappableForLength

Methods inherited from java.lang.Object: clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait (three overloads)

Method detail (java.nio.charset.CoderResult):
- public boolean isError()
- public boolean isMalformed()
- public boolean isOverflow()
- public boolean isUnderflow()
- public boolean isUnmappable()
- public int length()
- public static CoderResult malformedForLength(int length)
- public void throwException() throws CharacterCodingException
- public String toString()
- public static CoderResult unmappableForLength(int length)
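A short usage sketch may help; it is not part of the API listing above, and the input bytes are just one way to produce a malformed-input result (a truncated UTF-8 sequence decoded with endOfInput set to true).

    import java.nio.ByteBuffer;
    import java.nio.CharBuffer;
    import java.nio.charset.*;

    public class CoderResultDemo {
        public static void main(String[] args) {
            CharsetDecoder decoder = Charset.forName("UTF-8").newDecoder()
                    .onMalformedInput(CodingErrorAction.REPORT);
            // 0xC3 begins a two-byte UTF-8 sequence that is never completed
            ByteBuffer in = ByteBuffer.wrap(new byte[] { 'H', 'i', (byte) 0xC3 });
            CharBuffer out = CharBuffer.allocate(16);
            CoderResult result = decoder.decode(in, out, true); // true: no further input
            System.out.println("underflow: " + result.isUnderflow());
            System.out.println("malformed: " + result.isMalformed()
                    + (result.isError() ? ", length " + result.length() : ""));
            // result.throwException() would raise MalformedInputException here
        }
    }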
<urn:uuid:7f1edf41-0138-481e-9cf7-34ee685fa5a4>
3.375
596
Documentation
Software Dev.
23.321435
Oxygen is the third most abundant element in the universe. It is a non-metallic element with the symbol O, the atomic number 8, an atomic weight of 15.999, and a melting point of about -218.4°C. Oxygen gas is colorless, odorless, and tasteless. The liquid and solid forms are a pale blue color and are strongly paramagnetic.
Previous Element: Nitrogen | Next Element: Fluorine
Phase at Room Temp.: gas
Melting Point (K): 54.8
Boiling Point (K): 90.2
Heat of Fusion (kJ/mol): 0.4
Heat of Vaporization (kJ/mol): ---
Heat of Atomization (kJ/mol): 249
Thermal Conductivity (J/m sec K): 0.03
Electrical Conductivity (1/mohm cm): 0
Number of Isotopes: 3
Electron Affinity (kJ/mol): 140.9788
First Ionization Energy (kJ/mol): 1313.9
Second Ionization Energy (kJ/mol): 3388.2
Third Ionization Energy (kJ/mol): 5300.3
Atomic Volume (cm³/mol): 13.9
Ionic Radius 2− (pm): 126
Ionic Radius 1− (pm): ---
Atomic Radius (pm): 73
Ionic Radius 1+ (pm): ---
Ionic Radius 2+ (pm): ---
Ionic Radius 3+ (pm): ---
Common Oxidation Numbers: -2
Other Oxidation Numbers: -1, +1, +2
In Earth's Crust (mg/kg): 4.61×10⁵
In Earth's Ocean (mg/L): 8.57×10⁵
In Human Body (%): 6.3%
Regulatory / Health:
OSHA Permissible Exposure Limit: No limits
OSHA PEL Vacated 1989: No limits
NIOSH Recommended Exposure Limit: No limits
Sources: Mineral Information Institute; Jefferson Accelerator Laboratory
The name derives from the Greek oxys for "acid" and genes for "forming", since the French chemist Antoine-Laurent Lavoisier originally thought that oxygen was an acid-producer because by burning phosphorus and sulfur and dissolving them in water, he was able to produce acids. For many centuries, workers occasionally realized that air was composed of more than one component. The behavior of oxygen and nitrogen as components of air led to the advancement of the phlogiston theory of combustion, which captured the minds of chemists for a century. Oxygen was prepared by several workers, including Bayen and Borch, but they did not know how to collect it, did not study its properties, and did not recognize it as an elementary substance. Oxygen was discovered independently by the Swedish pharmacist and chemist Carl Wilhelm Scheele in 1771 and the English clergyman and chemist Joseph Priestley in 1774. Scheele's Chemical Treatise on Air and Fire was delayed in publication until 1777, and Priestley, whose findings were published first, is credited with the discovery. Ozone (O₃), a highly active compound, is formed by the action of an electrical discharge or ultraviolet light on oxygen. Ozone's presence in the Earth's atmosphere (amounting to the equivalent of a layer 3 millimeters (mm) thick under ordinary pressures and temperatures) helps prevent harmful ultraviolet rays of the sun from reaching the Earth's surface. Pollutants in the atmosphere may have a detrimental effect on this ozone layer. Ozone is toxic and exposure should not exceed 0.2 mg/m³ (8-hour time-weighted average, 40-hour work week). Undiluted ozone has a bluish color. Liquid ozone is bluish black and solid ozone is violet-black. Natural oxygen is a mixture of three isotopes. Naturally occurring oxygen-18 (¹⁸O) is stable and available commercially, as is water (H₂O with 15% ¹⁸O). Commercial oxygen consumption in the U.S. is estimated at 20 million short tons per year and the demand is expected to increase substantially. Oxygen enrichment of steel blast furnaces accounts for the greatest use of the gas.
Large quantities are also used in making synthesis gas for ammonia and methanol, in making ethylene oxide, and for oxy-acetylene welding. Air separation plants produce about 99% of the gas, while electrolysis plants produce about 1%. Oxygen is the third most abundant element found in the sun, and it plays a part in the carbon-nitrogen cycle, the process once thought to give the sun and stars their energy. Oxygen under excited conditions is responsible for the bright red and yellow-green colors of the aurora. A gaseous element, oxygen forms 21% of the Earth's atmosphere by volume and is obtained by liquefaction and fractional distillation. The atmosphere of Mars contains about 0.15% oxygen. The element and its compounds make up 49.2%, by weight, of the Earth's crust. About two-thirds of the human body and nine-tenths of water is oxygen. Oxygen is prepared for commercial use by the liquefaction and fractional distillation of air and by the electrolysis of water, although the latter process is more expensive. In the laboratory it can be prepared by the electrolysis of water or by heating potassium chlorate with manganese dioxide as a catalyst. Oxygen is very reactive and capable of combining with most elements. It is a component of hundreds of thousands of organic compounds. It is essential for the respiration of all plants and animals and for practically all combustion reactions. Oxygen is utilized in medicine in the treatment of respiratory diseases, and is used to aid respiration in submarines and for passengers of high-flying planes and spaceships. Liquid oxygen is used as an oxidizer in the fuel systems of large rockets.
<urn:uuid:55861840-7ed4-48c5-80b7-2e5e36752cb6>
3.3125
1,412
Knowledge Article
Science & Tech.
47.434899
- Zeolite: A natural or synthetic hydrated aluminosilicate with an open three-dimensional crystal structure in which water molecules are held in cavities in the lattice. Zeolites are used to soften water.
- Zwitterion: An ion that has a positive and a negative charge on the same group of atoms. It is also called a dipolar ion.
- Zymogen: An inactive biomolecule that is a precursor to an enzyme.
Glossary created by David Shaw (Madison Area Technical College) for The Chemistry Place. Information Please® Chemistry Place, ©2005 Pearson Education, Inc. All Rights Reserved.
<urn:uuid:5aa3734f-5360-423e-bf63-a2108f00faaf>
2.78125
121
Structured Data
Science & Tech.
26.776707
The simplest tool to study three-dimensional arrangements is polarizing microscopy (PM). PM tests the orientation of the optical axes of the liquid crystal specimen; these optical axes are closely related to the molecular arrangements in the medium. Unfortunately, PM yields only two-dimensional (2D) textures in the so-called plane of observation, which is perpendicular to the optical axis of the microscope. This 2D image integrates the true 3D configuration of optical birefringence over the path of light. As a result of such an integration, the director profile along the direction of observation (the "vertical cross-section" of the specimen) is hard to decipher. Regrettably, it is precisely the director configuration in the vertical cross-section that is often the most valuable and desirable. Fluorescence confocal polarizing microscopy (FCPM) allows one to recover the missing information and to obtain a truly 3D image of the liquid crystal director, both in the plane of observation and along the direction of observation. The principle of imaging is different from that of traditional PM. FCPM maps the intensity of polarized fluorescent light emitted by the liquid crystal sample, rather than the pattern of integrated birefringence as the PM texture does. This feature allows one to avoid the ambiguity of the in-plane PM textures, which do not distinguish between two mutually perpendicular director configurations. More importantly, the confocal scheme allows one to collect the fluorescent light from a very small region of the sample and thus to optically slice the specimen by scanning the focused laser beam. The obtained map of fluorescence intensity is the 3D image of the orientation of the fluorescent probe.
<urn:uuid:989a74df-0a0f-45f2-99db-b281bdd77951>
2.734375
337
Knowledge Article
Science & Tech.
22.065492
Here we study radioactive decay. Shown (red dots) is a large number of identical atomic nuclei, each obeying the same decay law. Now select the mean life time of the nuclei with the slider, press the START button, and watch them decay away as a function of time (displayed in the upper right corner). Shown also is a histogram (in green) of the number of nuclei remaining at a given time. © W. Bauer, 1999
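The decay law behind the applet can be sketched in a few lines: each nucleus decays independently, with probability dt/tau in a small time step dt, so the surviving count tracks the exponential N(t) = N0 * exp(-t/tau). All numbers below are arbitrary choices, not values from the applet.

    import java.util.Random;

    public class DecaySim {
        public static void main(String[] args) {
            Random rng = new Random();
            int alive = 10_000;    // initial number of nuclei
            double tau = 5.0;      // mean lifetime (arbitrary units)
            double dt = 0.1, t = 0.0;
            while (alive > 0 && t < 4 * tau) {
                int decayed = 0;
                for (int i = 0; i < alive; i++)
                    if (rng.nextDouble() < dt / tau) decayed++;
                alive -= decayed;
                t += dt;
                // compare the simulated count with the exponential law
                System.out.printf("t=%5.1f  alive=%6d  expected=%8.1f%n",
                        t, alive, 10_000 * Math.exp(-t / tau));
            }
        }
    }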
<urn:uuid:6697e51d-74b8-4002-bae2-a22b166018f2>
3
98
Tutorial
Science & Tech.
64.913
Photon frequency boosting materials
Jul 12, 2012, 09:59 PM (#1)
I've been reading about fluorescence and I understand how the band gap accounts for re-emission of photons at a longer wavelength. However, can a material store a photon and then, given an external energy source, re-emit it at a shorter wavelength? I'm imagining an electrified "glass" that could absorb IR and re-emit it in the visible spectrum. Granted, it likely wouldn't be in the same direction, but is it even possible for a material to do this? I've read about two-photon absorption, which seems like a viable process, albeit at a greatly diminished intensity. What do y'all think?
Jul 13, 2012, 07:31 AM (#2)
Welcome to PF. One can certainly imagine an atom excited by multiple photons to a high energy level and then losing all that energy in one go. There are quite a lot of other things that can happen too. Why would it not just de-energize in two steps as well? But look up "anti-Stokes shift".
<urn:uuid:3192d609-dab5-4b7e-a706-7db4a0bdd787>
2.984375
332
Comment Section
Science & Tech.
50.77546
If a square has a side of length 7 feet, then it has an area of 7^2, or 49 square feet. If a square has an area of 36 square feet, then the length of its side is 6 feet. If we know the length of a side, then we square it to find the area. If we know the area, then we must undo the process of squaring to find the length of a side. Undoing the process of squaring is called taking the square root.
Because 3^2 = 9, 2^3 = 8, and (-2)^4 = 16, we say that 3 is a square root of 9, 2 is the cube root of 8, and -2 is a fourth root of 16. In general, undoing an nth power is referred to as taking an nth root. The number b is an nth root of a if b^n = a.
Both 3 and -3 are square roots of 9 because 3^2 = 9 and (-3)^2 = 9. Because 2^4 = 16 and (-2)^4 = 16, there are two real fourth roots of 16: 2 and -2. If n is a positive even integer and a is any positive real number, then there are two real nth roots of a. We call these roots even roots. The positive even root of a positive number is called the principal root. The principal square root of 9 is 3, and the principal fourth root of 16 is 2. When n is even, the exponent 1/n is used to indicate the principal nth root. The principal nth root can also be indicated by the radical symbol √.
Exponent 1/n When n Is Even: If n is a positive even integer and a is a positive real number, then a^(1/n) denotes the positive real nth root of a and is called the principal nth root of a.
An exponent of n indicates the nth power, an exponent of -n indicates the reciprocal of the nth power, and an exponent of 1/n indicates the nth root. We will see later that choosing 1/n to indicate the nth root fits in nicely with the rules of exponents that we have already studied.
Finding even roots. Evaluate each expression.
a) Because 2^2 = 4, we have 4^(1/2) = 2. Note that 4^(1/2) ≠ -2.
b) Because 2^4 = 16, we have 16^(1/4) = 2. Note that 16^(1/4) ≠ -2.
c) Following the accepted order of operations from Chapter 1, we find the root first and then take the opposite of it. Because 3^4 = 81, we have 81^(1/4) = 3 and -81^(1/4) = -3.
d) Note that 2^3 = 8 but (-2)^3 = -8. The cube root of 8 is 2, and the cube root of -8 is -2.
If n is a positive odd integer and a is any real number, then there is only one real nth root of a. We call this root an odd root.
Exponent 1/n When n Is Odd: If n is a positive odd integer and a is any real number, then a^(1/n) denotes the real nth root of a.
Finding odd roots. Evaluate each expression.
a) Because 2^3 = 8, we have 8^(1/3) = 2.
b) Because (-3)^3 = -27, we have (-27)^(1/3) = -3.
c) Because 2^5 = 32, we have -32^(1/5) = -2.
We do not allow 0 as the base when we use negative exponents because division by zero is undefined. However, positive powers of zero are defined, and so are roots of zero; for example, 0^4 = 0, and so 0^(1/4) = 0.
nth Root of Zero: If n is a positive integer, then 0^(1/n) = 0.
An expression such as (-9)^(1/2) is not included in the definition of roots because there is no real number whose square is -9. The definition of roots does not include an even root of any negative number because no even power of a real number is negative. The expression 3^(1/2) represents the unique positive real number whose square is 3. Because there is no rational number that has a square equal to 3, the number 3^(1/2) is an irrational number. If we use a calculator, we find that 3^(1/2) is approximately equal to the rational number 1.732. Because the square root of 3 is not a rational number, the simplest representation for the exact value of the square root of 3 is 3^(1/2).
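A quick numeric check of these rules, using nothing beyond the standard library. Note that Math.pow returns NaN for a negative base with a fractional exponent, so the sign of an odd root of a negative number has to be handled separately.

    public class NthRoots {
        static double nthRoot(double a, int n) {
            if (a < 0 && n % 2 == 0)
                throw new ArithmeticException("no real even root of a negative number");
            double root = Math.pow(Math.abs(a), 1.0 / n);
            return a < 0 ? -root : root;  // the odd root of a negative number is negative
        }

        public static void main(String[] args) {
            System.out.println(nthRoot(4, 2));    // 2.0, the principal square root
            System.out.println(nthRoot(16, 4));   // 2.0
            System.out.println(nthRoot(-8, 3));   // -2.0
            System.out.println(nthRoot(-32, 5));  // -2.0
            System.out.println(nthRoot(0, 4));    // 0.0
            System.out.println(nthRoot(3, 2));    // 1.7320508..., an irrational number
        }
    }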
<urn:uuid:dca2a7c4-021d-4910-8f4e-658e5503ae87>
3.53125
970
Documentation
Science & Tech.
85.419728
GRAIL's Gravity Map of the Moon - This image shows the variations in the lunar gravity field as measured by NASA's Gravity Recovery and Interior Laboratory (GRAIL) during the primary mapping mission from March to May 2012. Very precise microwave measurements between two spacecraft, named Ebb and Flow, were used to map gravity with high precision and high spatial resolution. The field shown resolves blocks on the surface of about 12 miles (20 kilometers) and measurements are three to five orders of magnitude improved over previous data. Red corresponds to mass excesses and blue corresponds to mass deficiencies. The map shows more small-scale detail on the far side of the moon compared to the nearside because the far side has many more small craters.
<urn:uuid:12bede06-3d08-4a4e-bcf8-c7cecbd44037>
3.21875
147
Truncated
Science & Tech.
34.885571
Image: Close-up of a maggot spiracle
A close-up of one spiracle (breathing apparatus) of a fly maggot. Electron micrograph.
- S. Lindsay - © Australian Museum
The rear ends of maggots (fly larvae) consist of a chamber in which their anus and posterior spiracles are located. (They also have anterior spiracles.) Spiracles are used for breathing, and the possession of spiracles in a posterior location means that maggots can breathe while feeding 24 hours a day.
<urn:uuid:ddfe6be7-5674-4bd5-8a13-a5dec8bec750>
2.9375
111
Truncated
Science & Tech.
47.981532
Determining OpenGL ES Capabilities
Both the OpenGL ES 1.1 and OpenGL ES 2.0 specifications define a minimum standard that every implementation must support. However, the OpenGL ES specification does not limit an implementation to just those capabilities. The OpenGL ES specification allows implementations to extend its capabilities in multiple ways. Another document, OpenGL ES Hardware Platform Guide for iOS, drills down into the specific capabilities of each OpenGL ES implementation provided by iOS. The precise capabilities of an implementation may vary based on the version of the specification implemented, the underlying graphics hardware, and the version of iOS running on the device. Whether you have chosen to build an OpenGL ES 1.1 or OpenGL ES 2.0 application, the first thing your application should do is determine the exact capabilities of the underlying implementation. To do this, your application sets a current context and calls one or more OpenGL ES functions to retrieve the specific capabilities of the implementation. The capabilities of the context do not change after it is created; your application can test the capabilities once and use them to tailor its drawing code to fit within those capabilities. For example, depending on the number of texture units provided by the implementation, you might perform your calculations in a single pass, perform them in multiple passes, or choose a simpler algorithm. A common pattern is to design a class for each rendering path in your application, with the classes sharing a common superclass. At runtime you instantiate the class that best matches the capabilities of the context.
Read Implementation-Dependent Values
The OpenGL ES specification defines implementation-dependent values that define limits of what an OpenGL ES implementation is capable of. For example, the maximum size of a texture and the number of texture units are both common implementation-dependent values that an application is expected to check. iOS devices that support the PowerVR MBX graphics hardware support textures up to 1024 x 1024 in size, while the PowerVR SGX supports textures up to 2048 x 2048; both sizes greatly exceed 64 x 64, which is the minimum size required by the OpenGL ES specification. If your application's needs exceed the minimum capabilities required by the OpenGL ES specification, it should query the implementation to check the actual capabilities of the hardware and fail gracefully; you may load a smaller texture or choose a different rendering strategy. Although the specification provides a comprehensive list of these limitations, a few stand out in most OpenGL applications. Table 3-1 lists values that both OpenGL ES 1.1 and OpenGL ES 2.0 applications should test (the glGetIntegerv parameter names are given in parentheses):
- Maximum size of the texture (GL_MAX_TEXTURE_SIZE)
- Number of depth buffer planes (GL_DEPTH_BITS)
- Number of stencil buffer planes (GL_STENCIL_BITS)
In an OpenGL ES 2.0 application, your application primarily needs to read the limits placed on its shaders. All graphics hardware supports a limited number of attributes that can be passed into the vertex and fragment shaders. An OpenGL ES 2.0 implementation is not required to provide a software fallback if your application exceeds the values provided by the implementation; your application must either keep its usage below the minimum values in the specification, or it must check the shader limitations documented in Table 3-2 and choose shaders whose usage fits within those limits.
- Maximum number of vertex attributes (GL_MAX_VERTEX_ATTRIBS)
- Maximum number of uniform vertex vectors (GL_MAX_VERTEX_UNIFORM_VECTORS)
- Maximum number of uniform fragment vectors (GL_MAX_FRAGMENT_UNIFORM_VECTORS)
- Maximum number of varying vectors (GL_MAX_VARYING_VECTORS)
- Maximum number of texture units usable in a vertex shader (GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS)
- Maximum number of texture units usable in a fragment shader (GL_MAX_TEXTURE_IMAGE_UNITS)
For all of the vector types, the query returns the number of 4-component floating-point vectors available.
OpenGL ES 1.1 applications should check the number of texture units and the number of available clipping planes, as shown in Table 3-3:
- Maximum number of texture units available to the fixed-function pipeline (GL_MAX_TEXTURE_UNITS)
- Maximum number of clip planes (GL_MAX_CLIP_PLANES)
Check for Extensions Before Using Them
An OpenGL ES implementation adds functionality to the OpenGL ES API by implementing OpenGL ES extensions. Your application must check for the existence of any OpenGL ES extension whose features it intends to use. The sole exception is the OES_framebuffer_object extension, which is always provided on all iOS implementations of OpenGL ES 1.1; iOS uses framebuffer objects as the only kind of framebuffer your application may draw into. Listing 3-1 provides code you can use to check for the existence of extensions.
Listing 3-1 Checking for OpenGL ES extensions.
BOOL CheckForExtension(NSString *searchName)
{
    // For performance, the array can be created once and cached.
    NSString *extensionsString = [NSString stringWithCString:(const char *)glGetString(GL_EXTENSIONS)
                                                    encoding:NSASCIIStringEncoding];
    NSArray *extensionsNames = [extensionsString componentsSeparatedByString:@" "];
    return [extensionsNames containsObject:searchName];
}
Call glGetError to Test for Errors
The debug version of your application should periodically call the glGetError function and flag any error that is returned. If an error is returned from the glGetError function, it means the application is using the OpenGL ES API incorrectly or the underlying implementation is not capable of performing the requested operation. Note that repeatedly calling the glGetError function can significantly degrade the performance of your application. Call it sparingly in the release version of your application.
© 2013 Apple Inc. All Rights Reserved. (Last updated: 2013-04-23)
<urn:uuid:2c40196a-8736-4dba-814d-dced25299aed>
2.78125
1,098
Documentation
Software Dev.
30.27466
Corrosion fatigue is fatigue in a corrosive environment. It is the mechanical degradation of a material under the joint action of corrosion and cyclic loading. Nearly all engineering structures experience some form of alternating stress, and are exposed to harmful environments during their service life. The environment plays a significant role in the fatigue of high-strength structural materials like steel, aluminum alloys and titanium alloys. Materials with high specific strength are being developed to meet the requirements of advancing technology. However, their usefulness depends largely on the extent to which they resist corrosion fatigue. The effects of corrosive environments on the fatigue behavior of metals were studied as early as 1930. The phenomenon should not be confused with stress corrosion cracking, where corrosion (such as pitting) leads to the development of brittle cracks, growth and failure. The only requirement for corrosion fatigue is that the sample be under tensile stress.
Effect of corrosion on the S-N diagram
The effect of corrosion on a smooth-specimen S-N diagram is shown schematically on the right. Curve A shows the fatigue behavior of a material tested in air. A fatigue threshold (or limit) is seen in curve A, corresponding to the horizontal part of the curve. Curves B and C represent the fatigue behavior of the same material in two corrosive environments. In curve B, fatigue failure at high stress levels is retarded, and the fatigue limit is eliminated. In curve C, the whole curve is shifted to the left; this indicates a general lowering of fatigue strength, accelerated initiation at higher stresses and elimination of the fatigue limit. To meet the needs of advancing technology, higher-strength materials are developed through heat treatment or alloying. Such high-strength materials generally exhibit higher fatigue limits, and can be used at higher service stress levels even under fatigue loading. However, the presence of a corrosive environment during fatigue loading eliminates this stress advantage, since the fatigue limit becomes almost insensitive to the strength level for a particular group of alloys. This effect is shown schematically for several steels in the diagram on the left, which illustrates the debilitating effect of a corrosive environment on the functionality of high-strength materials under fatigue. Corrosion fatigue in aqueous media is an electrochemical process. Fractures are initiated either by pitting or by persistent slip bands. Corrosion fatigue may be reduced by alloy additions, inhibition and cathodic protection, all of which reduce pitting. Since corrosion-fatigue cracks initiate at a metal's surface, surface treatments like plating, cladding, nitriding and shot peening were found to improve the materials' resistance to this phenomenon.
Crack-propagation studies in corrosion fatigue
In normal fatigue testing of smooth specimens, about 90 percent of the fatigue life is spent in crack nucleation and only the remaining 10 percent in crack propagation. However, in corrosion fatigue crack nucleation is facilitated by corrosion; typically, about 10 percent of life is sufficient for this stage. The rest (90 percent) of life is spent in crack propagation. Thus, it is more useful to evaluate crack-propagation behavior during corrosion fatigue. Fracture mechanics uses pre-cracked specimens, effectively measuring crack-propagation behavior.
For this reason, emphasis is given to crack-propagation velocity measurements (using fracture mechanics) to study corrosion fatigue. Since a fatigue crack grows in a stable fashion below the critical stress-intensity factor for fracture (the fracture toughness), the process is called sub-critical crack growth. The diagram on the right shows typical fatigue-crack-growth behavior. In this log-log plot, the crack-propagation velocity is plotted against the applied stress-intensity range. Generally there is a threshold stress-intensity range, below which the crack-propagation velocity is insignificant. Three stages may be visualized in this plot. Near the threshold, crack-propagation velocity increases with increasing stress-intensity range. In the second region, the curve is nearly linear and follows Paris' law (6); a numerical sketch of Paris-law growth follows the reference list below. In the third region, crack-propagation velocity increases rapidly, with the stress-intensity range leading to fracture at the fracture-toughness value. Crack propagation under corrosion fatigue may be classified as a) true corrosion fatigue, b) stress corrosion fatigue, or c) a combination of true corrosion fatigue and stress corrosion fatigue.
True corrosion fatigue
In true corrosion fatigue, the fatigue-crack-growth rate is enhanced by corrosion; this effect is seen in all three regions of the fatigue-crack growth-rate diagram. The diagram on the left is a schematic of crack-growth rate under true corrosion fatigue; the curve shifts to a lower stress-intensity-factor range in the corrosive environment. The threshold is lower (and the crack-growth velocities higher) at all stress-intensity factors. Specimen fracture occurs when the stress-intensity-factor range is equal to the applicable threshold-stress-intensity factor for stress-corrosion cracking. When attempting to analyze the effects of corrosion fatigue on crack growth in a particular material, both corrosion type and fatigue-load levels affect crack growth in varying degrees. Common types of corrosion include filiform, pitting, exfoliation and intergranular; each will affect crack growth in a particular material in a distinct way. For instance, pitting will often be the most damaging type of corrosion, degrading a material's performance (by increasing the crack-growth rate) more than any other kind of corrosion; even pits of the order of a material's grain size may substantially degrade a material. The degree to which corrosion affects crack-growth rates also depends on fatigue-load levels; for instance, corrosion can cause a greater increase in crack-growth rates at low loads than it does at high loads.
Stress-corrosion fatigue
In materials where the maximum applied-stress-intensity factor exceeds the stress-corrosion cracking-threshold value, stress corrosion adds to crack-growth velocity. This is shown in the schematic on the right. In a corrosive environment, the crack grows due to cyclic loading at a lower stress-intensity range; above the threshold stress intensity for stress corrosion cracking, additional crack growth (the red line) occurs due to SCC. The lower stress-intensity regions are not affected, and the threshold stress-intensity range for fatigue-crack propagation is unchanged in the corrosive environment. In the most general case, corrosion-fatigue crack growth may exhibit both of the above effects; crack-growth behavior is represented in the schematic on the left.
See also
References
- P. T. Gilbert, Metallurgical Reviews 1 (1956), 379
- H. Kitagawa in Corrosion Fatigue, Chemistry, Mechanics and Microstructure, O. Devereux et al. eds.
NACE, Houston (1972), p. 521
- C. Laird and D. J. Duquette in Corrosion Fatigue, Chemistry, Mechanics and Microstructure, p. 88
- J. Congleton and I. H. Craig in Corrosion Processes, R. N. Parkins (ed.), Applied Science Publishers, London (1982), p. 209
- H. H. Lee and H. H. Uhlig, Metall. Trans. 3 (1972), 2949
- P. C. Paris and F. Erdogan, J. Basic Engineering, ASME Trans. 85 (1963), 528
- Craig L. Brooks, Scott A. Prost-Domasky, Kyle T. Honeycutt and Thomas B. Mills, "Predictive modeling of structure service life" in ASM Handbook Volume 13A, Corrosion: Fundamentals, Testing and Protection, October 2003, 946-958.
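As a numerical illustration of the Paris-law regime discussed above, the sketch below integrates da/dN = C * (dK)^m cycle by cycle for a centre crack, approximating the stress-intensity range as dK = dS * sqrt(pi * a). Every constant here (C, m, the stress range and the crack lengths) is a made-up example value, not data from this article.

    public class ParisLaw {
        public static void main(String[] args) {
            double c = 1e-12;       // Paris coefficient, m/cycle with dK in MPa*sqrt(m) (assumed)
            double m = 3.0;         // Paris exponent (assumed)
            double dStress = 100;   // stress range, MPa (assumed)
            double a = 1e-3;        // initial half crack length, m
            double aCrit = 25e-3;   // assumed critical crack length, m
            long cycles = 0;
            while (a < aCrit) {
                double dK = dStress * Math.sqrt(Math.PI * a);  // MPa*sqrt(m)
                a += c * Math.pow(dK, m);                      // growth in this cycle
                cycles++;
            }
            System.out.println("cycles to reach critical length: " + cycles);
        }
    }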
<urn:uuid:d1beef51-a8bc-4878-8ade-ccad6a01e5e9>
3.21875
1,595
Knowledge Article
Science & Tech.
38.629535
Biology
Occurs in fast flowing rivers. Eastern, brackish populations enter the lower reach of rivers for spawning (Ref. 1441). Inhabit large lowland rivers and estuaries. Active at night. Prey on benthic invertebrates. Semi-anadromous populations forage in large brackish-water habitats in estuaries around Black Sea. Spawn in large aggregations in fast-flowing water on gravel bottom or submerged vegetation. Usually rare and threatened due to water pollution (Ref. 59043).
<urn:uuid:7ec225d5-38f9-4a92-ab7d-9006bd527fad>
2.8125
119
Knowledge Article
Science & Tech.
31.384404
Wave Generation and Properties
Materials:
- Rectangular aquarium (a 10 or 20 gallon aquarium will work, though a larger aquarium will make it easier to measure the waves)
- Food coloring and a syringe
- Rectangular board, as wide as the narrow edge of the aquarium and longer than the height of the aquarium (see diagram below).
Procedure:
a) Pour water in the aquarium up to a height (or depth) h. Use the board to make waves by gently moving it back and forth while keeping it fixed at the bottom (see diagram).
b) Try to measure the wavelength and the period T of the waves generated. Make sure you measure the wavelength and period before the waves hit the other side of the aquarium.
c) Calculate the wave celerity C from the wavelength and period that you measured.
d) Can you tell whether the waves produced are shallow (long) or deep (short)?
e) Compare the speed C to the theoretical value for shallow or deep waves (a worked sketch of these formulas follows this list).
Further experiments and questions:
- Now compare your results to those obtained at: http://www.coastal.udel.edu/faculty/rad/wavemaker.html
- Different waves. Add more water and measure the new h. Do steps a) through e) above and compare results.
- Particle trajectories. With the syringe, add a small amount of food coloring. Then look at the orbital trajectories of the food coloring and determine whether they are circular (deep wave) or elliptical (shallow wave). Experiment further at http://www.coastal.udel.edu/faculty/rad/linearplot.html with different wave configurations and look at the orbital trajectories under waves.
- Using your aquarium, how would you determine the wave height/depth ratio at which waves break? Add a sloping board at the bottom of the tank. Experiment with different h and different speeds of paddle oscillation to see at what values of the ratio H/h the waves start to break.
- Wave superposition. Go to the site http://www.coastal.udel.edu/faculty/rad/superplot.html and experiment with various waves to see the result of their superposition.
- What is the speed of a tsunami traveling in the middle of the Atlantic Ocean over a depth of 3000 m? If it is generated off the coast of the Azores Islands, 5000 kilometers away from Florida, how long will it take for the tsunami to arrive in Florida?
- How long does it take for the tide to propagate in Tampa Bay from Port Manatee to McKay Bay Entrance? These two sites are 36 km apart, and we can assume that the tidal wave travels over a mean depth of 10 m. Verify your answer with real tide data from: http://tidesandcurrents.noaa.gov/
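A worked sketch for step e) and the last two questions, using C = sqrt(g*h) for shallow (long) waves and C = g*T/(2*pi) for deep (short) waves; these are the standard linear-wave results the exercise refers to.

    public class WaveSpeed {
        static final double G = 9.81;                  // m/s^2

        static double shallowCelerity(double depth) {  // valid when the wavelength >> depth
            return Math.sqrt(G * depth);
        }

        static double deepCelerity(double period) {    // valid when the depth >> wavelength / 2
            return G * period / (2 * Math.PI);
        }

        public static void main(String[] args) {
            double c = shallowCelerity(3000);          // a tsunami is a shallow-water wave
            System.out.printf("tsunami speed: %.0f m/s (%.0f km/h)%n", c, c * 3.6);
            System.out.printf("Azores to Florida: about %.1f hours%n", 5_000_000 / c / 3600);
            // Tampa Bay tide: 36 km over a 10 m mean depth
            System.out.printf("tide travel time: about %.0f minutes%n",
                    36_000 / shallowCelerity(10) / 60);
        }
    }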
<urn:uuid:3dabf082-e45d-46fc-ab10-be564f60c793>
3.90625
577
Tutorial
Science & Tech.
59.376819
12. A pilot wishes to fly from Toronto to Montreal, a distance of 508 km on a bearing of 075°. The cruising speed of the plane is 550 km/h. An 80 km/h wind is blowing on a bearing of 125°.
a) What heading should the pilot take to reach his destination?
b) What will be the speed of the plane relative to the ground?
c) How long will the trip take?
I do not know how to get the angle the pilot should take. I don't understand how to do it. I know there is a little angle of 15°; however, I can't find the angle in the triangle to add to the little angle and then subtract from 90°. I know this is easy; however, I can't seem to find the method to find the bearing.
NOTE: All angles are from true North [up] and then rotate clockwise.
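One way to see it is the wind triangle with the law of sines. The sketch below is a worked numeric version of that method, not the only route to the answer. The sign convention assumed here works because the wind (blowing toward 125°) lies to the right of the track (075°); a fully general solver would build the ground-velocity vector directly.

    public class WindTriangle {
        public static void main(String[] args) {
            // bearings are measured clockwise from true north, so bearing b maps to
            // the unit vector (sin b, cos b) in (east, north) coordinates
            double track = Math.toRadians(75);    // desired course, Toronto to Montreal
            double windTo = Math.toRadians(125);  // the wind blows TOWARD 125 degrees
            double v = 550, w = 80, dist = 508;   // airspeed km/h, wind km/h, distance km

            double theta = windTo - track;        // angle between wind vector and track
            // law of sines gives the wind-correction angle: sin(delta) = (w / v) * sin(theta)
            double delta = Math.asin(w / v * Math.sin(theta));
            double heading = track - delta;       // steer upwind of the track
            double ground = v * Math.cos(delta) + w * Math.cos(theta);

            System.out.printf("a) heading: %05.1f degrees true%n", Math.toDegrees(heading));
            System.out.printf("b) ground speed: %.0f km/h%n", ground);
            System.out.printf("c) time: %.0f minutes%n", dist / ground * 60);
        }
    }

It prints a heading of about 068.6°, a ground speed of about 598 km/h, and a trip time of about 51 minutes.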
<urn:uuid:d1637ff0-e84f-4395-a835-49f5c2059ae1>
3.28125
191
Q&A Forum
Science & Tech.
90.466921
Multiyear La Niña events and persistent drought in the contiguous United States
Geophysical Research Letters, Volume 29, Issue 13, pages 25-1 to 25-4, July 2002. Copyright 2002 by the American Geophysical Union.
How to Cite: Multiyear La Niña events and persistent drought in the contiguous United States, Geophys. Res. Lett., 29(13), doi:10.1029/2001GL013561, 2002.
Publication history: manuscript received 2 JUN 2001; revised 23 AUG 2001; accepted 6 SEP 2001; article first published online 12 JUL 2002.
La Niña events typically bring dry conditions to the southwestern United States. Recent La Niñas rarely exceed 2 years' duration, but a new record of ENSO from a central Pacific coral reveals much longer La Niña anomalies in the 1800s. A La Niña event between 1855–63 coincides with prolonged drought across the western U.S. The spatial pattern of this drought correlates with that expected from La Niña during most of the La Niña event; land-surface feedbacks are implied by drought persistence and expansion. Earlier periods also show persistent La Niña-like drought patterns, further implicating Pacific anomalies and surface feedbacks in driving prolonged drought. An extended index of the Pacific Decadal Oscillation suggests that extratropical influences would have reinforced drought in the 1860s and 1890s but weakened it during the La Niña of the 1880s.
<urn:uuid:02dc3013-eef1-4cd3-84c8-f9a0920a4b9a>
2.828125
334
Academic Writing
Science & Tech.
51.610809
I will try to find time to read the article in Nature (published 30 March 2006).
Science Daily — The extraordinary properties of spider's thread are like a blessing for researchers working on polymers. However, the amazing twisting properties it displays are still not very well understood. How can one explain the fact that a spider suspended by a thread remains completely motionless, instead of rotating like a climber does at the end of a rope? Researchers at the Laboratoire de physique des lasers (CNRS/University of Rennes) have described the exceptional properties of this material, which still has some secrets to reveal. The results will be published in Nature on 30 March 2006.
Fasten an object to the end of a vertically suspended thread. Give it a slight twist and let go. You will observe that the object rotates for a certain length of time and with a certain amplitude, depending on the material of the thread. Now observe a spider suspended from its thread: it is stable, doesn't move, spins its thread in a perfectly straight line and always recovers its balance after environmental disturbances. By experimenting with a torsion pendulum to which they attached a mass equivalent to a spider's weight, researchers at the Laboratoire de physique des lasers (CNRS/University of Rennes) compared the dynamic reactions of different types of thread to a 90° rotation. The results are revealing: a Kevlar™ filament (which is synthetic) behaves like an elastic, with reduced oscillations. A copper thread oscillates slightly but does not return to its original shape, and becomes more fragile as a result of these oscillations. Spider's thread, on the other hand, is very efficient at absorbing oscillations, regardless of air resistance, and retains its twisting properties during the experiments. It also returns to its exact original shape. Certain alloys, such as Nitinol, possess similar properties but must be heated to about 90 °C to return to their original shape.
The amazing properties of spider's thread have been known for several years: its ductility, strength and hardness surpass those of the most complex synthetic fibers. It now also seems that through natural selection, spider's thread has evolved into a material with a "self-shape memory effect" which allows it to return to its original configuration without outside stimulus. This complex dynamic process has recently been represented as a "stacked" model which the authors use to depict the relaxation of the different proteins in spider's thread.
Note: This story has been adapted from a news release issued by Centre National De La Recherche Scientifique.
<urn:uuid:81d11b6e-d432-430c-aae8-82c4e918b4d9>
2.75
528
Personal Blog
Science & Tech.
37.847849
In this tutorial, we will discuss how to get bytes from a buffer. The ByteBuffer class is a container for handling data. The method allocate(int capacity) creates a new byte buffer; its position will be zero and its limit will be its capacity. The FileChannel class creates a channel for reading, writing, mapping and manipulating a file. A FileChannel is similar to a stream, but with a few differences: a channel can both read and write, while a stream is either read-only or write-only. The getChannel method of the FileInputStream class returns the Channel object associated with the file input stream. The read() method of the FileChannel class fills a byte buffer created with the allocate() method. The get() method of the ByteBuffer class reads the byte at the buffer's current position and increments the position.
About the ByteBuffer API: The java.nio.ByteBuffer class extends the java.nio.Buffer class. It provides the following methods:
- static ByteBuffer allocate(int capacity): creates a byte buffer of the specified capacity.
- byte get(): reads the byte at the buffer's current position and increments the position.
Example run:
C:\>java JavaByteBuffer C:\Work\Bharat\NIO\byteBuffer\test.txt
Given file name :C:\Work\Bharat\NIO\byteBuffer\test.txt
Contents of file
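The tutorial's example program is not listed above, so here is a minimal reconstruction of the idea, not necessarily the author's exact code. Note the flip() call, which the prose does not mention but which is needed to switch the buffer from being filled by the channel to being drained by get().

    import java.io.FileInputStream;
    import java.nio.ByteBuffer;
    import java.nio.channels.FileChannel;

    public class JavaByteBuffer {
        public static void main(String[] args) throws Exception {
            System.out.println("Given file name :" + args[0]);
            FileInputStream fis = new FileInputStream(args[0]);
            FileChannel channel = fis.getChannel();         // channel backing the stream
            ByteBuffer buffer = ByteBuffer.allocate(1024);  // position 0, limit = capacity
            System.out.println("Contents of file");
            while (channel.read(buffer) > 0) {              // fill the buffer from the file
                buffer.flip();                              // switch from writing to reading
                while (buffer.hasRemaining())
                    System.out.print((char) buffer.get());  // read byte at position, advance
                buffer.clear();                             // make the buffer reusable
            }
            channel.close();
            fis.close();
        }
    }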
<urn:uuid:07243bce-f59c-4763-af32-5f9f8fffaabc>
3.265625
347
Documentation
Software Dev.
47.727054
Barium sulfate is a chemical compound composed of barium and sulfate ions. It contains barium in the +2 oxidation state, as in all barium compounds. It does not dissolve in water. If it did, it would be very toxic. It is used in X-rays of the stomach and intestines to see whether there is a problem with them. Its chemical formula is BaSO4.
<urn:uuid:02faca3f-cf4f-409f-b779-8ae198237f86>
2.828125
83
Knowledge Article
Science & Tech.
65.142708
We defined the real numbers to be a complete uniform space, meaning that sequences are convergent if and only if they are Cauchy. Let's write these two out in full:
- A sequence (x_n) is convergent if there is some x so that for every ε > 0 there is an N such that n ≥ N implies |x_n - x| < ε.
- A sequence (x_n) is Cauchy if for every ε > 0 there is an N such that m ≥ N and n ≥ N implies |x_m - x_n| < ε.
See how similar the two definitions are. Convergent means that the points of the sequence are getting closer and closer to some fixed x. Cauchy means that the points of the sequence are getting closer to each other.
Now there's no reason we can't try the same thing when we're taking the limit of a function f at ∞. In fact, the definition of convergence of such a limit is already pretty close to the above definition. How can we translate the Cauchy condition? Simple. We just require that for every ε > 0 there exist some M so that for any two points x > M and y > M we have |f(x) - f(y)| < ε.
So let's consider a function f defined in the ray (a, ∞). If the limit at infinity exists, with value L, then for every ε > 0 there is an M so that x > M implies |f(x) - L| < ε/2. Then taking y > M as well, we see that |f(x) - f(y)| ≤ |f(x) - L| + |L - f(y)| < ε, and so the Cauchy condition holds.
Now let's assume that the Cauchy condition holds. Define the sequence x_n = f(n). This is now a Cauchy sequence, and so it converges to a limit L, which I assert is also the limit of f at infinity. Given an ε > 0, choose an M so that
- |x_n - L| < ε/2 for any n ≥ M
- |f(x) - f(y)| < ε/2 for any two points x and y above M
Just take an M for each condition, and go with the larger one. In fact, we may as well round up so that M = N for some natural number N. Then for any x > M we have |f(x) - L| ≤ |f(x) - f(N+1)| + |x_{N+1} - L| < ε, and so the limit at infinity exists.
In the particular case of an improper integral, we have f(x) = ∫_a^x g(t) dt. Then f(x) - f(y) = ∫_y^x g(t) dt. Our condition then reads: for every ε > 0 there is an M so that x > M and y > M implies |∫_y^x g(t) dt| < ε.
<urn:uuid:0657365b-e409-450c-9812-549c496ddc52>
3.125
393
Personal Blog
Science & Tech.
64.442268
I have altered the statement that Euclidean geometry is a subset of Riemannian geometry. The set of theorems of Riemannian geometry could be said to be a subset of the set of theorems of Euclidean geometry, if one were to construe the former to mean propositions true in all Riemannian manifolds. On the other hand, the class of spaces that satisfy the axioms of Riemannian geometry is a subclass of those that satisfy the axioms of Euclidean geometry. Not a set, but rather a proper class. Michael Hardy 19:57 Mar 12, 2003 (UTC)
This page has problems, in relation to the Riemannian manifold coverage elsewhere. The initial posting seems to have been about the Riemannian geometry of constant negative curvature. I'm not quite sure now what the thrust is. Charles Matthews 19:01 29 Jun 2003 (UTC)
- Riemannian geometry is the original name for geometry which deals with non-Euclidean spaces. Historically, it is concrete. It is important to preserve the timeline for epistemological reasons. Also to give credit where credit is due, so that the things that the inventor had to say about their invention don't go unheard. They are important, and the inventor has earned the right to be heard by inventing.
- Riemannian geometry is prior to the Riemannian manifold.
- Kevin Baas - 2003.12.07
---
The page does have problems: it is a little bit of a whole lot and nothing substantial of anything. Where there are headlines, those should be separate pages altogether. Is an orthonormal frame Riemannian geometry? No, it is a topic based on Riemannian geometry. It should be a page of its own, at most linked to. Same with the other topics. The point of this page is to give people an idea of what Riemannian geometry is, not to throw a bunch of esoteric and advanced topics at them with no explanation or introduction. - Kevin Baas - 2003.12.07
There was a question on an edit summary: isn't a line just a geodesic? A line is a geodesic if and only if it is the shortest path between two points. Here are some rough definitions:
Line - a continuous one-dimensional extension, usually residing in a space; usually thought to be of infinite length, though sometimes used as shorthand for a line segment.
Line segment - a continuous, one-dimensional extension from one point to another, of finite length.
Geodesic - the shortest path between points; see calculus of variations.
Curve - a continuous function defined on a space, often thought of as one-dimensional, but not thus restricted.
Trajectory - a continuous function defined on a space, parametrized by a variable such as "t" (for time), often thought of as one-dimensional, but not thus restricted.
A given line is not necessarily a geodesic. It is conceivable to have a geodesic plane between two lines, thus making a geodesic not necessarily a line, but I don't know if the strict definition of the term includes such a generalization. Kevin Baas | talk 20:09, 2004 Aug 3 (UTC)
<urn:uuid:e31b0a0e-0d1f-43e6-bf67-4f0dfc198fcb>
3.078125
742
Comment Section
Science & Tech.
51.415153
Also known as the Cigar Galaxy for its elongated visual appearance, M82 is a starburst galaxy with a superwind. Through ensuing supernova explosions and powerful winds from massive stars, the burst of star formation in M82 is driving a prodigious outflow of material. Evidence for the superwind from the galaxy's central regions is clear in this sharp composite image, based on data from small telescopes on planet Earth. The composite highlights emission from filaments of atomic hydrogen gas in reddish hues. The filaments extend for over 10,000 light-years. Some of the gas in the superwind, enriched in heavy elements forged in the massive stars, will eventually escape into intergalactic space. Triggered by a close encounter with nearby large galaxy M81, the furious burst of star formation in M82 should last about 100 million years or so. M82 is 12 million light-years distant, near the northern boundary of
<urn:uuid:f96a0b32-8613-4018-9497-0ecbabf14a89>
2.859375
215
Knowledge Article
Science & Tech.
44.419262
Web development is a necessary part of website creation. It is arguably the biggest one, considering that it incorporates all the other areas, like web design, content development and management, and to some extent, programming and web engineering. Web developers undergo different kinds of programming courses, and so long as they satisfy the requirements, which necessitate an understanding of programming languages, they are certified as web developers. Web development has a long history, with its start coming around the same time that the internet started growing in popularity. Back then, web development was a little more complicated, owing to the fact that the technology was only being introduced. This accounts for why the price of developing a website was ridiculously high. Setting up a website in those days would set you back tens of thousands of dollars. Not much, considering that there are some websites that cost that much to put up now. But early sites offered too few features, and lacked too much in web design, for that cost to be justified. Over the years, web development technology has come to be well understood, and with systems such as Apache, Linux, MySQL and PHP being offered for free, web development only costs a fraction of what it initially did. Webpage development has become easier, now that there is a widespread understanding of markup languages. In any case, knowing HTML is not really a prerequisite to being a web developer, since the use of WYSIWYG software is on the rise. These editors make it easier to write web pages, and this has reduced the cost of website development as well. The best developers do have HTML knowledge (HTML5 being the most recent version), as it is needed to smooth out web pages created with WYSIWYG editors, which may contain coding errors or inconsistencies. Web development is to thank for some of the progress made in the virtual world. For instance, the development of e-commerce is owed to milestones made in web development. The creation of applications such as payment gateway systems and shopping carts has enabled people to sell and buy online. Of course this has come with improved information-security technologies to ensure there is safety in carrying out virtual monetary transactions. Internet communication, which has given rise to the multi-billion-dollar industry that is social networking, is also credited to web development. There is a necessary training process for those setting out to be web developers. The training phase does not take too long, and with certification to show for a basic understanding of web development skills, one can easily join the industry. It provides a challenging work environment, and a chance to grow one's skills owing to its dynamic nature and ever-developing technologies. It is also financially rewarding, and can provide a reliable anchor for all monetary needs. The top web developers easily make $100,000 or more for every year they stay in the business. Some advance to become webmasters and chief information officers in their companies, and this means even bigger paychecks.
<urn:uuid:279ebd23-19c2-4cd4-aad2-80baf345eab2>
2.890625
597
Personal Blog
Software Dev.
35.277375
We've all seen the satellite images of Earth at night--the bright blobs and shining webs that tell the story of humanity's endless sprawl. These pictures are no longer just symbols of human impact, however, but can be used to objectively measure it, according to a study in the December 2008 issue of Geocarto International, a peer-reviewed journal on geoscience and remote sensing. Travis Longcore, a USC geographer and expert in light pollution, collaborated with an international team, led by Christoph Aubrecht of the Austrian Research Centers, to develop the index. "Coral reefs are incredibly important, but unfortunately they're also incredibly fragile," Longcore said. "Using night light proximity, we were able to identify the most threatened and most pristine spots in an objective and easily repeatable way." The researchers did this by first classifying the light into three separate sources: urban areas, gas flares and fishing boat activity. Each of these sources puts stress on reefs: urban areas cause sewage and polluted runoff, oil platforms cause leakages and spills, and commercial fishing boats deplete marine life and impair the ecological balance. The closer a reef is to one or more of these sources, the higher the index number and the greater the stress on the reefs. While previous assessments of coral reef health, like the 1998 Reefs at Risk survey, considered more variables, the LPI yields similar results, Longcore added. "As a first-pass global assessment, light pretty much correlates with human impact on the oceans," he explained. Light's direct impact on coral reefs In this way the index uses light as an indirect measure of coral reef health, which could help inform conservation policy. But the LPI is also a direct measurement of coral reef stress, since light itself also affects marine life, according to the study. "The lights themselves
Contact: Terah DeJong
University of Southern California
<urn:uuid:6dfb4841-6d7e-48c7-adf9-1dec2ce04cee>
3.96875
402
Knowledge Article
Science & Tech.
30.768077
An immense and towering thunderhead was gliding to the east, forming a stark backdrop for Crane Lake in the Crescent Lake National Wildlife Refuge in the Sand Hills of Nebraska. I walked up a hill, weaving between the yuccas, looking for the perfect vantage for a photograph of the lake, when something caught my eye. The yucca flowering stalks were up and the lower flowers had opened. Each flower was green outside and inside, with six thick, greenish white stamens tipped with yellow anthers (male) surrounding the central pistil (female). The ovary, or lower portion of the pistil, was light green, while the stigma, or upper portion of the pistil, was dark green. Virtually every open flower had small white moths in it. Each moth was predominantly white, but the legs, eyes and ends of the antennae were black. They had longer, white hairs on the head and outermost edges of the wings. Some had small black dots, usually two, on the wings. They seemed oblivious to my approach and scrutiny. I had stumbled upon a classic and celebrated case study in evolutionary biology, the obligate mutualism of yucca and yucca moths. The story is that yuccas are pollinated only by yucca moths and are thus entirely dependent on them. And because yucca moths develop inside yucca ovaries and eat only yucca seeds, they are entirely dependent on yucca. This association originated approximately 40 million years ago and it seems to be thriving today. The number of yucca species is between 30 and 45 and they are pollinated by 15 species of yucca moths in the genus Tegeticula. The yucca and yucca moth mutualism is believed to be an example of coevolution, or successive and reciprocal evolutionary steps, each taking better advantage of the behavior or resources provided by the partner. Butterflies and moths usually come to flowers for nectar. But yucca moths come to yucca to inject their eggs into the ovary, for the larvae feed solely on yucca seeds. To ensure that the larvae have seeds to eat, yucca moth females pollinate the yucca. This is not a simple task -- the moths use specialized tentacles on their mouthparts to collect the glutinous pollen and work it into a ball. No other insects have these tentacles, which develop only in female yucca moths. The females put a ball of pollen on the cup-shaped end of the stigma, and with fast and vigorous movements of the tentacles, they pound the pollen into the stigma. In the distant past, yuccas surely produced nectar to attract pollinators. Nectar production requires considerable energy and also taxes the water balance of plants living in arid environments. At that time the yucca moths would have been parasites or seed predators, but probably not pollinators. Perhaps the moths began pollinating yuccas to ensure that their offspring had seeds to eat; this would have been especially important when pollinators failed to visit in dry years. If moths were more reliable pollinators than bees and butterflies, it would have been to the yucca's advantage to produce less nectar, but more seeds. Ultimately, the yuccas stopped producing nectar, saving energy and water to be shunted to seeds. Now they produce sufficient seeds to feed the yucca moths and to assure their own reproduction. A new facet has been reported in this textbook case of mutualism. University of Colorado professor Yan Linhart and his student Rhea Dodd described numerous flies, a currently undescribed species, in our local yucca. 
At high elevation, they could not find adult yucca moths or evidence of larval consumption of seeds, but the flies are abundant, their bristles were coated with yucca pollen and the yucca were producing seeds. This system is less obligate, but more dynamic and geographically variable than we thought.
<urn:uuid:aaf25f1f-b6ea-4d5c-8aed-dff35f158cc7>
3.953125
841
Nonfiction Writing
Science & Tech.
41.691347
Hamlington B. D., R. R. Leben, O. A. Godin, et al. (August 2012): Could satellite altimetry have improved early detection and warning of the 2011 Tohoku tsunami? Geophys. Res. Lett., 39 (15), L15605. doi:10.1029/2012GL052386. Full text not available from this repository.
The 2011 Tohoku tsunami devastated Japan and affected coastal populations all around the Pacific Ocean. Accurate early warning of an impending tsunami requires the detection of the tsunami in the open ocean. While the lead-time was not sufficient for use in warning coastal populations in Japan, satellite altimetry observations of the tsunami could have been used to improve predictions and warnings for other affected areas. By comparing to both model results and historical satellite altimeter data, we use near-real-time satellite altimeter measurements to demonstrate the potential for detecting the 2011 Tohoku tsunami within a few hours of the tsunami being generated. We show how satellite altimeter data could be used to both directly detect tsunamis in the open ocean and also improve predictions made by models.
Divisions: Physical Sciences Division
<urn:uuid:f08abe07-067f-4bd7-8bb8-47cfa4475291>
3.078125
244
Academic Writing
Science & Tech.
37.846531
I have been learning about Java for the past month or so, and have been using Arnow, Dexter and Weiss's Introduction to Programming Using Java. In Chapter 6, there's this Exercise 21, which basically states: "Write a method mySubStr() that receives two Strings and returns a boolean value: true only if one of the Strings is a substring of the other. Do not use any methods of the String class other than length()." Is this even possible? I mean, if you can't even use substring or indexOf or any others, how can you compare two Strings to see if one is a substring of the other? Thanks in advance for any help, as this question has really been bothering me. I'm just not sure if I don't know how to do it or there is a typo in the book.
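As literally stated, the exercise is impossible: length() alone gives no access to the characters, so most readers take the restriction to also permit charAt(). Under that assumption (an assumption about the book's intent, not something the book states), a brute-force check looks like this:

public class SubStrCheck {
    // Returns true if one argument is a substring of the other.
    // Assumes charAt() is allowed in addition to length(); with only
    // length() there is no way to inspect the characters at all.
    static boolean mySubStr(String a, String b) {
        if (a.length() > b.length()) {      // make 'a' the shorter string
            String tmp = a;
            a = b;
            b = tmp;
        }
        // Try every starting offset of 'a' inside 'b'.
        for (int start = 0; start <= b.length() - a.length(); start++) {
            boolean match = true;
            for (int i = 0; i < a.length(); i++) {
                if (a.charAt(i) != b.charAt(start + i)) {
                    match = false;
                    break;
                }
            }
            if (match) {
                return true;                // found 'a' inside 'b' at offset 'start'
            }
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(mySubStr("ring", "substring")); // true
        System.out.println(mySubStr("xyz", "substring"));  // false
    }
}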
<urn:uuid:c72853d8-4221-429d-bb84-2deeb96a315d>
2.875
177
Q&A Forum
Software Dev.
67.37
This stalk-eyed fly (Achias rothschildi) belongs to the family Platystomatidae and is a highly localised species endemic to Papua New Guinea. The eye-stalks are mainly used for display in confrontations with other males as they try to establish territory in order to attract a mate. During conflict between males The males have long eye-stalks, which vary in size so that the distance between the eyes ranges from about 20 to 55mm. The holotype of this species has the widest head of any known fly. Achias rothschildi is one of a genus containing nearly 100 described species, most of them endemic to New Guinea. Get a further taxonomic description of this species. Achias rothschildi is endemic to Papua New Guinea. Find out about the types of habitat this species is known from. The largest males have the widest head of any known fly. Learn about the size and growth patterns of Achias rothschildi. Studies of stalk-eyed flies suggest that individuals with longer eye-stalks are more successful in conflicts between males, and therefore have an advantage when establishing territory. Discover more about the behaviour of this species. Get reference material for Achias rothschildi. A mounted specimen of the Achias rothschildi stalk-eyed fly held at the Museum. A close up of the wing of Achias rothschildi, a stalk-eyed fly from Papua New Guinea. A tropical rainforest in Papua New Guinea. Achias rothschildi with a body length of 13.5-16 mm and a wing length of 14-16.5 mm. A long eye-stalk of the male Achias rothschildi. The eye-stalks are mainly used for display in confrontations with other males as they try to establish territory in order to attract a mate. The eye stalks vary in length between individuals - those with longer stalks tend to be more dominant.
<urn:uuid:08cd365b-8262-4ed4-ab30-afcb9a887d38>
3.171875
418
Knowledge Article
Science & Tech.
56.725993
Transmutation Notebook B, Tree of Life » listen (mp3, 1:28) to David Kohn provide an introduction to Darwin's tree of life sketch and his private notebooks Image transcription → Case must be that one generation then should be as many living as now To do this & to have many species in same genus (as is). REQUIRES extinction. Thus between A. & B. immens gap of relation. C & B. the finest gradation, B & D rather greater distinction Thus genera would be formed.— bearing relation to ancient types.— with several extinct forms, for if each species an ancient (I) is capable of making, 13 recent forms.— Twelve of the contemporarys must have left no offspring at all, so as to keep number of species constant.— With respect to extinction we can easily see that variety of ostrich, Petise may not be well adapted, & thus perish out, or on other hand like Orpheus. being favourable many might be produced.— This requires that the permanent varieties produced inter confined breeding & changing circumstances are continued & produce according to the adaptation of such circumstances & therefore that death of species is a consequence (contrary to what would appear from America) of non adaptation of circumstances.—
<urn:uuid:50583b37-adf8-4695-907a-6e4a3625fd54>
3.34375
259
Truncated
Science & Tech.
48.449785
Could someone please help me with analytic geometry?
1. A satellite has an elliptic orbit with the earth at one focus. At its closest point it is 100 miles above the surface of the earth; at its farthest point, 500 miles. Find the polar equation of its path. (Take the radius of the earth to be 4000 miles.)
2. A hall that is 10 ft. wide has a ceiling that is a semi-ellipse. The ceiling is 10 ft high at the sides and 12 ft high in the center. Find its equation with the x axis horizontal and the origin at the center of the ellipse.
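For problem 1, a hedged sketch of the setup (assuming distances are measured from the Earth's center and the perigee lies along the polar axis): perigee $r_{\min} = 4000 + 100 = 4100$ miles and apogee $r_{\max} = 4000 + 500 = 4500$ miles, so

\[ a = \tfrac{1}{2}(r_{\min} + r_{\max}) = 4300, \qquad e = \frac{r_{\max} - r_{\min}}{r_{\max} + r_{\min}} = \frac{400}{8600} = \frac{2}{43}, \]

and, with the focus (Earth) at the pole,

\[ r = \frac{a(1 - e^2)}{1 + e\cos\theta} = \frac{4300\left(1 - (2/43)^2\right)}{1 + \frac{2}{43}\cos\theta}. \]

As a check, $\theta = 0$ gives $r = a(1 - e) = 4100$, the perigee distance.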
<urn:uuid:abe8f26e-9f07-4153-b72f-d01aa9ed728c>
2.96875
131
Comment Section
Science & Tech.
82.923902
In recent weeks you might have seen news like this:
Faster than light particles found, claim scientists (Guardian)
Speed-of-light results under scrutiny at Cern (BBC)
Naughty 'Faster Than Light' Neutrinos a Reality? (Discovery)
And what reportedly happened was that neutrinos (ghostly elementary particles usually produced in weak nuclear decays) sent in a beam from CERN in Switzerland to the OPERA detector in Italy 732 km away arrived 60 nanoseconds faster than light would in the same distance.
Why this is significant
If this is true, that particles can travel faster than light, then Einstein would be wrong and the foundations of physics would be overturned. This would imply, among other things, that
1. Time travel is possible (as effects can come before causes)
2. Space travel is possible (both trans- and inter-galactic space flight)
Caution is advised
Discounting the possible trivial errors (because those physicists have been getting this result for months already, and have been checking and re-checking for obvious mistakes right from the beginning, so it's unlikely that they missed any simple blunders), we still shouldn't be too quick to jump to conclusions.
For one thing, other particles, when accelerated towards the speed of light, actually increase in mass. For example, the protons accelerated at the LHC at maximum energy will be 7,000 times their rest mass. But these neutrinos, even though they are not massless (unlike light itself), did not exhibit this kind of mass increase and the subsequent radiation. For example: http://blogs.scientificamerican.com/...-out-en-route/
For another, there are many other less obvious and subtle possibilities. Maybe there's something up with GPS coordinates at this level of precision. Maybe there are little-understood effects that impede radio wave transmission between the two facilities. Maybe there's some material in the ground that can accelerate the neutrinos in unexpected ways. Of course, all of these could be wrong, but the point is, as Carl Sagan loved to say, "Extraordinary claims require extraordinary evidence".
Some comments I would add
However this turns out, we will definitely learn something new about the world around us. In a way science is exciting because anyone can make phenomenal discoveries at any time, so if everything turned out just as expected, it would be disappointing, because it would mean there is nothing new left to learn about our Universe. And the results of this experiment are definitely unexpected. For most of us in this field (including me), attention was focused exclusively on the LHC itself, but while the stuff we expected, like supersymmetric particles, failed to show up, mind-boggling results showed up at an experiment not even designed to look for faster-than-light particles. So it's kinda like everyone looks at the stage but the magician shows up and does a brilliant trick behind the audience.
Unfortunately, it will be difficult to verify the results in the near future, because there are only 2 other existing facilities in the world that can handle this - the Japanese facility was damaged by the recent tsunami disaster, while the US facility is experiencing financial hardship and shut down its main accelerator just this week.
So, we eagerly await both theoretical and experimental resolutions.
NOTE: The original pre-publication paper describing the experiment can be found here: http://arxiv.org/abs/1109.4897
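For reference, the mass increase quoted above is the Lorentz factor of special relativity; the numbers below assume the LHC design energy of 7 TeV per proton:

\[ \gamma = \frac{1}{\sqrt{1 - v^2/c^2}} = \frac{E}{E_0}, \qquad \gamma_{\text{proton}} \approx \frac{7000\ \text{GeV}}{0.938\ \text{GeV}} \approx 7500, \]

consistent with the rough factor of 7,000 quoted above.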
<urn:uuid:21cbf78c-de00-4590-91ec-0ebfa0f045ad>
3.421875
738
Comment Section
Science & Tech.
40.345707
Giant Asteroid's Troughs Suggest Stunted Planet
This full view of the giant asteroid Vesta was taken by NASA's Dawn spacecraft, as part of a rotation characterization sequence on July 24, 2011, at a distance of 3,200 miles (5,200 kilometers). Image credit: NASA/JPL-Caltech/UCLA/MPS/
An extensive system of troughs encircles Vesta's equatorial region. The biggest of those troughs, named Divalia Fossa, surpasses the size of the Grand Canyon. It spans 289 miles (465 kilometers) in length, 13.6 miles (22 kilometers) in width and 3 miles (5 kilometers) in depth. The complexity of the troughs' morphology can't be explained by small collisions. New measurements from Dawn indicate that a large collision could have created the asteroid's troughs, said Debra Buczkowski, a Dawn participating scientist based at the Johns Hopkins University Applied Physics Laboratory in Laurel, Md., who is the lead author of a new paper in Geophysical Research Letters, a journal of the American Geophysical Union. The crustal layer at the surface appeared to stretch to the breaking point and large portions of the crust dropped down along two faults on either side of the downward-moving block, leaving the giant troughs we see today. The scale of the fracturing would only have been possible if the asteroid is differentiated - meaning that it has a core, mantle and crust. "By saying it's differentiated," said Buczkowski, "we're basically saying Vesta was a little planet trying to happen."
For more information on the paper, see http://www.agu.org/n...2/2012-42.shtml .
Jia-Rui C. Cook 818-354-0850
Jet Propulsion Laboratory, Pasadena, Calif.
Sean Treacy 202-777-7516
American Geophysical Union, Washington
Edited by Waspie_Dwarf, 27 September 2012 - 05:27 PM.
<urn:uuid:892d8349-98a2-4984-9613-aa7baeddc260>
3.484375
430
Comment Section
Science & Tech.
60.88
Five Reasons You Should Pay Attention to NASA's Mission to Mars
Concentration of methane found on Mars
Several research groups over the past six years have reported finding methane in the atmosphere on Mars. On Earth, about 98 percent of the atmospheric methane comes from biological sources such as humans and cows. "To put it humorously," Veto says, "the estimate is that there are two cows on Mars, gauging by the methane production." Because methane has a short half-life (it breaks down quickly once released into the air), scientists want to find the source of the gas on the planet. Even if the methane on Mars comes from a non-biological source, the presence of the gas indicates the planet is definitely still alive -- at least in a geological sense.
2. The Gale Crater: an anomaly
Gale Crater on Mars NASA
Veto remembers during his first class at Arizona State, when scientists were still debating the ideal landing spot for the Curiosity rover. He says they probably chose the Gale Crater because of its uniqueness within the geological context of the planet. In the center of the crater sits a large mound, known as Mount Sharp. The mountain rises about three miles from the surface and is taller than one of the sides of the crater it sits in. Using data gathered from orbiting spacecraft, scientists have already determined that many different minerals make up the layers of Mount Sharp. They hope to use the types of minerals as a "history book" of the Martian climate in the past. Veto says some hypothesize that the uppermost layers could be made up of snow pack or dust. Like Earth, the planet Mars goes through ice ages, and he says our ancestors could have looked up to see a white planet, instead of the red planet we know today. But why do scientists care about ice ages on Mars? Veto says a better understanding of climate cycles on Mars, a planet untouched by humans, could help us understand and measure the effects we've had on our own planet.
<urn:uuid:46c88109-3a38-4735-b5d3-2419ca301d96>
3.6875
409
Listicle
Science & Tech.
47.136889
In the words of Carl Sagan, "The Earth is a very small stage in a vast cosmic arena." But to us, it's everything. The place where we live, love, work and play. The place where we are born and where we die. From space, Earth is big, blue and beautiful; fragile and inspiring. It's the only planet we've ever been to. In honor of Earth Day, take a moment to enjoy some spectacular images of our home, available for download, in our gallery below. And take a moment to appreciate the only home we've ever known. LATEST IMAGE: Nocturnal wonders These night-shining clouds were spotted over Billund, Denmark on July 15, 2010. These rare clouds are technically called "noctilucent" or "polar mesospheric" clouds, and form at high altitudes, 80 to 85 kilometers (50 to 53 miles) high, where the mesosphere is located. The clouds' high position in the atmosphere allows them to reflect sunlight long after the sun has dropped below the horizon. They only form when the temperature drops below –130 degrees Celsius (-200 degrees Fahrenheit), whereupon the scant amount of water high in the atmosphere freezes into ice clouds. This happens most often in countries at high northern and southern latitudes (above 50 degrees) in the summer, when the mesosphere is coldest. Studies suggest that night-shining clouds are becoming brighter and more common, which is linked to the mesosphere getting colder and more humid. These changes may be happening because of increased levels of heat-trapping greenhouse gases such as carbon dioxide and methane in the atmosphere. In the mesosphere, carbon dioxide radiates heat into space, causing cooling. More methane, on the other hand, puts more water vapor into the atmosphere, because sunlight breaks methane up into water molecules at high altitudes. Research is ongoing.
<urn:uuid:7b5d9732-3c69-4aa0-85b3-f9966e779112>
3.75
388
Knowledge Article
Science & Tech.
46.794711
In complex analysis, an elliptic function is a meromorphic function that is periodic in two directions. Just as a periodic function of a real variable is defined by its values on an interval, an elliptic function is determined by its values on a fundamental parallelogram, which then repeat in a lattice. Such a doubly periodic function cannot be holomorphic, as it would then be a bounded entire function, and by Liouville's theorem every such function must be constant. In fact, an elliptic function must have at least two poles (counting multiplicity) in a fundamental parallelogram, as it is easy to show using the periodicity that a contour integral around its boundary must vanish, implying that the residues of any simple poles must cancel.
Historically, elliptic functions were first discovered by Niels Henrik Abel as inverse functions of elliptic integrals, and their theory was improved by Carl Gustav Jacobi; these in turn were studied in connection with the problem of the arc length of an ellipse, whence the name derives. Jacobi's elliptic functions have found numerous applications in physics, and were used by Jacobi to prove some results in elementary number theory. A more complete study of elliptic functions was later undertaken by Karl Weierstrass, who found a simple elliptic function in terms of which all the others could be expressed. Besides their practical use in the evaluation of integrals and the explicit solution of certain differential equations, they have deep connections with elliptic curves and modular forms.
Formally, an elliptic function is a function $f$ meromorphic on $\mathbb{C}$ for which there exist two non-zero complex numbers $\omega_1$ and $\omega_2$ with $\omega_1/\omega_2 \notin \mathbb{R}$ (in other words, not parallel), such that $f(z + \omega_1) = f(z)$ and $f(z + \omega_2) = f(z)$ for all $z$. Denoting the "lattice of periods" by $\Lambda = \{ m\omega_1 + n\omega_2 : m, n \in \mathbb{Z} \}$, it follows that $f(z + \omega) = f(z)$ for all $\omega \in \Lambda$.
There are two families of 'canonical' elliptic functions: those of Jacobi and those of Weierstrass. Although Jacobi's elliptic functions are older and more directly relevant to applications, modern authors mostly follow Karl Weierstrass when presenting the elementary theory, because his functions are simpler, and any elliptic function can be expressed in terms of them.
Weierstrass's Elliptic Functions
With the definition of elliptic functions given above (which is due to Weierstrass) the Weierstrass elliptic function $\wp(z)$ is constructed in the most obvious way: given a lattice $\Lambda$ as above, put
\[ \wp(z) = \frac{1}{z^2} + \sum_{\omega \in \Lambda \setminus \{0\}} \left( \frac{1}{(z-\omega)^2} - \frac{1}{\omega^2} \right) \]
This function is clearly invariant with respect to the transformation $z \mapsto z + \omega$ for any $\omega \in \Lambda$, and only has poles at the lattice points. The addition of the $-1/\omega^2$ terms is necessary to make the sum converge. The technical condition to ensure that an infinite sum such as this converges to a meromorphic function is that on any compact set, after omitting the finitely many terms having poles in that set, the remaining series converges normally. On any compact disk defined by $|z| \le R$, any $\omega$ with $|\omega| \ge 2R$ satisfies
\[ \left| \frac{1}{(z-\omega)^2} - \frac{1}{\omega^2} \right| = \left| \frac{z\,(2\omega - z)}{\omega^2 (z-\omega)^2} \right| \le \frac{10R}{|\omega|^3} \]
and it can be shown that the sum converges regardless of $z$.
By writing $\wp$ as a Laurent series and explicitly comparing terms, one may verify that it satisfies the relation
\[ \wp'(z)^2 = 4\,\wp(z)^3 - g_2\,\wp(z) - g_3 \]
where $g_2 = 60 \sum_{\omega \in \Lambda \setminus \{0\}} \omega^{-4}$ and $g_3 = 140 \sum_{\omega \in \Lambda \setminus \{0\}} \omega^{-6}$. This means that the pair $(\wp(z), \wp'(z))$ parametrize an elliptic curve.
The quantities $g_2$ and $g_3$ take different forms depending on the lattice $\Lambda$, and a rich theory is developed when one allows $\Lambda$ to vary. To this effect, put $\omega_1 = 1$ and $\omega_2 = \tau$, with $\operatorname{Im}(\tau) > 0$. (After a rotation and a scaling factor, any lattice may be put in this form.)
A holomorphic function in the upper half plane which transforms as below under the linear fractional transformations with integer coefficients and determinant 1 is called a modular form. That is, a holomorphic function $h$ is a modular form of weight $k$ if
\[ h\!\left( \frac{a\tau + b}{c\tau + d} \right) = (c\tau + d)^k\, h(\tau) \]
for all matrices $\bigl(\begin{smallmatrix} a & b \\ c & d \end{smallmatrix}\bigr) \in \mathrm{SL}_2(\mathbb{Z})$.
One such function is Klein's j-invariant, defined by
\[ j(\tau) = 1728\, \frac{g_2^3}{g_2^3 - 27\, g_3^2} \]
where $g_2$ and $g_3$ are as above.
Jacobi's Elliptic Functions
There are twelve Jacobian elliptic functions. Each of the twelve corresponds to an arrow drawn from one corner of a rectangle to another. The corners of the rectangle are labeled, by convention, s, c, d and n. The rectangle is understood to be lying on the complex plane, so that s is at the origin, c is at the point K on the real axis, d is at the point K + iK' and n is at the point iK' on the imaginary axis. The numbers K and K' are called the quarter periods. The twelve Jacobian elliptic functions are then pq, where each of p and q is one of the letters s, c, d, n. The Jacobian elliptic functions are then the unique doubly periodic, meromorphic functions satisfying the following three properties:
- There is a simple zero at the corner p, and a simple pole at the corner q.
- The step from p to q is equal to half the period of the function pq u; that is, the function pq u is periodic in the direction pq, with the period being twice the distance from p to q. The function pq u is also periodic in the other two directions, with a period such that the distance from p to one of the other corners is a quarter period.
- If the function pq u is expanded in terms of u at one of the corners, the leading term in the expansion has a coefficient of 1. In other words, the leading term of the expansion of pq u at the corner p is u; the leading term of the expansion at the corner q is 1/u, and the leading term of an expansion at the other two corners is 1.
More generally, there is no need to impose a rectangle; a parallelogram will do. However, if K and iK' are kept on the real and imaginary axes, respectively, then the Jacobi elliptic functions pq u will be real functions when u is real.
- The set of all elliptic functions which share some two periods forms a field.
- The derivative of an elliptic function is again an elliptic function, with the same periods.
- The field of elliptic functions with respect to a given lattice is generated by ℘ and its derivative ℘′.
See also
- Cartan, Henri, Elementary Theory of Analytic Functions of One or Several Complex Variables, Dover Publications, 1995.
- Abramowitz, Milton; Stegun, Irene A., eds. (1965), "Chapter 16", Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, New York: Dover, p. 567, ISBN 978-0486612720, MR 0167642. See also chapter 18. (Only considers the case of real invariants.)
- Akhiezer, N. I., Elements of the Theory of Elliptic Functions (1970) Moscow; translated into English as AMS Translations of Mathematical Monographs Volume 79 (1990) AMS, Rhode Island, ISBN 0-8218-4532-2.
- Apostol, Tom M., Modular Functions and Dirichlet Series in Number Theory, Springer-Verlag, New York, 1976, ISBN 0-387-97127-0. (See Chapter 1.)
- Whittaker, E. T. and Watson, G. N., A Course of Modern Analysis, Cambridge University Press, 1952.
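To make the lattice sum defining $\wp$ concrete, here is a minimal numerical sketch in Java (chosen to match the code samples elsewhere in this collection). The square lattice $\omega_1 = 1$, $\omega_2 = i$, the truncation bound N, and the evaluation point are all illustrative assumptions; the truncated double sum converges slowly (error roughly of order 1/N), so N must be fairly large for even a few digits.

public class WeierstrassP {
    // 1/a^2 for complex a = x + i*y, returned as {re, im}.
    static double[] invSq(double x, double y) {
        double d = x * x + y * y;
        d = d * d;                          // |a|^4 = |a^2|^2
        return new double[]{(x * x - y * y) / d, -2 * x * y / d};
    }

    // Truncated Weierstrass p-function for the square lattice w = m + i*n,
    // summing over |m|, |n| <= N and excluding the origin.
    static double[] p(double zx, double zy, int N) {
        double[] s = invSq(zx, zy);         // leading 1/z^2 term
        double re = s[0], im = s[1];
        for (int m = -N; m <= N; m++) {
            for (int n = -N; n <= N; n++) {
                if (m == 0 && n == 0) continue;
                double[] a = invSq(zx - m, zy - n);   // 1/(z - w)^2
                double[] b = invSq(m, n);             // 1/w^2
                re += a[0] - b[0];
                im += a[1] - b[1];
            }
        }
        return new double[]{re, im};
    }

    public static void main(String[] args) {
        double[] v = p(0.5, 0.0, 200);      // sample point z = 1/2
        System.out.printf("p(0.5) ~ %.4f + %.4fi%n", v[0], v[1]);
    }
}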
<urn:uuid:6804cb15-db3a-47d4-a40a-159cf5777bbc>
3.390625
1,507
Knowledge Article
Science & Tech.
48.170455
This is probably an easy question but I don't get it. Please answer in an easy way :P
Imagine you want to paint circles on a linear line. The line is 100cm long. On the right side you have a circle with a diameter of 15 cm. Now I want to draw more circles until the 100cm are full. So basically this:
o o o o o o o
but with variable diameters. The right one is the biggest and the left one the smallest. There must be a space between them.
Since I don't know how I should go about this, I also can't define the variables. The right circle, the biggest one, should be variable and all the others should depend on it. Also, I'd say I define the number of circles and the space is "generated".
Anyway, I need a few circles which get smaller. What's the best way? Of course I could just make 8 circles, each 2cm bigger than the last, and try out what looks best, but I'm also doing this to learn a bit of math/geometry etc. :)
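One hedged way to set this up (assuming you want a fixed number of circles n, a common size ratio k between neighbours, and equal gaps g): the diameters are $15, 15k, 15k^2, \dots, 15k^{\,n-1}$, and the total-length condition is

\[ 15\,\frac{1 - k^{\,n}}{1 - k} + (n - 1)\,g = 100 . \]

Pick two of the three unknowns and solve for the third. For example, with $n = 6$ and $g = 5$ cm the diameters must sum to 75 cm, so $1 + k + k^2 + k^3 + k^4 + k^5 = 5$, which gives $k \approx 0.93$ numerically; the diameters are then roughly 15, 13.9, 12.9, 11.9, 11.0 and 10.2 cm.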
<urn:uuid:c4332f04-92e8-44a0-9ba2-5ba6e69c0e3a>
3.09375
224
Q&A Forum
Science & Tech.
82.254516
Three points X, Y, Z have position vectors x, y, z. Show that X, Y and Z are collinear iff x ^ y + y ^ z + z ^ x = 0. Here "^" denotes the vector cross product.
For the converse, if x ^ y + y ^ z + z ^ x = 0, then taking the dot product with x gives x · (y ^ z) = 0 (since the other two terms are 0). So y ^ z is orthogonal to x. But it is also orthogonal to y and z. If x, y and z are linearly independent then y ^ z is orthogonal to the whole space and is therefore 0. But that would mean that y and z are not linearly independent. That contradiction shows that the three vectors must be linearly dependent. So one of them, z say, is a linear combination of the others, z = ax + by. Substitute that value for z into the equation and you will find that a + b = 1, which is the condition for X, Y and Z to be collinear.
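For completeness, a worked check of the forward direction: if X, Y and Z are collinear then $\mathbf{z} = \lambda\mathbf{x} + (1-\lambda)\mathbf{y}$ for some scalar $\lambda$, and

\[ \mathbf{y} \wedge \mathbf{z} = \mathbf{y} \wedge \bigl(\lambda \mathbf{x} + (1-\lambda)\mathbf{y}\bigr) = -\lambda\,(\mathbf{x} \wedge \mathbf{y}), \qquad \mathbf{z} \wedge \mathbf{x} = -(1-\lambda)\,(\mathbf{x} \wedge \mathbf{y}), \]

so

\[ \mathbf{x} \wedge \mathbf{y} + \mathbf{y} \wedge \mathbf{z} + \mathbf{z} \wedge \mathbf{x} = \bigl(1 - \lambda - (1 - \lambda)\bigr)\,(\mathbf{x} \wedge \mathbf{y}) = \mathbf{0}. \]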
<urn:uuid:bc7999ed-3e2c-45cd-8dba-9622b7823961>
3.046875
190
Q&A Forum
Science & Tech.
79.520745
A unique study at the U of A. "Back, about 1993, we started studying methanogens as possible life forms on Mars," Kral says. Microorganisms, living in a test tube. But can they live on another planet? Professor Kral is trying to answer that question. "Well, this is our Pegasus chamber, it's a vacuum chamber and we're able to take the pressure down to what we see on the surface of Mars," Kral says, standing next to the piece of machinery. "We could also cool this down to the temperatures on Mars." Kral recently received a grant from NASA, worth nearly $400,000. "This specific grant, I've been trying to get for about 15 years," Kral says. Which he, along with his team of researchers and students, can use to satisfy their curiosity and further their findings. "The bottom line would be to find out if our methanogens, or if some mutant of our methanogens that might be resistant to radiation, could exist on Mars," Kral adds. New York native Rebecca Mickol always knew she wanted to study astrobiology, and teaming up with Professor Kral truly allows her to go above and beyond. "Whenever I go home and people ask what I'm studying, I'm like, 'Life on Mars!'" says second-year grad student Rebecca Mickol. Kral describes the challenge: "To me, looking for life out there. I can't imagine anything more exciting." Professor Kral plans to continue this type of research for the rest of his career. Research that is quite literally -- out of this world.
<urn:uuid:dd98bf51-4ecb-4584-845f-bbe9765374b8>
2.84375
349
Audio Transcript
Science & Tech.
65.379569
Exploring Hydrocarbon Depletion NEW! Members Only Forums! Access more articles, news & discussion by becoming a PeakOil.com Member. Page added on August 10, 2012 I mourn for the dodo, poor fat flightless bird, extinct since the eighteenth century. I grieve for the great auk, virtually wiped out by zealous Viking huntsmen a thousand years ago and finished off by hungry Greenlanders around 1760. I think the world would be more interesting if such extinct creatures as the moa, the giant ground sloth, the passenger pigeon, and the quagga still moved among us. It surely would be a lively place if we had a few tyrannosaurs or brontosaurs on hand. (Though not in my neighborhood, please.) And I’d find it great fun to watch one of those PBS nature documentaries showing the migratory habits of the woolly mammoth. They’re all gone, though, along with the speckled cormorant, Steller’s sea cow, the Hispaniola hutia, the aurochs, the Irish elk, and all too many other species. But now comes word that it isn’t just wildlife that can go extinct. The element gallium is in very short supply and the world may well run out of it in just a few years. Indium is threatened too, says Armin Reller, a materials chemist at Germany’s University of Augsburg. He estimates that our planet’s stock of indium will last no more than another decade. All the hafnium will be gone by 2017 also, and another twenty years will see the extinction of zinc. Even copper is an endangered item, since worldwide demand for it is likely to exceed available supplies by the end of the present century. Running out of oil, yes. We’ve all been concerned about that for many years and everyone anticipates a time when the world’s underground petroleum reserves will have been pumped dry. But oil is just an organic substance that was created by natural biological processes; we know that we have a lot of it, but we’re using it up very rapidly, no more is being created, and someday it’ll be gone. The disappearance of elements, though—that’s a different matter. I was taught long ago that the ninety-two elements found in nature are the essential building blocks of the universe. Take one away—or three, or six—and won’t the essential structure of things suffer a potent blow? Somehow I feel that there’s a powerful difference between running out of oil, or killing off all the dodos, and having elements go extinct.
<urn:uuid:3003b881-7208-4ed1-98bb-c1524f1faa64>
2.734375
561
Comment Section
Science & Tech.
56.75537
Understanding What Is Wind Energy
What is wind energy? Wind energy is the energy we harness from wind currents in the atmosphere. Wind turbines capture the kinetic energy of the wind and convert it into useful electrical and mechanical energy.
Wind exists because of differences in pressure. In the atmosphere there are always high pressure and low pressure areas. When the two come close to each other, air flows from the high pressure area to the low pressure area.
Knowledgeable sailors have used wind energy for many years. The wind pushes the boat along when the sail is raised; the boat stops moving forward when the sail is lowered. Changing the direction of the sail can be used to steer the boat.
A wind turbine is basically a generator. The machine has aerodynamic blades which are rotated by the wind incident on them. The energy is transferred through a transmission box, which causes a high speed shaft to turn a generator and produce electricity. The generator begins generating power when the proper speed of rotation is reached.
The difficulty in using wind as an energy source lies in the fact that wind is not always constant and steady. In a gentle breeze, where turbines are moving at a low speed, they will not produce enough electricity to make them valuable. Because of that, wind energy systems are being used in hybrid arrangements with other power sources rather than as a replacement.
Before you create your own wind energy system, you should check with the zoning board and with your neighborhood association. There may be rules that prohibit you from setting up your own wind turbine if you live in a tight suburban area. However, once you get approval from your neighborhood, you will be on the way to creating your own wind energy that will help to keep the environment safe and at the same time save you money.
Wind Energy for Electricity
Wind energy technology is being devoted to developing efficient ways to produce electricity. Wind farms are cropping up across the country in modest numbers, anywhere a steady breeze and an open field can be found. The goal of wind farm builders is to one day fully replace the significant amounts of oil and coal currently used to generate electricity. To date, wind farms have not proven able to completely take over the power generation industry, but they get closer to that goal.
The biggest limitation in using wind for electrical generation lies in the fact that wind is not always steady and constant. A wind farm that lies dormant produces no electricity, and in a gentle breeze, where turbines are moving at minimal speed, it does not produce enough electricity to be valuable to the grid. Because of that, wind energy systems are being used in hybrid arrangements with other sources of power as a supplement rather than a replacement.
Wind Energy for Home
Farmers have used wind energy to pump water from their wells for generations. Today, combined with solar power systems, wind energy is a viable source of electricity and hot water for the average rural American home. Modern wind energy systems are built vertically with rotating sails to maximize output regardless of which direction the wind blows.
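A worked illustration of how much power the wind carries may help; the formula is standard, but the specific numbers below are illustrative assumptions, not figures from this article. The power available to a turbine with swept area $A$ in wind of speed $v$ is

\[ P = \tfrac{1}{2}\,\rho\,A\,v^{3}\,C_p , \]

where $\rho \approx 1.2\ \text{kg/m}^3$ is the air density and $C_p$ is the fraction of the wind's kinetic energy actually captured (at most the Betz limit of $16/27 \approx 0.59$). For a rotor of radius 40 m, $v = 10$ m/s and $C_p = 0.4$:

\[ P = \tfrac{1}{2} \times 1.2 \times (\pi \times 40^2) \times 10^3 \times 0.4 \approx 1.2\ \text{MW}. \]

The $v^3$ dependence is why a site's average wind speed matters so much.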
<urn:uuid:6fceecdd-5cf5-4de5-9a5c-61f43b714efe>
3.09375
625
Knowledge Article
Science & Tech.
42.921341
Thermodynamics is a branch of physics which deals with the energy and work of a system. Thermodynamics deals only with the large scale response of a system which we can observe and measure in experiments. In aerodynamics, the thermodynamics of a gas obviously plays an important role in the analysis of propulsion systems but also in the understanding of high speed flows. The first law of thermodynamics defines the relationship between the various forms of energy present in a system (kinetic and potential), the work which the system performs and the transfer of heat. The first law states that energy is conserved in all thermodynamic processes.
We can imagine thermodynamic processes which conserve energy but which never occur in nature. For example, if we bring a hot object into contact with a cold object, we observe that the hot object cools down and the cold object heats up until an equilibrium is reached. The transfer of heat goes from the hot object to the cold object. We can imagine a system, however, in which the heat is instead transferred from the cold object to the hot object, and such a system does not violate the first law of thermodynamics. The cold object gets colder and the hot object gets hotter, but energy is conserved. Obviously we don't encounter such a system in nature, and to explain this and similar observations, thermodynamicists proposed a second law of thermodynamics. Clausius, Kelvin, and Carnot proposed various forms of the second law to describe the particular physics problem that each was studying. The description of the second law stated on this slide was taken from Halliday and Resnick's textbook, "Physics". It begins with the definition of a new state variable called entropy. Entropy has a variety of physical interpretations, including the statistical disorder of the system, but for our purposes, let us consider entropy to be just another property of the system, like pressure or temperature.
The second law states that there exists a useful state variable called entropy S. The change in entropy delta S is equal to the heat transfer delta Q divided by the temperature T.
delta S = delta Q / T
For a given physical process, the combined entropy of the system and the environment remains a constant if the process can be reversed. If we denote the initial and final states of the system by "i" and "f":
Sf = Si (reversible process)
An example of a reversible process is ideally forcing a flow through a constricted pipe. Ideal means no boundary layer losses. As the flow moves through the constriction, the pressure, temperature and velocity change, but these variables return to their original values downstream of the constriction. The state of the gas returns to its original conditions and the change of entropy of the system is zero. Engineers call such a process an isentropic process. Isentropic means constant entropy.
The second law states that if the physical process is irreversible, the combined entropy of the system and the environment must increase. The final entropy must be greater than the initial entropy for an irreversible process:
Sf > Si (irreversible process)
An example of an irreversible process is the problem discussed in the second paragraph. A hot object is put in contact with a cold object. Eventually, they both achieve the same equilibrium temperature. If we then separate the objects they remain at the equilibrium temperature and do not naturally return to their original temperatures. The process of bringing them to the same temperature is irreversible.
- Beginner's Guide Home Page
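As a worked example of the hot/cold object case discussed above (with illustrative numbers): if heat Q = 1000 J flows from a hot object at T_h = 400 K to a cold object at T_c = 300 K, the combined entropy change is

\[ \Delta S = -\frac{Q}{T_h} + \frac{Q}{T_c} = -\frac{1000}{400} + \frac{1000}{300} \approx +0.83\ \text{J/K} > 0 , \]

consistent with the second law; running the process in reverse would give $\Delta S < 0$, which is why it never happens spontaneously.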
<urn:uuid:bd94bbc1-39a5-46f4-bc3a-aa0ff9b5b8eb>
3.390625
756
Knowledge Article
Science & Tech.
36.364373
We know that when an atom absorbs a photon with exactly the right energy, an electron will jump to a higher level. A photon, a particle of light, can be thought of as an electromagnetic wave with a particular oscillation frequency. Different photons have different frequencies along the electromagnetic spectrum. A low energy photon will have a lower frequency than a high energy photon. The precise energy of that photon defines the frequency of the oscillation in an atomic clock.
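The relationship here is the Planck relation; the cesium numbers below come from the standard definition of the second, quoted for illustration:

\[ E = h\nu \quad\Longrightarrow\quad \nu = \frac{E}{h}. \]

For the cesium-133 hyperfine transition used in atomic clocks, $\nu = 9{,}192{,}631{,}770$ Hz, so the photon energy is

\[ E = h\nu \approx (6.626\times10^{-34}\ \text{J·s})(9.193\times10^{9}\ \text{s}^{-1}) \approx 6.1\times10^{-24}\ \text{J} \approx 3.8\times10^{-5}\ \text{eV}. \]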
<urn:uuid:59cb2d63-a8d3-4596-b2a7-cee7d4834f5b>
3.703125
90
Knowledge Article
Science & Tech.
35.1425
This activity from Subtangent generates an endless sequence of trigonometry problems in right angled triangles. Students can choose to focus on finding a missing side, angle or allow a random selection of questions. The answers are given and, to support learning, include any necessary working out. An on-screen scientific calculator is available. HEALTH and SAFETY Any use of a resource that includes a practical activity must include a risk assessment. Please note that collections may contain ARCHIVE resources, which were developed at a much earlier date. Since that time there have been significant changes in the rules and guidance affecting laboratory practical work. Further information is provided in our Health and Safety guidance.
<urn:uuid:0168f76c-f53e-4d41-af5d-2901eb2df3bb>
3.40625
141
Tutorial
Science & Tech.
25.397566
All Possible Intersections
Thu Aug 09 09:30:32 BST 2012 by Eric Kvaalen
Could we please have the illustration?
"Venn diagrams use overlapping circles to show all possible relationships between sets." That's not a very good description. They include all the possible intersections of either interiors or exteriors. Thus for n sets, there are 2^n regions. The only way to "show all possible relationships between sets" is to specify which of the 2^n regions contain elements of sets and which are just empty.
The authors did not simply "comb through computer simulations". They developed the idea of a certain kind of symmetry, and then wrote a program which looks for 11-set Venn diagrams with that kind of symmetry, and found many. They are the first "simple, symmetric" Venn diagrams for 11 sets, although such diagrams had already been found for the primes 2, 3, 5, and 7.
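The 2^n count is easy to see by encoding each region as an n-bit membership pattern; a minimal Java sketch of that counting argument (an illustration only, not the authors' search program):

public class VennRegions {
    // Each region of an n-set diagram is an inside/outside choice per set,
    // i.e. an n-bit mask, so there are exactly 2^n regions.
    public static void main(String[] args) {
        int n = 3;
        String[] names = {"A", "B", "C"};
        for (int mask = 0; mask < (1 << n); mask++) {
            StringBuilder region = new StringBuilder();
            for (int i = 0; i < n; i++) {
                // Upper case = inside the set, lower case = outside it.
                boolean inside = (mask & (1 << i)) != 0;
                region.append(inside ? names[i] : names[i].toLowerCase()).append(' ');
            }
            System.out.println(region.toString().trim());
        }
        // Prints 2^3 = 8 lines, one per region; "a b c" is the outer region.
    }
}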
<urn:uuid:dbdc51b4-35f0-4b0e-86df-07693566f5d0>
3.484375
203
Comment Section
Science & Tech.
63.96859
Discover the cosmos! Each day a different image or photograph of our fascinating universe is featured, along with a brief explanation written by a professional astronomer. October 16, 1996 Explanation: Research balloon flights conducted in 1912 by Austrian physicist Victor Hess revealed that the Earth was constantly bombarded by high energy radiation from space - which came to be called "Cosmic Rays". What are Cosmic Rays and where do they come from? They are now known to be mostly subatomic particles - predominantly protons and electrons - but their origin is a long standing mystery. After almost a century of study, this cosmic puzzle may have been at least partially solved by new X-ray images and spectra from the ASCA satellite observatory. Pieced together to show the region around a star observed to go supernova in 1006 AD, the overlapping X-ray snapshots above (seen in false color) reveal the bright rims of the exploded star's still expanding blast wave. These ASCA observations show for the first time that the energy spectrum of the bright regions is like that produced by extremely high energy electrons streaming through a magnetic field at nearly the speed of light. If (as expected) high energy protons are associated with these energetic electrons then supernova remnants like SN 1006 are sources of Hess' puzzling Cosmic Rays. Authors & editors: NASA Technical Rep.: Sherri Calvo. Specific rights apply. A service of: LHEA at NASA/ GSFC
<urn:uuid:01a22d90-cd4d-4596-ac41-237c54106f7e>
4
297
Knowledge Article
Science & Tech.
39.363434
Global Dimming in the Hottest Decade
Posted on 25 October 2012 by Rob Painting
- Global surface temperatures in the "noughties" decade (2000-2009) were the warmest in at least 120 years of record-keeping. However, the rate of surface warming was slower than in the two decades that preceded it.
- Even though global warming is not expected to result in incremental warming year-after-year, given that the greenhouse gas forcing increased steadily, surface temperatures through the "noughties" decade might have been expected to be warmer than they were.
- Analysis of surface and satellite-based observations shows that, on a global scale, sunlight reaching the Earth's surface decreased markedly during 2001-2006.
- This decline in surface solar radiation was caused by an increase in cloud cover, with a much smaller contribution due to increased concentrations of light-scattering fine particles in the atmosphere called aerosols.
- Based on these observations alone, a slower rate of warming of the upper ocean and global surface air temperatures between 2001-2006 should have occurred.
Figure 1 - Anomalies of monthly downward surface solar radiation (with seasonal signal removed) over the period 2001-2006. Average over land (green lines), ocean (blue) and land+ocean (black lines). Global Dimming & Brightening magnitudes are shown in boxed area for Northern Hemisphere (a) and Southern Hemisphere (b). SSR = surface solar radiation. From Hatzianastassiou (2011).
A Background Primer
Global Dimming and Global Brightening describe the variation in solar radiation reaching the Earth's surface. This has nothing to do with changes in the intensity of sunlight emitted by the sun, but rather with changes in the transparency of the Earth's atmosphere. Unsurprisingly, the largest contributors to this variation in surface solar radiation are clouds and aerosols, which absorb or scatter incoming sunlight depending on their physical characteristics, and thereby contribute toward either warming or cooling the planet.
Aerosols? These are microscopic particles suspended in the air, derived from natural sources - such as dust, sea salt, and material from volcanic eruptions - and from human activity, especially air pollution. Aerosols are a key component of the climate system because they act as "seed particles" for water to condense on, forming clouds. If there were no aerosols, there would be no clouds. This is a physically impossible scenario of course, but it does serve to illustrate how climatologically influential aerosols are.
The Hottest Decade Could Have Been Hotter
Hatzianastassiou (2011) is the second peer-reviewed paper to examine the global surface solar radiation (SSR) trend in the 21st Century, and to find a dimming trend. A previous paper, Hinkelmann (2009), found a decline in surface solar radiation for the period 1999-2004 amounting to -0.52 W/m2 (watts per square metre) per year - enough to substantially impact Earth's climate. This more recent study takes a look at the period 2000-2007, with an emphasis on 2001-2006, because of incomplete data for the start and end-point years. Utilizing satellite-based observations from a number of global datasets, and a radiative transfer model, the authors compute the surface solar radiation changes on a pixel-by-pixel basis - meaning that they divide the Earth into a series of small grids, and calculate the linear trend over the period for each grid (or pixel).
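For readers curious what "calculate the linear trend for each grid" means in practice, an ordinary least-squares slope per pixel is the usual approach; the Java sketch below uses made-up anomaly values, not the paper's data:

public class PixelTrend {
    // Ordinary least-squares slope of y against t for one grid cell,
    // e.g. surface solar radiation anomalies (W/m^2) against time (years).
    static double olsSlope(double[] t, double[] y) {
        double tMean = 0, yMean = 0;
        for (int i = 0; i < t.length; i++) {
            tMean += t[i];
            yMean += y[i];
        }
        tMean /= t.length;
        yMean /= y.length;
        double num = 0, den = 0;
        for (int i = 0; i < t.length; i++) {
            num += (t[i] - tMean) * (y[i] - yMean);
            den += (t[i] - tMean) * (t[i] - tMean);
        }
        return num / den;   // units: W/m^2 per year
    }

    public static void main(String[] args) {
        // Hypothetical annual-mean anomalies for one pixel, 2001-2006.
        double[] t = {2001, 2002, 2003, 2004, 2005, 2006};
        double[] y = {0.3, -0.1, -0.4, -0.9, -1.5, -2.1};
        System.out.printf("trend = %.2f W/m^2 per year%n", olsSlope(t, y));
        // Prints about -0.48, a dimming trend of the size discussed below.
    }
}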
The results of those calculations (with seasonal signals removed) are shown below:
Figure 2 - Anomalies of deseasonalized model-computed monthly downward surface solar radiation (in W/m2 per year). The white regions are those where fluxes are corrupted by bright surfaces - i.e. ice-covered and desert regions - and therefore have yet to be sufficiently validated. From Hatzianastassiou (2011).
To the naked eye the change over this time is remarkably patchy, with opposing tendencies even within the same continent. Analysis demonstrates that the land-dominant Northern Hemisphere underwent a slight solar brightening over the period, whereas the ocean-dominated Southern Hemisphere experienced a strong dimming trend. This can be seen in Figure 1, where the Northern Hemisphere experienced a brightening of 0.17 W/m2 (0.028 W/m2 per year), and the Southern Hemisphere a dimming of -2.88 W/m2 (-0.48 W/m2 per year). Readers will notice that, despite this decline over the period, the global surface solar radiation trend exhibits huge year-to-year variability.
Perhaps this large annual variability shouldn't come as any great surprise. Aerosols are generally short-lived in the atmosphere and their persistence is strongly controlled by the source of emission, be that natural or of human-made origin, and by local weather patterns - which may vary from year-to-year.
To determine if their observationally-derived calculations were supported by other measurements, the authors compared their results with two global surface-based datasets - the Global Energy Balance Archive (GEBA) and the Baseline Surface Radiation Network (BSRN). These datasets come from skyward-looking ground-based instruments which detect the amount of sunlight reaching the surface. Although these instruments have a great advantage over the satellites in that they directly measure sunlight hitting the detectors, they are handicapped by a poor global distribution and a very limited number of sites. This makes the ground-based datasets less robust as a means to determine trends on a planetary scale, but very useful for validating the satellite data. Discarding stations with incomplete data over the 2001-2006 period gave the authors 91 GEBA, and 14 BSRN, stations with which to make the comparison. Despite comparing one specific location (land-based) with a grid/pixel about 280x280 km in size (the satellite data), the trends in the two datasets show good agreement - as determined statistically by the authors. See Figure 3.
Figure 3 - Comparison of model-computed tendencies of surface solar radiation against GEBA (a) and BSRN (b) land-based station measurements over the 2001-2006 period. Similar tendencies are shown in blue colour and opposite in red; the larger the circle, the stronger is the statistical correlation. From Hatzianastassiou (2011).
Dimming & Brightening: A longer view
The global decline in surface solar radiation shown in Hatzianastassiou (2011), for half of the first decade of the 21st Century, is contrary to the trend of the previous decades. Since the late 1980's the Earth experienced a Global Brightening trend (Hatzianastassiou, Pinker), a trend which itself reversed a dimming trend in the decades before that - the 1950's to 1980's (Wild). Therefore, since the mid 20th Century the Earth's surface has globally undergone dimming, then brightening, and now dimming again.
Figure 4 - Schematic representation of global dimming/brightening in the latter half of the 20th Century.
GH = ground heat flux, SSR = surface solar radiation, SH = sensible heat flux, LH = latent heat flux (evaporation), LW↑ = heat emitted from the surface, and LW↓ = downwelling heat from greenhouse gases.
Global Dimming & Brightening have substantial impacts upon global temperature and the global water cycle. See Wild (2012) for further detail. I won't delve any further into the topic here; the point is simply to show that over decadal time frames, and longer, the Earth has undergone substantial variations in the amount of sunlight it receives at its surface. The magnitude and direction of these changes are such that changes in the brightness of the sun can be immediately eliminated as a potential culprit, because the variation in solar flux has been far too small, and total solar output has in fact declined over the last 4-5 decades.
Summarizing Global Dimming in the 21st Century
Analysis of satellite-based observations by Hatzianastassiou (2011) reveals that the Earth experienced a substantial decline in the amount of solar radiation received at its surface between 2001-2006. This decline was primarily the result of increased cloud cover; a much smaller contribution came from increased aerosol concentrations. The Northern Hemisphere underwent slight brightening during this time, but this was more than compensated for by a strong dimming of surface solar radiation in the Southern Hemisphere - whose surface area is largely dominated by ocean.
Based solely on these observations, and taking into account the physical understanding of the Earth system, a slower rate of warming of the surface ocean and global surface temperatures should have occurred in the noughties. This short-term global dimming should have counteracted a larger fraction of the long-term warming effect of the greenhouse gas forcing during this interval. Time will tell if the results of this paper are affirmed by further research, but there are other global observations which suggest that Global Dimming did indeed occur during the 21st Century.
Next: Part 2 - A Closer Look at 21st Century Global Dimming
<urn:uuid:7a3d0c55-6ecc-4cab-92aa-40f0b305685a>
3.765625
1,884
Academic Writing
Science & Tech.
33.941166
In Antarctica in January 2013, scientists released 20 balloons to study the giant radiation belts surrounding Earth and how they lose particles, causing electrons from the belts to stream down toward the poles.
NASA is getting ready to launch a new mission to observe a mysterious region of the solar atmosphere that may be crucial to understanding what powers space weather.
A NASA team built a first-of-its-kind testbed to simulate the distinctive signature of pulsars - which radiate in regular bursts anywhere from seconds to milliseconds.
Scientists seek to gain answers to questions about the formation of stars and galaxies with the launch of the CIBER sounding rocket on June 4 from Wallops.
Coronal mass ejections that accompanied X-class flares early last week arrived at Earth over the weekend and sparked a geomagnetic storm and aurora.
<urn:uuid:ac3f1a8c-db7b-4648-b18d-07c793185a0a>
3.375
168
Content Listing
Science & Tech.
37.415122
Studying Earth's atmosphere and building a strong foundation for the future of our planet.
Michael Mann spoke about climate change from two perspectives -- 'reluctant and accidental public figure' and Distinguished Professor.
At NASA's Langley hangar, Bruce Anderson, project scientist for the ACCESS (Alternative Fuel Effects on Contrails and Cruise Emissions) experiment, stood in between NASA's HU-25C airplane and a group of media visitors armed with cameras, notepads, and smartphones as he explained the recently completed series of flights.
Deriving Information on Surface conditions from Column and Vertically Resolved Observations Relevant to Air Quality studies pollution where we live and breathe.
NASA's CALIPSO satellite collects information on the distribution and movement of clouds and particles, called aerosols, in Earth's atmosphere.
Researchers at Langley have developed tools to help their peers with satellite calibration.
GeoTRACE is a proposed new NASA Earth science mission to measure air pollution for the first time in the way that weather is observed.
The little instrument just keeps going and going.
Abigail is continuing a family tradition of exploration and discovery.
S'COOL, MY NASA DATA, Surface Ozone Outreach, Contrail Education and more.
The Data Center is responsible for processing, archiving, and distributing NASA Earth science and atmospheric data.
An Applied Sciences Program that fosters Human Capital Development to extend NASA science research to local communities.
<urn:uuid:e6658f48-744e-41f0-a90a-50ff5c40a4c1>
3.0625
308
Content Listing
Science & Tech.
24.176334
I am having a bit of trouble with a question about rotational volume. So, this is the problem:
"An area R in the xy plane is given by:
0 ≤ x ≤ 1
0 ≤ y ≤ x^3
Find the volume when rotating R around the axis y = -1"
Here is how I was thinking. First, I drew this figure:
My idea was to draw the boundaries, and two radii in red, which I need to find in order to use the washer method. So, I thought the lower boundary must be r(x) = g(x) - c = x^3 - (-1) = x^3 + 1 (the second red line in the picture). But I am unsure what I should put as the upper boundary (the first radius in the picture). I hope someone can help and tell me in what way I am thinking wrong.
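A hedged sketch of the setup: slicing perpendicular to the x-axis, the washer at position x runs from the line y = 0 up to the curve y = x^3, so measured from the axis y = -1 the inner radius is the constant $r(x) = 0 - (-1) = 1$ (that is the "upper boundary" radius being asked about) and the outer radius is $R(x) = x^3 + 1$. Then

\[ V = \pi \int_0^1 \Bigl[ (x^3 + 1)^2 - 1^2 \Bigr]\, dx = \pi \int_0^1 \bigl( x^6 + 2x^3 \bigr)\, dx = \pi \left( \tfrac{1}{7} + \tfrac{1}{2} \right) = \frac{9\pi}{14}. \]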
<urn:uuid:fbdcd7a6-2b72-460a-979c-18fa151a7e9e>
3.03125
186
Q&A Forum
Science & Tech.
72.854941
An application should be able to produce and consume data in multiple formats. These often include proprietary binary formats and should also include some standard formats, such as Rich Text Format (RTF) or HTML. The following table lists some formats that can contain ink.
|Binary||Applications should use ink serialized format (ISF) to encode ink into their binary formats.|
|HTML||An HTML format is highly recommended for the representation of heterogeneous content. Applications should use fortified GIFs to encode ink into their HTML documents. For more information about fortified GIFs, see Building Blocks.|
|Image||For applications for which there is no other intersection of compatibility, an ink-enabled application should move bitmap and metafile formatted images to the Clipboard.|
|Ink Serialized Format (ISF)||ISF is the most compact persistent representation of ink. Although it often contains only ink data, ISF is extensible. Applications can set custom attributes (identified by a globally unique identifier (GUID)) on an Ink object, ink stroke, or ink point. This allows you to store any kind of data or metadata as an attribute in an ISF stream. For Clipboard interoperability, ink can be placed into a standard Clipboard slot for ISF that is defined in the software development kit (SDK) header files.|
|RTF||It is possible to generate an RTF Clipboard format and encode ink in the RTF as OLE objects. This allows the ink to be pasted into an OLE container, such as Microsoft Word or a RichEdit-based application.|
|Extensible Markup Language (XML)||Applications can use either of the ink formats that are base-64 encoded to store ink in an XML file format. An XML format is useful for entering ink content into a database, as in the case of a signature field, or even as an application's primary file format. This alleviates the need for writing a parser.|
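As a minimal sketch of the XML approach in the last row, here is how an opaque ISF byte stream could be base-64 encoded for embedding in an XML element. The byte array and the <ink> element name are placeholders of mine; the real bytes would come from whatever save/serialize call the ink API provides:

    import java.util.Base64;

    public class InkXmlExample {
        public static void main(String[] args) {
            // Placeholder for the bytes returned by the ink API's ISF save operation.
            byte[] isfBytes = {0x00, 0x1f, 0x2e, 0x3d};

            // Base-64 encode the opaque ISF stream so it can live inside XML text.
            String encoded = Base64.getEncoder().encodeToString(isfBytes);
            System.out.println("<ink format=\"ISF\">" + encoded + "</ink>");

            // Decoding reverses the process before handing bytes back to the ink API.
            byte[] roundTrip = Base64.getDecoder().decode(encoded);
            assert roundTrip.length == isfBytes.length;
        }
    }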
<urn:uuid:2bf4d48d-d59c-40d8-9086-31bb73992a36>
2.78125
412
Documentation
Software Dev.
38.235663
The brute force computational approach will not resolve the outstanding problems in disk galaxy formation. What seems to be needed are further analytical insights that will allow refinement of the simple prescriptions for star formation. One such approach has come from studying turbulence-driven viscous evolution of differentially rotating disks. Recent investigations of star formation suggest that turbulence plays an important role in accounting for the longevity of star-forming clouds and their fragmentation into stars. The Jeans mass in a typical interstellar cloud greatly exceeds the stellar mass range. It is likely that the gravitational instability of galaxy disks is a primary source of interstellar cloud turbulent motions, supplemented on small scales by supernova feedback. Of course, these drivers of turbulence are coupled together, since the rates of star formation and star deaths are controlled by global gravitational instability. In effect, differential rotation is the ultimate source of the turbulence. A promising hypothesis is that turbulent viscosity, by regulating the gas flow, controls the star formation rate, and indeed that the star formation time-scale is given by the time-scale for the viscous transfer of angular momentum [Silk and Norman 1981]. On the scale of molecular clouds, such an ansatz is reasonable, since one has to shed angular momentum in order to form stars. Magnetic fields are the common culprit in conventional star-forming clouds, but in protogalactic disks one most likely has to appeal to another source of angular momentum transfer. Turbulent viscosity is capable of fulfilling this role. Indeed, the resulting disk has been shown to generically develop an exponential density profile [Lin and Pringle 1987]. In infall models, the initial angular momentum profile determines the final disk scale length if angular momentum is conserved. However, as found in high resolution simulations, some 90 percent of the baryonic specific angular momentum is lost to the dark halo, and there is no preferred solution for disk sizes. In viscous disk models, the scale length is set by the competition between viscosity-driven star formation, which freezes the scale length once stars form, and dynamical friction on the dark matter, which competes for the same angular momentum supply. The characteristic viscous scale is determined by the cloud mean free path between collisions, itself comparable to the disk instability scale that drives the turbulence, and in combination with the residual rotation rate, provides the ultimate constraint on disk scales. Another byproduct of the viscous disk model is the gas fraction [Silk 2001]. The viscous redistribution time-scale, and hence the star formation time-scale, is set by the turbulent viscosity ν ≈ λ σ_gas, where λ is the cloud mean free path (of order the disk scale height) and σ_gas is the cloud velocity dispersion. Disk instability sets the value of σ_gas. The star formation efficiency, if determined by supernova feedback and approximate conservation of momentum, is ε ≈ σ_gas / v_SN, where v_SN is the specific supernova momentum per unit mass injected into the interstellar medium. Here v_SN ≈ E_SN / (m_SN v_c), where m_SN is the mass in stars formed per supernova and v_c is the velocity at which a remnant makes the transition to approximate momentum conservation. The characteristic star formation time may then be defined in terms of Ω(r), the disk rotation rate at radius r, the instantaneous stellar mass M_*(r), and the gas mass M_gas.
A steady state is reached that fixes both the gas fraction and the disk scale length; thus the disc scale depends both on cosmology and on local conditions. The inferred present-epoch numbers are plausible: for M_* ≈ 6 × 10^10 M_☉, one finds a star formation rate of about 3 M_☉/yr and M_gas / M_* ≈ 0.1. Also, one now has σ_gas ≈ 10 km s⁻¹, v_SN ≈ 1000 km s⁻¹, and λ ≈ 0.3 kpc. At disk formation, one expects that M_gas / M_* ≈ 1, t_sf ≈ 10^9 yr, and λ ≈ 1 kpc, appropriate to the protodisk. These values result in a stellar disk scale-length r_d ≈ 3 kpc. It is encouraging that simple analytic estimates come out with reasonable numbers for gas and stellar disk scales and gas fraction. Whether such a simple model survives incorporation into 3-dimensional simulations of disk formation in the presence of a live dark matter halo and energetic winds remains, of course, to be seen.
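Pulling the surviving relations together in one place (my notation, using only quantities quoted in the text), the supernova-regulated star formation efficiency comes out at about one percent:

    \epsilon \approx \frac{\sigma_{gas}}{v_{SN}} \approx \frac{10\ \mathrm{km\,s^{-1}}}{1000\ \mathrm{km\,s^{-1}}} = 0.01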
<urn:uuid:703f67bc-ab44-47c6-bbe1-8c7f309d2475>
2.90625
861
Academic Writing
Science & Tech.
37.082976
The very highest energy photons, gamma-rays, are too energetic to be detected by standard optical methods. In fact, they rarely make it to the surface of the Earth at all, but instead interact with molecules in the Earth's atmosphere. High-energy gamma-ray observatories such as VERITAS and HESS therefore use air Cherenkov telescopes to observe these photons. How does an air Cherenkov telescope actually work to measure the incoming gamma ray?
A very high energy gamma ray pair-produces a particle and antiparticle when it interacts in the atmosphere, the idea being that the gamma ray has enough energy that conversion into matter is feasible. The particle and antiparticle which are created are still very high energy - they have velocities near the speed of light in a vacuum. Whenever a particle flies through a substance at a velocity higher than the speed of light in that substance, it emits Cherenkov radiation. The typical analogy is with a sonic boom: in a sonic boom, distinctive waves are produced when something flies through a substance faster than the speed of sound in that substance; with Cherenkov radiation, the waves are produced by flying through at more than the speed of light. The particle and antiparticle might collide with material in the atmosphere, producing high-energy photons; these high-energy photons can pair-produce again. In this way the Cherenkov radiation amplifies: a small burst of Cherenkov light is produced whenever a gamma ray enters the atmosphere. The Cherenkov radiation produced by gamma rays has a distinctive pattern that can be detected using photomultiplier tubes. With an array of these detectors, you can observe the shower from several points. Then, you work backwards to figure out where the original gamma ray came from and what energy it had (much easier said than done!).
<urn:uuid:cd59078c-4281-456b-b7c3-64c468686ea6>
4.03125
501
Q&A Forum
Science & Tech.
37.360692
The ice jets of Enceladus send particles streaming into space hundreds of kilometers above the south pole of this spectacularly active moon.
"Deep inside Enceladus, our model indicates we've got an organic brew, a heat source and liquid water, all key ingredients for life," said Dr. Dennis Matson, Cassini project scientist at NASA's Jet Propulsion Laboratory, Pasadena, Calif. "And while no one is claiming that we have found life by any means, we probably have evidence for a place that might be hospitable to life."
Since NASA's Voyager spacecraft first returned images of the moon's snowy white surface, scientists have suspected Enceladus had to have something unusual happening within that shell. Cameras on NASA's Cassini orbiter seemed to confirm that suspicion in 2005 when they spotted geysers on Enceladus ejecting water vapor and ice crystals from its south polar region. The challenge for researchers has been to figure out how this small ice ball could produce the levels of heat needed to fuel such eruptions.
A new model suggests the rapid decay of radioactive elements within Enceladus shortly after it formed may have jump-started the long-term heating of the moon's interior that continues today. The model provides support for another recent, related finding, which indicates that Enceladus' icy plumes contain molecules that require elevated temperatures to form.
"Enceladus is a very small body, and it's made almost entirely of ice and rock. The puzzle is how the moon developed a warm core," said Dr. Julie Castillo, the lead scientist developing the new model at JPL. "The only way to achieve such high temperatures at Enceladus is through the very rapid decay of some radioactive species."
The hot start model suggests Enceladus began as a mixed-up ball of ice and rock that contained rapidly decaying radioactive isotopes of aluminum and iron. The decomposition of those isotopes - over a period of about 7 million years - would produce enormous amounts of heat. This would result in the consolidation of rocky material at the core surrounded by a shell of ice. According to the theory, the remaining, more slowly decaying radioactivity in the core could continue to warm and melt the moon's interior for billions of years, along with tidal forces from Saturn's gravitational tug.
Scientists have also found the model helpful in explaining how Enceladus might have produced the chemicals in the plume, as measured by Cassini's ion and neutral mass spectrometer. Matson is lead author of a new study of the plume's composition, which appears in the April issue of the journal Icarus. Although the plume is predominantly made up of water vapor, the spectrometer also detected within the plume minor amounts of gaseous nitrogen, methane, carbon dioxide, propane and acetylene.
The thermal decomposition of ammonia, a suspected source of the plume's nitrogen, would require temperatures as high as 577 degrees Celsius (1070 degrees Fahrenheit), depending on whether catalysts such as clay minerals are present. And while the long-term decay of radioactive species and current tidal forces alone cannot account for such high temperatures, with the help of the hot start model, they can. The scalding conditions are also favorable for the formation of simple hydrocarbon chains, basic building blocks of life, which Cassini's spectrometer detected in small amounts within Enceladus' plume.
The team concludes that so far, all the findings and the hot start model indicate that a warm, organic-rich mixture was produced below the surface of Enceladus and might still be present today, making the moon a promising kitchen for the cooking of primordial soup. To gather more information about the chemistry within Enceladus, the team plans to directly measure the gas emanating from the plume during a flyby scheduled for March 2008.
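As a rough illustration of why the short-lived isotopes matter only at the very beginning (my numbers, not the article's): aluminum-26 has a half-life of about 0.72 million years, so after the ~7-million-year window mentioned above almost none of it remains.

    public class Al26Decay {
        public static void main(String[] args) {
            double halfLifeMyr = 0.72; // 26Al half-life in millions of years (assumed value)
            double elapsedMyr = 7.0;   // heating window quoted in the article

            // Exponential decay: remaining fraction = 2^(-t / t_half).
            double fractionLeft = Math.pow(2.0, -elapsedMyr / halfLifeMyr);
            System.out.printf("Fraction of 26Al remaining after %.0f Myr: %.5f%n",
                    elapsedMyr, fractionLeft);
            // ~0.001, i.e. about 0.1% - the initial heat pulse is essentially over,
            // leaving long-lived isotopes and tidal heating to keep the core warm.
        }
    }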
<urn:uuid:53cbb357-d710-4f24-8365-b5fe24af2b12>
4.03125
806
Knowledge Article
Science & Tech.
30.765613
By Charles Q. Choi, SPACE.com
Debris spots found on stars reveal planets that went splat like bugs on a windshield. The result: metal smears on the surface of parent stars, said European Southern Observatory astronomer Luca Pasquini, who offered another analogy: "It is a little bit like a tiramisu or a cappuccino. There is cocoa powder only on the top."
The finding could help unravel mysteries of planet formation. As scientists began discovering exoplanets, or worlds orbiting distant stars, in the past decade, they found these planets most often around stars rich in iron. Stars that host planets are on average nearly twice as rich in metals as counterparts without worlds.
But are these stars rich in metals because planetary debris polluted them? Or do metal-loaded stars naturally spawn worlds? It's a classic chicken-or-egg problem. If these metals were planetary debris, they would only be found in the outer layers of stars. On the other hand, if these metals were inherently part of the stars, they would be found all the way to their cores. Unfortunately, the only light that astronomers can see from stars comes from their outermost layers, which means there is no direct way to peer into their hearts.
Instead, scientists looked at stars whose innards churn far more than our sun's do. The ingredients of the interiors of these stars roil to their surfaces for astronomers to analyze. Specifically, researchers focused on red giants, stars that - as will the sun in several billion years - have puffed up and become much larger and cooler after they have exhausted the hydrogen in their cores. Compared with sun-like stars, these giants have much larger convective zones, or regions where all the gas is completely mixed. The sun's convective zone comprises only 2% of the star's mass, but in red giants the convective zone is 35 times more massive.
After inspecting 14 planet-hosting red giants, Pasquini and his colleagues found these were not rich in metals, as is typically the case for planet-hosting sun-like stars. The simplest explanation is that the metals seen in planet-hosting stars are pollution from planetary debris, findings that will be detailed in the journal Astronomy & Astrophysics. The debris might come from "planets themselves or small planetoids," researcher Artie Hatzes, director of the Thuringia State Observatory in Tautenburg, Germany, told SPACE.com.
Pasquini said their results might favor the controversial and relatively new "disk instability" theory. This concept states that large planets emerge from clumps of dust and gas whose hearts coalesce into cores that grow relatively quickly. MIT planetary scientist Sara Seager, who did not participate in this study, noted "the findings are an intriguing piece of the puzzle in trying to understand planet formation." Resolving the mysteries concerning how planets form will require a much larger study of metal-rich and metal-poor planet-hosting stars, she added.
<urn:uuid:059aaff8-ebc2-4212-9e68-cf910b47bf77>
3.28125
684
Truncated
Science & Tech.
45.3529
Wind and Wave
Scientists, engineers and geographers at ODU will help decide whether coastal Virginia's abundant supply of wind and wave energy should be tapped to generate electricity.
Wind turbines are producing electricity throughout the world today. So-called "wind farms", clusters of modern wind turbines, are sometimes developed in near-shore or offshore marine sites, which present special challenges. Virginians will want assurances that a coastal wind farm will be reliable and will not have adverse environmental impacts. State officials must look to experts such as ODU faculty members for answers. Questions about bird migration routes, weather patterns, seabed geology and ways to anchor turbine towers are among the many that researchers must address. ODU, which has program strengths in fluid dynamics, electrical and computer engineering, and modeling and analysis, can also contribute expertise in turbine blade design and in modeling to predict wind power generation and strategies for integrating wind power into conventional electrical power grids.
Generating electricity from wave energy is not yet a mature technology. Researchers at ODU with expertise in fields such as oceanography, computer modeling and hydrodynamics can help to formulate wave-harnessing strategies for the Mid-Atlantic coast. Several ODU engineers propose to model design concepts that use energy extracted from waves to supplement wind power. For more information, visit http://www.ccpo.odu.edu/~jlblanco/windenergy/wind_first.1.htm.
<urn:uuid:9c905814-ed0f-423f-b57f-88f08fb47ad5>
3.328125
298
Knowledge Article
Science & Tech.
25.251332
The Rhynie Chert Flora
The early land plants found as fossils in the Rhynie chert are locally preserved in such exquisite detail that individual cells can be examined. This has allowed detailed anatomical studies to be performed on the Rhynie plants. The plants are relatively simple in their level of organisation and include seven identified 'higher land plants', two enigmatic nematophytes and a number of other plants including various types of fungi, algae and the earliest fossil lichen. We can demonstrate that at least seven of the plants are true subaerial plants because most or all of a set of diagnostic features are preserved in each.
The taxonomy of the Rhynie plants poses difficulties for subdivision into currently accepted taxonomic groups. For the purposes of this resource, we have made a simple subdivision into the 'higher land plants' - those with the features listed above - and 'other non-vascular plants' from the chert. The seven higher land plants of the Rhynie chert 'macroflora' that have been described to date are detailed below. Various life stages have also been described for a number of the plants, with both the sporophyte and gametophyte stages having been identified (e.g. Remy & Hass 1986, 1991a,b,c,d; Remy & Remy 1980a,b; Remy et al. 1993 and Kerp et al. in press). A number of these plants exhibit other delicate features such as mycorrhizae, bacterial infections and various forms of pathological damage. Five of the plants are true vascular plants or tracheophytes, showing tracheids in the water-conducting cells. Two plants, Aglaophyton and Nothia, do not show tracheids and therefore cannot be considered tracheophytes.
Apart from the fossil plants in their own right, fossilised spores are also found, not only in the chert but also in the associated sediments, particularly the shales and mudstones. Many species have been identified and described and have been useful for biostratigraphic purposes in dating the sediments (see section on Age of the Rhynie Chert). There remains, however, a degree of uncertainty as to which of the vascular plants each belongs, and there may well be spores present from other plants that have not yet been found preserved in the cherts.
The described flora of the Rhynie chert also includes non-vascular plants such as nematophytes, algae, fungi and a lichen.
<urn:uuid:0fd9f3ab-0e56-489a-943b-51bf02b48603>
3.515625
622
Knowledge Article
Science & Tech.
37.19902
Most of our energy comes from the sun, but there is also an enormous reservoir of energy in the form of heat beneath the Earth's surface. Sometimes this heat rises to the surface as hot water or steam. In a few places, powerful jets of steam shoot up from the ground as geysers. Geothermal steam and hot springs have been used for more than a hundred years to generate electricity.
Geothermal power plant in New Zealand
How It Works
Either steam is used directly to drive a turbine, or the hot water is used to evaporate another liquid with a lower boiling point, and the vapour of that second liquid drives the turbine.
Hot Dry Rocks
Everywhere in the world, rocks deep below the surface are very hot. In some places, these hot rocks are closer to the surface than elsewhere. One such place is central Australia. It is possible to make electricity using the heat in these rocks by pumping water down to them and capturing the steam that is produced when the water is heated by the rocks. The hot rock formations in central Australia contain sufficient potential energy to supply all of Australia's power needs for hundreds of years. Since no fuel is needed to generate geothermal power, it has practically no environmental impact.
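As a rough order-of-magnitude check on the "hundreds of years" claim, here is a sketch using illustrative numbers of my own; the rock volume, temperature drop, conversion efficiency and annual demand are all assumptions, not figures from the text above:

    public class HotRockEnergy {
        public static void main(String[] args) {
            // Illustrative assumptions (not from the article):
            double volumeKm3 = 5000.0;    // volume of hot granite tapped, km^3
            double densityKgM3 = 2700.0;  // typical granite density, kg/m^3
            double specificHeat = 790.0;  // J/(kg K), typical for granite
            double deltaT = 150.0;        // usable temperature drop, K
            double conversionEff = 0.10;  // heat-to-electricity efficiency

            double volumeM3 = volumeKm3 * 1e9;
            double heatJ = volumeM3 * densityKgM3 * specificHeat * deltaT;
            double electricityJ = heatJ * conversionEff;

            // Annual electricity use of order 200 TWh (~7.2e17 J), a round number
            // for Australia's consumption.
            double annualUseJ = 7.2e17;
            System.out.printf("Stored heat: %.2e J, as electricity: %.2e J (~%.0f years)%n",
                    heatJ, electricityJ, electricityJ / annualUseJ);
            // Prints roughly 200 years, consistent with "hundreds of years" in order
            // of magnitude, given generous assumptions.
        }
    }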
<urn:uuid:32544ba5-fa80-40c5-9a72-52c642ff085b>
3.984375
254
Knowledge Article
Science & Tech.
49.103985
Typically recorded as "Eucalyptus," several genera have this general morphology. Myrtaceae pollen is typically oblate and triangular in polar view. The apertures are short furrows in a thickened portion of the wall. The distinctive pattern typically seen in polar view is formed by thinning of the exine, resembling a syncolpium. This thin region often has a triangular island of thick tectum at the pole of the grain. Sculpturing varies from psilate to reticulate. Pollen of the Myrtaceae is an important historic indicator in arid regions of the northern hemisphere. Eucalyptus is the most commonly planted genus, but many other genera are used horticulturally. A morphology similar to that described above also occurs in Acmena, Angophora (reticulate), Backhousia (no tectum island), Baeckea, Callistemon (rough wall, distinct island), Calytrix (broad apertures), Choricarpia, Leptospermum (faint tri-radiate mark), Lophostemon, Melaleuca, Rhodamnia, Rhodomyrtus, Syzygium, Tristaniopsis, Ugni.
About 3000 species of woody trees or shrubs with thick leaves containing oil glands. Flowers with 5 sepals and petals and many stamens. Fruit generally a woody capsule. Gondwanan. Tropical and subtropical, mostly in Australasia but with 45 genera in central and south America, 2 in Africa, and about 20 genera native to India and Asia. Cultivated throughout the world for shade and as ornamental plants. Cloves (Syzygium aromaticum) and allspice (Pimenta dioica) are economically important species of the Myrtaceae. The pollen is harvested by honey bees.
Pollen light micrograph: In maceration mounts, the small (20 - 30 µm) grains are almost always oriented so the polar area is visible. Below are three categories present in samples from a single bee hive from California. Note the range of shapes and sculpturing.
Pollen scanning electron micrograph (SEM): The remarkable contrast of thick tectum adjacent to the thin tri-radiate mark is striking in the SEM above. Note the intine or cell membrane protruding through the apertures.
Production and Dispersal: High pollen production but moderate to poor dispersal; preservation is moderate to poor due to breakage. Based on its Gondwanan distribution, the family has existed since the Cretaceous; the introduction of Eucalyptus into the Mediterranean region and southern California began in the nineteenth century. In Spain and California, it is a useful marker for the historic period.
<urn:uuid:d9390685-3966-4d3f-ab6d-b5ac4334daa7>
3.671875
616
Knowledge Article
Science & Tech.
27.014
Discussion about math, puzzles, games and fun.
Topic review (newest first)
Glad about that. Well, I was a bit busy, so I couldn't post...
@bob bundy: Well, how can x = 0 make the expression go to +/- infinity? Dividing anything by 0 is undefined, and x is the denominator.
Thanks, but I want to try this and the other problems by myself now; if I get stuck anywhere, I will post it here.
x + 2 changes sign at x = -2, so that will divide the cases we need to consider. There's also something to consider at x = 0, as the expression goes to infinity there. It looks as though this will go to - infinity on one side of zero and + infinity on the other. To simplify the expression you will have to multiply by x, and that change of sign will influence the inequality sign. So you need to consider 4 cases:
(i) x < -2
(ii) x = -2
(iii) -2 < x < 0
(iv) 0 < x
Do you want to try this and post what you get, or would you like more help with this?
Yes, I understand this rule.
Well, now it's a bit more clear: I have to consider the ranges (-3,0) and (0,+3). But I don't understand the logic of choosing them just because +/-3 makes the denominator 0; that happens in many inequations, yet we don't always have to choose those cases. My problem was that I hadn't looked at different cases, but the logic behind choosing the cases is still not clear to me. When I reduced the equation to ... from here, why can't I solve the inequation? Maybe I need more time.
bob bundy, what you did I understood totally, and I appreciate it. Well, thanks for helping me out.
If I call ... statement one and ... statement two, then we can work the logic like this. I got statement one by algebra assuming statement two. So statement one is only true to the extent that it obeys statement two. As statement two is a subset of statement one, that means the inequality is satisfied just for the subset, i.e. ... So that provides part of the missing answer. Testing x = 0 by substitution shows it may be added to the set, giving ... You can finish by considering (0,+3) and showing it is legitimate to add this to the range.
Sorry, I still can't understand... How can x be +/-4? Let's put it in the equation and see. Taking x as -4: ... So I think x can't be +/-4.
Yes, I can understand that, but how did you come up with "-3 < x < 0"? We can't just guess any range; I want to know the steps. In your post #2, case 1 is OK - it gives ... - but case 2 doesn't give any result except that x is 0, and case 3, which you actually explained in post #5, gives ... but as already given ... But that's not the answer, and still, where is ...?
Thanks for the reply.
Now in that range for x, (-x-3) is negative, e.g. (-(-2.9) - 3) = -0.1. So when you multiply by that denominator you must reverse the inequality: ... This final statement is true in that range, so the range is part of the solution set. I'll look at the other questions once you are happy with this one.
Here are the other questions. Question no. 4 I think I may be able to solve... but the rest I have tried with no result so far. And thanks, bob bundy, for the reply.
Well, I am still having problems understanding it... How can you take the case (-∞,-3), as x can't be -4 or 4? So we can write ... This far I did it... and I can also see that x can't be -3 or 3, as that will make the denominator 0. But I can't find proper steps to come to a proper solution set... I am still trying...
(ii) x = 0 (iii) x < 0
LATER EDIT. Then I made the graph and realised where the missing answers had gone.
As the function tends to +/- infinity at x = +/-3, we should look at 5 cases: (-∞,-3), (-3,0), x = 0, (0,3), (3,∞). That will produce the answer you want. Hope that helps.
Well, I have four inequations that I can't understand properly. What steps would you take to solve them? ... I did it and my solution was ... but the correct solution is ... I can understand that my solution is wrong, but I don't know what steps are used to get the correct solution set. Can anyone show me the proper steps?
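The inequalities in this thread were posted as images that have not survived, so as a stand-in, here is the case-splitting method bob describes applied to a made-up example of mine:

    Solve (x + 2)/x > 0.

    Case x > 0: multiplying both sides by x (positive) preserves the inequality sign,
    giving x + 2 > 0, i.e. x > -2; combined with x > 0, this contributes (0, ∞).
    Case x < 0: multiplying by x (negative) reverses the inequality sign,
    giving x + 2 < 0, i.e. x < -2; combined with x < 0, this contributes (-∞, -2).
    Case x = 0: excluded, since the expression is undefined there.

    Solution set: (-∞, -2) ∪ (0, ∞).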
<urn:uuid:db61552f-02cd-4dc2-8273-980a7a14a6bf>
3.5
1,075
Comment Section
Science & Tech.
84.212343
Adult male Northern Saw-whet Owls weigh only about as much as an American Robin. (c) Scott Comings
In late fall, cold fronts come through our area and something really neat happens! What is so neat about the oncoming winter, you may ask? Well, when we have these conditions at this time of year, we also have lots of northern saw-whet owls passing through as part of their fall migration.
Saw-whet owls are cute little creatures that look like something right out of Disney. They are about seven inches long, with a reddish-brown back, a white belly, and reddish streaks on the breast. They are attracted to stands of conifers (cone-bearing trees) and are strictly nocturnal (active at night). During the day they like to roost (rest) among these trees, usually close to the end of a branch.
In Rhode Island the saw-whet owls eat mostly house and white-footed mice, catching them with their large talons (claws). After the catch, the owl returns to a tree and proceeds to eat the mouse by ripping it into small chunks. Owls cannot separate the meat from the fur and bones, so later on they regurgitate a pellet, which is the undigested fur wrapped around bone, much like a cat's hairball.
Although approaching a saw-whet is easy because they tend to stay put, finding one is quite challenging due to their stillness and camouflage. In fact, the difficulty in locating individuals of this species has obscured information about their numbers and distribution. And, until an extensive monitoring effort began in 1991, very little was known about the migration ecology (how this species behaves during migration) of saw-whet owls in the northeast. We are just starting to learn where saw-whet owls migrate and how the population of these owls changes from year to year.
Historically, ornithologists (bird experts) believed this species to be relatively uncommon in southern New England; only a few individuals are recorded each year in Rhode Island during migration, and in some years none at all. However, recent efforts by biologists at a series of banding stations (in Rhode Island, Massachusetts, New Jersey, Maryland, and Virginia) have shown saw-whet owls to be quite common, with over 5,000 saw-whets captured in the last ten years. So at this time of year, be sure to look for the saw-whet owl (especially when driving at night), for it is one of the treasures of fall.
<urn:uuid:19ace2ff-6e40-4c41-b75f-9f7cb0ee984c>
3.203125
534
Knowledge Article
Science & Tech.
50.747923
Classical Electron Radius
Name: Biek V.
Hello, what is the "size" of an electron in meters, given that it is seen as a particle (which is only partly true)?
In classical terms, the electron radius is about 2.818 * 10^-15 m.
From: Particle Data Group, Particle Physics Booklet, July 2004
--Nathan A. Unterman
Update: June 2012
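For context (an addition of mine, not part of the archived answer): this value is the classical electron radius, obtained by equating the electron's rest energy to the electrostatic energy of a charge e confined within a radius r_e (up to a numerical factor that depends on the assumed charge distribution):

    r_e = \frac{e^2}{4\pi\varepsilon_0 m_e c^2} \approx 2.818 \times 10^{-15}\ \mathrm{m}

Scattering experiments show the electron behaves as point-like down to far smaller scales, so this "radius" is a classical bookkeeping length rather than a measured size.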
<urn:uuid:8b46f038-425e-4bc4-9456-7c3ceb77bd55>
2.96875
99
Q&A Forum
Science & Tech.
63.695
Introduction to JavaServer Faces
by Alexander Prohorenko and Olexiy Prokhorenko
This article is meant to acquaint the reader with JavaServer Faces, commonly known as JSF. JSF technology simplifies building the user interface for web applications. It does this by providing a higher-level framework for working with your web app, representing the page as event-aware components rather than raw markup. At this time, there are two JSF variants: JSF early access 4 (which is included in the Java Web Services Developer Pack 1.3), and JSF 1.0 Beta. It's important to remember that JSF is a specification, much like J2EE. And like J2EE, there is a reference implementation from Sun, along with other implementations of the interface, such as the open source MyFaces. This article is concerned with the distinctive features of the JSF specification and its ideas, not with a particular implementation. After all, since JSF is not yet final, the specifics might yet change. We assume the reader is already somewhat familiar with Java programming, servlets, JavaServer Pages (JSPs), and custom tag technologies. The reader should be experienced with servlet/JSP containers such as Tomcat, and design patterns such as model-view-controller (MVC).
Roles in Web-Application Development
In developing web applications, we often deal with the same problems. First, the interface is the most frequently updated part of the application, so we want to make modification of the interface as easy as possible. Second, those developing the application have significantly different skill sets - server-side programmers, HTML coders, graphic designers, etc. - and we want their work to be as independent as possible. This leads to a model-view-controller design to separate the roles.
In many organizations, the development of a web application works in a familiar manner. The designer creates a prototype, the HTML coder does everything in HTML, and the server-side programmer implements the needed functionality in Java. This approach often fails badly. The designer, when creating the prototype, is limited only by his or her imagination, inadvertently causing problems for the other participants, often forcing them to start from scratch when developing a new application. Even if the problem of role division can be solved in existing frameworks (for example, by using custom tags or XML/XSL transformations), there can still be a problem with code reuse.
So what is so different about creating the GUI for a Swing application versus a web application? It's obvious: in Swing, there is a set of standard GUI components and an entire infrastructure for tweaking and extending their functionality. A Swing developer works with the concept of a Component - an element of the user interface, such as a panel, button, list, or table - and can either set values to use the default behavior or extend the component to provide new behavior. Some of these components, specifically Containers, allow combinations of components, such as a panel with a table that contains buttons in some of its cells. As an added bonus, the developer gets to work with high-level concepts: when the button is clicked, an event is generated. Meanwhile, for the web developer to know the button has been pressed, he has to analyze the HTTP request and try to determine what has happened. This is not ideal. The programmer shouldn't have to work so hard to figure out if a button, image, or hyperlink was clicked, or how these pieces were implemented in HTML.
Ideally, the developer just wants to know that an event occurred. In other words, the developer needs to be able to see the web interface in terms of familiar high-level concepts. This is the problem JSF aims to solve.
The JSF Approach
The following presentation of JSF concepts is based on the most recent specification (dated Dec. 18, 2003). The core JSF architecture is designed to be independent of communication protocols or markup language specifics. However, it's also meant to solve the problems experienced working with HTML clients communicating via HTTP with a Java application server that supports Servlet/JSP applications. JSF aims to provide the following features to simplify application development:
- Handling of UI components between requests.
- Consideration of markup features supported by the client/browser.
- Support for processing forms.
- Strictly-typed event model.
- Transformation of data on the page (Strings) into the models' corresponding data types.
- User-friendly exception handling.
- Navigation between pages based on UI events and interaction with data models.
Of course, when developing web applications today, everyone has to deal with these problems, and everyone solves them in their own way, increasing development time and hurting maintainability. JSF tries to offer a unified way to deal with these issues. The specification understands the importance of dividing software development roles and assigns responsibilities to these roles.
Component writers (a.k.a. component developers) are responsible for creating reusable UI components. They are thus responsible for:
- Making components "understandable" to the final client (e.g., providing HTML to a browser). This process involves encoding information from the application.
- Making the application understand the request form, by decoding the information in a request.
- Reacting to events received by a component and understanding when such an event has occurred.
Application developers are responsible for the server-side tasks, such as creating an application's business logic, its persistence layer, etc. They should develop appropriate Java objects to represent the desired functionality, and make these objects accessible from servlets.
Tool providers supply tools like IDEs that facilitate creating JSF-based applications, or even higher-level frameworks that might use JSF to create their user interface.
JSF implementors are responsible for implementing all the required specifications of JSF. For example, they might provide a JSF implementation within their J2EE server.
Note that most developers will not perform either of the latter two roles. But they indicate the seriousness of Sun's intentions with respect to role division: for JSF to succeed, it's necessary to follow the role-division guidelines.
JSF in Detail
Let's look in detail at what JSF provides. As noted above, the highlight of JSF is the availability of reusable server-side components for creating GUIs. From JSF's point of view, all components should inherit from javax.faces.component.UIComponent (note that in the EA4 release this was an interface). Any page or screen of the application will consist of a set of such components. A set of hierarchically ordered components is called the JSF Tree (EA4's term) or the View (1.0 beta's term). This tree of components represents the structure of the onscreen page. Each element of a tree is a UIComponent, with some components being composites and thus having child components. In JSF there is a set of standard components.
The UML diagram shows the structure of these standard components. The developer can create new components based on existing components (for example, by inheriting from UIOutput), or by creating a completely new subclass of UIComponent. To make creating components easier, it's possible to inherit from UIComponentBase, which contains default implementations for the methods in UIComponent along with some convenience methods.
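To make the component-writer role concrete, here is a minimal sketch of a custom component that inherits from UIComponentBase and encodes itself as a <span>. This is illustrative only: the class and its text property are mine, the method names follow the JSF 1.0 API, and details differed in the EA4 release.

    import java.io.IOException;
    import javax.faces.component.UIComponentBase;
    import javax.faces.context.FacesContext;
    import javax.faces.context.ResponseWriter;

    public class UILabel extends UIComponentBase {
        private String text = "Hello, JSF";

        public void setText(String text) { this.text = text; }

        // Every component belongs to a family, which JSF uses to look up renderers.
        public String getFamily() { return "example.Label"; }

        // Encoding: turn the component's state into markup for the client.
        public void encodeBegin(FacesContext context) throws IOException {
            ResponseWriter writer = context.getResponseWriter();
            writer.startElement("span", this);
            writer.writeText(text, null);
            writer.endElement("span");
        }
    }

Decoding the postback (the decode method) and event handling would follow the same pattern: the component, not the application developer, deals with the raw request.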
<urn:uuid:4124b234-91e0-4ac7-b50b-f5e8dc152042>
2.9375
1,543
Documentation
Software Dev.
40.91503