One of the most well-known climate patterns that we have come to recognize and better understand is El Niño. Every three to seven years, during the months of December and January, the balance between wind, ocean currents, oceanic and atmospheric temperatures, and the biosphere breaks down, with severe impacts on global weather.
In a normal year the trade winds blow westward and push warm surface water toward Australia and New Guinea. As this warm water builds up in the western Pacific Ocean, nutrient-rich cold water is forced to rise from the deeper ocean just off the west coast of South America. This colder, nutrient-rich water fosters the growth of the fish population.
During an El Niño event the trade winds weaken. Warm, nutrient-poor water is no longer pushed westward and comes to occupy the entire tropical Pacific Ocean. The cold water is not forced to the surface, so the coastal waters of Peru and Ecuador are unusually warm. This warmer water has a devastating impact on the region's fisheries, which rely on cool waters to thrive. The region also experiences far higher than average rainfall.
While the impact of an El Niño is most dramatic off the coast of western South America, its impact is felt in weather around the world. A severe El Niño will enhance the jet stream over the western Pacific and shift it eastward, leading to stronger winter storms over California and the southern United States, with accompanying floods and landslides. In contrast, El Niño may also cause severe droughts over Australia, Indonesia, and parts of southern Asia. Further, while El Niño is known to lower the probability of hurricanes in the Atlantic, it increases the chances of cyclones and typhoons in the Pacific.
Oceanography from Space
El Niño is one example of how observing the ocean from space leads to significant insights. Researchers use data from NASA Earth-observing satellites to create telling images of how El Niño events form in the ocean, and of the factors that may affect their strength and duration in a given climate cycle.
NOAA's AVHRR instruments and NASA's Aqua satellite have provided scientists with over 25 years of sea surface temperature data. The vast tropical Pacific Ocean receives more sunlight than any other region on Earth, and much of this sunlight is stored in the ocean in the form of heat. During an El Niño, water temperatures in the Pacific Ocean may rise 3 to 5 degrees above average. This happens because the warm water in the seas around Indonesia, referred to as the Pacific Warm Pool, is no longer held in the west by the weakened east-to-west trade winds. The pool of warmer water expands eastward toward North and South America. Scientists have observed that the size and frequency of El Niño events are affected by fluctuations in the Pacific Warm Pool.
Currents and tides influence ocean surface topography, as does temperature. Water expands as it warms, becoming less dense. This expanded, less dense water produces a rise in sea level that is observable from space: ocean surface height may rise as much as 6 to 13 inches above normal in some ocean regions during an El Niño.
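As a rough illustration of how thermal expansion alone can raise sea surface height, here is a back-of-envelope sketch; the expansion coefficient, warmed-layer depth, and warming amount are all assumed illustrative values, not measurements:

```python
# Back-of-envelope estimate of El Niño sea surface height rise from
# thermal expansion alone. All numbers are illustrative assumptions,
# not measured values.
ALPHA = 2.6e-4         # thermal expansion coefficient of seawater, 1/degC (approx.)
LAYER_DEPTH_M = 300.0  # assumed depth of the warmed surface layer, meters
DELTA_T_C = 4.0        # assumed warming of that layer, degC

rise_m = ALPHA * DELTA_T_C * LAYER_DEPTH_M   # fractional expansion times depth
rise_in = rise_m / 0.0254                    # convert meters to inches
print(f"Estimated rise: {rise_m * 100:.0f} cm ({rise_in:.0f} inches)")
```

With these assumed values the estimate lands inside the 6-to-13-inch range quoted above, which is the point of the sketch rather than a precise prediction.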
The QuikSCAT satellite tracks vector winds, and the Jason and TOPEX/Poseidon missions track their effects. Typically, the Pacific trade winds blow from east to west, dragging the sun-warmed surface waters westward, where they accumulate in the Pacific Warm Pool mentioned above. When the east-to-west trade winds weaken during an El Niño event, the Pacific Warm Pool waters expand eastward. NASA's QuikSCAT satellite data have shown how trade wind irregularities lasting less than a few months in the lead-up to an El Niño can dramatically shape the weather events that follow.
Currents, or circulation within the ocean, are influenced primarily by two physical factors: the sinking and rising of cooling and warming water, and the wind-driven forcing of surface waters. The interactions of ocean water temperature with the strength and direction of the winds create the currents that define the strength and duration of an El Niño. NASA satellite data, combined with data collected from buoy systems and ships, give scientists an expanding understanding of the relationship between ocean currents, weather, and climate.
Ocean salt content, or salinity, is a key variable in understanding the ocean's capacity to store and transport heat. Salinity and temperature combine to dictate the ocean's density. Greater salinity, like colder temperature, increases ocean density, with a corresponding depression of the sea surface height. In warmer, fresher waters, the density is lower, resulting in an elevation of the sea surface. These height differences are related to the circulation of the ocean. Beginning in 2009, the Aquarius mission will take regular measurements of changes in ocean surface salinity. By knowing how changes in salinity affect physical processes in the ocean, scientists will be able to create computer models that more accurately predict El Niño episodes.
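A minimal sketch of this temperature-salinity-density relationship, using a linearized equation of state: density rises with salinity and falls with temperature. The coefficients below are rough, assumed values for illustration only; real oceanographic work uses the full TEOS-10 equations.

```python
# Linearized seawater equation of state near a reference state.
# Coefficients are rough, illustrative values.
RHO0 = 1027.0        # reference density, kg/m^3
T0, S0 = 10.0, 35.0  # reference temperature (degC) and salinity (psu)
ALPHA = 1.7e-4       # thermal expansion coefficient, 1/degC
BETA = 7.6e-4        # haline contraction coefficient, 1/psu

def density(temp_c, salinity_psu):
    """Approximate seawater density (kg/m^3) near the reference state."""
    return RHO0 * (1 - ALPHA * (temp_c - T0) + BETA * (salinity_psu - S0))

# Warmer, fresher water is less dense, so it stands higher at the surface:
print(density(25.0, 34.0))  # warm, fresh
print(density(5.0, 36.0))   # cold, salty
```

The first value comes out lower than the second, matching the text's point that warm, fresh water elevates the sea surface while cold, salty water depresses it.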
Sea ice modulates planetary heat transport by insulating the ocean from the cold polar atmosphere, and also by modulating the thermohaline circulation of the world ocean. Moreover, the high albedo of snow-covered ice further insulates the polar oceans from solar radiation and introduces another positive feedback in the climate system. The El Niño and its related Southern Oscillation appear to affect regional ice distributions around Antarctica. Understanding this connection between the Southern Oscillation and southern ocean climate and the sea ice cover will substantially improve our understanding of global climate. El Niño episodes affect the Weddell and Ross Seas, areas that are regarded as key sources of cold and dense bottom water that influences global ocean circulation. The strongest links were observed to be in the Amundsen, Bellingshausen and Weddell Seas of the west Antarctic. Within these sectors, higher sea level pressure, warmer air temperature and warmer sea surface temperature are generally associated with the El Niño phase.
By combining data sets from various satellite and in situ measurements we have been able to learn a great deal about how the physical properties of the ocean interact to create climate patterns. Scientists will continue to create models that combine data allowing them to better understand and predict complex ocean processes.
By using data collected from discrete measurements of the physical properties of the ocean, scientists have learned a great deal about El Niño and how it is shaped by the ocean system. Scientists are just beginning to accumulate data over a time span sufficient to let them predict climate patterns longer than the 3-to-7-year El Niño cycle. The ability to study the El Niño climate pattern, and to create models that simulate its conditions, has helped us better predict its impact on our climate and weather.
El Niño is only one of the climate anomalies that scientists have observed; others include the North Atlantic Oscillation, the Atlantic Intertropical Convergence Zone oscillation, and the Pacific Decadal Oscillation. Together with El Niño, these systems are believed to be responsible for well over fifty percent of the climate variability on Earth. As models are created to simulate the individual patterns, they will be combined to create a global model of climate change. If scientists reach the point where they understand all of these climate cycles, they may be able to predict major weather patterns months in advance.
Referenced from the manual:
The standard DES-based crypt() returns the salt as the first two characters of the output. It also only uses the first eight characters of str, so longer strings that start with the same eight characters will generate the same result (when the same salt is used).
Both entries have the same first 8 characters and the same salt, so it must return the same result.
will both return 50gyRGMzn6mi6
because they share the same salt and the same first 8 characters
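To illustrate why that truncation guarantees identical results, here is a toy Python stand-in. The `toy_des_crypt` function is hypothetical: it mimics only crypt()'s truncation behavior (first 8 characters of the password, 2-character salt), not the actual DES algorithm.

```python
import hashlib

def toy_des_crypt(password: str, salt: str) -> str:
    """Toy stand-in for DES-based crypt(): only the first 8 characters of
    the password and a 2-character salt matter. This mimics the truncation
    behavior only -- it is NOT the real DES crypt algorithm."""
    truncated = password[:8]
    digest = hashlib.sha256((salt[:2] + truncated).encode()).hexdigest()
    return salt[:2] + digest[:11]  # crypt-style output: salt prefix + hash text

# Two long passwords that share the same first 8 characters...
a = toy_des_crypt("password123", "50")
b = toy_des_crypt("password456", "50")
print(a == b)  # identical results, just as the manual describes
```

Change the salt or any of the first 8 characters, and the result changes; change anything after the 8th character, and it does not.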
Every hashing algorithm has its limits; even MD5 produces repeated outputs (collisions) at some point.
James E. Hansen, who directs the NASA Goddard Institute for Space Studies and is a leading climate scientist, has a must-read op-ed in the Washington Post.
Here’s how “Climate Change Is Here — And Worse Than We Thought” opens:
When I testified before the Senate in the hot summer of 1988, I warned of the kind of future that climate change would bring to us and our planet. I painted a grim picture of the consequences of steadily increasing temperatures, driven by mankind’s use of fossil fuels.
But I have a confession to make: I was too optimistic.
My projections about increasing global temperature have been proved true. But I failed to fully explore how quickly that average rise would drive an increase in extreme weather.
In a new analysis of the past six decades of global temperatures, which will be published Monday, my colleagues and I have revealed a stunning increase in the frequency of extremely hot summers, with deeply troubling ramifications for not only our future but also for our present.
This is not a climate model or a prediction but actual observations of weather events and temperatures that have happened. Our analysis shows that it is no longer enough to say that global warming will increase the likelihood of extreme weather and to repeat the caveat that no individual weather event can be directly linked to climate change. To the contrary, our analysis shows that, for the extreme hot weather of the recent past, there is virtually no explanation other than climate change.
ABOUT THE MOON: The moon is Earth's only natural satellite, a cold, dry orb whose surface is studded with craters and strewn with rocks and dust. The moon's gravitational force is only 17 percent of the Earth's gravity. For example, a 100 pound (45 kg) person would weigh just 17 pounds (7.6 kg) on the Moon. The temperature on the Moon ranges from daytime highs of about 265°F (130°C) to nighttime lows of about -170°F (-110°C). The moon has no atmosphere. On the moon, the sky always appears dark, even on the bright side (because there is no atmosphere). Also, since sound waves travel through air, the moon is silent; there can be no sound transmission on the moon. The phases of the moon are caused by the relative positions of the earth, sun, and moon. The moon goes around the earth, on average, in 27 days, 7 hours, and 43 minutes. The sun always illuminates the half of the moon facing the sun (except during lunar eclipses, when the moon passes through the earth's shadow). When the sun and moon are on opposite sides of the earth, the moon appears "full" to us, a bright, round disk. When the moon is between the earth and the sun, it appears dark, a "new" moon. In between, the moon's illuminated surface appears to grow (wax) to full, then decreases (wane) to the next new moon.
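The weight conversion above is simple to sketch directly, using the approximately 17 percent surface-gravity figure from the text:

```python
# Weight on the Moon from weight on Earth, using the ~17 percent
# surface gravity figure quoted in the text.
MOON_GRAVITY_FRACTION = 0.17

def moon_weight(earth_weight):
    """Weight on the Moon given weight on Earth (any consistent unit)."""
    return earth_weight * MOON_GRAVITY_FRACTION

print(moon_weight(100))  # a 100 lb person weighs about 17 lb on the Moon
```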
WHAT IS THE SOLAR WIND? Flowing outward from the Sun's extremely hot corona, the solar wind is a stream of charged particles traveling in all directions at incredibly high speeds. As these particles speed toward the Earth, they interact with other charged particles and can create phenomena such as the northern lights and geomagnetic storms, which can damage spacecraft, including communications satellites.
A faint young sun
The Earth of some 3.8 billion years ago is a mystery that scientists have long attempted to solve. Way back then, the Earth was a completely different place, and so was the solar system. The sun shone with as much as 30 percent less luminosity, which meant the Earth should have been really cold. So cold, in fact, that liquid water could not have existed.
But the geologic record shows that water was, indeed, present and provided the foundation for the proverbial “primordial soup” that gave rise to life. How come? This is what’s called the “faint young Sun problem.”
There are many theories, among them that the Earth’s reflectivity was lower because of smaller continents, allowing more sunlight to be absorbed. But one of the leading theories examines the atmosphere of the Archaean period, specifically the presence of greenhouse gases like carbon dioxide and methane that might have warmed the atmosphere to temperatures at or above today’s.
The same greenhouse gases that, in abundance, are getting us into trouble today, may have been fundamental to the Earth’s life-creating conditions. As geochemist James Kasting of Penn State University points out in Chapter 8, The primitive Earth of Prebiotic Evolution and Astrobiology, methane and carbon dioxide should have been abundant in the first several hundred million years of Earth’s history because of degassing during the planet’s formation.
The concentrations would have declined over time, methane converting to carbon dioxide, and carbon dioxide converting into carbonate rocks. But the storage of carbon in rocks would have slowed with dropping temperatures. Meanwhile, continually-spewing volcanoes would have then provided the carbon dioxide boost to the atmosphere to spike temperatures back up again. The feedback loop, called the carbonate-silicate cycle, goes on to this day.
Clearly more went on than we know. CO2 concentrations would have had to be about 10 times higher than today's values, an unlikely scenario given that past research estimates the concentration at no more than three times today's. Methanogens, bacteria-like organisms that lived in oceans and marine sediments, may have released enough methane to make up at least some of the difference.
Why does this matter to modern day climate change? “One thing that paleoclimate research definitely does do is to put modern day climate change into perspective,” says Kasting.
Let's compare the numbers. A 30 percent change in solar luminosity, up to today's levels, corresponds to an extra 80 watts/m2 of atmospheric heating. Every doubling of anthropogenic CO2 adds roughly 4 watts/m2. If we get up to 12 watts/m2, well within our capabilities, we're staring down a temperature change of 6 to 12 degrees C. That's still nowhere near the sun's radiative forcing, but also not that far off.
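The per-doubling figure can be checked against the widely used simplified forcing expression ΔF = 5.35 ln(C/C₀) W/m² (the coefficient is from Myhre et al., 1998); the 280 ppm preindustrial baseline below is a standard assumed value:

```python
import math

# Simplified CO2 radiative forcing: dF = 5.35 * ln(C / C0) W/m^2.
# Coefficient from Myhre et al. (1998); 280 ppm assumed preindustrial baseline.
def co2_forcing(c_ppm, c0_ppm=280.0):
    return 5.35 * math.log(c_ppm / c0_ppm)

per_doubling = co2_forcing(560.0)  # one doubling of preindustrial CO2
print(f"{per_doubling:.1f} W/m^2 per doubling")            # ~3.7, close to the 4 above
print(f"{co2_forcing(280.0 * 8):.1f} W/m^2 for 3 doublings")  # ~11, near the 12 above
```

Three doublings lands near the 12 watts/m2 figure in the text, while the 80 watts/m2 solar term stays an order of magnitude larger.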
Early Earth needed CO2 to warm up, but we sure don’t anymore. There can be too much of a good thing.
– Alison Hawkes
The striped dolphin is thriving in the Mediterranean, despite the viral epidemic that has wiped out hundreds of the mammals in the past two years. This is the preliminary conclusion of a joint survey carried out by Greenpeace and the University of Barcelona.
Scientists on board the Greenpeace ship Sirius counted 2485 healthy striped dolphins during a five-week tour of the western part of the Mediterranean. The 8035-kilometre voyage took in the waters of Spain, France, Italy and Algeria.
Marion Stoller of Greenpeace says the figures show there is still a sustainable population in the Mediterranean.
To continue reading this article, subscribe to receive access to all of newscientist.com, including 20 years of archive content.
Zero Gravity on Earth
Is it true that on earth there isn't a room capable of
reproducing a 0-gravity environment? If I'm right, it is impossible to
have one. The only method to simulate it is to fall really
fast toward the ground from a very high altitude? I have money on
this statement; I hope that it is impossible.
Not only can you not have 0 gravity on the earth, there is no place in the
universe with 0 gravity. Following the inverse square law, you will always
have a fraction of what you had before. Yes, Mars is attracting you.
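A quick sketch of that inverse square law, using standard values for Earth, shows the pull shrinking with distance but never reaching zero; the altitudes chosen are just illustrative:

```python
# Newtonian gravity never reaches zero at any finite distance -- by the
# inverse square law it only shrinks. Illustrative values for Earth.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # mass of Earth, kg
R_EARTH = 6.371e6    # radius of Earth, m

def g_at_altitude(alt_m):
    """Gravitational acceleration (m/s^2) at a given altitude above Earth."""
    r = R_EARTH + alt_m
    return G * M_EARTH / r**2

# Surface, ISS orbit, geostationary orbit, and the Moon's distance:
for alt_km in (0, 400, 36_000, 384_400):
    print(f"{alt_km:>7} km: {g_at_altitude(alt_km * 1000):.4f} m/s^2")
```

Even at the Moon's distance the value is small but strictly positive, which is the answerer's point.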
Today we talk about microgravity. We can experience microgravity in earth
orbit, or in free fall (for a short period of time).
I don't know if this earns you any money. Good Luck.
You're half right. Zero gravity has never been demonstrated. We know of
nothing that will enable one object with mass to be near another object
with mass and not experience an attraction toward it.
If you don't resist this gravitational attraction, and just free-fall
toward the earth from an altitude, you can sort of simulate an environment
without gravity. If you hold something in your hand and let it go, it
won't drop to your feet. The reason for this is that you are already
dropping just as fast as the object is. Now, in free-fall, you don't
necessarily have to be moving toward the earth, just accelerating toward
it. If, say, someone loads both you and an anvil in a catapult and fires
you both upward, you will both move first upward, then eventually downward.
Throughout the entire trajectory, you and the anvil are in free-fall, and
neither you nor the anvil drops relative to the other. This is how the
weightlessness-simulating "vomit comet" airplane works: it flies in a
curved path upward, then downward, so that its acceleration matches that due
to gravity.
I don't know if this means that you lose your bet or not. It depends on
exactly what you've bet on.
Richard Barrans Jr., Ph.D.
Free-fall rides at amusement parks "simulate" a zero-g environment. An
elevator could easily be rigged to give a zero-g experience for a limited
time. Of course it would have to be slowed down in a safe manner so that no
one got hurt.
Update: June 2012
This past year has been riddled with extreme weather across the country. From floods to droughts to hurricanes, there is no doubt the weather has been strangely brutal.
"Well the first one that comes to most people's mind is hurricane sandy, so that would be like the number one, billions of dollars in damage and many, many lives lost." says Carol Christenson at the Duluths National Weather Service office.
The storms coming off the Atlantic were brutal.
Christenson said, "There has been an increase in the stronger hurricanes since, it looks like, about the 1940s."
The Midwest also got slammed with an unusual storm this summer.
"The Derecho, the big wind storm that hit the Midwest" said Christenson.
Winds hit almost 90 mph, with millions of people left without power and millions of dollars in damage. Summertime was also remembered for its lack of precipitation.
"The drought that is still ongoing for a good portion of the central US including Minnesota and Wisconsin, our neck of the woods." said Christenson.
Nearly 62 percent of the country experienced moderate to extreme drought during the hot summer months, a condition that actually suppressed some other kinds of extreme weather.
Christenson says,"We saw fewer tornados across the US than normal, about 900 and thirty some tornados where usually we see about 1000 nation–wide"
But closer to home we saw one of the biggest, most devastating, weather events in the Twin Ports' history.
"of course the June floods" Christenson says.
The storm dropped 8 to 10 inches of rain in less than 24 hours.
It was called the 500 year flood but experts say events like the June flood may become more common as our weather patterns continue to change.
Meteorologist Adam Lorch
Solid objects that are large enough to see and handle are rarely composed of a single crystal, except for a few cases (gems, silicon single crystals for the electronics industry, certain types of fiber, and single crystals of a nickel-based superalloy for turbojet engines). Most materials are polycrystalline; they are made of a large number of single crystals — crystallites — held together by thin layers of amorphous solid. The crystallite size can vary from a few nanometers to several millimeters.
If the individual crystallites are oriented randomly (that is, if they lack texture), a large enough volume of polycrystalline material will be approximately isotropic. This property helps the simplifying assumptions of continuum mechanics to apply to real-world solids. However, most manufactured materials have some alignment to their crystallites, which must be taken into account for accurate predictions of their behavior and characteristics.
Material fractures can be intergranular fracture or a transgranular fracture. There is an ambiguity with powder grains: a powder grain can be made of several crystallites. Thus, the (powder) "grain size" found by laser granulometry can be different from the "grain size" (or, rather, crystallite size) found by X-ray diffraction (e.g. Scherrer method), by optical microscopy under polarised light, or by scanning electron microscopy (backscattered electrons).
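The Scherrer method mentioned above estimates mean crystallite size from the broadening of an X-ray diffraction peak, τ = Kλ/(β cos θ). Here is a sketch with illustrative values (the Cu K-alpha wavelength is standard; the peak width and position are assumed example measurements):

```python
import math

# Scherrer estimate of mean crystallite size from XRD peak broadening:
# tau = K * lambda / (beta * cos(theta)). Example values are illustrative.
def scherrer_size(wavelength_nm, fwhm_rad, two_theta_deg, k=0.9):
    """Mean crystallite size in nm from peak FWHM (radians) and 2-theta (deg)."""
    theta = math.radians(two_theta_deg / 2.0)
    return k * wavelength_nm / (fwhm_rad * math.cos(theta))

# Cu K-alpha radiation, an assumed 0.5 degree FWHM peak at 2-theta = 40 degrees:
size = scherrer_size(0.15406, math.radians(0.5), 40.0)
print(f"~{size:.0f} nm")
```

Note the result is a crystallite size, not a powder grain size, which is exactly the ambiguity the paragraph above warns about.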
Coarse grained rocks are formed very slowly, while fine grained rocks are formed quickly, on geological time scales. If a rock forms very quickly, such as the solidification of lava ejected from a volcano, there may be no crystals at all. This is how obsidian forms.
Grain boundaries are interfaces where crystals of different orientations meet. A grain boundary is a single-phase interface, with crystals on each side of the boundary being identical except in orientation. The term "crystallite boundary" is sometimes, though rarely, used. Grain boundary areas contain those atoms that have been perturbed from their original lattice sites, dislocations, and impurities that have migrated to the lower energy grain boundary.
Treating a grain boundary geometrically as an interface of a single crystal cut into two parts, one of which is rotated, we see that there are five variables required to define a grain boundary. The first two numbers come from the unit vector that specifies a rotation axis. The third number designates the angle of rotation of the grain. The final two numbers specify the plane of the grain boundary (or a unit vector that is normal to this plane).
Grain boundaries disrupt the motion of dislocations through a material. Dislocation propagation is impeded because of the stress field of the grain boundary defect region and the lack of slip planes and slip directions and overall alignment across the boundaries. Reducing grain size is therefore a common way to improve strength, often without any sacrifice in toughness because the smaller grains create more obstacles per unit area of slip plane. This crystallite size-strength relationship is given by the Hall-Petch relationship. The high interfacial energy and relatively weak bonding in grain boundaries makes them preferred sites for the onset of corrosion and for the precipitation of new phases from the solid.
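The Hall-Petch relationship mentioned above is conventionally written as:

```latex
\sigma_y = \sigma_0 + \frac{k_y}{\sqrt{d}}
```

where \(\sigma_y\) is the yield stress, \(\sigma_0\) is the friction stress opposing dislocation motion, \(k_y\) is the strengthening coefficient, and \(d\) is the average grain diameter. Halving the grain diameter thus raises the strengthening term by a factor of \(\sqrt{2}\), consistent with finer grains giving stronger material.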
Grain boundary migration plays an important role in many of the mechanisms of creep. Grain boundary migration occurs when a shear stress acts on the grain boundary plane and causes the grains to slide. This means that fine-grained materials actually have a poor resistance to creep relative to coarser grains, especially at high temperatures, because smaller grains contain more atoms in grain boundary sites. Grain boundaries also cause deformation in that they are sources and sinks of point defects. Voids in a material tend to gather in a grain boundary, and if this happens to a critical extent, the material could fracture.
During grain boundary migration, the rate-determining step depends on the angle between two adjacent grains. In a small angle dislocation boundary, the migration rate depends on vacancy diffusion between dislocations. In a high angle dislocation boundary, this depends on the atom transport by single atom jumps from the shrinking to the growing grains.
Grain boundaries are generally only a few nanometers wide. In common materials, crystallites are large enough that grain boundaries account for a small fraction of the material. However, very small grain sizes are achievable. In nanocrystalline solids, grain boundaries become a significant volume fraction of the material, with profound effects on such properties as diffusion and plasticity. In the limit of small crystallites, as the volume fraction of grain boundaries approaches 100%, the material ceases to have any crystalline character, and thus becomes an amorphous solid.
Grain boundaries are also present in magnetic domains in magnetic materials. A computer hard disk, for example, is made of a hard ferromagnetic material that contains regions of atoms whose magnetic moments can be realigned by an inductive head. The magnetization varies from region to region, and the misalignment between these regions forms boundaries that are key to data storage. The inductive head measures the orientation of the magnetic moments of these domain regions and reads out either a “1” or “0”. These bits are the data being read. Grain size is important in this technology because it limits the number of bits that can fit on one hard disk. The smaller the grain sizes, the more data that can be stored.
Because of the dangers of grain boundaries in certain materials such as superalloy turbine blades, great technological leaps were made to minimize as much as possible the effect of grain boundaries in the blades. The result was directional solidification processing in which grain boundaries were eliminated by producing columnar grain structures aligned parallel to the axis of the blade, since this is usually the direction of maximum tensile stress felt by a blade during its rotation in an airplane. The resulting turbine blades consisted of a single grain, improving reliability.
Generally, polycrystals cannot be superheated; they will melt promptly once they are brought to a high enough temperature. This is because grain boundaries are amorphous, and serve as nucleation points for the liquid phase. By contrast, if no solid nucleus is present as a liquid cools, it tends to become supercooled. Since this is undesirable for mechanical materials, alloy designers often take steps against it. See grain refinement.
"Mechanically Resilient Titanium Carbide (TIC) Nano-Fibrous Felts Consisting of Continuous Nanofibers or Nano-Ribbons with TIC Crystallites Embedded in Carbon Matrix Prepared Via Electrospining Follow
Dec 31, 2012; By a News Reporter-Staff News Editor at Nanotechnology Weekly -- A patent application by the inventors Fong, Hao (Rapid City,...
Wipo Publishes Patent of Corning, David J Bronfenbrenner, Chris Maxwell, Martin Joseph Murtagh and Bryan Ray Wheaton for "Control of Clay Crystallite Size for Shrinkage Management" (American Inventors)
May 18, 2013; GENEVA, May 18 -- Publication No. WO/2013/070876 was published on May 16.Title of the invention: "CONTROL OF CLAY... | <urn:uuid:1ae83d4c-13b0-4258-a925-e62bd547a4be> | 4 | 1,490 | Knowledge Article | Science & Tech. | 30.797175 |
Oxidation of Cyclohexanol to Cyclohexanone
There are four basic types of chemical reactions in organic
chemistry: combination, elimination, substitution, and
rearrangement. Today, you will be conducting a
reaction in which you will convert cyclohexanol into
cyclohexanone (oxidation). Although some of the finer points of
this reaction are not totally understood, it is generally agreed that
it proceeds as follows:
As you will no doubt recall from general chemistry, oxidation/reduction
reactions involve the shifting of electrons from one species to
another. The species that loses electrons is said to have been
oxidized and is therefore the reducing agent. The species that
gains electrons is said to have been reduced and is therefore the
oxidizing agent. Like any other type of chemical reaction there
must be mass balance, so the number of electrons lost by one species
must equal the number of electrons gained by another. In organic
chemistry, the species being oxidized shows a loss of hydrogen and/or a
gain in oxygen, while the species being reduced shows a loss of oxygen
and/or a gain in hydrogen. You need to prove to yourself that
these two views are complementary.
Our oxidation of cyclohexanol begins by generating the hypochlorous
acid which will be our oxidizing agent. As you can see from its
formula, the chlorine in hypochlorous acid has
an oxidation state of +1. Recall that chlorine normally has an
oxidation number of -1. This deficiency of electrons makes this
particular species very reactive. Since this is a very strong
oxidizing agent, we will be generating it in situ through the reaction of
acetic acid and sodium hypochlorite (regular household bleach).
In the next step, the hydrogen on the hypochlorous acid undergoes
nucleophilic attack by the oxygen of the cyclohexanol to form
protonated cyclohexanol and a hypochlorite ion. This is
followed by nucleophilic attack by the hypochlorite ion and
displacement of water. The water then abstracts a hydrogen which
initiates a cascade that results in the chlorine leaving as chloride
ion and the generation of cyclohexanone.
Note that the actual
oxidizing agent was Cl+ which was reduced to Cl-.
It is also interesting to note that the oxygen atom on the
cyclohexanone is not the same oxygen atom that was initially
present on the cyclohexanol. Where did it go, and how do you
think you could prove this?
The purpose of this lab
is to produce cyclohexanone through the hypochlorite oxidation of cyclohexanol.
As before, you will use gas chromatography (GC), refractive index (RI),
and infrared spectrometry (IR) to
determine the purity of your product. In addition, you will
produce a specific derivative of your product to prove its identity.
Before you come into lab, make sure you have filled in your table of
reagents and products. You will need these values to determine
the identity of your products and to calculate your final
yield. You will also need to come to lab with IRs of both cyclohexanol and
cyclohexanone already in your notebooks.
Finally, it is important that you know exactly what you are going to be
doing so you can work more efficiently.
- Accurately weigh approximately 10 g of cyclohexanol into a clean,
dry 500 mL round bottom flask.
- Add 2-3 mL of glacial acetic acid and a magnetic stir bar
to the round bottom flask.
- Fit the flask with a Claisen adapter and insert your thermometer
into the side arm of the flask (see figure above).
Position the thermometer so it is immersed but is not hit by the
magnetic stirring bar. Remember to
lubricate all ground glass joints with a minimal amount of silicon
grease. Make sure that all joints are secured (blue clips) and
that the whole apparatus is properly clamped.
- Fit a 250 mL separatory funnel onto the Claisen adapter (see
figure above) and add 180 mL of commercial bleach to it. The
concentration of commercial bleach will vary.
- Support your apparatus on a magnetic stir motor and adjust the
speed of the motor so that the contents of the flask are stirred.
- Add the bleach dropwise, but as rapidly as possible, to the
flask. The addition should take about 10-15 minutes.
Monitor the temperature as the reaction proceeds.
Continue stirring the flask for an additional period after the addition is complete.
- Transfer the reaction mixture to a 400 mL beaker.
- Add approximately 50 grams of NaCl to the mixture and heat to 50
°C on a hot plate. Keep stirring the mixture for 5-10 minutes.
- Allow the excess NaCl to settle out and transfer the liquid to a
250 mL separatory funnel.
- Extract with 25 mL of hexane. Use proper technique and
beware of gas pressure!
- Transfer the hexane to a 100 mL beaker.
- Extract the reaction product with an additional 15 mL of fresh hexane.
- Add this hexane to the previous extract and dry them with a small
amount of anhydrous MgSO4.
- Carefully transfer this dried hexane to a 250 mL round bottom flask.
- Attach the round bottom flask to the rotary evaporator and strip
off the hexane. Remember to note the rotary evaporator conditions.
- Determine the mass and purity of your product (micro boiling point, GC,
RI, and FTIR). Be sure to include your GC and FTIR traces in your notebook.
- Turn in your product in a labeled, tightly sealed
container. Be sure to include the following information on the label:
- Calculate the theoretical, actual, and percent yields for your product.
- Use your GC data to calculate the purity of your product.
- Use your RI data to calculate the purity of your product.
- Compare your boiling point with the literature value, what does
this say about the purity of your product?
- Compare and contrast your FTIR versus the literature IR for cyclohexanone.
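The yield calculation in the questions above can be sketched in a few lines of Python. The molar masses are standard values; the 10 g starting mass matches step 1 of the procedure, while the 7.5 g product mass is a hypothetical placeholder for your own measurement:

```python
# Sketch of the yield calculation for the cyclohexanol -> cyclohexanone
# oxidation (1:1 stoichiometry). The 7.5 g "actual" mass below is a
# hypothetical placeholder; substitute your own measured value.

M_CYCLOHEXANOL = 100.16    # g/mol, standard molar mass
M_CYCLOHEXANONE = 98.14    # g/mol, standard molar mass

def theoretical_yield(mass_cyclohexanol_g):
    """Grams of cyclohexanone expected from complete 1:1 conversion."""
    moles = mass_cyclohexanol_g / M_CYCLOHEXANOL
    return moles * M_CYCLOHEXANONE

def percent_yield(actual_g, theoretical_g):
    return 100.0 * actual_g / theoretical_g

theo = theoretical_yield(10.0)
print(f"theoretical yield: {theo:.2f} g")                      # ~9.80 g
print(f"percent yield:     {percent_yield(7.5, theo):.1f} %")  # ~76.5 %
```

Starting from 10 g of cyclohexanol, the theoretical yield is just under 9.8 g of cyclohexanone.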
(Updated 12/22/03 by C.R. Snelling)
Swift Gamma-Ray Burst Mission
- Organization: NASA/Goddard Space Flight Center
- Major contractors: Spectrum Astro
- Launch date: 2004-11-20 17:16:00 UTC
- Launched from: Space Launch Complex 17, Cape Canaveral Air Force Station
- Launch vehicle: Delta II 7320-10C
- Mission length: 6 years (8 years, 6 months, and 24 days elapsed)
- Orbit height: 600 km
- Orbit period: ~90 minutes
- Telescope style: coded mask (BAT), Wolter I (XRT)
- Wavelength: γ-ray / X-ray / UV / visible
- Diameter: 30 cm (UVOT)
- Collecting area: 5,200 cm² (BAT)
- Focal length: 381 cm (UVOT)
- BAT: Burst Alert (gamma-ray) Telescope
- UVOT: UltraViolet/Optical Telescope
The Swift Gamma-Ray Burst Mission consists of a robotic spacecraft called Swift, which was launched into orbit on 20 November 2004 at 17:16:00 UTC on a Delta II 7320-10C expendable launch vehicle. Swift is managed by the NASA Goddard Space Flight Center, and was developed by an international consortium from the United States, United Kingdom, and Italy. It is part of NASA's Medium Explorer Program (MIDEX).
Swift is a multi-wavelength space observatory dedicated to the study of gamma-ray bursts (GRBs). Its three instruments work together to observe GRBs and their afterglows in the gamma-ray, X-ray, ultraviolet, and optical wavebands.
Based on continuous scans of the area of the sky with one of the instrument's monitors, Swift uses momentum wheels to autonomously slew into the direction of possible GRBs. The name "Swift" is not a mission-related acronym, but rather a reference to the instrument's rapid slew capability, and the nimble bird of the same name. All of Swift's discoveries are transmitted to the ground and those data are available to other observatories which join Swift in observing the GRBs.
In the time between GRB events, Swift is available for other scientific investigations, and scientists from universities and other organizations can submit proposals for observations.
The Swift Mission Operation Center (MOC), where commanding of the satellite is performed, is located in State College, Pennsylvania and operated by the Pennsylvania State University and industry subcontractors. The Swift main ground station is located at the Broglio Space Centre near Malindi on the coast of Eastern Kenya, and is operated by the Italian Space Agency. The Swift Science Data Center (SDC) and archive are located at the Goddard Space Flight Center outside Washington D.C. The UK Swift Science Data Centre is located at the University of Leicester.
Burst Alert Telescope (BAT)
The BAT detects GRB events and computes their coordinates in the sky. It covers a large fraction of the sky (over one steradian fully coded, three steradians partially coded; by comparison, the full-sky solid angle is 4π, or about 12.6 steradians). It locates the position of each event with an accuracy of 1 to 4 arc-minutes within 15 seconds. This crude position is immediately relayed to the ground, and some wide-field, rapid-slew ground-based telescopes can catch the GRB with this information. The BAT uses a coded-aperture mask of 52,000 randomly placed 5 mm lead tiles, 1 metre above a detector plane of 32,768 4 mm CdZnTe hard X-ray detector tiles; it is purpose-built for Swift. Energy range: 15–150 keV.
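As a rough check of the coverage figures quoted above, the coded fractions of the sky can be computed directly (a minimal sketch using the 1 and 3 steradian values from the text):

```python
import math

FULL_SKY_SR = 4 * math.pi                      # full-sky solid angle, ~12.57 sr

fully_coded_fraction = 1.0 / FULL_SKY_SR       # ~8% of the sky
partially_coded_fraction = 3.0 / FULL_SKY_SR   # ~24% of the sky

print(f"fully coded:     {fully_coded_fraction:.1%}")
print(f"partially coded: {partially_coded_fraction:.1%}")
```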
X-ray Telescope (XRT)
The XRT can take images and perform spectral analysis of the GRB afterglow. This provides more precise location of the GRB, with a typical error circle of approximately 2 arcseconds radius. The XRT is also used to perform long-term monitoring of GRB afterglow light-curves for days to weeks after the event, depending on the brightness of the afterglow. The XRT uses a Wolter Type I X-ray telescope with 12 nested mirrors, focused onto a single MOS charge-coupled device (CCD) similar to those used by the XMM-Newton EPIC MOS cameras. On-board software allows fully automated observations, with the instrument selecting an appropriate observing mode for each object, based on its measured count rate. The telescope has an energy range of 0.2 - 10 keV.
Ultraviolet/Optical Telescope (UVOT)
After Swift has slewed towards a GRB, the UVOT is used to detect an optical afterglow. The UVOT provides a sub-arcsecond position and provides optical and ultra-violet photometry through lenticular filters and low resolution spectra (170–650 nm) through the use of its optical and UV grisms. The UVOT is also used to provide long-term follow-ups of GRB afterglow lightcurves. The UVOT is based on the XMM-Newton mission's Optical Monitor (OM) instrument, with improved optics and upgraded onboard processing computers.
The Swift mission has four key scientific objectives:
- To determine the origin of GRBs. There seem to be at least two types of GRBs, only one of which can be explained with a hypernova, creating a gamma-ray beam. More data is needed to explore other explanations.
- To use GRBs to expand understanding of the young universe. GRBs seem to take place at "cosmological distances" of many millions or billions of light-years, which means they can be used to probe the distant, and therefore young, cosmos.
- To conduct an all-sky survey which will be more sensitive than any previous one, and will add significantly to scientific knowledge of astronomical X-ray sources. Thus, it could also yield unexpected results.
- To serve as a general purpose gamma-ray/X-ray/optical observatory platform, performing rapid "target of opportunity" observations of many transient astrophysical phenomena, such as supernovae.
Swift was launched on November 20, 2004, and reached a near-perfect orbit of 586 × 601 km altitude, with an inclination of 20°.
On December 4, an anomaly occurred during instrument activation when the Thermo-Electric Cooler (TEC) Power Supply for the X-Ray Telescope did not turn on as expected. The XRT Team at Leicester and Penn State University were able to determine on December 8 that the XRT would be usable even without the TEC being operational. Additional testing on December 16 did not yield any further information as to the cause of the anomaly.
On December 17 at 07:28:30 UT, the Swift Burst Alert Telescope (BAT) triggered and located on board an apparent gamma-ray burst during launch and early operations. The spacecraft did not autonomously slew to the burst since normal operation had not yet begun, and autonomous slewing was not yet enabled. Swift had its first GRB trigger during a period when the autonomous slewing was enabled on January 17, 2005, at about 12:55 UTC. It pointed the XRT telescope to the on-board computed coordinates and observed a bright X-ray source in the field of view.
On February 1, 2005, the mission team released the first-light picture of the UVOT instrument and declared Swift operational.
As of May 2010, Swift has detected more than 500 GRBs, X-ray afterglows for more than 90% of them, and optical afterglows for more than 50% of them.
By March 2013 Swift had detected more than 750 GRBs.
- May 9, 2005: Swift detected GRB 050509B, a burst of gamma rays that lasted one-twentieth of a second. The detection marked the first time that the accurate location of a short-duration gamma-ray burst had been identified and the first detection of X-ray afterglow in an individual short burst.
- September 4, 2005: Swift detected GRB 050904 with a redshift value of 6.29 and a duration of 200 seconds (most of the detected bursts last about 10 seconds). It was also found to be the most distant yet detected, at approximately 12.6 billion light-years.
- February 18, 2006: Swift detected GRB 060218, an unusually long (about 2000 seconds) and nearby (about 440 million light-years) burst, which was unusually dim despite its close distance, and may be an indication of an imminent supernova.
- June 14, 2006: Swift detected GRB 060614, a burst of gamma rays that lasted 102 seconds in a distant galaxy (about 1.6 billion light-years). No supernova was seen following this event (and GRB 060505 to deep limits), leading some to speculate that it represented a new class of progenitors. Others suggested that these events could have been massive star deaths, but ones which produced too little radioactive 56Ni to power a supernova explosion.
- January 9, 2008: Swift was observing a supernova in NGC 2770 when it witnessed an X-ray burst coming from the same galaxy. The source of this burst was found to be the beginning of another supernova, later called SN 2008D. Never before had a supernova been seen at such an early stage in its evolution. Following this stroke of luck (position, time, most appropriate instruments), astronomers were able to study in detail this Type Ibc supernova with the Hubble Space Telescope, the Chandra X-ray Observatory, the Very Large Array in New Mexico, the Gemini North telescope in Hawaii, Gemini South in Chile, the Keck I telescope in Hawaii, the 1.3m PAIRITEL telescope at Mt Hopkins, the 200-inch and 60-inch telescopes at the Palomar Observatory in California, and the 3.5-meter telescope at the Apache Point Observatory in New Mexico. The significance of this supernova was likened by discovery team leader Dr. Alicia Soderberg to that of the Rosetta Stone for egyptology.
- February 8 and February 13, 2008: Swift provided critical information about the nature of Hanny's Voorwerp, mainly the absence of an ionizing source within the Voorwerp or in the neighboring IC 2497.
- March 19, 2008: Swift detected GRB 080319B, a burst of gamma rays amongst the brightest celestial objects ever witnessed. At 7.5 billion light-years, Swift established a new record for the farthest object (briefly) visible to the naked eye. It was also said to be 2.5 million times intrinsically brighter than the previous brightest accepted supernova (SN 2005ap). Swift observed a record four GRBs that day, which also coincided with the death of noted science-fiction writer Arthur C. Clarke.
- September 13, 2008: Swift detected GRB 080913, at the time the most distant GRB observed (12.8 billion light-years) until the observation of GRB 090423 a few months later.
- April 23, 2009: Swift detected GRB 090423, the most distant cosmic explosion ever seen at that time, at 13.035 billion light-years. In other words, the universe was only 630 million years old when this burst occurred.
- April 29, 2009: Swift detected GRB 090429B, which was found by later analysis published in 2011 to be 13.14 billion light-years distant (approximately equivalent to 520 million years after the Big Bang), even farther than GRB 090423.
- March 16, 2010, Swift tied its record by again detecting and localizing four bursts in a single day.
- April 13, 2010: Swift detected its 500th GRB.
- March 28, 2011: Swift detected GRB 110328A, which subsequent analysis showed to possibly be the signature of a star being disrupted by a black hole or the ignition of an active galactic nucleus. "This is truly different from any explosive event we have seen before," said Joshua Bloom of the University of California at Berkeley, the lead author of the study published in the June issue of Science.
- September 16 and 17, 2012, "BAT triggered multiple times on a previously unknown hard X-ray source a few degrees from the Galactic Center. The trigger marked the onset of a dramatic transition from the low/hard to the high/soft state of the X-ray Nova “Sw J1745-26”, revealing a previously unknown black hole in our galaxy."
- April 24, 2013, Swift detected an X-ray flare from the Galactic Center. This proved not to be related to Sgr A* but to a previously unsuspected magnetar. Later observations by the NuSTAR and the Chandra X-Ray Observatory confirmed the detection.
- April 27, 2013, Swift detected the "shockingly bright" Gamma-ray burst GRB 130427A. Observed simultaneously by the Fermi Gamma-ray Space Telescope, it is one of the five closest GRBs detected and one of the brightest seen by either space telescope.
- "NASA Swift Mission Extended for 4 More Years". Omitron. Retrieved 2008-04-07.
- J.D. Myers (26 September 2007). "Swift Guest Investigator Program Frequently Asked Questions". NASA/GSFC. Retrieved 2009-05-02.
- "Swift". Spectrum Astro.
- J.D. Myers (28 February 2006). "Swift's Burst Alert Telescope (BAT)". NASA/GSFC. Retrieved 2009-05-02.
- J.D. Myers (15 August 2008). "Swift's X-Ray Telescope (XRT)". NASA/ GSFC. Retrieved 2009-05-02.
- J.D. Myers (14 December 2006). "Swift's Ultraviolet/Optical Telescope (UVOT)". NASA/GSFC. Retrieved 2009-05-02.
- Swift Captures Flyby of Asteroid 2005 YU55. NASA press release, 11 November 2011. Retrieved 2011-11-22.
- Ed Fenimore (17 December 2004). "GRB041217: The First GRB Located On-Board Swift". Los Alamos National Laboratory. Retrieved 2009-05-02.
- J.D. Myers (Friday, 27 May 2011). "The Swift Gamma-Ray Burst Mission". NASA/GSFC. Retrieved 17 July 2011.
- David Whitehouse (Wednesday, 11 May 2005). "Blast hints at black hole birth". BBC News. Retrieved 12 July 2011.
- "NASA's Swift Satellite Catches a Star Going 'Kaboom!'" (Press release). Robert Naeye, NASA Goddard Space Flight Center. 2008-05-21. Retrieved 2009-05-02.
- "NASA Satellite Detects Naked-Eye Explosion Halfway Across Universe" (Press release). J.D. Harrington, NASA Goddard Space Flight Center. 2008-03-20. Retrieved 2009-05-02.
- "More Observations of GRB 090423, the Most Distant Known Object in the Universe". Universe Today. Retrieved 2010-02-23.
- Garner, Robert (2008-09-19). "NASA's Swift Catches Farthest Ever Gamma-Ray Burst". NASA. Retrieved 2008-11-03.
- "New Gamma-Ray Burst Smashes Cosmic Distance Record" (Press release). Francis Reddy, NASA Goddard Space Flight Center. 2009-04-28. Retrieved 2009-05-02.
- Amos, Jonathan (2011-05-25). "Cosmic distance record 'broken'". BBC News. Retrieved 2011-05-25.
- Francis Reddy (19 April 2010). "NASA's Swift Catches 500th Gamma-ray Burst". NASA's Goddard Space Flight Center. Retrieved 17 June 2011.
- Alicia Chang (Thursday, June 16, 2011). "Black Hole Devours Star: Source Of Mysterious Flash In Distant Galaxy Determined". The Huffington Post. Retrieved 17 June 2011.
- Agence France-Presse (17 June 2011). "Black hole eats star, triggers gamma-ray flash". Cosmos. Retrieved 17 June 2011.
- "A Cosmic Sleight of Hand".
- "NASA's Fermi, Swift See 'Shockingly Bright' Burst".
- NASA Swift site at GSFC
- UK Swift site
- Swift Mission Director's Status Report Log
- Sonoma State University Swift website
- Penn State Swift Mission Operations Center site
- Gamma-ray Burst Real-time Sky Map based on Swift data
- Swift Short-Burst Discovery
- GCN entry for first GRB observed with auto-slew
- APOD page with movie of separation from rocket
- New Scientist: Swift measures distance to gamma-ray bursts
- An Image Gallery Gift from NASA's Swift
An identifier is a type used to uniquely identify a resource, target...
One can think of an identifier as something similar to a file path. An
identifier is a path as well, with the different elements in the path
separated by / characters. Examples of identifiers are:
The most important difference between an
Identifier and a file path is that
the identifier for an item is not necessarily the file path.
For example, we could have an
index identifier, generated by Hakyll. The
actual file path would be
index.html, but we identify it using
posts/foo.markdown could be an identifier of an item that is rendered to
posts/foo.html. In this case, the identifier is the name of the source
file of the page.
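A minimal sketch of this idea in Python (illustrative only; Hakyll is a Haskell library, and its real Identifier type and API differ):

```python
# A path-like identifier that is independent of any real file on disk.
# Not Hakyll's actual API; a conceptual sketch only.
class Identifier:
    def __init__(self, path):
        # Split on "/" and drop empty elements so "a//b" equals "a/b".
        self.elements = [e for e in path.split("/") if e]

    def __str__(self):
        return "/".join(self.elements)

    def __eq__(self, other):
        return self.elements == other.elements

# The identifier and the file it is rendered to can differ:
source = Identifier("posts/foo.markdown")   # identifies the item
target = "posts/foo.html"                   # actual output file path

print(source)  # posts/foo.markdown
```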
An identifier used to uniquely identify a value.
The low-gain antenna sends and receives information in every direction; that is, it is "omni-directional." The antenna transmits radio waves at a low rate to the Deep Space Network (DSN)(see communication) antennas on Earth.
The high-gain antenna can send a "beam" of information in a specific direction and it is steerable, so the antenna can move to point itself directly to any antenna on Earth. The benefit of having a steerable antenna is that the entire rover doesn't necessarily have to change positions to talk to Earth.
The rovers communicate not only with Earth but also with two spacecraft orbiting Mars: 2001 Mars Odyssey and Mars Global Surveyor. The orbiters have much more energy than the rovers and also have more powerful antennas. By relaying signals to Earth via the orbiters, the rovers save a lot of energy. The orbiters are also closer to the rovers than the Deep Space Network antennas on Earth. Sometimes the rovers are on the side of Mars that does not face Earth, but the orbiters have Earth in their field of view for much longer periods than the rovers on the ground.
Walter Kondratko [Steinmetz S, chemistry]
Identification of metallic ions using the flame test
Walter passed out a handout which was a modification of one he obtained as a participant in the Chemistry Van Project at Chicago State University.
Walter gave us his third mini-teach in as many sessions; he is our iron man this term! The flame test is a way to identify an unknown metallic ion by the color it emits when heated, in this case with a portable blow torch. The flame excites the electrons in the ion, and when they return to the unexcited state, they emit electromagnetic radiation of an energy (and thus wavelength and color) that is characteristic of each species. This permits the identification.
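The identification step amounts to a lookup from emitted color to ion. A minimal sketch using common textbook flame colors (typical values only; exact hues vary with conditions, and these are not taken from Walter's handout):

```python
# Characteristic flame colors of some common metallic ions
# (typical textbook values; exact hues vary with conditions).
FLAME_COLORS = {
    "Li+":  "crimson red",
    "Na+":  "bright yellow",
    "K+":   "lilac / pale violet",
    "Ca2+": "orange-red",
    "Ba2+": "pale green",
    "Cu2+": "blue-green",
}

def identify(observed_color):
    """Return the ions whose characteristic flame color matches."""
    return [ion for ion, c in FLAME_COLORS.items() if observed_color in c]

print(identify("yellow"))  # ['Na+']
```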
Walter, thanks again!!
Ron Tuinstra [Illiana Christian HS, chemistry]
Human response times
Ron brought this back from a National Association of Biology Teachers convention. It is fast, fun, and yields quantitative data. The detailed protocol is in the handout. Briefly, a vertical meter stick is dropped by one member of a two person team through a space between the other (catcher) person's thumb and index finger. The "catcher" grabs the stick by closing his/her thumb and index finger on it. The distance the stick falls between the start signal and the catching can then be easily measured by subtracting the "starting position (which we set at 10 cm, i.e., the initial thumb and index finger position was always put 10 cm from the bottom end of the stick) from the position on the stick at which the grab stopped the stick. We tried visual, auditory, and tactile signals to alert the catcher of the simultaneous release of the stick. Note that the data are collected as distance along the meter stick (i.e., the distance the meter stick falls from the initial signal until the stick is grabbed), which we use as a measure of time. The standard formula obtained from Newton's Laws of motion allows a conversion from distance to response time.
We predicted before we did the experiment that the response time would be fastest for the visual start signal and slowest for the tactile start signal (with the auditory signal intermediate). This is based on our idea that receipt of the signal should be fastest for light, next fastest for sound, and slowest for transmission along the nerves in the arm from the hand to the brain (for tactile). For all subjects all response times were in the 140 - 300 msec range, but with different averages for males and females. Here are the data:
| Group | n | Visual | Auditory | Tactile |
|---|---|---|---|---|
| Female | 2 | 210 msec | 200 msec | 235 msec |
| Male | 4 | 215 msec | 305 msec | 186 msec |
| Boys | | 184 msec | 230 msec | 216 msec |
| Girls | | 192 msec | 235 msec | 205 msec |
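The conversion from fall distance to reaction time follows from constant-acceleration kinematics, d = (1/2)gt^2, so t = sqrt(2d/g). A minimal sketch:

```python
import math

G = 9.81  # m/s^2, acceleration due to gravity

def drop_distance_to_time_ms(distance_cm):
    """Convert how far the meter stick fell (in cm) into reaction time (ms),
    using d = 0.5 * g * t**2  =>  t = sqrt(2 * d / g)."""
    d = distance_cm / 100.0                 # cm -> m
    return math.sqrt(2.0 * d / G) * 1000.0  # s -> ms

print(round(drop_distance_to_time_ms(20.0)))  # 202
```

A 20 cm fall works out to roughly 200 msec, consistent with the 140-300 msec range reported above.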
Within experimental error there seemed to be no M/F differences, but visual response was faster for the younger subjects than for us older folks! Ron said that in his experience for his students, usually V is fastest, then T, then A.
Ron said that student age and sex would be appropriate parameters for examining average response times. Ed Scanlon suggested left hand versus right hand (or really strong versus weak hand) as another interesting parameter to test.
And Ron, thanks, too, for another great miniteach!
Notes prepared by Benjamin Stark.
Make a cube out of straws and have a go at this practical
Reasoning about the number of matches needed to build squares that
share their sides.
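One common version of this matchstick problem builds a row of n unit squares that share sides: the first square takes 4 matches and each additional square only 3 more, for 3n + 1 in total. A sketch under that interpretation (the exact task may differ):

```python
def matches_for_row(n):
    """Matches needed for a row of n unit squares sharing sides:
    4 for the first square, 3 for each additional one (i.e. 3n + 1)."""
    if n < 1:
        return 0
    return 4 + 3 * (n - 1)

print([matches_for_row(n) for n in range(1, 6)])  # [4, 7, 10, 13, 16]
```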
How can the same pieces of the tangram make this bowl before and after it was chipped? Use the interactivity to try and work out what is going on!
Count what you see:
how many bricks? sectors? circles?
Identify how you think the pattern would continue:
how many will be in the next layer/ level?
Look at what would happen if you started (or ended) with a
different number of items:
how many will there be in total?
Submerged Aquatic Vegetation, or SAV is an underwater garden for juvenile fish and small invertebrates and a barometer of water quality.
SAV produces oxygen and detritus that is exported to other habitats, and reduces moderate turbidity and turbulence. SAV is extremely dependent on clarity of the water column for its existence.
Fish use of SAV habitat
- Over 150 fish and invertebrate species are known to use SAV as adults or juveniles, of which about 30 are important commercial fishery species.
- SAV beds provide an excellent nursery area for many species, including blue crabs, red drum, pink shrimp, spotted seatrout, and gag.
- SAV blades provide a surface for post-larval shellfish attachment, especially bay scallops, and refuge for small fish like mummichogs, pipefish, and grass shrimp.
- Large predators, like flounders, rays, and red drum forage around SAV.
Some important facts
- There are about 200,000 acres of SAV in coastal North Carolina (See habitat mapping and monitoring).
- In North Carolina, SAV usually occurs in water less than 6 ft deep because of light limitations.
- Changes in SAV coverage can be a sensitive indicator of water quality.
How’s it doing?
While high-salinity SAV appears fairly stable, with some possible expansions in southern estuaries, large losses (50% or more) of low-salinity SAV have been reported in tributaries of western Pamlico, Albemarle, and Currituck Sounds since the 1970s. However, there are recent reports of SAV recovery in some of the low-salinity areas. Reduced light availability from nutrient and sediment loading is thought to be the primary cause of losses. Research is needed, however, to quantify current distribution and cause of changes. See Threats to Habitat Index for more information.
See SAV chapter of CHPP
Shark bite scars on Sarasota Bay resident bottlenose dolphins
An analysis of shark bite scars on the Sarasota Bay resident bottlenose dolphin community and implications for habitat use
By Krystan Wilkinson, MS Student, University of Florida
January 2013 Nicks n Notches
Predator-prey relationships have long been an interest of ecologists. Such relationships are dynamic. One false move may result in an individual being taken out of a community, or a predator may go hungry if unsuccessful.
Predation itself is difficult to study due to predation events rarely being observed. We are left to rely on evidence of predation attempts through wounds and scars on the surviving prey, and on wounds on carcasses.
The frequency of scars and wounds has been used to measure predation risk, with an obvious disadvantage: we only have evidence of failed predation attempts. Thus the true rate of predation will be greater than that measured by wound and scar frequencies. Despite this disadvantage, bite frequency is still a useful measure of relative risk, allows comparisons among populations, and furthers our understanding of these complex species interactions.
The complete article can be viewed on page 21 in the January 2013 Nicks n Notches.
All Earth's water, liquid fresh water, and water in lakes and rivers. Credit: Howard Perlman, USGS
As human population grows and pollution levels rise, our demand for clean water increases but its supply dwindles. How long will it be before our planet cannot provide its population with enough clean water to survive?
Some would say that this is already happening.
Japanese design company Takram was asked to design a water bottle that could be used to ensure we get enough daily water to survive if the worst-case scenario becomes reality. But they went one step further and designed a set of cyborg organs, the Hydrolemic System, that could reduce water loss from the body in order to keep intake to a minimum.
The video shows how they would work.
According to Newton's second law...
What does this mean?
Everyone unconsciously knows the Second Law. Everyone knows that heavier objects require more force to move the same distance as lighter objects.
However, the Second Law gives us an exact relationship between force, mass, and acceleration. It can be expressed as a mathematical equation:
FORCE = MASS times ACCELERATION
This is an example of how Newton's Second Law works:
Mike's car, which weighs 1,000 kg, is out of gas. Mike is trying to push the car to a gas station, and he makes the car go 0.05 m/s/s. Using Newton's Second Law, you can compute how much force Mike is applying to the car.
Answer = 50 newtons
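The worked example can be checked in a couple of lines (a minimal sketch; the function name is illustrative):

```python
def force_newtons(mass_kg, acceleration_ms2):
    """Newton's second law: F = m * a."""
    return mass_kg * acceleration_ms2

# Mike's car: 1,000 kg accelerated at 0.05 m/s/s
print(force_newtons(1000, 0.05))  # 50.0
```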
This is easy, let's go on to
Newton's Third Law of Motion
Heliconius pardalinus
Margarita Beltrán
Early stages: Eggs are yellow and approximately 1.4 x 0.9 mm (h x w). Females usually place eggs singly on growing shoots and tendrils of the host plant. Mature larvae have a white body and black spots, black scoli, and head and anal cap are orange; length is around 2 cm. Caterpillars are gregarious in small numbers (Brown, 1981).
Heliconius pardalinus is distributed in the Upper Amazon. The map below shows an approximate representation of the geographic distribution of this species. The original data used to draw these maps are derived from Brown (1979) which is available at Keith S. Brown Jr. (1979). Ecological Geography and Evolution in Neotropical Forests .
H. pardalinus occurs from sea level to 1,200 m in riparian forests. Usually individuals fly rapidly in the middlestory. Females mate multiply and adults roost solitarily or in loose groups at night at 2-10 m above ground on twigs or tendrils.
Hostplant: H. pardalinus larvae feed primarily on plants from the subgenera Granadilla, Astrophea and Distephana (Passifloraceae) (Brown, 1981).
Bates, H. W. [1825-1892] 1862. Contributions to an insect fauna of the Amazon Valley. Lepidoptera: Heliconidae. Transactions of the Linnean Society of London 23(3): 495-566, pls. 55-56.
Brown K. S. 1981 The Biology of Heliconius and Related Genera. Annual Review of Entomology 26, 427-456.
Correspondence regarding this page should be directed to Margarita Beltrán at
Page copyright © 2010
Page: Tree of Life Heliconius pardalinus. Authored by Margarita Beltrán. The TEXT of this page is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike License - Version 3.0. Note that images and other media featured on this page are each governed by their own license, and they may or may not be available for reuse. Click on an image or a media link to access the media data window, which provides the relevant licensing information. For the general terms and conditions of ToL material reuse and redistribution, please see the Tree of Life Copyright Policies.
- First online 18 February 2007
- Content changed 12 August 2008
Citing this page:
Beltrán, Margarita. 2008. Heliconius pardalinus. http://tolweb.org/Heliconius_pardalinus/72902/2008.08.12 in The Tree of Life Web Project, http://tolweb.org/. Version 12 August 2008 (under construction).
The Chemists in the Library Working Group has compiled these electronic and print resources for further exploration of the theme of National Chemistry Week: Earth's Atmosphere and Beyond!. The resources have been classified by grade level and annotated.
- Composition of Earth's Atmosphere
- What the atmosphere is made of, how it came to be that way, and how it could change in the future.
- The forces behind weather and how to forecast it. Also a special section on storms, tornadoes, and hurricanes.
- Ozone Hole
- The history of the ozone hole and how ozone depletion affects us.
- Global Warming and Climate Change
- How scientists measure global warming, what may be causing it, and its impact.
- Extraterrestrial Atmospheres -- Mars
- All about the planet next door and the Mars Rover missions.
- The History of Flight
- The Wright Brothers, how airplanes have changed over the years, and the principles behind flight.
- A - Z
- Alphabetical list of selected resources as PDF files. More items are included on the web site.
- Reference Shelf
- Sources of basic reference information.
- How to Find More in Your Library
- Find other interesting books by searching your library's catalog or by browsing certain call numbers in your library's stacks.
- How to Find More on the Web
- Find other interesting web sites using search engines and subject directories.
Copyright © 2003 American Chemical Society
An Entirely New Phase of Matter
A liquid crystal
In 1888, the Austrian chemist Friedrich Reinitzer, working in the Institute of Plant Physiology at the University of Prague, discovered a strange phenomenon. Reinitzer was conducting experiments on a cholesterol based substance trying to figure out the correct formula and molecular weight of cholesterol. When he tried to precisely determine the melting point, which is an important indicator of the purity of a substance, he was struck by the fact that this substance seemed to have two melting points. At 145.5°C the solid crystal melted into a cloudy liquid which existed until 178.5°C where the cloudiness suddenly disappeared, giving way to a clear transparent liquid. At first Reinitzer thought that this might be a sign of impurities in the material, but further purification did not bring any changes to this behavior.
Puzzled by his discovery, Reinitzer turned for help to the German physicist Otto Lehmann, who was an expert in crystal optics. Lehmann became convinced that the cloudy liquid had a unique kind of order. In contrast, the transparent liquid at higher temperature had the characteristic disordered state of all common liquids. Eventually he realized that the cloudy liquid was a new state of matter and coined the name "liquid crystal," illustrating that it was something between a liquid and a solid, sharing important properties of both. In a normal liquid the properties are isotropic, i.e. the same in all directions. In a liquid crystal they are not; they strongly depend on direction even if the substance itself is fluid.
This new idea was challenged by the scientific community, and some scientists claimed that the newly-discovered state probably was just a mixture of solid and liquid components. But between 1910 and 1930 conclusive experiments and early theories supported the liquid crystal concept at the same time that new types of liquid crystalline states of order were discovered.
In the time of Reinitzer and Lehmann, scientists knew of only three states of matter. The general idea was that all matter normally had one melting point, where it turns from solid to liquid, and one boiling point, where it turns from liquid to gas. Water is a good example of this view. It melts at 0°C and it boils and becomes steam, which is water in its gaseous form, at 100°C.
Today, thanks to Reinitzer, Lehmann and their followers, we know that literally thousands of substances have a diversity of other states. Some of these have proved very useful in technical innovations, among which liquid crystal screens and liquid crystal thermometers may be the best known.
In the 1960s, a French theoretical physicist, Pierre-Gilles de Gennes, who had been working with magnetism and superconductivity, turned his interest to liquid crystals and soon found fascinating analogies between liquid crystals and superconductors as well as magnetic materials. His work was rewarded with the Nobel Prize in Physics in 1991. The modern development of liquid crystal science has since been deeply influenced by the work of Pierre-Gilles de Gennes.
What is So Special about Liquid Crystals?
Video showing what happens when heating up a liquid crystal display.
Liquid crystals are partly ordered materials, somewhere between their solid and liquid phases. Their molecules are often shaped like rods or plates or other forms that encourage them to align collectively along a certain direction. The order of liquid crystals can be manipulated with mechanical, magnetic or electric forces. Finally, liquid crystals are temperature sensitive, since they turn into a solid if it is too cold, and into a liquid if it is too hot. This phenomenon can, for instance, be observed on laptop screens when it is very hot or very cold.
Liquid Crystal Phases
[Figures: the twist in a cholesteric liquid crystal; liquid crystal in the smectic C phase]
Liquid crystals have two main phases, which are called the "nematic phase" and the "smectic phase."
The nematic phase is the simplest of liquid crystal phases and is close to the liquid phase. The molecules float around as in a liquid phase, but are still ordered in their orientation. When the molecules are chiral* and in the nematic phase, they arrange themselves into a strongly twisted structure that often reflects visible light in different bright colors which depend on the temperature. They can therefore be used in temperature sensors (thermometers). This special case of a nematic is often called cholesteric. The name is historic as it goes back to the substances on which Reinitzer made his discovery.
The smectic phase is close to the solid phase. The liquid crystals are here ordered in layers. Inside these layers, the liquid crystals normally float around freely, but they cannot move freely between the layers. Still, the molecules tend to arrange themselves in the same direction. The smectic phase is in turn divided into several sub-phases with slightly different properties.
* The definition of chirality is "something that cannot be superimposed on its own mirror image, like a hand."
How Do Liquid Crystal Displays Work?
To understand how a liquid crystal screen works, you must first understand the concept of light polarization. Light is made out of particles called photons. These photons travel at the speed of light. While moving, a photon vibrates in a plane which is perpendicular to its direction, but within this plane the vibration direction is random for normal (non-polarized) light.
[Figures: four arrows tilted in random directions, representing unpolarized light rays; four parallel arrows, representing polarized light]
Some processes affect the direction of vibration. For instance, if sunlight is reflected off the surface of a road, or the surface of the open sea, the reflected light contains more vibrations parallel to this surface than perpendicular to it. The light has been polarized, in this case only partly. Some materials however, like the plastic used in polarizing sunglasses, may absorb almost all vibration components along a certain direction and only let through vibrations along the perpendicular direction. In normal sunglasses the admitted vibration direction is vertical. This is why polarizing sunglasses can be used to remove the glare of reflected light from the surface of a road, or the surface of the open sea.
To polarize light you can use a polarization filter such as the one found in sunglasses. Two such polarization filters, placed one after the other along the light path with their admitting directions perpendicular to each other, will not let any light through.
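This crossed-filter behaviour can be quantified with Malus's law, I = I0·cos²θ, where θ is the angle between the light's polarization direction and the filter's admitting direction. The law is not named in the article, and the Java sketch below is an illustration rather than part of the original text:

```java
// Illustrative sketch (not from the article): Malus's law for the
// intensity a polarizing filter transmits from already-polarized light.
public class Polarizers {
    static double transmit(double intensity, double angleDegrees) {
        double cos = Math.cos(Math.toRadians(angleDegrees));
        return intensity * cos * cos; // I = I0 * cos^2(theta)
    }

    public static void main(String[] args) {
        double i0 = 1.0; // light already polarized by a first filter
        System.out.println(transmit(i0, 0));  // parallel second filter: 1.0
        System.out.println(transmit(i0, 90)); // crossed second filter: effectively 0
    }
}
```

The 90° twist that a TN cell imposes on the vibration direction is what lets light through a crossed pair in spite of this.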
So what has this to do with liquid crystal displays? Well, the conventional liquid crystal display basically consists of such a package of two crossed polarizers with a liquid crystal in between. If the molecules lie perpendicular to the plane of the polarizers, i.e. along the direction of the light ray, they have no influence on the state of polarization. Thus, the package of crossed polarizers lets no light through. The cell appears black. On the other hand, if the molecules are arranged to lie parallel to the plane of the polarizers, i.e. in the plane in which the light vibrates, the presence of the liquid crystal will strongly affect the state of polarization.
Video showing how light responds to polarization filters.
In the so-called twisted nematic (or TN) display, the molecules are arranged in this way. More specifically, the glass surfaces are treated such that the molecular direction is parallel to the admitting direction of each neighboring polarizer. Because these directions are crossed, the molecular direction is confined to a 90° twist from one side of the cell to the other (see figure).
In this case, what happens is that the light vibration follows this twist from one polarizer to the other, so that all light in fact passes the cell, without being absorbed, in spite of the fact that the polarizers are crossed. Hence, the cell appears bright.
As mentioned earlier, liquid crystals are sensitive to electric forces. If you apply an electric field that is strong enough across a liquid crystal of the right kind, its molecules arrange themselves parallel to the electric field. So now, by applying a voltage across the liquid crystal cell, i.e. along the light direction, you destroy the twist and instead force the molecules into the direction in which they do not affect the polarization state of the light. All light is now absorbed by the crossed polarizers and the cell appears black when the electric field is turned on.
By creating a matrix of squares that locally control the state of the twist in their respective area you get a liquid crystal display containing a large number of individual picture elements (pixels).
First published 9 September 2003
|The Nobel Prize in Physics 1991 - Pierre-Gilles de Gennes »|
|Play the liquid crystal games »|
The Impulse-Momentum Change Theorem
Improve your problem-solving skills with problems, answers and solutions from The Calculator Pad.
Flickr Physics
Visit The Physics Classroom's Flickr Galleries and take a visual tour of the topic of momentum and collisions.
Flickr Physics
Some photographers have too much phun, like the one who took this photo.
Enjoy a rich source of instructional, demonstration and lab ideas on the topic of momentum and impulse.
Curriculum Corner
Learning requires action. Give your students this sense-making activity from The Curriculum Corner.
Treasures from TPF
Need ideas? Need help? Explore The Physics Front's treasure box of catalogued resources on momentum.
The sports announcer says, "Going into the all-star break, the Chicago White Sox have the momentum." The headlines declare "Chicago Bulls Gaining Momentum." The coach pumps up his team at half-time, saying "You have the momentum; the critical need is that you use that momentum and bury them in this third quarter."
Momentum is a commonly used term in sports. A team that has the momentum is on the move and is going to take some effort to stop. A team that has a lot of momentum is really on the move and is going to be hard to stop. Momentum is a physics term; it refers to the quantity of motion that an object has. A sports team that is on the move has the momentum. If an object is in motion (on the move) then it has momentum.
Momentum can be defined as "mass in motion." All objects have mass; so if an object is moving, then it has momentum - it has its mass in motion. The amount of momentum that an object has is dependent upon two variables: how much stuff is moving and how fast the stuff is moving. Momentum depends upon the variables mass and velocity. In terms of an equation, the momentum of an object is equal to the mass of the object times the velocity of the object.
Momentum = mass • velocity
p = m • v
where m is the mass and v is the velocity. The equation illustrates that momentum is directly proportional to an object's mass and directly proportional to the object's velocity.
The units for momentum would be mass units times velocity units. The standard metric unit of momentum is the kg•m/s. While the kg•m/s is the standard metric unit of momentum, there are a variety of other units that are acceptable (though not conventional) units of momentum. Examples include kg•mi/hr, kg•km/hr, and g•cm/s. In each of these examples, a mass unit is multiplied by a velocity unit to provide a momentum unit. This is consistent with the equation for momentum.
Momentum is a vector quantity. As discussed in an earlier unit, a vector quantity is a quantity that is fully described by both magnitude and direction. To fully describe the momentum of a 5-kg bowling ball moving westward at 2 m/s, you must include information about both the magnitude and the direction of the bowling ball. It is not enough to say that the ball has 10 kg•m/s of momentum; the momentum of the ball is not fully described until information about its direction is given. The direction of the momentum vector is the same as the direction of the velocity of the ball. In a previous unit, it was said that the direction of the velocity vector is the same as the direction that an object is moving. If the bowling ball is moving westward, then its momentum can be fully described by saying that it is 10 kg•m/s, westward. As a vector quantity, the momentum of an object is fully described by both magnitude and direction.
From the definition of momentum, it becomes obvious that an object has a large momentum if either its mass or its velocity is large. Both variables are of equal importance in determining the momentum of an object. Consider a Mack truck and a roller skate moving down the street at the same speed. The considerably greater mass of the Mack truck gives it a considerably greater momentum. Yet if the Mack truck were at rest, then the momentum of the least massive roller skate would be the greatest. The momentum of any object that is at rest is 0. Objects at rest do not have momentum - they do not have any "mass in motion." Both variables - mass and velocity - are important in comparing the momentum of two objects.
The momentum equation can help us to think about how a change in one of the two variables might affect the momentum of an object. Consider a 0.5-kg physics cart loaded with one 0.5-kg brick and moving with a speed of 2.0 m/s. The total mass of the loaded cart is 1.0 kg and its momentum is 2.0 kg•m/s. If the cart was instead loaded with three 0.5-kg bricks, then the total mass of the loaded cart would be 2.0 kg and its momentum would be 4.0 kg•m/s. A doubling of the mass results in a doubling of the momentum.
Similarly, if the 2.0-kg cart had a velocity of 8.0 m/s (instead of 2.0 m/s), then the cart would have a momentum of 16.0 kg•m/s (instead of 4.0 kg•m/s). A quadrupling in velocity results in a quadrupling of the momentum. These two examples illustrate how the equation p = mv serves as a "guide to thinking" and not merely a "plug-and-chug recipe for algebraic problem-solving."
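The cart examples can be checked numerically. This short Java sketch (not part of the original lesson) reproduces them directly from p = m • v:

```java
// Sketch reproducing the loaded-cart examples: p = m * v (kg, m/s, kg*m/s).
public class Momentum {
    static double momentum(double massKg, double velocityMps) {
        return massKg * velocityMps;
    }

    public static void main(String[] args) {
        System.out.println(momentum(1.0, 2.0)); // cart + 1 brick: 2.0 kg*m/s
        System.out.println(momentum(2.0, 2.0)); // doubled mass: 4.0 kg*m/s
        System.out.println(momentum(2.0, 8.0)); // quadrupled velocity: 16.0 kg*m/s
    }
}
```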
Express your understanding of the concept and mathematics of momentum by answering the following questions. Click the button to view the answers.
1. Determine the momentum of a ...
a. 60-kg halfback moving eastward at 9 m/s.
b. 1000-kg car moving northward at 20 m/s.
c. 40-kg freshman moving southward at 2 m/s.
2. A car possesses 20 000 units of momentum. What would be the car's new momentum if ...
a. its velocity was doubled.
b. its velocity was tripled.
c. its mass was doubled (by adding more passengers and a greater load)
d. both its velocity was doubled and its mass was doubled.
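As a quick numeric cross-check of the first two problems (computed directly from p = m • v, not taken from the page's answer buttons):

```java
// Cross-check for practice problems 1 and 2 using p = m * v.
public class PracticeCheck {
    public static void main(String[] args) {
        // Problem 1
        System.out.println(60 * 9);    // a. 540 kg*m/s, east
        System.out.println(1000 * 20); // b. 20000 kg*m/s, north
        System.out.println(40 * 2);    // c. 80 kg*m/s, south
        // Problem 2: the car starts with 20 000 units of momentum
        System.out.println(20000 * 2); // a. doubled velocity: 40000 units
        System.out.println(20000 * 3); // b. tripled velocity: 60000 units
        System.out.println(20000 * 2); // c. doubled mass: 40000 units
        System.out.println(20000 * 4); // d. both doubled: 80000 units
    }
}
```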
3. A halfback (m = 60 kg), a tight end (m = 90 kg), and a lineman (m = 120 kg) are running down the football field. Consider their ticker tape patterns below.
Compare the velocities of these three players. How many times greater are the velocity of the halfback and the velocity of the tight end than the velocity of the lineman?
Which player has the greatest momentum? Explain.
This is a printer version of an UnderwaterTimes.com
To view the article online, visit: http://www.underwatertimes.com/news.php?article_id=80274351016
Squid (Gonatus onyx) carrying egg mass. (Image courtesy of Monterey Bay Aquarium Research Institute)
Rhode Island -- Squid have always been considered poor parents: they lay their eggs on the seafloor and leave them to develop on their own. But a University of Rhode Island scientist has made the first observation of parental care by squid when he used a remotely operated underwater vehicle in the deep sea to watch as five squid each carried thousands of eggs in their arms.
The observations made in Monterey Canyon off California in 2000 and 2002 are reported in the Dec. 15 issue of the journal Nature.
“Our finding is unexpected because this behavior differs from the reproductive habits of all other known squid species,” wrote Brad Seibel, a URI assistant professor of biological sciences who collaborated with colleagues at the Monterey Bay Aquarium Research Institute on the discovery.
Gonatus onyx is one of the most abundant species of squid in the Atlantic and Pacific Oceans, but because it spawns at great depths it has been difficult to observe its reproductive behaviors.
Spectacular video and photographic images captured by Seibel show the squid transporting a tubular pouch of some 2,000 to 3,000 eggs attached to hooks under its arms. After several months, the mature eggs break away from the pouch and hatch before setting out on their own.
According to Seibel, repeated extension of the squid’s arms appeared to be an intentional effort to flush water through the eggs to aerate them in the oxygen-starved waters found at depths of 5,000 to 7,000 feet off California.
Seibel’s discovery was also unexpected because it was thought that the arm and mantle muscles of squid deteriorate soon after sexual maturation, rendering the adult squid incapable of carrying its eggs. Seibel said this may still be somewhat true, because the squid he observed were unable to swim as efficiently as unencumbered ones, making them more likely to be preyed upon by whales and seals.
Small wetlands in this large area have hosted migratory birds for a long time, but with changes in agricultural practice and regional climate those habitats may not remain hospitable to the wild populations.
Assessment of the importance of the Conservation Reserve Program in preventing the decline of grassland breeding birds by preserving grassland habitats in North Dakota. Published as Wilson Bulletin v. 107 no. 4, pp. 709-718 (1995).
Home page of the Forest and Rangeland Ecosystem Science Center, Corvallis, providing research and technical support for ecosystem management in the western U.S. Links to projects, field stations, fact sheets, partnerships, and publications.
Catalog of bird species common to forest and rangeland habitats in the U.S. with natural histories including taxonomic information, range, and habitat descriptions to assist land managers in resource management. Text available as a *.zip file.
Description of the use of a miniature video-camera system deployed at nests of passerine species in North Dakota to videotape predation of eggs or nestlings by animals such as mice, ground squirrels, deer, cowbirds and others.
We conducted a national landowner survey, evaluated short-term vegetation responses to land management practices (primarily grazing, haying, and burning), and initiating a long-term vegetation monitoring study for wetland buffers.
Study to identify grasslands that may be suitable for cellulosic feedstock production. Producing ethanol from non-cropland areas such as grassland will minimize the effects of biofuel developments on global food supplies.
Homepage for the Northern Prairie Wildlife Research Center, Jamestown, ND, with links to announcements, science programs, biological resources finder, publications search option, contacts, and answers to common questions about the Center
TRMM Observatory and Instruments
GSFC designed, built and tested the observatory "in house" at its Greenbelt, Md., facility. At launch, the observatory weighed 7,920 lbs. (3,600 kg). It is about 17 feet tall (approximately 5 meters) and 12 feet (3.6 meters) in diameter. A gallium arsenide solar array/nickel cadmium battery power subsystem provides 1,100 watts of load power to the satellite.
A three-axis attitude control subsystem stabilizes the observatory and keeps the instruments pointing toward Earth to within 0.2 degrees. A command and data handling subsystem provides onboard commanding, data collection, processing and storage. This subsystem uses state-of-the-art technology employing a fiber optic data bus and solid state recorders.
A reaction control subsystem maintains the orbit at approximately 217 miles (350 km). Data for each orbit is stored on board and transmitted to the ground by the communication subsystem through TDRSS once per orbit.
The observatory instruments for primary rainfall measurements are a precipitation radar, a multi-frequency microwave radiometer and a visible/infrared radiometer. For observations related to precipitation, NASA added a Lightning Imaging Sensor (LIS) and a Clouds and the Earth's Radiant Energy System (CERES). A brief description of the five instruments follows:
The PR determines the vertical distribution of precipitation by measuring the "radar reflectivity" of the cloud systems and the weakening of a signal as it passes through the precipitation. A unique feature of the PR is the measurement of rain over land, where passive microwave channels have more difficulty.
The TRMM Microwave Imager (TMI) is a multi-channel radiometer, whose signals in combination can measure rainfall quite accurately over oceans and somewhat less accurately over the land. The TMI and PR data, will yield the primary precipitation data sets.
The VIRS measures radiance in five bandwidths from the visible through the infrared spectral regions. Scientists use infrared (IR) data to make rough estimates of tropical precipitation. The VIRS, PR and TMI data will help improve the techniques by which scientists use IR data from other satellites to calculate rainfall. This is the third component of TRMM's rain package.
The LIS is an optical telescope and filter imaging system that will investigate the distribution and variability of both atmospheric and cloud-to-ground lightning over the Earth. These instruments will contribute to our understanding of storm dynamics and will be correlated to levels of precipitation and the release of latent heat.
The CERES is a visible/infrared sensor designed especially to measure energy rising from the surface of the Earth and the atmosphere including its constituents (e.g., clouds and aerosols). This energy, when balanced against the energy received by the Earth from the Sun, constitutes the Earth's radiation budget. Understanding the radiation budget, from the top of the atmosphere to the Earth's surface, is important to understanding climate and its variability.
I really don't get this, and need someone professional to explain this to me.
XYZ has coordinates X(12,9), Y(-1,4), and Z(8,16). After a dilation with a center at the origin, X' is (3,1.8). What is the scale factor? What are the coordinates of Y' and Z'?
I can see why you are confused by this problem! But first, let's review dilations.
A dilation is a transformation that stretches or shrinks the original figure to produce a similar figure; the new figure is the same shape as the original, but a different size.
The figure is enlarged or reduced with respect to a fixed point called the center of dilation.
Remember ratios of similar figures? The scale factor is the ratio of the length of a side of the new figure to the corresponding side length of the original figure. For example, if the scale factor is 1/2, each side of the new figure will be 1/2 the size of the original. Also, when the scale factor is between 0 and 1, the dilation is a reduction. A scale factor greater than one would result in an enlargement.
Further, the scale factor is also the ratio of the distances from the center of dilation to the corresponding points of the figure (new distance to original distance).
If the center of dilation is the origin, then the coordinates are multiplied by the scale factor: (x,y) -> (kx, ky) where k is the scale factor.
To solve a problem like the one you presented, determine the scale factor by dividing the coordinates of X' by the corresponding coordinates of X. Then, multiply the other coordinates by that scale factor.
What is confusing about the problem you presented is that the scale factor is not consistent: 3/12 = .25, 1.8/9 = .2. If you plot them on the coordinate plane, they do not line up with the origin. I'm guessing that there is a typo in either your original problem or the way that you entered it here.
Hope the explanation helps!
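The rule above, (x, y) -> (kx, ky) for a dilation centered at the origin, can be sketched in a few lines of Java. The scale factor 0.25 below is illustrative only, since (as noted above) the stated X' gives inconsistent ratios and no single k fits the problem exactly:

```java
// Dilation centered at the origin: multiply each coordinate by the scale factor k.
public class Dilation {
    static double[] dilate(double k, double x, double y) {
        return new double[] { k * x, k * y };
    }

    public static void main(String[] args) {
        double k = 0.25; // illustrative scale factor (a reduction, since 0 < k < 1)
        double[] yPrime = dilate(k, -1, 4);
        double[] zPrime = dilate(k, 8, 16);
        System.out.println(yPrime[0] + ", " + yPrime[1]); // -0.25, 1.0
        System.out.println(zPrime[0] + ", " + zPrime[1]); // 2.0, 4.0
    }
}
```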
You can think of the points X, Y, Z as vectors originating from the origin. Each of these "vectors" would have a magnitude associated with it, found by using the Pythagorean Theorem.
For example, the magnitude of X would be sqrt(12^2 + 9^2) = 15.
After the dilation, the magnitude of X became sqrt(3^2 + 1.8^2) = 3.50.
So the dilation caused X to shrink by a factor of 3.50/15 = 0.233. This factor is the scale factor.
In addition to being scaled, the graph was also rotated. You can tell because the angle that X makes with the x-axis [arctan(9/12) = 0.644] is not the same as the angle that X' makes with the x-axis [arctan(1.8/3) = 0.540]. The difference in these angles is the amount that the graph was rotated by [0.540 - 0.644 = -0.103 radians].
To find Y' and Z', simply take the Y and Z vectors through the same scaling and rotating transformations. I will walk you through how to do this for Y below.
To scale Y, multiply its coordinates by the scale factor.
Yscaled = (0.233*-1, 0.233*4) = (-0.233, 0.932)
To rotate Y, left-multiply Yscaled by the rotation matrix for -0.103 radians.
Y'x = -0.233*cos(-0.103) - 0.932*sin(-0.103) = -0.136
Y'y = -0.233*sin(-0.103) + 0.932*cos(-0.103) = 0.951
This gives Y' = (-0.136, 0.951).
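The scale-then-rotate computation above can be packaged as a single transform. This is a sketch of the same idea; carrying full precision through gives Y'y ≈ 0.952, while the rounded intermediate values above give 0.951:

```java
// Scale by k about the origin, then rotate by angle (radians) about the origin.
public class ScaleRotate {
    static double[] transform(double k, double angle, double x, double y) {
        double sx = k * x, sy = k * y; // scale first
        double c = Math.cos(angle), s = Math.sin(angle);
        return new double[] { sx * c - sy * s, sx * s + sy * c }; // then rotate
    }

    public static void main(String[] args) {
        double k = Math.hypot(3, 1.8) / Math.hypot(12, 9);     // |X'| / |X|, about 0.233
        double angle = Math.atan2(1.8, 3) - Math.atan2(9, 12); // about -0.103 rad
        double[] xPrime = transform(k, angle, 12, 9); // sanity check: recovers X' = (3, 1.8)
        double[] yPrime = transform(k, angle, -1, 4); // about (-0.136, 0.952)
        System.out.printf("X' = (%.3f, %.3f)%n", xPrime[0], xPrime[1]);
        System.out.printf("Y' = (%.3f, %.3f)%n", yPrime[0], yPrime[1]);
    }
}
```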
Hope this helps!
What Is The Madison Aquifer?
The Madison aquifer is that part of the Madison Limestone that is saturated with ground water. The Madison Limestone is a rock formation, composed of limestone, that is exposed in the Black Hills area. It is sometimes called Pahasapa Limestone. It is a hard, crystalline rock that is composed of calcium carbonate and calcium-magnesium carbonate. This rock is slightly soluble in rainwater, enough so that caves and passageways have been dissolved in the Madison Limestone by infiltrating ground water . Well-known caves, such as Wind Cave and Jewel Cave, were formed in this way in the Madison Limestone. Water in the Madison aquifer is contained in these underground caves and fractures. Because the Madison aquifer contains drinking-quality water in the Black Hills area, many wells are drilled into it for water supplies in places such as Rapid City and Spearfish.
Where Is The Madison Aquifer Found?
The Madison Limestone is exposed in a band around the Black Hills uplift (shown as limestone plateau and limestone in Figure 1), but it is also present in the subsurface beneath the ground in parts of Wyoming, Montana, North Dakota, South Dakota, and Nebraska. It extends into the region of South Dakota east of the Missouri River. It is not present in the higher Black Hills in the area of Mount Rushmore and Harney Peak, because the limestone layer has been eroded away in these locations.
Why Is It Important?
About 90% of South Dakota's population relies on ground water from aquifers such as the Madison for drinking water supplies. The Madison aquifer is vitally important because it contains approximately 66 million acre-feet of drinking-quality water in South Dakota. Cities such as Rapid City use water from wells drilled into the Madison aquifer. Unfortunately, in some places the aquifer is too far beneath the surface for the water to be economically pumped for use. The water in the Madison will become more important in the future, as South Dakota's population grows and more people require water from scarce and dwindling supplies.
Heat in the Earth's interior, where the Madison is deep below the ground, causes some of the water in the aquifer to be very warm. Near Philip and Midland, the Madison contains water that is almost 160 degrees Fahrenheit (71 degrees C). This makes the Madison a valuable geothermal resource, although this hot water is often unsuitable for drinking because of its high mineral content. This heat is used to warm a school and some municipal buildings and homes.
How Productive Are Wells In The Madison Aquifer?
The Madison is one of South Dakota's most productive aquifers, but well yields can vary tremendously. Some wells in the Rapid City and Spearfish areas can produce more than 1,000 gallons (about 3800 liters) per minute. Some of these are flowing artesian wells that will produce more than 500 gallons (almost 1900 liters) per minute without pumping. Other wells sometimes are less productive and might yield only 20 to 30 gallons (75 to 113 liters) per minute.
The outcrop area where the Madison Limestone is exposed around the Black Hills is the recharge zone of the aquifer, where rainwater and snowmelt infiltrate into the rock and replenish this underground reservoir. Water in the Madison aquifer flows through fractures, pores, and caves in the rock, and usually does not receive the natural filtering that most ground water undergoes as it seeps through soil and sediments. Some streams, such as Boxelder Creek and Spring Creek, lose their flow to sinkholes in the Madison. Protection of the aquifer's recharge areas from pollution by sewage, gasoline, and industrial activities can help preserve the water quality of the aquifer.
Acre-foot - the amount of water that will cover one acre of land to a depth of one foot.
Artesian well - a well in which the water is under sufficient pressure that it rises partway up in the well without the assistance of pumping. A flowing artesian well will flow at the surface.
Geothermal - referring to the heat from the earth's interior.
Ground water - all water below the land surface.
Recharge zone - that area through which water in an aquifer is replenished either by runoff, by infiltration from precipitation, or by underground flow from connected aquifers.
Sinkhole - a funnel-shaped depression in the land surface through which surface water drains into underground channels.
Arden D. Davis, Dept. of Geology and Geological Engineering, SDSM&T, Rapid City, SD. 1995
Ralph K. Davis, Dept. of Geology, University of Arkansas, Fayetteville, AR 72701.
Publication of the Madison Aquifer fact sheet was funded by the West Dakota Water Development District, Rapid City, SD.
Discover the cosmos! Each day a different image or photograph of our fascinating universe is featured, along with a brief explanation written by a professional astronomer.
2007 May 14
Explanation: When passing Earth on your way to Jupiter, what should you look for? That question arose for the robotic Galileo spacecraft that soundlessly coasted past the Solar System's most photographed orb almost two decades ago. The Galileo spacecraft, although originally launched from Earth, coasted past its home world twice in an effort to gain speed and shorten the duration of its trip to Jupiter. During Galileo's first Earth flyby in late 1990, it made a majestically silent home movie of our big blue marble rotating by taking images almost every minute during a 25-hour period. The above picture is one frame from this movie -- clicking on this frame will put it in motion (in many browsers). Visible on Earth are vast blue oceans, swirling white clouds, large golden continents, and even one continent frozen into a white sheet of water-ice. As Galileo passed, it saw a globe that not only rotated but began to recede into the distance. Galileo went on to a historic mission uncovering many secrets and mysteries of Jupiter over the next 14 years, before performing a final spectacular dive into the Jovian atmosphere.
Authors & editors:
Jerry Bonnell (USRA)
NASA Official: Phillip Newman Specific rights apply.
A service of: ASD at NASA / GSFC
& Michigan Tech. U.
Building on my recent discussion of common method names, I wanted to add a little more detail on factory methods. Of course, this relates to Java - other languages have different conventions.
One of the habits I have got into is using factory methods for immutable classes and constructors for mutable ones. Like all rules, this isn't applied 100%, but it is a good place to start.
One of the benefits of factories is the ability to change the implementation class (such as to an optimised subclass). This can tackle performance issues without affecting the main application. Similarly, the factory might return a cached instance if the class is immutable. The JDK Integer.valueOf() method is a good example of applying a cache (although sadly there are more negative knock-on implications in Java).
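A sketch of such a caching factory on a hypothetical immutable class (the class name and the unbounded cache are illustrative; Integer.valueOf() itself caches only a small range of values):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical immutable class whose factory may return a cached instance.
public final class Temperature {
    private static final Map<Integer, Temperature> CACHE = new ConcurrentHashMap<>();
    private final int degrees;

    private Temperature(int degrees) { this.degrees = degrees; }

    // Callers cannot tell whether they got a new or a cached instance,
    // which is safe precisely because the class is immutable.
    public static Temperature valueOf(int degrees) {
        return CACHE.computeIfAbsent(degrees, Temperature::new);
    }

    public int degrees() { return degrees; }
}
```

Because the factory hides the constructor, the implementation could also later return an optimised subclass without changing any calling code.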
But, what name should be used to define these factories?
The JDK default choice was
Foo.valueOf(). This name is perfectly usable, and has the benefit of being well-known. But it is a bit more wordy than necessary.
The JDK added to this convention with the
Foo.of() naming pattern. This is my favourite choice, as "of" is short and clear for most cases.
Duration.ofSeconds() (example). These factories are normally a little more complex in how they go about manipulating the input parameters into the state of the class, and the Javadoc of the factory will tend to be a little more complex.
Foo.of() itself (no descriptive suffix) should be reserved for the most common case where there is no confusion as to meaning. This will typically take parameters that relate very simply to the internal state of the class (example). For example, there should be relatively little complication in the Javadoc that describes what the factory does.
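As a sketch of this pattern (the Money class below is my own invention, not from the post): an immutable class with a private constructor, so the static of() factory is the only way in.

```java
// Hypothetical immutable value class illustrating the of() factory idiom.
final class Money {
    private final String currency;
    private final long amount;

    // Private constructor: every caller must go through the factory.
    private Money(String currency, long amount) {
        this.currency = currency;
        this.amount = amount;
    }

    // The factory is free to validate, cache instances, or return an
    // optimised subclass later without changing any call site.
    static Money of(String currency, long amount) {
        return new Money(currency, amount);
    }

    String currency() { return currency; }
    long amount()     { return amount; }
}
```

Typing `Money.of` and letting the IDE auto-complete then lists the principal factories in one place, which is the usability point the post goes on to make.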
Sometimes, the "of" prefix doesn't make sense. So within a given API it may make sense to deviate. In ThreeTen/JSR-310 I have a variety of other common factory names.
DateTime.now() variants create an instance of the class with the current time. This could be
DateTime.ofNow(), but the functionality of the factory feels sufficiently different to justify its own specific factory name. (example)
Similarly, I use a specific factory name for parsing -
DateTime.parse(). (example). Parsing is a very specific operation that really justifies standing out in the API. Note that if the string being parsed is effectively just an identifier, and therefore the actual state of the class, then I would use "of", not "parse".
For these more complex cases, its about clarity. For example,
Duration.between(a,b) to calculate the duration between two instants, is a lot clearer than
There are lots of possible alternatives:
Foo.for() (which requires a suffix to make a valid method name!), and many more. The advantage of "from" is that it is the opposite of "to", when converting from another type. However, in most cases, I find the consistency and simplicity of using "of" as a general factory prefix as being more useful.
And of course a key advantage of method prefix naming strategies is that you can type
Foo.of and do your IDE auto-complete to see all the available principal factories. That is a key API usability feature.
Finally, I particularly dislike factory methods that start with "get", like
Foo.getInstance(). These are really confusing in all circumstances, but especially in an IDE where auto-complete after "get" really shouldn't show a factory.
I like to use
Foo.of() for most of my factory methods, supplemented by
Foo.parse() and other specialist variants. I also try to use factory methods for immutable classes, and constructors for mutable classes where possible and sensible.
Any thoughts on this? Comments welcome...
Chemistry of Hydrogen
Hydrogen is one of the most important elements in the world. It is all around us. It is a component of water (H2O), fats, petroleum, table sugar (C6H12O6), ammonia (NH3), and hydrogen peroxide (H2O2)—things essential to life, as we know it. This module will explore several aspects of the element and how they apply to the world.
History of Hydrogen
Hydrogen was first isolated and shown to be a discrete element by Henry Cavendish in 1766. Before that, Robert Boyle and Paracelsus both used reactions of iron and acids to produce hydrogen gas. Antoine Lavoisier gave hydrogen its name because it produced water when ignited in air. Hydrogen comes from Greek meaning “water producer” (“hydro” =water and “gennao”=to make).
Properties of Hydrogen
Hydrogen is a nonmetal. It is placed above group 1 in the periodic table because it has an ns1 electron configuration like the alkali metals. However, it varies greatly from the alkali metals, as it forms cations (H+) much more reluctantly. Hydrogen's ionization energy is 1312 kJ/mol, while lithium (the alkali metal with the highest ionization energy) has an ionization energy of 520 kJ/mol.
Because hydrogen is a nonmetal and forms H- (hydride anions), it is sometimes placed above the halogens in the periodic table. Hydrogen also forms H2 dihydrogen like halogens. However, hydrogen is very different from the halogens. Hydrogen has a much smaller electron affinity than the halogens.
H2, dihydrogen or molecular hydrogen, is non-polar with two electrons. There are weak attractive forces between H2 molecules, resulting in low boiling and melting points. However, H2 has very strong intramolecular forces; H2 reactions are generally slow at room temperature due to the strong H—H bond. H2 is easily activated by heat, irradiation, or catalysis. Activated hydrogen gas reacts very quickly and exothermically with many substances.
Hydrogen also has an ability to form covalent bonds with a large variety of substances. Because it makes strong O—H bonds, it is also considered a good reducing agent for metal oxides. Example: CuO(s) + H2(g) → Cu(s) + H2O(g). Here H2(g) passes over CuO(s) to reduce the Cu2+ to Cu(s), while being oxidized itself.
Reactions of Hydrogen
Hydrogen gas reacting with oxygen to produce water and a large amount of heat.
5) Reactions with Transition Metals: Reactions of hydrogen with Transition metals (Group 3-12) form metallic hydrides. There is no fixed ratio of hydrogen atom to metal because the hydrogen atoms fill holes between metal atoms in the crystalline structure.
Uses & Application
Hydrogen is very important to the world. About 70% of the hydrogen produced is used in the Haber process, which is a process of fixing nitrogen gas into ammonia (a usable form by plants). Without the Haber process, we would not be able to grow the huge amounts of crops we grow today.
Hydrogen is also used for the hydrogenation of oils. Hydrogenation entails replacing double bonds in oils with hydrogen, converting the double bonds into single bonds. This transformation of unsaturated fats to saturated fats drastically increases the shelf life of many foods. However, an increased consumption of saturated fats has been linked to a greater risk of heart disease, high cholesterol, and certain types of cancer.
Because hydrogen is a good reducing agent, it is used to produce metals like iron, copper, nickel, and cobalt from their ores.
Liquid hydrogen (combined with liquid oxygen) is a major component of rocket fuel (as mentioned above, the combination of hydrogen and oxygen releases a huge amount of energy).
Because one cubic foot of hydrogen can lift about 0.07 lbs, hydrogen-lifted airships, or Zeppelins, became very common in the early 1900s. However, the use of hydrogen for this purpose was largely discontinued around World War II after the explosion of the Hindenburg. The Hindenburg prompted greater use of inert helium, rather than flammable hydrogen, for air travel.
Video Showing the explosion of The Hindenburg. (Video from Youtube)
Recently, due to the fear of fossil fuels running out, extensive research is being done on hydrogen as a source of energy. Because of their moderately high energy densities, liquid hydrogen and compressed hydrogen gas are possible fuels for the future. A huge advantage in using them is that their combustion only produces water (it burns "clean"). However, it is very costly, and not economically feasible with current technology.
Combustion of fuel produces energy that can be converted into electrical energy when energy in the steam turns a turbine to drive a generator. However, this is not very efficient because a great deal of energy is lost as heat. The production of electricity using a voltaic cell can yield more electricity (a form of usable energy). Voltaic cells that transform the chemical energy in fuels (like H2 and CH4) are called fuel cells. These are not self-contained and so are not considered batteries. The hydrogen cell is a type of fuel cell involving the reaction between H2(g) and O2(g) to form liquid water; this cell is twice as efficient as the best internal combustion engine. In the cell (in basic conditions), oxygen is reduced at the cathode, while hydrogen is oxidized at the anode.
Reduction: O2(g)+2H2O(l)+4e- → 4OH-(aq)
Oxidation: H2(g) + 2OH-(aq) → 2H2O(l) + 2e-
Overall: 2H2(g) + O2(g) → 2H2O(l)
E°cell = E°(reduction) - E°(oxidation) = E°(O2/OH-) - E°(H2O/H2) = 0.401 V - (-0.828 V) = +1.23 V
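As a cross-check on the +1.23 V figure (using the standard electrochemical relation, which is not spelled out in the original text), the cell potential corresponds to the standard free-energy change:

```latex
\Delta G^\circ = -nFE^\circ_{\text{cell}}
             = -(4)(96\,485\ \mathrm{C\,mol^{-1}})(1.23\ \mathrm{V})
             \approx -475\ \mathrm{kJ}
```

for the overall reaction as written (n = 4 electrons per 2 mol H2O), i.e. roughly -237 kJ per mole of liquid water formed, consistent with tabulated values for this reaction.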
However, this technology is far from being used in everyday life due to its great costs.
Image of A Hydrogen Fuel Cell. (Image made by Ridhi Sachdev)
Natural Occurrence & Other Sources
Naturally Occurring Hydrogen
Hydrogen is the fuel for reactions of the Sun and other stars (fusion reactions). (Video from Youtube)
Hydrogen is the lightest and most abundant element in the universe. About 70%-75% of the universe is composed of hydrogen by mass. All stars are essentially large masses of hydrogen gas that produce enormous amounts of energy through the fusion of hydrogen atoms at their dense cores. In smaller stars, hydrogen atoms collide and fuse to form helium and other light elements like nitrogen and carbon (essential for life). In larger stars, fusion produces both lighter and heavier elements like calcium, oxygen, and silicon.
On Earth, hydrogen is mostly found in association with oxygen, its most abundant form being water (H2O). Hydrogen makes up only 0.9% of the Earth by mass and 15% by volume, despite water covering about 70% of the planet. Because hydrogen is so light, there is only 0.5 ppm (parts per million) in the atmosphere, which is a good thing considering it is extremely flammable.
Other Sources of Hydrogen
1) Hydrogen gas can be prepared by reacting a dilute strong acid like hydrochloric acid with an active metal. The metal is oxidized, while the H+ (from the acid) gets reduced to hydrogen gas. This method is only practical for producing small amounts of hydrogen in the lab, but is much too costly for industrial production: Zn(s) + 2H+(aq) → Zn2+(aq) + H2(g)
2) The purest form of H2(g) can come from electrolysis of H2O(l), the most common hydrogen compound on this planet. This method is also not commercially viable because it requires a huge amount of energy (about 572 kJ): 2H2O(l) → 2H2(g) + O2(g) ΔH° = +572 kJ
3) H2O is the most abundant form of hydrogen on the planet, so it seems logical to try to extract hydrogen from water without electrolysis. To do so, we must reduce the hydrogen from the +1 oxidation state to the 0 oxidation state (in hydrogen gas). Three commonly used reducing agents are carbon (in coke or coal), carbon monoxide, and methane. These react with water vapor to form H2(g):
C(s) + H2O(g) → CO(g) + H2(g)
CO(g) + H2O(g) → CO2(g) + H2(g)
Reforming of Methane:
CH4(g) + H2O(g) → CO(g) + 3H2(g)
These three methods are industrially feasible (cost effective) methods of producing H2(g).
Figure: Three Hydrogen Isotopes (Image made by Ridhi Sachdev)
Data: heavy metals in till.
The data set includes 1407 samples from till (moraine) over an area of 100
x 100 kilometers. Variables are: X, Y (sample coordinates, m),
Cu (copper), Co (cobalt), Ni (nickel), Pb (lead), V (vanadium),
Zn (zinc) (contents, ppm=parts per million). Glacial till is a
mixture of rock material, and the levels of metals in till represent
the natural geochemical background. There are three main rock
types present within the area: granites (70% of the area), acid
volcanic rocks (20%), and basic rocks (10%). The concentrations
of heavy metals vary from 0 to 50-200 ppm (Figure 1). The frequency
distributions of values are positively skewed, with a varying number
of outliers for different metals.
The first step is to identify and deal with outliers. After that, we are
interested in identifying and quantifying some "multivariate
fingerprints" (associations of metals) which are related
to known bedrock types. Normally the geochemical data would be
transformed (or outliers removed) in order to proceed with uni-
and multivariate statistical analysis. The outliers may, however,
contain a lot of valuable information. Analysis with GIS techniques
assume interpolation of the point data, and discovering the relations
between elements would mostly be based on visual displays.
At first look, the outliers can be easily identified, and there are no
visible clusters in the data (Figure 1). From the scatterplot
display one can see that most elements are more or less positively
correlated with each other, except for Pb (Figure 2).
Samples can be brushed to see if there are any atypical ones. For example,
looking at the highest values for Ni, the brushed samples (Figure
3) are located mostly in the eastern part of the area (Figure
4) and certainly belong to the same type of rock, which is enriched
with Ni, Co, Cu, V and Zn (basic rocks, confirmed by a bedrock
map). To study the natural geochemical anomalies within the area, we
can limit the ranges of concentrations of interest to the last
10% of the samples in the high range. For example, we are interested
in metal Pb and its relations with other metals (the scatterplots
showed no or weak correlations). The 90th percentile value for
Pb is about 35 ppm. Brushing the samples with Pb levels equal
or higher than 35 ppm (Figure 5) shows that they are concentrated
in the S-SE and NE of the area (Figure 6), and their "multivariate
fingerprint" is a bit unclear (Figure 5). Pb and Zn seem
to be correlated (that indicates some common source), but the
relations with other elements are not as easy to interpret. For
example, we can see that Pb and V have several kinds of relations
(both positively and negatively correlated samples), and we can
try to separate those.
We first brush the samples that have the highest concentrations of Pb,
together with low V levels (Figure 7). The scatterplot display
indicates that most of those samples (Figure 8) belong to the
above-mentioned S-SE cluster ( Figure 6), and contain relatively
low levels of Co (less than 20 ppm).
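This percentile-and-brush selection can be sketched in code (the concentration values below are made up for illustration; the SGU till data set itself is not reproduced here):

```python
def percentile_90(values):
    """Nearest-rank 90th percentile of a list of concentrations (ppm)."""
    s = sorted(values)
    k = max(0, round(0.9 * len(s)) - 1)
    return s[k]

# Hypothetical Pb and V concentrations (ppm) for ten samples.
pb = [5, 12, 18, 20, 22, 25, 28, 31, 36, 40]
v  = [30, 45, 50, 55, 60, 62, 70, 72, 40, 90]

p90_pb = percentile_90(pb)                            # "anomalous" Pb threshold
high_pb = [i for i, x in enumerate(pb) if x >= p90_pb]
# Brush the Pb-high samples that also have low V (below 75 ppm),
# separating the two kinds of Pb-V relations described in the text.
high_pb_low_v = [i for i in high_pb if v[i] < 75]
print(p90_pb, high_pb, high_pb_low_v)  # 36 [8, 9] [8]
```

The same two-step filter, applied to the real 1407-sample table, reproduces the brushing that interactive scatterplot tools perform visually.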
Next, we select all the samples with V equal to or higher than 75 ppm (the
90th percentile). The scatterplot shows a main cluster of points
in the E side of the area (Figure 10). We can see that there is
strong positive correlation between V and Co (Figure 9). For simplicity,
we do not consider the elements Cu, Ni and Zn.
In the parallel coordinates display above (Figure 9) we can notice a
small cluster of samples with nearly parallel lines between the
Pb and V axes. Those points have both Pb and V over their respective
90th percentile values (Figure 11, Figure 12) and belong to both
the S-SE (Figure 6) and E (Figure 10) clusters. Looking closer
at the first row (or first column) in the scatterplot matrix (Figure
12) we can tell that this is rather an overlap of the two anomalies
than a new multivariate fingerprint, because the values of V and
Co differ slightly in the two clusters.
of SGU (the Geological Survey of Sweden).
Indeed, one theory proposes that the Tunguska object was a fragment of Comet Encke. This ball of ice and dust is responsible for a meteor shower called the Beta Taurids, which cascade into Earth's atmosphere in late June and July - the time of the Tunguska event.
Does Lake Cheko have anything to do with the Tunguska blast?
The absence of any crater connected with the Tunguska event has left the door open for some outlandish alternatives to the meteorite theory. A lump of anti-matter, a colliding black hole and - inevitably - an exploding alien spaceship have all been proposed as the possible source of the blast.
But in 2007, Giuseppe Longo, from the University of Bologna, Italy, and his colleagues, suggested they might have found something Leonid Kulik had missed all those years ago.
Lake Cheko does not appear on any maps of the area made before 1908; it also happens to lie North-West-West of the epicentre, on the general path taken by the impactor as it plummeted to Earth.
To Dr Longo, a radar signal from beneath the lake is suggestive of a dense object, possibly part of the Tunguska meteorite, buried about 10m down. The team plans to conduct an expedition to the area in 2009, to investigate this possibility.
"We have no positive proof it is an impact crater, we have come to this conclusion [about Lake Cheko] through the negation of other hypotheses," Dr Longo told BBC News last year.
But other researchers, including Gareth Collins and Phil Bland of Imperial College London, cast doubt on the idea Lake Cheko has anything to do with the Tunguska event.
They point to trees older than 100 years which are still standing around the rim of the lake (and, they say, should have been levelled by the impact) and the features of the lake itself, which, the researchers argue, are inconsistent with an impact origin.
From: BBC News.
this is a true/false question, and i know i can use a graphing calculator, but i dont quite remember how to do it, or what signifies this to be true or false. help please!
two runners are 100 ft. apart. On a signal, they run to a flag on the ground midway between them. The faster runner hesitates for 0.1 sec. The following parametric equations model the race to the flag:
Use your graphing calculator to simulate the race. Based on this simulation, runner x1/y1 wins the race.
2.4 Lyman Continuum Optical Depth
An additional, important piece of information about the gas distribution and optical depth comes from studying the wavelength region around the Lyman limit, at 912 Å. This wavelength range has been observed only in high luminosity AGNs, and most of these objects do not show any significant Lyman jump in absorption or in emission. A conservative limit on the Lyman discontinuity (the relative change in the continuum level at the Lyman limit), in most of the observed cases, is about 0.1. If the line emitting gas completely surrounds the continuum source, this must mean τ(912 Å) << 1. Alternatively, the gas may be clumped into small clouds, covering only a small fraction of the continuum source. In this case, individual clouds can have a large Lyman optical depth without violating the observational limit. The strong observed broad lines of MgII and FeII tend to support the second hypothesis, at least for the broad line gas. Whatever the case may be, it seems that only a small fraction of the total available continuum energy is absorbed by the line emitting gas. More observations are needed to confirm this finding in low luminosity AGNs. This issue is of primary importance for the understanding of AGNs and is further discussed in chapters 6 and 10.
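The discontinuity limit of about 0.1 and the condition that the Lyman-continuum optical depth be small are linked by the usual small-optical-depth expansion; the following is standard radiative-transfer bookkeeping rather than a derivation from the original text:

```latex
D \;=\; 1 - e^{-\tau(912\,\mathrm{\AA})} \;\approx\; \tau(912\,\mathrm{\AA})
\qquad (\tau \ll 1)
```

so D ≲ 0.1 directly implies τ(912 Å) ≲ 0.1 for a fully covering absorber. Conversely, if optically thick clouds cover only a fraction C of the continuum source, D ≈ C, and the same observational limit allows large cloud optical depths provided C ≲ 0.1.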
An additional, important piece of information about the relative size of the clouds and the source of continuum emission comes from X-ray studies. It is found that low luminosity AGNs tend to show larger X-ray opacity and more material on the line of sight. Very luminous objects do not show any X-ray absorption. This may or may not be related to the emission line clouds discussed later on.
To summarize, the study of AGN line intensities, variability and profiles reveals information about the physical conditions in the line emitting gas, the gas distribution and dynamics. The optical depth of the gas, and the amount of continuum energy absorbed by it, are deduced from observations of the Lyman and the soft X-ray continuum. This information can be used to construct theoretical models for AGNs. The following chapters address all these issues in greater detail.
A didactic question published in The Physics Teacher (http://tpt.aapt.org/resource/1/phteah/v41/i1/p8_s1) asks which will melt more ice: 100g of metal at 100C or 100g of wood at 100C. (The particular ...
What is the dominant form of heat transfer between warm water and cold air? If a $100 mg$ drop of water falls through $-40 C$ air, how quickly could it freeze? Is it credible that in very cold ...
When judging if relativity is important in a given phenomenon, we might examine the number $v/c$, with $v$ a typical velocity of the object. If this number is near one, relativity is important. In ...
Suppose I do two experiments to find the triple point of water, one in zero-g and one on Earth. On Earth, water in the liquid or solid phase has less gravitational potential per unit mass than water ...
The link between the seasons and the Sun is pretty obvious on a blazing July day: more sunlight means hotter weather. But the link between the amount of sunlight and Earth's climate is a bit more complicated and subtle than that. Changes in the Sun's energy output may create regional or even global changes in Earth's climate.
The Sun's overall energy output varies on an 11-year cycle. At the cycle's peak, the Sun's magnetic field produces more sunspots and explosions of energy and particles. The Sun also grows a tiny fraction of a percent brighter. That extra energy warms Earth's outer atmosphere, causing it to expand, which creates extra drag on orbiting satellites. Some of that energy also reaches the surface, warming the whole planet by a fraction of a degree.
But a couple of studies released last year showed that there might be more pronounced effects on a regional scale.
One study found that Africa is more likely to see drenching rains about a year before the peak in the solar cycle. Another study found that there can be good-sized variations in temperature from region to region. During the last "solar maximum" in 2004 and 2005, for example, the central U.S. grew about half a degree warmer than the country as a whole.
Several spacecraft will monitor the Sun and its interaction with Earth as we head toward the next peak in the solar cycle, around 2012.
More about the Sun tomorrow.
Script by Damond Benningfield, Copyright 2008
For more skywatching tips, astronomy news, and much more, read StarDate magazine.
Coastal vegetative shingle
What is it?
Shingle is defined as sediment with particle sizes in the range 2-200 mm. It is a globally restricted coastal sediment type with few occurrences outside North West Europe, Japan and New Zealand. Shingle beaches are widely distributed round the coast of the UK, where they develop in high-energy environments. In England and Wales it is estimated that 30% of the coastline is fringed by shingle. However, most of this consists of simple fringing beaches within the reach of storm waves, where the shingle remains mobile and vegetation is restricted to temporary and mobile strand-line communities.
Shingle structures take the form either of spits, barriers or barrier islands formed by longshore drift, or of cuspate forelands where a series of parallel ridges piles up against the coastline. Some shingle bars formed in early post-glacial times are now partly covered by sand dunes as a result of rising sea levels leading to increased deposition of sand.
The origin of coastal shingle varies according to location. In southern England, much of it is composed of flint eroded out of chalk cliffs. Shingle deposits of Ice Age origin lying on the seabed may be reworked by wave action and redeposited or moved by longshore drift along the coast. In northern and western Britain, shingle may derive from deposits transported to the coast by rivers or glacial out-wash. Shingle structures are of geomorphological interest.
The vegetation communities of shingle features depend on the amount of finer materials mixed in with the shingle, and on the hydrological regime. The classic pioneer species on the seaward edge include sea kale Crambe maritima, sea pea, Lathyrus japonicus, Babington's orache, Atriplex glabriuscula, sea beet, Beta vulgaris, and sea campion Silene uniflora; such species can withstand exposure to salt spray and some degree of burial or erosion.
Further from the shore, where conditions are more stable, more mixed communities develop, leading to mature grassland, lowland heath, moss and lichen communities, or even scrub. Some of these communities appear to be specific to shingle, and some are known only from Dungeness. On the parallel ridges of cuspate forelands, patterned vegetation develops, due to the differing particle size and hydrology. Some shingle sites contain natural hollows that develop wetland communities, and similar vegetation may develop as a result of gravel extraction.
Shingle structures may support breeding birds, including gulls, waders and terns. Diverse invertebrate communities are found on coastal shingle, with some species restricted to shingle habitats.
Shingle structures that are sufficiently stable to support perennial vegetation are a comparatively rare feature, even in the UK.
The situation in the South East
|Extent in England||5,000 ha|
|Extent in the SE region||approx. 1,500 ha|
|Percentage UK resource in the SE||30%|
|Extent covered by SSSI designation|| |
Dungeness, in southern England, is by far the largest site, with over 2,000 ha of shingle – there are only five other structures over 100 ha in extent in the UK. The main concentrations of vegetated shingle occur in East Anglia and on the English Channel coast, in North East Scotland, and in North West England and South West Scotland. The Welsh coast has a number of small sites.
Rate of Change
|County||1998 extent (ha)||2008 extent (ha)|
|Isle of Wight||5||5|
|Kent||1810||Data to be added|
The main threats to this habitat are:
- Sediment supply: The health and ongoing development of a shingle feature depend on a continuing supply of shingle. This may occur sporadically as a response to storm events rather than continuously. It is frequently lacking owing to the interruption of coastal processes by: coast defence structures, offshore aggregate extraction, or artificial redistribution of material within the site (eg. Dungeness). Attempts have been made to rectify the situation by mechanical re-profiling, which is likely to fail in the long run because it does not address the lack of new material, or by beach recharge.
- Natural mobility: Shingle features are rarely stable in the long term. Many structures exhibit continuous longshore drift, and ridges lying parallel to the shoreline tend to be rolled over towards the land by wave action in storm events. This movement has a knock-on effect on low-lying habitats behind the shingle. Movement is likely to be accelerated by climate change resulting in sea level rise and increased storminess.
- Exploitation: Shingle structures have been regarded as a convenient source of aggregates, and have been subject to varying degrees of extraction resulting in severe alteration of morphology and vegetation (eg. Dungeness and Spey Bay) or almost total destruction of major parts of the feature (eg. Rye Harbour). Industrial plant, defence infrastructure and even housing have been built on shingle structures (eg. Dungeness, Orfordness, Spey Bay), destroying vegetation and ridge morphology. At Dungeness water is abstracted from the groundwater system; there is some evidence of drought stress on the vegetation, but it is difficult to distinguish the effects of water abstraction from those of gravel extraction.
- Access: Shingle vegetation is fragile; the wear and tear caused by access on foot, and particularly by vehicles, has damaged many sites. The causes include military use, vehicle access to beaches by fishermen, and recreational use. Such disturbance can also affect breeding birds.
- Grazing: In a few cases areas of shingle were traditionally grazed, but this management has now largely ceased, leading to domination by willow carr on wetlands and changes to vegetation structure. The impacts of removal of grazing on breeding birds and other shingle species are not fully understood.
Vision for Coastal Vegetative Shingle
The South East Biodiversity Forum’s vision for this habitat is that there should be:
- No further loss of existing habitat
- Good management, including visitor management, on all extant sites
- No damage to site integrity from activities arising outside the sites, eg. inadequately managed public access
- Re-creation of vegetated coastal shingle on appropriate sites to restore some past losses, including the linking up of fragmented sites
- Greater public appreciation of vegetated coastal shingle and its specialist wildlife, including greater awareness of the impacts of human pressures, such as dog-walking, mountain-biking and dumping of waste
- Creation of alternative green space around important vegetated coastal shingle under pressure from increasing new housing
How we can deliver this vision
- MoD Integrated Land Management Plans (www.defence-estates.mod.uk/conservation/2_biodiversity.php)
- Provision of land management advice by statutory (Natural England) and non-statutory agencies (NGOs)
- Agreements under Higher Level Stewardship
- Project funding (SITA Trust, WREN etc)
- Site management plans
- Minerals after-use (http://www.afterminerals.com/)
- Land purchase/management agreements by NGOs
- River restoration linked to Water Framework Directive Programme of measures
- Catchment Sensitive Farming tackling diffuse pollution issues
Ever wondered how relative humidity affects how hot it really feels outside? Confused about what the dew point is? Learn all about these and other weather terms with our interactive ABC15.com Weather Calculators below.
Related weather definitions from ABC15:
• temperature – the amount of heat or cold measured on a thermometer.
• dew point – the temperature to which air must be cooled for saturation to occur; the temperature at which dew or frost will form.
• relative humidity – a percentage of moisture in the air compared to what it can hold at that temperature.
What else can the Dew Point tell you?
Dew Point, along with Temperature, can tell you the estimated height of a cloud's base. How? When air rises in a convective current, it cools at the rate of 5.4° F/1,000 ft, and its dew point decreases 1° F/1,000 ft. Therefore, the temperature and dew point are converging at 4.4° F/1,000 ft. Since clouds form when the temperature/dew point spread is 0°, you can use that information to estimate the base of a cumulus cloud. Here's how it works: Subtract the surface dew point from the surface temperature to give you the temperature/dew point spread. Then divide that number by 4.4° F. Finally, multiply this result by 1,000 ft. This total will be your estimate of the cloud base's height above ground level (AGL). Try it using the Cloud Base Calculator!
Click on these ABC15.com Weather Calculators:
• Barometric Pressure Conversions
• Cloud Base Calculator
• Density Altitude Calculator
• Dew Point Calculator
• Fahrenheit / Celsius Conversion
• Heat Index Calculator #1 (using Dew Point)
• Heat Index Calculator #2 (using Relative Humidity)
• Relative Humidity Calculator (using Temperature and Dew Point)
• Relative Humidity / Dew Point Calculator (using Dry Bulb-Wet Bulb Temperatures)
• Wind Chill Calculator
• Wind Velocity Conversions
Learn more weather terminology here.
What's the Harm?
Invasive plants cause billions of dollars of damage each year to native habitats, wetlands and waterways, crops, and parks and other preserved areas. Some of our most endangered species fight a losing battle with invasive plants to maintain their populations.
Invasive plants can crowd out or even cause the death of native plants. Some emit toxic substances that poison the soil for all other plants. Invasives can alter a native habitat so severely that not only are native plants eliminated, but the habitat can no longer support the wildlife it once did. In wetlands, invasive aquatic plants can essentially kill off all life below the surface by blocking out sunshine and oxygen. Invasive plants in croplands can reduce crop yields by 50 percent in some cases.
Some examples of the most harmful invasives in the United States are Purple Loosestrife, which takes over wetlands so aggressively that it eliminates native flora; Kudzu, which can smother woodlands by covering every tree and blocking out all sunlight; salt-cedar or tamarisk trees with long roots that suck up large quantities of water, lowering the water table to the detriment of other plants and animals; and Cheatgrass, which takes over native sagebrush-grassland habitat and provides potent fuel for wildfires that further harm the natives.
Check the Invasive Plant Finder to get a list of invasives for your state, and avoid introducing such plants into your local area.
Embedding an SQL Database with SQLite
SQLite has an extremely easy-to-use API that requires only three functions with which to execute SQL and retrieve data. It is extensible, allowing the programmer to define custom functions and aggregates in the form of C callbacks. The C API is the foundation for the scripting interfaces, one of which (the Tcl interface) comes included in the distribution. The Open Source community has developed a large number of other client interfaces, adapters and drivers that make it possible to use SQLite in other languages and libraries.
Using the C API requires only three steps. Basically, you call sqlite_open() to connect to a database, in which you provide the filename and access mode. Then, you implement a callback function, which SQLite calls for each record it retrieves from the database. Next, call sqlite_exec(), providing a string containing the SQL you want to execute and a pointer to your callback function. Besides checking for error codes, that's it. A basic example is illustrated in Listing 2.
One of the nice things about this model that differs from other database client libraries is the callback function. Unlike the other client APIs where you wait for the result set, SQLite places you right in the middle of the result-gathering process, in the thick of things as they happen. Therefore, you play a more active role in fetching data and directly influence the retrieval process. You can aggregate data as you collect it or abort record retrieval if you want. The point is, because the database is embedded, your application is essentially as much the server as it is the client, and SQLite takes full advantage of this through the use of its callback interface.
In addition to the standard C API, an extended API makes it even easier to fetch records, using sqlite_get_table(), which does not require a callback function. This function behaves more like traditional client libraries, taking SQL and returning a rowset. Some of the features of the extended API are functions to extend SQL by adding your own functions and aggregates, which is addressed later in this article.
Finally, if for some reason you need an ODBC interface, I am pleased to inform you that one is available, written by Christian Werner. His ODBC driver can be found at www.ch-werner.de/sqliteodbc.
While SQLite does not support sequences per se, it does have an auto-increment key, the equivalent of MySQL's mysql_insert_id(). A primary key can be set to auto-increment by declaring it INTEGER PRIMARY KEY. The value of the last inserted record for that field is obtained by calling sqlite_last_insert_rowid().
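The same behavior is easy to demonstrate from Python's standard-library sqlite3 binding, where cursor.lastrowid plays the role of the C call sqlite_last_insert_rowid() (an illustrative sketch with made-up table names, not the article's C code):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# INTEGER PRIMARY KEY makes the id column an auto-increment key
conn.execute("CREATE TABLE log (id INTEGER PRIMARY KEY, msg TEXT)")
cur = conn.execute("INSERT INTO log (msg) VALUES ('hello')")
print(cur.lastrowid)  # → 1, the key SQLite assigned to the new row
```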
You can store binary data in SQLite columns with the restriction that it only stores up to the first NULL character. In order to store binary data, you must first encode it. One possibility is URL-style encoding; another is base64. If you have no particular preference, SQLite makes life easy for you through two utility functions: sqlite_encode_binary() and sqlite_decode_binary().
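If you are not using SQLite's own encode/decode helpers, base64 is one simple way to make binary data NUL-free before storage. A sketch using Python's standard library (the sample bytes are invented for illustration):

```python
import base64

blob = b"\x00\xffraw bytes with embedded NULs\x00"
encoded = base64.b64encode(blob).decode("ascii")  # plain text, safe to store in a column
assert b"\x00" not in encoded.encode("ascii")     # no NUL characters remain
restored = base64.b64decode(encoded)
assert restored == blob                           # lossless round trip
```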
SQLite is as threadsafe as you are. Thread safety more or less centers on the SQLite connection handle returned by sqlite_open(). That handle is what should not be shared between execution contexts; each thread should get its own. If you still want threads to share one, protect it with a mutex. Likewise, connection handles should not be shared across UNIX fork() calls. This is more common sense than anything else. Bottom line: thread or process, get your own connection handle, and everything should be fine.
SQLite uses the concept of a pragma to control runtime behavior. Pragmas are parameters that are set using SQL syntax. There are pragmas for performance tuning, such as setting the cache size and whether to use synchronous writes. There are some for debugging, like tracing the parser and the VDBE, and others still are for controlling the amount of information passed to client callback functions. Some pragmas have options to control their scope, having one variant that lasts only as long the current session and another that takes effect permanently.
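Pragmas are issued through the same query interface as ordinary SQL. As a quick illustration of the cache_size performance pragma mentioned above (shown via Python's sqlite3 binding purely for brevity, not the C API):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA cache_size = 4000")              # set a session-scoped tuning value
size = conn.execute("PRAGMA cache_size").fetchone()[0]  # read it back like a query
print(size)  # → 4000
```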
SQLite sorts a column lexicographically if, and only if, that column is declared as type BLOB, CHAR, CLOB or TEXT. Otherwise, it sorts numerically. SQLite used to make decisions on how to sort a column solely by its value: if it “looked” like a number, it was sorted numerically, otherwise lexicographically. A tremendous amount of discussion about this appeared on the mailing list, and it eventually was refined to the rules it uses today, which allow you to control the method of comparison by the declared type in the schema.
Search our database of handpicked sites
Looking for a great physics site? We've tracked down the very best and checked them for accuracy. Just fill out the fields below and we'll do the rest.
A short article about serving beer in different glasses.
Andy from Redemption Brewing talks about the physics behind keeping beer separate from oxygen in the air when moving it to a conditioning tank. Exposure to oxygen causes reactions in beer that lead to ...
Article looking at the effect of temperature on how fast a beer ages
A great article explaining the reactions that go on to give beer its colour.
How does pressure help get beer out of a keg? Find out in this article
An in-depth guide to what goes on in the brewing process to make your beer, from developing flavours to ensuring the right amount of bubbles
This page from Marshall Brain's HowStuffWorks explains how widgets in beer cans work to create bubbles.
Mathematicians analyse beer bubbles to peer deeper into the structure of materials
Article about beer made with barley grown in space.
Beer was tested to see whether it would be safe to drink after experiencing radiation from a nuclear blast.
Electric rocket engines known as Hall thrusters, which use a super high-velocity stream of ions to propel a spacecraft in space, have been used successfully onboard many missions for half a century. Erosion of the discharge channel walls, however, has limited their application to the inner solar system. A research team at Caltech's Jet Propulsion Laboratory has found a way to effectively control this erosion by shaping the engine's magnetic field in a way that shields the walls from ion bombardment.
After imaging during the holidays, NASA's Mars rover Curiosity resumed driving Jan. 3 and pulled within arm's reach of a sinuous rock feature called "Snake River." Snake River is a thin curving line of darker rock cutting through flatter rocks and jutting above sand. Curiosity's science team plans to get a closer look at it before proceeding to other nearby rocks.
Stanford University researchers, in collaboration with NASA's Jet Propulsion Laboratory and the Massachusetts Institute of Technology, have designed a robotic platform that could take space exploration to new heights. The mission proposed for the platform involves a mother spacecraft deploying one or several spiked, roughly spherical rovers to the Martian moon Phobos.
A California judge has tentatively ruled in favor of NASA's Jet Propulsion Laboratory in a wrongful termination lawsuit brought by a former computer specialist who alleged he was singled out in part because of his belief in intelligent design.
The ability to ingest solid samples and examine them using X-ray diffraction is a core capability for the Curiosity rover. This week that ability was tested using a small scoop of minerals that has been shaken to remove any residues carried from Earth. These particles have been placed inside CheMin, an analytical instrument about the size of a laptop computer inside a carrying case.
Would you like icy organics with that? Maybe not in your coffee, but researchers at NASA's Jet Propulsion Laboratory are creating concoctions of organics, or carbon-bearing molecules, on ice in the laboratory, then zapping them with lasers. Their goal: to better understand how life arose on Earth.
Today marks the 35th anniversary of Voyager 1's launch to Jupiter and Saturn. Since leaving the ringed gas giant behind many years ago, Voyager 1 has rocketed toward an invisible boundary that no human spacecraft has ever ventured beyond. Scientists now say, based on instrument readings, that it is about to leave our solar system and venture into interstellar space.
Scientists using the Mini-RF radar on NASA's Lunar Reconnaissance Orbiter have successfully estimated the maximum amount of ice likely to be found inside a permanently shadowed lunar crater located near the moon's South Pole. Their results, which offer more definite support to prior findings, show as much as 5 to 10% of the material, by weight, could be patchy ice.
Cheers and applause echoed through the NASA Jet Propulsion Laboratory late Sunday after the most high-tech interplanetary rover ever built signaled it had survived a harrowing plunge through the thin Mars atmosphere. Minutes after the landing signal reached Earth at 10:32 p.m. PDT, Curiosity beamed back the first black-and-white pictures from inside the crater showing its wheel and its shadow, cast by the afternoon sun.
Data from NASA's Cassini spacecraft have revealed Saturn's moon Titan likely harbors a layer of liquid water under its ice shell. Researchers saw a large amount of squeezing and stretching as the moon orbited Saturn. They deduced that if Titan were composed entirely of stiff rock, the gravitational attraction of Saturn would cause bulges, or solid "tides," on the moon only 3 ft in height. Spacecraft data show Saturn creates solid tides approximately 30 ft in height, which suggests Titan is not made entirely of solid rocky material.
Astronomers using data from NASA's Spitzer Space Telescope have, for the first time, discovered buckyballs in a solid form in space. Prior to this discovery, the microscopic carbon spheres had been found only in gas form in the cosmos.
A new NASA study suggests if life ever existed on Mars, the longest lasting habitats were most likely below the Red Planet's surface. Spectral evidence gathered by orbiters supports a new hypothesis that persistent warm water was confined to the subsurface, and that erosional features were carved during brief periods when the surface supported stable water.
A NASA-led team has used radar sounding technology developed to explore the subsurface of Mars to create high-resolution maps of freshwater aquifers buried deep beneath an Earth desert, in the first use of airborne sounding radar for aquifer mapping. The research may help scientists better locate and map Earth's desert aquifers, understand current and past hydrological conditions in Earth's deserts, and assess how climate change is impacting them.
Last month, NASA's Dawn spacecraft began orbiting the 330-mile-wide rocky body of Vesta, the asteroid belt’s second-largest resident. The latest photos have been full of surprises, revealing extensive features, from multiple craters to mysterious grooves, that will keep scientists busy for years.
Gale Crater was chosen as the target for the $2.5 billion Mars Science Laboratory mission after an extensive review of dozens of potential sites. NASA chose this site because they believe they have located the boundary where life may have sprung up and where it may have been extinguished.
Water really is everywhere. Two teams of astronomers, each led by scientists at the California Institute of Technology, have discovered the largest and farthest reservoir of water ever detected in the universe. Looking from a distance of 30 billion trillion miles away into a quasar, the researchers have found a mass of water vapor that's at least 140 trillion times that of all the water in the world's oceans combined.
Vesta, thought to be the source of a large number of meteorites that fall to Earth, was visited close-up over the weekend by NASA’s Dawn space probe, which is the first spacecraft to enter orbit around an object in the main asteroid belt between Mars and Jupiter.
Ever since a crash landing on Earth grounded NASA's Genesis mission in 2004, scientists have been gathering, cleaning, and analyzing solar wind particles collected by the spacecraft. Now, two new studies published in Science reveal that Earth's chemistry is less like the sun's than previously thought.
Based on water vapor plumes found by the spacecraft Cassini in 2005, researchers already suspected that Enceladus hid a liquid saltwater ocean. Now, based on the dynamics of plumes studied by the Cassini team, they are now more certain that 50 miles beneath the surface crust a large body of liquid water exists between the rocky core and the icy mantle.
Two digital color cameras riding high on the mast of NASA's next Mars rover will complement each other in showing the surface of Mars in exquisite detail. They are the left and right eyes of the Mast Camera, or Mastcam, instrument on the Curiosity rover of NASA's Mars Science Laboratory mission, launching in late 2011.
Design techniques honed at NASA's Jet Propulsion Laboratory in Pasadena, Calif., for Mars rovers were used to create the rover currently examining the inside of Japan's nuclear reactors, in areas not yet deemed safe for human crews.
New evidence from the discovery of a huge underground reservoir of dry ice, or frozen carbon dioxide, at the south pole of Mars, suggests to Southwest Research Institute scientists that the red planet’s climate 600,000 years ago was probably a lot like the American Dust Bowl of the 1930s — but a lot worse.
Messenger first moved into close orbit around the speedy inner planet about two weeks ago. By the end of this week, NASA will have received more than 15,000 pictures from the $446 million spacecraft, giving us a comprehensive view of a heavily-cratered world that may hold ice at its south pole.
The March 11, magnitude 9.0 earthquake in Japan may have shortened the length of each Earth day and shifted its axis. But don't worry—you won't notice the difference.
Finding life on Mars could get easier with a creative adaption to a common analytical tool that can be installed directly on the robotic arm of a space rover.
Engineers just installed six new wheels on the Curiosity rover, and rotated all six wheels at once on July 9, 2010. This milestone marked the first in a series of "tune ups" to get the rover ready for a drive in the clean room where it is being assembled at NASA's Jet Propulsion Laboratory in Pasadena, Calif. Curiosity is the centerpiece of NASA's Mars Science Laboratory mission, which is expected to launch in late 2011, and touch down wheels-first in summer 2012.
Mar. 13, 2013 Diving and plunging through the waves to feed, some whales throw their jaws wide and engulf colossal mouthfuls of fish-laden water while other species simply coast along with their mouths agape (ram or skim feeding), yet both feeding styles rely on a remarkable substance in the whales' mouths to filter nutrition from the ocean: baleen. Alexander Werth from Hampden-Sydney College, USA, explains that no one knew how the hairy substance actually traps morsels of food.
'The standard view was that baleen is just a static material and people had never thought of it moving or that its function would be altered by the flow of water through the mouth', he says. Werth became fascinated with the substance during his postdoc days, when he worked with the Inupiat Eskimos of Barrow, Alaska, and decided to find out more about how the flexible material filters whale-sized mouthfuls of water. He publishes his discovery that baleen is a highly mobile material that tangles in flowing water to form the perfect net for trapping food particles at natural whale swimming speeds in The Journal of Experimental Biology.
Explaining that baleen is composed of keratin – the same protein that makes hair and fingernails – Werth also describes how the protein forms large continually growing plates, each with an internal fibrous core sandwiched between smooth outer plates. Whales usually carry 300 of these structures on each side of their mouths – arranged perpendicular to the direction of water flowing into the mouth – and Werth explains that the plates are continually worn away by the tongue to form bristly food-trapping fringes on the tongue-edge of each plate. In addition, the fringe bristles of the skim-feeding bowhead whale are twice as long as those of the lunge-feeding humpback. Having obtained baleen samples from the body of a stranded humpback during graduate work at the New England Aquarium and collected samples from ram-feeding bowheads in Alaska, Werth began to compare how well the baleen trapped minute latex beads carried in flowing water.
First, he tested a small section of each type of baleen in a flow tank as he varied the flow speed from 10 to 120 cm/s and altered the inclination of the baleen to the water flow from parallel to perpendicular. Monitoring the fringes and recording how many beads became lodged for 2 s or more, Werth saw that the bristles trapped most beads at the lowest speeds, and as the flow increased the bristles began streaming like hair, increasing the fringe's porosity and reducing the number of snagged particles: single baleen plates are less effective filters at higher swimming speeds.
However, Werth says, 'It doesn't make sense to look at flow across a single plate of baleen, it's like looking at feeding with a single tooth; you can't chew anything with just one tooth, you need a whole mouthful.' So, he built a scaled down rack of six, 20 cm long baleen plate fragments and tested how well they trapped the latex beads.
This time, Werth could clearly see the fringes from adjacent baleen plates becoming tangled and more matted as the flow increased, trapping the most particles at speeds ranging from 70 to 80 cm/s, which corresponds exactly with the swimming speed of bowhead whales skimming through shoals of copepods. However, when he compared the porosity of the baleen of both species, he was surprised by the similarity of the performances, despite the whales' different feeding styles.
Having found that baleen filters best at the natural swimming speed of skim-feeding bowheads, Werth is keen to scale up and investigate how full-sized 4 m long baleen plates perform.
- A. J. Werth. Flow-dependent porosity and other biomechanical properties of mysticete baleen. Journal of Experimental Biology, 2013; 216 (7): 1152 DOI: 10.1242/jeb.078931
The use of web application frameworks by developers has gained a significant market share in the last few years.
These frameworks allow developers to significantly increase their productivity by minimizing the amount of time they spend writing and compiling code. One of the preferred and most exciting web application frameworks in use today is Ruby on Rails, also known simply as Rails or RoR.
Publicly released in 1995, the Ruby language was developed by Yukihiro Matsumoto, nicknamed Matz, to carefully balance functional and imperative programming. When his search for a scripting language “more powerful than Perl and more object-oriented than Python” turned up nothing suitable, Matz constructed Ruby so everything is an object. Since the rules that apply to objects apply to everything in Ruby, the new user has no exceptions to remember.
Software frameworks provide the scaffolding developers use to support their code. Function libraries, project templates, and layers for database access are a few of the components a framework might provide. Danish programmer David Heinemeier Hansson extracted what became the Rails framework from his work on the project management tool Basecamp and chose Ruby as its language.
Released as open source in 2004, Rails allows rapid development and deployment of web programs by providing a solid consistent infrastructure. This allows developers to focus on the unique features of the software they are creating instead of worrying about the underlying architecture.
Ruby on Rails is, as mentioned, an open source framework written in the Ruby programming language. Development of RoR has grown quickly since its release in 2004; the current RoR version at this writing is 3.0. The underlying architecture of Rails follows the Model-View-Controller (MVC) pattern.
A Rails application currently renders its output as HTML or XML, and the final number of lines of code is small when compared with other frameworks. The popularity of RoR is undeniable: its usage is estimated at over 200,000 websites, and about 2% of the most visited websites take advantage of the Rails framework as their enterprise solution. Organizations such as Amazon, NASA, the British Broadcasting Corporation, IBM, and Cisco have implemented Rails for their websites.
In another simplification over existing object-oriented languages, Ruby supports only single-level inheritance. Collections of methods, called modules and referred to as categories in Objective-C, can be included into a Ruby class. This technique, known as a mixin, gives the user the flexibility of multiple inheritance without needless complexity or restriction. Ruby is a powerful tool with a short learning curve.
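A tiny illustrative sketch of a mixin (the module and class names here are invented for the example):

```ruby
module Greetable
  # Any class that mixes this in only needs to provide a `name` method.
  def greet
    "Hello, #{name}!"
  end
end

class Person
  include Greetable          # mixin: shared behavior without multiple inheritance
  attr_reader :name

  def initialize(name)
    @name = name
  end
end

puts Person.new("Matz").greet   # => Hello, Matz!
```

Because Greetable is a module rather than a superclass, any number of otherwise unrelated classes can include it while each keeps its single parent class.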
Ruby is a highly portable language, running on many versions of Linux/UNIX, OS/2, Windows 95 through Windows 7, Mac OS X, and even DOS. The language offers threading independent of the operating system, so multi-threaded applications can be written for systems that don’t have native support.
One of the reasons Rails is growing in popularity among developers is that it delivers certain assumptions about how web development has come along over the last few years. The philosophy behind Ruby on Rails is to allow the developer to start seeing results without having to resort to long configuration files or redundant writing of code.
To this extent, the rails framework presents an orthodox yet effective approach. One special strength of Rails is that it favors meta-programming: the use of existing programs to write new programs. RoR is based on the concept that coding should be breeze, not a burden.
The elegance of Ruby on Rails can be seen in its internal database configuration file. Assuming that SQLite3 is installed, take a look at how RoR allocates a separate database file for each environment (development, test, and production):

```yaml
development:
  adapter: sqlite3
  database: db/development.sqlite3
  timeout: 5000

test:
  adapter: sqlite3
  database: db/test.sqlite3
  timeout: 5000

production:
  adapter: sqlite3
  database: db/production.sqlite3
  timeout: 5000
```
A University of Delaware engineer’s solar-powered, hydrogen-producing reactor may solve the energy crisis.
We live in an era that has seen more technological progress than all the centuries before it combined, but we are still at the mercy of batteries and hydrocarbons (fossil fuels such as gasoline). (There are already vehicles running on hydrogen, such as the Honda FCX Clarity, available in California, and the Mercedes-Benz F-Cell, supported by a network of hydrogen-fueling stations.) While green energy sources are slowly replacing more carbon-intensive sources such as coal, nuclear, and natural gas, one of the ultimate goals is to use a fuel source such as hydrogen, the universe's most abundant element (though not abundant on Earth itself).
On Earth, the issue is that hydrogen isn’t found in abundance in its natural form. The easiest method of obtaining hydrogen is by performing electrolysis on water, which is made up of two hydrogen atoms and one oxygen atom (hence H2O) to produce H2 (hydrogen) and O2 (oxygen). Thus, to make use of hydrogen, it must be separated or broken down from more complex elements and this typically requires a larger net amount of energy than would be obtained from the hydrogen itself when used as a fuel, thereby defeating the purpose.
Now, a mechanical engineer Erik Koepf at the University of Delaware may have found the first sustainable reactor capable of producing pure hydrogen for fuel. The reactor uses solar energy along with zinc oxide (ZnO) and water (H2O) to create a reaction that produces hydrogen. The entire reaction process is carbon-free, making it clean for the environment.
The reactor works by using a common concept: the solar oven. Using solar energy, which is abundant in many areas on Earth, the reactor is heated to 3,000° F/1,649° C, which causes gravity-fed zinc oxide to vaporize into a zinc vapor. The vapor is then separated and reacted with water to produce hydrogen.
The reaction is as follows:
Reaction #1: 2(ZnO) [zinc oxide] + heat [3,000° F] → 2(Zn) [zinc] + (O2) [oxygen]
Reaction #2: 2(Zn) [zinc] + 2(H2O) [water] → 2(ZnO) [zinc oxide] + 2(H2) [hydrogen]
The zinc oxide produced by Reaction #2 is then reused again and the process starts over, sustaining itself.
2012-04-18 UPDATE: The hydrogen is then burned to release energy with the byproduct of pure water.
2(H2) [hydrogen] + O2 [oxygen, required to "burn" anything] → energy + 2(H2O) [pure water]
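As a rough back-of-the-envelope check on Reaction #2, one mole of zinc yields one mole of hydrogen gas. The per-kilogram framing below is our own illustration, not a figure from the researchers (molar masses are standard values):

```python
M_ZN = 65.38   # g/mol, zinc
M_H2 = 2.016   # g/mol, hydrogen gas (H2)

moles_zn = 1000.0 / M_ZN      # moles of zinc in 1 kg
grams_h2 = moles_zn * M_H2    # Reaction #2: one mole of H2 per mole of Zn
print(round(grams_h2, 1))     # → 30.8 grams of hydrogen per kilogram of zinc cycled
```

Since the zinc oxide is regenerated and reused, that kilogram of zinc can in principle produce this much hydrogen on every pass through the cycle.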
The reactor prototype, which measures only 2×3 feet but weighs a gargantuan 1,750 lbs., will undergo testing in Zurich in the coming weeks, where it will be subjected to the equivalent of light from 10,000 suns. This testing will determine how efficient it is at producing hydrogen as well as its durability. If testing proves successful, the design will be scaled up for industrial use.
While this won’t immediately solve the energy crisis (infrastructure and other things will need to be changed over decades), it will certainly speed the transition away from a fossil fuel-based economy. (Cue X-Files/big oil conspiracy theorists…)
With advances such as this solar reactor, hopefully we’ll see a clean fuel source in widespread use in our lifetimes.
Photosynthesis is a process in plants, algae, and some prokaryotes that converts solar radiation into chemical energy stored in glucose or other organic compounds. Photosynthesis occurs in slightly different ways in higher plants relative to photosynthetic bacteria.
|Photosynthesis (Source: Wikimedia Commons)|
Photosynthesis in Higher Plants
In higher plants, photosynthesis involves chemical reactions in which the sun's energy is transferred along a series of oxidation and reduction events until it is stabilized in the chemical bonds of glucose. In the broadest sense, light energy converts carbon dioxide (CO2) into chemical energy while water is split to release oxygen.
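Written as an overall (and deliberately simplified) balanced equation, the description above is:

```latex
6\,\mathrm{CO_2} + 6\,\mathrm{H_2O} \xrightarrow{\text{light energy}} \mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2}
```

Six molecules of carbon dioxide and six of water yield one glucose molecule and six molecules of oxygen, with the oxygen coming from the splitting of water.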
Most photosynthesis occurs in the leaves of plants, although there may be photosynthetic stems, flowers, and fruits. At the cellular level, photosynthesis occurs inside organelles known as chloroplasts. Plants use photosynthetic pigments (e.g., chlorophyll) to capture the light energy, which is ultimately converted into chemical energy in the form of sugars. Photosynthesis involves two stages: the light reactions and the Calvin cycle reactions.
|Internal view of a chloroplast. (Source: Carbon Dioxide and the Earth)|
Light Reactions
Electromagnetic energy from the sun is captured by photosynthetic pigments and is transferred along a series of proteins and iron-sulfur-containing compounds in the thylakoid membranes of the chloroplast; the net result is the formation of high-energy compounds such as ATP and NADPH. Water molecules are split during the transfer of light energy along the membranes. Oxygen is produced as a result of this water-splitting event.
Calvin Cycle Reactions
In the reactions of the Calvin cycle, chemical energy held within ATP and NADPH is used to convert carbon dioxide into sugars through a series of enzymatic reactions. In the initial step of the Calvin cycle, carbon dioxide (from the atmosphere) reacts with a five-carbon compound, ribulose bisphosphate (RuBP), in a reaction that is catalyzed by the enzyme RuBP carboxylase/oxygenase ("RuBisco"). The first stable product of this reaction is a three-carbon compound known as phosphoglycerate (PGA). Energy captured in the light reactions in the form of ATP and NADPH is used to convert PGA into glyceraldehyde 3-phosphate (G3P), which can be converted to other organic compounds; alternatively, using energy from ATP, some G3P is converted back into RuBP to continue the cycle. The Calvin cycle reactions occur in the stroma of the chloroplasts.
Alternative Mechanisms of Photosynthesis
Saguaro cacti, a species with CAM photosynthesis.
The standard pattern of photosynthesis (described above) is known as C3 photosynthesis because the first stable product of carbon fixation is a three-carbon compound. Approximately 90% of all plants on earth utilize this pathway to convert CO2 into sugars. All trees and many shrub species use this pathway. Two alternative mechanisms of photosynthesis have evolved in plants living in hot, arid environments. Plants using CAM photosynthesis (Crassulacean Acid Metabolism) open their stomata at night in order to take up carbon dioxide when it is cooler so rates of water loss from the plant are lower. Carbon is stored at night as an organic acid. During the daylight hours, the organic acid releases carbon dioxide which then enters the Calvin cycle. Approximately 7% of all plant species on earth use this photosynthetic strategy to survive. These include the succulent cacti and euphorbias found in the harsh desert areas on earth as well as many tropical orchids that grow as epiphytes on trees.
Many grasses utilize a third photosynthetic biochemical pathway known as C4 photosynthesis, in which carbon dioxide is initially incorporated into an organic acid in mesophyll cells and then transported into the bundle sheath cells. The acid is decarboxylated inside the bundle sheath cells and the CO2 is concentrated inside these cells. Rubisco is flooded with CO2 and sugars are made in abundance using the Calvin cycle. Concentrating carbon dioxide in the bundle sheath cells minimizes photorespiration. Only 1% of all plant species on earth utilize this photosynthetic pathway, but these include most tropical and sub-tropical grasses, including crops such as corn and the millets. Many of these species, such as switchgrass and Miscanthus, exhibit high yields because photorespiration is reduced.
Factors limiting photosynthesis
The rate of photosynthesis is determined by environmental factors. Factors limiting photosynthetic rates include light intensity, water availability, soil nutrient content, concentration of carbon dioxide and temperature.
When water availability is reduced, photosynthesis is mainly limited by a reduction in the diffusion of CO2 into the leaf through the stomata. Stomata typically close when atmospheric humidity and soil moisture availability decline. Under these conditions, high light can damage the thylakoid membranes in a process known as photoinhibition, which then limits photosynthesis. A similar decline in photosynthetic efficiency is observed when water availability is reduced by overgrazing-induced root-zone reduction combined with excessive leaf destruction.
As internal leaf CO2 concentrations decline due to stomatal closure, the enzyme Rubisco tends to fix more oxygen and liberates CO2 in a process known as photorespiration. The net result is that photosynthesis becomes limited by the process of photorespiration. This process occurs mostly in C3 plants and is enhanced under hot dry conditions.
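The saturating response to one limiting factor can be illustrated with a toy model. A rectangular-hyperbola light-response curve is a common first-order description of light limitation; the function name and parameter values below are illustrative assumptions, not measurements from the text:

```python
def photosynthesis_rate(light, p_max=20.0, k=250.0):
    """Rectangular-hyperbola light-response curve: the rate rises roughly
    linearly at low irradiance and saturates at p_max. p_max and k (the
    half-saturation irradiance) are made-up illustrative values."""
    return p_max * light / (k + light)

# Rate climbs quickly, then flattens as other factors become limiting.
for ppfd in (0, 250, 1000, 2000):
    print(ppfd, round(photosynthesis_rate(ppfd), 1))
```

At the half-saturation irradiance (k = 250 here) the rate is exactly half of p_max; further light yields diminishing returns, consistent with the other factors listed above eventually taking over as the limit.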
Importance of Photosynthesis
Photosynthesis is an important process because it harnesses the sun's energy into utilizable forms of energy on earth. Most biological organisms such as animals and fungi are unable to directly use light energy to power biological processes such as active transport, cell division and muscle movement. ATP is used to power these processes. Photosynthesis converts light energy into chemical energy in the form of glucose and then the process of cellular respiration converts energy in glucose to energy in the form of ATP which is ultimately used to power biological processes. The energy produced by photosynthesis forms the basis of virtually all terrestrial and aquatic food chains. As a result, photosynthesis is the ultimate source of carbon in the organic molecules found in most organisms. The high oxygen concentration in the atmosphere is derived directly from the light reactions of photosynthesis. Prior to the evolution of photosynthesis on earth, the atmosphere was anoxic.
The Role of Photosynthesis in Biofuel production
The process of photosynthesis is of fundamental importance in utilizing the carbon locked up in plant material as a source of biofuel energy. Currently, ethanol is derived from cellulose from corn and sugar cane, and biodiesel from soybean and other oil crops. It is estimated that the USA will rely on obtaining approximately 30% of its energy resources from cellulosic ethanol by the year 2030. As a result, improving the current yield of fast-growing C4 plant species such as corn, switchgrass and Miscanthus under various environmental conditions is of primary interest at many research institutions throughout the world. | <urn:uuid:d9ff77fe-e45e-473a-b3f5-b60b2e59606c> | 3.953125 | 1,481 | Knowledge Article | Science & Tech. | 20.637884
Branch of earth science that deals with water as it occurs in the atmosphere, on the surface of the ground, and underground.
California Water-Quality Assessment [ More info] Results of activities in California conducted under the National Water-Quality Assessment Program (NAWQA): maps, publications and data.
California's BAY-DELTA: USGS Science Supports Decision Making [ More info] What causes changes in the hydrology, the ecology and the water quality of the Sacramento-San Joaquin River Delta and the San Francisco Bay estuary? These studies help state and local agencies manage resources well.
California's Central Valley Groundwater Study: A Powerful New Tool to Assess Water Resources in California's Central Valley [ More info] Modeling effort that integrates a wide variety of geographic, hydrologic, agricultural, climatic, and biological information to help local land managers address resource use issues.
Canoeing North Dakota rivers [ More info] Links to streamflow, stage, pictures, maps, river descriptions, and general information for canoeing on North Dakota's rivers. Files are in PDF format.
Cascades Volcano Observatory - Mount St. Helens real-time hydrologic monitoring [ More info] Real-time and historic data from gaging stations for water and mudflow detection in the Mount St. Helens, WA vicinity with data tables and plots, interactive location map and station descriptions.
Changes in Water Levels and Storage in the High Plains Aquifer, Predevelopment to 2007 [ More info] Summarizes changes in water levels and drainable water in storage in the High Plains aquifer from predevelopment (before about 1950) to 2007.
Changes in water levels and storage in the High Plains Aquifer, predevelopment to 2009 [ More info] Summarizes graphically the areas where water levels have dropped, and by how much, in this extensive underground water reservoir that covers several states in the mid-continent.
Characterization of Fish Creek, Teton County, Wyoming, 2004-08 [ More info] Preliminary results of hydrologic and biological sampling confirm anecdotal reports that this stream shows unusually high algal growth due to elevated nutrients, probably from groundwater.
Characterizing contaminant concentrations with depth by using the USGS well profiler in Oklahoma, 2003-9 [ More info] Use of specialized sampling equipment to study public water supply wells, with examples showing Arsenic in two aquifers.
Chloride control and monitoring program in the Wichita River Basin, Texas, 1996-2009 [ More info] Chloride concentrations in this river have historically been high due to natural saltwater springs and seeps from geologic formations. We monitor the water to help assess the progress of human efforts designed to mitigate this problematic salinity.
Alphabetical Index of Topics
a b c d e f g h i j k l m n o p q r s t u v w x y z | <urn:uuid:aa8fd955-405c-4d16-99d0-9b694de1b3a9> | 3.15625 | 607 | Content Listing | Science & Tech. | 22.518192 |
Broadcast meteorologists such as Alan Sealls (WKRG, Mobile, Alabama) operate in studios far more technologically advanced than those of a generation ago.
Perhaps nothing about television weather has changed more in the last several decades than the look of it. A typical early-1950s weathercast featured crude hand-drawn symbols for warm and cold fronts. As recently as 1980, nearly all weather maps were created manually. Today, maps are rendered by computer in a variety of styles, with weathercasters providing the finishing touches rather than generating maps from scratch. The earliest radar displays showed nothing but diffuse blocks of white against a dingy gray background, in contrast to the brilliant colors and pinpoint definition of contemporary radars.
According to weathercaster Tom Skilling (WGN, Chicago), one of the nation's most accomplished users of weather graphics: “We've undergone a revolution in the last four decades not only in our ability to measure the atmosphere and its evolution but in our ability to visualize it.”
ROBERT HENSON is a contributing editor to Weatherwise and a writer/editor at the University Corporation for Atmospheric Research. This article is excerpted from his book Weather on the Air: A History of Broadcast Meteorology, to be published this summer by the American Meteorological Society. | <urn:uuid:5a309d6b-7563-4ac8-b6fe-168f6694d8a5> | 2.96875 | 263 | Truncated | Science & Tech. | 22.564923 |
Discover the cosmos! Each day a different image or photograph of our fascinating universe is featured, along with a brief explanation written by a professional astronomer.
2000 April 14
Explanation: Not all stars form a big Q after they explode. The shape of supernova remnant E0102-72, however, is giving astronomers a clue about how tremendous explosions disperse elements and interact with surrounding gas. The above image is a composite of three different photographs in three different types of light. Radio waves, shown in red, trace high-energy electrons spiraling around magnetic field lines in the shock wave expanding out from the detonated star. Optical light, shown in green, traces clumps of relatively cool gas that includes oxygen. X-rays, shown in blue, show relatively hot gas that has been heated to millions of degrees. This gas has been heated by an inward-moving shock wave that has rebounded from a collision with existing or slower-moving gas. This big Q currently measures 40 light-years across and was found in the neighboring Small Magellanic Cloud (SMC) galaxy. Perhaps we would know even more if we could buy a vowel.
Authors & editors:
Jerry Bonnell (USRA)
NASA Technical Rep.: Jay Norris. Specific rights apply.
A service of: LHEA at NASA/ GSFC
& Michigan Tech. U. | <urn:uuid:2cc5cd8d-85c1-4047-8bde-c3e3f357f10f> | 3.328125 | 269 | Knowledge Article | Science & Tech. | 50.901696 |
Discover the cosmos! Each day a different image or photograph of our fascinating universe is featured, along with a brief explanation written by a professional astronomer.
March 6, 1997
Explanation: Why put observatories in space? Most telescopes are on the ground. On the ground, you can deploy a heavier telescope and upgrade it more easily. The trouble is that Earth-bound telescopes must look through the Earth's atmosphere. First, the Earth's atmosphere blocks out a broad range of the electromagnetic spectrum, allowing a narrow band of visible light to reach the surface. Telescopes which explore the Universe using light beyond the visible spectrum, such as those onboard the Compton Observatory (gamma rays), the ASCA satellite (x-rays), or the new ultraviolet and infrared instruments on the above-pictured Hubble Space Telescope (HST), need to be carried above the absorbing atmosphere. Second, the Earth's atmosphere blurs the light it lets through. The blurring is caused by varying density and continual motion of air. By orbiting above the Earth's atmosphere, the Hubble can get clearer images. In fact, even though HST has a mirror 15 times smaller than large Earth-bound telescopes, it can still resolve detail almost 100 times finer.
Authors & editors:
NASA Technical Rep.: Jay Norris. Specific rights apply.
A service of: LHEA at NASA/ GSFC
&: Michigan Tech. U. | <urn:uuid:26029290-ccae-4446-909e-29b8990deb3c> | 3.765625 | 287 | Knowledge Article | Science & Tech. | 47.462292 |
NULL is a concept in SQL databases (exact implementation can vary) that is used to represent unknown, missing or inapplicable data. Normally it is considered to be a state of the data as opposed to the value of the data.
Ordinary equivalence comparisons do not work with NULL for the above reason. The correct test is field IS NULL as opposed to field = NULL.

NULL handling can be contentious, as some believe NULL has no place in a database, and others use it for business logic.

NULLs can also be dangerous and lead to unexpected results when used with the NOT IN operator.
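These pitfalls are easy to reproduce. The sketch below uses Python's built-in sqlite3 module (the table and column names are invented for the demo) to show the equality, IS NULL, and NOT IN behaviors:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE guests (name TEXT)")
cur.executemany("INSERT INTO guests VALUES (?)",
                [("Bill",), ("Jim",), (None,)])  # None maps to SQL NULL

# Equality never matches NULL: this finds no rows at all.
cur.execute("SELECT COUNT(*) FROM guests WHERE name = NULL")
print(cur.fetchone()[0])   # 0

# IS NULL is the correct test for the unknown value.
cur.execute("SELECT COUNT(*) FROM guests WHERE name IS NULL")
print(cur.fetchone()[0])   # 1

# A NULL inside a NOT IN list poisons the whole predicate:
# name <> NULL is unknown, so no row can ever qualify.
cur.execute("SELECT COUNT(*) FROM guests WHERE name NOT IN ('Bill', NULL)")
print(cur.fetchone()[0])   # 0, even though 'Jim' is plainly not 'Bill'
```

The last query returning nothing, rather than the rows that differ from 'Bill', is exactly the NOT IN surprise described above.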
A good analogy for the meaning of NULL:

You are at a party with some people you know and some others you don't. There are 3 men that you know and 4 men that are strangers. To you, at that time, those 4 men have a name of NULL - you know they have names, but you do not know what they are.

If you were asked "Is that person Bill?" the answer would be "I don't know" - which is what most RDBMSs will return as well when asked whether a value is equal to NULL. However, you would not be able to say "His name is not Bill or Jim", which is why NULL will not return TRUE (which in this case means "no match") when used with a NOT IN comparison. | <urn:uuid:7de8c6da-fd09-43d8-961a-23b2858c94f8> | 2.984375 | 287 | Knowledge Article | Software Dev. | 62.538934
Xray Crystallography/Radiation Damage
The problem of radiation damage to crystals, even to cryocooled crystals, has become a major problem again since the advent of the brilliant beam 3rd generation synchrotrons.
Basis of Radiation Damage
It has long been known that radiation damage to proteins is important, in 1962 Blake and Phillips showed that damage was proportional to the dose received by the crystal and that each 8keV photon was capable of disrupting 70 molecules and disordering 90 more.
At 12.7 keV, of the 2% of X-rays that interact with the crystal, 84% interact through the photoelectric effect (Murray and Rudiño-Piñera, 2005). Each X-ray photon can produce only one photoelectron; however, each photoelectron can result in over 500 secondary electrons of lower energy (O'Neil et al 2002).
Even at cryo temperatures electrons are still mobile within the protein, they can exploit electron tunnelling effects to jump along the protein backbone chain (Jones et al 1987).
Types of Radiation Damage
Primary radiation damage is dose dependent and is caused by X-ray beam photons interacting with the crystal to create photoelectrons. This is an inevitable consequence of X-ray crystallography.
Photoelectrons from primary radiation damage can then go on to further damage the protein through the creation of free radicals. This process is termed secondary radiation damage and is both time and temperature dependent. The diffusion of free radicals through the crystal can be slowed by cryocooling.
Direct / Indirect
Both primary and secondary radiation damage can be either direct, directly altering the protein, or indirect, altering the surrounding solvent. Indirect effects can still be just as damaging to the protein, the radiolysis of water can create OH, H, H+ and hydrated electrons which are especially destructive.
Specific Effects
Damage occurs to proteins in a specific manner. The ability of free electrons to tunnel along the peptide backbone (Jones et al, 1987) provides a mechanism for specific structural damage to occur. Structural damage occurs in the order of covalent bond strength (Garman and Owen, 2006);
- Disulphide bond breakage.
- Decarboxylation of aspartates and glutamates.
- Loss of OH groups from tyrosines.
- C-S bond cleavage in methionines.
- (Burmeister, 2000; Ravelli and McSweeny, 2000; Weik et al, 2000)
Damage also occurs to metal centres within proteins (Carugo and Carugo, 2005), causing reduction. This can create a problem if most of the PDB structures contain reduced (or partially reduced) metal centres. Active sites of proteins are also susceptible to damage, as with the FAD cofactor in DNA apophotolyase (Kort et al, 2004).
It is also known that specific damage occurs before the diffraction pattern is visibly compromised (Ravelli and McSweeny, 2000).
General Effects
- Change in unit cell dimensions.
- Increased Wilson B values.
- Decreased diffraction power of crystal.
- Loss of high resolution data.
A change in the unit cell dimensions is also characteristic of radiation damage (Ravelli and McSweeny, 2000); however, it cannot be used as a measure of damage due to the wide variations in change seen even for crystals of the same protein (Murray and Garman, 2002). Weik et al (2001) showed a dependence of unit cell volume on temperature; however, temperature effects are not thought to be significant to radiation damage in beamlines (Nicholson et al, 2001; Kuzay et al, 2001). Unit cell volume changes are important when you consider that a 1/2% change in all cell dimensions gives a 15% change in general reflection intensities (Crick and Magdoff, 1957), especially when you are only trying to measure a 6-10% change in intensities for anomalous dispersion experiments (Hendrickson and Ogata, 1997).
Monitoring Damage
Rate of Damage
damage proportional to dose
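Since damage is proportional to dose, a common first-order picture is exponential decay of summed diffraction intensity with absorbed dose. The sketch below assumes a half-dose of 43 MGy (one experimentally reported value for cryocooled protein crystals; treat both the model and the number as illustrative):

```python
def relative_intensity(dose_mgy, half_dose_mgy=43.0):
    """Assumed first-order model: total diffraction intensity halves for
    every half_dose_mgy of absorbed dose (doses in megagray)."""
    return 0.5 ** (dose_mgy / half_dose_mgy)

for dose in (0, 10, 20, 43):
    print(f"{dose:>3} MGy -> {relative_intensity(dose):.2f} of initial intensity")
```

With these assumptions, a data set collected to 20 MGy would retain roughly 70% of its initial diffracting power.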
Radiation Limit
cryo room temp | <urn:uuid:f2f2eec5-dddc-4f1d-8e55-8fc2f0512110> | 3.375 | 881 | Knowledge Article | Science & Tech. | 32.327145 |
Lead, because of its large Z number, high density and low cost, is a popular shielding material in radiation labs. It is used to protect the lab and its environment from highly radioactive objects and processes, but it is also used to protect sensitive detectors and counting chambers from background radiation. For the latter purpose it is very important that the lead itself not be significantly radioactive -- that is, that it contribute a low(er than usual) amount of background radiation.

This sounds trivial, but in fact, it isn't. Many radioactive heavy metals are chemically very similar to lead (or identical, such as lead-210) and tend to easily contaminate it. On July 16, 1945, the United States detonated the world's first atomic bomb (the Trinity test). This released large amounts of radioactive heavy metals into the environment. All lead mined or processed since contains substantial amounts1 of these radioactive contaminants and is therefore completely unsuited to use in shielding sensitive detectors2. (This is also the reason why carbon-14 dating is useless for objects dating from the 20th century).3
Thus, there is a significant industry in obtaining lead produced before 1945 and selling it to radiation labs. This is known as low background lead and is significantly more expensive than ordinary lead (because, obviously, no more is being made!). It is usually sold in bricks, but is also available in sheets.
One significant (and fascinating) source of low-background lead is shipwrecks. Many cargo ships used large quantities of lead as ballast when travelling unloaded or lightly loaded. There are companies and individuals who will dive to sunken ships and harvest the lead ballast and sell it as shielding material.
1 Please note that we are talking about 'substantial' in terms of the noise limits of the detectors in question. Some of these are noticably affected by as little as 10 Becquerel of contamination -- and 10 Bq of lead-210 is about 1.2 picograms, barely detectable by most methods.
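The footnote's order of magnitude can be checked from first principles with A = λN and the roughly 22.3-year half-life of lead-210 (a back-of-the-envelope sketch; constants rounded):

```python
import math

AVOGADRO = 6.022e23                    # atoms per mole
SECONDS_PER_YEAR = 3.156e7
PB210_HALF_LIFE_S = 22.3 * SECONDS_PER_YEAR   # ~22.3 years

def mass_for_activity(activity_bq, half_life_s, molar_mass_g_mol):
    """Mass of a pure radionuclide with a given activity, via A = lambda * N."""
    decay_constant = math.log(2) / half_life_s   # per second
    atoms = activity_bq / decay_constant
    return atoms * molar_mass_g_mol / AVOGADRO   # grams

grams = mass_for_activity(10, PB210_HALF_LIFE_S, 210)
print(f"10 Bq of Pb-210 is about {grams * 1e12:.1f} pg")
```

The answer comes out at a few picograms — the same vanishingly small scale the footnote describes, and indeed barely detectable by most methods.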
2 RPGeek states that the real contributor to lead having a low background level, and thus being valuable, is lead-210 depletion after chemical separation, and that nuclear fallout makes an insignificant contribution. The above explanation is what I was taught in my radiochemistry course, but it may be erroneous or incomplete.
3 Here I am referring to an inability to reliably carbon-14 date 20th-century items in the far future. Carbon-14 is always useless for recent items because the margin of error is at least +/-50 years at best. RPGeek disagrees and informs me that bomb-contributed carbon-14 is accurately known and subtracted out; this is not my understanding but carbon dating is not my field. | <urn:uuid:de952b17-28d5-4955-9c0f-bb1d8f68dc15> | 3.703125 | 558 | Personal Blog | Science & Tech. | 42.394238 |
The Cambrian Explosion Never Happened.
Blair and Hedges just published an article where they use molecular clocks to examine the validity of the Cambrian explosion. They conclude, "molecular clocks continue to support a long period of animal evolution before the Cambrian explosion of fossils." Their results indicate that the Cambrian explosion (a period ~520 million years ago [mya] when many animal phyla first appear in the fossil record) is an artifact of the fossilization process and not an adequate description of animal evolution.
Molecular clock estimates of divergence times work by determining the rate at which DNA sequences diverge (the rate of DNA evolution). Rates of sequence evolution are calibrated to real time (years) using the fossil record. Estimates of the divergence time of two taxa from the fossil record are used to determine the age of divergence, which can then be used to figure out how fast a genomic sequence should evolve. For instance, if two taxa diverged 50 mya and they differ at 10% of their nucleotide sites, we can say that the nucleotide sequence evolves at a rate of 2% divergence every 10 million years or 0.2% divergence every 1 million years. This rate of evolution is then used to calculate the divergence times of other taxa in which fossil evidence is scarce or non-existent.
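The worked example above reduces to a one-line rate calculation; the sketch below simply encodes that arithmetic (the percentages and times are the hypothetical ones from the text):

```python
def rate_per_my(percent_divergence, divergence_time_my):
    """Fossil-calibrated rate: percent sequence divergence per million years."""
    return percent_divergence / divergence_time_my

def estimate_divergence_time(percent_divergence, rate):
    """Apply the calibrated rate to a pair with scarce fossil evidence."""
    return percent_divergence / rate

rate = rate_per_my(10, 50)   # 10% divergence over 50 My -> 0.2% per My
print(rate)
print(round(estimate_divergence_time(6, rate), 1))   # a 6%-divergent pair: ~30 My
```

In practice the calibration carries its own error bars, which is exactly why the choice of calibration points is so contentious in the studies discussed below.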
Previous molecular studies have been inconsistent -- some support the Cambrian explosion and some refute it. Blair and Hedges argue that the results that support the Cambrian explosion are flawed because they either misapplied calibration points or used an improper model of nucleotide substitution. The allegation of calibration point misconduct is a bold one coming from the Hedges lab, considering a recent review in which the authors conclude that Hedges and collaborators' "divergence-time estimates were generated through improper methodology on the basis of a single calibration point that has been unjustly denuded of error." I'll stop at that, as I don't want any grief from the folks upstairs (and, no I don't mean god -- the Hedges lab is literally "upstairs" from me), but I will point out that Hedges and Kumar did refute the allegations here.
The concept of the Cambrian explosion is often used by anti-evolutionists as support for divine intervention during the origin of animals. If most animals appeared at the same time, they argue, evolution would fall apart since Darwin's theory depends on gradual change over time. It now appears, however, that the Cambrian explosion is a mere artifact of the fossilization process. Because fossilization is a chance process that requires multiple events of varying probabilities, the fossil record can be misleading as a true history of life on Earth. It provides a general guideline, but the first appearance of an organism or taxon in the fossil record cannot be taken as the first appearance of the taxon in history. Instead, the first appearance of a taxon in the fossil record is the first discovered fossilized account of that taxon.
If Blair and Hedges's interpretation of the molecular data is correct, many of the animal taxa that were thought to have arisen in a very short time period may have evolved over hundreds of millions of years. This is extremely consistent with Darwin's view of evolution as a gradual process, and it puts an axe through the anti-evolutionists claims that the Cambrian explosion is inconsistent with evolution. | <urn:uuid:8605d83e-ebb0-4798-98f0-9d3f987b31e2> | 2.84375 | 688 | Personal Blog | Science & Tech. | 27.366517 |
Select a fraction k between 0 and 1 and mark a length on the hypotenuse that is k times the distance from the end of the segment of length t.
Construct a perpendicular from this point to the segment of length t to construct a similar triangle with sides of length ka, kh, and kt.
Another SIMILAR triangle can be constructed by drawing a perpendicular to the segment of length h. The sides of this new triangle will have lengths of (1-k)a, (1-k)t, and (1-k)h. This is one side of the picture for our original triangle and the inscribed rectangle, completed as follows: | <urn:uuid:02cc8623-3013-4edd-ac8d-8bf420059a8a> | 3.578125 | 137 | Tutorial | Science & Tech. | 57.415382
it's not just for math teachers!
If you like to learn new things and play around with ideas, you're sure to find something intriguing here. Don’t try to read all 40(!) posts at once; take the time to enjoy browsing. Savor a few posts today, and then come back for another helping tomorrow or next week.
At my fortieth birthday party, I got a few of those gag presents meant to remind me how terribly old I was getting. Math Teachers at Play is less than 40 months old (it used to come out twice a month), but just imagine how many great math posts have been included over the months, in all 40 issues.
Forty: A Puzzle
In English, the number forty is spelled out so that the letters appear in alphabetical order. Can you find any other numbers that work this way, or is forty the only one? (If I couldn't find another, how would I prove it was the only one?) Does switching to another language help?
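For readers who prefer brute force to proof, the puzzle can at least be checked by machine for small numbers. The number-naming function below is a minimal sketch covering 0 through 99 in English:

```python
ONES = ["zero", "one", "two", "three", "four", "five", "six", "seven",
        "eight", "nine", "ten", "eleven", "twelve", "thirteen", "fourteen",
        "fifteen", "sixteen", "seventeen", "eighteen", "nineteen"]
TENS = ["", "", "twenty", "thirty", "forty", "fifty", "sixty", "seventy",
        "eighty", "ninety"]

def name(n):
    """English name of n for 0 <= n <= 99 (enough for this search)."""
    if n < 20:
        return ONES[n]
    tens, ones = divmod(n, 10)
    return TENS[tens] + ("" if ones == 0 else ONES[ones])

# Keep the numbers whose letters are already in alphabetical order.
hits = [n for n in range(100) if sorted(name(n)) == list(name(n))]
print(hits)   # [40] -- "forty" is the only one below one hundred
```

Extending the search past one hundred would not add hits, since every larger English name contains "hundred" or "thousand", whose letters already break the ordering ("h" then "u" then "n").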
|from You Can Count on Monsters|
- Forty is a pentagonal pyramidal number.
- Forty is the atomic number of zirconium.
- Negative forty is the one temperature at which the Fahrenheit and Celsius scales correspond; that is, −40°F = −40°C.
- Forty is the number of thieves in Ali Baba and the Forty Thieves, from the Thousand and One Nights. (Both the numbers 40 and 1001 may have meant "many", rather than indicating a specific number.)
- 40 = 101000 (base two) = 1111 (base three) = 2*2*2*5
- Forty winks is a nice afternoon nap on a hot summer day.
- 40 acres and a mule refers to the short-lived policy, during the last stages of the American Civil War in 1865, of providing arable land to black former slaves who had become free as a result of the advance of the Union armies into the territory previously controlled by the Confederacy, instituted by General Sherman. After the assassination of President Abraham Lincoln, his successor, Andrew Johnson, revoked Sherman's Orders and returned the land to its previous white owners. Because of this, the phrase "40 acres and a mule" has come to represent the failure of Reconstruction policies to return to African Americans the fruits of their labor. (Wikipedia)
- And last but not least, we have the forty hour work week, brought to you by unions.
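The −40° coincidence in the list above is the unique fixed point of the temperature conversion formula, which a couple of lines confirm:

```python
def c_to_f(celsius):
    """Standard Celsius-to-Fahrenheit conversion."""
    return celsius * 9 / 5 + 32

# Solving c = 9c/5 + 32 gives c = -40, the only temperature
# where the two scales read the same.
print(c_to_f(-40))   # -40.0
```

Any other input comes back changed: c_to_f(0) is 32.0 and c_to_f(100) is 212.0, the familiar freezing and boiling points.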
Jennifer Knopf brings us 4th of July Fun for beginning counters, with a sorting mat and graph to go with red, white, and blue star marshmallows.
In How to Teach Math Concepts at the Dinner Table, Bon Crowder looks through her young daughter's eyes at the commutative and associative properties, along with substitution. These concepts help form a strong foundation for first arithmetic and then algebra.
Denise Gaskins is giving away a complete set of the Arithmetic Village picture books, by Kimberly Moore. The stories look beautiful and whimsical. They remind me of the Waldorf arithmetic stories, which are usually shared orally. I'm excited to see these stories in print. (The giveaway is open until July 17. Go comment now at Denise's blog if you're interested.)
Karyn Tripp has created an addition bingo game, and posted it at her blog.
Maria Miller suggests helping kids focus on the first step of a problem (and then the next) by bubbling it.
Jennifer Bardsley describes how using coupons to buy toothpaste can turn into a wonderful math lesson for young children in Coupons and Kids, Math in Action.
In Math or Magic?, David Ginsburg asks us to make sure students don't learn math as if it were magic, and walks us through an example involving multiplying fractions.
Rebecca Hanson offers a very simple arithmetic exercise that gets students thinking more deeply.
The Art in Math
Jenny got her first graders playing with Turtle Art (a simple programming environment that produces beautiful results), and they loved it. The turtle art software is free to download. To the left you see a 40-sided star, and to the right, my first art attempt in TurtleArt. (I loved making it.)
Luyi shows off her friend Justin's fabulous knot drawings.
Dan MacKinnon gives an example of what you can do with specialized graph paper, and links to a source for lots of free varieties of it.
If you haven't been following toomai's series on building a computer from first principles (using paper and then wood to create adders), you are missing out! Here's his first post in the series.
John Golden has another cool game, called Linear War. I think of Algebra I as two-thirds linear and one-third quadratic; here's John's quadratic post.
Henri Picciotto writes about kinesthetic activities for secondary math; I'm looking forward to using some of these in class. (Henri maintains a math ed page as a website rather than a blog. There's lots of good material here; check it out.)
What's more likely, getting struck by lightning, or hit by a car? Which risky activities do you avoid, and which do you engage in? Take the BBC's Big Risk Test to learn more about your risk-taking profile.
Puzzles and Games
Knight's Tour and King's Tour puzzle posts.
Jim Wilder brings us a magic square post.
Rachel Lynette offers a symmetry game called Guess My Grid, with a free game board available.
Teaching and Learning
Denise Gaskins is collecting quotes from bloggers. Which is your favorite?
Alexandre Borovik blogs about place value and the problem with not having a name for a number that has just one non-zero digit (which may be followed by any number of zeros).
Alexander Bogomolny blogs about the ambiguity of math words in English.
Geoff blogs about some benefits he saw with Problem-Based Learning.
Whit Ford blogs about Eight Attributes of Effective Activities, Problems, or Projects.
Terrance Banks used a menu system to allow students some say about how their quizzes were graded. He shows us how it works in this blog post.
Allison Cuttler posts about solving a hard problem and the lessons she took from that about how students feel about problem solving.
I went to a workshop on Complex Instruction, which involves using groups in math class, and blogged about it.
The Real World
Pyramid Schemes, with math coming to the rescue. I found this post surprisingly timely as I had just received a chain letter in the mail. (Chain letters are illegal in the U.S., so I tried to report it, but haven't succeeded yet.)
Jonah Lehrer blogs about the problem with math, in sports. "Because it translates sports into a list of statistics, this tool can also lead coaches and executives to neglect those variables that can't be quantified." The problem Jonah describes shows up in lots of other arenas (standardized tests getting you down, anyone?) - I'd love to see more bloggers write about this issue.
John Cook thinks about how different people would answer the interview question "What's the square root of 101?"
Plus Magazine has an article on the Unplanned Impact of Maths.
Ihor found this sign in Italy, and wonders if numbering the floors this way makes for better math students.
Peter Price looks at Roman ruins to help him teach Roman numeration.
Katie Sorene brings us 7 more buildings of mathematical interest. I wish the math were explained more clearly - I'm not sure I believe all the hype about particular ratios. I'd really like to see a floor plan of the Rushton Triangular Lodge; maybe I'll draw one myself...
John Cook has posted a sweet collection of limericks. And Mad Kane offered up her limerick ode to Tau Day on June 28th (6.28).
Math Haiku for you, from Yan Kow Cheong.
JoAnne Growney posts math poetry regularly. In this poem, Robert Gerther asks whether math truly is a universal language.
Visit Our Sister Carnivals
- Carnival of Mathematics
- Mathematics and Multimedia
- Carnevale della Matematica
- Carnaval de Matemáticas
Exit the Carnival Here
I Hope This Old Train Breaks Down. If you would like to contribute, please use this handy submission form or email Mimi directly. (We've had trouble with the submission form these past few months. We hope they’ll get it fixed.) Posts must be relevant to students or teachers of preK-12 mathematics. Old posts are welcome, as long as they haven’t been published in past editions of this carnival. Past posts and future hosts can be found on our blog carnival index page. | <urn:uuid:7fc88f7b-5b39-411b-a8dc-aa43228fe541> | 3.375 | 1,850 | Personal Blog | Science & Tech. | 61.158559 |
In the deep, sticky jungles of Cameroon lives a rare type of gorilla — one which has been hunted nearly to extinction by indigenous peoples and has never been placed on film.
Until recently, that is, when a camera crew spotted the gorillas hanging out high off the ground in their natural habitat:
“These gorillas are extremely wary of humans and are very difficult to photograph or film,” said Roger Fotso, director of the Wildlife Conservation Society’s (WCS) Cameroon Program. “Eventually, we identified and staked out some of the gorillas’ favorite fig trees, which is where we finally achieved our goal.”
While the video footage of the great apes is comforting to those who thought they may already be extinct, scientists still worry for the future of these beasts. Not because of encroachment on their habitat, but because these gorillas appear to be knuckledraggers in more ways than one. Witness the video:
As you can see, this rare species is in mortal peril thanks to a lack of basic self-awareness and common sense. Some researchers predict the entire species will choke to death on potted meat sandwiches. Others predict they will be electrocuted by peeing on high voltage transformer stations. Either way, the world may soon lose the last of its rarest — and most retarded — primate groups.
Most Elusive Gorilla Caught on Video [LiveScience] | <urn:uuid:5903cc95-8292-4ac9-be80-c9b56056fe43> | 2.6875 | 293 | Personal Blog | Science & Tech. | 36.006632 |
The quark–gluon plasma is explored at the LHC with the heavy-ion experiments there. A description can be found here.
In heavy-ion collisions, the first evidence for jets was seen in 2003 in the STAR and PHENIX experiments at Brookhaven National Laboratory’s Relativistic Heavy Ion Collider (RHIC) in the US. These jets showed a remarkable difference from those in simpler collisions, however. In the most striking measurement, STAR observed that one of the two back-to-back jets was invariably “quenched,” sometimes weakened and sometimes completely extinguished. The further a jet has to push through the dense fireball of a heavy-ion collision – 30 to 50 times as dense as an ordinary nucleus – the more energy it loses.
Jets are “hard probes”, by nature strongly interacting but moving so fast and with so much energy that they are often not completely absorbed by the surrounding quarks and gluons in the quark-gluon plasma. The degree of jet quenching – a figure that emerges in data from millions of collision events – plus the jets' orientation, directionality, composition, and how they transfer energy and momentum to the medium, reveal what’s inside the fireball and thus the properties of the quark-gluon plasma.
Recently the ALICE, ATLAS and CMS experiments at CERN’s Large Hadron Collider (LHC) have confirmed the phenomenon of jet quenching in heavy-ion collisions. The much greater collision energies at the LHC push measurements to much higher jet energies than are accessible at RHIC, allowing new and more detailed characterization of the quark-gluon plasma. Theoretical understanding of these measurements is challenging, however, and is one of the most important problems in quantum chromodynamics today. | <urn:uuid:90526f19-a2f2-499c-940a-7de335efa5fa> | 3.3125 | 376 | Q&A Forum | Science & Tech. | 32.53827 |
"A novel genetic programming (GP) technique, a new method of evolutionary algorithms, was applied to a small data set to predict the water storage of Wolonghu wetland in response to the climate change in the northeastern part of China. Fourteen years (1993-2006) of annual water storage and climatic data of the wetland were used for model training and testing. Results of simulations and predictions illustrate a good fit between calculated water storage and observed values (mean absolute percent error = 9.47, r = 0.99). By comparison, a multilayer perceptron method (a popular artificial neural network model) and Grey theory model with the same data set were applied for performance estimation. It was found that GP technique had better performance than the other two methods, in both the simulation step and the predicting phase. The case study confirms that GP method is a promising way for wetland managers to make a quick estimation of fluctuations of water storage in some wetlands under the limitation of a small data set." | <urn:uuid:a7f10097-9572-4a20-a12c-eb1b4363c5fc> | 2.6875 | 203 | Academic Writing | Science & Tech. | 37.217977 |
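For readers who want to see how the quoted fit statistics are defined, here is a minimal sketch computing mean absolute percent error and Pearson's r. The numbers are synthetic placeholders chosen purely to show the formulas, not the study's 14-year wetland series.

```python
import math

# Synthetic observed/predicted water-storage values (illustration only)
observed  = [10.2, 11.5, 9.8, 12.1, 10.9]
predicted = [10.0, 11.9, 9.5, 12.6, 10.4]

# Mean absolute percent error: average of |error| / |observed|, times 100
mape = 100 / len(observed) * sum(
    abs(o - p) / abs(o) for o, p in zip(observed, predicted))

# Pearson correlation coefficient r
mo = sum(observed) / len(observed)
mp = sum(predicted) / len(predicted)
cov = sum((o - mo) * (p - mp) for o, p in zip(observed, predicted))
r = cov / math.sqrt(sum((o - mo) ** 2 for o in observed) *
                    sum((p - mp) ** 2 for p in predicted))
print(round(mape, 2), round(r, 2))  # → 3.44 0.98
```

A small MAPE together with r near 1, as reported (9.47 and 0.99), indicates both low relative error and strong linear agreement between calculated and observed storage.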
Guest post from Stephen Wilde:
Greenhouse Gases and the Ideal Gas Law
Stephen Wilde – Jan 2013
This is the usual form of the Ideal Gas Law:

PV = nRT
where P is the pressure of the gas, V is the volume of the gas, n is the amount of substance of gas (also known as number of moles), T is the temperature of the gas and R is the ideal, or universal, gas constant, equal to the product of Boltzmann’s constant and Avogadro’s constant.
The Ideal Gas Law as set out above is a representation of certain physical relationships and is therefore not about absolute values.
It is widely known how the various terms within that equation respond to changes in any one or more of them.
P and V are inversely proportional to each other so a rise in Pressure results in reduced Volume and vice versa.
Increasing either P or V without reducing the other requires an increase in:

- n – the total atmospheric mass, and/or
- R – the gas constant, which is related to the strength of the gravitational field, and/or
- T – temperature.
The product of n, R and T then rises to match the increased product of P and V.
The problem with AGW theory in relation to the Ideal Gas Law.
AGW theory proposes that an increase in GHGs causes an increase in T which then causes an equal increase in V so as to keep the two sides of the equation balanced.
So far so plausible.
However, an increase in V results in a reduction of density (n) throughout an atmosphere which must REDUCE the product of nRT.
Normally a reduction in n would be accompanied by a reduction in P as well, because less mass in an atmosphere results in reduced pressure at the surface if the strength of the gravitational field stays the same. But mere expansion produces no reduction in P at the surface, even though the density of the entire atmosphere falls when V increases.
Therefore we cannot look to a reduction in P to correct the imbalance caused by the reduction in density (n).
According to the Ideal Gas Law it is not possible for PV to fail to equal nRT, yet that is just what happens if one holds P steady whilst increasing T and V equally but reducing n.
There would only be balance, with PV continuing to equal nRT:

- if more mass were added to the atmosphere, so as to avoid reducing the average density of the atmosphere when expansion occurred, or
- if the strength of the gravitational field increased to pull V back down, restoring n to its previous value.
Since no extra mass or gravity is being added AGW theory cannot be right because of the residual imbalance.
If a higher T from more GHGs leads to a higher V, then the reduction of density (n) results in a lower V, which must be accompanied by a lower T, so there is a logical impasse.
That is what I think some sceptics mean when they say that AGW theory is impossible or contrary to the Laws of physics.
In order to resolve the problem we need to look at the same scenario differently.
What is the effect of adding more GHGs ?
We have proof that GHGs expand the region of an atmosphere in which they are situated.
Since they absorb more energy than non GHGs they spread energy more evenly across the whole area that they occupy. The effect is to reduce the rate at which temperature would otherwise decline with height (the lapse rate).
In the Troposphere the dry adiabatic lapse rate without water vapour would be about 10C per km. The presence of water vapour reduces the actual lapse rate to about 6.5C per km. As a consequence of reducing the lapse rate, the distance required for the air to cool between surface and tropopause needs to increase, and so the expanded troposphere pushes the tropopause upwards, extending the troposphere beyond the height that it would have achieved without GHGs.
We see the same process in the stratosphere where ozone (a GHG) warmed by the sun actually reverses the lapse rate so that temperature increases with height up to the stratopause. The expansion of the stratosphere can push both up and down because there is no solid surface beneath it and that results in some interesting features of our climate system that are beyond the scope of this article.
So the effect of GHGs is to increase atmospheric height AND reduce the slope of the lapse rate.
As I will now explain, that is important because the combination of expansion and increased height enables the atmosphere to accommodate more GHGs without altering system equilibrium temperature.
How to approach the problem.
Note first that a constant flow of new energy from outside the atmosphere is required to maintain it in gaseous form.
If that supply of energy from external sources were to be cut off the atmosphere would simply collapse and freeze to the surface in solid form.
Those energy sources can be anything such as from a nearby sun, from geothermal energy below the surface, from nearby planetary gas giants large enough to radiate, even the temperature of space being above absolute zero makes some contribution.
Above all it must be constant because energy is also being radiated to space at an equal rate when the atmosphere is at equilibrium.
Note second that a large amount of work is constantly being done in order to keep the gases lifted off the surface against the constant force of gravity. If the work rate drops the atmosphere will contract and if the work rate increases the atmosphere will expand.
The persistence of a gaseous atmosphere despite the efforts of gravity to pull it down to the surface is due to work being done constantly.
Now, recall the problem we had with AGW theory in that we had nothing to counter the reduction in density (n) caused by more GHGs and we needed something to counter it in order to comply with the Ideal Gas Law.
The rise in V on one side of the equation offset the rise in T on the other side of the equation but we couldn’t balance the numbers because n had reduced on one side of the equation but P had been held steady on the other side.
We obviously need another variable but all we have left is the Gas Constant (R)
Let’s look at R more closely and see what can be done.
Dimensions of R
From the general equation PV = nRT we get
R = PV/nT or (pressure × volume) / (amount × temperature).
As P is defined as force per unit area, so we can also write the gas equation as
R = [(force/area) × volume] / (amount × temperature).
Again, area is simply (length)² and volume is equal to (length)³. Therefore,

R = [force / (length)²] × (length)³ / (amount × temperature).
Since force × length = work,
R = (work) / (amount × temperature).
The physical significance of R is work per degree per mole. It may be expressed in any set of units representing work or energy.
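As a numeric sanity check of R = PV/nT, plugging in one mole of ideal gas at standard temperature and pressure (textbook values, not figures taken from this article) recovers the familiar constant, expressed as work per amount per temperature:

```python
# One mole of ideal gas at 0 °C and 1 atm (standard textbook values)
P = 101325      # Pa, i.e. force per unit area
V = 0.022414    # m³ occupied by one mole at STP
n = 1.0         # mol
T = 273.15      # K

R = P * V / (n * T)  # work per amount per temperature
print(round(R, 3))   # → 8.314, i.e. joules per mole per kelvin
```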
It turns out that R, the Gas Constant, is not really constant at all except in clearly defined circumstances unique to each planet. R is only a constant for a fixed gravitational field, a fixed amount of atmospheric mass and a fixed height of atmosphere. Change any of those features and the amount of work required will also change, and the value of R will rise or fall.
The amount of work per degree per mole will be related to the strength of a gravitational field. A stronger such field will require more work per degree per mole and the value of R will increase.
It will also be related to the amount of mass that is available to be raised off the surface. The more atmospheric mass the higher the value of R will need to be in order to lift it all off the surface.
Crucially, it will also be related to the height of the atmosphere because a higher atmosphere requires more work to raise molecules to the greater height against the continuing force of gravity so for a higher atmosphere the value of R will increase.
If the atmosphere expands thereby rising in height the value of R will increase because more work needs to be done in order to raise the molecules higher against the force of gravity.
One can then increase the value of R in the equation nRT which will offset the reduction in n to bring both sides of the equation back into balance.
But there is another step to take.
The final step.
It turns out that it isn’t necessary for GHGs to raise T.
In the Ideal Gas Law Equation T is often taken as simply temperature but it is actually more than that.
T is the amount of energy available from all sources to maintain the constant flow that keeps the atmosphere off the surface.
GHGs do not add to that external energy source. Nor do they detract from it.
So it is wrong to include their thermal characteristics within T.
Instead, they go straight to expanding the atmosphere, thereby raising V.
So how do we balance that on the other side of the equation without raising T?
We have determined that a higher atmosphere requires a higher value of R because more work needs to be done in order to maintain the new higher atmosphere.
So all we have to do is raise R and the product of nRT then balances again with PV at the increased V.
The increase in R is enough to offset both the increase in V on the other side of the equation and the reduction of n on the same side of the equation leading to overall balance without raising T.
What happens to the ‘missing energy’?
Since there has been no increase in T the total amount of energy flowing through the atmosphere at equilibrium remains the same as before but the GHGs have absorbed more energy so where is it?
The atmosphere has expanded so the total amount of energy held within the system has obviously increased.
The answer is contained in the fact that the energy held within the system isn’t just kinetic energy, which registers as heat. It also comprises potential energy, which does not register as heat.
The higher the atmosphere is allowed to rise within the gravitational field the more of its energy content takes the form of potential energy.
So the extra energy absorbed by GHGs has all gone to increasing atmospheric height which has converted that initial kinetic energy to potential energy where it will remain until the atmosphere contracts again.
That scenario avoids the problem of imbalance inherent in AGW theory, keeps PV = nRT in balance and explains why any extra energy absorbed by GHGs is no longer available to affect equilibrium temperature.
The higher atmosphere does result in air circulation changes that potentially have a climate impact but that is another story that I have dealt with elsewhere. | <urn:uuid:051a0b5a-9741-4383-bc6f-d8641d1e5061> | 3.53125 | 2,196 | Personal Blog | Science & Tech. | 47.239949 |
In this article I will try to introduce one of the more controversial and interesting to the general public questions in Astrobiology. This is the Fermi Paradox.
Enrico Fermi was a very prominent scientist of the 20th century. He was not very involved in the field of astrobiology (exobiology, as it was called back then), but still had an enormous impact on science in general and his opinion was widely acclaimed in the scientific circles, with very good reasons for that of course. He put into words a question that had been on the minds of many scientists who were involved in Astrobiology and in SETI in particular at that time.
Very well described in a good book that I recommend reading (Bennet, 2008) this question can be clarified only by spending some time on pondering on what we know of civilizations and their [putative] occurrence throughout the Cosmos. Of course, one might very well say that we cannot say much since, after all, we are the only civilization we know of, so we don’t have really too much material to work with in the first place. But still, try we must and try we will.
By using the Drake equation (which I have briefly explained in this post) and emerging data about some of the previously dubious variables in it, namely exoplanet and habitable exoplanet count, we can state that there must be around 100 000 intelligent civilizations currently in our Galaxy. The interesting bit is that statistically the youngest one of these should be the human civilization, and the next one after that should be 50 000 years older than us (remember this part, we will come to this later).
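The Drake equation itself is just a product of factors, so an estimate of this kind is easy to reproduce. In the sketch below every parameter value is an assumption chosen purely to illustrate how a figure of order 100 000 can arise; they are not the values used in Bennet (2008).

```python
# Illustrative Drake-equation estimate; every input below is an
# assumed value for demonstration only.
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """N = R* × fp × ne × fl × fi × fc × L"""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

N = drake(R_star=10,    # new stars formed per year in the Galaxy
          f_p=0.5,      # fraction of stars with planetary systems
          n_e=2,        # habitable planets per such system
          f_l=1.0,      # fraction of those on which life appears
          f_i=0.1,      # fraction of those developing intelligence
          f_c=0.1,      # fraction producing detectable technology
          L=1_000_000)  # years a civilization remains detectable
print(round(N))  # → 100000
```

Changing any single factor by an order of magnitude moves the whole estimate by the same amount, which is why the uncertain biological terms dominate the debate.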
So there are at least three possible answers to this paradox:
- We are indeed alone. One of the key variables in the Drake equation defines life as a one-time event, a galactic stroke of fortune. As of now, a good candidate for such a variable would be the probability of life originating at all: despite the numerous experiments done since those of Stanley Miller, we are no closer to discovering how life on Earth emerged than we were 50 years ago, and this despite all the advancements in science, especially in molecular and cell biology.
- All civilizations that have developed technocratically have eventually destroyed themselves.
- There are many intelligent civilizations in the Cosmos, but we cannot detect them at present with our technology, since they are vastly superior to us and we cannot even define them as living. Here the ideas about Galactic engineering in Contact (my review of the book here) come to mind. If a civilization is 50 000 years older than us (and some could be much older than that, on the order of billions of years), what would it look like? At the moment we can’t predict the future even 50 years from now (and have failed before, in most of the science fiction books of the 20th century), let alone tens of thousands, or even billions, of years… A civilization that much older (read: more advanced) than us might be completely invisible to the comparatively primitive biology of the human senses, both physical and mental. They could be everywhere and we might not know. For all we know, that part of the Cosmos which is visible to the Hubble Space Telescope might be just a tiny pebble on a beach of their world.
There are many implications of the Fermi paradox, and it has certainly made many people think about the possibilities. The second answer is perhaps the gloomiest one. It reminds me of many of the things that the late Carl Sagan devoted his life to explaining to the public: our ability to destroy not only our own civilization, but even life on Earth (although this too is disputable, taking into account the ability of extremophilic organisms to thrive in such diverse niches in the biosphere; still, the development of intelligence might be rare, as I said beforehand, so we should not risk it).
We can indeed be alone, it is possible. In the mind of an informed scientist, highly unlikely, but still possible. But determining this might take forever. Literally. As Carl Sagan said, atheists and believers alike know far more than an agnostic. Proof of non-existence would mean searching the entire Cosmos in every possible way imaginable, and even then we would not be sure of the credibility of our results. The absence of evidence is not evidence of absence.
It is simply inexcusable, bearing in mind the military budgets of the wealthy nations of the world, not to spend at least a respectable fraction of them on SETI research. It is a cliché, but we are born to explore. The sky is no longer the limit.
1. Bennet, J. (2008). Beyond UFO’s. The Search for Extraterrestrial Life and its Astonishing Implications for Our Future. Princeton University Press, New Jersey, USA.
If you wish to read more on the matter, the best place to go will be SETI institute’s website. Don’t forget that they are a privately funded institution and your support can mean a lot. Any contribution you make can prove useful in one of humanity’s most noble and age-defining endeavors, one that gives purpose to our existence.
Also, Carl Sagan’s Cosmos is indeed a masterpiece of science shared with the world. It is a must-see for any aspiring scientist. His other books, most notably “The Dragons of Eden”, consider different topics that are still relevant to humanity in more than one way.
A vertical string with weights attached at spacings that correspond to increasing squares (4 inches, 9 inches, 16 inches, etc.) is dropped, either by the demonstrator or a member of the audience. The weights don’t need to be heavy, they just need to make a notable sound when they hit the floor. The sounds of the weights hitting the ground will occur at regular intervals.
Alternatively, or as a lead-in demonstration, drop a vertical string with weights attached at equal spacings. In this case the frequency of the weights hitting the ground will increase as the string falls. In both cases the longer the string is, the better the effect.
The geometric nature of the relationship between distance (d) and time (t) for constant acceleration is demonstrated, audibly. Since the string is starting from rest, d = 0.5gt², where g is a constant (9.8 meters per second per second). The constant is irrelevant when units are of no concern, so the squares of members of any arithmetic sequence will work for the string with increasing spacing of weights.
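A quick computation confirms the effect: with weights spaced at successive squares, the impact times form an arithmetic sequence, so the gaps between the sounds are all equal. (Here g is expressed in inches per second squared to match the spacings quoted above.)

```python
import math

g = 386.1  # gravitational acceleration, inches/s² (≈ 9.8 m/s²)

# Weights tied at distances that are successive squares, in inches
distances = [4, 9, 16, 25, 36]

# From rest, d = 0.5 * g * t**2, so each weight lands at t = sqrt(2d/g)
times = [math.sqrt(2 * d / g) for d in distances]

# The gaps between successive impacts come out identical
intervals = [t2 - t1 for t1, t2 in zip(times, times[1:])]
print([round(dt, 3) for dt in intervals])  # → [0.072, 0.072, 0.072, 0.072]
```

With equally spaced weights instead, the same formula gives impact times proportional to square roots, so the gaps shrink and the sounds speed up as the string falls.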
This demonstration is best for smaller groups of mid-level students (some middle schoolers, high schoolers, and college freshmen). It is great for a visually handicapped audience; have one or more members of the audience verify the spacing of the weights before the string is dropped.
Shock Value: 2 | <urn:uuid:0b46725b-bb0f-4708-8d2e-d99ae977954a> | 3.8125 | 284 | Tutorial | Science & Tech. | 60.561758 |
Here is one of the largest objects that anyone will ever see on the sky. Each of these fuzzy blobs is a galaxy, together making up the Perseus Cluster, one of the closest clusters of galaxies. The cluster is seen through a foreground of faint stars in our own Milky Way Galaxy. Near the cluster center, roughly 250 million light-years away, is the cluster's dominant galaxy NGC 1275, seen above as a large galaxy on the image left. A prodigious source of x-rays and radio emission, NGC 1275 accretes matter as gas and galaxies fall into it. The Perseus Cluster of Galaxies, also cataloged as Abell 426, is part of the Pisces-Perseus supercluster spanning over 15 degrees and containing over 1,000 galaxies. At the distance of NGC 1275, this view covers about 15 million light-years.

Credit & Copyright:
Conceptual sketch of a Wayland Mk I starship leaving the Solar System
Wayland is a set of design sketches for a manned starship.
These unofficial notes have been produced in parallel with the Icarus project, itself an updating of the Daedalus project of the 1970s. Daedalus produced a detailed design for a robotic probe which might be launched to a nearby star (Barnard’s Star was chosen) within a couple of centuries, while Icarus has its sights on a departure by the end of the 21st century.
Inspired by the launch of the Icarus project, but even more irritated by James Cameron’s infuriating new movie Avatar (see review of Avatar here), I found myself compelled to set down some sketches of a starship design of my own. Wayland looks ahead to ask whether, when and how human beings might voyage to the stars in person. While speculative, Wayland shall still remain physically plausible.
The name “Wayland” has been chosen since a character of this name was the Anglo-Saxon equivalent to the Daedalus of ancient Greek mythology.
The top-level design guidelines for the Wayland starship are as follows:
The resulting design for Wayland carries 100 people, and has a mass of about 178,000 tonnes at Solar System departure. It is propelled by 1254 tonnes of antimatter (probably antihydrogen ice), which energises about 100 times that mass of liquid hydrogen for acceleration and deceleration. With a cruising speed of 0.1c, it should reach Alpha Centauri in about 46 years flight time. Its first journey is likely to be undertaken in the period AD 2500 to 3000.
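The quoted flight time is easy to sanity-check. The sketch below assumes a constant-acceleration boost and braking phase at about 0.05 g (my assumption; the report's actual thrust profile may differ) bracketing a 0.1c cruise over the roughly 4.37 light-years to Alpha Centauri; relativistic corrections at 0.1c are below one percent and are ignored.

```python
C = 2.998e8     # speed of light, m/s
LY = 9.461e15   # metres per light-year
YEAR = 3.156e7  # seconds per year

v = 0.1 * C           # cruise speed from the design figures
a = 0.05 * 9.81       # assumed boost/brake acceleration (~0.05 g)
d_total = 4.37 * LY   # distance to Alpha Centauri

t_burn = v / a             # time spent accelerating (and again braking)
d_burn = v**2 / (2 * a)    # distance covered during each burn
t_cruise = (d_total - 2 * d_burn) / v
t_total = (2 * t_burn + t_cruise) / YEAR
print(round(t_total, 1))  # → 45.6 years, consistent with the quoted ~46
```

At this acceleration each burn lasts about two years and covers only a tenth of a light-year, so the trip time is dominated by the cruise leg.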
Obviously, this is all extremely speculative and sketchy compared with the vastly more detailed and professional collaborative work that has gone into Daedalus and is now being focused on Icarus. Yet I hope that some of the thoughts I have put into Wayland may possess some lasting interest. In particular, I have tried to correct what I see as a fundamental misconception about the focus on finding earthlike or marslike exoplanets for future human habitation, which is expressed in the recent DVD “How To Colonise The Stars”.
Many scientists and engineers continue to believe that the voyage of a manned starship, bringing our own descendants into direct physical contact with extrasolar planets and ultimately extrasolar alien life, is too difficult a project ever to bring to fruition.
The numbers suggest otherwise. Although a manned starship is certainly way beyond our current abilities, so long as technological progress and economic and population growth remain possible then a point can be reached when the capabilities of civilisation match the challenge. The question becomes one of asking how much growth and progress is the precondition for a starship of a given speed and size.
Since the future expansion of civilisation inevitably entails our occupation of the Solar System, growth on the interplanetary frontier will prepare us in many different but complementary ways to eventually face the interstellar frontier.
Click here for the full report in PDF format.
Last revised 22 May 2010 / 41st Apollo Anniversary Year | <urn:uuid:d505cf76-711c-44e4-af6f-f2b60a65117e> | 2.828125 | 632 | Personal Blog | Science & Tech. | 34.203568 |
What are "currents"?
Is it true that sand piles up on the bottom of the ocean only 1 centimeter every 1000 years?
What’s the difference between the current and the tide?
What is a warm water current?
How do whirlpools form?
What is a riptide?
What is a whirlpool?
What are upwellings?
Why do hurricanes hit Florida a lot?
How are beaches made?
Is it safe to go in the ocean if I don’t know how to swim?
© 1999–2013 BrainPOP and BrainPOP's Licensees. All rights reserved.
Part 1 - Setup
Each student must complete this section before working with any chemicals in this lab.
my partner is: _________________________
name of compound: _________________________
volume of solution to make: _________________________ _____
chemical formula for compound: _________________________
molecular weight of compound: _________________________ _____
mass of solute required: _________________________ _____
Show your calculations to find the mass of solute. Remember to
use the factor/label method, and don't round anything off until
the final result.
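As an illustration of the factor/label arithmetic, the sketch below finds the solute mass for an assumed recipe: 100 mL of 0.50 M copper(II) sulfate pentahydrate. These are example values only; use your own assigned compound, concentration and volume.

```python
# Factor/label calculation of solute mass (assumed example values)
molarity = 0.50        # mol of solute per litre of solution
volume_L = 100 / 1000  # 100 mL converted to litres
molar_mass = 249.68    # g/mol for CuSO4·5H2O

# 0.100 L × (0.50 mol / 1 L) × (249.68 g / 1 mol)
mass_g = volume_L * molarity * molar_mass
print(round(mass_g, 2))  # → 12.48 g of solute to weigh out
```

Note how the units cancel step by step (L × mol/L × g/mol → g), which is exactly the chain you should show in your written calculation, rounding only the final result.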
red litmus paper → _________________________ _________________________
blue litmus paper → _________________________ _________________________
After recording your observations, gently mix the two solutions in a flask large enough to hold the combined volume. Write your name (in pencil) on a piece of clean, dry filter paper, and record its mass. Then carefully filter your solution to collect the solid ("residue"). Do not discard the liquid that passes through the filter paper ("filtrate").
mass of filter paper: _________________________ _____
color of filtrate: _________________________
smell of filtrate: _________________________
filtrate + red litmus paper → _________________________
filtrate + blue litmus paper → _________________________
mass of filter paper + residue: _________________________ _____
mass of residue: _________________________ _____
volume of copper(II) sulfate pentahydrate | mass of residue (g) | source of data
When you have completed the table, you will produce a graph of the data. The rest of the instructions for this lab will be presented in class.
[Continuous Variation Lab score sheet][MHS Chem page] | <urn:uuid:1136a5d2-eb41-4ca9-8db3-8cdfe586a10f> | 3.734375 | 401 | Tutorial | Science & Tech. | 54.592719 |
Definitions for magnetic azimuth
The Standard Electrical Dictionary
The angle, measured on a horizontal circle, between the magnetic meridian and a great circle of the earth passing through the observer and any observed body. It is the astronomical azimuth of a body referred to the magnetic meridian and therefore subject to the variation of the compass. The angle is the magnetic azimuth of the observed body.
"magnetic azimuth." Definitions.net. STANDS4 LLC, 2013. Web. 23 May 2013. <http://www.definitions.net/definition/magnetic azimuth>. | <urn:uuid:9c8b8246-e968-495b-8d5a-3530c34eb5e3> | 3.25 | 140 | Structured Data | Science & Tech. | 40.285899 |
A new study by engineering researchers at Rensselaer Polytechnic Institute demonstrates how introducing certain polymers—like those found in human mucus and saliva—into the environment makes it significantly more difficult for H. pylori and other microorganisms to coordinate.
“In the human body, microorganisms are always moving around in mucus, saliva, and other systems that exhibit elasticity due to the presence of polymers. Our study is among the first to look at how this elasticity impacts the collective behavior of microorganisms like H. pylori,” said lead researcher Patrick T. Underhill, assistant professor in the Howard P. Isermann Department of Chemical and Biological Engineering at Rensselaer. “What we found is that polymers do in fact have a substantial impact on the flows created by the swimming bacteria, which in turn makes it more difficult for the individual bacteria to coordinate with each other. This opens the door to new ways of looking at our immune system.”
Even prior to the oil spill, decades of over-harvesting, disease, pollution and declining habitat have decimated the massive oyster reefs that once dominated the country’s coastal estuaries. Globally, 85 percent of reefs have been lost, making oyster reefs the most severely impacted marine habitat on Earth.
Because oyster reefs are essential to a healthy marine system, The Nature Conservancy has been experimenting — from North Carolina to Texas — with techniques that may provide hope for the oyster’s future. What projects are happening in your state?
“While some of these projects are on hold due to the oil spill, our work is now undoubtedly even more important to help the Gulf of Mexico heal,” said Rob Brumbaugh, coastal restoration director for the Conservancy.
“Meanwhile, there may be a temptation to increase harvest pressure on reefs not affected by the spill – in the Gulf and beyond – to make up for decreased oyster production,” Brumbaugh continued.
“Given the magnitude of oyster reef loss globally, however, and the importance of the impacted reefs, we need to be increasingly thoughtful about how to balance these near-term pressures with longer-term goals.”
When coastal water temperatures rise each spring, an ancient spawning ritual begins. Mature female oysters release millions of eggs; the males release an even greater number of sperm. Nature takes its course as fertilization occurs in the open water. The result? Microscopic oyster larvae that look like specks of black pepper.
These larvae soon develop a shell and feed on algae, drifting on currents and riding the tide. If they are lucky enough not to become fish food by the third week, they attach themselves to a hard surface — usually other oysters — where they transform into a tiny oyster called spat and bond with others to form a reef.
As oyster reefs decline, though, larvae have a slim chance of finding a suitable surface.
Just as coral reefs are critical to tropical marine habitats, oyster reefs are the ecosystem engineers of bays and estuaries. They provide important services to people and nature by:
Southern coastal communities feel the impact of oyster reef loss. In North Carolina’s Albemarle-Pamlico Sound, oyster habitat is down by 50 percent while the rate of coastal erosion can be as much as 45 feet per year. Along the Gulf Coast, devastating hurricanes have severely degraded estuarine and beach habitat, making an already fragile coastal system even more vulnerable to future storm activity.
Attempts to protect coastal areas from storms, increased wave activity and rising waters have usually involved the construction of hard structures like rock jetties, bulkheads and seawalls. But scientists have learned that these artificial barriers do more harm than good.
Not only do they reflect wave energy back into the water, causing additional loss of habitat, the structures also disrupt the ecological balance of the ecosystem.
“Hardened structures cut off the marine habitat from the rest of the ecosystem, disconnecting marine life from marshes along the margins of our estuaries and bays,” said Brumbaugh. “It’s similar to putting a dam in the middle of a river that prevents salmon from swimming upstream.”
Oyster reefs, a natural component of the marine environment, are an alternative solution to the concrete and steel bulkheads that line some coastlines. Conservancy scientists throughout the South have been working with partners and experimenting with ways to create new reefs. The following are some highlights.
In Alabama's Mobile Bay, where 30 percent of the shoreline is now armored with bulkheads, scientists are testing three techniques to create 3 acres of oyster reefs and protect 10,000 feet of shoreline. One method — interlocking rectangular cages made of welded steel with space for mesh bags of oyster shells — is being used along Alabama and Louisiana shores after proving highly successful at Clive Runnells Family Mad Island Marsh in Texas.
Along Jeremy Island in South Carolina, scientists are constructing “oyster castles” made from interlocking blocks of concrete, limestone, crushed shell and silica. Meanwhile, Georgia researchers are testing the effectiveness of bagged oyster shells and chain link baskets made of welded steel and filled with rock and shells.
“One reason the techniques vary from place to place is that the threats vary,” Brumbaugh offered. “In Florida’s Indian River Lagoon, for instance, we lay oyster mats in the water that are specifically designed to work in the face of boat wakes.”
Rising sea levels — 2 inches per decade along the Albemarle Peninsula in North Carolina — spurred an effort to create four oyster reefs using 600 tons of limestone marl in Pamlico Sound.
In Mississippi and Texas, high pressure hoses were used to blow several tons of loose shell off barges to create oyster reefs in the Gulf of Mexico to enhance fish habitat and absorb the impact of hurricanes and tropical storms.
In South Carolina, scientists are testing the efficacy of "oyster castles" — concrete, shell, and limestone blocks — assembled on the shoreline to grow new oyster reef. They've also launched a pilot project to use loose fossilized oyster shell deposited in a managed wetland to see whether it is successful in growing reef.
“Conservancy scientists are trying a wide variety of new restoration methods,” Brumbaugh continued. “These will create better solutions — workable solutions — to problems like erosion, sea-level rise and today’s intense storms. We always strive to ensure that our solutions preserve the natural functions of the habitat.”
View an interactive map of the various oyster reef restoration techniques used around the South.
July 21, 2011
Future Earth Prosperity Will Depend on Resources in Space
By Mark Hopkins
THE LATE PRINCETON PHYSICS PROFESSOR GERARD O'NEILL first published his ideas concerning the construction of space settlements in a now classic Physics Today article in 1974. His ideas led to the establishment of the L5 Society, which in 1987 merged with NSI to create the present National Space Society. What has come to be known as an O'Neill space settlement consists of a large cylindrical shell several miles long and a few miles wide, which is built in space, primarily from materials found in space.
The shell is spun to create the effect of normal Earth gravity and its interior is filled with what constitutes a normal Earth atmosphere. A system of windows and external mirrors brings sunlight into the cylinder in a fashion that approximates daytime on Earth. The insides of the cylinder are molded to create a highly desirable living area complete with fields, forests, hills, streams, lakes, towns, etc. This amounts to land built in space.
The asteroids have sufficient material of roughly the right composition to build O'Neill space settlements with a combined land surface area of more than 1,000 times the land surface area of Earth. If you add the moons of the outer planets, then the land area that could be created is increased by two orders of magnitude (a factor of 100).
Beyond Pluto there exists the Oort Cloud of comets that circle the sun in orbits which taken together comprise a large sphere. Six trillion is the current estimate of the number of these comets. They extend outward from the sun for three light-years in all directions. They have enough mass to increase our total land area estimate that can be built via O'Neill settlements by substantially more than another order of magnitude (factor of 10).
Thus there are enough material resources in the solar system (not counting the planets) to create land equal to more than one million times the land area of Earth. This is a very large number but, compared to the energy resources of the solar system, it is tiny. The sun produces more than 10 trillion times the amount of energy currently used by humanity.
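The chain of estimates above is simple order-of-magnitude arithmetic; a minimal sketch using only the factors quoted in the text:

```python
# Land area buildable from solar-system materials, in multiples of Earth's land area
asteroids = 1_000                # O'Neill settlements from asteroid material (quoted figure)
with_moons = asteroids * 100     # moons of the outer planets add two orders of magnitude
with_oort = with_moons * 10      # Oort Cloud comets add more than another order of magnitude
print(with_oort >= 1_000_000)    # "more than one million times the land area of Earth"
```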
Nor need it stop with the solar system. Other stars may well have similar Oort Clouds. Let's assume this is the case with the Alpha Centauri system, the center of which is 4.35 light-years from the sun. If that Oort Cloud, like ours, has a radius of three light-years (gravitational considerations suggest it is modestly larger), then the two clouds overlap, creating a star bridge between the stellar systems. Any civilization that spans our Oort Cloud will also expand into the Alpha Centauri Oort Cloud. Jumps to the Oort Clouds of other stars will likely follow. Though trivial in terms of land area, it is still interesting to note that one of the stars in the Alpha Centauri system is remarkably similar to the sun and is considered to be a good candidate to have an Earth-like planet.
Why is all of this important in a big picture sense? The average American has a per capita income that is seven times greater than that of the human race as a whole. Poverty is defined in America to be substantially above human average income. This is the human average we are making comparisons with — not poor people by human standards.
Even if we assume no further population increase and no further increase in America's per capita income, then the human economy must increase seven times in order to raise the human average to what Americans now enjoy. Where are we going to get the resources to do anything like this? In actuality, the population is increasing. American per capita income is rising and desirably so. The environment needs to be improved — not get worse. All of which makes the problem more difficult. What is the solution? The answer lies in space.
In honour of the hundredth anniversary of Einstein's 'miraculous year', I will describe the modern view of space and time. I will start with special relativity, then describe how space and time are modified in Einstein's general theory of relativity, and end with recent ideas coming out of string theory. In all cases, the view of space and time arising from modern physics is radically different from our everyday experience, yet many of their strange properties have already been confirmed by experiment.
It is well known that Einstein worked to develop a unified field theory that would encompass all of physics including (he hoped) all quantum phenomena. It is not so well known that there was 'another Einstein,' who from 1916 on was skeptical about the continuum as a foundational element in physics, especially because of the existence of quantum phenomena.
Could the laws of physics change? The laws of physics are usually meant to be set in stone; variability is not usually part of physics. Yet contradicting Einstein's tenet of the constancy of the speed of light raises nothing less than that possibility. I will discuss some of the more dramatic implications of a varying speed of light.
Howard Burton, the Executive Director and chief architect of Perimeter Institute, describes the process and pitfalls of constructing a home for budding Einsteins from scratch in Waterloo.
The most important scientist of the twentieth century, and its most important artist, went through their periods of greatest creativity almost simultaneously and in remarkably similar circumstances: Einstein's special theory of relativity and Picasso's Les Demoiselles d'Avignon. It turns out they were both working on the same problem: the nature of space and time and, more particularly, simultaneity.
In physics I am often struck by a question that not a single teacher or friend of mine could answer. The question is:
“If the speed of light is 3 × 10⁸ metres per second, then what is the speed of darkness?”
Asked Taoqeer Nezam
Can you see your image when you are travelling at the speed of light?
Visitors and students are requested to respond and answer via comments.
Einstein’s theory of relativity says: “The laws of physics are the same in any inertial frame of reference.”
A stone tied to a string of length l is whirled around a vertical circle with the other end of the string at the centre. At a certain instant of time the stone is at the lowest position and has a speed u. What is the magnitude of change in its velocity as it reaches a position where the string is horizontal?
Let’s assume that the potential energy at the lowest position be zero. So, when the string is horizontal, the stone has risen by a vertical height l, the length of the string which is also the radius of the vertical circle.
If v is the magnitude of velocity at the horizontal position, then according to the law of conservation of energy,
KE+PE at the lowest position = KE+PE at the horizontal position
From energy conservation, ½mu² = ½mv² + mgl, so v² = u² − 2gl. Since the velocity is horizontal at the lowest point and vertical when the string is horizontal, the two velocities are perpendicular, and the magnitude of the change in velocity is |Δv| = √(u² + v²) = √(2u² − 2gl).
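As a numerical sketch of the result (the values u = 8 m/s and l = 1 m are illustrative, not from the problem; at the two positions the velocities are perpendicular, so |Δv| = √(u² + v²)):

```python
import math

def delta_v(u, l, g=9.8):
    """Magnitude of the change in velocity between the lowest point
    (speed u, horizontal) and the position where the string is
    horizontal (speed v, vertical), for a string of length l."""
    v_sq = u**2 - 2 * g * l  # energy conservation: ½u² = ½v² + g·l
    if v_sq < 0:
        raise ValueError("u too small to reach the horizontal position")
    # the two velocities are perpendicular, so |Δv| = √(u² + v²)
    return math.sqrt(u**2 + v_sq)

# Example: u = 8 m/s on a 1 m string
print(round(delta_v(8.0, 1.0), 2))  # → 10.41 (m/s)
```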
The following links will help you for deeper understanding and you can browse through some solved problems from the topic too.
These problems were posted by Geena. Hope that we will be able to post the answers to these questions soon; each in a separate post. By the time visitors can attempt to post their answers as comments to this post. (Only selected posts will be published) Please note that the answers are not published until the [...]
We have received hundreds of questions on the above subject. As the questions continue to pour in, we found it necessary to create a post on it. IIT JEE is an entrance exam of international repute and IITs are considered at par with MIT by many. So, if you are an IIT aspirant, it is very [...]
What is the reason for proton- proton attraction inside the nucleus according to nuclear physics? (Sanjeev Asked)
Answer: Inside the nucleus where the nucleons are very close to each other, the force which holds them together is the nuclear force, which is the strongest force in nature. The electrostatic force between protons is negligibly small compared to the nuclear force. But the nuclear force is of a very short range, 10^-15 m.
The nuclear force is charge independent, i.e; the nuclear force between proton and proton, proton and neutron as well as neutron and neutron are almost the same.
Yukawa’s meson theory suggests that the nuclear force is an exchange force: the nucleons are bound because of the constant exchange of mesons.
The nuclear force is only felt among hadrons. At small separations between nucleons (less than ~ 0.7 fm between their centers) the force becomes repulsive, which keeps the nucleons at a certain average separation, even if they are of different types. At distances larger than 0.7 femtometer (fm) the force becomes attractive between spin-aligned nucleons, becoming maximal at a center-center distance of about 0.9 fm. Beyond this distance the force drops essentially exponentially, until beyond about 2.0 fm separation, the force drops to negligibly small values.
At short distances (less than 1.7 fm or so), the nuclear force is stronger than the Coulomb force between protons; it thus overcomes the repulsion of protons inside the nucleus.
However, the Coulomb force between protons has a much larger range due to its decay as the inverse square of charge separation, and Coulomb repulsion thus becomes the only significant force between protons when their separation exceeds about 2 to 2.5 fm.
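The ~10⁻¹⁵ m range quoted above follows from Yukawa's picture: the range of a force mediated by a particle of mass m is roughly the reduced Compton wavelength ħ/(mc). A quick estimate for pion exchange (the constants are standard values, not from the answer above):

```python
# Range of the Yukawa (one-pion-exchange) force: r0 = ħ/(m_π c) = ħc / (m_π c²)
HBAR_C = 197.327        # ħc in MeV·fm (standard value)
M_PION = 139.57         # charged pion rest energy in MeV (standard value)
r0 = HBAR_C / M_PION    # range in fm
print(round(r0, 2))     # → 1.41, i.e. about 1.4e-15 m
```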
(There are many terms introduced in the explanation. You can discuss them as comments for obtaining further details, if required)
Mohak Tandon Asks – “ “Is the acceleration due to gravity different in south pole and north pole?” Answer: As per theory, acceleration due to gravity changes with altitude, depth and latitude. If we consider that the variation of g with latitude is caused by rotation of earth only, then the value of g will be same for both [...]
“We ‘killed’ five people to make No Pressure – a mere blip compared to the 300,000 real people who now die each year from climate change,”
The message from the child snuff film makers being that it is OK to kill children of your choice – if you believe their deaths are for the greater good. (See Germany, 1939)
So let’s dissect their claim. The 25 worst natural disasters of the 20th century all occurred with CO2 levels lower than Hansen’s safe level of 350 ppm. Most of these were the result of climate change.
Here is a small sampling of climate change induced deaths from more than 50 years ago.
In 1958, climate change killed between 20 and 43 million people in China. CO2 levels were well below Hansen’s safe 350 ppm.
In 1931, climate change killed 3.7 million people in China. CO2 levels were well below Hansen’s safe 350 ppm.
In 1928, climate change killed 3 million people in China. CO2 levels were well below Hansen’s safe 350 ppm.
In 1900, climate change killed 1.2 million people in India. CO2 levels were well below Hansen’s safe 350 ppm.
The USGS reported in 2004 that climatic shifts are part of the natural cycle.
Analyses of tree rings have been used extensively to reconstruct the history of drought in the United States for the past 800 years. Tree-ring reconstructions of precipitation in northern Utah (Gray and others, in press) indicate that, since 1226 A.D., nine droughts have occurred lasting 15-20 years and four droughts have occurred lasting more than 20 years. Moreover, tree-ring records indicate that some past droughts in the Colorado River basin persisted for several decades (Meko and others, 1995).
Conclusion: Climate change deaths have no correlation with CO2, and have declined over the last hundred years.
Dennis Tito’s 500-Day Mission to Mars – the Orbital Mechanics
You'll all have heard of Dennis Tito's idea about sending a manned mission to Mars that would take 500 days (well, 501, actually). Launched in January 2018, a small manned spacecraft would fly to Mars, skim past the red planet and use its gravity to send it on an Earth return trajectory.
This is without any doubt a bold concept, one that is fraught with perils and technical challenges. I can't claim to know the answers to all questions that arise. One thing I do know about is celestial mechanics. So what I did is to apply my knowledge to recalculate the trajectory both ways (out- and inbound), based on the information I have. I know that launch shall take place in January 2018, that the minimum altitude at Mars flyby shall be 100 miles (=162 km, which means the spacecraft will actually be dipping into the tenuous upper atmosphere, albeit briefly), that the transfers shall be free of deep space manoeuvres (so no large propulsion stage is required once Earth escape is achieved) and that the total duration shall be around 500 days. That is sufficient information to get started.
I will spare you the details on the mathematical process. The main thing is that I immediately found a solution that should be very close to what Dennis Tito's mission concept is based on. Small wonder; it is a rather straightforward math problem. The salient results I obtained are as follows:
- Launch date: 7 January 2018
- Hyperbolic escape velocity 6.2 km/s (rather high but feasible for a large launch vehicle even with a manned spacecraft)
- Mars swing-by date: 21 August 2018 (Earth-Mars transfer duration 226 days)
- Earth arrival date: 22 May 2019 (Mars-Earth transfer duration 275 days)
- Total mission duration: 501 days
- Hyperbolic Earth arrival velocity 8.9 km/s. That definitely is high and will make the design and choice of materials for the heat shield of the entry capsule non-trivial to a high degree
- No delta-v manoeuvres required either on the out- or the inbound arc, other than small trajectory corrections for targeting
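The quoted dates and transfer durations can be cross-checked with simple date arithmetic (dates taken from the list above; elapsed-day counts come out one day lower than the quoted inbound and total figures, presumably because those count both endpoints inclusively):

```python
from datetime import date

launch  = date(2018, 1, 7)    # launch date from the post
swingby = date(2018, 8, 21)   # Mars swing-by date
arrival = date(2019, 5, 22)   # Earth arrival date

outbound = (swingby - launch).days    # Earth-to-Mars transfer, elapsed days
inbound  = (arrival - swingby).days   # Mars-to-Earth transfer, elapsed days
total    = (arrival - launch).days

print(outbound, inbound, total)  # → 226 274 500 (post quotes 226 / 275 / 501)
```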
So that's it, in a nutshell. Now let's look at the obtained results in a bit more detail. First, I plot the trajectories as seen looking down from above the north pole of the ecliptic (the plane in which the Earth revolves around the Sun). The outbound trajectory (Earth to Mars) is shown in red, the inbound trajectory (Mars to Earth) in purple:
Next, the distances from the Sun, the Earth, Mars and Venus. You can see that the maximum distance from the Earth is around 1 astronomical unit (AU). This means that radio signals from the Earth will take 8 minutes to reach the spacecraft and it will take another 8 minutes for a reply to reach the Earth. So the crew will have to operate autonomously. That's the essence of it. You can also see that while the maximum distance from the Sun is 1.4 AU, at the time of the Mars encounter, the minimum Sun distance is less than 0.75 AU, on the way back.
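The eight-minute figure follows directly from the one-way light travel time over 1 AU; a quick check (the constants are standard values, not from the post):

```python
AU = 1.495978707e11   # astronomical unit in metres
C  = 2.99792458e8     # speed of light in m/s

one_way = AU / C               # seconds for a signal to cross 1 AU
print(round(one_way / 60, 1))  # → 8.3 minutes each way, so ~16-17 minutes round trip
```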
That's not so good, but it's typical for this class of fast Mars missions. In essence it means that they will have to design the spacecraft such that it will work at both half and double the power and heat received from the Sun compared to the conditions we have at the Earth. That is a challenge for sure, but it can be done. The low minimum solar range also has implications for the radiation loads, solar corpuscular radiation probably constituting the single most important threat to the crew.
Next, the entry conditions. I don't know what atmospheric entry conditions they plan to baseline. Too steep, and the heat flux and g-loads go off-scale. Too shallow, and the integrated heat load gets too large, the landing accuracy suffers, or you might start having to worry about not achieving capture and instead skipping back out of the atmosphere, which would be a major disaster for the crew. I assumed an entry angle range between -11 and -13 degrees, which is likely a bit steep. It doesn't make a big difference for the sake of this analysis, so let's just stick with this assumption for now.
The diagram below shows on the horizontal axis the local solar time and on the vertical axis the geographical latitude of the entry locations. It also shows the sub-solar point at the date of Earth arrival (at 12 h local solar time, obviously) and the terminator, the direction from which the spacecraft will approach the Earth and the locations in which it enters the atmosphere. Now, there are several things that are not good here. Firstly, the spacecraft will approach the Earth almost directly from the Sun, which may impede communications and interfere with orbit determination. Then, entry and landing will in all instances take place on the night side of the Earth, with prograde landing (landing in the same direction as the Earth rotates around its axis, so the actual entry velocity is reduced a bit) taking place in the early evening, just after sunset, and touchdown taking place a bit further into the night, due to the distance travelled between entry and landing. Neither issue necessarily constitutes a show stopper, but they certainly don't make life easier for the mission designers.
OK, one more diagram and we're through for today, I promise. This one's the velocity as a function of entry latitude. For prograde entry you have to look at the left end of the graph, where the lower values are. The lowest possible entry velocity is a bit above 13.8 km/s, if entry takes place near the equator. If the range of possible entry locations shall include also moderate latitudes, the velocity rises to 13.9 km/s. This is a fundamental input parameter for the heat shield design. By the way, this diagram also shows that for retrograde entry, you'd have to design the system to cope with over 14.6 km/s, which one certainly would not want to do if it's not strictly necessary.
We could now start to look at the Mars swing-by in some more detail, but I think I will do that in a later post. | <urn:uuid:423baac9-fbaf-40de-afee-50023558b581> | 2.8125 | 1,309 | Personal Blog | Science & Tech. | 58.173599 |
Exploding Chromosomes Fuel Research About Evolution
Chicago IL (SPX) Aug 26, 2008
Human cells somehow squeeze two meters of double-stranded DNA into the space of a typical chromosome, a package 10,000 times smaller than the volume of genetic material it contains.
"It is like compacting your entire wardrobe into a shoebox," said Riccardo Levi-Setti, Professor Emeritus in Physics at the University of Chicago.
Now research into single-celled, aquatic algae called dinoflagellates is showing that these and related organisms may have evolved more than one way to achieve this feat of genetic packing. Even so, the evolution of chromosomes in dinoflagellates, humans and other mammals seem to share a common biochemical basis, according to a team Levi-Setti led.
Packing the whole length of DNA into tiny chromosomes is problematic because DNA carries a negative charge that, unless neutralized, prevents any attempt at folding and coiling due to electrostatic repulsion. The larger the quantity of DNA, the more negative charge must be neutralized along its length.
"Dinoflagellates have much more nuclear DNA than humans," said Texas A&M biologist Peter Rizzo, who collaborated on the research with Levi-Setti and Konstantin Gavrilov, a Visiting Research Scientist in the Enrico Fermi Institute at the University of Chicago.
Every bit of DNA must be properly duplicated and divided to facilitate reproduction and growth. In humans and mammals, proteins called histones partially neutralize the DNA's negative charge. When histones wrap themselves in DNA, they become nucleosomes.
Dinoflagellates are stuffed at the core with tightly compacted chromosomes, yet these organisms contain neither histones nor nucleosomes. "What takes care of neutralizing DNA, to allow chromosomes to condense?" Levi-Setti asked. "Most biology books do not tell you."
Other scientists had already identified positively charged atoms called cations as neutralizing factors. They found that dinoflagellate chromosomes explode upon the removal of calcium and magnesium cations.
Levi-Setti has produced the first images of the distribution of these cations in dinoflagellate chromosomes. These images verify that cations, mainly of calcium and magnesium, neutralize DNA's enormous negative charge, and further suggest that they play a critical role in DNA folding as well.
The finding raises questions about the evolution of chromosomes, Rizzo said. "Did dinoflagellates once have histones and then lost them? Or did dinoflagellates never have histones and just 'figured out' a different way to fold large amounts of DNA into chromosomes?" Rizzo asked.
The images were produced using a high-resolution scanning ion microprobe, an instrument that Levi-Setti developed in the 1980s jointly with Hughes Research Laboratories in Malibu, Calif.
For the last 15 years, Levi-Setti has collaborated with associates of pioneering chromosome researcher Janet Rowley, the Blum-Riese Distinguished Service Professor in Medicine, Molecular Genetics and Cell Biology and Human Genetics at the University of Chicago.
In 2001, the collaboration demonstrated that cations play an important role in compacting mammalian DNA and helping chromosomes maintain their structure. "Chromosomes would fall apart when calcium and magnesium were removed," Levi-Setti said.
Wondering if there could be a fundamental evolutionary process at work, Levi-Setti extended his research to the fruit fly. Like mammals, fruit flies belong to the pantheon of eukaryotes. In contrast to prokaryotes like bacteria, eukaryotes pack their genetic material in a cellular nucleus. Prokaryotes lack a nucleus.
"Cations play a very important role in the folding and charge neutralization of DNA in all eukaryotes, but more so in dinoflagellates," Rizzo said.
"I find it truly amazing that in all other eukaryotes, histones help in this charge neutralization, and dinoflagellates constitute the only exception to this nearly universal rule. It looks like this may have been the first and very efficient step toward the goal of neutralizing DNA, long before histones came into play."
Genetics and Genomics Glossary
The following glossary was obtained with permission from the following resource: Allendorf, F.W., and G. Luikart. 2007. Conservation and the Genetics of Populations. Blackwell Publishing. 642 pp.
The principle that the least complicated explanation (most parsimonious hypothesis) generally should be accepted to explain the data at hand.
See ex situ conservation.
The relative reduction in the fitness of hybrids compared to parental types.
Loci that may be under selection (or linked to loci under selection) that are detected because they fall outside the range of expected variation for a given summary statistic (e.g., extremely high or low FST compared to most “neutral” loci in a sample).
See heterozygous advantage.
A breeding system where sexual maturity does not occur at a specific age, or where individuals breed more than once, causing individuals from different brood years to interbreed in a given year.
During the week of May 13th, the CO2 level at the Mauna Loa Observatory in Hawaii topped 400 ppm repeatedly. Daily levels of CO2 can vary due to weather, and there are seasonal trends as well. The level of atmospheric greenhouse gases continues to increase, now over 120 ppm since the Industrial Revolution began. For more on the Keeling Curve, see http://keelingcurve.ucsd.edu/. Find out more about greenhouse gases and warming.
The week of May 19 brings dozens of tornadoes to Tornado Alley in the states of Oklahoma, Kansas, Iowa, Illinois and Missouri. On May 20th, a massive tornado struck Moore, Oklahoma, devastating communities - destroying over 100 homes and hitting two elementary schools and a hospital - with many casualties and deaths. Our thoughts are with our friends and colleagues suffering from these storms. For more on the May 20th storms, see the NOAA Storm Prediction Center Storm Report.
Even though the sleeping man is no longer on the bed, you can still see where he was lying down. The heat from his body warmed up the bed sheets which are now radiating infrared light toward your eyes....more
All warm objects (not just people) radiate in the infrared. Warmer objects give off more infrared radiation. Very hot objects radiate other types of light in addition to infrared. Click on the picture...more
Your eye is a wonderful detector of visible light. Different frequencies of light produce different sensations in the eye which we interpret as colors. Our eyes detect light by using light sensitive components...more
Imagine you found a pair of special glasses that not only gave you telescopic vision but gave you the ability to see all forms of radiant energy. The universe in visible light contains all the familiar...more
This is a volcano on the island of Miyake in Japan. It has erupted, sending hot lava and ash into the air, a total of ten times. The time between one eruption and the next was about twenty years...more
This is a picture of a galaxy in visible light. A galaxy is a large number of stars, some like our sun, some bigger, some smaller and all moving together through space. This galaxy is called Centaurus...more
This is a plant in Gary, Indiana where power is made. We use power to run things like television sets, radios, lights, and microwave ovens. The picture looks very strange because it was taken in infrared....more | <urn:uuid:cfe448ba-9ced-49af-bd1c-cc46f9b43995> | 2.953125 | 497 | Content Listing | Science & Tech. | 58.605864 |
- Scope of the Challenge
- Strengths and Weaknesses
- U.S. Climate Change Policy Issues
- Recent Developments
- Options for Strengthening the Climate Change Regime
Climate change is one of the most significant threats facing the world today. According to the American Meteorological Society, there is a 90 percent probability that global temperatures will rise by 3.5 to 7.4 degrees Celsius (6.3 to 13.3 degrees Fahrenheit) in less than one hundred years, with even greater increases over land and the poles. These seemingly minor shifts in temperature could trigger widespread disasters in the form of rising sea levels, violent and volatile weather patterns, desertification, famine, water shortages, and other effects.
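As a quick check of the quoted figures (my own illustration, not part of the report): a temperature difference converts to Fahrenheit with the 9/5 factor only; the +32 offset applies to absolute temperatures, not to changes.

```python
def delta_c_to_f(dc):
    """Convert a temperature CHANGE in deg C to deg F.
    No +32 offset: that applies only to absolute temperatures."""
    return dc * 9.0 / 5.0

# 3.5 deg C -> 6.3 deg F and 7.4 deg C -> 13.3 deg F, matching the quoted range
assert round(delta_c_to_f(3.5), 1) == 6.3
assert round(delta_c_to_f(7.4), 1) == 13.3
```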
Avoiding the worst consequences of climate change will require large cuts in global greenhouse-gas emissions. Humans produce greenhouse gases by burning coal, oil, and natural gas to generate energy for power, heat, industry, and transportation. Deforestation and agricultural activity also yield climate-changing emissions.
Started in 2010, the ‘Climate Himalaya’ initiative has been working on mountain and climate related issues in the Himalayan region of South Asia. In the last two years this knowledge sharing portal has become an important reference for governments, research institutions, civil society groups and international agencies that work or have an interest in the Himalayas. The Climate Himalaya team innovates on knowledge sharing, capacity building and climatic adaptation in its focus countries: Bhutan, India, Nepal and Pakistan. Climate Himalaya’s thematic areas of work are mountain ecosystems, water, forests and livelihoods. Read>>
|Start of Tutorial > Start of Trail > Start of Lesson||
The basic format of the command for creating a JAR file is:
    jar cf jar-file input-file(s)
Let's look at the options and arguments used in this command:
- The c option indicates that you want to create a JAR file.
- The f option indicates that you want the output to go to a file rather than to stdout.
- jar-file is the name that you want the resulting JAR file to have. You can use any filename for a JAR file. By convention, JAR filenames are given a .jar extension, though this is not required.
- The input-file(s) argument is a space-separated list of one or more files that you want to be placed in your JAR file. The input-file(s) argument can contain the wildcard * symbol. If any of the "input-files" are directories, the contents of those directories are added to the JAR archive recursively.
The c and f options can appear in either order, but there must not be any space between them.
This command will generate a compressed JAR file and place it in the current directory. The command will also generate a default manifest file for the JAR archive.
Note: The metadata in the JAR file, such as the entry names, comments, and contents of the manifest, must be encoded in UTF8.
You can add any of these additional options to the cf options of the basic command:
- v : Produces verbose output on stdout while the JAR file is being built. The verbose output tells you the name of each file as it's added to the JAR file.
- 0 (zero) : Indicates that you don't want the JAR file to be compressed.
- M : Indicates that the default manifest file should not be produced.
- m : Used to include manifest information from an existing manifest file. The format for using this option is:
    jar cmf existing-manifest jar-file input-file(s)
  See Modifying a Manifest File for more information about this option.
Warning: The manifest must end with a new line or carriage return. The last line will not be parsed properly if it does not end with a new line or carriage return.
- -C : To change directories during execution of the command. See below for an example.
Note: When you create a JAR file, the time of creation is stored in the JAR file. Therefore, even if the contents of the JAR file do not change, when you create a JAR file multiple times, the resulting files are not exactly identical. You should be aware of this when you are using JAR files in a build environment. It is recommended that you use versioning information in the manifest file, rather than creation time, to control versions of a JAR file. See the Setting Package Version Information section.
Let's look at an example. The JDKTM demos include a simple TicTacToe applet. This demo contains a bytecode class file, audio files, and images all housed in a directory called TicTacToe having this structure:
The audio and images subdirectories contain sound files and GIF images used by the applet.
To package this demo into a single JAR file named TicTacToe.jar, you would run this command from inside the TicTacToe directory:
    jar cvf TicTacToe.jar TicTacToe.class audio images
The audio and images arguments represent directories, so the Jar tool will recursively place them and their contents in the JAR file. The generated JAR file TicTacToe.jar will be placed in the current directory. Because the command used the v option for verbose output, you'd see something similar to this output when you run the command:
    adding: TicTacToe.class (in=3825) (out=2222) (deflated 41%)
    adding: audio/ (in=0) (out=0) (stored 0%)
    adding: audio/beep.au (in=4032) (out=3572) (deflated 11%)
    adding: audio/ding.au (in=2566) (out=2055) (deflated 19%)
    adding: audio/return.au (in=6558) (out=4401) (deflated 32%)
    adding: audio/yahoo1.au (in=7834) (out=6985) (deflated 10%)
    adding: audio/yahoo2.au (in=7463) (out=4607) (deflated 38%)
    adding: images/ (in=0) (out=0) (stored 0%)
    adding: images/cross.gif (in=157) (out=160) (deflated -1%)
    adding: images/not.gif (in=158) (out=161) (deflated -1%)
You can see from this output that the JAR file TicTacToe.jar is compressed. The Jar tool compresses files by default. You can turn off the compression feature by using the 0 (zero) option, so that the command would look like:
    jar cvf0 TicTacToe.jar TicTacToe.class audio images
You might want to avoid compression, for example, to increase the speed with which a JAR file could be loaded by a browser. Uncompressed JAR files can generally be loaded more quickly than compressed files because the need to decompress the files during loading is eliminated. However, there's a tradeoff in that download time over a network may be longer for larger, uncompressed files.
The Jar tool will accept arguments that use the wildcard * symbol. As long as there weren't any unwanted files in the TicTacToe directory, you could have used this alternative command to construct the JAR file:
    jar cvf TicTacToe.jar *
Though the verbose output doesn't indicate it, the Jar tool automatically adds a manifest file to the JAR archive with pathname META-INF/MANIFEST.MF. See the Working with Manifest Files: The Basics section for information about manifest files.
In the above example, the files in the archive retained their relative pathnames and directory structure. The Jar tool provides the -C option that you can use to create a JAR file in which the relative paths of the archived files are not preserved. It's modeled after TAR's -C option.
As an example, suppose you wanted to put audio files and gif images used by the TicTacToe demo into a JAR file, and that you wanted all the files to be on the top level, with no directory hierarchy. You could accomplish that by issuing this command from the parent directory of the images and audio directories:
    jar cf ImageAudio.jar -C images . -C audio .
The -C images part of this command directs the Jar tool to go to the images directory, and the . following -C images directs the Jar tool to archive all the contents of that directory. The -C audio . part of the command then does the same with the audio directory. The resulting JAR file would have this table of contents:
    META-INF/MANIFEST.MF
    cross.gif
    not.gif
    beep.au
    ding.au
    return.au
    yahoo1.au
    yahoo2.au
By contrast, suppose that you used a command that didn't employ the -C option:
    jar cf ImageAudio.jar images audio
The resulting JAR file would have this table of contents:
    META-INF/MANIFEST.MF
    images/cross.gif
    images/not.gif
    audio/beep.au
    audio/ding.au
    audio/return.au
    audio/yahoo1.au
    audio/yahoo2.au
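For illustration only (my own sketch, not part of the tutorial): a JAR is structurally a ZIP archive containing a META-INF/MANIFEST.MF entry, so the flattening effect of -C can be mimicked with Python's zipfile module.

```python
import io
import zipfile

def make_jar(entries, flatten=False):
    """Build a JAR-like ZIP in memory and return its table of contents.
    entries: archive paths such as "images/cross.gif".
    flatten=True mimics `jar cf out.jar -C images . -C audio .` by
    keeping only the basename of each entry."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as jar:
        # the jar tool always writes a manifest first
        jar.writestr("META-INF/MANIFEST.MF", "Manifest-Version: 1.0\n\n")
        for name in entries:
            arcname = name.rsplit("/", 1)[-1] if flatten else name
            jar.writestr(arcname, b"")  # empty payload; the names are the point
    buf.seek(0)
    return zipfile.ZipFile(buf).namelist()
```

With flatten=True the archive names lose their directory prefixes, matching the -C layout; with flatten=False the relative paths are preserved.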
Copyright 1995-2005 Sun Microsystems, Inc. All rights reserved. | <urn:uuid:b37d229c-b9cb-4e3a-b0bb-3b5909c2c73d> | 3.40625 | 1,698 | Tutorial | Software Dev. | 59.703906 |
Function String ( Source As String [ , Level As Integer, AllowGrow As Boolean ] ) As String
This function returns a string compressed using the algorithm defined by the Type property
- Source: string to be compressed.
- Level: compression level, a value from Min value to Max value. If this parameter is missing, Default will be used.
- AllowGrow: If this parameter is missing, or you pass FALSE as its value, this function will return a compressed string only if its length is less than the original (uncompressed) string length. If you pass TRUE, it will always return the compressed string. Note that almost all compression algorithms can really compress a string (reduce its length) only if it has clear patterns. Very short strings and random strings can hardly be compressed.
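The same always-check-the-length pattern can be sketched in Python's zlib (an analogy for illustration, not the Gambas Compress class itself):

```python
import zlib

def compress_string(data: bytes, level: int = 9, allow_grow: bool = False) -> bytes:
    """Analogue of the AllowGrow behaviour described above: return the
    compressed bytes only if they are shorter than the input, unless
    allow_grow is True."""
    out = zlib.compress(data, level)
    return out if allow_grow or len(out) < len(data) else data
```

A two-byte input gains nothing from compression and is returned unchanged, while a long repetitive input shrinks substantially.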
Dim Cz As New Compress
Dim Buf As String
Cz.Type = "bzlib2"
Buf = Cz.String(SourceString,Cz.Max,FALSE)
IF Len(Buf) < Len(SourceString) THEN
  PRINT "Compression successfully finished"
ELSE
  PRINT "Unable to compress that string"
ENDIF | <urn:uuid:6d1ef82f-0c0e-4894-87d2-e8a7e4f7b8c2> | 2.953125 | 253 | Documentation | Software Dev. | 55.240227 |
Data modeling is the science (and art) of creating the database schema that most purely matches the real world objects involved in your project. Part of this is defining how the objects relate to one another. Let’s say your application tracks Items and Categories. If each item can only belong to one category, then you have a one-to-many relationship; categories have many items. But if an item can appear in more than one category, you have a many-to-many relationship.
There are two ways to handle many-to-many relationships in Ruby on Rails, and this article will cover both.
The simplest approach is if you don’t need to store any information about the relationship itself. You just want to know what items are in each category, and what categories each item belongs to. This is called “has_and_belongs_to_many”. We use has_and_belongs_to_many associations in our models, and create a join table in our database. Here are your models:
# app/models/category.rb
class Category < ActiveRecord::Base
  has_and_belongs_to_many :items
end

# app/models/item.rb
class Item < ActiveRecord::Base
  has_and_belongs_to_many :categories
end
Next, let’s create the join table by generating a new migration. From the command line:
script/generate migration AddCategoriesItemsJoinTable
Now we’ll edit the migration file it creates:
class AddCategoriesItemsJoinTable < ActiveRecord::Migration
  def self.up
    create_table :categories_items, :id => false do |t|
      t.integer :category_id
      t.integer :item_id
    end
  end

  def self.down
    drop_table :categories_items
  end
end
Note that we set :id => false, which keeps the migration from generating a primary key. The name of the table is a combination of the two table names we're joining, in alphabetical order. This is how Rails knows how to find the join table automatically.
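That naming convention is simple enough to state as a one-liner; here is a Python gloss of the rule just described (my illustration, not Rails code):

```python
def habtm_join_table(table_a: str, table_b: str) -> str:
    """Join-table name as described above: the two table names joined
    with '_' in alphabetical order."""
    return "_".join(sorted([table_a, table_b]))

# e.g. items + categories -> "categories_items"
```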
The other way to setup a many-to-many relationship between objects is used if you do, or think you will, need to track info on the relationship itself. When was item X added to category Y? That's info you can't store in the category or item tables, because it's info about the relationship. In Rails, this is called a has_many :through association, and it's really just as easy as the first way.
First, we’re going to create a new model that defines the relationship between items and categories. For lack of a better name, let’s call it a Categorization. Setup your models like this:
# app/models/category.rb
class Category < ActiveRecord::Base
  has_many :categorizations
  has_many :items, :through => :categorizations
end

# app/models/item.rb
class Item < ActiveRecord::Base
  has_many :categorizations
  has_many :categories, :through => :categorizations
end

# app/models/categorization.rb
class Categorization < ActiveRecord::Base
  belongs_to :category
  belongs_to :item
end
We’re connecting both original models to :categorizations, and then connecting them to each other via the intermediary Categorization model. Now, instead of a join table whose only function is connecting the others, we add a full-fledged table to manage our new model:
class CreateCategorizations < ActiveRecord::Migration
  def self.up
    create_table :categorizations do |t|
      t.integer :category_id
      t.integer :item_id
      t.timestamps
    end
  end

  def self.down
    drop_table :categorizations
  end
end
We still have the two foreign key integer columns, but we’ve removed :id => false so this table will have an id column of its own. We also added timestamps, so we’ll be able to tell when an item was added to a specific category. I also created a migration that removes the old categories_items table, but it’s not shown here.
Which is Better?
The simpler has_and_belongs_to_many approach has a small advantage when you *know* you’re not going to need to track info about the relationship itself. If this is the case, there’s a very slight performance gain because you’re not loading an extra model class at runtime.
More often than not, however, you’re going to eventually want to track relationship-specific data. We used the example of tracking when a relationship was created. Another would be if you want to track, over time, how many times a visitor clicks on an item under each category. That counter needs to be stored in the Categorization model, and that’s a reason not to use the simpler has_and_belongs_to_many approach.
I’ve created an example application (get it here) with tags for each version – | <urn:uuid:1e163c0d-67ae-4bf9-97ec-8b8cb6fb34a3> | 3.34375 | 1,086 | Tutorial | Software Dev. | 37.797023 |
4th March 2007 - 11:13 AM
I was trying to solve a problem of motion with mass changing along the way, such as in a rocket: the gas goes down and gains mass while the rocket loses mass and goes up.
Lets call the rocket #1 and the gas #2.
Lets say the derivative of M1 , the velocity of the gas relating to the rocket V0 , starting mass of the rocket M0 are known , and let M1,M2,V1,V2 be the mass and velocities of the rocket and gas [they change in time].
Now we know some things :
1. dM1/dt = -dM2/dt
because losing of mass to #1 is gaining mass for #2
Lets call dM1/dt = -h
dM2/dt = h
2. V2 = V1-V0
dV1/dt = dV2/dt
because V0 doesn't change in time.
3. M1 = -ht+M0
M2 = ht
getting this by integrating the derivatives of M1 and M2.
To make it easier to read i put it on artPad :
The black dots above the variables means derivative in time.
As you can see , the acceleration is constant !
But in the book they say :
dV1/dt = hV0/M1
which is changing in time because M1 = -ht+M0
and then they get expression with "ln" function for the acceleration ..
I solved this thing "by the book" although the book says differently so I really don't know who is right !
Thanks for even reading this
4th March 2007 - 04:42 PM
The problem is rather subtle, because it involves the correct interpretation of the variables, rather than the math itself. The term M2v2' is where the trouble lies. V2 represents the velocity of the fuel CURRENTLY being ejected from the rocket, and so it is correct to say v2' = v1'. But M2 represents the total mass of all the fuel which has been ejected so far, and this trail of fuel is not of course all traveling at velocity v2; only the most recent increment of ejected fuel is traveling at that velocity.
There are two terms related to the rate of change of momentum of the entire fuel trail. The first, M2'v2 is certainly correct, because mass is being added to the fuel trail at a rate M2', and this additional mass enters the fuel trail at a velocity of v2, so it brings a momentum increment dM2v2 with it in a time interval dt. The value of the second term, M2v2', should be ZERO, because it represents the momentum change of the fuel trail due to the changing velocity of the fuel that is already in the trail. But the fuel in the trail is just coasting through space after it exits the rocket engine, so each individual bit of mass in it is moving at a constant velocity. All the different increments of fuel are traveling at different speeds because they were ejected at different times, but once they have been ejected, their speeds do not change.
This is a bit like a shell-game. You have to keep your eye on the mass. The bit of fuel that has velocity v2 at time t is not the same bit that has velocity v2+dv2 at time t + dt. The latter is a NEWLY EJECTED bit of fuel, and the first bit of fuel is still traveling at its old speed v2. So the short answer is, if you delete the term M2v2' from your algebra, you will get the correct result.
Another way to analyze this problem is to note that once the fuel leaves the rocket, it has no further interaction with the rocket, so it can be ignored. Therefore, you can analyze the system consisting of the rocket (m) together with a bit of fuel (dm) which it is just about to expel, and then look at momentum of the system just before and just after this bit of fuel is expelled. In that case, the initial momentum is (m+dm)v1 and the final momentum is m(v1+dv1) + dmv2. Setting Pinitial = Pfinal gives mv1 + dmv1 = mv1 + mdv1 + dmv1 - dmv0, so 0 = mdv1 - dmv0. Therefore dividing by dt gives mdv1/dt = v0dm/dt, i.e. mv' = v0m', which is the standard answer.
By the way, this rearranges to dv = v'dt = v0(dm/m) = -v0·d(ln(m)), since the rocket's mass m decreases as each fuel increment dm is expelled. Integrating this gives v(t) = v0·ln(m(0)/m(t)), so the final velocity of the rocket can be well above its exhaust velocity if the mass of the loaded rocket is mostly fuel.
Hope this helps!
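As a numerical cross-check of that closed form (my own sketch, not part of the original thread), Euler-integrating the momentum balance m·dv = v0·dm as mass is expelled reproduces v0·ln(m0/m_final):

```python
import math

def rocket_speed(m0, m_final, v0, steps=100_000):
    """Euler-integrate dv = v0 * dm / m as the rocket's mass drops
    from m0 to m_final; this should approach v0 * ln(m0 / m_final)."""
    v, m = 0.0, m0
    dm = (m0 - m_final) / steps          # fuel expelled per step
    for _ in range(steps):
        v += v0 * dm / m                 # momentum balance: m*dv = v0*dm
        m -= dm
    return v

# compare the numerical result with the closed form
numeric = rocket_speed(100.0, 50.0, 2.0)
exact = 2.0 * math.log(100.0 / 50.0)
assert abs(numeric - exact) < 1e-3
```

Burning 90% of the initial mass gives v = v0·ln(10), more than twice the exhaust velocity, illustrating the last remark above.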
6th March 2007 - 04:04 PM
Wow that was the best explanation ever !
I not only understand what was my mistake but now i also understand mechanics differently ... if you are not yet a professor , i pronounce you as one
Thanks from the moon and all the way back | <urn:uuid:e952cd7b-8ea4-4ad3-9b34-87b51699c6cc> | 3.0625 | 1,133 | Comment Section | Science & Tech. | 79.193632 |
Table of Contents:
Beckman, P. A History of Pi. New York: Dorset. 1989.
Bell, E. T. Men of Mathematics. New York: Simon & Schuster. 1986.
Boyer, C. B. A history of mathematics. New York: Wiley. 1968.
Dorrie, Heinrich. 100 great problems of elementary mathematics: their history and solution. New York: Dover. 1965.
Eves, H. An Introduction to the History of Mathematics. New York: CBS College. 1983.
Eves, H. Great moments in mathematics (before 1650). Mathematical Association of America. 1980.
Eves, H. Great moments in mathematics (after 1650) Mathematical Association of America. 1981.
Jacobs, H. Mathematics: A Human Endeavor. New York: W. H. Freeman and Co. 1982.
Kline, M. Mathematical Thought from Ancient to Modern Times. New York: Oxford University. 1972.
Kline, M. Mathematics in Western Culture. New York: Oxford University. 1953.
Simmons, G. F. Calculus Gems. New York: McGraw-Hill. 1992.
| <urn:uuid:b090e389-b70d-4b3f-a560-06b3e18d5a50> | 3.234375 | 264 | Content Listing | Science & Tech. | 68.231092 |
Modular Exponentiation by Repeated Squaring
Hello, I have the following problem:
Suppose we are given and exponents where the binary length of the is at most . Show how to compute using at most squarings and multiplications.
Just in case: if the binary length of is , using repeated squaring to compute uses at most squarings and multiplications.
I tried a few things that makes the computation somewhat more efficient, but I am unable to stay within the multiplication bound. For example, if is the smallest exponent, then we can look at , where . Then we can precompute at the cost of multiplications, then use repeated-squaring to raise it to the , at the cost of (at worst) squarings and multiplications. However, the rest of the terms need to be computed via multiplication, which breaks the bound.
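For reference, here is a Python sketch of two standard ingredients for this kind of bound: ordinary square-and-multiply, and "simultaneous" exponentiation (often attributed to Straus/Shamir), which shares a single chain of squarings across all exponents when computing a product of powers. Whether this is the algorithm the book intends is an assumption on my part.

```python
def pow_mod(x, e, n):
    """Right-to-left square-and-multiply: one squaring per bit of e,
    plus one multiplication per 1-bit."""
    result, base = 1, x % n
    while e:
        if e & 1:
            result = (result * base) % n
        base = (base * base) % n
        e >>= 1
    return result

def multi_pow_mod(xs, es, n):
    """Simultaneous exponentiation: compute the product of xs[i]**es[i]
    mod n using ONE shared chain of squarings (one squaring per bit of
    the longest exponent), instead of one chain per exponent."""
    length = max(e.bit_length() for e in es)
    result = 1
    for bit in range(length - 1, -1, -1):
        result = (result * result) % n        # one shared squaring per bit
        for x, e in zip(xs, es):
            if (e >> bit) & 1:
                result = (result * x) % n     # at most k multiplies per bit
    return result
```

The single-exponent routine matches Python's built-in three-argument pow; the multi-exponent routine agrees with multiplying the individual powers together.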
It is rather frustrating to have to read the author's mind to determine what kind of algorithm he was thinking of. So, I would appreciate any suggestions you might have about this problem. | <urn:uuid:f1b910d6-9331-4712-8aac-91f263d20111> | 2.71875 | 219 | Q&A Forum | Science & Tech. | 46.888333 |
15 Answers (sort tabs: votes, newest, views, recent)
- 3 votes: analogue of a set with n binary operations
- 0 votes: Euclidean inside Hyperbolic
- 2 votes: Time scale calculus vs Lebesgue–Stieltjes calculus
- 4 votes: What are some examples of colorful language in serious mathematics papers?
- 3 votes: Proofs that require fundamentally new ways of thinking
- 5 votes: More open problems
- 2 votes: Suggestions for good notation
- 1 vote: What is the indefinite sum of tan(x)?
- 2 votes: Modern algebraic geometry vs. classical algebraic geometry
- 1 vote: Books you would like to see translated into English.
- 2 votes: Statements reliant on conjectures
- 2 votes: How should the Math Subject Classification (MSC) be revised or improved?
- 2 votes: Quick proofs of hard theorems
- 10 votes: Consolidation: Aftermathematics of fads
- 3 votes: why haven’t certain well-researched classes of mathematical object been framed by category theory? | <urn:uuid:c5798a19-f022-4151-b0ba-15a21f66d468> | 2.8125 | 172 | Q&A Forum | Science & Tech. | 22.262529 |
Explore the effect of reflecting in two intersecting mirror lines.
Explore the effect of reflecting in two parallel mirror lines.
Explore the effect of combining enlargements.
A red square and a blue square overlap so that the corner of the red square rests on the centre of the blue square. Show that, whatever the orientation of the red square, it covers a quarter of the. . . .
Charlie likes tablecloths that use as many colours as possible, but insists that his tablecloths have some symmetry. Can you work out how many colours he needs for different tablecloth designs?
What would be the smallest number of moves needed to move a Knight from a chess set from one corner to the opposite corner of a 99 by 99 square board?
With one cut a piece of card 16 cm by 9 cm can be made into two pieces which can be rearranged to form a square 12 cm by 12 cm. Explain how this can be done.
It starts quite simple but great opportunities for number discoveries and patterns!
Can you dissect a square into: 4, 7, 10, 13... other squares? 6, 9, 12, 15... other squares? 8, 11, 14... other squares?
Draw a square. A second square of the same size slides around the first always maintaining contact and keeping the same orientation. How far does the dot travel?
Spotting patterns can be an important first step - explaining why it is appropriate to generalise is the next step, and often the most interesting and important.
Imagine an infinitely large sheet of square dotty paper on which you can draw triangles of any size you wish (providing each vertex is on a dot). What areas is it/is it not possible to draw?
How many moves does it take to swap over some red and blue frogs? Do you have a method?
If you can copy a network without lifting your pen off the paper and without drawing any line twice, then it is traversable.
Decide which of these diagrams are traversable.
Imagine a large cube made from small red cubes being dropped into a pot of yellow paint. How many of the small cubes will have yellow paint on their faces?
An article for teachers and pupils that encourages you to look at the mathematical properties of similar games.
Euler discussed whether or not it was possible to stroll around Koenigsberg crossing each of its seven bridges exactly once. Experiment with different numbers of islands and bridges.
Try entering different sets of numbers in the number pyramids. How does the total at the top change?
What are the areas of these triangles? What do you notice? Can you generalise to other "families" of triangles?
A game for 2 players
Square numbers can be represented as the sum of consecutive odd numbers. What is the sum of 1 + 3 + ..... + 149 + 151 + 153?
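A one-line Python check of that representation (the sum of the first k odd numbers is k squared):

```python
odds = list(range(1, 154, 2))   # 1, 3, 5, ..., 153 -> 77 terms
assert len(odds) == 77
assert sum(odds) == 77 ** 2 == 5929
```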
What would you get if you continued this sequence of fraction sums?
1/2 + 2/1 =
2/3 + 3/2 =
3/4 + 4/3 =
Take any two positive numbers. Calculate the arithmetic and geometric means. Repeat the calculations to generate a sequence of arithmetic means and geometric means. Make a note of what happens to the. . . .
Can you find sets of sloping lines that enclose a square?
List any 3 numbers. It is always possible to find a subset of adjacent numbers that add up to a multiple of 3. Can you explain why and prove it?
Consider all two digit numbers (10, 11, . . . , 99). In writing down all these numbers, which digits occur least often, and which occur most often? What about three digit numbers, four digit numbers. . . .
The sum of the numbers 4 and 1 [1/3] is the same as the product of 4 and 1 [1/3]; that is to say 4 + 1 [1/3] = 4 × 1 [1/3]. What other numbers have the sum equal to the product and can this be so for. . . .
Triangle numbers can be represented by a triangular array of squares. What do you notice about the sum of identical triangle numbers?
This article for teachers describes several games, found on the site, all of which have a related structure that can be used to develop the skills of strategic planning.
Find some examples of pairs of numbers such that their sum is a factor of their product. eg. 4 + 12 = 16 and 4 × 12 = 48 and 16 is a factor of 48.
A game for two people, or play online. Given a target number, say 23, and a range of numbers to choose from, say 1-4, players take it in turns to add to the running total to hit their target.
Rectangles are considered different if they vary in size or have different locations. How many different rectangles can be drawn on a chessboard?
It's easy to work out the areas of most squares that we meet, but what if they were tilted?
The Egyptians expressed all fractions as the sum of different unit fractions. Here is a chance to explore how they could have written
The aim of the game is to slide the green square from the top right hand corner to the bottom left hand corner in the least number of moves.
A country has decided to have just two different coins, 3z and 5z coins. Which totals can be made? Is there a largest total that cannot be made? How do you know?
Use the animation to help you work out how many lines are needed to draw mystic roses of different sizes.
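The count behind that activity can be glossed in Python (my sketch, not NRICH's): with n points on a circle, a mystic rose joins every pair of points exactly once.

```python
def mystic_rose_lines(n: int) -> int:
    """Number of lines when each of n points on a circle is joined to
    every other point: n choose 2."""
    return n * (n - 1) // 2

# a 6-point rose needs 15 lines, a 10-point rose needs 45
```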
Charlie has made a Magic V. Can you use his example to make some more? And how about Magic Ls, Ns and Ws?
Imagine we have four bags containing a large number of 1s, 4s, 7s and 10s. What numbers can we make?
How could Penny, Tom and Matthew work out how many chocolates there are in different sized boxes?
Can you find an efficient method to work out how many handshakes there would be if hundreds of people met?
Imagine you have a large supply of 3kg and 8kg weights. How many of each weight would you need for the average (mean) of the weights to be 6kg? What other averages could you have?
A package contains a set of resources designed to develop pupils’ mathematical thinking. This package places a particular emphasis on “generalising” and is designed to meet the. . . .
Can you tangle yourself up and reach any fraction?
Choose a couple of the sequences. Try to picture how to make the next, and the next, and the next... Can you describe your reasoning?
A collection of games on the NIM theme
Start with any number of counters in any number of piles. 2 players take it in turns to remove any number of counters from a single pile. The winner is the player to take the last counter.
It would be nice to have a strategy for disentangling any tangled | <urn:uuid:3ba9d42e-0073-4076-9685-41d6152feacd> | 4.21875 | 1,532 | Content Listing | Science & Tech. | 73.036803 |
T-79: Taking Titan's Temperature
The composite infrared spectrometer (CIRS) got prime time during the Titan ‘T-79’ flyby on Dec. 13, 2011, a day after the Cassini spacecraft visited Dione.
CIRS performed a wide variety of observations, including mapping of surface and atmospheric temperatures. It also took “limb sounding” measurements, inspecting a cross section of the atmosphere, which could provide insights into the transition of the northern polar circulation from spring to summer. There was also an opportunity to track possible newly formed mist.
The visible and infrared mapping spectrometer (VIMS) instrument got a turn too, and it observed the area near Belet, the same region where the imaging science subsystem (ISS) saw extensive surface changes in Fall 2010.
ISS got in the game as well, and imaged Titan’s surface and atmosphere, while the ultraviolet imaging spectrograph (UVIS) obtained an ‘image cube’ of Titan’s atmosphere. These cubes provide spectral and spatial information on nitrogen emissions, hydrogen emission and absorption, absorption by simple hydrocarbons, and the scattering properties of haze aerosols. | <urn:uuid:620b3348-175c-4c43-a006-fc1e4500a98b> | 3.109375 | 248 | Knowledge Article | Science & Tech. | 23.957273 |
The Miconia plant has taken root on an island far from its origins in native Mexico. The islands of Hawai’i are the unlucky recipients of the Miconia, a plant that seems determined to take over the vegetation of the tropic islands. It was first transported from Mexico as a decorative plant, considered beautiful because of the brilliant coloring of its leaves’ undersides. Upon introduction to Hawai’i, the Miconia soon flourished and spread. Currently, it can be found occupying 10,000 acres on the Big Island.
The plant endangers native species by growing above the vegetation line and blocking the sunlight from the plants below. Starved for the light they need to live, native plants die, leaving Miconia roots the only ones supporting the soil. Because Miconia roots are few and relatively shallow, the danger of erosion quickly becomes a reality in Miconia-dominated areas of forest. In the event of a large rain or runoff from the hillsides, the chance of a landslide is alarmingly high. The Miconia plants would be unable to sustain their position in such conditions and the runoffs from such a landslide would head straight for the ocean, covering delicate coral and endangering the aquatic species that make coral their homes.
A species of bird called the Japanese white-eye, another introduced species, has had a part in spreading this plant. When the Miconia plant flowers, it produces thousands of seeds, many of which are transported about the islands by the Japanese white-eye. Once these seeds find soil, they take root and a new area of growth is established. The Miconia plant flowers three times a year, making it a very prolific plant, detrimentally so in an area where it is unwanted.
Scientist Greg Asner, followed in the National Geographic “Strange Days on Planet Earth” documentary episode Invaders, has undertaken a project to create aerial maps of Miconia infestations using a variety of technology—helicopter remote sensing, satellite photos, simple walks through the vegetation to record plant locations, etc. With these maps, he hopes to be able to manage the infestations before they become too large, mainly through the work of volunteers.
The efforts made by Asner and other involved researchers seem to be the best course of action to take with such prevalent invaders. I was impressed by the variety of methods through which Asner collects the information to create his maps of Miconia infestations—he takes into account leaf size and plant height as well as simple longitudinal and latitudinal measurements. I think that if the public remains concerned about the spread of Miconia, Asner’s method of management will continue to succeed. However, if public interest falls, it will be difficult to continue. The focus of the public is a powerful tool in any effort to stop the force of invasive species like the Miconia.
Guest post by Clive Best
Perhaps like me you have wondered why “global warming” is always measured using temperature “anomalies” rather than by directly measuring absolute temperatures?
Why can’t we simply average the surface station data together to get one global temperature for the Earth each year? The main argument for working with anomalies (quoting from the CRU website) is: “Stations on land are at different elevations, and different countries estimate average monthly temperatures using different methods and formulae. To avoid biases that could result from these problems, monthly average temperatures are reduced to anomalies from the period with best coverage (1961-90)….” In other words, although measuring an average temperature is “biased”, measuring an average anomaly (ΔT) is not. Each monthly station anomaly is the difference between the measured monthly temperature and the so-called “normal” value for that month. In the case of Hadley CRU, the normal values are the 12 monthly averages from 1961 to 1990.
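As a sketch of what that normalisation does (with an invented station record, not CRU code or data), the anomaly calculation for a single station looks like this:

```python
import math
from statistics import mean

years = list(range(1950, 2001))
# Invented monthly record temps[year][month]: a seasonal cycle plus a
# small warming trend of 0.01 C per year
temps = {y: [10 + 8 * math.sin(2 * math.pi * m / 12) + 0.01 * (y - 1950)
             for m in range(12)] for y in years}

# "Normals": the mean of each calendar month over the 1961-90 base period
base = [y for y in years if 1961 <= y <= 1990]
normals = [mean(temps[y][m] for y in base) for m in range(12)]

# Anomaly = measured monthly temperature minus that month's normal;
# the seasonal cycle cancels and only the departure from 1961-90 remains
anoms = {y: [temps[y][m] - normals[m] for m in range(12)] for y in years}

print(round(mean(anoms[1950]), 3))  # -0.255: 1950 sits below the 1961-90 mean
```

By construction the base-period anomalies average to zero, which is why changing the base period shifts the absolute level of the series but not its trend.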
The basic assumption is that global warming is a universal, location-independent phenomenon which can be measured by averaging all station anomalies wherever they might be distributed. Underlying all this, of course, is the belief that CO2 forcing, and hence warming, is everywhere the same. In principle this also implies that global warming could be measured by just one station alone. How reasonable is this assumption, and could the anomalies themselves depend on the way the monthly “normals” are derived?
Despite temperatures in Tibet being far lower than, say, the Canary Islands at similar latitudes, local average temperatures for each place on Earth must exist. The temperature anomalies are themselves calculated using an area-weighted yearly average over a 5×5 degree (lat,lon) grid. Exactly the same calculation can be made for the temperature measurements in the same 5×5 grid, which then reflect the average surface temperature over the Earth’s topography. In fact, the assumption that it is possible to measure a globally averaged temperature “anomaly”, or ΔT, implies that there must be a globally averaged surface temperature relative to which this anomaly is defined. The result calculated in this way for the CRUTEM3 data is shown below:
Fig1: Globally averaged temperatures based on CRUTEM3 Station Data
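The area weighting described above is straightforward: on an equal-angle grid, the area of a 5×5 degree cell scales with the cosine of its latitude, so each occupied cell's value is weighted accordingly. A minimal sketch with invented cell values (not the CRUTEM3 code):

```python
import math

def area_weighted_mean(grid):
    """grid maps (lat_center, lon_center) -> cell value, occupied cells only."""
    num = den = 0.0
    for (lat, _lon), value in grid.items():
        w = math.cos(math.radians(lat))  # 5x5 cell area ~ cos(latitude)
        num += w * value
        den += w
    return num / den

# Two tropical cells at 30 C and two near-polar cells at -20 C
grid = {(2.5, 2.5): 30.0, (2.5, 7.5): 30.0,
        (82.5, 2.5): -20.0, (82.5, 7.5): -20.0}

# The unweighted mean would be 5.0; cos-weighting pulls it toward the tropics
print(round(area_weighted_mean(grid), 1))  # 24.2
```

Note that empty cells simply contribute nothing to either sum, which is exactly the behaviour discussed later for gaps in coverage.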
So why is this never shown?
The main reason for this, I believe, is that averaged temperatures highlight something different about the station data: an evolving bias in the geographic sampling of the stations used over the last 160 years. To look into this I have been working with all the station data available here, adapting the Perl programs kindly included. The two figures below show the locations of stations with data from before 1860, compared with all stations.
Fig 2: Location of all stations in the Hadley Cru set. Stations with long time series are marked with slightly larger red dots.
Fig 3: Stations with data back before 1860
Note how in Figure 1 there is a step rise in temperatures for both hemispheres around 1952. This coincides with a sudden expansion in the included land station data, as shown below. Only after this time does the data properly cover the warmer tropical regions, although there still remain gaps in some areas. The average temperature rises because gaps at grid points in tropical areas are now filled. (No allowance is made in the averaging for empty grid points, either for average anomalies or for average temperatures.) The conclusion is that systematic problems due to poor geographic coverage of stations affect average temperature measurements prior to around 1950.
Fig 4: Percentage of points on a 5×5 degree grid with at least one station. (Land covers roughly 30% of the Earth’s surface.)
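This coverage effect can be seen in a toy example (all cell temperatures invented): when only occupied cells are averaged, adding newly reporting tropical cells produces a step rise even though no individual cell warmed at all.

```python
def mean_of_occupied(cells):
    # No allowance for empty grid cells: average only what reports
    return sum(cells.values()) / len(cells)

# Pre-1950-style coverage: mid/high-latitude cells only
sparse = {"w_europe": 9.0, "n_america": 8.0, "siberia": -5.0}
# Post-1950-style coverage: the same cells plus newly reporting tropical ones
full = dict(sparse, tropics_a=26.0, tropics_b=27.0)

print(mean_of_occupied(sparse))  # 4.0
print(mean_of_occupied(full))    # 13.0 -- a step rise with no real warming
```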
Can empty grid points similarly affect the anomalies? The argument against this, as discussed above, is that we measure just the changes in temperature, and these should be independent of any location bias (i.e. CO2 concentrations rise the same everywhere!). However, it is still possible that the monthly averaging itself introduces biases. To look into this I calculated a new set of monthly normals and then recalculated all the global anomalies. The new monthly normals are calculated by taking the monthly averages of all the stations within the same (lat,lon) grid point. These represent the local means of monthly temperatures over the full period, and each station then contributes to its near neighbours. The anomalies are area-weighted and averaged in the same way as before. The new results are shown below and compared to the standard CRUTEM3 result.
Fig5: Comparison of standard CRUTEM3 anomalies(BLACK) and anomalies calculated using monthly normals averaged per grid point rather than averaged per station (BLUE).
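The difference between the two normalisation choices can be illustrated with two invented stations sharing one grid cell, where the colder (higher-elevation) station only starts reporting late. Per-station normals absorb the elevation offset; a single pooled per-cell normal does not, so early years sampled only by the warmer station come out warm — the same direction as the shift seen in Fig 5. (All numbers invented, not the CRUTEM3 data.)

```python
from statistics import mean

# Invented records: station B is ~6 C colder (higher elevation) and has
# no data before 1950
record = {
    "A": {1900: 10.0, 1950: 10.0, 1975: 10.5},
    "B": {1950: 4.0, 1975: 4.5},
}

# Method 1: per-station normals (each station relative to its own mean)
norm_station = {s: mean(v.values()) for s, v in record.items()}
# Method 2: one pooled normal for the whole grid cell
norm_cell = mean(t for v in record.values() for t in v.values())

def cell_anomaly(year, ref):
    """ref maps a station name to the normal used for that station."""
    return mean(v[year] - ref(s) for s, v in record.items() if year in v)

for year in (1900, 1975):
    print(year,
          round(cell_anomaly(year, lambda s: norm_station[s]), 2),  # per station
          round(cell_anomaly(year, lambda s: norm_cell), 2))        # per cell
```

Under the pooled normal, the 1900 anomaly is strongly positive purely because only the warm station existed then; under per-station normals it is near zero.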
The anomalies are significantly warmer for early years (before about 1920), changing the apparent trend. Systematic errors due to the normalisation method for temperature anomalies are therefore of the order of 0.4 degrees in the 19th century. These errors originate in the poor geographic coverage of the early station data and the method used to normalise the monthly dependences. Using monthly normals averaged per (lat,lon) grid point, instead of per station, makes the resultant temperature anomalies warmer before 1920. Early stations are concentrated in Europe and North America, with poor coverage in Africa and the tropics. After about 1920 these systematic effects disappear. My conclusion is that anomaly measurements before 1920 are unreliable, while those after 1920 are reliable and independent of the normalisation method. This reduces the evidence of AGW since 1850 from a quoted 0.8 ± 0.1 degrees to about 0.4 ± 0.2 degrees.
Author: Scott Salom
Dr. Lee Humble, a scientist with the Canadian Forest Service in British Columbia who searches for natural enemies of the hemlock woolly adelgid (HWA) and the balsam woolly adelgid, observed that the small, little-known beetle Laricobius nigrinus Fender (Coleoptera: Derodontidae) consistently feeds on HWA in western hemlock seed orchards. Drs. Scott Salom and Loke Kok (Virginia Tech) visited these seed orchards and, between 1997 and 2003, imported the beetles to Virginia for study under quarantine. They determined that L. nigrinus produces one generation per year and undergoes diapause at the same time, and for the same duration, as HWA (Zilahi-Balogh et al. 2003 a, b). The predator will feed on other adelgids, but prefers HWA, and will complete development only on HWA (Zilahi-Balogh et al. 2002).
Lab studies show that L. nigrinus prefers temperatures between 12° and 15°C (54° to 59°F), and field studies in British Columbia and Virginia show it is active in the winter – a critical point, because HWA is also active during the winter (Fig. 12). Adults feed on all HWA life stages present from November to May, and begin laying eggs in HWA ovisacs (one per ovisac) in February (Fig. 13). Larvae feed exclusively on HWA eggs.
Laricobius nigrinus was removed from quarantine in September 2000. The following year, Virginia Tech began conducting field evaluations and exploring ways to rear the insect on a large scale. In general, rearing HWA predators in the lab is labor intensive, mostly due to the enormous amount of fresh food required to maintain and build colonies. For L. nigrinus specifically, there are additional complications: the beetles need cold temperatures; both the pupae and diapausing adults must live in soil; and the diapause period of lab-raised L. nigrinus must be in sync with the diapause period of HWA in the field. Complications aside, tremendous progress has been made, and the mass production of sufficient quantities of beetles for operational release appears achievable.
Field evaluations of L. nigrinus have been promising. Adults survive the Virginia winters in sleeve cages, and feed voraciously on HWA sistens. In the first field-release study of this predator, progeny produced by ovipositing adults in sleeve cages killed 50 percent more HWA progrediens than died naturally on untreated branches (Fig. 14). These experiments utilized 144 adults (one to three per branch) for only 10 days. Yet, during that time they yielded close to 12,000 predator eggs. Sampling to determine the establishment of L. nigrinus at this first release site began in fall 2003 and will continue for several years. In November and December 2003, 300 adult L. nigrinus were released at each of seven sites within the mid-Atlantic region. Releases will continue at increasing frequency over the next several years in an attempt to establish L. nigrinus.
A study was initiated in 2003 to evaluate the potential competitive interactions among two host-specific predators, L. nigrinus and S. tsugae, and the generalist Harmonia axyridis. First-year results show that the majority of L. nigrinus activity occurs earlier than that of both other predators; that L. nigrinus and S. tsugae will feed on each other’s eggs, but only when HWA density is very low; and that H. axyridis might not be as formidable a predator on the other two as was expected.
A worldwide search for additional Laricobius species began in 2002. As a result, two new species were discovered in China, one of which is currently being reared and studied under quarantine at Virginia Tech (Fig. 15).
Robbie Flowers, Lee Humble, L. T. Kok, Ashley Lamb, Warren Mays, Tom McAvoy, David Mausel, Gabriella Zilahi-Balogh.
Uranus, the 7th planet out from the Sun, was discovered accidentally in 1781 when William Herschel was trying out the 7" telescope that he had built. It is barely visible to the naked eye, but through a telescope it looks like a blue-green disc.
It's about four times the size of Earth. Uranus has a solid core made up of iron and silicates. This solid core is about 14,000 km in diameter, just a little larger than our whole planet. Above the solid core is a 9,000 km thick layer of ice and various gases (mostly hydrogen and helium, with a little methane).
There doesn't seem to be much visible weather happening on Uranus. The cloud tops are almost featureless. Another peculiarity of the weather on Uranus is that the temperature from the poles to the equator is almost constant, within 2°C.
The planet is tilted, and lies on its side. So in its 84-year orbit, the Sun shines on one pole for 42 years, and then on the other pole for another 42 years. The current scientific theory is that a colossal impact with a pretty large body must have tipped Uranus on its side. This must have happened fairly early in the history of Uranus. The satellites go around the equator of Uranus, so they must have formed after the big impact that tilted Uranus onto its side. The day is shorter than our own - about 17-18 hours long.
In 1977, some five rings were discovered around Uranus. But when Voyager 2 zipped past, it discovered another four rings. These rings are very dark and narrow.
Before Voyager 2, only 5 moons were known. But Voyager 2 discovered another 10 moons. Two of them are tiny "shepherd moons", like the shepherd moons that maintain the braided rings of Saturn.
But the real surprise was the moon Miranda. It's a small moon, only about 350 km across. It's made half of water ice, with the other half being rocks. Yet this tiny moon has huge deep oval scratches, about 200-300 km across. It also has the tallest cliffs in the known solar system - 20 km high.
Images courtesy of NASA
Science subject and location tags
Articles, documents and multimedia from ABC Science
Wednesday, 17 April 2013
Astronomers have found a mysterious new type of the most powerful explosions in the universe since the Big Bang.
Thursday, 14 March 2013
Distant starburst galaxies were more numerous and forming stars far earlier than previously thought, according to early observations from a revolutionary new telescope.
Thursday, 7 March 2013
Scientists have made the most accurate measurements yet of the distance to a neighbouring galaxy, the Large Magellanic Cloud (LMC).
Wednesday, 6 March 2013
The 320-year-old mystery of the origins of the now-extinct Falkland Islands wolf and how it came to be the only land-based mammal on the island has been solved by Australian researchers.
Wednesday, 19 December 2012
Scientists have been surprised by the unexpected discovery of a nearby solar system using a new experimental technique.
Thursday, 29 November 2012
Astronomers have discovered a massive blast coming from a black hole, five times more powerful than any previously seen.
Thursday, 15 November 2012
Astronomers have detected what could be a rogue planet wandering all alone through deep space without a host star.
Thursday, 6 September 2012
Mystery surrounds the discovery of a very young star deep in the midst of an ancient stellar cluster.
Thursday, 30 August 2012
Scientists have detected simple sugar molecules in the gas surrounding a young Sun-like star.
Thursday, 1 March 2012
Scientists have developed a new method to study reflected light from the Earth, that can correctly measure the amount of cloud cover, ocean and vegetation our planet has.
Friday, 10 February 2012
Astronomers in Chile have combined four powerful telescopes as if they were one.
Friday, 14 October 2011
A new survey has turned up more than two dozen failed stars, including one lightweight only about six times the mass of Jupiter.
Tuesday, 11 October 2011
A virus harbouring more than 1000 genes has been found in the sea off the coast of Chile.
Tuesday, 4 October 2011
A new radio telescope has peered into the sky affording a view of the universe unmatched by most ground-based observatories.
Wednesday, 20 July 2011
A spectacular new image of a cosmic superbubble in a nearby galaxy is giving scientists a glimpse of the birth and death of young stars and solar systems.
In the North Atlantic, Oceanic Currents Play a Greater Role in the Absorption of Carbon Than Previously Thought
ScienceDaily (Mar. 9, 2011) — The ocean traps carbon through two principal mechanisms: a biological pump and a physical pump linked to oceanic currents. A team of researchers from CNRS, IRD, the Muséum National d'Histoire Naturelle, UPMC and UBO (1) have managed to quantify the role of these two pumps in an area of the North Atlantic. Contrary to expectations, the physical pump in this region could be nearly 100 times more powerful on average than the biological pump. By pulling down masses of water cooled and enriched with carbon, ocean circulation thus plays a crucial role in deep carbon sequestration in the North Atlantic.
These results are published in the Journal of Geophysical Research.
The ocean traps around 30% of the carbon dioxide emitted into the atmosphere through human activity and represents, with the terrestrial biosphere, the main carbon sink. Much research has been devoted to understanding the natural mechanisms that regulate this sink. On the one hand, there is the biological pump: the carbon dioxide dissolved in the water is firstly used for the photosynthesis of phytoplankton, microscopic organisms that proliferate in the upper layer of the ocean. The food chain then takes over: the phytoplankton is eaten by zooplankton, itself consumed by larger organisms, and so on. Cast into the depths in the form of organic waste, some of this carbon ends its cycle in sediments at the bottom of the oceans. This biological pump is particularly effective in the North Atlantic, where a spectacular bloom of phytoplankton occurs every year. On the other hand, there is the physical pump which, through oceanic circulation, pulls down surface waters containing dissolved carbon dioxide towards deeper layers, thereby isolating the gas from exchanges with the atmosphere.
On the basis of data collected in a specific region of the North Atlantic during the POMME (2) campaigns, the researchers were able to implement high-resolution numerical simulations. They thus carried out the first precise carbon absorption budget of the physical and biological pumps. They succeeded, for the first time, in quantifying the respective proportions of each of the two mechanisms. Surprisingly, their results suggest that in this region of the North Atlantic the biological pump would only absorb a minute proportion of carbon, around one hundredth. The carbon would thus be trapped mainly by the physical pump, which is almost one hundred times more efficient. At this precise location, oceanic circulation pulls down the carbon, in dissolved organic and inorganic form, to depths of between 200 and 400 meters, together with the water masses formed at the surface.
Article continues: http://www.sciencedaily.com/releases/2011/03/110309132015.htm
Program in Modeling and Decision Support
Capability/Technical Service - Empirical and simulation modeling.
Conserving populations of sensitive aquatic species often requires a better understanding of the anthropogenic and natural factors (ecological context) that affect persistence. Often, basic empirical data are lacking or existing empirical data must be applied or extrapolated to address important management questions. We use complementary approaches to address these needs. First, in the absence of empirical data we attempt to fill the information need by estimation of important landscape-level patterns in occurrence or population parameters (empirical modeling). Second, given sufficient data and an appropriate conceptual or ecological framework, we can probabilistically model processes for which empirical data are lacking and evaluate different management strategies (simulation modeling).
1) Modeling occurrence of cutthroat trout in isolated stream networks as a function of habitat size and time since isolation
Our objective is to use a combination of found and new data to develop a predictive model for presence of westslope cutthroat trout in isolated stream networks in the northern Rocky Mountains as a function of time since isolation and patch (habitat) size. The intent is to produce an empirically-based tool for managers to refine evaluations of barriers and isolation management options for the species.
Preliminary results were presented at the 2008 Annual Meeting of the Western Division of the American Fisheries Society, in Portland, OR. Final analysis of the dataset is ongoing with an expected completion date of summer 2011.
Partners: US Forest Service, Rocky Mountain Research Station; US Forest Service, Region 1
2) Modeling the potential effects of redd trampling by cattle on cutthroat trout
High estimated rates of cattle trampling on artificial redds (clay targets, e.g., Gregory and Gamett 2008) within federal grazing allotments in southwestern Montana has raised concern that direct mortality from trampling may contribute to imperilment of native westslope cutthroat trout (Oncorhynchus clarkii lewisi). Our goal was to estimate and model the effects of trampling by cattle on egg-to-fry mortality for stream-resident cutthroat trout and to explore the demographic implications of that mortality. We used results of a study of angler trampling by Roberts and White (1992) to estimate the mortality caused by cattle trampling, and incorporated these estimates into a temperature-driven model of egg-to-fry mortality representative of the developmental stages during which resident cutthroat trout populations in central and southwestern Montana would be vulnerable to the effects of trampling. The egg-to-fry model was used to characterize the effects of trampling by cattle in streams under two thermal regimes and across a range of empirically-estimated trampling rates from typical cattle grazing scenarios. We linked the egg-to-fry model to a matrix population model to evaluate how trampling affects population growth rates, then considered how trampling may influence persistence in demographically isolated populations.
Modeling indicated that the effect on mortality was not as dramatic as the observed trampling rates might suggest, but that these trampling rates were most likely to contribute to declines where a population is only marginally stable without the additional impact of trampling.
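The structure of such an analysis can be sketched with a small stage-based projection matrix. All rates below are invented for illustration — they are not the published parameter values of the study — but the mechanics are the same: trampling scales egg-to-fry survival, and the population growth rate lambda is the dominant eigenvalue of the matrix.

```python
def projection_matrix(egg_to_fry):
    # Stages: [fry, juvenile, adult]; adults produce ~80 eggs each, of
    # which a fraction egg_to_fry become next year's fry (invented rates)
    f = 80 * egg_to_fry
    return [[0.0, 0.0, f],
            [0.10, 0.0, 0.0],    # fry -> juvenile survival
            [0.0, 0.25, 0.45]]   # juvenile -> adult, adult survival

def growth_rate(m, iters=500):
    # Power iteration: for this primitive non-negative matrix it converges
    # to the dominant eigenvalue (the asymptotic growth rate lambda)
    v = [1.0, 1.0, 1.0]
    lam = 1.0
    for _ in range(iters):
        w = [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]
        lam = max(w)
        v = [x / lam for x in w]
    return lam

# Baseline egg-to-fry survival 0.30; trampling removes a further fraction
for trampled in (0.0, 0.2, 0.4):
    lam = growth_rate(projection_matrix(0.30 * (1 - trampled)))
    print(trampled, round(lam, 3))  # lambda falls below 1 as trampling rises
```

With these invented rates the population is only marginally stable without trampling (lambda just above 1), so any added egg mortality tips lambda below 1 — the situation the modeling flags as most at risk.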
The models and results are intended to serve as guidance for federal biologists evaluating the potential environmental impacts of livestock permits (e.g., under the National Environmental Policy Act, or NEPA).
Partners: U.S. Forest Service, Rocky Mountain Research Station; US Forest Service, Beaverhead-Deerlodge National Forest; US Forest Service, Region 1
Publication: Peterson, D.P., B.E. Rieman, M.K. Young, and J. Brammer. 2010. Modeling predicts that redd trampling by cattle may contribute to population declines of native trout. Ecological Applications 20(4):954-966. (pdf)
Gregory, J. S., and B. L. Gamett. 2009. Cattle trampling of simulated bull trout redds. North American Journal of Fisheries Management 29:361–366.
Roberts, B. C. and R. G. White. 1992. Effects of angler wading on survival of trout eggs and pre-emergent fry. North American Journal of Fisheries Management 12:450–459.
3) Modeling suppression of nonnative brook trout to benefit native cutthroat trout.
Nonnative trout species are among the most significant threats to persistence of native inland salmonids, such as cutthroat trout (Oncorhynchus clarkii spp.). Early detection of nonnative trout invasions and subsequent eradication is the preferred management alternative to deal with this threat, but sometimes eradication is not possible for technical or socio-political reasons. In such cases, maintenance control or suppression of nonnative species using mechanical methods (e.g., electrofishing) becomes a frequent alternative where the risk of inaction is unacceptable.
We conducted population modeling to help biologists design and implement effective electrofishing suppression programs. To do this we built stage-based, stochastic matrix models describing sympatric populations of stream-resident brook trout (Salvelinus fontinalis) and cutthroat trout and used the models to show the demographic differences between the species and compare the efficacy of various electrofishing treatments for suppressing brook trout. The models were used to assess the population response of cutthroat trout to brook trout suppression as a function of the frequency and temporal distribution of annual suppression visits, electrofishing intensity (number of passes) during individual suppression events, electrofishing capture efficiency, and immigration by brook trout.
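As a back-of-envelope sketch of the trade-offs such a model explores (all numbers invented, not the published parameterization): with per-pass capture probability p, a k-pass removal leaves a fraction (1 - p)**k of the fish, and whether annual removals hold the population down depends on growth and immigration between visits.

```python
def project(years=10, growth=2.2, immigrants=20, passes=3, p=0.5, n0=500.0):
    """Yearly cycle: reproduction and immigration, then a k-pass removal."""
    n = n0
    for _ in range(years):
        n = n * growth + immigrants   # between-visit growth plus immigrants
        n *= (1 - p) ** passes        # survivors of the electrofishing visit
    return n

print(round(project(passes=3)))  # 3: three passes per year hold numbers down
print(round(project(passes=1)))  # 1456: one pass per year loses to growth
```

Even in the suppressed case the population never reaches zero while immigration continues, which is why maintenance control is framed as ongoing suppression rather than eradication.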
Partners: Colorado State University; Canadian Rivers Institute, University of New Brunswick; Department of Mathematics, University of New Brunswick
Peterson, D. P., K. D. Fausch, J. Watmough, and R. A. Cunjak. 2008. When eradication is not an option: modeling strategies for electrofishing suppression of nonnative brook trout to foster persistence of sympatric native cutthroat trout in small streams. North American Journal of Fisheries Management 28:1847-1867.
<electronics> The absence of an electron in a semiconductor material. In the electron model, a hole can be thought of as an incomplete outer electron shell in a doping substance. Holes can also be thought of as positive charge carriers; while this is in a sense a fiction, it is a useful abstraction.
A REMOTE region of Antarctica has yielded what may be the best-preserved comet dust yet found, perhaps in better condition than the samples NASA's Stardust mission brought back from a comet's tail.
A team of meteorite experts led by Jean Duprat of the University of Paris South, France, found the dust in snow collected near Concordia base, high on the Antarctic plateau. When they melted the snow and filtered out anything more than 25 micrometres across, almost a third of the particles they found were from space. "It's the only place on Earth where you've got this number," Duprat says.
Preliminary tests show that some of the particles have a composition close to what one would expect from comet dust. Many of them seem to be in remarkably good condition: fluffy, fragile grains that somehow entered the atmosphere without vaporising or melting. Presumably they arrived slowly, travelling on a similar ...
Crystal moss animal
Colonies can reach up to 1cm in diameter and collectively cover large areas of substratum when growing conditions are favourable.
Growth occurs by budding of new zooids.
Colonies can divide and slowly glide apart to form new daughter colonies.
Unusually for freshwater bryozoans, L. crystallinus is commonly found during the cold winter months.
Colonies are typically present in the UK between September and March, although in sites where temperatures stay relatively low (for example, spring fed pools) colonies are found year-round (Hill 2006).
Adverse conditions are survived by dormant stages (statoblasts) which develop into adult colonies when suitable conditions return.
Genotypes are potentially very long-lived as a result of clonal reproduction.
Asexual reproduction is the main reproductive mode. There are two forms:
Lemon-shaped statoblasts are released from colonies. Statoblast dormancy is broken by poorly understood environmental cues.
Sexual reproduction is rare (Wöss 1996) and its timing poorly known.
The Monterey Accelerated Research System (MARS) is part of the Monterey Ocean Observing System (MOOS) ocean initiative developed at the Monterey Bay Aquarium Research Institute (MBARI). MBARI was an inaugural member of the University of Washington-led partnership known as NEPTUNE that was formed in 2000 with the goal of building a cabled ocean observatory that would encircle the Juan de Fuca tectonic plate. Other NEPTUNE partners included Caltech's Jet Propulsion Laboratory, which helped develop the power system and subsystems; the Woods Hole Oceanographic Institution, which helped develop data and communication systems; and NEPTUNE Canada, now at the University of Victoria. Feasibility studies and benchtop demonstrations for the U.S. NEPTUNE were funded by the National Science Foundation and the National Oceanographic Partnership Program.
In 2002, a $7M grant was awarded to MBARI from the National Science Foundation to construct a deep-water test bed in Monterey Bay for the NEPTUNE cabled observatory. Additional funds were secured from the David and Lucile Packard Foundation ($1.8M) and from Canadian partners that were then part of the NEPTUNE team ($1.2M). A significant technological driver for the development of MARS was to design and implement high-power nodes for seafloor use.
MARS is now fully operational, with 52 km of submarine cable and a node at 891 meters (2,923 feet) water depth. Examples of science experiments on MARS include deployment of ORCA's Eye-in-the-Sea experiment to examine animals that thrive in the deep sea where sunlight does not penetrate; installation of a seafloor seismometer to study offshore earthquakes; and deployment of a benthic Rover to investigate the carbon cycle at the seafloor-ocean interface. Expansion capabilities on this system allow scientists and researchers to take advantage of the real-time power and data transmission required to develop novel sensors for deployment in ocean environments.
The final shuttle launch has been getting so much last-minute attention that one would think the 130+ launches that came before were widely covered, too. Not so. Except for the first few and the two tragic disasters, the media has been losing interest for years. And I guess so has the space industry.
The shuttle is like a camel: a horse designed by committee. It was the first manned space vehicle – U.S. or Soviet – with no practical means of escape, i.e. no rocket-propelled escape system. Because it was the first manned vehicle not sitting on top of the fuel tanks, it was subject to foam from those tanks falling onto its fragile tiles.
The shuttle is/was a vehicle with no mission, headed mostly to a place the public really cared little about: the International Space Station. This “space truck” could haul large satellites into space, something the military wanted for secret stuff. But the Pentagon stopped using it, too.
Its greatest accomplishment is perhaps the Hubble Space Telescope and the fantastic photos it sent back.
We were told way back when that the chance for a disaster was something like 1 in 100,000. New studies show the real chance is more like 1 in 100. Reality: 2 in 130.
Kids growing up now know little about the 70s moon missions, except perhaps for historic words from the lunar soil, “one small step for man….” They’ll probably know even less about the shuttle.
Maybe the last words spoken from space aboard the shuttle will be worth remembering. What do you think?