Earth today is one of the most active planets in the Solar System, and was probably even more so during the early stages of its life. Thanks to the plate tectonics that continue to shape our planet's surface, remnants of crust from Earth's formative years are rare, but not impossible to find. A paper published in Nature on Sept. 2 examines how some ancient rocks have resisted being recycled into Earth's convecting interior...
- Water in Earth's mantle key to survival of oldest continents (Thu, 2 Sep 2010, 13:37:34 EDT)
- What goes down, must come up: Earth's leaky mantle (Wed, 27 May 2009, 13:45:07 EDT)
- McGill researchers find oldest rocks on Earth (Thu, 25 Sep 2008, 14:36:48 EDT)
- Building blocks of early Earth survived collision that created moon (Mon, 20 Feb 2012, 7:33:15 EST)
- Scientists publish first ever evidence of asteroids with earth-like crust (Wed, 7 Jan 2009, 14:09:59 EST)
Can you devise a fair scoring system when dice land edge-up or corner-up?
Can all but one square of an 8 by 8 Chessboard be covered by
Given that ABCD is a square, M is the midpoint of AD and CP is perpendicular to MB with P on MB, prove DP = DC.
I noticed this about streamers that have rotation symmetry: if there was one centre of rotation there always seems to be a second centre that also worked. Can you find a design that has only...
Points off a rolling wheel make traces. What makes those traces
Sketch the members of the family of graphs given by y = a^3/(x^2+a^2) for a = 1, 2 and 3.
An article for students and teachers on symmetry and square dancing. What do the symmetries of the square have to do with a dos-e-dos or a swing? Find out more.
What groups of transformations map a regular pentagon to itself?
Investigations and activities for you to enjoy on pattern in
Proofs that there are only seven frieze patterns involve complicated group theory. The symmetries of a cylinder provide an easier approach.
Plot the graph of x^y = y^x in the first quadrant and explain its
A design is repeated endlessly along a line - rather like a stream of paper coming off a roll. Make a strip that matches itself after rotation, or after reflection.
Charlie likes tablecloths that use as many colours as possible, but insists that his tablecloths have some symmetry. Can you work out how many colours he needs for different tablecloth designs?
This resource contains a series of interactivities designed to support work on transformations at Key Stage 4.
A gallery of beautiful photos of cast ironwork friezes in Australia with a mathematical discussion of the classification of frieze patterns.
Sketch the graphs for this implicitly defined family of functions.
Plex lets you specify a mapping between points and their images. Then you can draw and see the transformed image.
Can you show that you can share a square pizza equally between two people by cutting it four times using vertical, horizontal and diagonal cuts through any point inside the square?
Find the shape and symmetries of the two pieces of this cut cube.
Investigate the family of graphs given by the equation x^3+y^3=3axy for different values of the constant a.
A and B are two points on a circle centre O. Tangents at A and B cut at C. CO cuts the circle at D. What is the relationship between the areas of ADBO, ABO and ACBO?
Sketch the graph of $xy(x^2 - y^2) = x^2 + y^2$ consisting of four curves and a single point at the origin. Convert to polar form. Describe the symmetries of the graph.
When a strip has vertical symmetry there always seems to be a second place where a mirror line could go. Perhaps you can find a design that has only one mirror line across it. Or, if you thought that...
Scheduling games is a little more challenging than one might desire. Here are some tournament formats that sport schedulers use.
Ten squares form regular rings either with adjacent or opposite vertices touching. Calculate the inner and outer radii of the rings that surround the squares.
Consider a watch face which has identical hands and identical marks for the hours. It is opposite a mirror. When is the time as read directly and in the mirror exactly the same, between 6 and 7?
Patterns that repeat in a line are strangely interesting. How many types are there and how do you tell one type from another?
The ten arcs forming the edges of the "holly leaf" are all arcs of circles of radius 1 cm. Find the length of the perimeter of the holly leaf and the area of its surface.
Join some regular octahedra, face touching face and one vertex of each meeting at a point. How many octahedra can you fit around this point?
An irregular tetrahedron has two opposite sides the same length a and the line joining their midpoints is perpendicular to these two edges and is of length b. What is the volume of the tetrahedron?
Create a symmetrical fabric design based on a flower motif - and realise it in Logo.
The twelve edge totals of a standard six-sided die are distributed symmetrically. Will the same symmetry emerge with a dodecahedral die?
An environment for exploring the properties of small groups.
Toni Beardon has chosen this article introducing a rich area for practical exploration and discovery in 3D geometry.
An equilateral triangle is sitting on top of a square. What is the radius of the circle that circumscribes this shape?
Old and New Vector Methods
The Vector class was updated to implement the List interface.
Java Notes: Vectors
To Create a Vector.
An ArrayList can be traversed using either iterators or indexes.
An ArrayList has methods for inserting, deleting, and searching.
Unit of power in the SI unit system. 1 Watt = 1 Joule per second.
A very dense star with a mass below 1.4 solar masses that is no longer burning nuclear fuel. The Sun will one day evolve into a white dwarf with a diameter of 10 000 km.
Periodic comet which orbits the Sun once every 5.45 years. Discovered in 1948 at the Lick Observatory, California, by Carl A. Wirtanen. It is a so-called 'Jupiter-type' comet, whose orbit is strongly influenced by that planet. Perihelion is at 159 million km (1.06 AU) from the Sun, i.e. just outside the orbit of the Earth. Aphelion is at a distance of about 768 million km (5.13 AU), near the orbit of Jupiter. Target of ESA's Rosetta mission, which will go into orbit around the nucleus and deploy a lander on its surface.
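The quoted orbit and period are self-consistent, which is easy to check with Kepler's third law: for a body orbiting the Sun, the period in years equals the semi-major axis in AU raised to the power 3/2. A small sketch (the input values are the ones quoted above):

```python
# Check the quoted 5.45-year period against the quoted orbit geometry
# using Kepler's third law: P [years] = a [AU] ** 1.5 for a solar orbit.
perihelion_au = 1.06  # from the text
aphelion_au = 5.13    # from the text

# The semi-major axis is the average of perihelion and aphelion distances.
semi_major_au = (perihelion_au + aphelion_au) / 2
period_years = semi_major_au ** 1.5  # ~5.45 years, matching the stated period
```
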
David J. Flaspohler, an avian ecologist and conservation biologist at Michigan Technological University, writes from Hawaii, where he is studying the influence of human activities on birds and the natural ecosystems that support them.
Tuesday, May 22
One of the great pleasures of learning bird songs comes in the drowsy predawn twilight. Through the window comes the voice of the first bold male offering up his species’ diagnostic song. From my bed in a friend’s cabin 30 miles north of Hilo this morning, the first sound to break the silence is the emphatic, repeated “whit-cheer!” of the northern cardinal, a bird I grew up hearing in southern Michigan. Next comes the soft cooing of Asian spotted and zebra doves, followed by the occasional harsh notes of the common myna, an import from India. Finally, I hear the slurred warbles of the Japanese white-eye. Later, with a cup of coffee, looking out over the pasture and woodlots spreading down to the sea, I hear and see a rich and complex ecosystem, almost none of which belongs here.
It is quite conceivable that a casual visitor to Hawaii could spend a pleasant holiday of a week or two and not see a single native Hawaiian species. Nearly all native lowland ecosystems in Hawaii have been replaced by nonnative species, including nearly all plants, birds, reptiles, amphibians and insects. Human residents and tourists concentrate themselves in these areas near the ocean, so it is even possible to grow up in many parts of Hawaii thinking that mynas, doves, papaya, eucalyptus, geckos and even mosquitoes have always been here.
To see, hear and smell native Hawaiian forests, you need to get away from the beaches and go up in elevation where most of the exotic birds disappear. Our research in these kipuka forests is aimed at understanding how kipuka size and introduced rats influence kipuka food webs and the native birds. But if the birds in these kipuka are imperiled, some listed and others being considered for listing under the Endangered Species Act, Hawaii is also home to a few bird species even worse off.
To see what intensive care looks like for the most critically endangered birds, I spend a morning at the Keauhou Bird Conservation Center in Volcano, Hawaii. Rich Switzer, manager for the Hawaii Endangered Bird Conservation Program at the San Diego Zoo, meets me at the gate, and we drive up to a compound of one-story buildings scattered within a sparse ohia forest. The mission of Keauhou is to conserve the most critically endangered Hawaiian birds through captive breeding and, with careful planning, reintroduce them back into the wild.
We remove our shoes, a common practice in Hawaii, but here a precaution against spreading disease. We first tour a darkened hallway with rooms on both sides and small vertical windows along the walls, looking into aviaries housing palila, Maui parrotbill, puaiohi and apapane. It is the breeding season, so we speak in whispers. There are only about 500 parrotbills left, and they are found only on a small part of Maui. The puaiohi is even rarer. It is one of only two endemic thrushes left in Hawaii and is found only at the highest elevations on Kaua’i.
Next, we visit the extensive aviaries dedicated to saving one of the rarest birds in the world — the Hawaiian crow, called an ‘alala, which no one has seen in the wild since 2002. Prior to the disappearance of the wild birds, a small population was established in captivity with the hope of preventing their extinction. We enter a control room with a dozen black-and-white screens streaming live video from each aviary. Lisa Komarczyk, a senior research associate at Keauhou, closely monitors a mated pair of ‘alala and a nest platform covered with sticks assembled by the female. Lisa records how frequently the female nestles in the nest cup, a precursor to egg laying, and as we watch, a female named Moa Nui sits quietly as her mate perches in the background.
The ‘alala is a relative of the crows, ravens and jays found across much of the globe. This family is among the smartest birds, with some species using tools and in other ways showing the capacity for complex thought and for cultural transmission of learned information. Today, with the ‘alala facing extinction, such traits cut both ways. A greater capacity for learning might be advantageous as this bird struggles to survive in a changing world. Yet cultural information like where and when to find food and how best to react to predators is at least somewhat learned from experienced older birds. It does not take long in captivity for such abilities to disappear. An earlier effort to reintroduce ‘alala into the wild failed in part because the native hawk, the ‘io, was able to capture birds that may have been weakened by introduced disease or lost adaptive responses to the predator, or both.
As we watch the monitor, Rich points to a video screen showing a female sitting quietly on her nest. He and Lisa note some rhythmic head movements and contractions of the bird’s body. We note that the fine black feathers on the bird’s head have become erect, and the bird gives a subtle but distinct shudder. They’ve seen egg laying many times before, but for me, this is thrilling. I ask if they think she just laid, but they are noncommittal. The female shuffles a bit and rises to reveal a shiny spotted egg. Bending her head down, she uses her bill to delicately arrange the egg beneath her. She then settles her warm belly down over the egg and waits to see what the future will bring.
First, "unit" is intentionally vague. It could be a class, a function, a module or a package. It's a "unit" of code. Anything could be considered a "unit".
Second--and more important--the extensive mocking isn't fully appropriate for Python programming. Mocks are very helpful in statically-typed languages where you must be very fussy about assuring that all of the interface definitions are carefully matched up properly.
In Python, duck typing allows a mock to be defined quite trivially. A mock library isn't terribly helpful, since it doesn't reduce the code volume or complexity in any meaningful way.
Dependencies without Injection
The larger issue with trying to unit test in Python with mock objects is the impact of change.
We have some class with an interface.
class AppFeature( object ):
    def app_method( self, anotherObject ):
        # Delegates to the injected dependency (body elided in the original).
        return anotherObject.another_method()

class AnotherClass( object ):
    def another_method( self ):
        pass  # body elided in the original
We've properly used dependency injection to make AppFeature depend on an instance of AnotherClass. This means that we're supposed to create a mock of AnotherClass to test the AppFeature.
class MockAnotherClass( object ):
    def another_method( self ):
        return None  # canned response standing in for the real class
In Python, this mock isn't a best practice. It can be helpful. But adding a mock can also be confusing and misleading.
Consider the situation where we're refactoring and change the interface to AnotherClass. We modify another_method to take an additional argument, for example.
How many mocks do we have? How many need to be changed? What happens when we miss one of the mocks and have the mysterious Isolated Test Failure?
While we can use a naming convention and grep to locate the mocks, this can (and does) get murky when we've got a mock that replaces a complex cluster of objects with a simple Facade for testing purposes. Now, we've got a mock that doesn't trivially replace the mocked class.
Alternative: Less Strict Mocking
In Python--and other duck typing languages--a less mock-heavy approach seems more productive. The goal of testing every class in isolation surrounded by mocks needs to be relaxed. A more helpful approach is to work up through the layers.
- Test the "low-level" classes--those with few or no dependencies--in isolation. This is easy because they're already isolated by design.
- The classes which depend on these low-level classes can simply use the low-level classes without shame or embarrassment. The low-level classes work. Higher-level classes can depend on them. It's okay.
- In some cases, mocks are required for particularly complex or difficult classes. Nothing is wrong with mocks. But fussy overuse of mocks does create additional work.
The benefits of this are:
- The layered architecture is tested the way it's actually used. The low-level classes are tested in isolation as well as being tested in conjunction with the classes that depend on them.
- It's easier to refactor. The design changes aren't propagated into mocks.
- Layer boundaries can be more strictly enforced. Circularities are exposed in a more useful way through the dependencies and layered testing.
We still need to work out proper dependency injection. If we try to mock every dependency, we are forced to confront every dependency in glorious detail. If we don't mock every single dependency, we can slide by without properly isolating our design.
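To make the layered approach concrete, here is a minimal sketch using the standard unittest module. The class names follow the article's example; the method bodies are assumptions for illustration, since the article leaves them blank:

```python
import unittest

# Stand-ins mirroring the article's example; bodies are invented for the demo.
class AnotherClass(object):
    def another_method(self):
        return 42

class AppFeature(object):
    def app_method(self, anotherObject):
        # Dependency injection: the collaborator is passed in, not built here.
        return anotherObject.another_method() * 2

# Layer 1: test the low-level class in isolation; it has no dependencies.
class TestAnotherClass(unittest.TestCase):
    def test_another_method(self):
        self.assertEqual(AnotherClass().another_method(), 42)

# Layer 2: test the higher-level class with the *real* low-level class.
# There is no mock to maintain, so refactoring AnotherClass's interface
# fails loudly here instead of silently passing against a stale mock.
class TestAppFeature(unittest.TestCase):
    def test_app_method(self):
        self.assertEqual(AppFeature().app_method(AnotherClass()), 84)
```

Only when `AnotherClass` were genuinely slow or complex would a mock be substituted at layer 2.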
The Origin of a Land Flora
The precise algal origin of land plants has yet to be ascertained, but questions raised by Bower’s work, summarized in his classic The Origin of a Land Flora (1908), have done much to coordinate paleobotany and plant morphology in a widespread study of plant evolution. Bower also wrote The Ferns, 3 vol. (1923–28), Size and Form in Plants (1930), and Primitive Land...
1. What is the Definition of an Earthquake?
An earthquake is the sudden, sometimes violent movement of the earth's surface from the release of energy in the earth's crust.
2. What Causes Most Earthquakes?
The crust of the earth, when it is subject to tectonic forces, bends slightly. But, because the crust is rigid, when the stress or pressure exceeds the strength of the rocks, the crust breaks and snaps into a new position. Vibrations called seismic waves are generated and travel both through the earth and along its surface. These seismic waves produce the movement we call earthquakes.
3. Where Are Earthquakes Likely to Occur?
Within areas of the crust are fractures, known as faults, along which two crustal blocks have slipped or moved against each other. One block may move up while the other moves down, or one may move horizontally in one direction and the other in the opposite direction. Geologists and seismologists (scientists who study earthquakes and the processes that create them) have found that earthquakes occur repeatedly at faults, which are zones of weakness in the crust.
4. How Many Earthquakes Happen Each Year?
There are over a million quakes annually, including those too small to be felt. The following table shows the average frequency of earthquakes of different magnitudes:
[Table fragment: columns include "Frequency per year"; surviving entries include "Minor (damage slight)" and "less than 2.0".]
From Earthquakes and the Urban Environment, Vol. 1, G. Lennis
5. How Many Earthquakes Happen Every Month? Day?
Using the previous table, over a million earthquakes a year averages out to thousands per day and, on average, one earthquake somewhere in the world about every 30 seconds.
Of these, only a relative few are capable of causing damage. Earthquakes are common natural events.
6. How Deep Do Earthquakes Occur in the World?
Earthquakes occur in the crust or upper mantle, which ranges from the surface to about 800 kilometers deep (about 500 miles).
7. Where Do Most Earthquakes Occur in the World?
The surface of the earth is divided like a jigsaw puzzle into giant pieces called tectonic or crustal plates. These giant pieces move slowly over partially melted rock known as the mantle. As plates move, they slide along each other, move into each other, move away from each other, or one slips under another. On these active plate boundaries about 95% of all the world's earthquakes occur. California, Alaska, Japan, South America, and the Philippines are all on plate boundaries.
Only 5% are in areas of the plates far away from the boundaries. These are called mid-plate or intra-plate earthquakes and are, as yet, not well understood.
8. Where Do the Most Earthquakes Occur in the United States?
Alaska has more earthquakes per year than the combined total of the rest of the United States. As many as 4,000 are recorded there every year. Alaska is on a plate boundary where one plate is sliding beneath another, a subduction zone.
9. Where Did the Largest Known Earthquake Occur?
A magnitude 9.5 earthquake in Chile in 1960 was the largest known earthquake and resulted in over 6,000 deaths. It triggered a tsunami or seismic sea wave (incorrectly known as a tidal wave) that killed people as far away as Hawaii and Japan. Chile is also on a subduction zone.
10. What Was the Largest Earthquake in the United States?
The great Alaska earthquake of March 27, 1964, is the largest recorded in the United States. It had a magnitude of 9.2. 115 people died, with most of the deaths due to the tsunami it generated. Shaking was felt for an estimated 7 minutes, and raised or lowered the ground by as much as 2 meters (6.5 feet) in some areas and 17 meters (approx. 56 feet) in others. The length of the ruptured fault was between 500 and 1,000 kilometers (310.5 and 621 miles). The amount of energy released was equal to 12,000 Hiroshima-type blasts, or 240 million tons of TNT.
11. Where Was the Largest Earthquake in the Continental United States?
A series of four great earthquakes occurred in the central United States on December 16, 1811, and January 23, and February 7, 1812. All had estimated magnitudes greater than 7.5 on the Richter Scale, the largest happening on February 7, 1812. They are collectively known as the New Madrid earthquakes (after a small town in Missouri) and were felt as far away as Washington D.C., and Boston, Massachusetts. These events were felt over a region far greater than any other in the United States, an estimated 2 million square miles. There were fewer than 100 deaths, because of the small number of people living in the area. The earthquakes raised and lowered land levels several feet, created one large lake and several smaller lakes, and formed waterfalls on the Mississippi River. One small town was destroyed and there was extensive damage to structures and changes to land surfaces throughout the region. These earthquakes were far away from a plate boundary, and are the largest known to have happened in a mid-plate area.
12. It Seems that Large Earthquakes in the U.S. Are Responsible for Relatively Few Deaths. Is This True Around the World?
No. In other areas of the world smaller earthquakes are responsible for the deaths of many thousands of people. This is primarily because of buildings which are poorly designed and constructed for earthquake-prone regions, and because of population density. The following table shows some of the major earthquakes around the world in the last twenty years, and the number of deaths associated with them.
13. What Was the Greatest Number of People Killed in One Earthquake?
An earthquake in China in 1556 killed approximately 830,000 people.
14. How Are Earthquakes Measured?
A seismometer is an instrument that senses the earth's motion; a seismograph combines a seismometer with recording equipment to obtain a permanent record of the motion. From this record scientists can calculate how much energy was released in an earthquake, which is used to decide its magnitude. Calculations are made from several different seismograms, both close to and far from an earthquake source, to determine its magnitude. Calculations from various seismic stations and seismographs should give the same magnitude, with only one magnitude for any given earthquake.
Richter magnitude is the scale most people are familiar with, but scientists use other, more accurate scales. Another nonscientific way of measuring earthquakes is by their intensity or degree of shaking. Intensity is descriptive, and is determined by inspection of damage and other effects, with the greatest intensity being close to the epicenter and smaller intensities further away. The Modified Mercalli Intensity Scale, which uses Roman numerals from I to XII to describe different earthquake effects, is commonly used.
15. What Does the Richter Scale Look Like?
The Richter Scale is not an actual instrument. It is a measure of the amplitude of seismic waves and is related to the amount of energy released. This can be estimated from the recordings of an earthquake on a seismograph. The scale is logarithmic, which means that each whole number on the scale represents a tenfold increase in amplitude. A magnitude 6.0 earthquake is 10 times greater than a 5.0, a 7.0 is 100 times greater, and a magnitude 8.0 is 1,000 times greater.
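As a quick illustration of the arithmetic, here is a sketch. The tenfold-per-step amplitude figure is from the text above; the roughly 31.6-fold (10^1.5) energy scaling per step is the standard Gutenberg-Richter relation, added here as background rather than taken from this FAQ:

```python
# Richter magnitudes are logarithmic: each whole step is a 10x increase in
# recorded wave amplitude, and roughly a 31.6x (10**1.5) increase in energy.
def amplitude_ratio(m1, m2):
    """How many times larger the seismogram amplitude of m2 is versus m1."""
    return 10 ** (m2 - m1)

def energy_ratio(m1, m2):
    """Approximate ratio of energy released (Gutenberg-Richter scaling)."""
    return 10 ** (1.5 * (m2 - m1))

print(amplitude_ratio(5.0, 6.0))         # 10.0
print(amplitude_ratio(5.0, 8.0))         # 1000.0
print(round(energy_ratio(5.0, 6.0), 1))  # 31.6
```
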
16. When Was the First Instrument for Detecting Earthquakes Invented?
The earliest known earthquake detection instrument was invented in 132 A.D. by Zhang Heng, a Chinese philosopher. The instrument was a large (2 meters or 6.5 feet in diameter) bronze jar with a central pendulum inside. Decorating the jar on the outside were a series of dragon heads connected to the pendulum, each with a ball in a hinged mouth. Directly beneath each dragon head, on the surface of the stand, was a bronze toad, head up, mouth open to receive a ball from the dragon's mouth.
During an earthquake, the ground motion would move the pendulum, causing one or more balls to fall from a dragon's mouth into a toad's mouth. The direction of the earthquake was indicated by which of the dragon heads had dropped a ball.
This instrument was sensitive enough to perceive shaking too small to be felt, as it detected an earthquake over 600 kilometers (372 miles) away, news of which arrived several weeks later.
Earthquake detectors are mentioned later in oriental manuscripts, but in the west earthquake detection instruments did not emerge until much later.
17. What Is the Difference Between an Earthquake Prediction and a Forecast?
An earthquake prediction involves assigning a specific date, location, and magnitude for an earthquake. A forecast assigns a series of probabilities and a range of years and magnitudes to a region. There is no way to accurately predict earthquakes, but forecasts have been calculated for different areas of the United States. The earthquake in northern California on October 17, 1989, was not predicted, but did occur within the magnitude range, time span, and region forecast by U.S. Geological Survey staff.
18. Does Animal Behavior Change Before Earthquakes?
Changes in animal behavior before earthquakes have been observed and documented in different parts of the world, most recently in the northern California earthquake of October 17, 1989. It has been recorded that a fish in a high school biology lab in California would flip on its side shortly before some earthquakes.
Dogs, cats, snakes, and horses have also been known to behave unusually before earthquakes. Since the behavior is not earthquake specific, a change in animal behavior can also result from other events, and it is impossible to determine beforehand what factor has caused the change. Also, the behavior is not consistent. Sometimes earthquakes occur with no previous behavior change.
19. Does the Ground Really Open Up and Swallow People?
This is an earthquake myth. Cracks and fissures appearing in the ground are a common effect of earthquakes. Most of these are narrow and shallow. In very large earthquakes, changes in the level of the land can result in larger cracks that can cause a lot of damage to buildings, but people and buildings do not get swallowed by the ground.
20. Do Earthquakes Cause Volcanoes?
No, there are different earth processes responsible for volcanoes.
Earthquakes may occur in an area before, during, and after a volcanic eruption, but they are the result of the active forces connected with the eruption, and not the cause of volcanic activity.
21. Are Earthquakes Weather Related?
In the 4th century B.C., Aristotle proposed that earthquakes were caused by winds trapped in subterranean caves. Small tremors were thought to have been caused by air pushing on the cavern roofs, and large ones by the air breaking the surface. This theory led to a belief in earthquake weather: because a large amount of air was trapped underground, the weather would be hot and calm before an earthquake.
A later theory stated that earthquakes occurred in calm, cloudy conditions, and were usually preceded by strong winds, fireballs, and meteors. There is no connection between weather and earthquakes. Earthquakes are the result of geologic processes within the earth and can happen in any weather and at any time during the year.
22. What Are Earthquake Scientists Called?
Seismologists: seismos-, from the Greek meaning earthquake, and -ologist, meaning a person who studies something. A seismologist is a person who studies earthquakes and the mechanics of the earth.
23. How Much Energy is Released in an Earthquake?
Earthquakes release a tremendous amount of energy, which is why they can be so destructive. The table below shows magnitudes with the approximate amount of TNT needed to release the same amount of energy.
From Earthquakes and the Urban Environment, Vol. 1, G. Lennis
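Such TNT equivalents can be sketched from a common energy-magnitude relation. The formula below (log10 of energy in joules = 1.5M + 4.8) and the convention of 4.184e9 joules per ton of TNT are standard but are assumptions here; published tables, possibly including the one cited, use varying relations, so treat the outputs as order-of-magnitude estimates:

```python
# Estimate TNT equivalents from magnitude via the Gutenberg-Richter
# energy relation: log10(E_joules) = 1.5 * M + 4.8 (an assumed relation;
# older tables may differ by a sizable constant factor).
JOULES_PER_TON_TNT = 4.184e9

def tnt_tons(magnitude):
    energy_joules = 10 ** (1.5 * magnitude + 4.8)
    return energy_joules / JOULES_PER_TON_TNT

for m in (5.0, 6.0, 7.0, 8.0):
    # e.g. a magnitude 6.0 works out to roughly 1.5e4 tons (about 15 kilotons)
    print(m, f"{tnt_tons(m):.3g} tons of TNT")
```

Note how each whole magnitude step multiplies the TNT equivalent by about 31.6, consistent with the logarithmic scale described earlier.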
24. Do All Large Magnitude Earthquakes Result in Great Amounts
Death and Destruction?
No. The destructive force of an earthquake depends on many factors. Large earthquakes commonly occur in remote areas of the world, with no nearby buildings or people, and are not destructive. In addition to magnitude, some of the factors that determine damage and deaths are: population densities, the density and types of building construction, local ground conditions, distance from the epicenter, earthquake depth, how long the shaking continues, and the degree of earthquake preparedness in the region.
25. Can Earthquakes Be Prevented?
There is no known way to prevent earthquakes, but it is possible to lessen their impact. The amount of devastation from an earthquake can be greatly diminished by building structures using earthquake-resistant design, making the interiors of buildings safe from falling objects, and educating people about earthquake safety.
Over fifty years ago, a supernova was discovered in M83, a spiral galaxy about 15 million light years from Earth. Astronomers have used NASA's Chandra X-ray Observatory to make the first detection of X-rays emitted by the debris from this explosion.
Named SN 1957D because it was the fourth supernova to be discovered in 1957, it is one of only a few supernovae located outside of the Milky Way galaxy that are detectable, in both radio and optical wavelengths, decades after the explosion was observed. In 1981, astronomers saw the remnant of the exploded star in radio waves, and then in 1987 they detected the remnant at optical wavelengths, years after the light from the explosion itself became undetectable.
A relatively short observation -- about 14 hours long -- from NASA's Chandra X-ray Observatory in 2000 and 2001 did not detect any X-rays from the remnant of SN 1957D. However, a much longer observation obtained in 2010 and 2011, totaling nearly 8 and 1/2 days of Chandra time, did reveal the presence of X-ray emission. The X-ray brightness in 2000 and 2001 was about the same as or lower than in this deep image.
This new Chandra image of M83 is one of the deepest X-ray observations ever made of a spiral galaxy beyond our own. This full-field view of the spiral galaxy shows the low, medium, and high-energy X-rays observed by Chandra in red, green, and blue respectively. The location of SN 1957D, which is found on the inner edge of the spiral arm just above the galaxy's center, is outlined in the box.
The new X-ray data from the remnant of SN 1957D provide important information about the nature of this explosion, which astronomers think happened when a massive star ran out of fuel and collapsed. The distribution of X-rays with energy suggests that SN 1957D contains a neutron star, a rapidly spinning, dense star formed when the core of the pre-supernova star collapsed. This neutron star, or pulsar, may be producing a cocoon of charged particles moving at close to the speed of light, known as a pulsar wind nebula.
If this interpretation is confirmed, the pulsar in SN 1957D is observed at an age of 55 years, one of the youngest pulsars ever seen. The remnant of SN 1979C in the galaxy M100 contains another candidate for the youngest pulsar, but astronomers are still unsure whether there is a black hole or a pulsar at the center of SN 1979C.
An image from the Hubble Space Telescope (in the box labeled "Optical Close-Up") shows that the debris of the explosion that created SN 1957D is located at the edge of a star cluster less than 10 million years old. Many of these stars are estimated to have masses about 17 times that of the Sun. This is just the right mass for a star's evolution to result in a core-collapse supernova as is thought to be the case in SN 1957D.
These results will appear in an upcoming issue of The Astrophysical Journal. The researchers involved with this study were Knox Long (Space Telescope Science Institute), William Blair (Johns Hopkins University), Leith Godfrey (Curtin University, Australia), Kip Kuntz (Johns Hopkins), Paul Plucinsky (Harvard-Smithsonian Center for Astrophysics), Roberto Soria (Curtin University), Christopher Stockdale (University of Oklahoma and the Australian Astronomical Observatory), Bradley Whitmore (Space Telescope Science Institute), and Frank Winkler (Middlebury College).
NASA's Marshall Space Flight Center in Huntsville, Ala., manages the Chandra program for NASA's Science Mission Directorate in Washington. The Smithsonian Astrophysical Observatory controls Chandra's science and flight operations from Cambridge, Mass. | <urn:uuid:5b4d3117-75fd-4728-9165-d73f0d976ffb> | 3.671875 | 787 | Knowledge Article | Science & Tech. | 42.963761 |
Waves, Nearshore Currents, and Tides
Energetic storm-generated winter waves at La Jolla, CA. Waves move sand on, off, and along the shore. These waves can strip beaches of sand during winter.
Photographer - Memorie Yasuda, image used with permission.
Most energy in nearshore waters comes from wind-generated waves and tidal currents. The dispersion of water, pollutants, nutrients, and sediments near the coast and the formation and erosion of sandy beaches are some of the common results of nearshore energy dissipation.
Waves and the currents they generate are the primary factors in transport and deposition of coastal sediments. Waves move material along the bottom and suspend it for weaker currents to transport.
Rips at Carolina Beach, North Carolina were the result of swells from a hurricane located a few hundred miles offshore (September 11, 2001).
Image courtesy Carolina Beach Police Department, image used with permission.
Wave climate - Wave climate history for the southern California Bight over the last century. Incidents with wave heights greater than 4m are shown. Other years had maximum wave heights less than 4m.
Diagram from Inman, D.L. and S.A. Jenkins, 1997.
Wave action along the southern California coast is seasonal, responding to changing wind systems over the Pacific Ocean. Some waves are generated in the southern ocean and travel 11,000 km (7,000 mi) before breaking on California beaches.
The height and period of the waves depend on the speed and duration of the generating winds and the fetch. The types of waves that break on a beach and their seasonal variance are known as the wave climate.
Deep water waves are long, low, and sinusoidal in form. As the waves enter shallow water, the propagation speed and wavelength decrease, the wave steepens, and the wave height increases until the wave train consists of peaked crests separated by flat troughs. This wave shoaling begins at the depth where the waves "feel bottom." This depth is about one-half the deep-water wave length.
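The shoaling rule in the paragraph above can be checked with a couple of lines of code. This is an illustrative sketch using the standard linear (Airy) wave-theory relation L0 = gT^2/(2*pi) for the deep-water wavelength; the relation is textbook physics, not a formula taken from this page:

```python
import math

def deep_water_wavelength(period_s, g=9.81):
    """Deep-water wavelength from linear (Airy) wave theory: L0 = g*T^2 / (2*pi)."""
    return g * period_s ** 2 / (2 * math.pi)

def shoaling_depth(period_s):
    """Depth at which waves start to 'feel bottom': about half the deep-water wavelength."""
    return deep_water_wavelength(period_s) / 2

# A 10-second swell has a deep-water wavelength of roughly 156 m,
# so it begins shoaling in about 78 m of water.
L0 = deep_water_wavelength(10.0)
d = shoaling_depth(10.0)
```

Longer-period swell (such as the southern-ocean waves mentioned below) therefore begins to feel the bottom in much deeper water than short, locally generated wind waves.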
Upon entering shallow water, waves are also subjected to refraction, a process in which the wave crests tend to parallel the depth contours. Simultaneously, wave diffraction causes a flow of energy along the wave crest from high waves to low waves.
Wave refraction - Wave refraction causes wave fronts to parallel the shape of the coastline as they approach shore and encounter the bottom. Wave refraction concentrates wave energy on headlands, preferentially eroding them rather than bays.
Location TBA. Photograph courtesy - Greg Moore. Permission pending.
Modelled wave refraction - Jaws, Maui.
Image used with permission - Katie M. Fearing and Robert A. Dalrymple.
For straight coasts with parallel contours, refraction decreases the angle between the approaching wave and the coast, and diffraction causes a spreading of energy along the crests.
Consequently, change in wave height and direction at any point along the coast is a function of wave period, direction of approach, and configuration of the bottom contours.
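For the straight-coast case just described, the turning of the wave crests can be quantified with Snell's law for waves, sin(theta)/c = constant along a wave ray, together with the shallow-water phase speed c = sqrt(g*d). The sketch below is illustrative; the depths and approach angle are made-up values, not data from this page:

```python
import math

def refracted_angle(theta_deg, d1, d2, g=9.81):
    """Snell's law for shallow-water waves over straight, parallel depth contours:
    sin(theta)/c is constant, with phase speed c = sqrt(g*d)."""
    c1 = math.sqrt(g * d1)
    c2 = math.sqrt(g * d2)
    return math.degrees(math.asin(math.sin(math.radians(theta_deg)) * c2 / c1))

# A wave approaching at 30 degrees in 20 m of water turns toward
# shore-parallel (about 14.5 degrees) as it moves into 5 m of water.
angle_nearshore = refracted_angle(30.0, 20.0, 5.0)
```

Because the phase speed drops as the depth decreases, the angle between the crests and the depth contours always shrinks on the way in, which is exactly the refraction behavior described above.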
When waves break at an angle to the beach, the momentum of the breaking wave generates onshore currents that flow in the direction of propagation of the breaking wave and its bore. The pile up of water along the shore causes longshore currents that flow parallel to the beach inside the breaker zone. The water in the longshore current returns seaward as rip currents. The spacing between rip currents is usually two to eight times the width of the surf zone.
Nearshore river discharge and transport - Currents of the nearshore circulation system produce a continuous interchange of water between the surf zone and offshore waters, distributing nutrients and dispersing runoff from the land.
Longshore currents may reach velocities of 2.5 m/s (8 ft/s). Rip currents have been measured in excess of 1.5 m/s (5 ft/s).
Headlands, breakwaters, and piers influence the circulation pattern and alter the direction of currents flowing along the shore. Where a straight beach is terminated on the down-current side by points or other obstructions, a pronounced rip current extends seaward.
Tidal flushing model - San Diego Bay, California
A simulation showing percent of initial tracer concentration forced by a 60-cm tidal amplitude.
Image and movie courtesy John Helly, San Diego Supercomputer Center, UCSD.
The lunar semidiurnal tide, with a period of 12.42 h, is the principal world tide, and its amplitude is controlled by local ocean bathymetry. Tides generate shelf and coastal currents that are important to transport of finer sediments. Velocities over the southern California shelf may reach 15-20 cm/s at times.
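A single M2 constituent with the 12.42-hour period quoted above can be sketched as a simple cosine. The 0.6 m amplitude used below is a hypothetical value chosen for illustration, not a measurement from the text:

```python
import math

M2_PERIOD_H = 12.42  # lunar semidiurnal (M2) period, in hours

def tidal_elevation(t_hours, amplitude_m=0.6):
    """Elevation of a single M2 tidal constituent; the 0.6 m amplitude
    is a hypothetical value, not a measurement from this page."""
    return amplitude_m * math.cos(2 * math.pi * t_hours / M2_PERIOD_H)

# Successive high waters occur roughly 12.42 h apart:
high = tidal_elevation(0.0)               # +0.6 m (high water)
low = tidal_elevation(M2_PERIOD_H / 2)    # -0.6 m (low water)
```

Real tidal predictions sum many such constituents with amplitudes and phases fitted to local observations, since local bathymetry controls the amplitude of each one.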
The tidal range determines the elevation of wave attack at the shoreline. Extreme tides influence inundation and flooding and are amplified by sea level changes associated with El Niño events.
Tidal currents are the primary sediment transport force inside enclosed bays and harbors, and tidal flow through the entrances may be very fast.
©2002-2003 by the Regents of the University of California and the Kavli Institute.
All rights reserved.
Last modified Friday, June 25, 2003
Environment: Win32, MSVC 6.0, STL
Sometimes it is necessary to create XML files from a program in C++. It is possible to use the very powerful set of IXMLDOMxxx interfaces, but this may add unnecessary requirements and overhead to the program. On the other hand, to create XML files from code is not a trivial task in the general case. Even when simple XML files are created, a set of wrapping classes can still be helpful.
The front-end class CXmlNode exposes a number of constructors to handle the output of different types of variables - strings, wide character strings, numeric values, boolean variables. All constructors take as parameters a reference to an open output stream object and the XML element name. Most of the functionality is provided by the base class, CXmlNodeBase. When the method StartNode is called, the begin tag of the element is created in the output file. The destructor of the class puts automatically the end tag if StartNode has been called.
To attach attributes to an element, it is necessary to construct a CXmlAttributeSet object and call AssignAttributeSet before StartNode is called. This will ensure all attributes are properly put in the begin tag of the element.
Often XML elements are created in fixed order. A set of macros makes this task extremely easy. To create an element with child elements, use the macro bracket pair START_NODE CLOSE_NODE and create all child elements between these macros. The macro DO_NODE creates an element and immediately puts its end tag. The macros with suffix _A take an additional reference to a CXmlAttributeSet object for the attributes of the node.
When more complex processing is required, CXmlNode objects can be manipulated directly or through a smart pointer wrapper, as shown in the sample.
ofstream ofs("test.xml");

CXmlAttributeSet as("a1","v1");
as.AddAttribute("a2","v2");

UTL_PutXmlHeader(ofs);

START_NODE(ofs, "DemoDocument", CXmlNode::EMPTY_DATA);
    START_NODE_A(ofs, "e1", "t1", as);
        DO_NODE(ofs, "e11", "t11")
        DO_NODE(ofs, "e12", "t12")
        START_NODE(ofs, "e12", "t12")
            DO_NODE_A(ofs, "e121", "t121", CXmlAttributeSet("wee","jee"))
            DO_NODE(ofs, "answer", 42)
            DO_NODE(ofs, "NO_PI", 3.1514)
            START_NODE(ofs, "fact", "5 3")
                DO_NODE(ofs, "is ok", true)
            CLOSE_NODE
        CLOSE_NODE
    CLOSE_NODE

as.AddAttribute("a3","v3");
DO_NODE_A(ofs, "e2", CXmlNode::EMPTY_DATA, as);

CXmlNode * n3 = new CXmlNode(ofs, "e3", CXmlNode::EMPTY_DATA);
SmartPtr<CXmlNode> spXml(NULL);
n3->StartNode();

spXml = new CXmlNode(ofs, "e31", "t31");
spXml->StartNode();
spXml = NULL;

spXml = new CXmlNode(ofs, "e32", "®");
spXml->StartNode();
spXml = NULL;

delete n3;
CLOSE_NODE
The sample does not catch the exceptions which the CXmlNode class can throw. A professional program would handle these exceptions in an appropriate way. | <urn:uuid:efeacc9d-b85f-48b8-9c50-4c27d9cf01d7> | 2.984375 | 805 | Documentation | Software Dev. | 45.61656 |
Little-wing pearly mussel
Description: The little-wing pearly mussel is small, not exceeding 1.5 inches in length and 0.5 inches in width. The shell’s outer surface is usually eroded, giving the shell a chalky or ashy white appearance. When the outer surface is intact, it’s light green or dark yellowish brown with dark rays of variable width along the shell’s front surface.
Specific food habits of the mussel are unknown, but it likely feeds on food items similar to those consumed by other freshwater mussels, including detritus, diatoms, phytoplankton, and zooplankton.
The reproductive cycle of the species is likely similar to other native mussels. Males release sperm into the water, and the eggs are fertilized when the sperm are taken in by the females through their siphons during feeding and respiration. Females retain the fertilized eggs in their gills until the larvae (glochidia) fully develop. The glochidia are released into the water and must attach to the gills or fins of the appropriate fish species. They remain attached to their “fish host” for several weeks, drawing nourishment from the fish while they develop into juvenile mussels. They do not hurt their “fish host.” The juvenile mussels then detach from the fish host and drop to the bottom of the stream where they continue to develop, provided they land in a suitable place with good water conditions. This dependence on a certain species of fish increases the mussels’ vulnerability to habitat disturbances. If the fish host is driven off or eliminated because of habitat or water quality problems, the mussels can’t reproduce and will eventually die out.
Habitat: The little-wing pearly mussel inhabits small to medium streams, with low-turbidity, cool-water, and high to moderate gradients.
Range: This mussel was historically widespread but uncommon in the smaller tributaries of the upper Cumberland and Tennessee River basins in Alabama, North Carolina, Kentucky, Tennessee, and Virginia. Today, in the Cumberland River system, the mussel is known from Horse Lick Creek (Jackson and Rockcastle Counties, KY); Big and Little South Forks, Cumberland River (McCreary and Wayne Counties, KY); Cane Creek (Van Buren County, TN). In the Tennessee River System, the mussel is known from: the Little Tennessee River (Macon and Swain Counties, NC), North Fork Holston River (Smyth and Washington Counties, VA), and the Clinch River (Tazewell County, VA).
Listing: Endangered, November 14, 1988. 53 FR 45861
Critical habitat: None designated
Threats: Poor water quality and habitat conditions have led to the decline and loss of populations of the littlewing pearly mussel and threaten the remaining populations.
Agriculture (both crop and livestock) and forestry operations, roads, residential areas, golf courses, and other construction activities that do not adequately control soil erosion and water run-off contribute excessive amounts of silt, pesticides, fertilizers, heavy metals, and other pollutants that suffocate and poison freshwater mussels.
The alteration of floodplains or the removal of forested stream buffers can be especially detrimental. Flood plains and forested stream buffers help maintain water quality and stream stability by absorbing, filtering, and slowly releasing rainwater. This also helps recharge groundwater levels and maintain flows during dry months.
Acid mine drainage and other water quality impacts associated with gas, oil, and mineral extraction also contribute to imperilment.
Why should we be concerned about the loss of species? Extinction is a natural process that has been occurring since long before the appearance of humans. Normally, new species develop through a process known as speciation, at about the same rate other species become extinct. However, because of air and water pollution, forest clearing, loss of wetlands, and other man-induced environmental changes, extinctions are now occurring at a rate that far exceeds the speciation rate.
All creatures, including humans, are interconnected. Native mussels rely on certain fish species in order to reproduce. In turn, these mussels provide numerous benefits to fish and other aquatic organisms. Mussels continuously filter the water for food and oxygen; as they do so, they are cleaning the water of pollutants and large quantities of organic particles, much like a tiny water purifying system. They play an important role in the aquatic food chain as a food source for wildlife, including river otters, muskrats, great blue herons, and numerous species of fish and turtles. Their shells provide cover and nesting habitat for aquatic insects, crayfish, and bottom-dwelling fish species like darters, sculpins, and madtoms (major prey items for many game fish species).
Endangered species are indicators of the health of our environment. The loss of these plants and animals is a sign that the quality of our environment – air, land, and water – is declining. Gradual freshwater mussel die-offs, such as the declining littlewing pearly mussel, and sudden mussel kills are reliable indicators of water pollution problems. Stable, diverse mussel populations generally indicate clean water and a healthy aquatic environment. While poor environmental quality may first manifest itself in the health of our plant and animal populations, if untreated, it eventually affects humans directly, as we breathe polluted air, lose valuable topsoil to erosion, or get sick from swimming in contaminated water.
We depend on the diversity of plant and animal life for our recreation, nourishment, many of our lifesaving medicines, and the ecological functions they provide. One-quarter of all the prescriptions written in the United States today contain chemicals that were originally discovered in plants and animals. Industry and agriculture are increasingly making use of wild plants, seeking out the remaining wild strains of many common crops, such as wheat and corn, to produce new hybrids that are more resistant to disease, pests, and marginal climatic conditions. Our food crops depend on insects and other animals for pollination. Healthy forests clean the air and provide oxygen for us to breathe. Wetlands clean water and help minimize the impacts of floods. These services are the foundation of life and depend on a diversity of plants and animals working in concert. Each time a species disappears, we lose not only those benefits we know it provided but other benefits that we have yet to realize.
What you can do to help
Species Contact:Bob Butler
office - 828/258-3939, ext. 235
fax - 828/258-5330
160 Zillicoa St.
Asheville, NC 28801 | <urn:uuid:065f4604-6b5e-42b9-85bb-dce13ae98371> | 3.671875 | 1,402 | Knowledge Article | Science & Tech. | 35.194443 |
What is the status of benthic invertebrate communities in the open Baltic Sea?
The benthic invertebrate community of the entire Baltic Proper, from the Bornholm Basin to the northern Baltic Proper and the Gulf of Finland, was in a severely disturbed state during the period 2003-2007.
The status of benthic invertebrate communities was good in the Bothnian Sea and the Bothnian Bay.
Figure: Status of benthic invertebrate communities in the open sea areas of the Baltic Sea during the period 2003-2007. The interpolated map has been produced in three steps: 1) the status of coastal assessment units has been interpolated along the shores, 2) the status of open sea basins have been interpolated and 3) the coastal and open interpolations have been combined using a smoothing function. The larger circles indicate the status of open sea assessment units and the smaller circles that of the coastal assessment units.
The composition of animal communities living on the sea bed of the Baltic Sea reflects the conditions of the environment.
In the eutrophication process, broad-scale changes in the composition of the communities – usually involving reduced biodiversity – accompany the increasing organic enrichment of the sediments. At advanced stages of eutrophication, oxygen depletion becomes common.
In many areas of the Baltic, the benthic animals are exposed to widespread oxygen depletion. In areas with periodic oxygen depletion (every late summer and autumn), the number of benthic species is reduced significantly and mature communities cannot develop.
Oxygen depletion may be viewed as a temporal and spatial mosaic of disturbance that results in the loss of habitats, reductions in biodiversity, and a loss of functionally important species. In a Baltic-wide perspective, these disturbances have also resulted in a reduction in the connectivity of populations and communities, which impairs recovery potential and threatens ecosystem resilience. Recovery of benthic communities is scale-dependent and an increase in the extent or intensity of hypoxic disturbance may dramatically reduce rates of recovery.
It is evident that reductions in the distribution and diversity of benthic macrofauna, owing to hypoxic events, have severely altered the way benthic ecosystems contribute to ecosystem processes in the Baltic Sea.
Author(s) and institutions
Alf Norkko and Anna Villnäs - Finnish Environment Institute (SYKE), Finland
Andersen, J.H., P. Axe, H. Backer, J. Carstensen, U. Claussen, V. Fleming-Lehtinen, M. Järvinen, H. Kaartokallio, S. Knuuttila, S. Korpinen, M. Laamanen, E. Lysiak-Pastuszak, G. Martin, F. Møhlenberg, C. Murray, G. Nausch, A. Norkko, & A. Villnäs. 2010. Getting the measure of eutrophication in the Baltic Sea: towards improved assessment principles and methods. Biogeochemistry. DOI: 10.1007/s10533-010-9508-4.
HELCOM 2009a. Eutrophication in the Baltic Sea. An integrated thematic assessment of the effects of nutrient enrichment in the Baltic Sea region. Baltic Sea Environment Proceedings No. 115B.
HELCOM 2009b. Biodiversity in the Baltic Sea. An integrated thematic assessment on biodiversity and nature conservation in the Baltic Sea. Baltic Sea Environment Proceedings No. 116B.
Last updated 5 May 2010 | <urn:uuid:3973c5c5-4945-42de-8374-c5a2bd7e6f73> | 2.859375 | 732 | Academic Writing | Science & Tech. | 36.244961 |
Fighting Forest Fires
Thursday, 06 September 2007 06:28
One of the key factors in fighting forest fires is the ability to predict the spread of the fire. Human experience and empirical spread models provide important tools, but they are not always accurate enough. In particular, they are often inadequate for large, intense wildfires. In the future, however, firefighters might get more reliable, hi-tech help predicting the flames and the smoke.
photographer Kari Greer
Two FSU scientists, CSIT professor Yousuff Hussaini and meteorology professor Phil Cunningham, are using computer models to try to understand the course of wildfires, hoping to be able to develop predictive tools that can be used in real life situations. The core of their research is not the chemistry of the combustion itself, but the fluid mechanics describing how the fire interacts with the surrounding air. Some major questions are: Where is the hot air going, and how does it interact with the atmospheric winds to feed back on the fire?
The FSU scientists work with Dr. Rodman Linn of Los Alamos National Laboratory, who developed one of the models that the group currently runs on the FSU supercomputers, and Dr. Scott Goodrick of the USDA Forest Service. Linn's model, called FIRETEC, is a coupled fire-atmosphere model, meaning that at every instant it calculates the interactions between the chemical reactions and the atmospheric flow to provide predictions of fire behavior.
The intense buoyancy forces associated with the heat from the fire result in a column of rapidly rising air, causing the predominantly horizontal winds in the atmosphere to be diverted around the fire and to bend back in its wake, only to be swept up into the rising column of air. This effect results in a pair of vortices on the downstream edge of the fire, one rotating clockwise and the other rotating counter-clockwise, that can have major impact both on the spread of the fire and the transport of smoke away from the fire in the plume.
In addition to the work on fire spread, Phil Cunningham has adapted a model of the atmosphere to look specifically at the plumes. While most tools currently used to predict smoke concentrations downwind of a fire assume that the plume spreads like a cone with the highest concentration of smoke in the middle, this is not always the case. Occasionally the counter-rotating vortices are intense enough to entrain smoke-free air between them, causing the plume to split into two branches.
Beyond its immediate impact on air quality, the smoke from fires plays an important role in the carbon cycle and on the climate, and authorities concerned with air quality management are very interested in both short and long term effects of the smoke. In fact, despite the threatening nature of the fire itself, the fact is that smoke impacts the well being of a larger number of people over a much larger area.
Caption: Simulation from the FIRETEC coupled fire-atmosphere model showing temperature, air flow streamlines and fuel depletion. (Image courtesy Wilfredo Blanco, FSU Visualization Lab, and Chunmei Xia.) | <urn:uuid:a3b213ea-38ec-4465-ab57-6f011c2485a5> | 3.53125 | 641 | Knowledge Article | Science & Tech. | 33.558083 |
In this chapter I want to teach you the method to create 2D landscapes I used in my game Castles. Again, I won't talk about the Main class, which controls the applet in general, or about the Stars class, which paints the stars in the background. But both classes are pretty simple, so I think it will be no problem for you to learn them by yourself if you want to. Before we start programming, we have to know what our "problem" looks like.
We want to generate a 2D landscape that looks like the "Rocky Mountains" and can be used in a game like "Castles". This landscape will look different in every game, so it's not possible to use *.gif's or something like that. We want this landscape to be generated at random. Also we want to have the possibility to change it afterwards for example if "bombs" hit the ground. Another important thing is, that the datastructure of our landscape mustn't be too complicated, so that the game is still running fast and the amount of data is not too big. I'm sure, there are many different solutions to this problem one could think of. Let's talk about mine:
Idea and outline of the algorithm
To produce as little data as possible, I want to generate my landscape out of many lines and not out of single points/pixels. The basic idea is to draw a vertical line for every pixel of the length of the landscape. Now we choose the lower point of all lines as constant. So we only have to store the upper point, the "surface" of the landscape that will have different values for every line, in a array.
First of all, one has to recognize that it doesn't make sense to choose every upper point at random. If you generated a "landscape" this way, you wouldn't get a structured surface; instead, your "surface" would look like the picture beside this text.
To generate a real surface is a little bit harder. First I will show you the basic concepts of the algorithm; afterwards, you get the method I used in Castles as a Java program. Remember that we leave the lower point of every line constant and change/store the upper point only!
- First of all we choose a start value at random and store it in a array
- Now we initialize the second upper point (the second line) with a value 1 pixel higher or lower than the first one.
- We initialize all values of the array this way, adding or subtracting 1 from the value before the one we want to initialize.
- The decision to add or subtract is made at random. The probability of changing the direction (from adding to subtracting and vice versa) is 10%; the probability of repeating the same operation (adding or subtracting 1) as before is 90%.
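The four steps above can be sketched in a few lines. This is an illustrative Python version of the basic random-walk idea; the article's own implementation, in Java, follows in the next section:

```python
import random

def generate_surface(length, start=300, p_change=0.1):
    """Random-walk surface: each point is 1 pixel higher or lower than the
    previous one; the step direction flips with probability p_change (10%)."""
    heights = [start]
    step = random.choice((-1, 1))  # initial direction chosen at random
    for _ in range(length - 1):
        if random.random() < p_change:
            step = -step           # 10% chance: switch between adding and subtracting
        heights.append(heights[-1] + step)
    return heights

# One upper point per pixel column of the landscape:
surface = generate_surface(640)
```

Because the direction persists 90% of the time, neighboring points form long rising and falling runs, which is what gives the output its mountain-like look instead of pure noise.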
Beside this text you can see the result of this pretty simple but pretty effective algorithm, and it is really not bad. There are just a few small problems. First, it happens pretty often that "mountains" are higher and "valleys" are lower than the applet size. Another "problem" is that this landscape looks a little bit boring, I think. So I will present an algorithm that solves these problems and adds some additional features to the landscape, like changing colors. Here we go:
The "final" algorithm
public void generateLandscape ()
{
    /* initialize plus, this variable tells you which value will be
       added or subtracted from the last value */
    plus = 1;

    // initialize variable faktor, which decides if + or - the value of plus
    faktor = 1;

    // initializing start value of the surface
    start = Math.abs(300 + (rnd.nextInt() % 50));

    // store start value on the first position in the array
    map[0] = start;

    // initializing start values for the colors
    int greenvalue = 200;
    int redvalue = Math.abs(rnd.nextInt() % 200);
    int bluevalue = Math.abs(rnd.nextInt() % 201);

    // storing first RGB value in the Color array
    colors[0] = new Color(redvalue, greenvalue, bluevalue);

    // loop to initialize all array positions
    for (int i = 1; i < mapsize; i++)
    {
        // get the value before the actual position and store it in last
        last = map[i - 1];

        // decide whether to change direction or not
        change = Math.abs(rnd.nextInt() % 10);

        // change direction (and possibly plus) with 10% probability
        if (change > 8)
        {
            // change direction
            faktor = -faktor;
            // new plus (value 1 or 2)
            plus = 1 + Math.abs(rnd.nextInt() % 2);
        }

        // make sure that surface values stay in a certain range
        if (last > 350 || last < 120)
        {
            // change direction
            faktor = -faktor;
        }

        // make sure that color values stay in a certain range
        if (greenvalue > 240)
        {
            // color value gets too high
            greenvalue -= 10;
        }
        else if (greenvalue < 100)
        {
            // color value gets too low
            greenvalue += 10;
        }

        // calculate and store surface value on position i
        map[i] = last + (faktor * plus);

        // calculate and store color value on position i
        greenvalue = greenvalue + (-faktor * plus);
        colors[i] = new Color(redvalue, greenvalue, bluevalue);
    }
}
If one has initialized all array values that way, one can paint the landscape. The easiest way to do this is to walk through all values of the arrays with a for loop. Then draw a line from the bottom of the applet (constant value) to the surface array value; the x-coordinate is the array position of the value. Draw the line in the corresponding color from the color array. You can see the result of this method beside this text. As I mentioned, there are many different possibilities to generate a landscape (for example, one could calculate and store single points instead of lines), but my algorithm is fast and doesn't generate too large an amount of data. And at least the result is not too bad. OK, that's it; if you have another good method to generate a landscape (maybe even 3D), write me. Here comes the source code and the applet!
Magnetism and CRT's
I am doing a simple science project with my son. I accidently waived a magnet by a
CRT screen and saw the magnetic fields in colors. I want to do this experiment at the school
along with a 4th grade explanation on the reaction between screen and magnet. Can you help?
On a CRT, the picture is drawn by three separate electron beams scanned rapidly across the screen.
Each beam is supposed to strike only one of the three primary color phosphors on the interior of
the CRT face. The beams are guided to their proper location by the interplay of electric and
magnetic fields applied back in the area of the CRT tube's neck.
When you applied the magnet to the CRT face, the externally imposed magnetic field distorted the
aim of the electron beams, thereby causing a mis-registration of their intended points of impact.
Older color TV sets can be temporarily damaged by application of a magnet to the screen. No
damage will occur if the TV CRT is of the black and white style.
This is similar to the generation of electrical current by moving a wire through a magnetic field.
In this case, you have an electrical current moving through a magnetic field. A force is
generated that causes the electrons moving toward the screen to be displaced slightly and
hit the wrong color dot.
Color CRTs work by superimposing three images -- one Red, one Blue, one Green -- at least
this is one color scheme that is used. It takes precise alignment of the electron beams to do
this. The interaction of the moving electron beam with the magnetic field causes the electrons
to shift slightly and hit the wrong color dots.
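The sideways push described in both answers is the magnetic part of the Lorentz force, F = qv x B. A rough order-of-magnitude sketch is below; the beam speed and field strength are illustrative guesses, not values from the answers:

```python
# Magnitude of the magnetic force on one beam electron, F = |q| * v * B,
# for velocity perpendicular to the field. The numbers are illustrative
# guesses, not values from the text above.
E_CHARGE = 1.602e-19   # electron charge, in coulombs
v = 5.0e7              # assumed beam electron speed, m/s
B = 5.0e-3             # assumed field of a small hand-held magnet, tesla

force = E_CHARGE * v * B  # newtons, pushing the electron sideways
```

Tiny as this force is in absolute terms, it acts on a particle of very small mass over the whole flight to the screen, which is why even a weak magnet visibly bends the beams off their intended color dots.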
Click here to return to the Physics Archives
Update: June 2012 | <urn:uuid:e93b223c-c7d2-4a09-9fa7-47075a0452c3> | 3.796875 | 367 | Q&A Forum | Science & Tech. | 55.857945 |
It’s never been this hot in the United States.
We just finished the hottest year on record. For the contiguous United States, the 12 months between July 2011 and the end of June marked the hottest year since record keeping began in 1895. Nationally, the average temperature was 56 degrees Fahrenheit -- 3.2 degrees above the long-term average.
This won’t come as news to the folks of Red Willow County, Nebraska, where temperatures reached 115 degrees on June 26 -- shattering the previous record set in 1932 -- or any of more than 100 other places that saw their hottest days ever during just the past three weeks.
It’s not exactly a scoop to the 76 percent of the country where corn is roasting to dust and lakes are baking to chalk amid drought or abnormally dry conditions.
Nor will it much surprise the thousands of Americans who have fled their homes in advance of wildfires that have burned more than 2.6 million acres of land so far this year. (See "Climate Change Fuels the Perfect Firestorm.")
News that we’ve just wrapped up the hottest 12 months on record, though, does put this summer’s heat, drought, and wildfires into perspective, at a time when conservative pundits like George Will are shrugging their shoulders and asking: so what?
“How do we explain the heat? One word: summer,” Will said Sunday on ABC News. “We’re having some hot weather. Get over it.”
Some summers are hotter than others, falling within a range of what scientists call “natural variability,” which can be influenced by global weather patterns. There are many reasons, also, for the kinds of wildfires, and drought we’re seeing this summer. All, though, are part of the wider pattern that climate scientists have been predicting for decades and documenting for years, as the burning of fossil fuels builds up heat-trapping carbon in our atmosphere.
“You’re watching climate change in action,” Kevin Trenberth, distinguished senior scientist with the National Center for Atmospheric Research in Boulder, Colorado, told me. “These are just the sort of things we expect to see as the climate gets warmer. To some extent, these things are going to continue to happen.”
Get over it? Get used to it might be more accurate.
“This is not your grandfather’s summer,” said Trenberth, whose office was forced to evacuate for two days last month as wildfires raged within 1.5 miles of the facility. “This is the summer of the future.”
In just the past three weeks, at least 46 deaths have been blamed on the heat.
How hot? It was 118 degrees in Norton County, Kansas, on June 28. Wait a minute -- 118 degrees? You leave a bayberry candle on your screened porch in that weather, and it melts like a stick of butter.
Amid the longest stretch of triple-digit days since 1930 in the nation’s capital, officials blamed a “heat kink” in a commuter rail line for forcing a train off the tracks July 6. Roads are buckling from the heat in Wisconsin. Yes, Wisconsin.
“Evidence supporting the existence of climate change is pummeling the United States this summer,” Amy Goodman wrote in a July 5 column for the Guardian, “from the mountain wildfires of Colorado to the recent ‘derecho’ storm that left at least 23 dead and 1.4 million people without power from Illinois to Virginia.”
A ‘derecho’ is a powerful line of thunderstorms, supercharged by extreme heat, like the one that swept from Indiana to the coast of Delaware in 10 hours on June 29, packing gusts up to 80 miles per hour and toppling massive trees and telephone poles in its wake.
The kind of heat that fuels derechos has been in ample supply of late.
It was 113 degrees in Edgefield County, South Carolina, on the last day of June, and 112 degrees in Marshall County, Tennessee, on the second of July. Both of those scorchers broke the local all-time highs, part of more than 140 records shattered in 115 different places during late June and early July, according to the National Climatic Data Center, which maintains the largest weather data archive in the world.
Run a finger across a map along a line from northern California to the top of Ohio. Most of the country south of that line is experiencing serious drought, with little relief in sight. “The near-term forecast doesn’t bode very well,” Mark Svoboda, climatologist with the National Drought Mitigation Center at the University of Nebraska, Lincoln, told me. “July and August are the hottest and driest months of the year.”
Across much of the heartland, lakes are drying up, crops are withering, and farming communities are getting hammered, from the cornfields of Nebraska to the pecan groves of Georgia. “There’s a lot of praying,” Illinois Farm Bureau spokesman John Hawkins told the New York Times last week. “These 100-degree temperatures are just sucking the life out of everything.”
Moreover, the warming trend is global. Worldwide, land surface temperatures were 2.18 degrees above the historical average in May. It was the second-hottest May since record keeping began 132 years ago, according to the National Oceanic and Atmospheric Administration. Last year was the 35th year in a row that annual global temperatures exceeded the 20th century average. The 11 years since 2001 all rank among the 13 hottest since 1880.
Hot weather, early snowmelt, and persistent drought have combined to create tinderbox conditions across much of the country. By July 9, 30,495 wildfires had burned 2.6 million acres nationwide, according to the National Interagency Fire Center in Boise, Idaho, and there are more than two months of hot, dry summer conditions ahead.
The amount of forest and other land burned so far this year is in line with the 10-year average, but the average has risen 80 percent over the past decade.
Many factors contribute to wildfires. Experts, though, have been warning for years that global climate change is making matters worse. (See "How the West Was Lost.")
“The effects of climate change will continue to result in greater probability of longer and bigger fire seasons, in more regions of the nation,” states the 2009 Quadrennial Fire Review, a joint document produced every four years by the U.S. Forest Service, National Park Service, Fish and Wildlife Service, Bureau of Indian Affairs, Bureau of Land Management, and the National Association of State Foresters. “What has already been realized in the past five years -- shorter, wetter winters and warmer, drier summers, larger amounts of total fire on the landscape, more large wildfires -- will persist and probably escalate.”
Even if we’re able to insulate ourselves from the worst effects of fires, drought, and heat waves -- for a while, at least -- we’re not likely to just “get over it,” as some might wish.
“We will have to live with the consequences of the carbon dioxide we’ve already put up there,” Trenberth explained. “While humans can hide in our air-conditioned houses, the trees and the ecosystems can’t.”
Posted By Jason Gray on May 19, 2013
The total cost of this sunlit energy was more than $300,000 a kilowatt (1,000 watts), only enough to light ten 100-watt bulbs. Less sophisticated cells intended for earthbound use now cost about $20,000 a kilowatt, still prohibitive except in remote places like offshore oil rigs and isolated radio relay stations.
But many experts predict that solar-cell costs will spiral downward to a competitive $500 a kilowatt or less in the next ten years. And considering how fast the cost of electronic hand calculators (made from similar silicon circuitry) has dropped in just three years, such hopes do not seem unreasonable.
At the test site I saw a solar array undergoing tests. From a distance the multifaceted panel of solar cells, mounted at the end of a 20-foot pole, looked like a gigantic sunflower waving on its stalk in the breeze.
Close up, I could hear the buzz of a small electric motor that kept the 12-by-20-foot array tilted toward the sun. Plastic lenses on top of each round cell concentrated the sunlight so that each disk “saw” the equivalent of ten suns. The array was capable of generating one kilowatt of electricity.
The Shah of Iran may soon become a big Spectrolab customer. He has announced plans to bring electricity by the end of this decade to the 70,000 remote villages scattered throughout his land. Each hamlet will be equipped with electric pumps for well water, medical refrigerators, even educational-TV sets receiving signals from a broadcast satellite Iran proposes to put in space.
And the answer to Iran’s near-instantaneous rural electrification lies with solar-cell arrays such as the kilowatt prototype I saw—not, ironically, with petroleum. Thus may come a true socio-technological revolution.
At Tyco, Dr. A. I. Mlaysky showed me one of the most promising experiments for mass production of solar cells. So far solar cells have been made by hand in limited quantities. Tyco has developed a precision machine that pulls a thin silicon strip in a continuous ribbon (left, above). Already the process has produced ribbon more than 75 feet long; Dr. Mlaysky expects the automated machines will eventually wind out spools of solar-cell silicon several hundred feet long. “Within three years we should know if it is possible,” he says.
The day may arrive when solar cells are delivered to a house like rolls of roofing paper, tacked on, and plugged into the wiring, making the home its own power station.
The imaginative brain of Arthur D. Little’s energy expert, Peter Glaser, has conceived what he considers the ultimate solution to the world’s energy needs—a solar power station orbiting in space.
Satellite Would Know No Night
At his office, Dr. Glaser showed me a design for such futuristic satellites. They look like gigantic butterflies, with solar-panel wings 6 by 7 1/2 miles in size. A single one of these power stations in synchronous orbit 22,300 miles above earth might provide as much as 5,000 megawatts, half the present capacity of New York City’s generating plants.
Can really smart scientists come down to earth?
Two major Chicago museums now give the general public the chance to observe working scientists in their natural habitat. The Adler Planetarium’s Space Visualization Lab launched in November, and the Field Museum’s DNA Discovery Center debuted last week. But when the last Bunsen burner is turned off for the evening, will regular folks come away comprehending the complicated projects under way? We spend a few minutes with two scientists to see if they can make us—Joe Blow English Majors—understand their project, or if they’ll be blinding us with science.
Doug Roberts, of Adler Planetarium’s Space Visualization Lab
You’re creating a World Wide Telescope. How does that work?
It’s a computer-based program that looks at real imagery from telescopes but allows you to interact with it as you would if you were just looking with your naked eye. It allows you to zoom in to really tiny, tiny scales that are way beyond what the eye can see. I’m contributing content for tours -- stories told visually within this interactive environment.
Can we use it to zoom in on Dick Cheney’s undisclosed location?
Earth data is in the system. I’d have to look [at the data] to see if his house is in there. My research is on the super-massive black hole at the center of the galaxy.
Do you ever worry about accidentally coming across a wormhole?
If I came across a wormhole it would be great, because you could travel through it or sample something from a totally different part of the universe. But the bigger thing to get worried about is getting wiped out by an asteroid.
Do you scare the shit out of kids this way?
Oh, they love it. If kids are in the mind to hear about catastrophes coming from space, we feel like it’s our obligation to give it to them.
It would be really cool to have a music soundtrack.
No. Robert Fripp from King Crimson.
Is it pretty trippy?
Oh yeah, absolutely. It’s part of what makes my tour so cool.
Kevin Feldheim, of the Field Museum’s DNA Discovery Center
You study the reproductive patterns of lemon sharks. Is that awkward for the sharks?
The good news for the sharks is we don’t actually watch them having sex. But we can infer what’s going on based on what we find in the DNA.
What’s the most interesting thing you’ve discovered about how lemon sharks do it?
DNA tells us that a female will mate with anywhere from two to four males. There could be a couple things going on. [Another study] found that male sharks help each other mate with the female. They basically take turns. We don’t know if that’s happening with lemon sharks, but it’s possible. Another confounding effect is that female sharks can actually store sperm and combine sperm from many males to fertilize her eggs.
Like a sperm cocktail?
Right. Yes. Sperm cocktail is a good way to say that.
How do you use DNA in your research at the museum?
Since DNA is the thread that connects all life, we can look at the relationship between individuals, between populations, all the way to relationships between different phyla. I’m using the exact same sort of genetic marker that forensic scientists use when they try to identify criminals. These [genetic markers] are extremely variable between individuals, and we can essentially get a genetic fingerprint of every individual we capture.
You mean each individual shark?
That’s right. I’m doing the “Who’s the shark’s daddy?” kind of thing. You could say I’m the Maury Povich of sharks.
You Will Probably Live to See 'Dangerous' Levels of Climate Change
As far as Americans are concerned, climate change is a perpetually distant and ambiguous threat. Some glaciers thousands of miles away might melt, some poor people might suffer through droughts in Africa, some polar bears might drown. Et cetera. This 'distance effect' is, partly, what drives global warming to the bottom of our priority lists time and again. It's an amorphous problem, ever-looming. That's what it seems like, anyway.
Yet two recent studies published in Nature reiterate a warning scientists have been issuing for years: if greenhouse gas emission trajectories remain as rapidly ascendent, we'll likely see "dangerous levels" of climate change by midcentury. That means a good many of us reading these very words will be alive and well by the time climate change begins to reach what are commonly referred to as "catastrophic" levels.
Reuters parses the report: (emphasis mine)
Global temperature rise could exceed "safe" levels of two degrees Celsius in some parts of the world in many of our lifetimes if greenhouse gas emissions continue to increase, two research papers published in the journal Nature warned.
That's right -- we could see a 2 degree Celsius rise in less than 20 years in some parts of the world. Like Canada. And Canada's not exactly a far-off locale, even to the most avid American exceptionalists.
"Certain levels of climate change are very likely within the lifetimes of many people living now ... unless emissions of greenhouse gases are substantially reduced in the coming decades," said a study on Sunday by academics at the English universities of Reading and Oxford, the UK's Met Office Hadley Center and the Victoria University of Wellington, New Zealand. "Large parts of Eurasia, North Africa and Canada could potentially experience individual five-year average temperatures that exceed the 2 degree Celsius threshold by 2030 -- a timescale that is not so distant,"
Hopefully, studies like this will continue to underscore the urgency by which we must act to address our still-ballooning carbon emissions. And hopefully, responsible news outlets will continue to publicize them.
Mat noted in a post about these same studies that greenhouse gas emissions must peak by 2020 if we hope to avoid the worst climate change impacts. That's not much time. And given the current political aversion to climate change mitigation in the US--the world's largest historic carbon polluter--forging a meaningful international agreement aimed at reducing emissions between now and then will require something of a diplomatic miracle.
Can Forests Survive Without Birds?
News story originally written on January 30, 2009
Take one species out of an ecosystem and you could see lots of changes in the other living things. This is what is happening in the forests on the western Pacific island of Guam where bird populations have been decimated.
There used to be birds on Guam until the brown tree snake was accidentally brought to the island in the 1940s. The snakes multiplied as they ate Guam’s birds. Today 10 of the 12 forest bird species are gone and there are about 13,000 snakes per square mile in Guam’s forests.
Haldre Rogers, a graduate student at the University of Washington, wants to know how the loss of birds is affecting Guam’s forests. She and a team of researchers have been examining whether the loss of birds is having an effect on the forest trees.
They are studying whether there are changes in how the seeds from these trees are spread through the forest. Birds move seeds around a forest. They eat the fruit from a tree, swallowing the seeds, and then fly to another tree in the forest, where they defecate the seeds. Once the seeds are out of the bird, they can grow into a new tree.
To study whether seeds in Guam’s forests were able to travel without the birds, the researchers set traps to catch falling seeds. They positioned seed traps at various distances from fruiting False Elder trees to see how far seeds get from the trees. They also set up seed traps in the forests of Saipan, a nearby island where birds are still common. Counting the number of seeds that fell into each trap, they compared how far seeds travel in bird-less Guam and bird-rich Saipan.
So far they have found that all of the seeds from the fruiting trees on Guam remained near their parent trees while many more of the seeds from the fruiting trees on Saipan were found away from their parent trees. The lack of birds appears to be having an effect.
On the bird-less island of Guam, seeds don’t get moved around. The fruits don’t fall far from their trees. And seeds that are under the tree where they formed are less likely to grow into a new fruit tree than seeds that are moved away from their parent trees.
More on Lake Erie Temperature Trends & Gardening in the Great Lakes
Thursday, I posted about observed water temperature increases at the Buffalo Water Treatment Plant site. To remove the seasonality from the data (and better discern trends), I converted all of the data since 1960 to an annual departure relative to the 1960-2012 period. So far, 2012 has averaged 2.8C above the mean for the 1960-2012 period. The current warmest year on record is 1998 at +1.8C above the mean. The annual water temperature has been increasing at 0.29C per decade since 1960.
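The seasonality-removal step described above -- converting raw annual means into departures relative to a fixed 1960-2012 baseline -- can be sketched in a few lines of Python. The temperature values below are synthetic, for illustration only.

```python
# Sketch: turn a series of annual mean temperatures into departures
# (anomalies) relative to a 1960-2012 baseline period.

def departures(annual_means, base_start=1960, base_end=2012):
    # Average only the years inside the baseline window.
    base = [t for yr, t in annual_means.items() if base_start <= yr <= base_end]
    baseline = sum(base) / len(base)
    # Departure = that year's mean minus the baseline mean.
    return {yr: round(t - baseline, 2) for yr, t in annual_means.items()}

temps = {1960: 9.1, 1998: 10.9, 2012: 11.9}  # synthetic annual means, deg C
print(departures(temps))
```

With real station data, the same dictionary would simply hold one entry per year of record.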
Perhaps more stunning is the decrease in number of days with a water temperature of 32F during that period. These readings are taken 35' below the surface, so when the surface is covered or substantially covered in ice, the water temperature is usually 32. The chart shows that the annual number of 32F readings has been decreasing at a rate of over 1 day per year! In recent years, the rate of change appears to be increasing. Both 2012 and 1998 had 0 such days, while 2002 had just one. By contrast, 1964 had 139 days, and 1971 138 days.
These graphics show an unmistakable warming of the Lake Erie climate system. As the region continues to experience warming, winter ice coverage will continue its marked decline. The decrease in ice cover will itself greatly affect the climate of surrounding areas. This will be accomplished by two means: (1) the warmer, open waters will better modify arctic airmasses moving southeast from Canada; and (2) the warmer, open waters will contribute to increased cloudiness & precipitation, which will make conditions less favorable for extreme cold.
This effect is already apparent in data from observation sites downwind of the Great Lakes. I took a look at the coldest minimum annual temperature at the Youngstown-Warren Regional Airport in northeast Ohio. As a native of the area, this should be a good site to conduct this analysis, as it is a small airport with minimal traffic and little, if any, contamination from surrounding land use over the period being considered.
In the 1960s, the average minimum temperature was -8.0F; in the 1970s, -8.6F; in the 1980s, -11.3F; in the 1990s, -3.2F; in the 2000s, -2.3F; and in the 2010s, +0.0F. As you can see, the trend is definitely up, with fewer days of extreme cold. The USDA plant hardiness zone maps illustrate this to some extent, but they are already obsolete. According to the most recent update, released just last year and based on data compiled from 1976-2005, this area is in zone 6a, with an average minimum temperature between -5 and -10F. In the last 15 years, however, the actual average minimum temperature is just -1.2F, well within zone 6b (almost nearing the threshold of 0F for zone 7!). In fact, in the 2010s, the average minimum temperature is just 0.0F. This is based on just three years; nonetheless, it does include data from two (allegedly) "bitter" cold winters (2009-10 & 2010-11). In fact, those winters were not particularly cold and would have been milder than most winters in the 60s, 70s, & 80s.
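The decade-by-decade figures quoted above come from a simple grouping of annual minimum temperatures; a sketch of that grouping, with synthetic values standing in for the station record:

```python
# Sketch: average annual minimum temperatures by decade.

def decade_averages(annual_min):
    by_decade = {}
    for year, t in annual_min.items():
        by_decade.setdefault(year // 10 * 10, []).append(t)  # 1964 -> 1960
    return {d: round(sum(v) / len(v), 1) for d, v in by_decade.items()}

# Synthetic annual minima (deg F), for illustration only:
minima = {1961: -8.0, 1965: -10.0, 1991: -3.0, 1995: -3.4}
print(decade_averages(minima))
```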
Closer to the lake and in more urbanized areas, zone 7 temperatures are already evident. Since 1998, the average minimum temperature at Cleveland Hopkins International has actually been above 0F. The official USDA plant hardiness map classifies no part of Ohio as zone 7, but more recent data shows this to be false. For gardeners, this means you can probably begin experimenting with different plants that traditionally would not grow in northern Ohio and surrounding areas. If current trends continue, much of northeast Ohio, will likely be zone 7 by the 2020s. Should warming continue to increase in rate, as projected, and ice coverage continue to decline over the Great Lakes basin, the effects may be even more substantial. By mid to late century, I wouldn't be surprised to see zones 8 or even 9+ begin to appear.
You seldom hear much about the actual effects of global warming and how global warming will manifest itself. This is how! On a high emissions path, the Great Lakes region is on a collision course with a subtropical or even tropical climate. Models project up to 6C of globally-averaged warming within the next century under a high emissions path, which would likely result in 9 or 10C of warming at mid-latitude landmasses, such as the Great Lakes region.
The Server Programming Interface (SPI) gives users the ability to run SQL queries inside user-defined C functions.
Note: The available Procedural Languages (PL) give an alternate means to build functions that can execute queries.
In fact, SPI is just a set of native interface functions to simplify access to the Parser, Planner, Optimizer and Executor. SPI also does some memory management.
To avoid misunderstanding we'll use function to mean SPI interface functions and procedure for user-defined C-functions using SPI.
Procedures which use SPI are called by the Executor. The SPI calls recursively invoke the Executor in turn to run queries. When the Executor is invoked recursively, it may itself call procedures which may make SPI calls.
Note that if during execution of a query from a procedure the transaction is aborted, then control will not be returned to your procedure. Rather, all work will be rolled back and the server will wait for the next command from the client. This will probably be changed in future versions.
A related restriction is the inability to execute BEGIN, END and ABORT (transaction control statements). This will also be changed in the future.
If successful, SPI functions return a non-negative result (either via a returned integer value or in SPI_result global variable, as described below). On error, a negative or NULL result will be returned.
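As a concrete illustration, a user-defined C procedure typically brackets its queries with SPI_connect and SPI_finish and checks for the negative error results described above. This is only a sketch: the function name and query are invented for the example, and details such as header paths, the version-1 calling convention, and the exact type of SPI_processed vary across server versions.

```c
#include "postgres.h"
#include "executor/spi.h"
#include "fmgr.h"

PG_FUNCTION_INFO_V1(count_pg_class);

/* Counts the rows of pg_class by running a query through SPI. */
Datum
count_pg_class(PG_FUNCTION_ARGS)
{
    int   ret;
    int32 nrows;

    /* Connect to the SPI manager before issuing any queries. */
    if ((ret = SPI_connect()) < 0)
        elog(ERROR, "SPI_connect returned %d", ret);

    /* Run one query; a negative result signals an error. */
    ret = SPI_exec("SELECT 1 FROM pg_class", 0);
    if (ret < 0)
        elog(ERROR, "SPI_exec returned %d", ret);

    /* Save the row count before SPI_finish releases SPI state. */
    nrows = (int32) SPI_processed;

    SPI_finish();
    PG_RETURN_INT32(nrows);
}
```

Such a function is compiled into a shared library and registered with CREATE FUNCTION; it cannot run outside a server backend.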
Stephanie Keske does computer visualization work, and is starting a graduate program this fall at Texas A&M University. She told me, “Just living on a ship, I think … you know, I try to be outside as much as I can so just being trapped on a floating hunk of metal is maybe going to be a little difficult. I don’t know: I’ve never been in one place with an inability to leave it for 2 months solid.”
At the moment, Keske’s in the northeast Pacific onboard an oceanographic research vessel. She and six other educators and artists from the US and France are working with the science team to do unprecedented outreach. Have a listen.
A smaller boat brought supplies for the CORKs to the JOIDES Resolution on some choppy seas.
You just may need a survival suit one day. But until that day comes, you look silly. Stephanie Keske (L) & Jackie Kane (C).
Drilling through the ocean crust is a massive undertaking.
The JOIDES Resolution stands tall at sea. Credit: The Consortium for Ocean Leadership.
Andrew Fisher (left) is on a drilling cruise to understand how water moves through the ocean crust, and what kind of life is thriving down there.
National Science Education Standards Grade 5 to 8
National Science Education Standards Grade 9 to 12
Ocean Literacy Principles
Send a Message
Send a note to anyone you hear in this podcast, or leave them a voicemail. Do it soon, though, since the cruise ends in late August:
Safety Class
- Very dangerous (Should only be attempted after careful training)
Required Safety Equipment
- The show performer: Glasses / Cryogenic gloves
- Audience/participants: None.
- One brass tube, 5 cm across, 40 cm high. This brass tube is closed on one end and open on the other.
- One cork with two holes in it which fits on the brass tube
- Two glass tubes which fit into the holes in the cork, 40 cm long
- Liquid nitrogen
- A stable table
Preparation: Stick the glass tubes into the cork. Make sure the glass tubes stick out about 5 cm above the cork.
Put on safety goggles and cryogenic gloves.
Put some water onto the table, and put the tube in this - with the closed side down. Pour the liquid nitrogen into the tube. Now put the cork with the glass tubes down into the brass tube. This will cause a massive fountain of liquid nitrogen to squirt out of the glass tubes.
The waste product (liquid nitrogen) will evaporate into the air.
- Be sure to always wear gloves and glasses when doing this experiment
- Be sure to NEVER do this experiment twice in a row while the equipment is still cold. If the equipment is still cold, the smaller temperature difference will cause the liquid nitrogen to evaporate more slowly. This means that you can get liquid nitrogen sprayed all over you!
- Keep your head below the tube when doing this experiment. Liquid nitrogen which lands in your neck is less dangerous than liquid nitrogen which lands in your face
Other Factors
As said above, the higher the temperature difference between the equipment and the liquid nitrogen, the larger the effect. Therefore, it is possible to increase the effect of this experiment by heating the equipment with a blow drier beforehand.
Show notes
It is very important to have two people for this experiment: one pours the liquid nitrogen into the fountain tube, while the other puts the cork on. If you try to do both by yourself, so much liquid nitrogen will have evaporated that the effect will be much smaller.
What's taking place
The liquid nitrogen evaporates, building up pressure in the tube. Because the glass tubes reach below the surface of the liquid nitrogen, the liquid is pushed out instead of the gas.
In the real world
Espresso machines work like this: steam builds up in a chamber, and the boiling liquid is pushed through the coffee powder.
This opensource material is from The Wiki Science Show Book in English. There is an overview of available guides. It is based on the template for new guides. Related media files can be found on Wikimedia Commons ScienceShow.
- Links to this guide in other languages: English.
Do you still remember the Interpreter example? Whether what you type is a number or an expression, it can calculate the result just by the call “new GeneralExpression(inputText.text).interpret()”.
In web programming, we often use regular expressions to validate e-mail addresses or phone numbers. A regular expression is a powerful tool for validating a field with a specific format. However, interpreting a regular expression is not an easy job.
In the GoF’s design patterns, there is a corresponding pattern named Interpreter.
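The core of the Interpreter pattern is that each rule of a grammar becomes a class exposing an interpret() method, and an expression is a tree of these objects. A minimal sketch, written here in Python rather than the post's ActionScript (the class names are illustrative):

```python
# Interpreter-pattern sketch: terminal and non-terminal expressions.

class Number:                      # terminal expression
    def __init__(self, value):
        self.value = value
    def interpret(self):
        return self.value

class Add:                         # non-terminal expression
    def __init__(self, left, right):
        self.left, self.right = left, right
    def interpret(self):
        return self.left.interpret() + self.right.interpret()

# "1 + (2 + 3)" as an expression tree:
expr = Add(Number(1), Add(Number(2), Number(3)))
print(expr.interpret())  # 6
```

A real interpreter would also need a parser to build this tree from the input text, which is where most of the work lies.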
Have you ever used an HTTP proxy or some other proxy? When you’re in a relatively isolated environment, such as the LAN in your company, maybe you’ll need one. Actually, when I was an intern at an IT company, I always used an HTTP proxy to log in to MSN and surf the internet. While using MSN or surfing the net, I couldn’t feel the existence of the proxy. And this is the role of a proxy. It can be expressed as follows.
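The key point is that the proxy exposes the same interface as the real subject and forwards requests to it, so the client cannot tell which one it is talking to. A Python sketch (class names invented for illustration):

```python
# Proxy-pattern sketch: the proxy and the real service share an interface.

class WebService:                       # the real subject
    def request(self, url):
        return "response from " + url

class HttpProxy:                        # same interface, forwards the call
    def __init__(self, real_service):
        self._real = real_service
    def request(self, url):
        # A real proxy could log, cache, or filter here before forwarding.
        return self._real.request(url)

client = HttpProxy(WebService())
print(client.request("example.com"))    # response from example.com
```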
Yesterday, when I was on my way home, I suddenly met an old friend. I haven’t met him since I began to write these articles. We stop at a little cafe, and began to talk.
“I’ve just changed my job”, he said, “and now I join the Orange”.
“WOW, Orange, you mean the biggest MP3 manufacturer company,” I answered.
“Yeah, and I joined the new product team; we want to surprise the whole world with our new product,” he said proudly.
……. (The rest of our conversation is very boring, so, let’s stop here)
Have you ever bought a computer online, maybe from Dell? On Dell’s website, you just need to follow its process to order the accessories you need; then you get your own computer configuration. Of course, you can’t get the real computer until you pay for it.
Here, you direct the producer to produce your own computer through the Dell website. OK, there are three roles here: you, the Dell website, and the real producer.
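Those three roles map onto the Builder pattern: you act as the director, the builder collects the configuration step by step, and build() hands back the finished product. A rough Python sketch (names invented for illustration):

```python
# Builder-pattern sketch: assemble a product step by step.

class ComputerBuilder:
    def __init__(self):
        self.parts = []
    def add_cpu(self, cpu):
        self.parts.append("cpu:" + cpu)
        return self                 # returning self allows chained calls
    def add_ram(self, gb):
        self.parts.append("ram:%dGB" % gb)
        return self
    def build(self):
        return ", ".join(self.parts)

# "You" as the director, choosing accessories in order:
order = ComputerBuilder().add_cpu("i7").add_ram(16).build()
print(order)  # cpu:i7, ram:16GB
```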
When I wanted to write about the Prototype pattern, I first looked it up in the ActionScript 3.0 manual. I wanted to find out whether there is a clone method in the Object class. If so, I would use it as an example. But, unfortunately, I couldn’t find this method in the Object class. Then I found the prototype attribute, but the explanation confused me. So, I decided to show you this pattern in my own way, without using the Object class.
Firstly, you need to know the intent of this pattern. You can read the following text.
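The intent is to create new objects by cloning a configured prototypical instance instead of constructing them from scratch. A Python sketch of that idea, with copy.deepcopy standing in for the clone() method that, as noted above, ActionScript's Object class lacks:

```python
# Prototype-pattern sketch: produce objects by cloning a prototype.
import copy

class Shape:
    def __init__(self, color, points):
        self.color = color
        self.points = points
    def clone(self):
        # Deep copy so the clone shares no mutable state with the original.
        return copy.deepcopy(self)

proto = Shape("red", [(0, 0), (1, 1)])
twin = proto.clone()
twin.points.append((2, 2))   # modifying the clone leaves the prototype intact
print(len(proto.points), len(twin.points))  # 2 3
```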
There is a famous saying in computer science, “Program = Data Structure + Algorithm”. It figures out the importance of how to organize the data and how to deal with the data.
Maybe, in your applications, you care more about how to show the data, because that has much to do with the user experience. However, how to organize and process the data is just as important, or even more so, because it has much to do with performance.
Do you like playing cards? If you have ever played, you may have noticed that everyone has their own way of arranging the cards. And in most cases, people will put the cards in order, maybe from the biggest one to the smallest one. Eh, this is a way of sorting.
We can write down the following code to mimic this action.
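A minimal sketch of that card-arranging idea, written here in Python rather than the post's ActionScript, is insertion sort with the biggest card first:

```python
# Insertion-sort sketch: slide each new card into place in a sorted hand.

def sort_cards(cards):
    hand = []
    for card in cards:
        i = 0
        while i < len(hand) and hand[i] > card:   # biggest card first
            i += 1
        hand.insert(i, card)                      # insert at its position
    return hand

print(sort_cards([7, 2, 10, 5]))  # [10, 7, 5, 2]
```

This mirrors how most people actually sort a hand: pick up one card at a time and slot it where it belongs.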
Just to add a bit more to Stuart’s answer: Mercury has an axial tilt of only 2 degrees, and no atmosphere to speak of. It is the only planet for which both are true. What do those two things lead to?
- If an area at one of the poles has an incline of more than 2 degrees in all directions, it will remain in permanent shadow. If the area is near, but not exactly at, the pole, add the difference between its latitude and 90 degrees to that angle.
- There is no atmosphere, and thus only direct sunlight, and indirect heating through ground heat transport, could possibly heat an area on the planet.
So, putting both of these together, there is a reasonable chance that, if there were some kind of valley near a pole, it could be very cold.
So, an area that would be "balmy" would need to have some method of combining shadow and light in the right proportion. The ideal thing to do would be to have some sort of an atmosphere. It is possible that a proper habitat could be constructed that would allow for that. But, it would have to account for the slight tilt in the axis, and somehow rotate to different angles depending on where the sun is pointed at that time.
Bottom line is, a large enough habitat could be constructed where it would naturally be heated to 80 degrees inside, but it might take some considerable effort.
Now, it should be noted that there would be some spot of shade, and another of sunlight. The ground would have some distribution between them, which would allow for temperature contours. I suspect that they would be fairly small, on the order of a few meters or at most tens of meters, from hot to cold. So that narrow band would probably exist, and would be about the size of your shoes.
Nanotubes wreak havoc with heat
Mar 2, 2009
Physicists in the US have discovered that electrons flowing in carbon nanotube-based circuits dissipate energy in very different ways from electrons flowing through devices made from conventional semiconductors such as silicon. The findings reveal processes of heat conduction that were never previously thought important and could influence the types of materials chosen for the next generation of electronic devices in order to prevent them from overheating.
In conventional semiconductor devices, different layers of material are always joined by chemical bonds. This provides continuity for heat flowing through such devices, making them relatively easy to cool. Many researchers believe that future generations of electronic devices could be made from carbon nanotubes — tubes with walls just one atom thick — which could enable much smaller feature sizes and hence much better computing performance. However, nanotubes do not bond chemically to adjoining structures, which suggested that it should be very difficult to remove heat from such devices.
Bonding not needed
But now Phaedon Avouris and colleagues at the IBM Thomas J Watson Research Center in New York and researchers at Duke University in North Carolina have found that electrons in nanotubes can dissipate energy straight to an adjacent substrate even though it is not chemically bonded.
The team has also found that current-carrying electrons in nanotube devices do not undergo the normal process of “thermalization”, in which a material’s thermal vibrations reach statistical equilibrium (Nature Nanotechnology doi:10.1038/nnano.2009.22).
Avouris and team studied a carbon nanotube on a silicon-dioxide substrate, an arrangement that acts like the active channel of a field-effect transistor. They have used a variety of techniques including Raman scattering, in which the energy of scattered light reveals the different temperatures or “modes” of vibration of the nanotube lattice.
Normally when a current passes through a semiconductor the electrons bump into nearby atoms, which begin to vibrate in a certain mode. This mode then gradually transfers its energy to atoms at lower temperature modes until, at thermalization, all atoms are vibrating in statistical equilibrium.
The researchers have shown that, in nanotubes, thermalization does not take place; the atoms continue to vibrate in the same mode and statistical equilibrium is never reached.
Just as surprising, however, is that the lack of chemical bonding to the substrate does not inhibit heat conduction. The team has shown that when the electrons collide with atoms in the silicon dioxide, which is a polar material, the subsequent shift in position of the atoms generates an electric field that extends beyond the substrate and into the nanotube. When the nanotube’s electrons interact with this field they are able to dissipate energy straight to the substrate.
Scientists were aware of this process of remote heat conduction before, but had never considered it important because they had focused on 2D and 3D materials in which the effect is much weaker. But Avouris told physicsworld.com that the other unusual mechanism — the absence of thermalization — could exist in other materials, and that it may have been overlooked because researchers have not had the right observational tools.
“Understanding this dissipation mechanism in detail is important, especially if nanotubes are someday employed in electronic circuits,” says Adrian Bachtold, a nanoelectronics researcher at the Autonomous University of Barcelona. “Indeed, a better understanding is the first step to be able to engineer the dissipation pathway. For instance, it may be possible to find tricks to enhance the current in the ‘on’ state of the transistor, [which would be] good for rapid circuits.”
The US team is now investigating similar effects in graphene, a two-dimensional “chickenwire” lattice of carbon atoms rather like an unrolled nanotube. Avouris says his team knows that the same mechanisms occur in graphene, but expects some “curious effects” due to the material’s larger footprint on the substrate.
About the author
Jon Cartwright is a freelance journalist based in Bristol, UK | <urn:uuid:baf2b1fe-2fc6-4cba-8527-1be8da967d20> | 3.609375 | 860 | Truncated | Science & Tech. | 21.485845 |
This graceful arc traces an Atlas V rocket climbing through Thursday's early morning skies over Cape Canaveral Air Force Station in Florida, USA. Snug inside the rocket's Centaur upper stage were the Radiation Belt Storm Probes (RBSP), now in separate orbits within planet Earth's Van Allen radiation belts. Reflected in the Turn Basin from a vantage point about 3 miles from Space Launch Complex 41, the scene was captured in a composite of two exposures. One highlights the dramatic play of launch pad lighting, clouds, and sky. A subsequent 3-minute-long exposure records the rocket's fiery trail. While most spacecraft try to avoid the radiation belts, named for their discoverer James Van Allen, RBSP's mission will be to explore their dynamic and harsh conditions.
Humphry Davy experimented on combustion, including measurements of flame temperatures, investigations of the effect on flames of rarefied gases, and dilution with various gases; he also discovered catalytic combustion: the oxidation of combustibles on a catalytic surface, accompanied by the release of heat but without flame.
Other mussels have inspired synthetic polymers that have been made into versatile adhesives and coatings, explained J. Herbert Waite, senior author and a professor in UC Santa Barbara's Marine Science Institute. They all rely on proteins that contain an amino acid called "Dopa" (identical to the Dopa used to treat Parkinson's disease) and have been studied extensively by Waite and his research group.
Waite learned that the green mussel, Perna viridis, relies on an alternative to the common "Dopa" chemistry, based on an elaborate modification of the amino acid tryptophan in the green mussel's adhesive protein. Its adhesive chemistry is much more complicated than that of mussels previously studied. It took Waite and his team six years to unravel the story.
The green mussel's sticky adhesiveness has the potential to help form strong bonds in wet surfaces, including teeth and bones. In addition, the adhesive could be used to repair ships that have developed cracks while at sea and must be repaired in a wet environment.
Waite was first alerted to the complicated adhesive of the green mussel when a Japanese group contacted him to comment on their research on the animal. He then learned of an infestation of green mussels in Tampa Bay, Fla.
On further study, he learned that the aggressive green mussel had moved from India's Sea of Bengal to many locations around the world, including the coasts of Japan, Australia, Korea, China, the Philippines, and Indonesia. Additionally, many Pacific Islands and the coasts of some countries surrounding the Gulf of Mexico have been invaded. "People are interested in how they invade, adapt, and spread so easily," said Waite.
Waite asked the U.S. Geological Survey and Florida Sea Grant to send him frozen specimens from Tampa Bay, as this is the only way that California would allow the green mussel to be shipped into the state. The feet were severed from about 100 freshly shucked mussels. After thawing, they were placed in a tissue grinder and then centrifuged for study.
"One aspect that is kind of scary is that the green mussel is more successful than other kinds of mussels at living in polluted water," said Waite. Coastal power plants that flush warm seawater into the ocean provide an ideal environment for the mussels. "Once they get a foothold, they stay."
The other authors on the paper are Hua Zhao of the Institute of Chemical and Engineering Sciences, part of the contract research corporation A*STAR in Singapore, Jason Sagert and Dong Soo Hwang of the Marine Science Institute at UCSB.
Issued: 8/27/09; Corrected: 9/15/09 | <urn:uuid:05d11987-5775-46a7-a1e5-83b9ca5e3589> | 3.515625 | 561 | Academic Writing | Science & Tech. | 42.730012 |
The Moon Is Thought To Have Formed Closer To The Earth Than It Is Now
Mon Nov 14 03:29:45 GMT 2011 by Gary Thomas
IMHO, they were referring to radial velocity, not orbital speed, as the two vary inversely. An object boosted to a faster speed climbs farther from the gravitational center, where it then moves more slowly and takes longer to complete an orbit.
Ever since physicists invented particle accelerators, nearly 80 years ago, they have used them for such exotic tasks as splitting atoms, transmuting elements, producing antimatter and creating particles not previously observed in nature. With luck, though, they could soon undertake a challenge that will make those achievements seem almost pedestrian. Accelerators may produce the most profoundly mysterious objects in the universe: black holes.
When one thinks of black holes, one usually envisions massive monsters that can swallow spaceships, or even stars, whole. But the holes that might be produced at the highest-energy accelerators--perhaps as early as 2007, when the Large Hadron Collider (LHC) at CERN near Geneva starts up--are distant cousins of such astrophysical behemoths. They would be microscopic, comparable in size to elementary particles. They would not rip apart stars, reign over galaxies or pose a threat to our planet, but in some respects their properties should be even more dramatic. Because of quantum effects, they would evaporate shortly after they formed, lighting up the particle detectors like Christmas trees. In so doing, they could give clues about how space-time is woven together and whether it has unseen higher dimensions. | <urn:uuid:9bcbbbd4-1316-4258-96ee-3fa5132fe11a> | 3.71875 | 242 | Knowledge Article | Science & Tech. | 29.960256 |
|Image Credit: Swinburne Astronomy Productions, Swinburne University of Technology|
First, let's go over what the astronomers observed. The team, headed by Professor Matthew Bailes of Swinburne University of Technology in Australia, was looking at a millisecond pulsar. Pulsars are the remains of very massive stars that exploded as supernovae at the end of their lives. Pulsars typically contain about 1.3 times the mass of our sun squeezed into a sphere only a dozen miles across. This is so dense that ordinary atoms cannot exist, and most of the protons and electrons that made up the atoms in the original star merge to form neutrons. We therefore call these very dense remains of massive stars neutron stars.
Many neutron stars have very strong magnetic fields. These fields cause electrons and other particles in space to spiral around at speeds near the speed of light, a process that creates light and other radiation. As the star spins, the poles of the magnet sweep around. Every time a magnetic pole on the star sweeps across our line of sight, we see a flash, or pulse, of light. Neutron stars that are seen to flash are therefore called pulsars.
Just like an ice skater spins faster when pulling his or her arms in close, a star that is squeezed down to a tiny fraction of its original size spins faster and faster. New-born neutron stars often spin several times a second! The pulsar in the Crab Nebula (the remains of a star that exploded in 1054) spins 30 times a second. Over time, as the pulsar's magnetic field interacts with gases in space, the pulsar slows its spinning. We can measure the Crab Pulsar slowing down, and we see pulsars that spin much more slowly than the Crab pulsar.
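The ice-skater effect is just conservation of angular momentum, and a two-line calculation shows how dramatic it is. All of the input numbers below are illustrative assumptions, not measurements: a roughly Earth-sized stellar core spinning once every three hours, collapsing to a 12 km neutron star.

```python
# Illustration of the ice-skater analogy: angular momentum L = I*omega
# is conserved, and I is proportional to M*R^2 for a uniform sphere,
# so the rotation period scales as R^2. Inputs are illustrative only.
R_CORE = 6.4e6        # m, assumed roughly Earth-sized core before collapse
R_NS   = 1.2e4        # m, assumed neutron star radius
P_CORE = 3 * 3600.0   # s, assumed pre-collapse rotation period

P_NS = P_CORE * (R_NS / R_CORE) ** 2
print(P_NS)           # ~0.04 s: tens of rotations per second
```

Shrinking the radius by a factor of ~500 shortens the spin period by a factor of ~300,000, which is why even a leisurely rotating core ends up as a rapidly spinning neutron star.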
However, there is a class of very old pulsars which, though they should be spinning very slowly, are spinning at hundreds of times a second! Why are these pulsars spinning so fast? We think it is because these pulsars have companion stars. As the companion stars age, they swell up and get big enough that the pulsar's gravity can pull in the companion star's gas. As this stellar cannibalism continues, the pulsar's spin gets faster (because material is coming from far away and being moved very close to the center of the pulsar -- exactly what caused the pulsar to spin fast in the first place). Eventually, the pulsar will consume or blow away most of the companion star's mass, and we expect a small white dwarf to be left behind.
This idea of how to get pulsars spinning at hundreds of times a second predicts that these fast-spinning pulsars should have companions, and most of those should be white dwarfs that have been whittled away to almost nothing. These companions should be detectable. Although they are small in both mass and radius, they still will have gravity that will pull on the pulsar. This causes the pulsar to move in a small orbit. This further means that the pulsar will be closer to Earth at some times, and further away at other times. When the pulsar is closer to Earth, its light reaches us a few seconds earlier than when it is further away. And since a pulsar's spinning is often more reliable and constant than atomic clocks, it is easy to see if pulses are arriving earlier, then later, then earlier again, then later again, as the pulsar traces out its tiny orbit.
Which brings us to yesterday's press release. Professor Bailes's group was studying one of these fast-spinning pulsars. They were looking for evidence of the companion star that sped up the pulsar's spin by looking for the changing arrival times of the pulsar's radio flashes. And they indeed saw that change. By measuring how early and late the pulses arrived and using some basic laws of gravity and orbits, the team was able to estimate how massive the companion is. The answer: it could be only as massive as our planet Jupiter!
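The "basic laws of gravity and orbits" step can be sketched numerically with the binary mass function. The inputs below are approximate published timing parameters for this system (orbital period ~2.2 hours, projected orbit size ~1.8 light-milliseconds), and the 1.4-solar-mass pulsar is a conventional assumption; treat all numbers as illustrative.

```python
import math

# Pulse-arrival delays give the orbital period P_b and the projected
# orbit size a1*sin(i) in light-seconds; Kepler's laws then give the
# binary "mass function":
#   f = 4*pi^2 * (a1*sin i)^3 / (G * P_b^2) = (M2*sin i)^3 / (M1 + M2)^2
G, C = 6.674e-11, 2.998e8          # SI units
M_SUN, M_JUP = 1.989e30, 1.898e27  # kg

P_b = 2.177 * 3600.0               # orbital period, s (approximate)
a1 = 1.819e-3 * C                  # projected semi-major axis, m (approximate)

f = 4 * math.pi**2 * a1**3 / (G * P_b**2)   # mass function, kg

# Minimum companion mass (edge-on orbit, sin i = 1): iterate
# M2^3 = f * (M1 + M2)^2 starting from the M2 << M1 limit.
M1 = 1.4 * M_SUN                   # assumed pulsar mass
M2 = (f * M1**2) ** (1.0 / 3.0)
for _ in range(20):
    M2 = (f * (M1 + M2) ** 2) ** (1.0 / 3.0)

print(f / M_SUN)    # ~8e-10 solar masses
print(M2 / M_JUP)   # ~1.2: "about the mass of Jupiter"
```

With these inputs the minimum companion mass comes out at roughly 1.2 Jupiter masses, which is the sense in which the companion "could be only as massive as our planet Jupiter."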
Moreover, this pulsar companion orbits the pulsar every 2.2 hours, which means that it is only about half a million miles away from the pulsar. If we replaced our sun by the pulsar and its companion, the companion's orbit would be about the same size as the sun. This is shown in the artist's impression of the system at the top of this post, where you can see the pulsar (with squiggly lines representing the light coming from its magnetic poles), the companion, its orbit (the dotted oval), and our sun's size on the same scale (the golden circle).
If a companion is that close to a pulsar, the pulsar's gravity will try to steal material from the companion. If the companion is small enough, its gravity will be strong enough to hold on to its material. If it is too big, its gravity will be weak and material will get pulled onto the pulsar. We don't see any signs of any transfer of matter, so it must be pretty small. Remember that this companion could have about the same mass as the planet Jupiter. If we were to put Jupiter in the same orbit as this object, Jupiter's atmosphere would get pulled onto the pulsar. So the pulsar companion has to be smaller than Jupiter. Much smaller, in fact.
Some more calculations show that the largest the companion can be without transferring matter onto the pulsar is about 50,000 miles across. To still have the same mass as Jupiter, it would need to be twice as dense as lead, and even denser than platinum. And remember, this is the minimum density; if the companion is smaller it could be even denser.
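The "more calculations" can be sketched too. A companion that just barely avoids spilling matter onto the pulsar fills its Roche lobe, and a standard result is that the mean density of a lobe-filling companion depends almost only on the orbital period. This sketch uses Paczynski's approximate lobe-radius coefficient (0.462), valid for a low-mass companion; treat it as an approximation.

```python
import math

# Minimum mean density of a companion that does NOT overflow its Roche
# lobe. Combining Kepler's third law with R_lobe ~ 0.462*a*(q/(1+q))^(1/3)
# makes the masses cancel, leaving only the orbital period:
#   rho_min ~ 3*pi / (G * P^2 * 0.462^3)
G = 6.674e-11            # m^3 kg^-1 s^-2
P = 2.2 * 3600.0         # orbital period from the post, s

rho_min = 3 * math.pi / (G * P**2 * 0.462**3)   # kg/m^3
print(rho_min / 1000.0)  # ~23 g/cm^3: about twice the density of lead (11.3)
```

For a 2.2-hour orbit this gives roughly 23 g/cm^3, which is where the "twice as dense as lead, and even denser than platinum" figure comes from.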
So you can see why many people have called this companion a planet. It has about the same mass as Jupiter, it must be smaller in radius than Jupiter, and it is denser than our densest metals. But remember that, whatever this companion is, it had to transfer enough stuff onto the pulsar to make the pulsar spin hundreds of times a second. The amount of matter it takes to do that is far, far more matter than a planet that got too close would have. This companion used to be a star.
As I said above, fast-spinning pulsars are thought to form when the companion star is in the process of dying, because then it can swell up enough for the pulsar's gravity to begin pulling matter over. Most stars in the process of dying end up forming white dwarfs. White dwarfs are very dense (squeezing the sun into something the size of the Earth), and can be made out of either helium or carbon. Carbon white dwarfs are the most common kind. But white dwarfs tend to be about half the mass of the sun or more, not the mass of Jupiter. However, if the pulsar swallowed enough material and nearly swallowed the entire star, it is possible to whittle what was once a star like the sun down to a pile of ashes no bigger than Jupiter. And this object would be far more dense than platinum -- about 40,000 times more dense. So it seems reasonable to guess that the companion to the pulsar is a white dwarf made out of carbon. There are other, more complex arguments, too, and it is far from certain that this white dwarf is not made out of helium. But we cannot see the white dwarf directly, so we can't confirm what it is made out of.
Let's assume we indeed have a white dwarf made out of carbon that is 40,000 times the density of lead but only the mass of Jupiter. What would it be like?
When a white dwarf is first formed, it is hot. Hundreds of millions of degrees hot: this used to be the central fusion reactor of a star. Over time, it will cool off, and in general less massive white dwarfs cool off the fastest. So a Jupiter-mass white dwarf, only 1/500th the mass of a normal white dwarf, should cool off relatively quickly, astronomically speaking.
As a white dwarf gets cooler and cooler, it can begin to crystallize (think of it "freezing"). On earth, one form of crystallized carbon is what we call a diamond. So, we astronomers often say that crystallized white dwarfs are Earth-sized diamonds. In reality, it is not a diamond. Crystallized white dwarfs are a hundred thousand times denser than diamond, and the detailed atomic structure is very different from diamond, too. A ring with a gemstone of white dwarf "diamond" would weigh several hundred pounds. So, the "diamond" term is an analogy, used to explain a white dwarf in a way that paints a picture most people can understand. But is it really a diamond? No.
So, is it fair to call the whittled down, crystallized white dwarf that used to be a full-sized star but now is about the mass of Jupiter a "diamond planet". I think not. The term "diamond planet", to me, suggests something that was always planet sized but made out of diamond. It is catchy, though.
So, DeBeers doesn't need to invest in a rocket to protect their diamond monopoly, and there is no "diamond planet". Too bad. But reality, that we are seeing a whittled down chunk of superdense material, the mass of Jupiter but smaller in diameter, and that used to be a full-blown star perhaps similar to the sun, is just as cool as a diamond planet. Perhaps even cooler.
26 Aug 2011 11:37am CDT: Edited to correct broken link in citation.
M. Bailes, S. D. Bates, V. Bhalerao, N. D. R. Bhat, M. Burgay, S. Burke-Spolaor, N. D’Amico, S. Johnston, M. J. Keith, M. Kramer, S. R. Kulkarni, L. Levin, A. G. Lyne, S. Milia, A. Possenti, L. Spitler, B. Stappers, & W. van Straten (2011). Transformation of a Star into a Planet in a Millisecond Pulsar Binary. Science Express, doi:10.1126/science.1208890
The Sun is the nearest star to us. It is a yellow dwarf, a medium-sized star.
(Why, then, do we see it as the biggest star? See the questions below.)
Because of its size and light, some ancient peoples thought it was a god and offered gifts to it.
An old Chinese legend says that a huge dragon eats the Sun from time to time (an early explanation of eclipses).
The Sun rotates on its axis and revolves around the galaxy. The Sun was born more than 4.6 billion years ago.
Some people wonder: "How can the Sun send all this light and heat to us?"
The answer is: huge numbers of nuclear fusion reactions happen in the core of the Sun.
The astronomical symbol for the Sun is ☉ (a circle with a dot at its center).
The Sun takes 250 million Earth years to complete one revolution around the galaxy!
Its equatorial diameter is 1.39 million km.
Its surface gravity is about 28 times Earth's.
Heat generated in the core of the Sun transfers to its surface, and then out into space in the form of light and heat.
The energy takes on the order of a million years to work its way from the core to the surface, but only about 8.3 minutes to travel from the surface to Earth!
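The surface-to-Earth figure is easy to check: it is just the average Earth-Sun distance (one astronomical unit) divided by the speed of light.

```python
# Quick check of the light-travel figure above.
AU = 1.496e11            # average Earth-Sun distance, metres
C  = 2.998e8             # speed of light, m/s

seconds = AU / C
print(seconds / 60.0)    # ~8.3 minutes (about 8 min 19 s)
```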
The Sun gives off many kinds of radiation other than light and heat. It also emits radio waves, ultraviolet rays, and X-rays. The Earth's atmosphere protects us from the harmful effects of some of the Sun's rays, such as X-rays, but nowadays the atmosphere is starting to decay, which causes a rise in Earth's temperature. Ultraviolet rays can cause cancer.
The Sun rotates on its axis, but not all parts rotate at the same speed. This is called differential rotation.
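As a rough illustration of differential rotation, here is one commonly used empirical fit for the Sun's surface rotation rate as a function of latitude. The coefficients are approximately those of Snodgrass & Ulrich and should be treated as indicative, not exact.

```python
import math

# Approximate empirical fit for the Sun's surface angular rotation rate
# (degrees per day) as a function of latitude. Coefficients are roughly
# those of Snodgrass & Ulrich (1990); treat them as indicative.
def omega_deg_per_day(latitude_deg):
    s2 = math.sin(math.radians(latitude_deg)) ** 2
    return 14.713 - 2.396 * s2 - 1.787 * s2 * s2

print(360.0 / omega_deg_per_day(0))    # ~24.5 days at the equator
print(360.0 / omega_deg_per_day(60))   # ~30 days at 60 degrees latitude
```

The equator completes a rotation in roughly 24-25 days, while high latitudes take noticeably longer: exactly the "not all parts rotate at the same speed" effect described above.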
The Sun's structure:
The Sun has several layers: the core, the radiation zone, the convection zone, and the photosphere (which is the surface of the Sun). In addition, there are two layers of gas above the photosphere called the chromosphere and the corona.

Events which occur on the Sun include sunspots, solar flares, solar wind, and solar prominences. Sunspots are magnetic storms on the photosphere which appear as dark areas. Sunspots regularly appear and disappear in eleven-year cycles. Solar flares are spectacular discharges of magnetic energy from the corona. These discharges send streams of protons and electrons outward into space. Solar flares can interrupt the communications network here on Earth. Solar winds are the result of gas expansion in the corona. This expansion leads to ion formation. These ions are hurled outward from the corona at over 500 kilometers per second. Solar prominences are storms of gas which erupt from the surface in the form of columns which either shoot outward into space or twist and loop back to the Sun's surface.
Adapted from the NASA CD [STARCHILD: THE SUN] "CD\docs\starch00\solars00\sun.htm"
[Outside link: this site is not responsible for the content]
Q: Why do we see the Sun as bigger than other stars when it is only a medium star?
A: Because it is much nearer to us, so it appears to be larger.

Q: What is nuclear fusion?
A: It is the same process that powers hydrogen bombs (which are stronger than fission bombs). It happens when hydrogen nuclei fuse together (in the Sun, four hydrogen nuclei ultimately become one helium nucleus), and the small amount of mass that disappears is changed into energy.
It produces a huge amount of energy and very high temperatures (about 15 million degrees in the Sun's core).
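The mass-to-energy bookkeeping behind this answer can be sketched from standard atomic masses: four hydrogen atoms fuse (net) into one helium-4 atom, and about 0.7% of the original mass is released as energy via E = mc^2.

```python
# Energy released when four hydrogen atoms fuse (net) into one helium-4
# atom, using standard atomic masses and E = m*c^2.
U_TO_KG  = 1.6605e-27        # atomic mass unit, kg
C        = 2.998e8           # speed of light, m/s
J_TO_MEV = 1.0 / 1.602e-13   # joules to MeV

m_hydrogen = 1.007825        # atomic mass units
m_helium4  = 4.002602

delta_m  = 4 * m_hydrogen - m_helium4    # ~0.0287 u lost
energy_J = delta_m * U_TO_KG * C**2

print(delta_m / (4 * m_hydrogen))        # ~0.007: ~0.7% of the mass converted
print(energy_J * J_TO_MEV)               # ~26.7 MeV per helium nucleus formed
```

Roughly 26.7 MeV per helium nucleus may sound tiny, but multiplied over the enormous number of reactions in the core, it powers the Sun's entire output.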
This article is a discussion of atomic physics.
Atomic physics is the study of atoms as systems of electrons and an atomic nucleus. Its main focus is the manner in which electrons are arranged around the atom's nucleus and the ways those electron arrangements change. This includes the energy states of an atom and its interactions with outside electromagnetic fields and particles.
Bohr model of the atom (Photo credit: Wikipedia)
This field of physics has produced many strange and interesting results, including a major contribution to quantum mechanics. One of the biggest surprises comes from field-emission electron microscopy, which has produced images of actual electron orbitals.
The quantum atomic model is the most successful model of the atom ever devised, and it is currently accepted because it explains all known atomic phenomena. As a result, atomic physics is one of the major success stories of quantum mechanics.
♦ Keywords supported by the enumeration statement
♦ Defines an enumeration data type which contains one or more discrete values. The list of allowable values is entered within the braces, and each must conform to standard Aztec identifier naming rules. There must be at least one value entered in the list.
♦ An individual enumeration value is accessed using “EnumName.value1”.
♦ If the “expose” keyword is used, an individual enumeration value can be accessed using only the name of the value. Each value name must be unique among all Aztec keywords and global variables (which are visible based on the default space list).
♦ If "expose" is not used, the name only needs to be unique within the enumeration.
♦ When a new enumeration class is created, the system automatically creates a set of instance, global and compiler methods to operate on that enumeration. A set of class constants specific to the new enumeration is created as well.
♦ Every new enumeration is a primitive data type which is derived from 'Base'. The hierarchy map shows "Enumeration" as its base class in italics to reflect the automatic method and constant creation, but there is no actual class named "Enumeration".
♦ The ‘bool’ data type is an example of an enumeration with the “expose” keyword (with values ‘true’ and ‘false’). The individual values can be accessed using just “false” and “true”, but can also be accessed using “bool.false” and “bool.true”.
♦ Some example code snippets showing Aztec enumerations | <urn:uuid:085ff328-6f05-45bf-9ccd-eaad09ffaeff> | 3.328125 | 356 | Documentation | Software Dev. | 32.526645 |
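A minimal sketch of what declarations following these rules might look like. This is hypothetical syntax inferred only from the rules above — the exact Aztec declaration keywords and punctuation may differ, and `Color` and `Status` are made-up names:

```
// Hypothetical sketch only; actual Aztec syntax may differ.
enum Color {Red, Green, Blue}       // values accessed as Color.Red, etc.
expose enum Status {Idle, Busy}     // 'expose': values usable as just Idle
```

Per the rules above, `expose` would require `Idle` and `Busy` to be unique among all Aztec keywords and visible global variables, while `Red`, `Green`, and `Blue` need only be unique within `Color`.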
Scientists using a Mars-orbiting camera designed and operated at Arizona State University's Space Flight Facility have discovered the first evidence for deposits of chloride minerals, salts, in numerous places on Mars. These deposits, say the scientists, show where water was once abundant and may also provide evidence for the existence of former life on Mars. Salt deposits point to a lot of water, which could potentially remain standing in pools as it evaporates. For life, it's all about a habitat that endures for some time.
Over a long period of time, water flowing into a basin can concentrate the organic materials that could be well preserved in the salt. On Earth, salt has proven remarkably good at preserving organic material. For example, bacteria have been revived in the laboratory after being preserved in salt deposits for millions of years.
Bright blue marks a deposit of salt minerals in the southern highlands of Mars in this THEMIS false-color image, which highlights differences in mineral composition. Using THEMIS, researchers have found more than 200 such features. These deposits typically lie within topographic depressions and suggest that Mars was much wetter long ago.
Developed at Arizona State University, THEMIS is a multi-wavelength camera that takes images in five visual bands and 10 infrared ones. At infrared wavelengths, the smallest details THEMIS can see on the martian surface are 330 feet (100 meters) wide.
The scientists found about 200 individual places in the martian southern hemisphere that show spectral characteristics consistent with chloride minerals. These salt deposits occur in the middle to low latitudes all around the planet within ancient, heavily cratered terrain.
When plotted on a global map of Mars, the chloride sites appeared only in the southern highlands, the most ancient rocks on Mars.
The scientists think the salt deposits formed mostly in the middle to late Noachian epoch, a time that researchers have dated to about 3.9 to 3.5 billion years ago. Several lines of evidence suggest that Mars then had intermittent periods of substantially wetter and warmer conditions than today's dry, frigid climate.
When looking for evidence of life, of bacteria or higher plants that existed on Mars or other planets in the solar system, looking for cellulose in salt deposits is probably one of the best ways to go. Cellulose appears to be highly stable and more resistant to ionizing radiation than DNA. And if it is relatively resistant to harsh conditions such as those found in space, it may provide the ideal 'paper trail' in the search for life on other planets. But first, find the salt!
Posted by Casey Kazan.
Photo Credit: Credit: NASA/JPL/Arizona State University/University of Hawaii
The file shell is a very distinctive bivalve. The two halves of its shell do not close completely, and a thick fringe of bright orange or red tentacles extends between them.
File shells are usually found on sand or gravel seabeds. They often use their sticky byssus thread ‘beards’ to form nests of debris, mud and seaweed fragments, within which the animals are completely hidden. This is thought to be a means of defence: a file shell cannot pull in its tentacles, which are vulnerable to being nipped off by passing predators. In tide-swept, shallow inlets and bays, where file shells occur in large numbers, the nests can join up to form a reef, which can extend to several hectares in size.
In shallow water, kelp seaweeds may settle onto the file shell beds. Maerl and horse mussels may also form part of the reef. Other animals that find food and shelter here include crabs, starfish, young cod and dogfish. Over 300 species have been found in file shell reefs.
File shell beds form a surface for attachment and a refuge in an otherwise featureless seascape. Where file shell beds have been lost, the seaweeds and young fish have also disappeared, illustrating the importance of these beds as a key feature in the local environment.
Scallop dredging and trawling are main threats to file shell beds. They have also been destroyed by pollution from the anti-fouling paints used to stop marine creatures attaching to ships’ hulls and underwater structures.
File shells are found on the west and south-west coasts of the UK, and are most common off western Scotland. The European range extends south to Mediterranean and Canary Isles. Large reefs of file shells are scarce, however, and are usually found in the mouths of sea lochs. They have only been recorded from the west coast of Scotland and from Donegal.
UKBAP Priority Habitat
Limaria hians (WoRMS)
Limaria hians (Marine Species Identification Portal)
Limaria hians (Conchological Society of Great Britain and Ireland)
File shells are one of the few species of bivalve that can swim: they achieve jet propulsion by ‘clapping’ their shells to force out streams of water. | <urn:uuid:c881cb13-162f-493e-ae40-dbe6b1d8f3c1> | 3.734375 | 479 | Knowledge Article | Science & Tech. | 50.313706 |
“Today’s debate about global warming is essentially a debate about freedom. The environmentalists would like to mastermind each and every possible (and impossible) aspect of our lives.”
Blue Planet in Green Shackles
May 24, 2012
To demonise CO2 yet again, a false claim is that human production of CO2 will cause the oceans to become acid.
‘Acid’ is an emotive word to the general public, which is why it is seized upon by the alarmists in their search for yet another scare. In reality increasing CO2 makes the ocean become ‘less alkaline’, but never ‘acid’.
pH is a measure of the hydrogen ion concentration in a solution: the logarithm of the concentration with the sign changed. Because it is a logarithmic scale, it is very hard to move a pH of 8.2 down to 7.0, which is neutral.
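The log-scale point can be made concrete with a couple of lines of Python. Since pH is the negative base-10 logarithm of the hydrogen-ion concentration, each whole pH unit corresponds to a tenfold change in [H+]:

```python
# pH = -log10([H+]), so [H+] = 10**(-pH).
def h_concentration(pH):
    return 10.0 ** (-pH)

# A drop from pH 8.2 to 8.1 raises [H+] by only about 26%...
ratio_one_tenth = h_concentration(8.1) / h_concentration(8.2)
print(ratio_one_tenth)     # ~1.26

# ...but reaching neutral (pH 7.0) from 8.2 needs ~16 times more [H+].
ratio_to_neutral = h_concentration(7.0) / h_concentration(8.2)
print(ratio_to_neutral)    # ~15.8
```

A sixteenfold increase in hydrogen-ion concentration is the size of the change that moving seawater from pH 8.2 to neutral would require.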
The pH needs to be less than 7 to be 'acid', and this has not happened through at least the past 600 million years, because acid seawater would dissolve limestones, and limestones have been deposited in the sea, not re-dissolved, through all that time.
Many marine organisms need CO2 to make their coral skeletons, carbonate shells and so on. Corals also have symbiotic plants within their flesh that use CO2 in photosynthesis.
Marine life flourishes where CO2 is abundant. Professor Walter Stark wrote about a favourite place for scuba divers, the ‘Bubble Bath’ near Dobu Island, Papua New Guinea. Here CO2 of volcanic origin is bubbling visibly through the water so that the water is saturated with CO2. Abundant life flourishes to make the spot a spectacular diver’s delight. He reported many accurate measurements of pH in the area and concluded “It seems that coral reefs are thriving at pH levels well below the most alarming projections for 2100.”
The pH of sea water is very variable, and measuring it makes temperature measurement look like child's play. Ocean pH varies regionally by 0.3, and seasonally in a particular location by 0.3. But nobody has ever measured ocean water below pH 7, which is what 'acid' means. Rhodes Fairbridge told me that he found the day-night variation in a coral pool ranged from 9.4 to 7.5.
There is another factor called Henry’s Law. Cold water can hold more CO2 than warm, so if you warm saturated water it gives off CO2. You can see the effect if you warm a glass of fizzy drink: it goes flat. The ocean-air interface is usually rough so interchange is rapid. Actually if the aim of the AGW activists is to keep the world cooler by reducing atmospheric CO2 they are going in the direction of increasing ‘acidification’ of the oceans.
One of the factors affecting ocean pH is photosynthesis by plants. Experimental results show that plants grow better if CO2 is increased, and greenhouse managers commonly increase the CO2 artificially to increase crops, often by 30% or more. There is every reason to suppose that marine plants also thrive if CO2 is increased. There is also experimental evidence that carbonate secreting animals thrive in higher CO2. Herfort and colleagues concluded that the likely result of human emissions of CO2 would be an increase in oceanic CO2 that could stimulate photosynthesis and calcification in a wide variety of corals.
Marine life, including that part that fixes CO2 as the carbonate in limestones such as coral reefs, evolved on an Earth with CO2 levels many times higher than those of today, as reported by Berner and Kothaval. It may be true to say that today’s marine life is getting by in a CO2-deprived environment.
Tuvalu has long been ‘hot news’ as the favourite island to be doomed by sea level rise driven by global warming, allegedly caused in turn by anthropogenic carbon dioxide. But if a coral island is sinking slowly (or relative sea level rising slowly) the growth of coral can keep up with it. In the right circumstances some corals can grow over 2 cm in a year, but growth rate depends on many factors. Coral islands, made of living things, are not static dip-sticks against which sea level can be measured. We have to consider coral growth, erosion, transport and deposition of sediment and many other aspects of coral island evolution – not just the pH of seawater. Webb and Kench studied the changes in plan of 27 atoll islands located in the central Pacific, and found that most had remained stable or grown in area over about the past twenty years (despite measured rises in atmospheric CO2 over the same period), and only 15% underwent net reduction in area. One of the largest increases was the 28.3% on one of the islands of Tuvalu. This destroys the argument that the islands are drowning, and coral growth is not reduced by ‘acidity’.
Marine life depends on CO2, and some plants and animals fix it as limestone, which is not generally re-dissolved. Over geological time enormous amount of CO2 have been sequestered by living things, so that today there is far more CO2 in limestones than in the atmosphere or ocean. This sequestration of CO2 by living things is far more important than trivial additions to the atmosphere caused by human activity.
Emeritus Professor Cliff Ollier is a geologist and geomorphologist.
Berner, R.A. and Kothavala, Z. 2001. A revised model of atmospheric CO2 over Phanerozoic time. American Journal of Science, 301, 182-204.
Herfort, L, Thake, B. and Taubner, I. 2008. Bicarbonate stimulation of calcification and photosynthesis in two hermatypic corals. Journal of Phycology, 44, 91-8.
Webb, A. P. and Kench, P. S. 2010. The dynamic response of reef islands to sea level rise: evidence from multi-decadal analysis of island change in the central Pacific. Global and Planetary Change, 72, 234-246.
This section illustrates the concept of structures in C.

A structure in C defines a group of contiguous (adjacent) fields, such as records or control blocks. A structure is a collection of variables grouped together under a single name, and it provides an elegant and powerful way of keeping related data together.

Once a structure is defined, you can declare a structure variable by preceding the variable name with the structure type name. In the example below, a small structure (declared with the struct keyword) named student is created, and three instances of it are declared.

Values are assigned to the members of each instance (id, name, percentage) in the following way:
student2.name = "Angelina";
student3.percentage = 90.5;
Here is the code:
Output will be displayed as:
Dr Carl Edward Sagan (November 9, 1934 - December 20, 1996) was an American astronomer, astrobiologist and highly successful science popularizer.
He pioneered exobiology and promoted the Search for ExtraTerrestrial Intelligence (SETI).
He is world-famous for his popular science books and the award-winning television series Cosmos, which he co-wrote and presented and eventually released as a book.
During his lifetime, Sagan published more than 600 scientific papers and popular articles and was author, co-author, or editor of more than 20 books.
In his works, he frequently advocated scientific skepticism, humanism, and the scientific method.
For more information about the topic Carl Sagan, read the full article at Wikipedia.org, or see the following related articles:
Oct. 4, 2012. A trade-off between photon source settings and detector-specific requirements allows the generation of high-fidelity single photons.
Many quantum technologies-such as cryptography, quantum computing and quantum networks-hinge on the use of single photons. While she was at the Kastler Brossel Laboratory (affiliated with the Pierre and Marie Curie University, École Normale Supérieure and CNRS) in Paris, France, Virginia d'Auria and her colleagues identified the extent to which photon detector characteristics shape the preparation of a photon source designed to reliably generate single photons. In a paper about to be published in EPJ D, the French team determined the value of key source parameters that are necessary to generate high-fidelity single photons.
The problem with photon detectors is that they can be noisy or have a limited ability to detect single photons. Some cannot identify the number of photons; they can only detect their presence. Given the influence of these factors, improving the fidelity of single-photon generation is very challenging. But it is also crucial for their subsequent use in quantum information protocols, including quantum communication and computing.
Single photons are typically generated using two laser beams that are correlated at the quantum level. This means that the detection of a single photon in the first beam heralds the generation of a single photon in the second one.
The authors first reviewed how to describe a detector from a mathematical point of view. They then simulated which photons would be obtained from different initial sources. This led the team to outline the conditions under which the heralding detector can deliver good resolution of the number of photons, as a means of improving the reliability in obtaining single photons. They corroborated their findings using two experimental detectors.
- V. D’Auria, O. Morin, C. Fabre, J. Laurat. Effect of the heralding detector properties on the conditional generation of single-photon states. The European Physical Journal D, 2012; 66 (10) DOI: 10.1140/epjd/e2012-30351-6
This page summarizes the relationships among specifications, whether they are finished standards or drafts. Below, each title links to the most recent version of a document. For related introductory information, see: Data, Linked Data.
W3C Recommendations have been reviewed by W3C Members, by software developers, and by other W3C groups and interested parties, and are endorsed by the Director as Web Standards. Learn more about the W3C Recommendation Track.
Group Notes are not standards and do not have the same level of W3C endorsement.
This document describes and includes test cases for software agents that extract RDF from XML source documents by following the set of mechanisms outlined in the Gleaning Resource Description from Dialects of Language specification.
GRDDL is a mechanism for Gleaning Resource Descriptions from Dialects of Languages. This GRDDL specification introduces markup based on existing standards for declaring that an XML document includes data compatible with the Resource Description Framework (RDF) and for linking to algorithms (typically represented in XSLT), for extracting this data from the document.
The markup includes a namespace-qualified attribute for use in general-purpose XML documents and a profile-qualified link relationship for use in valid XHTML documents. The GRDDL mechanism also allows an XML namespace document (or XHTML profile document) to declare that every document associated with that namespace (or profile) includes gleanable data and for linking to an algorithm for gleaning the data.
GRDDL is a mechanism for Gleaning Resource Descriptions from Dialects of Languages. It is a technique for obtaining RDF data from XML documents and in particular XHTML pages. Authors may explicitly associate documents with transformation algorithms, typically represented in XSLT, using a link element in the head of the document. Alternatively, the information needed to obtain the transformation may be held in an associated metadata profile document or namespace document. Clients reading the document can follow links across the Web using techniques described in the GRDDL specification to discover the appropriate transformations. This document uses a number of examples from the GRDDL Use Cases document to illustrate, in detail, the techniques GRDDL provides for associating documents with appropriate instructions for extracting any embedded data.
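As an illustration of the mechanism described above, a GRDDL-annotated XHTML document looks roughly like this (a sketch; the stylesheet name is a placeholder):

```xml
<html xmlns="http://www.w3.org/1999/xhtml">
  <head profile="http://www.w3.org/2003/g/data-view">
    <title>Page with gleanable data</title>
    <!-- A GRDDL-aware agent follows this link to an XSLT transformation
         that extracts RDF from the page -->
    <link rel="transformation" href="extract-example.xsl" />
  </head>
  <body>
    <!-- markup containing the RDF-compatible data -->
  </body>
</html>
```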
GRDDL is a mechanism for Gleaning Resource Descriptions from Dialects of Languages. It is a technique for obtaining RDF data from XML documents and in particular XHTML pages. This document provides some use cases that motivated the development of GRDDL. | <urn:uuid:13f7e0b3-3904-478b-9274-dd56eab108da> | 2.734375 | 510 | Documentation | Software Dev. | 25.798081 |
This program plots time series of monthly total precipitation, or monthly average temperature, based on divisional climate data for the U.S.
The user selects the quantity (precipitation or temperature) and the beginning and ending years (the plot begins with January of the beginning year and ends with December of the ending year). Generally, periods of 10 years or less are depicted best; longer periods tend to be difficult to interpret.
For plotting purposes, the data are plotted on the ending year and month of the specified interval.
The individual months are shown by red X's and lines.
The running mean is simply the average of the selected number of months centered on the plotted month. An odd number of months will be plotted centered on an actual month, and an even number of months will be plotted in between month values (for example, the 12-month mean for points from January 1996 through December 1996 will be plotted in mid-June 1996). Although individual months before January of the beginning year aren't shown, they are used to determine the running mean if they exist. For example, if 1990 is selected as the beginning year and a 12-month running mean is also selected, the mid-January running mean begins with data in July of 1989.
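The centering convention described above can be sketched in a few lines (a sketch, not the program's actual code):

```python
# Centered running mean as described above: for an odd window the mean sits
# on a month; for an even window it sits between two months.
def centered_running_mean(values, window):
    points = []
    for start in range(len(values) - window + 1):
        chunk = values[start:start + window]
        center = start + (window - 1) / 2.0   # fractional for even windows
        points.append((center, sum(chunk) / window))
    return points

months = list(range(1, 13))                 # dummy monthly values, Jan..Dec
points = centered_running_mean(months, 12)
print(points[0])   # (5.5, 6.5): plotted between June (index 5) and July (index 6)
```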
The running means are shown as blue X's.
The long term monthly average, if selected, is shown as a series of green crosses connected by green lines. Naturally, these points will be a repeating cycle each 12 months.
Restoring California's Wild Watersheds
Emboldened by that success, the small coalition of county officials and businessmen expanded to include ranchers, environmentalists, and state and federal officials. Although many of them had been at odds over land management issues, they realized they could only heal the watershed if they cooperated. Wilcox had been a firsthand witness to stream dredging and other practices harmful to ranchlands and forests. A man more at home in a pickup truck than an office, he was eager to be a part of reversing the damage. “I believe in watershed restoration. It has always been in my bones,” he says. And that became the Feather River coalition’s goal: restoring entire meadows along with the creeks flowing through them.
Among the methods they have pioneered is a low-tech procedure known as “pond and plug.” Crews with heavy equipment dig several of the channels wider and deeper, creating small ponds. They use the excavated dirt to fill the remaining gullies back to the original ground level. Along Red Clover Creek, the groundwater began rising almost immediately after the crews finished plugging the channels. By the following spring the ponds were flush with the water that would otherwise have raced downstream in late winter. Above and below the pond where Wilcox sits, the creek has found its way across the meadow in a natural, meandering channel.
The Feather River group has completed 66 restoration projects, which include 3,900 acres of meadow and 44 miles of stream. Since the work began, the data from a series of permanent monitoring stations show that the flow out of restored meadows is greater and lasts longer into the summer. Water temperatures have dropped despite an increase in average air temperatures, and stream turbidity, a measure of the amount of dirt and debris suspended in the water, has decreased to almost half pre-project levels. Groundwater, which never reached the surface before the restoration work, is now consistently at or above ground level for at least part of the year.
From Water to Wildlife
The Feather River projects have inspired the much larger Sierra-wide meadow restoration coordinated by the National Fish and Wildlife Foundation. Private landowners, universities, local and national resource organizations, and the U.S. Forest Service are working together to design strategies that will raise the water table and slow the flow out of mountain meadows. In an area from the Pit River in the north to the Kern River in the south, they are evaluating potential projects to determine which will yield the maximum benefits to fish and wildlife and the greatest quantities of water. Their goal is to restore at least 20,000 acres a year by 2014, says Male.
“Nationwide, we’re looking for tangible actions that address the realities of climate change. This is one of the best examples in America of a restoration initiative that can directly help people and wildlife adapt to our changing planet,” Male says.
Leave it to Beavers?
Nature's water engineers can restore river channels.
The plan, over the first five years, calls for restoring 60,000 acres of meadow. As the water table rises and meadows soak up more water from melting snows, native habitat lost for decades should return. Among the endangered species expected to benefit are the yellow warbler, Yosemite toad, Lahontan cutthroat and golden trout, Townsend’s big-eared bat, and the Sierra Nevada red fox.
But the effects of widespread meadow restoration will also flow downstream to farmers and other water users. The Forest Service manages about half of the Sierra’s degraded meadowlands. The agency is determining which of the 11,700 separate meadows in 10 national forests need to be restored. All are located on streams important for water supply, says Barry Hill, a regional hydrologist. Using foundation funds, the Forest Service hopes to determine the amount of additional water available for downstream use once the meadows return to health.
The Sierra projects are unique among large-scale water restoration efforts in the United States because of their potential to increase the amount of water available in a river system, says Male. Comprehensive efforts to restore the Chesapeake Bay, the nation’s largest estuary, focus on improving the quality of water flows throughout the 64,000-square-mile region. In the Everglades, a wide-ranging plan to revive a dying ecosystem aims to improve the distribution of flows throughout 18,000 square miles in southern Florida. Along the lower Mississippi River and coastal Louisiana, the largest wetlands restoration effort is designed to reverse the pattern of land erosion by buffering against floods and hurricanes and, like all of the major projects, improving wildlife habitat.
Just how much more water healthy Sierra Nevada meadows can deliver is a matter of debate. Some scientists believe the boosts in stream flow may be absorbed by increases in vegetation in the new, restoration-created habitats. Others believe restoration could contribute up to 6.5 billion gallons of additional water storage throughout the California range. Over time, says Male, these restored meadows could hold 16 to 160 billion gallons of fresh water. That’s equal to the size of one of the new dams state officials have proposed for construction to offset the state’s declining snowpack.
Restoring mountain meadows will not solve California’s water crisis. That will take a collective commitment from the agriculture industry, from municipalities, and from everyone who depends on the Sierra snowmelt for their livelihoods and their lives. It will also require more political will than elected officials have traditionally marshaled. Wilcox believes the public recognizes the value of healthy watersheds. He is optimistic that stream restoration will become routine as more people understand its importance upstream and downstream.
Meanwhile, the benefits to wildlife are unequivocal. In the wet meadow surrounding Red Clover Creek, the number of waterfowl species has doubled since Wilcox and his crews completed the pond-and-plug project. He has seen buffleheads, gadwalls, and two species of teal breeding in early spring. Sandhill cranes, willow flycatchers and 10 other species on state and federal watch lists have returned to the area. Walking through Red Clover Valley from the pond, Wilcox bends down to study a clump of dancing hairgrass, one of a handful of plant types that have regenerated from seeds dormant in the soil for decades. He has yet to see elk but he has found their tracks—the first in the area in decades.
Jane Braxton Little wrote this article for Water Solutions, the Summer 2010 issue of YES! Magazine. Jane covers natural resource issues from California’s northern Sierra Nevada. Her work has appeared in Scientific American, Nature Conservancy, and Audubon, where she is a contributing editor.
- American Rivers, a conservation organization based in Washington, D.C., focuses on protecting rivers, wildlife, and water supply and quality. The organization's Web site also contains information about meadow restoration in California.
- The nonprofit National Fish and Wildlife Foundation provides grants to conservation projects across the United States.
- To find out more about the Feather River project, visit the Feather River Coordinated Resource Management Group.
ACS supports various types of data that you can use. Most of these, however, are handled as hacks: in reality ACS supports only one data type, the integer.
The integer data type is your basic data type. An integer is any whole number and can be negative, positive or zero. The range for an integer is from -2,147,483,648 to 2,147,483,647. To declare a variable to hold this type you use the int keyword.
ACS has hacked-in support for the boolean data type, which holds either a true or false value (1 or 0). In ACS you use the bool keyword to declare a boolean variable. Even though the keyword is different, a bool is the same thing as an integer and has all the same properties.
//This is legal
bool bTest = 7;
Support for the character data type is virtually the same as in C/C++, except that you have to use the integer type. When assigning a character to a variable, you have to enclose it in single quotes, as in this example:
//This assigns a variable to the letter a
int Test = 'a';
You can also make use of the special characters: ASCII NUL '\0', '\\', '\n' for newline and '\t' for a horizontal tab.
In ACS you can define string literals which look like this:
//this is a string literal
"OMG its a string"
When the compiler sees this, it adds the string to its string table and assigns it an index number. A string variable, declared using the str keyword, does not hold the string itself but rather an index into the string table, similar to how string literals in C decay into pointers in certain contexts. In other words, a string variable is the same as an integer and has all of an integer's properties. The only way ACS can tell that it is a string is when it is used by special functions that expect a string.
//This function actually gets passed this string's index in the string table
//but the function knows to take this number and use it to look up the string table
CheckInventory("Fist");
This is another example of how strings are handled:
//Here we try to add two strings together
str Test = "omg its a " + "string!";
print(s:Test); //This prints "string!" and NOT "omg its a string!"
//what really happened was this
Test = 0 + 1;
//so you see that the index is what is being stored
ACS has very basic support for fixed-point types; fixed-point values are stored in ordinary integer variables. The integral part is located in bits 16–31 of the integer and the fractional part in bits 0–15.
 3   2                   1                   0
 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|         Integral Part         |        Fractional Part        |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
To see this in action we have this example: | <urn:uuid:ff6ebd23-7a92-4a4d-b678-d7e6d4c50759> | 3.3125 | 662 | Knowledge Article | Software Dev. | 73.052735 |
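The original example does not survive here; the following sketch (my own, not from the source) shows the 16.16 layout in use:

```
// 1.0 in ACS fixed point is 65536 (1 << 16).
int fixedNum = 1.5;               // the compiler stores 98304 (1.5 * 65536)
int integral = fixedNum >> 16;    // 1      -- bits 16-31
int fraction = fixedNum & 0xFFFF; // 32768  -- bits 0-15
print(f:fixedNum);                // the f: cast prints the value as fixed point, i.e. 1.5
```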
Distributed Bragg reflectors (DBRs) are mirrors made of semiconductor materials like GaAs and AlAs. They consist of alternating thin layers of materials with different refractive indexes; the reflections from successive interfaces sum together for very high total reflectivities. For example, a stack of 30 pairs of alternating layers of AlAs and GaAs exhibits a reflectivity of 0.99993. These mirrors are mainly of interest for their use in vertical-cavity, surface-emitting semiconductor lasers. A modified version called a saturable Bragg reflector also is used as a key element in the pulsed lasers being developed for ultra-fast spectroscopy.
Modeling the reflectivity of a DBR is straightforward, but achieving the modeled result is unusual. Impurities (dopants) in the layered materials affect the index of refraction and thus the reflectivity of the mirror. Optimization of these mirror structures has been difficult because the DBR reflectivity is difficult to measure accurately.
Cavity ring-down spectroscopy is a technique originally designed to measure ultra-high reflectivities, but it has apparently not been previously applied to the DBR problem. The essence of the technique is simple. An optical cavity (similar to a Fabry-Perot cavity) may be filled with light if the frequency of light and the length of the cavity are in resonance. If we then shutter the input light off, the light already in the cavity will gradually decay as light leaks through the mirrors. If the mirrors are very good, the light will decay slowly; poorer mirrors will cause a more rapid decay. Thus, by measuring the decay time of light in the cavity, we can accurately measure the mirror reflectivity.
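As a rough illustration of why decay time is such a sensitive probe of reflectivity, the photon lifetime of a simple two-mirror cavity can be estimated as tau = L / (c(1 - R)) (a sketch: the reflectivity is the value quoted above, but the cavity length is an assumed figure, not from the text):

```python
# Photon lifetime of a two-mirror cavity: tau = L / (c * (1 - R)).
c = 299_792_458.0   # speed of light, m/s
R = 0.99993         # 30-pair AlAs/GaAs DBR reflectivity quoted above
L = 0.5             # assumed cavity length, metres (not from the text)

tau = L / (c * (1.0 - R))
print(f"decay time ~ {tau * 1e6:.1f} microseconds")   # ~23.8 for these numbers
```

A poorer mirror (say R = 0.999) would shorten this by more than an order of magnitude, which is exactly the contrast the ring-down measurement exploits.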
The ring-down cavity is currently being constructed. We will optimize this cavity and measure the reflectivity of a distributed Bragg reflector. The work is primarily experimental but may require some computer modeling of the laser beam in order to efficiently fill the cavity with light. We will compare our results to the predicted reflectivity of the DBR and study how the reflectivity varies over the DBR wafer. | <urn:uuid:486f1922-aeb1-4d05-94d3-49c81d77a007> | 3.71875 | 434 | Academic Writing | Science & Tech. | 30.591702 |
Alu elements are transposable DNA sequences—jumping genes—that "reproduce" by making a copy of themselves and then inserting that copy into a new spot on the chromosome. Alu elements did this so effectively in the past that they now comprise 10 percent of the human genome, and as such are the most abundant mobile elements in it.
Since they do not encode protein products, they used to be considered "junk DNA" or "selfish DNA," having no apparent function beyond their own replication. Yet, because they contain sequences that resemble canonical portions of genes, thousands of human genes contain pieces derived from them. Recent work in PNAS suggests that they may also play a role in regulating how genes get expressed, and possibly help differentiate humans from other primates.
Shen et al. used a method called RNA-Seq to find and categorize all of the Alu elements expressed in ten different human tissues. They found that Alu exons are found in the messenger RNAs of zinc finger (ZNF) transcription factors (proteins that regulate other genes) at seven times the rate they are found in other types of genes. They are found predominantly in an area known to be important for mRNA stability and protein translation.
ZNFs are one of the largest gene families in humans, with over seven hundred members, and many ZNF genes are primate-specific. ZNF genes underwent rapid expansion and adaptive evolution during primate evolution, and are therefore thought to be key contributors to the gene regulation that defines these lineages.
Like some of these ZNF genes, Alu elements are found only in primates. The researchers found that, in human cerebellum and liver, Alu elements are more common in younger genes than in older ones. But Alu’s preference for ZNF genes over other types held even after controlling for a gene's age; Alu elements were found in ancient as well as recent ZNF genes. They suggest that incorporation of Alu DNA has played a role in the evolution and expansion of ZNF genes in primates.
Recent work reported in PLoS Genetics indicated that the ZNF gene family arose from a small ancestral group of eukaryotic zinc-finger transcription factors through many repeated gene duplications. That's certainly compatible with the idea that Alu elements were involved, since DNA repeats can lead to instabilities that can include duplications.
To check if the presence of Alu sequences in a mRNA affects its translation into protein, Shen et al. compared the translation of fifteen different reporter genes. They made two versions, one lacking an Alu element, the other containing it. In ten of the fifteen, the Alu sequences altered translational efficiency. When they looked at how, they found two distinct molecular mechanisms by which this can occur, but they did not rule out the possibility of additional ones.
Human brains differ from those of other primates: ours are significantly larger, and we have different cognitive abilities. Many ZNF genes are differently expressed in human and chimpanzee brains, and since ZNFs function as transcription factors, it has been suggested that they contribute to the differences between primate species through gene regulation. This new work shows that Alu incorporation is an important mechanism in the evolution of ZNF genes; by regulating the regulators, they may be part of what truly distinguishes monkeys from men.
- "why is the limit imposed by pion pair production as opposed to electron pair production? the energies required for electron pair production is /significantly/ less than pion pair production."
I recently came across the figure below, which depicts the energy loss of a proton times 1/E (the relative energy loss per year), due to electron-positron pair production, and - at higher energies - pion production:
[Fig 1 (a) from V.Berezinsky, A.Z.Gazizov, S.I.Grigorieva "On astrophysical solution to ultra high energy cosmic rays", Phys.Rev. D74 (2006) 043005, arXiv:hep-ph/0204357]
As one sees, electron-positron pair production leads to a loss and is the dominant contribution at energies below ≈ 10^19 eV, but at higher energies pion production takes over, increasing the energy loss by a factor of ~100, which is the effect responsible for the cut-off in the spectrum.
Essentially the same is depicted differently in the figure below from Roberto Aloisio's talk, slide 2. I didn't hear the talk but a good guess is that the y-axis shows the attenuation length of the protons in the CMB background. Again one sees electron-positron pair production having an effect already at lower energies, but it does not result in a sharp cut-off as the pion production does: if energy loss were caused by electron-positron pair production only, the proton could travel as far as a third of the size of the observable universe.
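The question quoted at the top (why pions and not electron pairs set the limit) can be made quantitative with head-on threshold kinematics (a sketch: a rough mean CMB photon energy and simple two-body kinematics are assumed). Pair production indeed turns on at a far lower proton energy, but each pair-production event costs the proton only a tiny fraction of its energy, while each photopion event costs of order 20%, which is why only the pion threshold produces a sharp cut-off:

```python
# Head-on threshold: s >= (m_p + m_X)^2  ->  E_p = ((m_p + m_X)^2 - m_p^2) / (4 E_gamma)
m_p  = 938.272e6     # proton mass, eV
m_pi = 139.570e6     # charged pion mass, eV
m_e  = 0.511e6       # electron mass, eV
E_cmb = 6.3e-4       # rough mean CMB photon energy, eV (an assumption)

def threshold(m_produced):
    return ((m_p + m_produced) ** 2 - m_p ** 2) / (4.0 * E_cmb)

E_pion = threshold(m_pi)      # ~1e20 eV: the scale of the observed cut-off
E_pair = threshold(2.0 * m_e) # ~8e17 eV: pair production starts ~100x lower
print(E_pion, E_pair)
```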
For an excellent introduction into the physics of ultra-high-energetic air showers, I recommend Angela Olinto's recent PI colloquium, PIRSA: 08010000. | <urn:uuid:3782b1a7-e030-4a65-9908-8e80e9a3e47b> | 2.890625 | 383 | Comment Section | Science & Tech. | 49.646039 |
One thing I like to see is kids getting their hands on doing science. There’s something about being involved with something, actually doing it for yourself, that gives you a sense of ownership over the knowledge, makes you part of something bigger.
Here’s another chance to do that for students across the world: the ERGO telescope project. ERGO stands for "Energetic Ray Global Observatory" and the idea is to build simple cosmic-ray detectors that can be sent to classrooms all over the world. Here’s a short video describing the project:
Cosmic rays are energetic subatomic particles that come blasting in from space. They're created by the Sun, by exploding stars, by distant galaxies… basically, by cool, interesting objects. By distributing these detectors across the world, students can share their data and come up with their own ways of examining them.
If you’re a teacher and you want your students to not just learn science, but to experience it, then this sounds like a good way to do it! They even have a simple form you can fill out to apply for a grant to get started.
The way some of the media report on climate change can be simply stunning. For example, an opinion piece in The Financial Post has the headline "New, convincing evidence indicates global warming is caused by cosmic rays and the sun — not humans".
There’s only one problem: that’s completely wrong. In reality the study shows nothing of the sort. The evidence, as far as the limitations of the experiment go (that’s important, see below), does not show any effect of cosmic rays on global warming, and says nothing at all about the effect humans are having on the environment.
What did you do, Ray?
OK, first things first: why should we even think cosmic rays might affect climate? There are several steps to this, but it’s not too hard to explain.
We know that clouds form by water molecules accumulating on seed particles, called condensation nuclei. The physical processes are complex, but these particles (also called aerosols) are suspended in the air and water droplets form around them. The more of them available, the better water can condense and form clouds (although of course this also depends on a lot of other things, like how much water is in the air, the temperature, the height above the ground, and so on).
Cosmic rays, it turns out, may play a role in this too. They are subatomic particles that zip through space at high speed. We are bombarded by them all the time, in fact! They hit atoms and molecules in the Earth’s atmosphere, depositing their energy there. This affects aerosol formation rate, and therefore might affect cloud formation. Clouds are bright and white, and reflect sunlight. Therefore they affect global warming.
So the whole idea goes like this: the more cosmic rays there are, the more aerosols are made, the more easily clouds can form, the more sunlight gets reflected back into space, and the less global warming we get. It’s controversial, for sure (Discover Magazine interviewed a proponent of this idea in 2007) but worth looking into.
In practice, the actual connection between cosmic rays and cloud formation is really hard to determine. So a group of scientists at the European particle lab CERN decided to test the basics. They created a cloud chamber, bombarded it with cosmic rays, and examined the results. They found two very interesting things:
1) The number of aerosols created went up vastly as the particles blasted the chamber. That would seem to indicate that cosmic rays really are tied to global warming. Except…
2) The actual total number of aerosols created was way below what’s needed to account for cloud formation. Sure, there were more aerosols, but not nearly enough.
For those of you who believe recent molecular evidence suggesting early
mammalian ordinal divergence is not a challenge to the claimed competence
of our current fossil record, i.e., that genetic changes may lead to few
morphological changes (at least as recognizable from bones) and that we
have already found the bones of these genetically radical creatures, here
are Hedges' beliefs. The quote is from an on-line Science news service.
>Very few fossils resembling modern mammals or other vertebrates have been
>found in rocks formed during the Cretaceous period, says Hedges, partly
>because paleontologists hardly ever look for mammals in rocks that old.
>"There has not been enough convincing evidence until now for
>paleontologists to invest their time and money looking for mammal fossils
>in a time before the dinosaurs became extinct," Hedges says. In addition,
>many scientists believe that a large number of species suddenly sprang
>into existence at the very end of the Cretaceous period.
>Hedges says he hopes, as a result of this research, that paleontologists
>will now begin searching for vertebrates in geological strata where they
>have never looked before. "We are saying mammals definitely were living on
>Earth during the Cretaceous period from 70 to 100 million years ago. We
>don't yet know what they look like, but from the genes of their descendants
>we now know that they were there."
6-by-0.2-inch (150 mm × 5.1 mm) tritium vials are simply tritium gas-filled, thin glass vials whose inner surfaces are coated with a phosphor. The "gaseous tritium light source" vial shown here is brand new.
Radioluminescence is the phenomenon by which light is produced in a material by bombardment with ionizing radiation such as beta particles. Radioluminescence is used for emergency exit signs or other applications where light must be produced for long periods without external energy sources. Formerly, radioluminescent paint was used for clock hands and instrument dials allowing them to be read in the dark.
Tritium is used as a source of beta particles in a large variety of applications where electricity is not available for illumination, such as gun sights and emergency exit signs.
Main article: Radium dials
Historically a mixture of radium and copper-doped zinc sulfide was used to paint instrument dials giving a greenish glow. Phosphors containing copper doped zinc sulfide (ZnS:Cu) yield blue-green light; copper and manganese doped zinc sulfide (ZnS:Cu,Mn), yielding yellow-orange light, are also used. Radium based luminescent paint is no longer used due to the radiation hazard posed to those manufacturing the dials. These phosphors are not suitable for use in layers thicker than 25 mg/cm², as the self-absorption of the light then becomes a problem. Furthermore, zinc sulfide undergoes degradation of its crystal lattice structure, leading to gradual loss of brightness significantly faster than the depletion of radium.
ZnS:Ag coated spinthariscope screens were used by Ernest Rutherford in his experiments discovering the atomic nucleus.
Radioluminescence occurs when an incoming radiation particle collides with an atom or molecule, exciting an orbital electron to a higher energy level. The electron then returns to its ground energy level by emitting the extra energy as a photon of light.
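As a rough numerical illustration of that last step, the photon's wavelength follows from the transition energy via E = hc/λ. The 2.7 eV gap below is an assumed illustrative value chosen to land in the blue-green range reported for ZnS:Cu, not a measured constant:

```python
# Convert an electronic transition energy to the emitted photon wavelength.
# E = h*c/lambda  =>  lambda = h*c/E.
H_EV_S = 4.135667696e-15   # Planck constant, eV*s
C_M_S = 2.99792458e8       # speed of light, m/s

def emission_wavelength_nm(transition_energy_ev):
    """Wavelength (nm) of a photon carrying the given energy (eV)."""
    return H_EV_S * C_M_S / transition_energy_ev * 1e9

# An assumed ~2.7 eV transition gives blue-green light:
print(round(emission_wavelength_nm(2.7)))  # 459 (nm)
```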
<p>Generally, <span class="taxon"><em>Apis mellifera</em></span> are red/brown with black bands and orange yellow rings on abdomen. They have hair on thorax and less hair on abdomen. They also have a pollen basket on their hind legs. Honeybee legs are mostly dark brown/black.</p> <p>There are two castes of females, sterile workers are smaller (adults 10-15 mm long), fertile queens are larger (18-20 mm). Males, called drones, are 15-17 mm long at maturity. Though smaller, workers have longer wings than drones. Both castes of females have a stinger that is formed from modified ovipositor structures. In workers, the sting is barbed, and tears away from the body when used. In both castes, the stinger is supplied with venom from glands in the abdomen. Males have much larger eyes than females, probably to help locate flying queens during mating flights.</p> <p>There are currently 26 recognized subspecies of <span class="taxon"><em>Apis mellifera</em></span>, with differences based on differences in morphology and molecular characteristics. The differences among the subspecies is usually discussed in terms of their agricultural output in particular environmental conditions. Some subspecies have the ability to tolerate warmer or colder climates. Subspecies may also vary in their defensive behavior, tongue length, wingspan, and coloration. Abdominal banding patterns also differ - some darker and some with more of a mix between darker and lighter banding patterns.</p> <p>Honeybees are partially endothermic -- they can warm their bodies and the temperature in their hive by working their flight muscles.<span> (Clarke et al., 2002; Milne and Milne, 2000; Pinto et al., 2004; Seeley, Seeley, and Akratanakul, 1982)</span></p> <p><strong>Other Physical Features: </strong>Endothermic; Ectothermic; Heterothermic; Bilateral symmetry; Venomous</p><p><strong>Sexual Dimorphism: </strong>Female larger; Sexes shaped differently</p>
- Pinto, A., W. Rubink, R. Coulson, J. Patton, S. Johnston. 2004. Temporal pattern of africanization in a feral honeybee population from texas inferred from mitochondrial DNA. Evolution, 58/5: 1047-1055.
- Milne, M., L. Milne. 2000. National Audubon Society: Field Guide To Insects and Spiders. New York, Canada: Alfred A. Knopf, Inc..
- Seeley, T., R. Seeley, P. Akratanakul. 1982. Colony defense strategies of the honeybees in Thailand. Ecological Monographs, 52/1: 43-63.
The laser turns 50 this year, counting from the construction of the first ruby laser.
The first CO2 laser followed only 3 years later, in 1963.
How Stuff Works and this video give an overview of how lasers work; the video concentrates on the ruby laser. The How Stuff Works piece gives a more general overview and talks about the transition of an electron between orbit levels for the generation of lasing photons. In a CO2 laser, whilst the principles are the same, the energy level transitions are between vibration modes of the atoms within the molecule.
This is impossible according to the climate change science deniers. We have a few gifted persons about who make up their own science and demand that CO2 is inert in the infra-red and thus cannot affect temperature. Yet the very thing they deny is exploited to build CO2 lasers that can emit beams of infra-red, powerful invisible beams of heat.
Shown below are the vibration modes of a molecule of water
Symmetric Stretch Asymmetric Stretch Bending
I went with the water pictures as I think they're needed (as my descriptions suck), but I couldn't find a CO2 diagram. CO2 has the same vibration modes except the molecule is linear. It's handy to show the water diagrams as yet another lie the specials plug is "even if CO2 isn't IR-inert, its absorption/emission spectral lines are the same as H2O's". It isn't, of course, not that the specials care.
Ahhhh, found a gif of CO2 vibration modes
"Population inversion" is a necessary condition for lasers. It's where the quantity of the higher energy level stuff (molecules in CO2's case) is greater than that of the lower level after emission. This has to be so: a photon, as it approaches a molecule, will be absorbed if the molecule is in the lower state; if it passes a molecule in the high state, it could stimulate emission and now we have 2 photons in phase, same wavelength, traveling in the same direction. It's the latter we want, so we definitely don't want too many in the low state.
A trick (now I've used that word, this is all impossible) that most lasers, including CO2 lasers, use is to utilise 3 levels. The first is the ground state, the natural state of the supply of material. The 2nd is the high energy level. The 3rd is an intermediate level that is the result of the 2nd level losing a photon, hopefully by stimulated emission. The trick is to pump up level 2 easily, keep pumping level 2, and have level 3 either depopulate quickly by itself or be induced to do so.
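To make the three-level bookkeeping concrete, here is a toy rate-equation sketch. All rate constants are invented illustrative numbers, not real CO2 laser parameters; the point is only that a fast-draining level 3 leaves level 2 with the larger population, i.e. a population inversion:

```python
# Toy three-level rate equations (forward Euler). "pump", "emit" and "drain"
# are made-up rates: pump is ground -> level 2, emit is level 2 -> level 3
# (the lasing transition), drain is level 3 -> ground (helped by the helium).
def three_level(pump=5.0, emit=1.0, drain=20.0, dt=1e-3, steps=20000):
    n1, n2, n3 = 1.0, 0.0, 0.0            # fractional populations
    for _ in range(steps):
        p = pump * n1 * dt
        e = emit * n2 * dt
        d = drain * n3 * dt
        n1 += d - p
        n2 += p - e
        n3 += e - d
    return n1, n2, n3

n1, n2, n3 = three_level()
print(n2 > n3)  # True: level 3 drains fast, so level 2 stays inverted
```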
A typical CO2 laser gas fill is 9.5% CO2, 13.5% N2, and the rest, 77%, He. The nitrogen gas is pumped electrically to an energy level that is close to the 001 vibration mode of CO2; the nitrogen then imparts its energy to the CO2.
N2, as you're probably aware, is the major gas of our atmosphere and is transparent to infra-red photons, so it will not affect the laser's operation, nor does it keep Earth warmer than the moon (the moon being close enough to the same distance from the sun as us; specials often assert various made-up stuff at this point). O2 is also IR-inert, ie most of the Earth's atmosphere is IR-inert. The atmosphere's major IR-active gases are CO2, water vapour and ozone, O3. The specials will yell that H2O is the most active (after lying about CO2, why not lie more), but the same problem arises if you try to make a steam-powered laser: the amount of vapour is dependent on operating temperature and any extra H2O will condense out as a liquid.
The helium gas helps in removing kinetic energy from the 100 CO2 vibration mode which is the level 3 we mentioned earlier that we wish to depopulate quickly.
And that's the pew-pew (or rather the bgzzzzzzzzzzzzzt) that is a CO2 laser, in a very short summary.
If you're a denier (or if you prefer, a CO2 sceptic; I'll call you anything you wish if you agree to this) maybe we could organise a bit of an outing to a university - get a TV camera crew, televise you giving a personal injury waiver, then we'll attempt to cut off your finger. In our world, based upon prior observation and other hi-falootin theory crap, you won't feel a thing .. at first .. as flesh is burnt, blood vessels cauterised, nerve endings killed - there'll be an unmistakable visual and stench - a little after that I'd imagine it wouldn't tickle. BUT - in your world, the laser would fail as the CO2 infra-red crap is all a UN single-government hoax (or something) and you will have heroically killed the global warming fraud at its very base. You will so show the invisible nothing from the end of the laser to be a fraud perpetrated by grant-hungry scientists who have been cooking their research because no-one would pay them if they showed the truth. Please let us know if you want to settle this, I'll ring ACA, 60 Minutes etc - if any of them wish to pay I'll happily let you keep all the money, I just wish to watch - you could so stick it to those eggheads.
WFIU Public Radio
WTIU Public Television
To create a bomb based on nuclear fission, the scientists working on the top secret Manhattan Project had to find fissionable fuel. They settled on enriched uranium.
A Moment of Science is a daily audio podcast, public radio program and video series providing the scientific story behind some of life's most perplexing mysteries. Learn More »
You can represent a list of distinct non-negative integers smaller than N using exactly N bits: if the integer i appears in your list, you set the i th bit to true. Bits for which there is no corresponding integer are set to false. For example, the integers 3, 4, 7 can be represented as 00011001. As another example, the integers 1, 2, 7 can be represented as 01100001.
Bitmaps are great to compute intersections and unions fast. For example, to compute the union between 3, 4, 7 and 1, 2, 7, all I need to do is compute the bitwise OR between 00011001 and 01100001 (=01111001) which a computer can do in one CPU cycle. Similarly, the intersection can be computed as the bitwise AND between 00011001 and 01100001 (=00000001).
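Here is a small Python sketch of the same idea, using an arbitrary-precision integer as the bit vector (bit i is set exactly when integer i is in the set). Note that Python prints bits with bit 0 on the right, the mirror image of the strings above:

```python
def to_bitmap(values):
    """Pack a set of small non-negative integers into one bit vector."""
    bm = 0
    for v in values:
        bm |= 1 << v
    return bm

def from_bitmap(bm):
    """Unpack a bit vector back into a sorted list of integers."""
    out, i = [], 0
    while bm:
        if bm & 1:
            out.append(i)
        bm >>= 1
        i += 1
    return out

a = to_bitmap([3, 4, 7])
b = to_bitmap([1, 2, 7])
print(from_bitmap(a | b))  # union:        [1, 2, 3, 4, 7]
print(from_bitmap(a & b))  # intersection: [7]
```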
Though it does not necessarily make use of fancy SSE instructions on your desktop, bitmaps are nevertheless an example of vectorization. That is, we use the fact that the processor can process several bits with one instruction.
There are some downsides to the bitmap approach: you first have to construct the bitmaps and then you have to extract the set bits. Thankfully, there are fast algorithms to decode bitmaps.
Nevertheless, we cannot expect bitmaps to be always faster. If most bits are set to false, then you are better off working over sets of sorted integers. So where is the threshold?
I decided to use the JavaEWAH library to test it out. This library is used, among other things, by Apache Hive to index queries over Hadoop. JavaEWAH uses compressed bitmaps (see Lemire et al. 2010 for details) instead of the simple bitmaps I just described, but the core idea remains the same. I have also added a simpler sparse bitmap implementation to this test.
I generated random numbers using the ClusterData model proposed by Vo Ngoc Anh and Alistair Moffat. It is a decent model for “real-world data”.
Consider the computation of the intersection between any two random sets of integers. The next figure gives the speed (in millions of integers per second) versus the density measured as the number of integers divided by the range of values.
I ran the test on a desktop core i7 computer.
Conclusion: Unsurprisingly, the break-even sparsity for JavaEWAH is about 1/32: if you have more than 1000 integers in the range [0,32000) then bitmaps might be faster. Of course, better speed is possible with some optimization and your data may differ from my synthetic data, but we have a ballpark estimate. A simpler sparse bitmap implementation can be useful over sparser data though it comes at a cost: the best speed is reduced compared to EWAH.
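The 1/32 figure also falls out of a back-of-envelope storage argument: an uncompressed bitmap costs one bit per value in the universe, while a sorted array of 32-bit integers costs 32 bits per stored element, so the bitmap wins on space once the density exceeds 1/32. A sketch (ignoring compression, which is what EWAH adds):

```python
def bitmap_bits(universe_size):
    return universe_size          # one bit per possible value

def sorted_array_bits(count):
    return 32 * count             # one 32-bit integer per stored value

# 1000 integers in [0, 32000): exactly the break-even density of 1/32.
print(bitmap_bits(32_000) <= sorted_array_bits(1_000))  # True
```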
Source code: As usual, I provide full source code so that you can reproduce my results.
Update: This post was updated on Oct. 26th 2012.
Note: This post was translated into Slovak.
5.4. Cosmological scales
5.4.1. Foreground Cosmological Screen (~ 20 Gigaparsecs)
The physical conditions of the gas outside of clusters of galaxies are not well known. Forman et al. (1984) found values near n ~ 2 × 10⁻⁸ cm⁻³ and T ~ 3 × 10⁸ K, from X-ray data and analysis. Somewhat higher n values have been used elsewhere.
On the scale of the Universe, a study of the RM of 309 distant quasars (and galaxies), with both a measured redshift z and an observed rotation measure RM, led to a search for a possible increase of RM with z. No such increase was found up to a redshift z = 3.5, e.g., Figure 2 in Vallée (1990c). An observed upper limit of extragalactic RM = 2 rad/m² for any cosmological contribution was deduced, which in turn corresponds to < 10⁻⁹ Gauss for a regular cosmic magnetic field B_reg (outside clusters of galaxies), and a mean particle density of 10⁻⁷ cm⁻³ (or < 10⁻¹⁰ Gauss for 10⁻⁶ cm⁻³).
Thus there is no cosmological magnetized foreground screen, out to a redshift z = 3.5. For H0 = 50 km/s/Mpc and q0 = 1, such a z corresponds to a distance of about 20 000 Mpc (= 20 Gpc).
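As a sanity check on those numbers, the standard rotation-measure relation for a uniform medium, RM ≈ 0.812 n_e B_∥ L (with n_e in cm⁻³, B in µG, and L in pc), reproduces the quoted field limit. The density and path length below are the values assumed in the text:

```python
RM_LIMIT = 2.0      # rad/m^2, observed upper limit on any cosmological RM
N_E = 1e-7          # cm^-3, assumed mean electron density
PATH_PC = 20e9      # pc, ~20 Gpc out to z = 3.5 (H0 = 50 km/s/Mpc, q0 = 1)

b_microgauss = RM_LIMIT / (0.812 * N_E * PATH_PC)
b_gauss = b_microgauss * 1e-6
print(f"B_reg < {b_gauss:.1e} G")  # ~1e-9 G, as quoted
```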
Of course, the absence of a measurable value of B_reg does not rule out the presence of a possible random magnetic field B_ran throughout the Universe. Searches for such a B_ran component have been reviewed by Kronberg (1994), and an upper limit of 10⁻⁹ Gauss is also indicated out to a redshift of z ~ 3.5.
A peptide bond is a chemical bond formed between two molecules when the carboxyl group of one molecule reacts with the amino group of the other molecule, releasing a molecule of water (H2O). This is a dehydration synthesis reaction (also known as a condensation reaction), and usually occurs between amino acids. The resulting CO-NH bond is called a peptide bond, and the resulting molecule is an amide. The four-atom functional group -C(=O)NH- is called an amide group or (in the context of proteins) a peptide group. Polypeptides and proteins are chains of amino acids held together by peptide bonds, as is the backbone of PNA.
A peptide bond can be broken by amide hydrolysis (the addition of water). The peptide bonds in proteins are metastable, meaning that in the presence of water they will break spontaneously, releasing about 10 kJ/mol of free energy, but this process is extremely slow. In living organisms, the process is facilitated by enzymes. Living organisms also employ enzymes to form peptide bonds; this process requires free energy. A peptide bond absorbs in the 190–230 nm wavelength range.
Resonance forms of the peptide group
The amide group has two resonance forms, which confer several important properties. First, it stabilizes the group by roughly 20 kcal/mol, making it less reactive than many similar groups (such as esters). The resonance suggests that the amide group has a partial double bond character, estimated at 40% under typical conditions. The peptide group is uncharged at all normal pH values, but its double-bonded resonance form gives it a unusually large dipole moment, roughly 3.5 Debye (0.7 electron-angstrom). These dipole moments can line up in certain secondary structures (such as the α-helix), producing a large net dipole.
The partial double bond character can be strengthened or weakened by modifications that favor one resonance form over another. For example, the double-bonded form is disfavored in hydrophobic environments, because of its charge. Conversely, donating a hydrogen bond to the amide oxygen or accepting a hydrogen bond from the amide nitrogen should favor the double-bonded form, because the hydrogen bond should be stronger to the charged form than to the uncharged, single-bonded form. By contrast, donating a hydrogen bond to an amide nitrogen in an X-Pro peptide bond should favor the single-bonded form; donating it to the double-bonded form would give the nitrogen five quasi-covalent bonds! (See Figure 3.) Similarly, a strongly electronegative substituent (such as fluorine) near the amide nitrogen favors the single-bonded form, by competing with the amide oxygen to "steal" an electron from the amide nitrogen (See Figure 4.)
Cis/trans isomers of the peptide group
The partial double bond renders the amide group planar, occurring in either the cis or trans isomers. In the unfolded state of proteins, the peptide groups are free to isomerize and adopt both isomers; however, in the folded state, only a single isomer is adopted at each position (with rare exceptions). The trans form is preferred overwhelmingly in most peptide bonds (roughly 1000:1 ratio in trans:cis populations). However, X-Pro peptide groups tend to have a roughly 3:1 ratio, presumably because the symmetry between the Cδ and Cα atoms of proline makes the cis and trans isomers nearly equal in energy (See figure, below).
The dihedral angle associated with the peptide group (defined by the four atoms Cα–C′–N–Cα) is denoted ω; ω ≈ 0° for the cis isomer and ω ≈ 180° for the trans isomer. Amide groups can isomerize about the C-N bond between the cis and trans forms, albeit slowly (roughly 20 seconds at room temperature). The transition state requires that the partial double bond be broken, so the activation energy is roughly 20 kcal/mol (See Figure below). However, the activation energy can be lowered (and the isomerization catalyzed) by changes that favor the single-bonded form, such as placing the peptide group in a hydrophobic environment or donating a hydrogen bond to the nitrogen atom of an X-Pro peptide group. Both of these mechanisms for lowering the activation energy have been observed in peptidyl prolyl isomerases (PPIases), which are naturally occurring enzymes that catalyze the cis-trans isomerization of X-Pro peptide bonds.
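The quoted barrier and timescale are mutually consistent, as a quick Eyring-equation estimate shows (assuming a transmission coefficient of 1 and room temperature; the result is an order-of-magnitude figure, not a measured value):

```python
import math

K_B = 1.380649e-23      # Boltzmann constant, J/K
H = 6.62607015e-34      # Planck constant, J*s
R_CAL = 1.98720425      # gas constant, cal/(mol*K)

def eyring_rate(dg_cal_per_mol, temp_k=298.0):
    """Eyring rate constant k = (kB*T/h) * exp(-dG/(R*T)), in 1/s."""
    return (K_B * temp_k / H) * math.exp(-dg_cal_per_mol / (R_CAL * temp_k))

k = eyring_rate(20_000.0)       # ~20 kcal/mol barrier
print(f"tau ~ {1/k:.0f} s")     # tens of seconds, matching the text
```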
Conformational protein folding is usually much faster (typically 10-100 ms) than cis-trans isomerization (10-100 s). A nonnative isomer of some peptide groups can disrupt the conformational folding significantly, either slowing it or preventing it from even occurring until the native isomer is reached. However, not all peptide groups have the same effect on folding; nonnative isomers of other peptide groups may not affect folding at all.
Owing to its resonance stabilization, the peptide bond is relatively unreactive under physiological conditions, even less than similar compounds such as esters. Nevertheless, peptide bonds can undergo chemical reactions, usually through an attack of an electronegative atom on the carbonyl carbon, breaking the carbonyl double bond and forming a tetrahedral intermediate. This is the pathway followed in proteolysis and, more generally, in N-O acyl exchange reactions such as those of inteins. When the functional group attacking the peptide bond is a thiol, hydroxyl or amine, the resulting molecule may be called a cyclol or, more specifically, a thiacyclol, an oxacyclol or an azacyclol, respectively.
- Pauling L. (1960) The Nature of the Chemical Bond, 3rd. ed., Cornell University Press. ISBN 0-8014-0333-2
- Stein RL. (1993) "Mechanism of Enzymatic and Nonenzymatic Prolyl cis-trans Isomerization", Adv. Protein Chem., 44, 1-24.
- Schmid FX, Layr LM, Mücke M and Schönbrunner ER. (1993) "Prolyl Isomerases: Role in Protein Folding", Adv. Protein Chem., 44, 25-66.
- Fischer G. (1994) "Peptidyl-Prolyl cis/trans Isomerases and Their Effectors", Angew. Chem. Int. Ed. Engl., 33, 1415-1436.
|This page uses Creative Commons Licensed content from Wikipedia (view authors).|
Why Use PHP?
There are thousands of programming languages available, and new ones are created every year. But only a few of these have become sufficiently popular to be used by many people. One of them is PHP, or hypertext preprocessor. It is a general-purpose server-side scripting language designed for web development. It is installed on more than 20 million web sites and 1 million web servers. Microsoft’s Active Server Pages, or ASP, a server-side script engine and a language similar to PHP, is one of PHP’s competitors. Both are programming languages commonly used to create websites. Both produce pages more dynamic than the usual static HTML web pages and can allow users to interact and exchange information.
Here are some factors to consider in choosing between PHP and ASP. ASP needs a Microsoft server for the website to work and requires an ASP-Apache module to be installed for it to work on a Linux platform. PHP, on the other hand, runs on Windows, Solaris, Linux or Unix and NT servers; that is part of why PHP is more popular than ASP. PHP uses C/C++ as its base language and most syntax is similar, while ASP is very similar to the syntax and interface of Visual Basic programming. When it comes to cost and expenses, ASP is more expensive, as it requires one to purchase Windows with IIS installed on the server and an MS-SQL database to work, while PHP needs only a Linux server and connects to several databases, one of which is MySQL; both are free. As for speed, PHP runs in its own memory space, which makes its loading speed quicker compared to ASP, which runs through an overhead server. Most tools that work with PHP are open source software, while one may need to purchase additional tools to work with ASP.
Clearly, PHP has dominated when it comes to compatibility, popularity, cost, flexibility and a lot more, but as the user of the program, the choice is yours.
1 Introduction to MATLAB Classes
In MATLAB, like most object-oriented programming languages, every value is an instance of a class. Typically the classes are double, single, char, cell, etc. (The class of a given value or object can be determined using the built-in MATLAB command class.)
For many repetitive tasks building your own classes makes sense. In my line of work we build a lot of very similar models and post-process most of them in a very similar manner. However, we may go a year or more between some of these proposal efforts meaning that many of the MATLAB implementation issues have been forgotten. As a result building the model and presenting the results usually took much longer than it should simply because we had to relearn a number of little gotchas. So recently I built a series of simple classes that encapsulate that knowledge and when we want the result we call the appropriate class method which already does all of the interpolation and data massaging that is required.
The following subsections will briefly touch the various parts of any class definition in MATLAB.
2 MATLAB Class Properties
The first thing you must define for any class object are the properties. The three primary property attributes are:
There are several variations that can be applied.
Public properties are properties that show up when the object is displayed. These properties are viewable and settable from the command prompt. Public properties are what most people think of when they think of object properties.
Private properties are designated by
properties (SetAccess = private)
and these properties are visible but read-only from the command prompt. The values of these properties can only be set by class methods though.
Finally Hidden properties are not visible from the class display method. If you know the property name you can both read and write to that property's value from the command prompt.
For more detailed information go here: MATLAB 2009a Class Property Attributes
3 MATLAB Class Methods
After the properties are defined the methods must be defined. There are 4 primary method attributes:
The Public and Hidden designations for methods are just like Public and Hidden properties. For Sealed methods a subclass cannot redefine the method. Static methods do not require an object argument.
For more detailed information go here: MATLAB 2009a Class Method Attributes
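A minimal classdef sketch tying the property and method attributes above together (the class name and members are made up for illustration):

```matlab
classdef Rocket
    properties                        % public: visible and settable anywhere
        Name = 'unnamed';
    end
    properties (SetAccess = private)  % visible but read-only from outside
        Mass = 0;
    end
    properties (Hidden)               % omitted from the default display
        DebugFlag = false;
    end
    methods
        function obj = Rocket(name, mass)
            obj.Name = name;
            obj.Mass = mass;          % private SetAccess is fine inside a method
        end
    end
    methods (Static)                  % callable without an object argument
        function s = unitSystem()
            s = 'SI';
        end
    end
end
```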
4 MATLAB Class Events
Events are an advanced topic: you create a listener for an event and an action to perform when the listener is triggered. The action can trigger other events, and without careful design you can trigger events you don't intend to trigger.
For more detailed information go here: MATLAB 2009a Class Event Properties
Created on Day 5
May 24, 2012
Scientists have discovered that coral has layers, and some scientists have measured these layers claiming that they show slow growth over long periods of time. However, it has been shown that coral can grow quickly. With this knowledge, even the “oldest” reefs would only be a few thousand years old. This is consistent with a young earth, created only about 6,000 years ago.
- Coral colonies grow in many shapes and come in many colors; there are three kinds of coral reefs: simple fringing reefs, barrier reefs, and atolls.
- A coral polyp is actually a small, marine invertebrate that lives in a large colony. A polyp looks like a tiny, upside down jellyfish. Some corals have eight tentacles, while others have them in multiples of six.
- Coral polyps are connected to one another in large colonies, sharing nutrients and even other organisms in a mutualistic relationship. These large colonies are called reefs.
- Most corals have a beneficial relationship with zooxanthellae (photosynthetic, one-celled algae). The oxygen and sugar produced by the algae within the coral polyp stimulates it to produce calcium carbonate (reef material). The coral produces carbon dioxide and waste products (nutrient-rich fertilizer) that the zooxanthellae use.
- Corals use their tentacles to trap prey, stinging it with cells called nematocysts.
CLASS: Anthozoa (corals and sea anemones)
SUBCLASS: Alcyonaria (soft) and Zoantharia (stony)
ORDER: Nine orders within the two subclasses
GENUS/SPECIES: Over 5,000 different species
Size: A few millimeters
Depth: Up to 200 ft (60 m); one rare species found as deep as 9,800 ft (3,000 m)
Diet: Zooplankton, small fishes, organic debris
Habitat: In shallow, warm waters worldwide
beryllium (Be)
beryllium (Be), formerly (until 1957) glucinium, chemical element, the lightest member of the alkaline-earth metals of Group 2 (IIa) of the periodic table, used in metallurgy as a hardening agent and in many outer space and nuclear applications.
Occurrence, properties, and uses
Beryllium is a steel-gray metal that is quite brittle at room temperature, and its chemical properties somewhat resemble those of aluminum. It does not occur free in nature. Beryllium is found in beryl and emerald, minerals that were known to the ancient Egyptians. Although it had long been suspected that the two minerals were similar, chemical confirmation of this did not occur until the late 18th century. Emerald is now known to be a green variety of beryl. Beryllium was discovered (1798) as the oxide by French chemist Nicolas-Louis Vauquelin in beryl and in emeralds and was isolated (1828) as the metal independently by German chemist Friedrich Wöhler and French chemist Antoine A.B. Bussy by the reduction of its chloride with potassium. Beryllium is widely distributed in Earth’s crust and is estimated to occur in Earth’s igneous rocks to the extent of 0.0002 percent. Its cosmic abundance is 20 on the scale in which silicon, the standard, is 1,000,000.
There are about 30 recognized minerals containing beryllium, including beryl (Al2Be3Si6O18, a beryllium aluminum silicate), bertrandite (Be4Si2O7(OH)2, a beryllium silicate), phenakite (Be2SiO4), and chrysoberyl (BeAl2O4). (The precious forms of beryl, emerald and aquamarine, have a composition closely approaching that given above, but industrial ores contain less beryllium; most beryl is obtained as a by-product of other mining operations, with the larger crystals being picked out by hand.) Beryl and bertrandite have been found in sufficient quantities to constitute commercial ores from which beryllium hydroxide or beryllium oxide is industrially produced. The extraction of beryllium is complicated by the fact that beryllium is a minor constituent in most ores (5 percent by mass even in pure beryl, less than 1 percent by mass in bertrandite) and is tightly bound to oxygen. Treatment with acids, roasting with complex fluorides, and liquid-liquid extraction have all been employed to concentrate beryllium in the form of its hydroxide. The hydroxide is converted to fluoride via ammonium beryllium fluoride and then heated with magnesium to form elemental beryllium. Alternatively, the hydroxide can be heated to form the oxide, which in turn can be treated with carbon and chlorine to form beryllium chloride; electrolysis of the molten chloride is then used to produce the metal. The element is purified by vacuum melting.
Beryllium is the only stable light metal with a relatively high melting point. Although it is readily attacked by alkalies and nonoxidizing acids, beryllium rapidly forms an adherent oxide surface film that protects the metal from further air oxidation under normal conditions. These chemical properties, coupled with its excellent electrical conductivity, high heat capacity and conductivity, good mechanical properties at elevated temperatures, and very high modulus of elasticity (one-third greater than that of steel), make it valuable for structural and thermal applications. Beryllium’s dimensional stability and its ability to take a high polish have made it useful for mirrors and camera shutters in space, military, and medical applications and in semiconductor manufacturing. Because of its low atomic weight, beryllium transmits X-rays 17 times as well as aluminum and has been extensively used in making windows for X-ray tubes. Beryllium is fabricated into gyroscopes, accelerometers, and computer parts for inertial guidance instruments and other devices for missiles, aircraft, and space vehicles, and it is used for heavy-duty brake drums and similar applications in which a good heat sink is important. Its ability to slow down fast neutrons has found considerable application in nuclear reactors.
Much beryllium is used as a low-percentage component of hard alloys, especially with copper as the main constituent but also with nickel- and iron-based alloys, for products such as springs. Beryllium-copper (2 percent beryllium) is made into tools for use when sparking might be dangerous, as in powder factories. Beryllium itself does not reduce sparking, but it strengthens the copper (by a factor of 6), which does not form sparks upon impact. Small amounts of beryllium added to oxidizable metals generate protecting surface films, reducing inflammability in magnesium and tarnishing in silver alloys.
Neutrons were discovered (1932) by British physicist Sir James Chadwick as particles ejected from beryllium bombarded by alpha particles from a radium source. Since then beryllium mixed with an alpha emitter such as radium, plutonium, or americium has been used as a neutron source. The alpha particles released by radioactive decay of radium atoms react with atoms of beryllium to give, among the products, neutrons with a wide range of energies—up to about 5 × 106 electron volts (eV). If radium is encapsulated, however, so that none of the alpha particles reach beryllium, neutrons of energy less than 600,000 eV are produced by the more-penetrating gamma radiation from the decay products of radium. Historically important examples of the use of beryllium/radium neutron sources include the bombardment of uranium by German chemists Otto Hahn and Fritz Strassmann and Austrian-born physicist Lise Meitner, which led to the discovery of nuclear fission (1939), and the triggering in uranium of the first controlled-fission chain reaction by Italian-born physicist Enrico Fermi (1942).
The only naturally occurring isotope is the stable beryllium-9, although 11 other synthetic isotopes are known. Their half-lives range from 1.5 million years (for beryllium-10, which undergoes beta decay) to 6.7 × 10−17 second for beryllium-8 (which decays by two-proton emission). The decay of beryllium-7 (53.2-day half-life) in the Sun is the source of observed solar neutrinos.
An abnormally cool Arctic is seeing dramatic changes to ice levels. In sharp contrast to the rapid melting seen last year, the amount of global sea ice has rebounded sharply and is now growing rapidly. The total amount of ice, which set a record low value last year, grew in October at the fastest pace since record-keeping began in 1979.
The actual amount of ice area varies seasonally from about 16 to 23 million square kilometers. However, the mean anomaly-- defined as the difference between the current area and the seasonally-adjusted average-- changes much slower, and generally varies by only 2-3 million square kilometers.
That anomaly had been negative, indicating ice loss, for most of the current decade and reached a historic low in 2007. The current value is again zero, indicating an amount of ice exactly equal to the global average from 1979-2000.
Bill Chapman, a researcher with the Arctic Climate Center at the University of Illinois, says the rapid increase is "no big deal". He says that, while the Arctic has certainly been colder in recent months, the long-term decrease is still ongoing. Chapman, who predicts that sea ice will soon stop growing, sees nothing in the recent data to contradict predictions of global warming.
Others aren't quite so sure. Dr. Patrick Michaels, Professor of Environmental Science at the University of Virginia, says he sees some "very odd" things occurring in recent years. Michaels, who is also a Senior Fellow with the Cato Institute, tells DailyTech that, while the behavior of the Arctic seems to agree with climate model predictions, the Southern Hemisphere can't be explained by current theory. "The models predict a warming ocean around Antarctica, so why would we see more sea ice?" Michaels adds that large areas of the Southern Pacific are showing cooling trends, an occurrence not anticipated by any current climate model.
On average, ice covers roughly 7% of the ocean surface of the planet. Sea ice is floating and therefore doesn't affect sea level like the ice anchored on bedrock in Antarctica or Greenland. However, research has indicated that the Antarctic continent -- which is on a long-term cooling trend -- has also been gaining ice in recent years.
The primary instrument for measuring sea ice today is the AMSR-E microwave radiometer, an instrument package aboard NASA's AQUA satellite. AQUA was launched in 2002, as part of NASA's Earth Observing System (EOS).
quote: Computer analyses of global climate have consistently overstated warming in Antarctica, concludes new research by scientists at the National Center for Atmospheric Research (NCAR) and Ohio State University
quote: A new report on climate over the world's southernmost continent shows that temperatures during the late 20th century did not climb as had been predicted by many global climate models.
quote: Climate models generally predict amplified warming in the polar regions...In the Antarctic, over the past half-century there has been a marked warming trend in the Antarctic Peninsula.**** Elsewhere there is a general but not unambiguous warming trend
quote: Antarctica doesn't agree with model predictions.
quote: The term Anthropocene, proposed and increasingly employed to denote the current interval of anthropogenic global environmental change, may be discussed on stratigraphic grounds. A case can be made for its consideration as a formal epoch in that, since the start of the Industrial Revolution, Earth has endured changes sufficient to leave a global stratigraphic signature distinct from that of the Holocene or of previous Pleistocene interglacial phases, encompassing novel biotic, sedimentary, and geochemical change. These changes, although likely only in their initial phases, are sufficiently distinct and robustly established for suggestions of a Holocene–Anthropocene boundary in the recent historical past to be geologically reasonable. The boundary may be defined either via Global Stratigraphic Section and Point (“golden spike”) locations or by adopting a numerical date. Formal adoption of this term in the near future will largely depend on its utility, particularly to earth scientists working on late Holocene successions
quote: I would think this is an all or nothing scenario.... Either the predictions are right or they are not.
quote: You want to completely throw out predictions which said that Antarctica was warming and accept all the portions of them that say the entire earth is warming?
quote: There was a comment earlier about 'cherry picking' and I believe this conforms to that comment.
quote: Amazing. Everything has to be black or white, right or left, with us or against us. What a wonderful mindset.
quote: Anyone can edit any article
quote: The Jeffersonian philosophy that animates Cato's work has increasingly come to be called "libertarianism" or "market liberalism." It combines an appreciation for entrepreneurship, the market process, and lower taxes with strict respect for civil liberties and skepticism about the benefits of both the welfare state and foreign military adventurism.
quote: If only you knew how *stupid* you look to those of us with a working intellect.... | <urn:uuid:09e4272c-d3ea-4587-9827-7bd99ff6d184> | 3.515625 | 1,030 | Comment Section | Science & Tech. | 35.713183 |
Chemical bonding > Bicyclic
Organic chemistry > Bicyclic
Naming Bicyclic Alkanes according to IUPAC Rules
This is a video lesson about how to name bicyclic alkanes according to IUPAC rules. It is meant to accompany a worksheet that should be solved alongside the ...
Naming Cycloalkanes and Bicyclo Alkanes
Download My ebook: "10 Secrets To Acing Organic Chemistry" http://forms.aweber.com/form/32/1504658732.htm http://leah4sci.com/organicchemistry/ presents: Nam...
Cis and Trans on Bicyclic Alkanes: Von-Baeyer Nomenclature
Cis and Trans on Bicyclic Alkanes: Von-Baeyer Nomenclature http://organic.intergrader.net.
Challenge Bicyclic : Blegny-Mine
Reconnaissance du circuit.
Endo, Exo, Syn and Anti on Bicyclic Alkanes
Endo, Exo, Syn and Anti on Bicyclic Alkanes http://organic.intergrader.net.
Challenge Bicyclic : Grand-Rechain
Someone Impostor -- Bicyclic
Someone Impostor performing the song Bicyclic live at Open Podium De Leidsche Flesch, theater Ins Blau, Leiden.
East bicyclic high-grade apartment
http://wpost.in/p/9A555941-8A49-AFA1-BA11-BE136FE80D8E Briefing rooms in the houseStylish and cozy apartment renovation, home appliances readily available, n...
Copyright © 2009-2013 Digparty. All rights reserved.
Endangered Mammals of Antarctica
Corals, Jellyfish, and Sea Anemones
Click on the species name to view its profile.
What mammals are listed here? The Antarctica section of this site lists the following mammals appearing on select endangered species lists:
- Mammals that dwell in or migrate to Antarctica.
- Mammals of the oceans that travel to the Antarctic (Southern) Oceans.
In some cases, the creatures listed in this section may also be found on other continents or in other areas, which is why you may see additional areas noted in the range column.
This list combines species from several endangered species lists.
Using the total count of species found on this site as an official count of endangered species of the world is not recommended. For more information on what creatures are listed on this site, please visit our
C# LINQ Query Keywords
LINQ queries are language-integrated queries that enable you to work with data items retrieved from any kind of data source. Like other database query languages, LINQ contains a few contextual keywords that instruct the compiler which action to perform. LINQ uses the most common clause types found in SQL-based query languages, such as where, select and from, along with aggregate functions such as sum, min and max. Here is a list of LINQ query keywords that provide similar functions and can be used in C# code for ASP.Net web pages:
1. from: It specifies the data source and a range variable that belongs to each element of data source sequence.
2. where: It filters the results based on the Boolean expressions specified using logical AND and OR operations such as && and ||.
3. select: It specifies the type and shape of the returned result that will be retrieved after executing the query expression.
4. group: It groups the results sequence based on the specified key value
5. into: It provides the identifier for the results sequence retrieved from the group, select or join clause.
6. orderby: It sorts the results sequence in ascending or descending order according to the default comparer for the element type.
7. join: It joins the data sources based on the relational comparison between specified matching criteria.
8. let: It enables you to declare a range variable to store the sub-expression results inside the query expression.
9. in: It is used in join and from clause that points to each element of the data source sequence by applying iterations over them.
10. on: It is used in join clause to specify the comparison test between two data source elements.
11. equals: It is used in join clause to compare the two data source elements.
12. by: It is used in group clause of query expression to specify the grouping criteria.
13. ascending: It is used with orderby clause that instructs the compiler to sort the results sequence from smallest to largest.
14. descending: It is used with orderby clause that instructs the compiler to sort the results sequence from largest to smallest.
Continue to next tutorial: LINQ from Clause Using C# in ASP.Net to learn how to use the from LINQ query keyword. | <urn:uuid:bff541f4-24f1-4013-b2db-4ad1246a0a18> | 2.71875 | 484 | Tutorial | Software Dev. | 56.008132 |
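The keywords above are C#-specific, and this page gives no runnable sample. As a rough structural analogue only (Python, not C# or LINQ itself, with invented product data), a from / where / orderby descending / select query corresponds to a comprehension over a filtered, sorted sequence:

```python
# The C# query expression
#
#     from p in products
#     where p.Price > 10
#     orderby p.Price descending
#     select p.Name
#
# filters, sorts, and projects a sequence. The same shape in Python:
products = [
    {"name": "pen", "price": 2},
    {"name": "book", "price": 15},
    {"name": "lamp", "price": 40},
]

result = [p["name"]                          # select: shape of the result
          for p in sorted(products,
                          key=lambda p: p["price"],
                          reverse=True)      # orderby ... descending
          if p["price"] > 10]                # where: Boolean filter

print(result)  # most expensive matching names first
```

The comprehension mirrors the clause order loosely; in real C#, the compiler translates the query keywords into method calls such as Where, OrderByDescending and Select.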
The past year provided a wealth of exciting data returned by several spacecraft exploring the solar system.
In October of 2001, the Mars Odyssey spacecraft joined the successful Mars Global Surveyor in orbit around Mars. Odyssey carries the first gamma-ray spectrometer and neutron detectors to map Mars, capable of detecting abundances of a number of elements in the martian regolith. Initial results show that the regolith poleward of 60 degrees south contains approximately 50 percent water within the upper meter, as do some areas in the northern plains of the planet (Boynton et al., Science online, May 30, 2002; 10.1126/science.1073722; Feldman et al., Science online, May 30, 2002; 10.1126/science.1073541; Mitrofanov, et al., Science online, May 30, 2002; 10.1126/science.1073616). Odyssey also carries the Thermal Emission Imaging System (THEMIS) instrument, which will map the entire planet at infrared wavelengths with a resolution of 100 meters per pixel and with visible light images at 18 meters per pixel. THEMIS is returning the first nighttime thermal images of the surface, which show variations in the thermal inertia of materials. It is anticipated that the 100-meter-resolution multispectral data will reveal variations in mineralogy previously obscured by dust in images of poorer resolution. The result should be a complete mineralogical map of Mars.
The different mineral compositions of the rocks, sediments and dust on the surface provide a colorful infrared image of geologic layering on Mars. This image shows a portion of Candor Chasma, a canyon within the great Valles Marineris system of canyons, at approximately 5 degrees south latitude, 285 degrees east (75 degrees west) longitude. The area shown is approximately 30 by 175 kilometers (19 by 110 miles). The image combines exposures taken by Odyssey's thermal emission imaging system at three different wavelengths of infrared light: 6.3 microns, 7.4 microns and 8.7 microns.
The Mars Global Surveyor (MGS) craft is in the second year of its extended mission and continues to collect meter-resolution images and topography of the planet. From July to September 2001, dust storms obscured the majority of the planet and imaging efforts were concentrated on the South Pole, where the atmosphere was clear. A comparison of images of the carbon dioxide-rich ice cap showed very fast sublimation rates of ice over the last two years (Malin et al., Science v. 294, p. 2146). Topographic measurements showed that similar amounts of ice sublimated seasonally from the northern cap as well (Smith et al., Science v. 294, p. 2141). Such high sublimation rates may significantly alter atmospheric pressures on Mars over hundreds or thousands of years, with consequences for the stability of liquid water on the surface and the location and erosive power of winds.
Another study models the amount of liquid water that may be produced by melting of ground ice in the martian regolith when the planet is at high obliquity (a 10^5-year timescale), which results in increased surface temperatures and pressure. This work suggests that recent gullies on Mars may be relicts from the last era of high obliquity (Costard et al., Science v. 295, p. 110). In addition to studies of martian hydrology, continuing analyses of MGS data improve models of martian gravity, magnetics, atmosphere and geologic history. Several such studies are included in the Oct. 25, 2001, issue of the Journal of Geophysical Research--Planets (JGR-Planets).
Planning continues for selecting sites for the 2003 Mars Exploration Rovers. A top candidate is an unusual, Texas-sized area of the surface containing grey, crystalline hematite, which on Earth is often formed by precipitation from hydrothermal fluids.
The Galileo spacecraft performed its last close flyby of Io in January of this year, placing the craft on a trajectory to crash into Jupiter in 2003. Io continues to erupt furiously, and the study of Galileo visible and near-infrared data over several years reveals that Io's volcanoes erupt in varying styles: steady and long-lived vs. short-lived, intense eruptions and ultrabasic vs. basic lavas (evidence for sulfur flows remains ambiguous). The images also lend insight into the tectonics of ionian mountains and mass wasting. The Dec. 25, 2001, issue of JGR-Planets contains many of these studies. Analyses continue on the surface morphology and magnetic fields of the other Galilean moons, each of which may currently have or at one time had an underground ocean.
A recent study (Schenk, Nature v. 417, p. 419) uses impact craters to constrain the thickness of the icy crust of Europa to a minimum of 19 kilometers, which may preclude exchange between the ocean and the surface, and will make future exploration of the europan ocean more difficult.
On Sept. 22, 2001, the Deep Space 1 mission passed by the nucleus of the comet 19P/Borrelly, returning visible and infrared data at a 200-meter resolution. The young, morphologically variable surface contains no evidence of water or hydrated minerals and has active dust jets (Soderblom et al., Science v. 296, 1087). A comet is also the target of the Stardust mission, currently on its way to intersect with comet Wild 2 in 2004, where it will collect particles in the coma and return them to Earth in 2006. Sample return of solar wind particles is the goal of the Genesis spacecraft, which was launched in August of 2001 and will return samples in 2004. These missions will significantly improve our understanding of the chemical and isotopic composition of interstellar dust, the outer solar system and the solar nebula. The curation and handling of these samples will also pave the way for other sample return missions, particularly those planned for Mars.
We continue to analyze data from the Moon and Venus and of meteorite chemistry to improve our knowledge of the context of the solar system. Terrestrial analogs of martian surfaces were the topic of field trips in the Mojave desert in October of 2001, sponsored by NASA and the Lunar and Planetary Institute. The planetary community looks forward to the arrival of the Cassini spacecraft at the Saturn system in 2004, where it will drop a probe (with cameras and a gas chromatograph/mass spectrometer) into the atmosphere of Titan. The MESSENGER mission to Mercury remains on track for launch in 2004 to explore this Moon-like world. It is anticipated that the Pluto-Kuiper Belt Mission will launch in 2006 to explore what could be the ninth planet. It looks to be another exciting decade in planetary geosciences as we test our understanding of geologic processes on other worlds.
These tracks contain cDNA and gene alignments produced by the TransMap cross-species alignment algorithm from other vertebrate species in the UCSC Genome Browser. For closer evolutionary distances, the alignments are created using syntenically filtered BLASTZ alignment chains, resulting in a prediction of the orthologous genes in human.

TransMap maps genes and related annotations in one species to another using synteny-filtered pairwise genome alignments (chains and nets) to determine the most likely orthologs. For example, for the mRNA TransMap track on the human assembly, more than 400,000 mRNAs from 25 vertebrate species were aligned at high stringency to the native assembly using BLAT. The alignments were then mapped to the human assembly using the chain and net alignments produced using blastz, which has higher sensitivity than BLAT for diverged organisms.
Compared to translated BLAT, TransMap finds fewer paralogs and aligns more UTR bases. For closely related low-coverage assemblies, a reciprocal-best relationship is used in the chains and nets to improve the synteny prediction.
This track follows the display conventions for PSL alignment tracks.
This track may also be configured to display codon coloring, a feature that allows the user to quickly compare cDNAs against the genomic sequence. For more information about this option, click here. Several types of alignment gap may also be colored; for more information, click here.
To ensure unique identifiers for each alignment, cDNA and gene accessions were made unique by appending a suffix for each location in the source genome and again for each mapped location in the destination genome. The format is:
accession.version-srcUniq.destUniq

where srcUniq is a number added to make each source alignment unique, and destUniq is added to give the subsequent TransMap alignments unique identifiers.
For example, in the cow genome, there are two alignments of mRNA BC149621.1. These are assigned the identifiers BC149621.1-1 and BC149621.1-2. When these are mapped to the human genome, BC149621.1-1 maps to a single location and is given the identifier BC149621.1-1.1. However, BC149621.1-2 maps to two locations, resulting in BC149621.1-2.1 and BC149621.1-2.2. Note that multiple TransMap mappings are usually the result of tandem duplications, where both chains are identified as syntenic.
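The suffix scheme can be illustrated mechanically; the parsing helper below is my own sketch, not part of the UCSC tools, applied to the BC149621.1 mappings described above:

```python
def parse_transmap_id(ident: str) -> dict:
    """Split a TransMap identifier of the form
    accession.version-srcUniq.destUniq into its parts."""
    acc_ver, _, uniq = ident.partition("-")
    accession, _, version = acc_ver.rpartition(".")
    src_uniq, _, dest_uniq = uniq.partition(".")
    return {"accession": accession, "version": int(version),
            "srcUniq": int(src_uniq), "destUniq": int(dest_uniq)}

# The two human mappings of cow mRNA alignment BC149621.1-2:
for ident in ("BC149621.1-2.1", "BC149621.1-2.2"):
    print(parse_transmap_id(ident))
```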
This track was produced by Mark Diekhans at UCSC from cDNA sequence data submitted to the international public sequence databases by scientists worldwide.
Zhu J, Sanborn JZ, Diekhans M, Lowe CB, Pringle TH, Haussler D. Comparative genomics search for losses of long-established genes on the human lineage. PLoS Comput Biol. 2007 Dec;3(12):e247.
Stanke M, Diekhans M, Baertsch R, Haussler D. Using native and syntenically mapped cDNA alignments to improve de novo gene finding. Bioinformatics. 2008 Mar 1;24(5):637-44.
Siepel A, Diekhans M, Brejová B, Langton L, Stevens M, Comstock CL, Davis C, Ewing B, Oommen S, Lau C et al. Targeted discovery of novel human exons by comparative genomics. Genome Res. 2007 Dec;17(12):1763-73. | <urn:uuid:aa0d7cf6-f9e5-41c4-8562-5d37572dc500> | 2.734375 | 754 | Knowledge Article | Science & Tech. | 52.420356 |
Steatoda nobilis, a species of comb-footed spider, characteristically hangs upside down in a web known as a tangle web. Typically, these webs are built high off the ground in a darker corner.
This group of spiders does not specialise on particular prey but are opportunistic and feed on almost any invertebrate that becomes entangled in their webs.
Body length (excluding legs):
Size (including legs):
In general, the life cycle lasts about 1-2 years and the females usually live longer than the smaller males.
The white, spherical egg-sacs are produced at intervals, their number depending on the food supply. Eggs hatch within 2-4 months.
Males usually achieve maturity within a single year and die shortly after as they cease to feed once mature.
Females achieve maturity within the second year and depending on environmental conditions and sufficient prey may live into the third year.
Mature and reproductive male spiders are found in summer and autumn, mature females can be found all year round and egg sacks are laid from spring through to autumn.
Local dispersal is achieved through ballooning on silk threads. Longer distance dispersal is aided by transportation of goods by road, rail and the shipping network. | <urn:uuid:4b3195bb-a798-4dba-a42f-5769c25d71f7> | 3.578125 | 254 | Knowledge Article | Science & Tech. | 42.654637 |
gluLookAt — define a viewing transformation
eyeX, eyeY, eyeZ: Specifies the position of the eye point.
centerX, centerY, centerZ: Specifies the position of the reference point.
upX, upY, upZ: Specifies the direction of the up vector.
gluLookAt creates a viewing matrix derived from an eye point, a reference point indicating the center of the scene, and an UP vector.
The matrix maps the reference point to the negative z axis and the eye point to the origin. When a typical projection matrix is used, the center of the scene therefore maps to the center of the viewport. Similarly, the direction described by the UP vector projected onto the viewing plane is mapped to the positive y axis so that it points upward in the viewport. The UP vector must not be parallel to the line of sight from the eye point to the reference point.
Let F = (centerX - eyeX, centerY - eyeY, centerZ - eyeZ) and let UP be the vector (upX, upY, upZ).

Then normalize as follows: f = F / ||F|| and UP' = UP / ||UP||.

Finally, let s = f × UP', normalized to unit length, and u = s × f.

M is then constructed as follows:

        |  s[0]  s[1]  s[2]  0 |
    M = |  u[0]  u[1]  u[2]  0 |
        | -f[0] -f[1] -f[2]  0 |
        |   0     0     0    1 |
gluLookAt is equivalent to
glMultMatrixf(M); glTranslated(-eyex, -eyey, -eyez);
Copyright © 1991-2006 Silicon Graphics, Inc. This document is licensed under the SGI Free Software B License. For details, see http://oss.sgi.com/projects/FreeB/. | <urn:uuid:1c89758b-6ebd-43cb-89aa-4e76cb6e38a8> | 2.828125 | 267 | Documentation | Software Dev. | 56.626952 |
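The construction of M, combined with the translation by the negated eye position, can be sketched in plain Python; this is a hand-rolled illustration, not the GLU implementation:

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def look_at(eye, center, up):
    """Row-major 4x4 matrix for M multiplied by the translation
    by (-eyex, -eyey, -eyez), as in the equivalence above."""
    f = normalize([c - e for c, e in zip(center, eye)])  # line of sight
    s = normalize(cross(f, normalize(up)))               # side vector s
    u = cross(s, f)                                      # recomputed up u
    rows = [s, u, [-x for x in f]]
    # the translation column of M * T(-eye) is each row dotted with -eye
    m = [r + [-sum(a * e for a, e in zip(r, eye))] for r in rows]
    m.append([0.0, 0.0, 0.0, 1.0])
    return m

# Eye at (0, 0, 5) looking at the origin with +y up: the result
# reduces to a pure translation of -5 along the z axis.
m = look_at([0, 0, 5], [0, 0, 0], [0, 1, 0])
```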
Coupled Ocean-Atmosphere Models
Scientists use computer models to help them understand the Earth. Scientists who study the atmosphere use computer models of the atmosphere. Some scientists who study the oceans use computer models of the seas. Some scientists study both the atmosphere and the oceans. Those scientists use a special kind of model that includes both the seas and the air. These combined models are called "coupled models".
What is the difference between a coupled model and a model that isn't coupled? Let's look at an example. Computer models of the atmosphere keep track of many things, like how much carbon dioxide (CO2) is in the air. Some parts of the model keep track of how much CO2 is added to the air. For example, burning coal and gasoline adds CO2 to the atmosphere. Other parts of the model keep track of how much CO2 is taken out of the air. One way this happens is that oceans absorb CO2 from the air. In a normal uncoupled model of the atmosphere, scientists don't keep track of how much CO2 ends up in the oceans. The oceans don't change in that kind of model.
A coupled model is different. Changes in the atmosphere do cause changes in the ocean. Changes in the ocean part of the model can cause changes in the atmosphere part. So if lots of carbon dioxide moved from the atmosphere to the ocean, the ocean might get "full" of CO2. It might not be able to hold any more. Or it might take in more CO2 very slowly.
As you might guess, coupled models can be more realistic. So why don't scientists always use coupled models instead of uncoupled models? Coupled models are very, very complicated. It takes a lot of work to make sure the answers from them are right. It also takes a long time to run coupled models, even on fast supercomputers. Sometimes uncoupled models are good enough for certain types of problems. Other times scientists really need to use the more complex coupled models. | <urn:uuid:a5ff6255-41e4-4995-80da-0f1c1ce106e0> | 3.265625 | 415 | Knowledge Article | Science & Tech. | 62.358267 |
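The CO2 example above can be caricatured as a two-box model; every number and the saturation rule below are invented purely to illustrate the coupling idea:

```python
# Two-box toy model: the atmosphere receives emissions each step, and
# the ocean absorbs part of the atmospheric CO2; its uptake slows as it
# fills toward a made-up capacity, which is the "coupling".
atmos, ocean = 100.0, 0.0   # arbitrary CO2 units
capacity = 50.0             # invented ocean capacity
for year in range(10):
    atmos += 5.0                                    # emissions
    uptake = 0.1 * atmos * (1 - ocean / capacity)   # ocean responds
    atmos -= uptake
    ocean += uptake
# An uncoupled atmosphere model would instead subtract a fixed uptake
# that never reacts to how full the ocean already is.
print(round(atmos, 1), round(ocean, 1))
```

However crude, the sketch shows the defining feature of a coupled model: the ocean state feeds back into the next step of the atmosphere calculation.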
EXAMPLES OF FUNCTIONS
The examples given here:
- are basic types of functions that are used everywhere in abstract math
- are functions you need to be familiar with
- are (sigh) boring.
For any set A, the identity function id_A : A → A is the function that takes an element to itself; in other words, for every element a ∈ A, id_A(a) = a. Its graph is the diagonal of A × A.
- Warning: The word identity has two other commonly used meanings. This causes trouble because people may refer to the identity function as simply “the identity”, especially in conversation.
- The notation id_A for the identity function on A is fairly common, but so is 1_A.
- The identity function on a set A is the function that does nothing to each element of A.
- The identity function on ℝ is the familiar function defined by id(x) = x. Its graph in the plane is the diagonal line from lower left to upper right through the origin. Its derivative is the constant function g defined by g(x) = 1.
¨ There is a different identity function for each different set. See overloaded notation. These functions all have the “same” formula: id_A(a) = a for every a in A. But they are technically different functions because they have different domains.
If A ⊆ B (see inclusion), then there is an inclusion function inc : A → B that takes every element in A to the same element. In other words, inc(a) = a for every element a ∈ A. This fits the property of codomain that requires (in this case) that inc(a) ∈ B, because that is what “A ⊆ B” means: every element of A is an element of B.
¨ The notation id_A for the identity function on A shows which set A we are using, but the notation “inc” does not show either set.
¨ The notation “inc” is my own and is not common. Other notations I have seen are: and
¨ Many mathematicians who use the looser definition of function never talk about the inclusion function. For them, it is merely the identity function.
¨ The definition says the inclusion function “takes every element in A to the same element.” I could have worded it this way:
“The inclusion function takes each element of A to the same element regarded as an element of B.”
This wording incorporates elements of “how you think about X” into the definition of X. This is loose and unrigorous. But I’ll bet a lot of readers would understand it more quickly that way!
¨ The graph of inc is the same as the graph of id_A and they have the same domain, so that the only difference between them is what is considered the codomain (A for id_A, B for the inclusion of A in B). So inc is different from id_A if you require that functions with different codomains be different (discussed here).
If A and B are nonempty sets and b is a specific element of B, then the constant function from A to B determined by b is the function that takes every element of A to b; that is, it sends a to b for all a ∈ A.
The notation is not common. There is no standard notation for constant functions.
¨ A constant function takes everything to the same thing. It has a one-track mind.
¨ A constant function from to has a horizontal line as its graph.
¨ The constant function is not the same thing as the element b of B.
If A is any set, there is exactly one function from the empty set ∅ to A. Such a function is an empty function. Its graph is empty, and it has no values. An identity function does nothing. An empty function has nothing to do.
If A and B are sets, there are two coordinate functions (or projection functions) p_1 : A × B → A and p_2 : A × B → B. Thus for a ∈ A and b ∈ B, p_1(a, b) = a and p_2(a, b) = b. (See cartesian product).
In general for an n-fold cartesian product, the function p_i : A_1 × … × A_n → A_i takes an n-tuple to its i-th coordinate.
¨ It is surjective if and only if A is empty or B is nonempty.
¨ If A (or B) is empty, then so is A × B. In that case each coordinate function is the empty function.
¨ For any set S, there are two different coordinate functions p_1 : S × S → S and p_2 : S × S → S. For example, if S is the set of real numbers, then p_1(x, y) = x and p_2(x, y) = y.
The i-th coordinate function may be denoted by p_i or sometimes proj_i (for projection).
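These basic examples are easy to mimic in code. A sketch in Python (the names identity, constant, p1, and p2 are mine, and Python tuples stand in for elements of a cartesian product):

```python
def identity(a):
    """The identity function: takes each element to itself."""
    return a

def constant(b):
    """Returns the constant function that sends every input to b."""
    def c(a):
        return b
    return c

def p1(pair):
    """First coordinate function on A x B."""
    a, b = pair
    return a

def p2(pair):
    """Second coordinate function on A x B."""
    a, b = pair
    return b
```

Note that constant(b) returns a whole function, which matches the point that the constant function is not the same thing as the element b itself.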
¨ The operation of adding two real numbers gives a binary operation + : ℝ × ℝ → ℝ.
¨ Subtraction is also a binary operation on the real numbers. Observe that, unlike addition, it cannot be regarded as a binary operation on the positive real numbers.
¨ Multiplication of real numbers is also a binary operation · : ℝ × ℝ → ℝ.
¨ Division is not a binary operation on the real numbers because you can’t divide by 0. However, it is a binary operation on the nonzero real numbers (ℝ* is standard notation for the nonzero reals). You could also look at the function / : ℝ × ℝ* → ℝ, since 0 / y is defined even though y / 0 is not. But it is not a binary operation, because by definition a binary operation has to fit the pattern S × S → S, where all three sets are the same.
¨ For any set S, the two projections p_1 : S × S → S and p_2 : S × S → S are both binary operations on S.
¨ With a binary operation symbol, infix notation is usually used: the name of the binary operation is put between the arguments. For example we write 3 + 5 = 8, not +(3, 5) = 8.
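In code, a binary operation on S is just a two-argument function whose inputs and output all live in S, and infix notation is syntactic sugar for it. A small sketch (the sample values are mine):

```python
import operator

add = lambda x, y: x + y   # a binary operation on the reals
sub = lambda x, y: x - y   # also a binary operation on the reals

# Infix "3 + 5" is the prefix call add(3, 5) under the hood.
infix_matches_prefix = (3 + 5) == operator.add(3, 5)

# Subtraction is NOT a binary operation on the positive reals:
escaped = sub(2.0, 5.0)    # both inputs are positive, output is -3.0
```

The last line shows closure failing: two positive inputs produce an output outside the positive reals.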
In this section I give you examples of really weird functions that you may never have thought of as functions before, because if you are a beginner in abstract math, you probably need to stretch your idea of what a function can be.
Other consciousness-expanding examples of functions are listed in an appendix.
Let be defined by
The graph of this function is pictured on the right.
¨ F is given by a split definition. It is defined by one formula for part of its domain and by another on the rest. F is nevertheless one function, defined on the closed interval [0,1].
¨ F is discontinuous at x = 0.5.
¨ F does not have a derivative at x = 0.5.
¨ The graph does not and cannot show the precise behavior of the function near x = 0.5.
The point is on the graph, because the definition of F says that F(x) is for x
any point x to the right
c. but .
d. Nevertheless, F(
¨ It would be wrong to say something like: “ starting at the first point to the right of x = 0.5”. There is no first point to the right of x = 0.5. See density.
Let the function F be defined on the set as follows: .
F is defined only for inputs
¨ F is not injective since F(1) = F(2).
¨ F is not defined by a formula. F(2) = 3 because the definition says it is.
¨ F could be defined by the formula for . (This is given by an interpolation formula (MW, Wik)). But it is not obligatory that a function be defined by a formula, only that a mathematical definition of the function be given. See Conceptual and Computational.
¨ You could give the function as a table, as in (a).
¨ You can show the function in a picture, with arrows going from each input to its output, as in (b).
Another finite function is studied here.
Let S be some set of English words, for example the set of words in a given dictionary. Then the length of a word is a function; call it L.
¨ L takes words as inputs.
¨ L outputs the number of letters in the word. For example, the length of the word “cat” is 3.
¨ L is not injective. For example, “cat” and “dog” have the same length.
¨ L is not surjective onto the set of all positive integers, since there is a longest word in the set of words in any dictionary.
¨ This function illustrates the fact that a function can have one kind of input and another kind of output.
¨ There is a method of computation for this function (count the number of letters) but most people would not call it a formula.
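The counting method can be written in one line; the example words are my own. Note that the input is a string and the output is an integer, so the input and output really are different kinds of things:

```python
def word_length(word):
    """Word in, number of letters out."""
    return len(word)
```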
Let F(n) be the n-th prime number. There is a procedure for calculating F(n): list the primes in order until you have n of them; the last one listed is F(n).
For a huge input n, F(n) is still the n-th prime in order, but there is no way in the world you will ever find out its decimal representation. There are faster methods for calculating F(n), in particular the sieve method, but for big enough n the number is so humongous that no method could calculate it in anyone’s lifetime. See Conceptual and Computational.
Note that we know F is injective even though we can’t calculate its value for large n.
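Here is the naive "list the primes in order" procedure as code. It is fine for small n and hopeless for astronomically large inputs, which is exactly the conceptual-versus-computational point:

```python
def nth_prime(n):
    """Return the n-th prime (nth_prime(1) == 2), by trial division."""
    count, candidate = 0, 1
    while count < n:
        candidate += 1
        if all(candidate % d != 0 for d in range(2, int(candidate ** 0.5) + 1)):
            count += 1
    return candidate
```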
for real numbers . Its graph is shown to the right. It has asymptotes (shown in green) at .
Since E(x) is a continuous function on the interval, this integral exists for every t. So G(t) is a properly defined function of the real variable t.
¨ G is a function of t, not of x. The variable x is a bound variable (dummy variable) used in the integral. The definition of G(t) therefore depends on the value of E(x) for every value of x from 0 to t (or from t to 0). After all, the integral is the area under the curve between those values of x, so every little twist in the curve matters.
¨ If you try to use methods you learned in Calc 1 to find the indefinite integral of
with respect to x, you will fail. It’s known that this integral cannot be expressed in terms of familiar functions (polynomials, rational functions, log, exp, trig functions.) Nevertheless, for all real t with , the integral
exists and and has a specific value.
¨ The definition of G(t) makes it very easy to find the derivative (!): by the Fundamental Theorem of Calculus, G′(t) = E(t).
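The Fundamental Theorem of Calculus can be watched at work numerically. The exact formula for E(x) did not survive above, so as a stand-in this sketch uses E(x) = exp(-x²), another function whose antiderivative cannot be written in terms of familiar functions; the integral still has definite values, and differentiating the integral recovers the integrand:

```python
import math

def E(x):
    # Stand-in integrand: exp(-x^2) has no elementary antiderivative.
    return math.exp(-x * x)

def G(t, n=2000):
    """Trapezoidal approximation of the integral of E from 0 to t."""
    h = t / n
    total = 0.5 * (E(0.0) + E(t))
    for i in range(1, n):
        total += E(i * h)
    return total * h

t, h = 0.7, 1e-4
derivative = (G(t + h) - G(t - h)) / (2 * h)   # numerically close to E(t)
```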
Let F(x) = 1 if x is rational and F(x) = 0 if x is irrational, for all real x.
¨ F(1/3) = 1 and F(42) = 1, but F(π) = 0 because π is not rational.
¨ If all you know about x is that it is 3.14159 correct to five decimal places, then you don’t know what F(x) is. No matter how many decimal places you are given for x, you cannot tell what F(x) is. You need to have other information about x (whether it is rational or irrational) to determine its value.
¨ This function is not continuous, and therefore does not have a derivative.
¨ This function is not injective.
¨ You can read more about this function here.
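You cannot implement this function on decimal approximations, and that is exactly the point. In the sketch below (the use of Python's Fraction type is my choice), an exact rational input gets F = 1, while a finite decimal like 3.14159 simply does not carry enough information to decide:

```python
from fractions import Fraction

def F(x):
    """1 on exactly represented rationals; None means 'cannot be determined'."""
    if isinstance(x, (int, Fraction)):
        return 1
    # A finite decimal approximation does not determine whether the
    # real number it approximates is rational or irrational.
    return None
```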
The frequency goes up rapidly as you get close to the y-axis from the left, since 1/x grows very rapidly in magnitude as x moves toward 0. Drawing the graph near the y-axis is impractical because the curve between x = 0 and x = any bigger number is infinitely long even though it occurs in a finite interval.
Let f be a function that has a derivative, and let D(f) be its derivative. Then D is a function from a set of functions to a set of functions.
¨ If f(x) = x², then D(f)(x) = 2x; or, using barred arrow notation, D takes x ↦ x² to x ↦ 2x.
¨ If g(x) = x² + 1, then D(g)(x) = 2x as well; or, D takes x ↦ x² + 1 to x ↦ 2x.
These are pictured below.
D takes a function as input and outputs another function, namely the derivative of the first one. The whole function is the input, not some value of the function, not the rule that defines the function, not the graph. You have to think of the function as a thing, in other words as a math object.
¨ Functions whose inputs are complicated structures such as functions may be called operators. (Usage varies in different specialties.) This function D is the differentiation operator.
¨ The differentiation operator is not injective. For example, x² and x² + 1 have the same derivative, namely 2x.
¨ The domain of D must include only differentiable functions (duh).
In each picture, the differentiation operator takes the blue function thought of as a single math object to the red one.
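D can be mimicked as a higher-order function: a whole function goes in, and a whole function comes out. In this numerical sketch a central-difference quotient stands in for the exact derivative:

```python
def D(f, h=1e-6):
    """Approximate differentiation operator: function in, function out."""
    def derivative(x):
        return (f(x + h) - f(x - h)) / (2 * h)
    return derivative

square = lambda x: x * x
square_plus_1 = lambda x: x * x + 1

d1 = D(square)          # approximately the function x -> 2x
d2 = D(square_plus_1)   # essentially the same output: D is not injective
```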
More pictures here like those above
Global Climate Change and Implications for Future Plant Disease Epidemics
Department of Crop Sciences
Human activities have altered the earth’s atmospheric composition, increasing levels of both carbon dioxide and ozone. Levels of these two gasses are predicted to continue to climb well into the 21st century. By the year 2050, carbon dioxide is expected to reach levels double those of the preindustrial era, and ozone levels are increasing by as much as 2.5% a year. Plant diseases are the result of interactions between susceptible host plants, virulent pathogens, and favorable environments. So it is logical to presume that changes in the climate will have an effect on the types and severity of plant diseases, including those on agricultural crops.
Over the past decade, climate change experiments have been used to look at the impact of elevated carbon dioxide levels (CO2 ), ozone (O3) levels, and atmospheric temperatures on crop growth and performance, and, to some extent, to evaluate the effect that changing climate conditions will have on the development of plant diseases. These studies have shown that some plant disease epidemics are likely to be more severe, some less severe, and some diseases will not be greatly impacted by the predicted changes.
While elevated CO2 levels might have a direct effect on plant pathogens, they are most likely to have an impact on plant diseases by the changes they cause in the plant hosts. Plants growing in a high CO2 environment tend to grow faster and larger, and they have denser canopies. These dense plant canopies favor the development of some diseases because the low light levels and reduced air circulation allow higher relative humidity levels to develop, and this promotes the growth and sporulation of many plant pathogens. However, plants grown in high CO2 environments also close their stomata more of the time. Stomata are the pores in the leaves that allow the plant to take in CO2 and give off oxygen. Some plant pathogens enter the plant through the stomata, and if the stomata are not open, the pathogen has a more difficult time getting into the plant. Plants grown in high ozone environments tend to be shorter, with less dense canopies, which slows the development of some diseases because the more open, less humid canopy slows the growth and reproduction of certain pathogens. However, O3 also damages plant tissues, which helps some pathogens to infect the plant. So both elevated levels of CO2 and O3 can make a plant more susceptible to some diseases, but less susceptible to others, and this is exactly what has been observed in climate change experiments.
Rising temperatures and changes in rainfall patterns will also have an impact on the development of plant disease epidemics. In some cases, changes of only a few degrees have allowed plant diseases to become established earlier in the season, resulting in more severe disease epidemics, and the ranges of some diseases are expanding as rising temperatures are allowing pathogens to overwinter in regions that previously had been too cold for them. For example, warmer winter temperatures may allow Kudzu to expand its range northward. Because Kudzu is an alternate host for the soybean rust pathogen, one result of rising temperatures may be that soybean rust arrives in Illinois earlier in the soybean growing season.
The information derived from climate change studies will help us prepare for the changes to come by knowing which diseases are most likely to become more problematic. This knowledge will allow plant pathologists, plant breeders, agronomists, and horticulturalists to adapt disease management strategies to the changing environment.
Discover the cosmos! Each day a different image or photograph of our fascinating universe is featured, along with a brief explanation written by a professional astronomer.
2012 October 7
Explanation: Are square A and B the same color? They are! To verify this, either run your cursor over the image or click here to see them connected. The above illusion, called the same color illusion, illustrates that purely human observations in science may be ambiguous or inaccurate, even for such a seemingly direct perception as relative color. Similar illusions exist on the sky, such as the size of the Moon near the horizon, or the apparent shapes of astronomical objects. The advent of automated, reproducible measuring devices such as CCDs has made science in general and astronomy in particular less prone to, but not free of, human-biased illusions.
Authors & editors:
Jerry Bonnell (UMCP)
NASA Official: Phillip Newman. Specific rights apply.
A service of: ASD at NASA / GSFC
& Michigan Tech. U.
Once we've created an Elm, we can start manipulating it. Recall from earlier that we stated that an Elm is simply an HTMLElement that DomAPI adds some custom properties and methods to. Let's start looking now at what some of those methods are.
This is not meant to be an exhaustive list of methods available to Elms, but is intended merely to get you acquainted with some of the basics. For a full list, see the documentation on domapi.Elm().
setX( value )
Use this method to set the 'left' attribute of the Elm, relative to its parent element. Note that this and all other methods related to Elms automatically take into account all browser and DTD variations. In other words, they are already aware of any 'boxing' problems related to margins, paddings and borders, and will always faithfully produce whatever you ask of them, regardless of browser quirks.
setY( value )
Used to set the 'top' attribute of the Elm, relative to its parent.
setSize( x-value, y-value )
If you needed to set both the top and left at the same time, then calling setSize can save you some overhead.
Biology & Chemistry
Stan Botchway explains how lasers have a dual purpose when studying DNA – doing the damage and then monitoring the repair process.
Evolutionary biology professor Daniel Lieberman, whose studies are the scientific backbone for Chris McDougall’s BORN TO RUN, gives five pointers on how he thinks you can run long distances better and injury-free.
Most people would steer clear of any snakes or oversized spiders that crossed our paths. We have a sort of logic when it comes to animals that tells us “Bigger = Deadlier.” But more often than not, the opposite is true. Many animals make up for their small size with deadly venom, and so what may look like a regular snail could actually be your final downfall. Below are ten tiny, but incredibly deadly, animals.
Scientists at Princeton University created a functional ear by 3D printing technology, that can “hear” radio frequencies far beyond the range of normal human capability.
Imitating nature to build a better (or possibly more terrifying) future. We’ve been trying to build flapping-wing robots for hundreds of years. And now, ornithopters are finally being developed, and may be used mostly for military purposes.
Why do ice cubes shrink, ice cream get frosty, and vegetables dry out in the freezer?
It started with hair. Donning a pair of rubber gloves, Heather Dewey-Hagborg collected hairs from a public bathroom at Penn Station and placed them in plastic baggies for safe keeping. Then, her search expanded to include other types of forensic evidence. As the artist traverses her usual routes through New York City from her home in Brooklyn, down sidewalks onto city buses and subway cars—even into art museums—she gathers fingernails, cigarette butts and wads of discarded chewing gum.
The scientific study of kissing is “philematology”
Science is working tirelessly night and day to disprove its own theories about how the universe works (or at least, that’s what science thinks it’s doing). Hank tells us a quick history of how we came to create and adopt the scientific method and then gives us a vision of the future of science (hint: it involves a lot more computers and a lot less pipetting).
Learning to talk about chemistry can be like learning a foreign language, but Hank is here to help with some straightforward and simple rules to help you learn to speak Chemistrian like a native.
Animals naturally synthesize a pigment called melanin, which determines the color of their eyes, fur (or feathers) and skin. Pigments are chemical compounds that create color in animals by absorbing certain wavelengths of light while reflecting others. Many animals can’t create pigments other than melanin on their own. Plant life, on the other hand, can produce a variety of them, and if a large quantity is ingested, those pigments can sometimes mask the melanin produced by the animal. Thus, some animals are often colored by the flowers, roots, seeds and fruits they consume.
Rob Linforth is an expert on food chemistry and flavour science – even though one of his nostrils does not work properly!
Wool&Prince claim they have invented a shirt that stays clean even after 100 days of wear, made from wool which is also wrinkle free.
Jupiter's Great Red Spot
Photograph courtesy NASA
In 1979 Voyager 1 captured this photo of an immense high-pressure storm, called the Great Red Spot, swirling on Jupiter. Winds blow counterclockwise around the Great Red Spot at about 250 miles (400 kilometers) an hour. The storm is larger than one Earth diameter from north to south, and more than two Earth diameters from east to west.
Science News has an exploration of the deeper implications of neutrino oscillation, one experimental confirmation of which we discussed last month. "The new findings could even signal a tiny breakdown of Einstein's theory of special relativity. ... MINOS [for Main Injector Neutrino Oscillation Search] found that during a 735-kilometer journey from Fermilab to the Soudan Underground Laboratory in Minnesota, about 37 percent of muon antineutrinos disappeared — presumably morphing into one of the other neutrino types — compared with just 19 percent of muon neutrinos. ... That difference in transformation rates suggests a difference in mass between antineutrinos and neutrinos. ... With the amount of data collected so far, there's just a 5% probability that the two types of particles weigh the same."
Pollution from Fires across Northwest Africa
Fires in the savanna south of the Sahara Desert, and in the tropical rainforests at latitudes just north of the equator, form an intense swath of biomass burning across the African continent from December through April each year. This is a region of high grass production and the majority of these fires are not wildfires, but result from agricultural waste burning associated with subsistence farming.
These fires produce large amounts of carbon monoxide, which is a good indicator of atmospheric pollution. The image above represents a composite of data collected over a 25-day period, from February 1-25, 2004, by the Measurements Of Pollution In The Troposphere (MOPITT) instrument aboard NASA’s Terra satellite. The colors represent the mixing ratios of carbon monoxide in the air, given in parts per billion by volume at an altitude of roughly 3 km (700 mb). The grey areas show where no data were collected due to persistent cloud cover. The high levels of carbon monoxide form a pollution plume that extends westward from Africa across the Atlantic Ocean to South America. In such high concentrations, carbon monoxide has a significant impact on tropical air quality.
The image above corresponds well with true-color images captured over the region by the Terra MODIS instrument on February 10 and February 17, 2004.
This image originally appeared on the Earth Observatory. Click here to view the full, original record.
Antarctica and the surrounding area are natural laboratories for scientific research that can not be done anywhere else on Earth. Among the unusual aspects of the continent are its harsh climate and extreme cold, frigid ice-filled oceans, vast polar ice cap and large glaciers, geologic formations and structures that are related to more northerly land masses, uniquely adapted forms of plant and animal life, and unusual meteorological phenomena. These are covered by scientific disciplines that have attracted exploration and scientific curiosity for more than a hundred years. Here is the place for the meteorologist, oceanographer, atmospheric physicist, geologist, glaciologist, seismologist, geophysicist, biologist, and zoologist, and even the people of medicine who are examining the effects of the Antarctic environment on human physiology. The research involving so many disciplines is carried out by scientists among the faculty and students of colleges and universities, government agencies and private industry.
The polar regions have been called Earth's window to outer space. With the discovery of polar stratospheric ozone depletions, a window previously thought "closed" (the ultraviolet window) is now known to "open" in certain seasons. Current research focuses on stratospheric chemistry, aerosols, and the vital role played by ozone.
Antarctica is an astronomer's dream come true. The Amundsen-Scott South Pole Station is arguably one of the best places on earth to study the stars. Observers there take advantage of the unique characteristics of the South Pole to study the evolution and structure of the Universe.
Conditions on the frozen Antarctic surface are so harsh that few life forms survive year-round above the ice. Of particular interest to biologists, the McMurdo Dry Valleys represent a region where life approaches its environmental limits. While below the surface and along the coast, ocean ecosystems teem with life that is rich, complex, and abundant.
Much of the story of Antarctica is written beneath the ice, in the rocks that make up about 9 percent of Earth's continental crust. Geologic evidence indicates that at one time the continent had a temperate climate and was part of an ancient, considerably larger land mass, known as Gondwana.
An ice sheet covers all but 2.4 percent of Antarctica's 14 million square kilometers. This ice contains 70 percent of all the world's fresh water. In order to predict the ice sheet's future behavior and its effect on global climate, glaciologists must have a thorough understanding of its history, current state, and internal dynamics.
The weather systems that constantly circle Antarctica drive storms across the Southern Ocean and beyond, while the seasonal formation and melting of sea ice has an important effect on the world's weather. Antarctic stations collect daily meteorological observations and broadcast them to surrounding countries to help in weather forecasting.
The Antarctic Convergence divides the cold southern water masses from the warmer northern waters, creating the world's largest current flowing at an average speed of half a knot eastward around the continent. In addition, sea ice forms outward up to 1500 kilometers from the continent every winter. Oceanographic studies focus on these two interrelated phenomena and their effects on both marine ecosystems and Earth's climate patterns.
The Lorentz Transformation as a Wave Function
We will demonstrate that translation-dependent transformations, of which the Lorentz Transformation is a special case, arise naturally out of wave systems. As a result, the Lorentz Transformation might be considered a natural consequence of the wave characteristics of matter. Does this concept form the link between quantum theory's wave-particle duality and relativity's Lorentz-Transformation? You decide.
Last updated September 2012
Imagine an entity composed of standing waves in one dimension. This system can be characterized by equations in various ways:
The perceptive will already see the implications in the second equation, but we will forgo the discussion until after a few simulation snapshots.
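The starting point, a standing wave as the superposition of equal and opposite traveling waves, can at least be checked numerically. The sketch below verifies only the textbook identity sin(kx - wt) + sin(kx + wt) = 2 sin(kx) cos(wt), not any of this page's further claims:

```python
import math

k, w = 2.0, 3.0   # arbitrary wavenumber and angular frequency

def traveling_pair(x, t):
    """Forward-moving wave plus reverse-moving wave."""
    return math.sin(k * x - w * t) + math.sin(k * x + w * t)

def standing_wave(x, t):
    """Fixed spatial profile whose amplitude oscillates in time."""
    return 2.0 * math.sin(k * x) * math.cos(w * t)

# Compare the two forms on a grid of sample points.
max_gap = max(abs(traveling_pair(x / 7.0, t / 5.0) - standing_wave(x / 7.0, t / 5.0))
              for x in range(20) for t in range(20))
```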
Related Pages at this site:
Lorentz & Standing Waves
In the Lorentz Transformation, accelerating an object contracts its ruler and dilates its clock. In a standing-wave system, changing the coefficients so as to contract the ruler and dilate the clock means changing the same variables that produce acceleration.
Using a little algebra, the wave coefficients will relate to the velocity in the Lorentz Transformation:
Without the use of higher-level multi-dimensional mathematics and special considerations from physical requirements, we cannot determine for sure whether this result is significant.
Evidence that the result is physically significant is above: standing wave systems transform intrinsically when accelerated.
Evidence that the result is mathematically trivial: whenever a system is transformed, all waves within that system must transform accordingly. Imagine a violin on a space ship, it should sound correct to the astronauts, regardless of the acceleration of the ship.
Physically Significant or Mathematically Trivial?
In 1983 I recognized (above) that the Lorentz Transformation could result from a wave equation. I was unable to determine the significance of the result. One reason may have been that I assumed that a standing wave system required both the forward and reverse parts to have the same wavelength.
Considerations and Special Cases
This pattern is interesting, because it gives us a standing wave overlapping a moving wave. This may be a good general description when we see ripples between rocks in a flowing stream. But this form probably does not apply to the Lorentz Transformation.
2: We could approach the problem from an energy and momentum perspective.
It's hard to see whether the interference pattern can be made to conform to a moving "standing wave." But we can see some interesting considerations in the equation. The total energy of the system should be E1 + E2. The total momentum of the system should be P1 - P2. From the perspective of matter as a standing wave, we would expect the momentum to correlate to the two de Broglie wavelengths: λ = λ1 - λ2 = h/P.
3: We could look at the general form and ask what special conditions must the general form meet to conform to the measured characteristics of relativity and quantum theory.
This is the challenge. Did Ivanov take this form far enough to show that it could fit the Lorentz Transformation? If not, what wave characteristics will make it do so?
Found online September 2012
Increasing the amount of electricity a solar panel produces is one of the most effective ways to reduce the cost of solar power.
MIT’s Technology Review reports on a new $2.4 million project funded by the U.S. Advanced Research Projects Agency for Energy aims to greatly increase the amount of sunlight that becomes electricity.
Harry Atwater, a professor of applied physics and materials science at Caltech, is working on cells to sort sunlight into eight to 10 different colors (a rainbow!) and direct those to a second layer that contains an array of solar cells matched to each color. Light is absorbed more efficiently, and more energy is converted to electricity.
Tyrannosaurs are superb nightmare fuel. Exquisitely adapted for catching, killing and crunching other dinosaurs, they were among the most formidable predators ever, and now University of Alberta palaeontologist Philip Currie has proposed an even more frightening twist to tyrannosaur behaviour. The tyrants were not lone marauders, Currie suggests, but coordinated pack hunters.
Currie's hypothesis is presented in an absurdly hyped mass media package under the heading Dino Gangs. In its synopsis, the book Dino Gangs calls Currie's hypothesis "groundbreaking" three times in nearly as many sentences. The trailer for the documentary tie-in – aired on Discovery UK last month – doesn't hold back on the sensationalism, either, closing with the nonsequitur "If dinosaurs hadn't become extinct, would gangs of killer tyrannosaurs now rule the world?"
It should surprise no one that such nauseating hype emanates from Atlantic Productions. This is the same media company which, in 2009, peddled exaggerated claims that a beautifully preserved fossil primate nicknamed "Ida" was "The Link" to our early simian ancestry. (As was immediately recognised by fossil primate experts, Ida was more closely related to lemurs than to us). And, as with Ida, the publicity blitz surrounding gangs of bloodthirsty tyrannosaurs has two parts: the media fluff, and the actual science.
The idea that some dinosaurs were cooperative predators is not new. Images of pack-hunting dinosaurs have been popular since multiple individuals of the sickle-clawed raptor Deinonychus were found entombed in a quarry with the bones of the herbivorous dinosaur Tenontosaurus in the 1970s, and the intelligent, cooperative Velociraptor – modeled upon Deinonychus – kept the heroes of Jurassic Park running in fear during the final act of the 1993 blockbuster (not to mention two sequels).
Social tyrannosaurs have been considered before, as well. Currie has been writing papers on pack-hunting tyrants for more than a decade.
Currie's hypothesis originated in a brilliant bit of fossil detective work. Over a century ago the famous fossil collector Barnum Brown discovered multiple specimens of the tyrannosaur Albertosaurus in the Late Cretaceous rock of Canada's Dry Island Buffalo Jump Provincial Park. Rather than fully excavate the rich site, however, Brown only collected a smattering of fossils before leaving.
Thanks to old notes and photographs, Currie relocated the quarry in 1998. Excavation yielded the remains of at least a dozen Albertosaurus of different ages. Currie speculated that they belonged to a social group with complex hunting behaviour, and in 2000 he published a paper "Possible evidence of gregarious behavior in tyrannosaurids" in which he imagined that the leggy, lightly built juvenile tyrannosaurs drove prey towards waiting adults. (On the basis of other bone beds, Currie has also proposed that the tyrannosaur Daspletosaurus and a very distantly related predator – a carcharodontosaur from Argentina named Mapusaurus – may have been social hunters.)
In Dino Gangs the press release, the book, and the documentary, though, the tyrannosaur Tarbosaurus is the star. The media materials say multiple Tarbosaurus specimens were found in relatively close proximity by the Korea-Mongolia International Dinosaur Project between 2006 and 2010. At one particular site, which is the peg for all the hubbub, six Tarbosaurus of different ages were purportedly found in close association.
No scientific paper has been published about the Tarbosaurus sites. The Dino Gangs media package is science by press release – studies of the sites have not yet been completed and communicated, yet sensational conclusions have been funneled to news sources that have credulously repeated assertions unconstrained by the actual data.
Nevertheless, even without the essential technical details of the Tarbosaurus sites, the characteristics of the Albertosaurus quarry and other dinosaur bonebeds provide plenty of reason to question the Dino Gangs hype. As Currie himself noted in a webchat promoted by Discovery UK – in which I also participated – "In fact, it really is only possible to interpret the Tarbosaurus sites in light of the Albertosaurus, Mapusaurus, Daspletosaurus and other discoveries."
Last year Currie and co-author David Eberth of Canada's Royal Tyrrell Museum published a paper "On gregarious behavior in Albertosaurus" in a special, all-Albertosaurus issue of the Canadian Journal of Earth Sciences. While Currie's notion of cooperative tyrannosaurs was entertained, the paper also noted that large numbers of animals are often brought together under harsh conditions, such as drought or flood. The fact that there is geological evidence for devastating local flooding at the site may indicate that the Albertosaurus were corralled into a small area before death.
There was no reason to automatically take the association of the bones as evidence of social behavior. Even what seemed to be the best evidence for gregarious tyrannosaurs is ambiguous.
Such is the trouble with bonebeds. Just because the skeletons of animals are found near each other does not necessarily mean that the creatures were together when they died or were socialising. About two hours' drive southeast from my apartment in Salt Lake City, Utah, there is a rich Jurassic dinosaur site known as the Cleveland-Lloyd Dinosaur Quarry. The jumbled remains of more than 44 individual Allosaurus specimens of different ages have been found here – they greatly outnumber the herbivores in the quarry – yet the proximity of these dinosaurs to each other does not mean that massive packs of Allosaurus terrorised the landscape. Cleveland-Lloyd probably was a predator trap which killed and preserved a large number of Allosaurus over an unknown stretch of time.
Simply put, bones alone are not enough to reconstruct dinosaur behaviour. The geological context in which those bones are found – the intricate details of ancient environments and the pace of prehistoric time – are essential to investigating the lives and deaths of dinosaurs. In the case of the Tarbosaurus sites, we need to know how each site formed, when each animal died, and if there were any local conditions that might have caused animals to assemble in a small area.
Similar caveats are aired in the Dino Gangs book and documentary, but, while I must credit the media masterminds behind the package for including dissenting voices, the arguments of the sceptical scientists are not taken seriously. Both the book and documentary record the search for evidence in support of Currie's hypothesis, and details are often spun in Currie's favor even when the evidence is contradictory or ambiguous.
In Dino Gangs the book, which reads like a bland transcript of the documentary, Currie visits animal locomotion specialist John Hutchinson at the Royal Veterinary College in Hatfield, England. Hutchinson explains that young tyrannosaurs were probably more agile and "bouncy" than adults, but whether they could truly run faster – acting as fleet-footed scouts as Currie suspects – is not clear. Yet the rest of the book considers Hutchinson's findings as a confirmation that Currie was correct.
Details of Tarbosaurus brain anatomy are spun in a similar way. Lawrence Witmer, an expert on dinosaur brains and soft-tissue anatomy at Ohio University, explains that there's no definite indicator of pack-hunting behaviour in the brain of Tarbosaurus – there is no "social lobe" to look at – only to have the book's author write "Currie prefers to reverse the line of enquiry: to ask whether there's anything in the brain to say that tyrannosaurs could not have complex social behavior."
The answer is no. Pack hunting remains a possibility, albeit one that has yet to be supported by hard evidence, but this short passage gives away the entire tone of the media campaign. The question is not "Were these dinosaurs social, and how can we know?" but "What evidence already exists that will support our desired image of dinosaur behaviour?"
Hence science is falsely shown as the search for only the data that will support preconceived conclusions, rather than an ongoing interrogation of nature in which the evidence does not always support what we had hoped or expected.
Tyrannosaurs may have lived in social groups, or they may not, and this uncertainty makes it all the more important to carefully weigh all the available evidence, not just that in favour of the sexiest idea. This is all the more unfortunate because, although he was there to promote tyrannosaur packs, during the webchat Currie took more care to qualify what is and isn't known, and responded "We can speculate ... but nothing more" when asked about the tactics Tarbosaurus gangs might use to stalk and attack prey.
When palaeontology is squeezed through the mass-media filter, scientific uncertainty is too often replaced by a ridiculous amount of hyperbolic assuredness that distorts how questions about nature are actually approached.
I would be thrilled if palaeontologists discovered compelling evidence that tyrannosaurs were social hunters. A trackway preserving the footsteps of several individuals moving in the same direction at the same time would be excellent. But until then, tableaus of tyrannosaur families dining together must remain tantalisingly speculative parts of prehistory. | <urn:uuid:b1d5769c-5ec2-4616-ba43-df03f4682d3a> | 2.765625 | 1,908 | Nonfiction Writing | Science & Tech. | 24.237029 |
The cloud is an abstract notion of a loosely connected group of computers working together to perform some task or service that appears as if it is being fulfilled by a single entity. The architecture behind the scenes is also abstract: each cloud provider is free to design its offering as it sees fit. Software as a Service (SaaS) is a related concept, in that the cloud offers some service to users. The cloud model potentially lowers users' costs because they don't need to buy software and the hardware to run it — the provider of the service has done that already.
Take, for example, Amazon's S3 offering. As its name implies, it is a publicly available service that lets Web developers store digital assets (such as images, video, music, and documents) for use in their applications. When you use S3, it looks like a machine sitting on the Internet that has a hard drive containing your digital assets. In reality, a number of machines (spread across a geographical area) contain the digital assets (or pieces of them, perhaps). Amazon also handles all the complexity of fulfilling a service request to store your data and to retrieve it. You pay a small fee (around 15 cents per gigabyte per month) to store assets on Amazon's servers, and a separate fee to transfer data to and from them.
Amazon's S3 service exposes a RESTful API, which enables you to access S3 in any language that supports communicating over HTTP. Rather than reinvent the wheel, you can use the JetS3t project, an open source Java library that abstracts away the details of working with S3's RESTful API, exposing the API as normal Java methods and classes. It's always best to write less code, right? And it makes a lot of sense to borrow someone else's hard work too. As you'll see in this article, JetS3t makes working with S3 and the Java language a lot easier and ultimately a lot more efficient.
Logically, S3 is a global storage area network (SAN), which appears as a super-big hard drive where you can store and retrieve digital assets. Technically though, Amazon's architecture is a bit different. Assets you choose to store and retrieve via S3 are called objects. Objects are stored in buckets. You can map this in your mind using the hard-drive analogy: objects are to files as buckets are to folders (or directories). And just like a hard drive, objects and buckets can be located via a Uniform Resource Identifier (URI).
For example, on my hard drive, I have a file named whitepaper.pdf, which is in the folder named documents in my home directory. Accordingly, the URI of the .pdf file is /home/aglover/documents/whitepaper.pdf. In S3's case, the URI is slightly different. First, buckets are top-level only — you can't nest them as you would folders (or directories) on a hard drive. Second, buckets must follow Internet naming rules; they can't include dashes next to periods, names shouldn't contain underscores, and so on. Lastly, because bucket names become part of a public URI within Amazon's domain (s3.amazonaws.com), bucket names must be unique across all of S3. (The good news is that you can only have 100 buckets per account, so it's doubtful there are squatters taking hundreds of good names.)
Buckets serve as the root of a URI in S3. That is, a bucket's name becomes part of the URI leading to an object within S3. For example, if I have a bucket named agdocs and an object named whitepaper.pdf, the URI would be http://agdocs.s3.amazonaws.com/whitepaper.pdf.
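The bucket-to-URI mapping is easy to sketch. The following Python snippet (an illustration, not part of the article's JetS3t code; function names are mine) builds the public URI for an object and checks only the two naming rules quoted above:

```python
# Illustrative sketch: map a bucket and key to the public S3 URI,
# and check the two bucket-naming rules mentioned in the text.
def s3_uri(bucket: str, key: str) -> str:
    return f"http://{bucket}.s3.amazonaws.com/{key}"

def plausible_bucket_name(name: str) -> bool:
    # Only the rules quoted above: no underscores,
    # and no dash adjacent to a period.
    return "_" not in name and ".-" not in name and "-." not in name

print(s3_uri("agdocs", "whitepaper.pdf"))
# → http://agdocs.s3.amazonaws.com/whitepaper.pdf
```

Note that real S3 enforces additional rules (length limits, lowercase letters, and so on); this check is deliberately partial.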
S3 also offers the ability to specify owners and permissions for buckets and objects, as you can do for files and folders on a hard drive. When you define an object or a bucket in S3, you can specify an access-control policy that states who can access your S3 assets and how (for example, read and write permissions). Accordingly, you can then provide access to your objects in a number of ways; using a RESTful API is just one of them.
To begin using S3, you need an account. S3 isn't free, so when you create your account you must provide Amazon with a means of payment (such as a credit card number). Don't worry — there are no setup fees; you only pay for usage. The nominal fees for the examples in this article will cost less than $1.
As part of the account-creation process, you also need to create some credentials: an access key and a secret key (think username and password). (You can also obtain x.509 certificates; however, they are only needed if you use Amazon's SOAP API.) As with any access information, it is imperative that you keep your secret key ... secret. Should anyone else get hold of your credentials and use them to access S3, you'll be billed. Consequently, the default behavior any time you create a bucket or an object is to make everything private; you must explicitly grant access to the outside world.
With an access key and a secret key in hand, you can download JetS3t and use it with abandon to interact with S3 via its RESTful API.
Programmatically signing into S3 via JetS3t is a two-step process. First, you must create an AWSCredentials object and then pass it into an S3Service object. The AWSCredentials object is fairly straightforward; it takes your access and secret keys as Strings. The S3Service object is actually an interface type. Because S3 offers both a RESTful API and a SOAP API, the JetS3t library offers two implementation types: RestS3Service and SoapS3Service. For the purposes of this article (and indeed, most, if not all of your S3 pursuits), the RESTful API's simplicity makes it a good choice.
Creating a connected RestS3Service instance is simple, as shown in Listing 1:
Listing 1. Creating an instance of JetS3t's RestS3Service

def awsAccessKey = "blahblah"
def awsSecretKey = "blah-blah"
def awsCredentials = new AWSCredentials(awsAccessKey, awsSecretKey)
def s3Service = new RestS3Service(awsCredentials)
Now you are set to do something interesting: create a bucket, say, add a movie to it, and then obtain a special limited-time-available URL. In fact, that sounds like a business process, right? It's a business process associated with releasing a limited asset, such as a movie.
For my imaginary movie business, I'm going to create a bucket dubbed bc50i. With JetS3t, the process is simple. Via the S3Service type, you have a few options. I prefer to use the getOrCreateBucket call, shown in Listing 2. As the name implies, calling this method either returns an instance of the bucket (represented by an instance of the S3Bucket type) or creates the bucket in S3.
Listing 2. Creating a bucket on a S3 server
def bucket = s3Service.getOrCreateBucket("bc50i")
Don't let my simple code examples fool you. The JetS3t library is fairly extensive. For instance, you can quickly ascertain how many buckets you have by simply asking an instance of an S3Service via the listAllBuckets call. This method returns an array of S3Bucket instances. With any instance of a bucket, you can ask for its name and creation date. More important, you can control the permissions associated with it via JetS3t's AccessControlList type. For instance, I can grab an instance of my bc50i bucket and make it publicly available for anyone to read and write to, as shown in Listing 3:
Listing 3. Altering the access-control list for a bucket
bucket.acl = AccessControlList.REST_CANNED_PUBLIC_READ_WRITE
Of course, via the API, you are free to remove buckets too. Amazon even allows you to specify in which geographical areas you'd like your bucket created. Amazon handles the complexity of where the actual data is stored, but you can nudge Amazon to put your bucket (and then all objects within it) in either the United States or Europe (the currently available options).
Creating S3 objects with JetS3t's API is just as easy as bucket manipulation. The library is also smart enough to take care of some of the intricacies of dealing with content types associated with files within an S3 bucket. For instance, imagine that the movie I'd like to upload to S3 for customers to view for a limited time is nerfwars2.mp4. Creating an S3 object is as easy as creating a normal java.io.File and associating the S3Object type with a bucket, as I've done in Listing 4:
Listing 4. Creating an S3 object
def s3obj = new S3Object(bucket, new File("/path/to/nerfwars2.mp4"))
Once you've got an S3Object initialized with a file and a bucket, all you need to do is upload it via the putObject method, as shown in Listing 5:
Listing 5. Uploading the movie is a piece of cake

s3Service.putObject(bucket, s3obj)
With the code in Listing 5, you're done. The movie is now on Amazon's servers, and the key for the movie is its name. You could, of course, override that name should you feel the need to call the object something else. In truth, the JetS3t API (and by relation the Amazon S3 RESTful API) exposes a bit more information for you when you create objects. As you know, you can also provide access-control lists. Any object within S3 is capable of holding additional metadata, which the API allows you to create. You can later query any object via the S3 API (and by derivation, JetS3t) for that metadata.
At this point, my S3 instance has a bucket with a movie sitting in it. In fact, my movie can be found at this URI: http://bc50i.s3.amazonaws.com/nerfwars2.mp4. Yet, no one other than me can get to it. (And in this case, I can only access it programmatically, because the default access controls associated with everything are set to deny any noncredentialed access to it.) My goal is to provide select customers a way to view the new movie (for a limited time) until I'm ready to start charging for access (which S3 can facilitate as well).
Figure 1 shows the default access control in action. The XML document returned (and accordingly displayed in my browser) is informing me that access is denied to the asset I was trying to reach (http://bc50i.s3.amazonaws.com/nerfwars2.mp4).
Figure 1. Amazon's security in action
Creating a public URL is a handy feature exposed by S3; in fact, with S3, you can create a public URL that is only valid for a period of time (for instance, 24 hours). For the movie I've just stored on the S3 servers, I'm going to create a URL that is valid for 48 hours. Then I'll provide this URL to select customers so they can download the movie and watch it at will (provided they download it within two days).
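Under the hood, a time-limited URL is just query-string authentication: the client signs the verb, expiry time, and resource path with the secret key, so Amazon can verify the link without a login. Here is a rough Python sketch of the legacy query-string signing scheme (Signature Version 2) that S3 used in the JetS3t era; current AWS SDKs use Signature Version 4, and the keys below are fake:

```python
import base64, hashlib, hmac, time
from urllib.parse import quote

def signed_get_url(access_key, secret_key, bucket, key, expires_in_s):
    # Legacy S3 query-string auth (v2): sign "verb, expiry, resource"
    # with HMAC-SHA1 and attach the result as query parameters.
    expires = int(time.time()) + expires_in_s
    string_to_sign = f"GET\n\n\n{expires}\n/{bucket}/{key}"
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(),
                      hashlib.sha1).digest()
    signature = quote(base64.b64encode(digest).decode(), safe="")
    return (f"http://{bucket}.s3.amazonaws.com/{key}"
            f"?AWSAccessKeyId={access_key}&Expires={expires}"
            f"&Signature={signature}")

url = signed_get_url("AKIDEXAMPLE", "not-a-real-secret",
                     "bc50i", "nerfwars2.mp4", 48 * 3600)
print(url)
```

JetS3t's createSignedGetUrl, used below, performs essentially this work for you.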
To create a time-sensitive URL for an S3 object, you can use JetS3t's createSignedGetUrl method, which is a static method of the S3Service type. It takes a bucket name, an object's key (the movie's name in this case, remember?), some credentials (in the form of JetS3t's AWSCredentials object), and an expiration date. If you know the desired bucket name and the object's key, you can quickly obtain a URL as shown in the Groovy code in Listing 6:
Listing 6. Creating a time-sensitive URL
def now = new Date()
def url = S3Service.createSignedGetUrl(
    bucket.getName(), s3obj.key, awsCredentials, now + 2)
With Groovy, I can specify a date 48 hours in the future quite easily via the + 2 syntax. The resulting URL is a long signed link containing the expiry time and a signature. With this URL, browser requests will be honored, as shown in Figure 2:
Figure 2. The URL facilitates downloading
Wasn't this process a piece of cake? With a few lines of code, I've created a secure asset in the cloud that can only be downloaded with a special URL.
S3 makes a lot of sense if your bandwidth and storage needs aren't constant. For example, imagine the business model I'm demonstrating — one in which movies are released at specific times throughout the year. In the traditional storage model, you'd need to buy a bunch of space on a rack somewhere (or provide your own hardware and pipe leading to it) and most likely see spikes of downloads followed by lulls of relatively low activity. You'd be paying, however, regardless of demand. With S3, costs scale with demand: the business pays for storage and bandwidth only when they're required. What's more, S3's security features let you further specify when people can download videos and even specify who can download them.
Achieving these requirements with S3 turns out to be quite easy. At a high level, creating a limited publicly available download for a movie requires four steps:
- Sign into S3.
- Create a bucket.
- Add a desired video (or object) to that bucket.
- Create a time-sensitive URL for the video.
S3's pay-as-you-go model has some obvious advantages over the traditional storage model. For instance, to store my music collection on my own hard drive, I must buy one — say a 500GB unit for $130 — up front. I don't have nearly 500GB of data to store, so in essence I'm paying roughly 25 cents per gigabyte for unneeded (albeit fairly inexpensive) capacity. I also must maintain my device and pay to power it. If I go the Amazon route, I don't need to fork out $130 up front for a deprecating asset. I'll pay about 10 cents less per gigabyte and needn't pay to manage and maintain the storage hardware. Now imagine the same benefits on an enterprise scale. Twitter, for example, stores the images for its more than 1 million user accounts on S3. By paying on a per-usage basis, Twitter is spared the high expense of acquiring a hardware infrastructure to store and serve up those images, as well as ongoing labor and parts costs to configure and maintain it.
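The back-of-the-envelope arithmetic above is easy to check. The snippet below uses the prices quoted in the article (real S3 pricing varies by region and year):

```python
# Prices as quoted in the article; real pricing varies.
drive_cost_usd = 130.0
drive_capacity_gb = 500.0
s3_usd_per_gb_month = 0.15

per_gb_drive = drive_cost_usd / drive_capacity_gb  # up-front cost per GB
print(f"drive: ${per_gb_drive:.2f}/GB vs S3: ${s3_usd_per_gb_month:.2f}/GB/month")

# Per gigabyte, the drive's up-front cost buys this many months of S3 storage:
print(round(per_gb_drive / s3_usd_per_gb_month, 1))  # → 1.7
```

So at these (storage-only) prices, the drive pays for itself after well under two months per gigabyte actually used; the S3 case rests on not needing the full capacity, plus the avoided power, maintenance, and bandwidth costs the paragraph describes.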
The cloud's benefits don't end there. You also gain low latency and high availability. The presumption is that the assets stored on Amazon's cloud are physically located around the globe, so content is served up faster to varying locations. What's more, because your assets are distributed to various machines, your data remains highly available should some machine (or portion of the network) go down.
In summary, the benefits of Amazon's S3 are simple: low cost, high availability, and security. Unless you're a SAN guru and enjoy maintaining hardware assets for storing digital items, Amazon probably does a better job than you. So why spend the up-front money on hardware (which loses value over time, don't forget) when you can borrow someone else's?
Amazon S3: Visit home base for the Amazon Simple Storage Service.
JetS3t: Learn more about the JetS3t toolkit and application suite.
Cloud Computing: Visit IBM Cloud Computing Central for a wealth of cloud resources.
Browse the technology bookstore for books on these and other technical topics.
developerWorks Java technology zone: Find hundreds of articles about every aspect of Java programming.
Get products and technologies
JetS3t: Download JetS3t.
developerWorks Cloud Computing Resource Center: Access IBM software products in the Amazon Elastic Compute Cloud (EC2) virtual environment.
Andrew Glover is a developer, author, speaker, and entrepreneur. He is the founder of the easyb Behavior-Driven Development (BDD) framework and is the co-author of three books: Continuous Integration, Groovy in Action, and Java Testing Patterns. He teaches a wide variety of Groovy-, Grails-, and testing-related classes at ThirstyHead.com. You can keep up with Andy at thediscoblog.com, where he routinely blogs about software development. | <urn:uuid:08da3a69-a954-49f2-82f6-fa67a4700559> | 3.109375 | 3,568 | Documentation | Software Dev. | 56.195873 |
A cycle that contains finitely many elements is a finite cycle, and its length is the number of elements it contains.
Let σ be a permutation written as a finite product of disjoint cycles of finite length. The order of σ is the least common multiple of the lengths of the cycles.
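The least-common-multiple rule is straightforward to compute. A small Python sketch (the function name is mine):

```python
from functools import reduce
from math import gcd

def order_from_cycle_lengths(lengths):
    # Order of a permutation = lcm of the lengths of its disjoint cycles.
    return reduce(lambda a, b: a * b // gcd(a, b), lengths, 1)

# A disjoint 4-cycle and 6-cycle give order lcm(4, 6) = 12:
print(order_from_cycle_lengths([4, 6]))  # → 12
```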
A transposition is a cycle of length two. Clearly a transposition has order two, but there are permutations of order two that are not transpositions.
One important application of transpositions is that every permutation may be written as a product of transpositions (although not necessarily disjoint and not uniquely). A permutation is an even permutation if it is a product of an even number of transpositions and an odd permutation if it is a product of an odd number of transpositions.
As the definition suggests, a permutation cannot be both odd and even. However, the decomposition into a product of transpositions is not unique, nor is the number of transpositions in a decomposition unique; only its parity is invariant. For example, the transposition (1 2) may be written as (1 2), as (1 3)(2 3)(1 3), or as (1 2)(1 3)(1 3): one or three transpositions, but always an odd number. Of course, the identity permutation is an even permutation.
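Parity can be computed without producing an explicit transposition decomposition: a k-cycle is a product of k − 1 transpositions, so the cycle structure determines the parity of the count. A Python sketch over 0-indexed permutations (names are mine):

```python
def parity(perm):
    # perm is a list where perm[i] is the image of i (0-indexed).
    seen, transpositions = set(), 0
    for start in range(len(perm)):
        if start in seen:
            continue
        length, j = 0, start
        while j not in seen:          # walk one cycle
            seen.add(j)
            j = perm[j]
            length += 1
        transpositions += length - 1  # a k-cycle = k - 1 transpositions
    return "even" if transpositions % 2 == 0 else "odd"

print(parity([1, 2, 0]))  # a 3-cycle → even
print(parity([1, 0, 2]))  # a single transposition → odd
```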
|flux, light||medical dictionary|
<microscopy> Sometimes called luminous flux, the visible portion of the radiant energy emitted by a light source. It is measured in lumens. In electrical engineering, it is analogous to the lines of force in a magnetic field, spoken of as magnetic flux.
(05 Aug 1998)
- This article is about the electromagnetic phenomenon.
In physics, there are two kinds of dipoles (from the Greek terms di(s)-, meaning "two," and polos, meaning "pivot" or "hinge"): An electric dipole and a magnetic dipole. An electric dipole refers to an object or system in which positive and negative electric charges are located at two separate points. The simplest example is a pair of electric charges of equal magnitude but opposite sign, separated by some small distance. A permanent electric dipole is called an electret.
A magnetic dipole is an object or system in which opposite magnetic poles (north and south) are separated by a distance. A magnetic dipole is produced by a closed circuit of electric current. A simple example of this is a single loop of wire with some constant current flowing through it.
A dipole can be characterized by its dipole moment, a vector quantity. For the simple electric dipole mentioned above, the electric dipole moment points from the negative charge towards the positive charge, and its magnitude is equal to the strength of one of the charges multiplied by the separation between the charges. For a magnetic dipole, the magnetic dipole moment is the strength of either magnetic pole multiplied by the distance separating the two poles. For a magnetic dipole produced by a current loop, its dipole moment points through the loop (according to the right hand grip rule), and its magnitude is equal to the current in the loop times the area of the loop.
In addition to current loops, the electron, among other fundamental particles, is said to have a magnetic dipole moment. This is because it generates a magnetic field which is identical to that generated by a very small current loop. However, to the best of our knowledge, the electron's magnetic moment is not due to a current loop, but is instead an intrinsic property of the electron. It is also possible that the electron has an electric dipole moment, although this has not yet been observed.
A permanent magnet, such as a bar magnet, owes its magnetism to the intrinsic magnetic dipole moment of the electron. The two ends of a bar magnet are referred to as poles (not to be confused with monopoles), and are labeled "north" and "south." The dipole moment of the bar magnet points from its magnetic south to its magnetic north pole. It should be noted that the "north" and "south" convention for magnetic dipoles is the opposite of that used to describe the Earth's geographic and magnetic poles, so that the Earth's geomagnetic north pole is the south pole of its dipole moment. (This should not be difficult to remember; it simply means that the north pole of a bar magnet is the one that points north if used as a compass.)
The only known mechanisms for the creation of magnetic dipoles are by current loops or quantum-mechanical spin since the existence of magnetic monopoles has never been experimentally demonstrated.
Physical dipoles, point dipoles, and approximate dipoles
A physical dipole consists of two equal and opposite point charges: Literally, two poles. Its field at large distances (that is, distances large in comparison to the separation of the poles) depends almost entirely on the dipole moment as defined above. A point (electric) dipole is the limit obtained by letting the separation tend to 0 while keeping the dipole moment fixed. The field of a point dipole has a particularly simple form, and the order-1 term in the multipole expansion is precisely the point dipole field.
Although there are no known magnetic monopoles in nature, there are magnetic dipoles in the form of the quantum-mechanical spin associated with particles such as electrons (although the accurate description of such effects falls outside of classical electromagnetism). A theoretical magnetic point dipole has a magnetic field of the exact same form as the electric field of an electric point dipole. A very small current-carrying loop is approximately a magnetic point dipole; the magnetic dipole moment of such a loop is the product of the current flowing in the loop and the (vector) area of the loop.
Any configuration of charges or currents has a "dipole moment," which describes the dipole whose field is the best approximation, at large distances, to that of the given configuration. This is simply one term in the multipole expansion; when the charge ("monopole moment") is 0—as it always is for the magnetic case, since there are no magnetic monopoles—the dipole term is the dominant one at large distances: its field falls off in proportion to 1/r^3, as compared to 1/r^4 for the next (quadrupole) term and higher powers of 1/r for higher terms, or 1/r^2 for the monopole term.
Many molecules have such dipole moments due to non-uniform distributions of positive and negative charges on their various atoms. For example:
- (positive) H-Cl (negative)
A molecule with a permanent dipole moment is called a polar molecule. A molecule is polarized when it carries an induced dipole. The physical chemist Peter J.W. Debye was the first scientist to study molecular dipoles extensively, and dipole moments are consequently measured in units named debye in his honor.
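To get a feel for the unit: 1 debye is about 3.336 × 10⁻³⁰ C·m, so one elementary charge separated by one ångström corresponds to roughly 4.8 D, the same order as the molecular values listed below. A quick Python check (the conversion constant and elementary charge are standard values, not from this article):

```python
E_CHARGE = 1.602176634e-19   # elementary charge, C
DEBYE = 3.33564e-30          # C·m per debye

def dipole_in_debye(charge_c, separation_m):
    # p = q * d, converted from C·m into debye.
    return charge_c * separation_m / DEBYE

# One elementary charge separated by 1 angstrom (1e-10 m):
print(round(dipole_in_debye(E_CHARGE, 1e-10), 2))  # → 4.8
```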
With respect to molecules there are three types of dipoles:
- Permanent dipoles: These occur when two atoms in a molecule have substantially different electronegativity—one atom attracts electrons more than another becoming more negative, while the other atom becomes more positive. See dipole-dipole attractions.
- Instantaneous dipoles: These occur due to chance when electrons happen to be more concentrated in one place than another in a molecule, creating a temporary dipole. See instantaneous dipole.
- Induced dipoles: These occur when one molecule with a permanent dipole repels another molecule's electrons, "inducing" a dipole moment in that molecule. See induced-dipole attraction.
The definition of an induced dipole given in the previous sentence is too restrictive and misleading. An induced dipole of any polarizable charge distribution ρ (remember that a molecule has a charge distribution) is caused by an electric field external to ρ. This field may, for instance, originate from an ion or polar molecule in the vicinity of ρ or may be macroscopic (for example, a molecule between the plates of a charged capacitor). The size of the induced dipole is equal to the product of the strength of the external field and the dipole polarizability of ρ.
Typical gas phase values of some chemical compounds in debye units:
- Carbon dioxide: 0
- Carbon monoxide: 0.112
- Ozone: 0.53
- Phosgene: 1.17
- Water vapor: 1.85
- Hydrogen cyanide: 2.98
- Cyanamide: 4.27
- Potassium bromide: 10.41
These values can be obtained from measurement of the dielectric constant. When the symmetry of a molecule cancels out a net dipole moment, the value is set at 0. The highest dipole moments are in the range of 10 to 11. From the dipole moment, information can be deduced about the molecular geometry of the molecule. For example, the data illustrate that carbon dioxide is a linear molecule but ozone is not.
Field from a magnetic dipole
The far-field strength, B, of a dipole magnetic field is given by

B = \frac{\mu_0 m}{4 \pi r^3} \sqrt{1 + 3\sin^2\lambda}

where:
- B is the strength of the field, measured in teslas;
- r is the distance from the center, measured in meters;
- λ is the magnetic latitude (90° − θ), where θ is the magnetic colatitude, measured in radians or degrees from the dipole axis (the magnetic colatitude is 0 along the dipole's axis and 90° in the plane perpendicular to its axis);
- m is the dipole moment, measured in ampere square-meters (A•m2), which equals joules per tesla;
- μ0 is the permeability of free space, measured in henrys per meter.
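To make the formula concrete, here is a Python evaluation. Plugging in rough Earth-like numbers (m ≈ 8e22 A·m² and r ≈ 6.371e6 m, illustrative values not taken from this article) reproduces the familiar surface field of about 31 μT at the equator, doubling at the poles:

```python
from math import pi, sin, sqrt

MU0 = 4 * pi * 1e-7  # permeability of free space, H/m

def dipole_b_magnitude(m, r, magnetic_latitude_rad):
    # B = (mu0 * m / (4 pi r^3)) * sqrt(1 + 3 sin^2(lambda))
    return MU0 * m / (4 * pi * r**3) * sqrt(
        1 + 3 * sin(magnetic_latitude_rad)**2)

b_equator = dipole_b_magnitude(8e22, 6.371e6, 0.0)
b_pole = dipole_b_magnitude(8e22, 6.371e6, pi / 2)
print(b_equator, b_pole)  # roughly 3.1e-05 and 6.2e-05 tesla
```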
The field itself is a vector quantity:

\mathbf{B}(\mathbf{r}) = \frac{\mu_0}{4\pi} \, \frac{3(\mathbf{m}\cdot\hat{\mathbf{r}})\hat{\mathbf{r}} - \mathbf{m}}{r^3} + \frac{2\mu_0}{3}\,\mathbf{m}\,\delta^3(\mathbf{r})

where:
- B is the field;
- r is the vector from the position of the dipole to the position where the field is being measured;
- r is the absolute value of r: the distance from the dipole;
- r̂ is the unit vector parallel to r;
- m is the (vector) dipole moment;
- μ0 is the permeability of free space;
- δ3 is the three-dimensional delta function (δ3(r) = 0 except at r = (0,0,0), so this term is ignored in the multipole expansion).
This is exactly the field of a point dipole, exactly the dipole term in the multipole expansion of an arbitrary field, and approximately the field of any dipole-like configuration at large distances.
Magnetic vector potential
The vector potential A of a magnetic dipole is
with the same definitions as above.
Field from an electric dipole
The electrostatic potential at position due to an electric dipole at the origin is given by:
- is a unit vector in the direction of ;
- p is the (vector) dipole moment;
- ε0 is the permittivity of free space.
This term appears as the second term in the multipole expansion of an arbitrary electrostatic potential Φ(r). If the source of Φ(r) is a dipole, as it is assumed here, this term is the only non-vanishing term in the multipole expansion of Φ(r). The electric field from a dipole can be found from the gradient of this potential:
where E is the electric field and δ3 is the 3-dimensional delta function. ( = 0 except at r = (0,0,0), so this term is ignored in multipole expansion.) Notice that this is formally identical to the magnetic field of a point magnetic dipole; only a few names have changed.
Torque on a dipole
Since the direction of an electric field is defined as the direction of the force on a positive charge, electric field lines point away from a positive charge and toward a negative charge.
for an electric dipole moment p (in coulomb-meters), or
for a magnetic dipole moment m (in ampere-square meters).
The resulting torque will tend to align the dipole with the applied field, which in the case of an electric dipole, yields a potential energy of
The energy of a magnetic dipole is similarly
Quantum mechanical dipole operator
Consider a collection of N particles with charges qi and position vectors . For instance, this collection may be a molecule consisting of electrons, all with charge -e, and nuclei with charge eZi, where Zi is the atomic number of the i th nucleus. The physical quantity (observable) dipole has the quantum mechanical operator:
A non-degenerate (S-state) atom can have only a zero permanent dipole. This fact follows quantum mechanically from the inversion symmetry of atoms. All 3 components of the dipole operator are antisymmetric under inversion with respect to the nucleus,
where is the dipole operator and is the inversion operator. The permanent dipole moment of an atom in a non-degenerate state (see degenerate energy level) is given as the expectation (average) value of the dipole operator,
where is an S-state, non-degenerate, wavefunction, which is symmetric or antisymmetric under inversion: . Since the product of the wavefunction (in the ket) and its complex conjugate (in the bra) is always symmetric under inversion and its inverse,
it follows that the expectation value changes sign under inversion. We used here the fact that , being a symmetry operator, is unitary: and by definition the Hermitian adjoint may be moved from bra to ket and then becomes . Since the only quantity that is equal to minus itself is the zero, the expectation value vanishes,
In the case of open-shell atoms with degenerate energy levels, one could define a dipole moment by the aid of the first-order Stark effect. This only gives a non-vanishing dipole (by definition proportional to a non-vanishing first-order Stark shift) if some of the wavefunctions belonging to the degenerate energies have opposite parity; that is, have different behavior under inversion. This is a rare occurrence, but happens for the excited H-atom, where 2s and 2p states are "accidentally" degenerate (see this article for the origin of this degeneracy) and have opposite parity (2s is even and 2p is odd).
In addition to dipoles in electrostatics, it is also common to consider an electric or magnetic dipole that is oscillating in time.
In particular, a harmonically oscillating electric dipole is described by a dipole moment of the form where ω is the angular frequency. In vacuum, this produces fields:
Far away (for ), the fields approach the limiting form of a radiating spherical wave:
which produces a total time-average radiated power P given by
This power is not distributed isotropically, but is rather concentrated around the directions lying perpendicular to the dipole moment. Usually such equations are described by spherical harmonics, but they look very different. A circular polarized dipole is described as a superposition of two linear dipoles.
- Battery (electricity)
- Electric field
- Electric motor
- Electrical conductor
- Electrical engineering
- Insulator (electrical)
- Magnetic field
- ↑ David J. Griffiths, Introduction to Electrodynamics, 3rd edition (Upper Saddle River, NJ: Prentice Hall, 1999, ISBN 013805326X).
- ↑ Charles A. Brau, Modern Problems in Classical Electrodynamics (Oxford: Oxford University Press, 2004, ISBN 0195146654).
- ↑ Robert C. Weast, CRC Handbook of Chemistry and Physics (Boca Raton, FL: CRC Press., 1984, ISBN 0-8493-0465-2).
- Bonin, Keith D., and Vitaly V. Kresin. 1997. Electric-Dipole Polarizabilities of Atoms, Molecules and Clusters. River Edge, NJ: World Scientific. ISBN 9810224931.
- Brau, Charles A. 2004. Modern Problems in Classical Electrodynamics. Oxford: Oxford University Press. ISBN 0195146654.
- Demaison, J., H. Hübner, W. Hüttner, and J. Vogt. 2002. Dipole Moments, Quadrupole Coupling Constants, Hindered Rotation and Magnetic Interaction Constants of Diamagnetic Molecules. Berlin: Springer. ISBN 3540410376.
- Gibilisco, Stan. 2005. Electricity Demystified. New York: McGraw-Hill. ISBN 0071439250.
- Griffiths, David J. 1999. Introduction to Electrodynamics, 3rd edition. Upper Saddle River, NJ: Prentice Hall. ISBN 013805326X.
- Young, Hugh D., and Roger A. Freedman. 2003. Physics for Scientists and Engineers, 11th edition. San Francisco: Pearson. ISBN 080538684X.
- USGS Geomagnetism Program. Retrieved June 19, 2008.
- Fields of Force: a chapter from an online textbook. Retrieved June 19, 2008.
- Electric Dipole Potential by Stephen Wolfram and Energy Density of a Magnetic Dipole by Franz Krafft. The Wolfram Demonstrations Project. Retrieved June 19, 2008.
New World Encyclopedia writers and editors rewrote and completed the Wikipedia article in accordance with New World Encyclopedia standards. This article abides by terms of the Creative Commons CC-by-sa 3.0 License (CC-by-sa), which may be used and disseminated with proper attribution. Credit is due under the terms of this license that can reference both the New World Encyclopedia contributors and the selfless volunteer contributors of the Wikimedia Foundation. To cite this article click here for a list of acceptable citing formats.The history of earlier contributions by wikipedians is accessible to researchers here:
Note: Some restrictions may apply to use of individual images which are separately licensed. | <urn:uuid:d6dbefa8-7d45-42ed-936e-e7ae70006b47> | 4.375 | 3,467 | Knowledge Article | Science & Tech. | 37.872208 |
Visual Basic 6's Variant data type is able to hold any other data type, including numbers, strings, dates, and object references. This data type can be very useful in a wide variety of situations. I'll show you a couple of useful tricks with Variants.
When a Variant has been declared but not assigned a value, it contains the special value Empty. You can test for this with the IsEmpty function. For example, suppose you want to determine whether a Variant has been assigned a value. If it has been assigned a value, you do nothing; if it hasn't been assigned a value, then assign the value 0. Here's how (assume that V is a type Variant variable):
If IsEmpty(V) Then V = 0
If an empty Variant is used in an expression, it will evaluate as either the value 0 or an empty string depending on the expression.
Another useful special value is Null, which is traditionally used to indicate that a variable does not contain valid data. You can assign this value to a Variant with the Null keyword:
V = Null
You can test for it with the IsNull function. For example, if Name is a type Variant:
If IsNull(Name) Then
    MsgBox "The name is not valid"
Else
    MsgBox "The name is " & Name
End If
You cannot use Null with any variable type other than Variant.
The copperhead is also called the death adder, chunk head and dry-land moccasin. Its scientific name is Agkistrodon contortrix. It is a pit viper and like all pit vipers, it is an ambush predator. The copperhead will sit and wait in the dry leaf matter waiting for an animal to come into its path.
Unlike many of the viperids in North America, the copperhead will not flee the area if confronted by a human. Instead it will coil up and freeze. The attached species photo was taken in a patch of dry leaves in North Texas. We were able to get many photos of the copperhead because it did not slither away from the area.
About 90 percent of the diet of a copperhead consists of mice, voles and other small rodents. They will also eat frogs and large insects (such as cicadas).
31st August 2011
The research, published with little fanfare this week in the prestigious journal Nature, comes from über-prestigious CERN, the European Organization for Nuclear Research, one of the world’s largest centres for scientific research involving 60 countries and 8,000 scientists at more than 600 universities and national laboratories. CERN is the organization that invented the World Wide Web, that built the multi-billion dollar Large Hadron Collider, and that has now built a pristinely clean stainless steel chamber that precisely recreated the Earth’s atmosphere.
Not that any of the Global Warming Alarmists will actually look at the evidence. Their minds are made up — don’t confuse them with the facts. | <urn:uuid:2855d19e-9dce-486e-8fc3-c16f7d68ea21> | 2.78125 | 151 | Personal Blog | Science & Tech. | 37.144185 |
He-3 The Thermobile
Published: Monday 13 March 2006 - Updated: Monday 28 March 2011
To demonstrate the conversion of thermal energy into mechanical energy using a shape memory alloy (Nitinol).
- Beaker of hot water
The thermobile consists of a shape memory alloy, Nitinol (Nickel-Titanium). Above a transition temperature Tc, this alloy returns to a shape given to it by prior heat treatment. The alloy absorbs heat as it returns to this shape, converting it into mechanical work.
The demonstration consists simply of dipping the small metal wheel of the thermobile into hot water whereupon the alloy is raised above Tc and rotation follows. | <urn:uuid:3ebe8125-1395-495a-8209-48d646de5597> | 3.015625 | 144 | Tutorial | Science & Tech. | 34.408485 |
By Kato Mivule
During a database development process, often database developers will engage with their clients and users to solicit the requirements that the new database is supposed to accomplish. One of the best ways to communicate with clients and users is with the use of ER diagrams to check if the developers have fully incorporated all the requirements in the design as dictated by the client.
The ER diagrams become easy to study, visualize, and amend during the requirements-soliciting process with the client. ER diagrams are therefore a conceptual and abstract presentation of the database, giving a full formal description of database components such as schema, entities, relationships, and attributes.
At the same time, the Relational Model, first introduced in 1969 by E.F. Codd, is a conceptual view of a database based on first-order predicate logic, depicting a database as a set of predicate variables. The model stipulates a declarative way of defining data and queries: users directly state what data they want to get from the database or place into it. Relational Models represent data as a collection of relations, depicting the database as a schema, the table as a relation, the columns as attributes, and rows as tuples.
The Relational Model is declarative and therefore easy to implement using declarative languages like SQL, which is a major difference between ER diagrams and the Relational Model. Although ER diagrams are highly conceptual and can be easily understood by clients and users who are not experts in database terminology, they are not declarative and straightforward when it comes to implementation. It is therefore important to have the Relational Model interface between the ER diagrams and the SQL declarative language so as to make implementation faster and easier.
Secondly, any errors that come up during the design phase with the ER diagrams can be caught when mapping to the Relational Model. Furthermore, while both the ER and the Relational Model are mathematical, we can still map ER diagrams to the Relational Model, making the move from conceptual view to physical implementation much easier.
Still, the relative ease of use of the ER Model and the Relational Model is debatable. Peter P. Chen, who introduced the ER model in the 1970s, notes three differences between the ER Model and E.F. Codd's Relational Model:
- ER models employ mathematical relation constructs to show relationships between entities…
- ER model incorporates more semantic data than the Relational Model…
- ER model uses explicit linkage between entities while the Relational model uses implicit linkages between entities…
Still, despite the advantage of richer semantic information in Chen's ER model, Codd's Relational Model is closer to SQL implementation and has the declarative advantage. ER diagrams therefore work best for the conceptual view, but implementation is best optimized by mapping them to the Relational Model for the physical view.
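To make the mapping concrete, here is a hypothetical sketch of how a small ER design lands in the Relational Model and then in declarative SQL. The entities (Student, Course) and their many-to-many Enrolls relationship are invented for illustration; Python's built-in sqlite3 module is used only as a convenient way to execute the statements, and any relational DBMS would accept equivalent DDL.

```python
import sqlite3

# Hypothetical ER design: two entities (Student, Course) and one
# many-to-many relationship (Enrolls). Under the Relational Model,
# each entity becomes a relation (table), each attribute a column,
# and the M:N relationship becomes its own relation of foreign keys.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Student (
        student_id INTEGER PRIMARY KEY,
        name       TEXT NOT NULL
    );
    CREATE TABLE Course (
        course_id  INTEGER PRIMARY KEY,
        title      TEXT NOT NULL
    );
    CREATE TABLE Enrolls (            -- relationship -> relation
        student_id INTEGER REFERENCES Student(student_id),
        course_id  INTEGER REFERENCES Course(course_id),
        PRIMARY KEY (student_id, course_id)
    );
""")

# The declarative advantage: we state *what* data we want,
# not how the engine should retrieve it.
conn.execute("INSERT INTO Student VALUES (1, 'Ada')")
conn.execute("INSERT INTO Course VALUES (10, 'Databases')")
conn.execute("INSERT INTO Enrolls VALUES (1, 10)")
rows = conn.execute("""
    SELECT s.name, c.title
    FROM Student s JOIN Enrolls e ON s.student_id = e.student_id
                   JOIN Course  c ON e.course_id  = c.course_id
""").fetchall()
print(rows)  # [('Ada', 'Databases')]
```

The relationship table (Enrolls) is exactly the kind of construct that falls out of the ER-to-relational mapping step rather than being drawn explicitly by the client.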
Seyed M. M. Tahaghoghi, Hugh E. Williams, “Learning MySQL”, O’Reilly Media, Inc., 2006, ISBN 0596008643, 9780596008642, Page 112-118
Ramez Elmasri, Shamkant B. Navathe, Fundamentals of Database Systems, Edition 6, Addison Wesley Pub Co Inc, 2010, ISBN 0136086209, 9780136086208, Page 199-350
Heidi Gregersen and Leo Mark and Christian S. Jensen, “Mapping Temporal ER Diagrams to Relational Schemas”, IN PROC. OF THE 9TH INT. CONF, 1998.
“Relational model – Wikipedia, the free encyclopedia.” [Online]. Available: http://en.wikipedia.org/wiki/Relational_model. [Accessed: 02-Nov-2010].
“Entity-relationship model – Wikipedia, the free encyclopedia.” [Online]. Available: http://en.wikipedia.org/wiki/Entity-relationship_model. [Accessed: 02-Nov-2010].
Peter Chen, “Entity-Relationship Modeling: Historical Events, Future Trends, and Lessons Learned”, Software Pioneers: Contributions to Software Engineering, Springer, 2002.
E.F. Codd, “A relational model of data for large shared data banks. 1970.,” M.D. computing : computers in medical practice, vol. 15, 1970, pp. 162-6. | <urn:uuid:fed409a0-099c-4666-b991-645fdf9c2b89> | 3.015625 | 938 | Personal Blog | Software Dev. | 38.002318 |
Date: April 2003
Why does DNA decompose when heated to too high a temperature?
Most of the hydrogen bonds that join the two DNA strands break from the
heat. With more heat the actual bonds between the nucleotides can break,
fragmenting a strand.
Very few organic molecules can withstand high temperatures for very
long. Heat provides the activation energy to break covalent bonds and
denature/destroy molecular structure. Proteins are particularly sensitive.
There are exceptions, as in the case of thermophilic bacteria. There are
cases of heat-shock proteins or "chaperone" proteins that can protect
regular proteins from breakdown - to a point.
DNA doesn't actually decompose when heated. It just melts. DNA comes in two mirror image strands that you could visualize as a zipper. The chemical bonds that make up each strand of the zipper are permanent joins, but the teeth that connect the two strands are much weaker and sensitive to heat. So when you expose DNA to heat (for instance, by boiling it), the two strands of the zipper separate. By very slowly cooling that denatured DNA, you could actually get the strands to reanneal or zip up again.
Christine Ticknor, Ph.D.
Ireland Cancer Center
Case Western Reserve University
DNA in its native form is composed of two molecules that have the
characteristic of being complementary such that one strand associates
with the other in a particular fashion, forming a "double helix." The
two molecules are stabilized in this structure due to noncovalent bonds,
mostly hydrogen bonds and van der Waals forces. These bonds are not
strong; accordingly, when the temperature rises sufficiently, the double
helix is said to "melt" into its two component molecules, and will
reassociate upon slow cooling under appropriate salt conditions. So,
the issue is not decomposition so much as disassociation upon exposure
to too high a temperature; and the reason for the disassociation is that
the energy input of the heat overcomes the ability of the noncovalent
bonds to keep the molecules together.
Heat "cooks" organic material, DNA or others, especially in the presence of oxyge. So
DNA can "burn" in the usual sense of the word forming CO2, N2 and other compounds. Even in the
absence of O2, heat can cause DNA, or other molecules, to change its structure either by losing
some degradation product, or just changing structure so that it cannot replicate. The
"technical" term for proteins is "denaturing", which is a catch-all phrase for "losing its
Chemical bonds have a certain stability based on the type of bonding. Some
bonds are quite strong. The following are covalent bond strengths:

Bond     Strength (kJ/mol)     Bond     Strength (kJ/mol)
Cl-Cl    239                   H-Cl     427
H-H      432                   C-H      413
N≡N      941                   N-H      391
Bond strength can be overcome by adding heat. The GC and AT bonds in DNA
are hydrogen bonds (non-covalent) and can also be overcome with heat.
These hydrogen bonds are about 71 kilojoules per mole - relatively weak
(but strong in large numbers) - and can be broken and reformed by heating
and cooling. This is the secret behind the polymerase chain reaction.
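The heating-and-cooling cycle behind PCR depends on exactly this base pairing: G-C pairs (three hydrogen bonds) hold together more strongly than A-T pairs (two). One crude illustration is the Wallace rule, a rule of thumb for estimating the melting temperature of short DNA primers. The sketch and example sequences below are illustrative only, and the rule is a rough approximation valid only for short oligos (roughly 14-20 bases).

```python
def wallace_tm(seq):
    """Estimate the melting temperature (degrees C) of a short DNA oligo.

    Uses the Wallace rule, Tm = 2*(A+T) + 4*(G+C). G-C pairs count
    double relative to A-T pairs, reflecting their extra hydrogen bond.
    """
    seq = seq.upper()
    at = seq.count("A") + seq.count("T")
    gc = seq.count("G") + seq.count("C")
    return 2 * at + 4 * gc

# A GC-rich primer "melts" (denatures) at a higher temperature than
# an AT-rich primer of the same length.
print(wallace_tm("ATATATATATATATAT"))  # 16 bases, all A/T -> 32
print(wallace_tm("GCGCGCGCGCGCGCGC"))  # 16 bases, all G/C -> 64
```

In practice PCR protocols use more sophisticated nearest-neighbor models, but the qualitative point is the same: more hydrogen bonds per base pair means more heat is needed to separate the strands.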
Update: June 2012 | <urn:uuid:5987cb50-6a4d-4616-95e0-78f67f20e21d> | 3.203125 | 746 | Knowledge Article | Science & Tech. | 50.517779 |
Not All Australian Marine Fauna Obeying Climate-Alarmist Dogma
Poloczanska, E.S., Smith, S., Fauconnet, L., Healy, J., Tibbetts, I.R., Burrows, M.T. and Richardson, A.J. 2011. Little change in the distribution of rocky shore faunal communities on the Australian east coast after 50 years of rapid warming. Journal of Experimental Marine Biology and Ecology 400: 145-154.
Poloczanska et al. (2011) investigate this hypothesis by resurveying a historical census of rocky-shore marine fauna that had been conducted in the 1940s and 1950s, in order to determine if there had been subsequent latitudinal changes in the distribution and abundance of intertidal marine species consistent with global climate change along Australia's east coast, which region, as they demonstrate, "has undergone rapid warming, with increases in temperature of ~1.5°C over the past 60 years." This survey was conducted at 22 rocky-shore sites that were located between 23 and 35°S latitude, which stretched across 1500 km of coastline.
Results of the analysis indicate that of the 37 species the authors encountered that had distributional data available from both time periods, "only six species showed poleward shifts consistent with predictions of global climate change." Four others actually moved in the opposite direction "inconsistent with expectations under climate change," while the rest "showed no significant changes in range edges."
In discussing the roles of wave exposure, local currents and the presence of large sand islands, the seven scientists state that it is the combination of those factors -- and "not temperature" -- that "is the primary factor influencing biogeographic distributions along the subtropical east coast of Australia." And supporting this conclusion is the contemporaneous study of Seabra et al. (2011), which describes how it is that intertidal marine species can easily withstand significant climatic warming without having to migrate poleward.
Seabra, R., Wethey, D.S., Santos, A.M. and Lima, F.P. 2011. Side matters: Microhabitat influence on intertidal heat stress over a large geographical scale. Journal of Experimental Marine Biology and Ecology 400: 200-208. | <urn:uuid:8f854e9e-a464-4397-a023-3a1151ae54ae> | 2.828125 | 476 | Academic Writing | Science & Tech. | 44.34032 |
The Complexity of Life
Cindy Lee Van Dover
College of William & Mary
The sea bed might seem simple enough. Dead plants, animals and microorganisms combine with wind- and rain-eroded soil dust, and rain down from sunlit surface waters to accumulate on the sea floor. The descent of dead material is itself a complex process. Each bit of tissue cycling through midwater organisms loses a little more of its nutritive value during each cycle. This organic material is distributed unevenly over the sea floor. The result is a mosaic of habitats with subtle variations.
It is this rain of surface-derived organic material upon which abyssal (very deep-sea) organisms eke out a living, generating additional micro patches of varying habitat quality as each one burrows or feeds. The volume of animals in a few square meters of mud might best be measured in teaspoonfuls and mere tens of individuals. Though sparse, the animals that do live in the globally vast deep-sea muds are diverse, representing millions of species, each adapted for a particular habitat or lifestyle. Research conducted near the Hudson River Canyon suggests that the diversity of these tiny invertebrates (small worms, snails and shrimp-like crustaceans) rivals that of tropical rainforests. It is the small-scale patchiness of deep-sea muds and the scarcity of food that likely sustain the complex assemblages of invertebrates that dwell in abyssal sediments.
Life in Chemosynthetic Environments
In deep-sea chemosynthetic environments, such as hot springs or methane seeps, an unusual picture emerges. In chemosynthesis, organisms use inorganic carbon (carbon dioxide) and the energy from chemical reactions to make food. In contrast to photosynthesis, the organisms do not need sunlight to make food. Rather, in deep-sea chemosynthetic habitats, rich sources of food are generated locally. Where there is food, life is abundant.
Chemosynthetic ecosystems in the deep sea are well known for their explosions of life. These highly localized communities (generally not much bigger than a football field) are food-rich environments within which myriad interactions take place among organisms. Rampant clumps of tubeworms and luxurious beds of mussels develop wherever chemically modified fluids are expelled from the sea floor. These fluids are rich in compounds such as hydrogen sulfide and methane. Microorganisms living in or near the larger sea-floor animals "burn" (oxidize) these compounds to produce metabolic energy and to transform carbon dioxide into their own food source. They, in turn, provide food for the larger animals. The fact that any multicellular animals thrive in sulfide-rich waters is surprising, because hydrogen sulfide is highly toxic to most life forms. From detailed studies of both microorganisms and invertebrates living in sulfidic waters, we learn how life adapts to extreme environments.
The Relationship Among Organisms
Symbiosis the close association of two organisms for mutual benefit is a critical survival strategy in chemosynthetic habitats. A typical mussel, for example, has a gas exchange organ called a gill, which allows the mussel to "breathe." The gill surface is covered with hair-like cilia, which generate currents that draw seawater into the mussel and over its gills. In deep-sea mussels, the gills are enlarged and filled with bacteria. The bacteria absorb the sulfide or methane from the seep water, burn it, and then use the energy acquired from this chemical reaction to "fix" the carbon dioxide expelled from the host mussel.
The bacteria grow, which benefits the mussel as well. The bacterial cells "leak," thus nourishing the mussel. This kind of symbiosis between bacteria and invertebrates is hugely successful and is a hallmark of nearly all known seep and hot-spring environments.
However, not all invertebrates in chemosynthetic communities have symbiotic bacteria. Some, like many small snails and limpets, are grazers and feed on free-living bacteria. Others, such as crabs, are scavengers or carnivores. Still others, including a blood-red worm that lives within the shells of mussels, are parasites.
Intriguingly, the diversity of invertebrates that live in food-rich seep and vent environments is very low compared to that of more typical deep-sea muds, although the numbers of species living among mussels at hot springs can be about the same as is found among shallow-water mussel beds. Thus, where food is limited, population density is low and diversity is high: many species coexist. Where food is abundant, population density is high and diversity is low: relatively few species coexist.
Species diversity is lower in seep and vent environments because most organisms cannot survive in such a toxic milieu. Thus, invertebrate species that live at vents and seeps are very different from the species that live in adjacent muds. As we explore new seep or vent sites, we are certain to discover new species. And as we move to new regions of the seafloor, we may even find entirely new biogeographic provinces.
J. Prat-Camps et al./Autonomous Univ. of Barcelona
Field trip. Cylindrical shells with the right properties would expel magnetic field energy. Here, the field from a bar magnet within the shell on the left is pushed outward, increasing its strength in the exterior region. A second shell, on the right, captures the field passing through it and concentrates the field’s strength in this shell’s interior. This scheme could allow the transfer of magnetic energy with improved efficiency. | <urn:uuid:b8902f65-4034-4dff-898b-65067363ee41> | 2.859375 | 105 | Academic Writing | Science & Tech. | 64.4975 |
The pictures from Mars take my breath away. I am lost for words to describe the bravado and brilliance of the engineers and scientists who placed this robotic laboratory so gently onto the surface of Mars.
NASA should be very proud. Americans should be proud too: they paid for it -around $10 per head. And we should all reflect on just what human beings can achieve when we put our minds to something.
The brilliantly-conceived mission has revived talk about a possible manned mission to Mars. And I have even heard people talk again about the possibility of colonising the Moon or Mars. While such extrapolations are understandable I think it is important to understand that human beings cannot live anywhere except planet Earth.
This is not fundamentally true. It is conceivable that we could create ‘bubbles’ of survivability far distant from the Earth. Overlooking the many challenges (e.g. the increased radiation doses; the atrophy of muscles in reduced gravity; and the creation of a stable microbial population etc.), I will concede that it is possible that we could create ‘bio domes’ in which relatively large groups could live for extended periods away from the surface of Earth. But there is one problem that any colonists will not easily overcome: energy.
Curiosity is nuclear-powered which will allow it to operate through the Martian winter and at night when solar power is weak. Our putative colony might be nuclear-powered too, but for how long? Let’s say (optimistically) that the initial colonists brought with them enough nuclear power to last a century. In that century the colonists would undoubtedly achieve great things. But after the nuclear power station was shut down, what would they do? It is unlikely that a colony could develop the capability to build and fuel a new nuclear power station in just a century. Even if they struck oil on Mars – and refined hydrocarbons flowed easily from the rock – they would be of little use because there is no oxygen in the atmosphere with which to burn the fuel. Ultimately, the colonist’s engineers would find that the only sustainable method of generating energy was solar power.
Back on planet Earth we are in a similar position to our putative Mars colonists. Using fossil fuels we have achieved great things. But our use of fossil fuels is now affecting the flow of energy on and off the planet. Worryingly, the evidence that this is happening is becoming irrefutable. We could use nuclear power for a century or two. But if we want to replace all current energy use with a sustainable source then we have no choice: we need to capture 0.01% of the solar energy which reaches Earth’s surface. Yes. We need just one ten-thousandth part of the 123 000 000 000 000 000 watts of solar energy that constantly warms the Earth’s surface.
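For what it is worth, the arithmetic behind that "one ten-thousandth" figure is easy to check. In the sketch below, the solar input is the number quoted above; the comparison value for humanity's average rate of energy use (about 18 TW) is my own round-number assumption, not something stated in this post.

```python
# Rough check: does 0.01% of the solar power reaching Earth's
# surface match humanity's rate of energy use?
solar_at_surface_w = 123_000_000_000_000_000  # 1.23e17 W, as quoted above
captured_w = solar_at_surface_w / 10_000      # one ten-thousandth

# Assumption: world average primary power demand of roughly 18 TW.
world_use_w = 18e12

print(f"Captured: {captured_w / 1e12:.1f} TW")  # 12.3 TW
print(f"Fraction of assumed demand: {captured_w / world_use_w:.2f}")
```

The captured power comes out at roughly twelve terawatts, the same order of magnitude as the world's average power demand, which is the point of the claim.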
Earth is a ‘bio-dome’ driven by solar power. The flow of energy on and off our planet allows plants to thrive – and they provide us with the food and resources we need to live from day-to-day. It is the only place in the Universe where humans can live sustainably. If we want to avoid disturbing the climate that creates this home to which we are uniquely adapted, then we need a truly sustainable energy source. And there is only one: solar power.
If our planet’s engineers and scientists can put the Curiosity rover onto Mars, and if taxpayers can fund this noble mission, then surely we can collectively decide to live sustainably on Earth and ask our scientists and engineers to make it possible. Can’t we? | <urn:uuid:95ea8efc-359a-4fe9-8f54-6086c0fad56b> | 3.34375 | 741 | Personal Blog | Science & Tech. | 52.296807 |
Astronauts Collect Cave Life
CAVES training sends astronauts from all the International Space Station partner nations underground for a week to learn about working in multi-cultural teams under extreme conditions.
During their six-night stay in caves in Sardinia, Italy, their scientific research included meteorology, surveying, geology and cataloguing underground life.
“Every year we scout the area to prepare for the training mission,” says Loredana Bessone, course designer and project manager. “This year, we noticed interesting-looking crustaceans in a small pond.”
The astronauts set bait near the pond and in other places to attract and identify as many life forms as possible. ESA astronaut Andreas Mogensen recalls: “We set four lures in pre-defined areas and had two mobile baits that we placed in areas of interest.”
Cave scientists usually leave bait for three weeks. CAVES training lasts only a week so the biological sampling programme developer, Paolo Marcia, cooked up a special menu to lure the underground life: “I created a really stinky bait made of liver and rotten cheese.”
After three to four days, the astronauts chose a few specimens of the less common species and preserved them in alcohol to take above ground.
“We were concerned that not enough cave life had been collected, so I asked the astronauts to go back to the pond on the last day – and bingo!” says Laura Sanna, science operations director.
Swimming underground woodlice
Molecular analysis confirms the samples belong to a new species of crustaceans. Just under 8 mm long, these animals belong to the suborder of terrestrial isopods, commonly known as woodlice.
Most crustaceans such as crabs, shrimps and lobsters live in water while woodlice are the only group that have fully adapted to life on land.
The ancestors of the terrestrial isopods seem to have evolved from aquatic life to live on land. Surprisingly, the astronauts found a species that has returned to living in water, completing an evolutionary full circle.
“This find is important because the few aquatic woodlice we know of were thought to be primitive forms from which terrestrial woodlice had evolved. Now it is clear that these animals have evolved to live in water again,” explains isopod specialist Stefano Taiti.
“It is changing our point of view on evolutionary processes in regards to terrestrial isopods living in an aquatic environment.
“The find also confirms the theory that evolution is not a one-way process but that species can evolve to live in previously forgotten habitats.“
“This shows that CAVES offers a truly interesting scientific programme while keeping to its main goal: to train spaceflight teams in an operational space analogue on Earth,” affirms Loredana. | <urn:uuid:ce1caf99-af92-4d23-8163-c6e0aabae48f> | 3.703125 | 604 | Knowledge Article | Science & Tech. | 33.880716 |
NASA's Voyager 1 spacecraft has entered a new region between our solar system and interstellar space, which scientists are calling the stagnation region. In the stagnation region, the wind of charged particles streaming out from our Sun has slowed and turned inward for the first time, our solar system's magnetic field has piled up, and higher-energy particles from inside our solar system appear to be leaking out into interstellar space. This image shows that the inner edge of the stagnation region is located about 113 astronomical units (10.5 billion miles or 16.9 billion kilometers) from the Sun. Voyager 1 is currently about 119 astronomical units (11 billion miles or 17.8 billion kilometers) from the Sun. The distance to the outer edge is unknown. | <urn:uuid:9ef6532f-d622-4ac3-b525-f7aaa0e8abaa> | 3.875 | 146 | Truncated | Science & Tech. | 53.403009 |