Finitely generated groups and their subgroups are important domains in GAP. They are represented as permutation groups, matrix groups, ag groups, or even more complicated constructs, such as automorphism groups, direct products, or semidirect products, where the group elements are represented by records.
Groups are created using Group (see Group); they are represented by records that contain important information about the groups. Subgroups are created as subgroups of a given group using Subgroup and are also represented by records. See More about Groups and Subgroups for details about the distinction between groups and subgroups.
Because this chapter is very large, it is split into several parts. Each part consists of several sections.
Note that some functions will only work if the elements of a group are represented in a unique way. This is not true for finitely presented groups; see Group Functions for Finitely Presented Groups for a list of functions applicable to finitely presented groups.
The first part describes the operations and functions that are available for group elements, e.g., Order (see Group Elements). The next part tells you more about the distinction between parent groups and subgroups (see More about Groups and Subgroups). The next parts describe the functions that compute subgroups (see Series of Subgroups). The next part describes the functions that compute and test properties of groups (see Properties and Property Tests) and that identify the isomorphism type. The next parts describe conjugacy classes of elements and subgroups (see Conjugacy Classes) and cosets (see Cosets of Subgroups). The next part describes the functions that create new groups, e.g., DirectProduct (see Group Constructions). The next part describes group homomorphisms (see Group Homomorphisms). The last part tells you more about the implementation (see Set Functions for Groups).
The functions described in this chapter are implemented in the following library files:
LIBNAME/"grpelms.g" contains the functions for group elements.
LIBNAME/"group.g" contains the dispatcher and default group functions.
LIBNAME/"grpcoset.g" contains the functions for cosets.
LIBNAME/"grphomom.g" implements the group homomorphisms.
LIBNAME/"grpprods.g" implements the group products.
August 19, 2010
OAK RIDGE, Tenn., Aug. 19 -- Buried in mountains of meteorological and hydrological data are likely clues that could help in predicting floods, hurricanes and other extreme weather events.
Through a new multi-institution project that includes the University of Tennessee and the Department of Energy's Oak Ridge National Laboratory, Auroop Ganguly and colleagues plan to use data mining techniques to enhance the accuracy of climate and earth system models. The $10 million project is funded by the National Science Foundation and led by the University of Minnesota. This project is one of the largest investments made by NSF's Directorate for Computer and Information Science & Engineering.
"We want to be able to predict large shifts in regional climate patterns or statistical attributes of severe meteorological and hydrological events with greater accuracy to assist decision makers," said Ganguly, a senior staff member in ORNL's Computational Sciences and Engineering Division and faculty member at UT.
Ganguly noted, however, that valuable information could be hidden within massive volumes of model data such as temperature and humidity profiles or sea surface temperatures. Provided data mining algorithms are made sophisticated enough to capture the complexity of climate data, the extracted information could help improve predictions of crucial variables and impacts such as extreme rainfall or hurricanes.
"Recent research appears to suggest this may be possible, but a systematic approach has been lacking," Ganguly said. "This is precisely what this NSF proposal aspires to achieve through innovative approaches in computational data sciences."
Based on the traditional strengths of UT and ORNL in climate modeling and impacts assessments as well as new vision in areas like knowledge discovery, Ganguly expects the collaboration to achieve great success in pushing these boundaries to more fully understand variables in climate change.
ORNL's Oak Ridge Climate Change Science Institute, or CCSI, is helping organize the lab's contributions to the project as part of its goal to integrate scientific projects in modeling, observations, and experimentation with ORNL's powerful computational and informatics capabilities to answer some of the most pressing global change science questions.
The researchers propose to leverage the Leadership Computing Facilities at ORNL, which are funded by the Department of Energy's Office of Science.
The interdisciplinary project team features 13 researchers from seven institutions, who hope to build a bridge between credible projections from physics-based climate models and crucial requirements for impacts and integrated assessment to inform stakeholder and policy needs.
"A data-driven approach to informing societal decisions can be a significant step forward for policy making as well as for sustaining the nation's critical infrastructures and key resources," Ganguly said.
ORNL is managed by UT-Battelle for the Department of Energy's Office of Science.
Source: Oak Ridge National Laboratory
If you want to learn a science, astronomy is the place to start. For most of what we do in this class higher mathematics is not a prerequisite (although being comfortable with numbers is helpful). Astronomy is a science based on geometrical, and often very visual, models. You will find that it's important to be able to visualize 3 dimensional spatial concepts, but often this is enough to have amazing predictive power toward understanding how the sky will behave. Such predictive power is the goal of all science.
We must develop a common understanding and vocabulary for describing the visible universe. Much of our observational exercises in this class will be aimed at illustrating and testing these geometrical models of the positions and motions of the stars and planets.
Astronomy would be pretty easy, and pretty dull, if the Earth stood still. In fact, the Earth has several different motions which astronomers long ago struggled hard to understand. One of these motions is the Earth's rotation on its axis; another is the Earth's revolution, or orbital motion, around the Sun.
Background Reading: Stars & Planets, p. 13 & 14 (Star positions); p. 16 & 17 (Appearance of the sky), p. 19 (The star charts).
If you watch the night sky for a few hours, you will see that the stars appear to rotate about a fixed point in the sky, known as the north celestial pole (which happens to be near the star Polaris). This motion is due to the Earth's rotation. As the spin of the Earth carries us eastward at almost one thousand miles per hour, we see stars rising in the east, passing overhead, and setting in the west. The Sun, Moon, and planets appear to move across the sky much like the stars.
Because of the Earth's rotation, everything in the sky seems to move together, turning once around us every 24 hours. Ancient astronomers explained this phenomenon by supposing that the Sun, Moon, planets, and stars were attached to a huge celestial sphere, centered on the Earth, which rotated on a fixed axis once per day. Of course, this sphere does not really exist; the Sun, Moon, planets, and stars all fall freely through space, and only appear to move together because of the Earth's rotation. Nonetheless, we still use the concept of the celestial sphere in talking about the positions of stars.
The celestial pole is 21.3° above the horizon as seen from Oahu. The point on the horizon directly below the celestial pole is north, while the opposite direction is south. If you face north, west is on your left and east is on your right. Finally, the Zenith is the point exactly overhead.
Since the apparent rotation of the celestial sphere is due to the actual rotation of the Earth, the north celestial pole is exactly overhead as seen from Earth's north pole. Likewise, every point on the celestial equator is exactly overhead from some point on the Earth's equator.
Over the course of a year, the Earth makes one complete orbit about the Sun. As a result, the Sun seems to move with respect to the stars, appearing in front of one constellation after another, as shown in the diagram on p. 12 of Stars & Planets. After one year, the Sun is back where it started. The Sun's annual path across the sky is called the ecliptic. Traditionally, the ecliptic was divided into twelve equal parts, each associated with a different constellation of the zodiac. The planets also appear to move along the ecliptic, although, as we will see, they don't always move in the same direction as the Sun.
The night sky is just that part of the sky which we see when the islands of Hawaii have turned away from the Sun. As we orbit the Sun, different constellations are visible at different times of the year. In January, for example, the evening sky is still dominated by winter constellations like Orion and Taurus; by April, these constellations will be low in the western sky, and summer constellations like Cygnus and Sagittarius will be rising in the east. You can get a 'sneak preview' of the summer sky by staying up late, thanks to the Earth's rotation. For example, the constellations visible at 8 pm in late April can also be seen at 2 am in late January.
The Earth's axis of rotation is not exactly parallel to its axis of revolution; the angle between them is 23.5°. As a result, the ecliptic is inclined by the same angle of 23.5° with respect to the celestial equator. This misalignment causes seasons; when the Sun appears north of the celestial equator the Earth's northern hemisphere receives more sunlight, while when the Sun appears south of the celestial equator the northern hemisphere receives less sunlight.
If we could view the Solar System from a point far above the north pole, we'd see the Earth revolving counter-clockwise about the Sun and rotating counter-clockwise on its axis. The other planets would likewise revolve counter-clockwise around the Sun, and most would also rotate counter-clockwise. In addition, the Moon would appear to orbit the Earth in a counter-clockwise direction, as would most other planetary satellites.
In this class, we will use a 24-hour clock instead of writing 'am' or 'pm'. Since our class meets in the evening, most of the times we will record are after noon, and the 24-hour time is the time on your watch plus 12 hours. For example, our class starts at 19:00 (= 7:00 pm + 12:00), and ends at 22:00 (= 10:00 pm + 12:00). Sometimes we need to record the date and the time together; for example, our first class begins at 01/13/05, 19:00.
Astronomers everywhere in the world use a single time system to coordinate their observations. This system is called Universal Time, abbreviated as UT or UTC. (Greenwich Mean Time, abbreviated GMT, is the same thing as UT.) Universal Time is exactly 10 hours ahead of Hawaii time. To convert 24-hour Hawaii time to UT, you add 10 hours; if the result is more than 24, subtract 24 and go to the next day. For example, our first observing session (weather permitting) will be at 01/20/05, 19:00, or 01/21/05, 05:00 UT. To convert from UT to Hawaii time, subtract 10 hours; if the result is less than 0, add 24 and go to the previous day. For example, observers in Honolulu can see the star eta Leonis occulted, or hidden by the Moon, at 4/29/04, 8:35 UT; that's 4/28/04, 22:35 Hawaii time.
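To make the rule concrete, here is a small Python sketch of the conversion using the standard datetime module; the function names are our own for illustration, and only the 10-hour offset and the two example times come from the text above.

```python
from datetime import datetime, timedelta

# Hawaii Standard Time is 10 hours behind Universal Time, so conversion is a
# fixed offset plus any date rollover (illustrative helper names).
HST_TO_UT = timedelta(hours=10)

def hawaii_to_ut(local):
    """Convert a Hawaii date/time to Universal Time."""
    return local + HST_TO_UT

def ut_to_hawaii(ut):
    """Convert a Universal Time date/time to Hawaii time."""
    return ut - HST_TO_UT

# Observing session: 01/20/05 at 19:00 Hawaii time -> 01/21/05 05:00 UT
print(hawaii_to_ut(datetime(2005, 1, 20, 19, 0)))   # 2005-01-21 05:00:00

# Occultation of eta Leonis: 4/29/04 08:35 UT -> 4/28/04 22:35 Hawaii time
print(ut_to_hawaii(datetime(2004, 4, 29, 8, 35)))   # 2004-04-28 22:35:00
```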
As a rule, we will use Hawaii time in this class, and write times and dates without any time zone. The 'UT' symbol will be used to indicate universal time.
Astronomers represent the appearance of the entire sky as seen at some particular place and time by drawing circular all-sky charts. Unfortunately, it's hard to show the appearance of the sky on a flat piece of paper, so reading an all-sky chart and relating it to what you see in the sky is a little tricky. For example, these charts distort the patterns of stars near the horizon, so you may find it hard to recognize constellations from an all-sky chart. The only way to correct this distortion is to break the sky up into several separate charts (this approach, used in The Sky Tonight, helps to find the constellations). For some purposes, however, it's very convenient to show the entire sky in one chart, so you should learn to read these charts. All-sky charts for each month of the year appear in Stars & Planets, starting on p. 24.
To read an all-sky chart, hold it in front of you with the side labeled 'N' at the top. Now imagine you are lying flat on your back with your head pointing north; then east will be on your left, south at your feet, west on your right, and the Zenith right in front of you. Mentally stretch the disk of the chart so that it forms a dome over your position. The positions of stars on this imaginary dome now correspond to their positions in the sky.
Fig. 1. The sky over Honolulu on 01/14/05, 21:00 (01/15/05, 07:00 UT), produced using Your Sky. Stars are shown as dots, with larger dots for brighter stars; the lines between stars outline constellations. The blue cross near the top of the chart is the north celestial pole, and the blue curve is the celestial equator; blue numbers are celestial coordinates in right ascension (see below). The red curve is the ecliptic. From left to right, the five round symbols show the positions of Jupiter, Saturn, the Moon, Mars and Venus respectively; notice that all these objects are near to the ecliptic. Compass points are shown around the edge of the chart.
You can get a pretty good idea of how the sky will look on 01/14/05, 21:00 by using the chart shown in Fig. 1. For example, the constellation of Taurus appears near the center of the chart, so it will be nearly overhead. The Moon appears on the right side of the chart, about one-third the distance from the center to the edge, so you can expect to see it in the west-south-west, about 30° above the horizon.
If you are used to reading maps of the Earth, the east and west compass points in Fig. 1 may seem to be reversed. On a terrestrial map with north at the top, you would expect to find west to the left and east to the right. However, a celestial map with north at the top has west at the right and east at the left. The reason for this reversal is that a terrestrial map shows a view looking down at the Earth, while a celestial map shows a view looking up at the sky. Astronomical charts usually have north at the top and west to the right. When using a telescope, you'll notice that stars drift toward the west as a result of the Earth's rotation; this is a convenient way to determine the correct way to view a star chart.
There are several coordinate systems that can be used to describe the positions of objects in the sky. The one that is the most useful to us is called 'altitude-azimuth'. The altitude coordinate is just the distance above the horizon in degrees; it goes from zero to 90°. The azimuth is the direction you're facing: You can use compass directions, or the angle in degrees. The azimuth angle starts at north, increases toward the east and runs from zero to 360°. So east is 90°, southwest is 225° and so on. Our telescopes work in this kind of coordinate system: the base swivels around in azimuth, and the tube can be moved up and down in altitude. With these two adjustments, the telescope can be pointed at any part of the sky.
In describing positions in the sky, it's sometimes hard to estimate the angles. Fortunately, we have some built-in scales. With your arm outstretched and your fingers spread, your thumb and little finger cover an angle of about 20° on the sky. Your closed fist covers about 10°, and one finger covers about two degrees--all with outstretched arm, of course. These approximations work pretty well for people of very different sizes.
While alt-az coordinates are fixed to an observer on earth's surface, celestial coordinates are fixed to the sky. To a good approximation, each star has a constant location in celestial coordinates.
Just as latitude and longitude can be used to specify any point on the Earth's surface, two celestial coordinates can be used to specify any point on the celestial sphere. Imagine starting from the point on the sky where the Sun, moving north, crosses the celestial equator (this is the point labeled '0 h' in the chart above). To reach any given point on the celestial sphere, you could first travel east along the celestial equator, and then towards one of the celestial poles, until you reach your destination. The angle you've traveled along the equator is called the right ascension; it's measured in units of hours, where 1 hour = 15°. The angle you've traveled towards one of the poles is called the declination; it's measured in degrees, with positive declinations towards the north celestial pole, and negative declinations towards the south celestial pole.
Celestial coordinates are often useful for locating objects. Stars & Planets, for example, often gives celestial coordinates when discussing stars. If you look at the description of alpha Orionis on p. 194, you'll see '5h 55m +7°.4' just after the star's name. This means that alpha Orionis has a right ascension of 5h 55m (just slightly less than 6 hours) and a declination of +7°.4 (a little north of the celestial equator). Celestial coordinates also appear on the constellation charts; for example, see the chart of Orion on p. 195, which shows that Orion lies across the celestial equator at about 5h 30m right ascension.
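Since 1 hour of right ascension equals 15°, converting a catalogue right ascension to degrees is a one-line calculation. The short Python sketch below (the function name is illustrative) reproduces the alpha Orionis figure quoted above.

```python
# Convert right ascension given in hours and minutes to degrees
# (1 hour of right ascension = 15 degrees).
def ra_to_degrees(hours, minutes=0.0):
    return (hours + minutes / 60.0) * 15.0

# alpha Orionis: RA 5h 55m, declination +7.4 degrees
print(ra_to_degrees(5, 55))   # 88.75 degrees, just short of 6h = 90 degrees
```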
An interactive planetarium, set up to show the sky now above Honolulu. You can chose other dates and times, select other viewing sites, and zoom in on selected areas; for these and other options, see http://www.fourmilab.to/yoursky. Created by John Walker.
Shows how the sky above Honolulu changes during one day, from 01/01/04, 20:00 to 01/02/04, 20:00. This animation illustrates the effect of the Earth's rotation. The various symbols you see moving along with the stars represent the Sun, Moon, and planets. On this particular day, the Moon appeared very close to the Sun, though it was not actually close enough to cause an eclipse.
Shows how the sky above Honolulu changes during one year, from 01/01/04, 20:00 to 12/31/04, 20:00. This animation illustrates the effect of the Earth's revolution around the Sun. Note how the constellations visible in the night sky change as the Earth revolves around the Sun. Also, note the Moon's monthly passages across the sky along the ecliptic and the fairly gradual motion of other planets, especially red Mars.
Last modified: January 10, 2005
Topic review (newest first)
With these problems I first work out the whole circumference, and then work out the fraction for the arc.
So cancel a 2 top and bottom and it's done.
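If it helps, here is the same recipe as a small Python sketch; the radius and central angle below are placeholders, since the numbers from the exercise aren't shown in this thread.

```python
import math

# Work out the whole circumference first, then take the fraction of the
# circle spanned by the arc's central angle.
def arc_length(radius, central_angle_deg):
    circumference = 2 * math.pi * radius
    return (central_angle_deg / 360.0) * circumference

print(arc_length(6, 120))  # (120/360) * 2*pi*6 = 4*pi, about 12.57
```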
Now for question 10.
What you have done is exactly right so something else is wrong here.
I started the next lesson about Circumference and Arc Length and I got 18/20.
No, I did #20 and got it correct.
You are very welcome.
Thank you for the help!
Yes, that's it. Excellent!
The thing is this. There is a property that is always true for all circles and Q19 is testing it.
is always a right angle, right?
I said #102 before seeing post #101
When I measure the one you drew on the screen, I get A is 30 degrees, C is 70, and D is 80.
You've made up 60 and 30 because that's what those angles look like. But remember D could be anywhere on the circumference.
I drew a circle and the first half is 30, 30, 120; the second is 60, 60, 60. It looks exactly like your triangle in the diagram but cut in half at 120.
but I think I should add 60 + 30, which gives me 90
THE year has just begun, yet it is already shaping up to be an unusual one. Millions of people in Australia, Brazil, China and the US are having a rough January as extreme weather events wreak havoc around the world.
In the absence of natural climatic triggers like an El Niño event, such an accumulation of extremes is highly unusual, says Omar Baddour of the World Meteorological Organization in Geneva, Switzerland. Until further studies are carried out, it is impossible to rule out that some of the extremes are freak events. But they all coincide with regional increases in extreme weather linked to climate change.
The US, for example, is in the grip of worsening drought. Last week, the US Global Change ...
In the 1850s, Robert Bunsen and Gustav Kirchhoff devised the first working spectroscopes. Two decades later, Georg Dragendorff and other scientists began using spectroscopy for medical research and criminal investigations.
The fields of toxicology and serology—the study of blood and other body fluids—were the first to benefit. A small specimen of blood, subjected to a flame from a Bunsen burner, gave off light that could be subjected to spectroscopic analysis. This analysis could reveal the presence of carbon monoxide and other poisons.
Black Box Effect
In the 1940s the Beckman spectrophotometer revolutionized the laboratory. Encased in a metal container, the device occupied less counter space than conventional bench apparatus, and hid the procedures inside a box. A technician placed a specimen on a slide, inserted it into the instrument, and recorded the readings.
Historians of science theorize that "black box" devices are basic to modern laboratory practice. A device and its output replace the hand, eye, and judgment of the scientist. The standardized inner workings and seemingly objective output of the black box can more easily evade or withstand legal scrutiny.
There's a very cool slideshow on the Discovery Channel site in recognition of Earth Day 2009. The collection of photos that make up the "On Earth Day, a Bird's-Eye View" slideshow are beautifully done and are accompanied by explanatory text that highlight the impact of climate change -- from dust storms to floods to volcanic eruptions and earthquakes. The imagery is amazing - and certainly good for classroom sharing.
Geodesic dome math project: A model dome like this can be made in any size (as long as you figure out the relative lengths of the struts). This one is pretty big!
Born on May 15, 1863: Frank Hornby, an inventor whose "toys" included Meccano, an engineering construction set of nuts, bolts, and strips of sheet metal. Hornby first devised the system for his children. When he moved on to mass produce...
Christina Ren, a high school junior and founder of Science Alliance Network, believes student-to-student mentorship is key to keeping young kids excited about science.
Highway to the Danger Zone
"The further on the edge, the hotter the intensity," sings Kenny Loggins in "Danger Zone," a song made famous by the movie Top Gun. The same words ring true for young, cooler stars like our sun that live in the danger zones around scorching hot stars, called O-stars. The closer a young, maverick star happens to be to a super hot O-star, the more likely its burgeoning planets will be blasted into space.
This artist's animation illustrates how this process works. The movie begins by showing an O-star in a murky star-forming region. It then pans out to show a young, cooler star and its swirling disk of planet-forming material. Disks like this one, called protoplanetary disks, are where planets are born. Gas and dust in a disk clumps together into tiny balls that sweep through the material, growing in size to eventually become full-grown planets.
The young star happens to lie within the "danger zone" around the O-star, which means that it is too close to the hot star to keep its disk. Radiation and winds from the O-star boil and blow away the material, respectively. This process, called photoevaporation, is sped up here but takes anywhere from 100,000 to about 1,000,000 years. Without a disk, the young star will not be able to produce planets.
Our own sun and its suite of planets might have grown up on the edge of an O-star's danger zone before migrating to its current, spacious home. However, we know that our young sun didn't linger for too long in any hazardous territory, or our planets, and life, wouldn't be here today.
NASA's Spitzer Space Telescope surveyed the danger zones around five O-stars in the Rosette nebula. It was able to determine that the zones are spheres with a radius of approximately 1.6 light-years, or 10 trillion miles.
Average temperatures have climbed 1.4 degrees Fahrenheit (0.8 degree Celsius) around the world since 1880, much of this in recent decades, according to NASA's Goddard Institute for Space Studies.
The U. S. is the top oil consuming nation, using about 25% of total world oil production.
The Earth has been around for 4.6 billion years. Scaling this time down to 46 years we have been around for 4 hours and our Industrial Revolution began just 1 minute ago. During this short time period we have ransacked the planet to get fuels and raw materials.
- The 500 million automobiles on earth burn an average of 2 gallons of fuel a day.
- Each gallon of fuel releases 20 pounds of carbon dioxide into the air (a rough combined estimate from these figures is sketched below).
- Approximately 5 million tons of the oil produced in the world each year ends up in the ocean.
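The short Python sketch below only multiplies the numbers already quoted in the list; it is a back-of-the-envelope illustration, not an emissions inventory.

```python
# Rough arithmetic using the figures quoted above.
cars = 500_000_000          # automobiles on Earth
gallons_per_day = 2         # average fuel burned per car per day
lbs_co2_per_gallon = 20     # pounds of CO2 released per gallon

daily_co2_lbs = cars * gallons_per_day * lbs_co2_per_gallon
daily_co2_tons = daily_co2_lbs / 2000            # short tons

print(f"{daily_co2_lbs:.2e} lb of CO2 per day")  # 2.00e+10 lb
print(f"{daily_co2_tons:.2e} tons per day")      # 1.00e+07 tons
```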
Let m(t) be the mass as a function of time t and let v(t) be the velocity as a function of time t, then the momentum as a
function of time t is p(t) = m(t)*v(t). Further let ' represent one derivative with respect to time, i.e. the instantaneous time
rate of change of a quantity, then
F1 + F2 + ... + Fn = p'(t) = [m(t)*v(t)] ' = m'(t)*v(t) + v'(t)*m(t) [by the product rule for derivatives]
This form of Newton's 2nd Law is needed to compute the force, for example, on a rocket whose total mass is changing
because it is burning its onboard fuel. So, to describe the force on a constant mass m (or the acceleration produced by a
force on the constant mass m), the simplified version is:
F = m*v'(t) = m*a(t), where the acceleration (the instantaneous time rate of change of the velocity) is a(t) = v'(t).
If we take the (over-simplified) example of the 1-dim motion of an object in the gravitational field of the Earth (e.g. hold
your pencil up over the floor and then release it), then the acceleration is also constant, i.e. a(t) = -9.8 m/s^2.
However, the acceleration can also increase, and a familiar example is driving your car.
Case 1. a(t) = 0
Suppose you are driving your car down the street at a constant velocity v(t) = 30 mi/hr, and so your acceleration is a(t) = 0.
Case 2. a(t) = c (constant number) != 0
Now, suppose you enter a zone in which the speed limit is 40 mi/hr, and you uniformly press down on the accelerator,
producing a constant, nonzero acceleration which gradually and uniformly increases your velocity to 40 mi/hr.
Case 3. a(t) = nonconstant function of time
Now, a key thing to notice for our striking application is that the acceleration may itself be increasing, and this occurs when you
"punch it" into passing gear; that is, instead of uniformly pressing down on the accelerator, you floor it. In this situation you
experience what is called the "jerk", which is a nonzero time rate of change of the acceleration itself.
Summarizing, if s(t) is the position as a function of time t, then
v(t) = s'(t), the velocity function, i.e. the instantaneous time rate of change of the position
a(t) = v'(t), the acceleration function, i.e. the instantaneous time rate of change of the velocity
j(t) = a'(t), the jerk function, i.e. the instantaneous time rate of change of the acceleration
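As a rough illustration of this chain of derivatives, the following Python sketch differentiates an arbitrary example position function numerically; it is not data from an actual strike.

```python
import numpy as np

# Numerical illustration of s -> v -> a -> j by repeated differentiation.
t = np.linspace(0.0, 2.0, 2001)   # seconds
s = 0.5 * t**3                    # example position with a nonzero jerk

v = np.gradient(s, t)             # velocity     = ds/dt
a = np.gradient(v, t)             # acceleration = dv/dt
j = np.gradient(a, t)             # jerk         = da/dt

print(v[1000], a[1000], j[1000])  # ~1.5, ~3.0, ~3.0 at t = 1 s
```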
F = m*a(t): if we look at this simplified case of Newton's 2nd Law in which the jerk is nonzero and positive, then the time rate of change of the acceleration is positive, hence the acceleration is increasing, which means that the force is increasing. This is the interesting case that applies to the striking situation.
Energy analysis is of course good stuff, but it's not the only approach. One can start with the above, F = m*a(t) = m*(dv/dt), and approximate it for small delta t, yielding the impulse-momentum theorem:
F*(delta t) = delta (m*v) = m*(delta v), which says the impulse is equal to the change in momentum. So, we can use conservation of momentum instead of conservation of energy to compute F in newtons or pounds. Similar to:
http://www.fas.harvard.edu/%7Escdiroff/ ... eBlow.html
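As a hedged numerical illustration of the impulse-momentum estimate: the mass, speed change, and contact time below are guesses chosen only to show the arithmetic, not measured values for any particular strike.

```python
# Average force from the impulse-momentum theorem, F * dt = m * dv.
m = 3.0          # assumed effective striking mass, kg
dv = 7.0         # assumed change in speed at impact, m/s
dt = 0.01        # assumed contact time, s

F = m * dv / dt  # average force in newtons
print(F, "N,", F * 0.2248, "lb")   # 2100 N, roughly 472 lb
```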
Either approach allows us to compute a quantity at impact; however, neither by itself explains why one method of striking is "harder" or more effective than another. An analysis of the jerk that occurs before impact partially describes a key feature of the why. In order to do an experimental analysis we would need a very sophisticated, high-speed motion capture camera. This type of analysis is standard stuff for sports scientists, but is nontrivial even for the experts. Just because one may have only a high school or non-calculus-based understanding of physics and can't see how to do it doesn't mean that it can't be done.
Tony, BTW, this is classical mechanics, which we can apply using simplifying assumptions and given that the strike has been made. The probabilistic situation you are interested in, in which evasion occurs, and which isn't really random, would require a statistical mechanics type approach, and yes, the math would be different.
Catalog of Earth Satellite Orbits
Just as different seats in a theater provide different perspectives on a performance, different Earth orbits give satellites varying perspectives, each valuable for different reasons. Some seem to hover over a single spot, providing a constant view of one face of the Earth, while others circle the planet, zipping over many different places in a day.
There are essentially three types of Earth orbits: high Earth orbit, medium Earth orbit, and low Earth orbit. Many weather and some communications satellites tend to have a high Earth orbit, farthest away from the surface. Satellites that orbit in a medium (mid) Earth orbit include navigation and specialty satellites, designed to monitor a particular region. Most scientific satellites, including NASA’s Earth Observing System fleet, have a low Earth orbit.
The height of the orbit, or distance between the satellite and Earth’s surface, determines how quickly the satellite moves around the Earth. An Earth-orbiting satellite’s motion is mostly controlled by Earth’s gravity. As satellites get closer to Earth, the pull of gravity gets stronger, and the satellite moves more quickly. NASA’s Aqua satellite, for example, requires about 99 minutes to orbit the Earth at about 705 kilometers up, while a weather satellite about 36,000 kilometers from Earth’s surface takes 23 hours, 56 minutes, and 4 seconds to complete an orbit. At 384,403 kilometers from the center of the Earth, the Moon completes a single orbit in 28 days.
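The altitude-period relationship quoted above follows from Kepler's third law for a circular orbit, T = 2*pi*sqrt(a^3/mu). The short Python sketch below uses the standard values for Earth's gravitational parameter and mean radius, and takes roughly 35,786 km as the geostationary altitude (an assumption for the "about 36,000 kilometers" in the text); it reproduces both periods mentioned.

```python
import math

MU_EARTH = 3.986e14          # m^3/s^2, Earth's gravitational parameter
R_EARTH = 6.371e6            # m, mean Earth radius

def period_minutes(altitude_km):
    a = R_EARTH + altitude_km * 1e3                    # orbital radius in metres
    return 2 * math.pi * math.sqrt(a**3 / MU_EARTH) / 60

print(period_minutes(705))      # ~98.7 min, close to Aqua's ~99 minutes
print(period_minutes(35786))    # ~1436 min, i.e. about 23 h 56 min
```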
Changing a satellite’s height will also change its orbital speed. This introduces a strange paradox. If a satellite operator wants to increase the satellite’s orbital speed, he can’t simply fire the thrusters to accelerate the satellite. Doing so would boost the orbit (increase the altitude), which would slow the orbital speed. Instead, he must fire the thrusters in a direction opposite to the satellite’s forward motion, an action that on the ground would slow a moving vehicle. This change will push the satellite into a lower orbit, which will increase its forward velocity.
In addition to height, eccentricity and inclination also shape a satellite’s orbit. Eccentricity refers to the shape of the orbit. A satellite with a low eccentricity orbit moves in a near circle around the Earth. An eccentric orbit is elliptical, with the satellite’s distance from Earth changing depending on where it is in its orbit.
Inclination is the angle of the orbit in relation to Earth’s equator. A satellite that orbits directly above the equator has zero inclination. If a satellite orbits from the north pole (geographic, not magnetic) to the south pole, its inclination is 90 degrees.
Together, the satellite's height, eccentricity, and inclination determine the satellite's path and what view it will have of Earth.
Abundance in oceans
ppb by weight
These data reflect the average composition of ocean water. Values vary slightly with location and depth. Units are parts per billion by weight.
Data given in different sources vary somewhat, reflecting the difficulty in assessing these numbers. Values given here are estimates of the average composition of ocean water and are derived by a consensus and averaging process for data abstracted from references 1-5. Values for the more rare elements are probably accurate to within an order of magnitude. Values in any particular location may be very different from those given here.
The units used in WebElements for all abundance data are ppb by weight, which means parts per billion by weight, that is, mg per tonne or mg per 1000 kg. All abundance data are also presented as ppb by atoms, which means atoms of the element per billion atoms.
The reason for rescaling all data is as follows. It is common to see, say, solar abundances expressed as the number of atoms of the element relative to a scale upon which the abundance of hydrogen is defined as 1012. This makes comparison with, say, crustal abundances difficult, since crustal abundances are often expressed in terms of parts per million by weight. Hence a common scale is used throughout and I chose ppb as this gives manageable numbers for most elements.
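As a rough illustration of how the two unit systems relate, the Python sketch below converts ppb by weight to ppb by atoms for a trace element in seawater. It assumes a mean molar mass of about 6 g/mol per atom of seawater (water's 18 g/mol spread over 3 atoms); that approximation is ours and is not necessarily the exact procedure behind the tabulated data.

```python
# Approximate conversion for a trace element dissolved in seawater.
MEAN_MOLAR_MASS_PER_ATOM = 6.0   # g/mol per atom, assumed for seawater

def ppb_weight_to_ppb_atoms(ppb_weight, element_molar_mass):
    # atom fraction = mass fraction * (mean mass per atom / element molar mass)
    return ppb_weight * MEAN_MOLAR_MASS_PER_ATOM / element_molar_mass

# Example: an element at 3.3 ppb by weight with molar mass 238 g/mol
print(ppb_weight_to_ppb_atoms(3.3, 238.0))   # ~0.083 ppb by atoms
```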
This pie chart shows the relative likelihood of observing particular other species commonly observed near Salmo trutta fario
These species are those which most commonly occur in our observation database near Salmo trutta fario. Observations favor some phyla over others. Typically Bacteria, Fungi, Protozoa, and Arthropods are more common in the field than in our records.
Northeast Atlantic: southward to southern Norway; Iceland; southern Greenland. Non-migratory and land-locked relict populations south to the British Isles and central France. Reported from Greece and Estonia. Elsewhere circumpolar. Likely to benefit from environmental regulation passed in France on 8/12/88. Considered a synonym of Salmo trutta trutta by Kottelat.
Repopulation of stocks is usual in Europe. The original population still exists on the island of Corse in the Mediterranean Sea. Often found in fast-flowing streams of mountain and sub-mountainous regions and sometimes even valleys.
Fresh water, brackish water, saltwater. Demersal.
In sections below, we make some habitat inferences based on the known habitat preferences of those species most commonly associated with Salmo trutta fario.
alpine, circumboreal, montane, subalpine, temperate.
alpine meadows, boreal forest, coniferous forests, croplands, cultivated areas, desert, disturbed sites, fields, forests, gardens, grasslands, meadows, open forests, pasture, pine forests, plantations, steppes, subalpine meadows, swamp forests, temperate forest, thickets, tundra grassland.
arable land, flood plains, hillsides, mountain slopes, plantations, roadsides, sand dunes, streamsides, valleys.
clay, gypsum, limestone, loam, sandy areas, sandy soil, stony areas, thin soil.
along rivers, bays, bogs, brackish water, ditches, dry areas, fens, flood plains, lakes, marshes, mesic areas, ponds, river banks, rivers, shores, stream banks, streams, swamps, swampy areas, wet woods.
hillsides, rocky slopes.
Round tubs teem with live fish for sale since carp is the traditional centerpiece of the Christmas feast in much of central Europe. Oddly enough, when contained in these round tubs, the carp tend to align themselves with an invisible north-south line. One might assume this is because the fish are seeking Santa in his North Pole home, but scientists argue the behavior is the result of a previously unknown capacity to perceive geomagnetic fields.
This detection of the Earth’s magnetic poles is a well-studied phenomenon in birds and other migratory species. These animals travel long distances north and south every year based on a compass that scientists typically explain with the pull of the planet’s magnetic poles.
But no one had thought to explore this phenomenon in carp until the researchers noticed the curious behavior of fish at the market. The large numbers of carp in their pre-Christmas tubs provided a perfect set-up for a larger scientific analysis, which looked at over 14,000 carp from 25 different markets around the Czech Republic. The researchers found that factors such as light, noise, and onlookers didn’t seem to have an effect on the carp’s orientation—regardless of those, the fish tended to line up as if on cue. Geomagnetism was the most likely force, they concluded, and this is the first time such a response has been recorded in carp.
Images courtesy of visivastudio via Shutterstock (left) and Vlastimil Hart et al. (right).
Here’s the latest lesson from the ant world: kidnapped babysitters may not be the most reliable. Evolutionary myrmecologist Susanne Foitzik studies a species of ants that, instead of using its own workers to raise its young, kidnaps larvae from another species and puts them to work as babysitters. But, she’s found, the free labor has a price.
The folks at Backyard Brains, a DIY-neurobiology project, made these pigment-producing cells in a dead squid pulse to the base beats of Cypress Hill’s “Insane in the Brain.” Go watch that thing right now.
Done? Wowed? Prepare to be more wowed: They did it by exploiting the fact that electrical current is key to both the actions of cells and the playing of mp3s. These pigmented cells, called chromatophores, are surrounded by muscle cells, and it's by flexing these muscles that the squid reveals its colorful spots. By hooking up the nerve that sends the flexing orders to the wire of a set of earbuds, they got these amazing results.
Phallostethus cuulong was swimming quietly in Vietnam’s Mekong River, minding its own business, when humans discovered the fish in 2009. And now that researchers have described P. cuulong [pdf], we can’t help violating its privacy by gazing unabashed at its most interesting feature. That feature sits on the throat in the form of a priapium, an organ with as many parts as a Swiss Army knife, most of which contribute to a single function: making as many babies as possible.
My sister, a medical student who has worked in a pathology lab, recently mentioned in passing that specific strains of bacteria, grown in an incubator, can have some pretty unusual smells. When I asked what she meant, she drew me this table (on some handy Discover stationery).
Now, I’ve grown plenty of yeast in my day, and they just smell like gym socks. Maybe, if you get some wild ones in there, like gym shorts (I’ve never enjoyed fancy beer made with wild yeast. Too redolent of crotch).
This level of olfactory whimsy, then, was totally new to me: Pseudomonas aeruginosa smells like flowers? Streptococcus milleri smells of browned butter? Clostridium difficile, scourge of elderly intestines, bringer of fecal transplants, smells like horse poo? I'll confess, I never quite thought about what happens when you get millions of a single kind of bacteria all together in one place and take a nice long sniff. I did not think it would ever be pleasant. I was wrong.
Robots like this? That’s nuts.
If mechanical engineer David Hu ruled the world, it would be crawling with robots based on mosquitoes, snakes, and Mexican jumping beans. Hu's lab studies animal locomotion, but the research goes beyond the traditional slow-motion footage of creatures running. Instead, Hu examines topics like how water striders and rafts of ants stay afloat on water's surface, the mechanics of giant pumpkins collapsing into amorphous blobs under their own weight, how snakes' scales affect their slither, the optimal way for furry animals to shake off water, and how mosquitoes survive collisions with comparatively huge raindrops. His group has even analyzed the motion of Mexican jumping beans, which is due not to some inherent magic in the "beans," but rather to temperature-sensing moth larvae in hollow seeds. (When the ground heats up, the larvae sense the change in temperature and make their seedy houses twitch into rolling movements towards cooler, shadier ground.) These topics are weird and interesting enough to have garnered Hu's work plenty of media coverage. But when it comes to earning funding, "weird and interesting" doesn't always cut it. What's the practical purpose of this research? Instead of shrugging and saying, "Now we know how mosquitoes struggle out from water droplets 50 times their size! That's pretty cool!" Hu has come up with a standard one-size-fits-all application. At the end of his papers, he adds that whatever wacky phenomenon he studied could inspire…robots!
If these fossilized turtles had a final thought, it was probably, “If you’ve gotta go, go out with a bang!” New evidence suggests that the ancient reptiles died while mating and were preserved in their final embrace.
Germany’s Messel Pit Fossil Site contains black oil shale that has preserved even the soft tissues of tens of thousands of 47-million-year-old fossils. Among them, the only ones found in pairs were nine sets of coupled carettochelyid turtles, and although previous research speculated that the reptiles were copulating, there was no proof until now. German researchers discovered that the turtles were all in male-female pairs (in the above image, the larger fossil on the left is the female), and that their tails were aligned, a position that indicates the close contact of a mating stance.
Adélie penguin chicks chase an adult.
Penguins are undeniably adorable. What other animal waddles around in a little tuxedo? But don’t let that cute exterior fool you: on a 1910–1913 Antarctic expedition, surgeon and zoologist George Levick bore witness to some surprising sexual behaviors of Adélie penguins, including coerced sex and necrophilia. In fact, the paper he wrote on the penguins’ sexual habits was considered too explicit to be published during the Edwardian era, and has only recently been rediscovered after spending almost a century hidden away in the Natural History Museum at Tring.
Levick travelled to Antarctica with Captain Robert Scott’s 1910 Terra Nova expedition, where he spent 12 weeks in the world’s largest Adélie penguin colony at Cape Adare, observing the birds, taking photographs, and even collecting nine penguin skins. After his return, Levick used his daily zoological notes as source material for two published penguin studies, one for the general public and a more scientific one to be included in the expedition’s official report. Intriguingly, this second account includes vague references to “’hooligan’ cocks” preying on chicks. Levick merely writes, “The crimes which they commit are such as to find no place in this book, but it is interesting indeed to note that, when nature intends them to find employment, these birds, like men, degenerate in idleness.”
Now, modern-day researchers have discovered that Levick did in fact describe the hooligans’ crimes in the paper, “The sexual habits of the Adélie penguin.” This paper was expunged from his official account, probably because it was too disturbing for Edwardian mores. Levick himself covered some explicit passages of his personal notes with coded versions, rewritten in the Greek alphabet and pasted over the original entries. Although the paper was withheld from the official record, researchers at the Natural History Museum did preserve it in pamphlet form, printing 100 copies labeled, “Not for Publication.” But most of the originals have been lost or destroyed, and no later works on Adélie penguins cited this paper until recently, when researchers in the Bird Group at the National History Museum at Tring discovered one of the original pamphlets in their reprints section.
An album of songs inspired by animals doesn’t sound immediately promising. It brings to mind certain cassette tapes from my youth, featuring bearded men singing earnest ballads about the banana slug (not a joke; I actually had that tape).
But Songs for Unusual Creatures, by composer Michael Hearst, is a beast of a different color. If you popped it into your player without any context at all, you’d hear a catchy, rhythmic cross between classical music and jazz, threaded with eerie theremin solos and digeridoo bass lines. It’s the kind of music you might play on endless loop while you study, work out, or write (ahem). Lots of syncopation and kooky instruments, as well as clear melodies, keep the sonic landscape interesting. (You can see Hearst perform one of the songs above.)
But it’s not just pretty sounds. Each track on the CD draws its inspiration from one of 15 unusual creatures, the kind of evolution-honed weirdoes that readers of this blog and science writers like myself enjoy so much, like the blue-footed booby, the Chinese giant salamander, the honey badger, and the humpback anglerfish. Each of these animals is profoundly odd—the tardigrade (track 11), for example, is one of the few creatures that can survive the vacuum of space—and their eponymous songs are also distinctively strange. “Dugong,” about the cigar-shaped, seagrass-grazing marine mammals, is a spacey, blue little tune. “Tardigrade” sounds like the love child of a Gypsy circus band and a jazz quartet.
Don’t mess with this.
Folks, those turtle-shaped sandboxes are not just a consumerist fantasy: Carbonemys cofrinii is an extinct turtle with a 10-inch skull and, more impressively, a shell that rounds out to five feet, seven inches in length. That really is big enough to dig around in. That’s also the same height as the grad student who found the 60-million-year-old fossil in a Colombian mine.
The turtle was so big that it probably drove off other turtle competitors and dominated the lake by itself, scientists say. They think that C. cofrinii preyed on mollusks and small reptiles, like the one depicted in this artist's interpretation. If we're going to be spending time in the belly of a turtle, though, we'd personally prefer it to be full of sand and toys rather than chewed up food.
This post is going to be short and sweet, because the point is very simple: If you use a functional programming language (and, if you want to learn to think outside of the Delphi box, you should), then you will be using garbage collection.
There are many reasons why this is the case. Garbage collection is a core feature of functional programming, because functional programming features do not work well with the semantics of explicit memory release. It is telling that language implementors have found it easier and more correct to create real-time garbage collectors than to create and work with non-garbage-collected variants of functional languages.
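A minimal sketch of why this is so: a closure keeps its captured environment alive for as long as the function object is reachable, so the language must reclaim that environment automatically rather than rely on an explicit release tied to a scope. The Python example below is illustrative only and is not taken from any of the languages discussed.

```python
# A closure's captured variable must outlive the call that created it,
# so some form of automatic memory management has to clean it up later.
def make_counter():
    count = 0
    def increment():
        nonlocal count
        count += 1
        return count
    return increment          # 'count' survives after make_counter returns

counter = make_counter()
print(counter(), counter(), counter())   # 1 2 3
```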
I was unable to find a single example of a truly functional language which did not use garbage collection in any standard version. Haskell, OCaml, Common Lisp, Erlang, ML, Scheme, etc., are all garbage collected. Moreover, functional features are commonly added to languages like C#, Ruby, Python, ECMAScript, and others, which weren't designed as functional languages, but happen to be garbage collected, whereas languages like Delphi/Win32 get fewer such features, despite their continuing evolution. The FC++ library, which adds some functional programming features to C++, uses a reference-counting method of garbage collection internally, to its great benefit. The authors note:
For instance, compared to Läufer’s approach, we achieve an equally safe but more efficient implementation of the basic framework for higher order functions. This is done by allowing function objects to be multiply referenced (aliased), albeit only through garbage collected “pointers”. The difference in performance is substantial: compared to Läufer’s framework, and running with the same client code (the main example implemented by Läufer) we achieve a 4- to 8-fold speedup.
The D programming language is designed to combine the power of C++ with the productivity and contemporary features of Ruby and Python. It supports function literals, closures, and lazy evaluation. It doesn't use a runtime, or a JIT compiler, but it uses garbage collection for heap management.
Pub. date: 2010 | Online Pub. Date: December 16, 2009 | DOI: 10.4135/9781412972000 | Print ISBN: 9781412940818 | Online ISBN: 9781412972000 | Publisher: SAGE Publications, Inc.
Animal Color Vision
Gerald H. Jacobs
Under daylight conditions, most humans experience a richly colorful world in which objects appear to maintain consistent color appearances—such as green grass or blue sky. That familiar association makes it natural to believe that color is an inherent property of objects and lights. Although at first glance this idea seems reasonable, it is wrong. Color is actually a feature of our experience that is constructed from the overall pattern of illumination reaching the eye at any moment as subsequently analyzed and conditioned by the particular details of the organization of the eye and the visual system. Eyes and visual systems show great variation across the animal kingdom, so it is hardly surprising that other animals may experience color in ways that are strikingly different from those familiar to humans. This entry describes how and why color vision varies among the animals. Light reaches the eye directly from illuminants, such as ...
Dangling ends are nucleotides that stack on the ends of helices. In secondary structures, they occur in multibranch and exterior loops. They occur as either 5' dangling ends (an unpaired nucleotide 5' to the helix end) or 3' dangling ends (an unpaired nucleotide 3' to the helix end). In RNA, 3' dangling ends are generally more stabilizing than 5' dangling ends. Note that if a helix end is extended on both the 5' and 3' strands, then a terminal mismatch exists (not the sum of 5' and 3' dangling ends).
Case Study: Students Investigate the Impact of Cities on Temperature
Worldwide Surface Temperature Field Campaign
On the coldest day of the winter of 2000, a group of dedicated, excited high school students congregated at a research arboretum to meet Dr. Kevin Czajkowski, a remote sensing scientist and meteorology professor from The University of Toledo. The professor had invited the students to participate in a project researching global climate change science and the urban heat island effect. The ensuing program built a relationship between research scientists, students, and teachers that has lasted to this day, 10 years later. The ongoing relationship is maintained to the mutual benefit of all involved. Teachers gain professional development in cutting edge technology; students benefit by actively participating in research in real world science problems; and the research scientists benefit by increasing their data collection capabilities.
From 2006 to 2008, K-12 students from Ohio worked with students from around the world to study the impact of snow and ice on climate and the relationship between land cover and the urban heat island effect. They participated in the GLOBE Program Surface Temperature Research Project directed by the University of Toledo and OhioView. Thousands of students worldwide collected surface temperature data, recorded cloud type and percent cover, and took snow depth measurements during the school day for two weeks each December in a coordinated field campaign. The students collected Earth surface temperature data around their schools using a handheld infrared thermometer in an effort to understand the way in which Earth's temperature is affected by land cover. This broad collection of data is particularly important in the study of the urban heat island effect, which suggests that due to changes in land cover type from natural to man-made, cities of significant size capture and retain additional heat. The goal of this chapter is to demonstrate how the GLOBE student data collected during the field campaign supports the hypothesis of the urban heat island effect in larger cities such as Toledo.Read more about the Surface Temperature Field Campaigns on the GLOBE Program Web site. | <urn:uuid:01de3f2e-688b-4b2d-9483-68a35b66245c> | 3.640625 | 401 | Academic Writing | Science & Tech. | 24.203346 |
Every time I leave the nearby city of Grenoble, to return to Choranche, I drive alongside a vast scientific research zone, snuggled in the northern tip of the big triangle located between the two great waterways known as the Snake and the Dragon: that's to say, the Isère and the Drac. (The latter looks and behaves like a normal stream, but it's actually an Alpine torrent.)
This zone houses two extraordinary research tools, whose construction was financed by a consortium of nations:
— The ILL [Institut Laue-Langevin] is a nuclear reactor that produces neutrons. This research reactor produces the most intense neutron flux in the world. Its thermal power is over 58 megawatts. By comparison, Australia's recently-inaugurated Opal reactor, which is also designed to produce neutrons for research, has a power output of only 20 megawatts. Grenoble's ILL reactor is funded by France, Germany, the UK, Spain, Switzerland, Austria, Russia, Italy, the Czech Republic, Sweden, Hungary, Belgium and Poland.
— The ESRF [European Synchrotron Radiation Facility] is a giant ring-shaped tunnel in which electrons are accelerated to produce intense X-ray beams. Grenoble's accelerator, which is one of the three biggest synchrotrons in the world (the others being in the US and Japan), is funded by France, Germany, Italy, the United Kingdom, Spain, Switzerland, Belgium, the Netherlands, Denmark, Finland, Norway, Sweden, Portugal, Israel, Austria, Poland, the Czech Republic and Hungary.
If I've listed all the nations whose scientists use these tools, it's to give you an idea of the kind of international atmosphere that reigns in the great provincial city of Grenoble, which has always been a major center of learning.
The two facilities lie side-by-side. In the above photo, you can see the circular dome of the ILL reactor just behind the big ring of the synchrotron. To a certain extent, they might be considered as complementary tools, since beams of neutrons and high-energy X-rays can both be used to analyze the physical nature of targets that are placed in their way. The differences between neutrons and X-rays are illustrated in the following radiographs:
I was reminded of Grenoble's extraordinary scientific research facilities a few days ago. In his book called Programming the Universe [click here to see my previous article on this theme], Seth Lloyd tells us that he had been thrown into a stupor when told that, "not only was an electron allowed to be in many places at the same time, it was in fact required to be there (and there, and there, and there)". He couldn't seize this weird conclusion in a totally intuitive fashion, so he remained in a state of intellectual trance. It was not until years later, when Seth Lloyd happened to be working at the ILL in Grenoble, that the American researcher finally saw the light, as described here: "I awoke from my trance. Neutrons, I saw, had to spin clockwise and counterclockwise at the same time. They had no choice: it was in their nature. The language that neutrons spoke was not the ordinary language of yes or no, it was yes and no at once. If I wanted to talk to neutrons and have them talk back, I had to listen when they said yes and no at the same time. If this sounds confusing, it is. But I had finally learned my first words in the quantum language of love."
In the context of Lloyd's fascinating book, I got a kick out of hearing him say that an arrow from a quantum Cupid [a Qupid?] had finally hit him while he was working in the capital of the French Alps. Over the last 14 years, I've visited Grenoble on countless occasions. But I still find that I'm overcome by a tingling sensation of excitement whenever I set foot there. I don't know whether it has anything to do with Lloyd's "quantum language of love". Often, I've imagined that some kind of tellurian energy is accumulated in the celebrated mountains which, as Stendhal once said, can be glimpsed at the end of every street in this fabulous city at the heart of the ancient Dauphiné province. | <urn:uuid:f9d699a2-eb55-42db-beae-f60ab6c0a66f> | 3.109375 | 903 | Personal Blog | Science & Tech. | 46.650357 |
Nothing bad will happen to the Earth in 2012. Our planet has been getting along just fine for more than 4 billion years, and credible scientists worldwide know of no threat associated with 2012.
For any claims of disaster or dramatic changes in 2012, where is the science? Where is the evidence? There is none, and for all the fictional assertions, whether they are made in books, movies, documentaries or over the Internet, we cannot change that simple fact. There is no credible evidence for any of the assertions made in support of unusual events taking place in December 2012.
The story started with claims that Nibiru, a supposed planet discovered by the Sumerians, is headed toward Earth. This catastrophe was initially predicted for May 2003, but when nothing happened the doomsday date was moved forward to December 2012. Then these two fables were linked to the end of one of the cycles in the ancient Mayan calendar at the winter solstice in 2012 -- hence the predicted doomsday date of December 21, 2012.
Nibiru and other stories about wayward planets are an Internet hoax. There is no factual basis for these claims. If Nibiru or Planet X were real and headed for an encounter with the Earth in 2012, astronomers would have been tracking it for at least the past decade, and it would be visible by now to the naked eye. Obviously, it does not exist. Eris is real, but it is a dwarf planet similar to Pluto that will remain in the outer solar system; the closest it can come to Earth is about 4 billion miles.
Just as the calendar you have on your kitchen wall does not cease to exist after December 31, the Mayan calendar does not cease to exist on December 21, 2012. This date is the end of the Mayan long-count period but then -- just as your calendar begins again on January 1 -- another long-count period begins for the Mayan calendar.
There are no planetary alignments in the next few decades, Earth will not cross the galactic plane in 2012, and even if these alignments were to occur, their effects on the Earth would be negligible. Each December the Earth and sun align with the approximate center of the Milky Way Galaxy but that is an annual event of no consequence.
Solar activity has a regular cycle, with peaks approximately every 11 years. Near these activity peaks, solar flares can cause some interruption of satellite communications, although engineers are learning how to build electronics that are protected against most solar storms. But there is no special risk associated with 2012. The next solar maximum will occur in the 2012-2014 time frame and is predicted to be an average solar cycle, no different than previous cycles throughout history. | <urn:uuid:cad2e919-119d-489f-bd7c-802b9ba7c34f> | 3.3125 | 544 | Knowledge Article | Science & Tech. | 50.146706 |
Mission Type: Impact
Launch Vehicle: 8K78 (no. L1-7)
Launch Site: Tyuratam (Baikonur Cosmodrome), NIIP-5 / launch site 1, USSR
Spacecraft Mass: c. 645 kg
Spacecraft Instruments: 1) three-component magnetometer; 2) variometer and 3) charged-particle traps
Deep Space Chronicle: A Chronology of Deep Space and Planetary Probes 1958-2000, Monographs in Aerospace History No. 24, by Asif A. Siddiqi
National Space Science Data Center, http://nssdc.gsfc.nasa.gov/
This mission was the first attempt to send a spacecraft to Venus. Original intentions had been to send the 1V spacecraft to take pictures of the Venusian surface, but this proved to be far too ambitious a goal. Engineers instead downgraded the mission and used the 1VA spacecraft for a simple Venus atmospheric entry. The 1VA was essentially a modified 1M spacecraft used for Martian exploration.
The spacecraft contained a small globe containing various souvenirs and medals commemorating the mission. This flight was also the first occasion on which the Soviets used an intermediate Earth orbit to launch a spacecraft into interplanetary space.
Although the booster successfully placed the probe into Earth orbit, the fourth stage (the Blok L) never fired to send the spacecraft to Venus. A subsequent investigation showed that there had been a failure in the PT-200 DC transformer that ensured power supply to the Blok L guidance system. The system had evidently not been designed to work in a vacuum. The spacecraft + upper-stage stack reentered Earth's atmosphere on 26 February 1961. The Soviets announced the total weight of the combination as 6,483 kilograms. | <urn:uuid:130e80bc-9ac9-4ccb-aed0-5fadf9d6407a> | 3.4375 | 365 | Knowledge Article | Science & Tech. | 50.630577 |
Lesson Plans > Fahrenheit & Celsius Conversion
Convert between Fahrenheit and Celsius temperatures (may not work with some older browsers).
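The conversion that tool performs follows the standard formulas; a minimal Python sketch of the same arithmetic (the function names are my own, not part of the lesson) is:

```python
def fahrenheit_to_celsius(f):
    """Convert degrees Fahrenheit to degrees Celsius."""
    return (f - 32) * 5.0 / 9.0

def celsius_to_fahrenheit(c):
    """Convert degrees Celsius to degrees Fahrenheit."""
    return c * 9.0 / 5.0 + 32

print(fahrenheit_to_celsius(212))   # 100.0, the boiling point of water
print(celsius_to_fahrenheit(100))   # 212.0
```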
Objective: Students will learn how latitude affects weather patterns.
Resources: Teacher: World globe. Students: paper and pencil.
Teacher Preparation: This lesson can follow an introductory lesson on latitude and longitude.
- Have students locate on the globe the latitude for where they live.
- Using the globe, have them find other countries, either north or south, with the same latitude.
- Have students research another country's weather, using encyclopedias.
- Students write a paper describing similarities and differences between their own weather and the other country's weather from the same latitude.
Variations/Options: Using the Internet, students could contact other students to discuss weather.
March is also the time to start thinking about kites, and you're probably thinking they're something you'd like to make, but don't know how. In addition to seeing the sites below, remember that kite clubs abound, and that a big business has popped up in the last few years for high-performance, fairly pricey stunt kites. These clubs or businesses may be willing to put on a demonstration for your school or class.
Kids - 20 Kites - 20 Minutes gives a list of materials, directions and a diagram for constructing a kite in the classroom.
Olson Kites Activity is another example of an activity allowing students to construct a kite using a paper bag and tissue paper streamers.
Kite Using Straws is more challenging to make than the other two examples. It is constructed of drinking straws, lightweight multi-filament nylon, hat elastic and shopping bags. (Note from Lee: After seeing examples of this kite constructed by Alexander Graham Bell, as a high school student I built tetrahedral kites using drinking straws, string, and actually stapled newspaper around the straws to cover it. My version was heavy to say the least, but flew very well in strong Kansas winds, and also was a good learning experience about the integrity of the triangle.) | <urn:uuid:173a7850-9894-4620-a323-855ed9938f01> | 4.125 | 459 | Tutorial | Science & Tech. | 42.612082 |
Serial Dilutions Made Easy
Author: Jan Hilten and Carol Sanders
Woodrow Wilson Biology Institute
Many areas of science use serial dilutions in the preparations for different experiments. This exercise is presented as an aid for the teacher in helping his/her students improve their skills and more quickly understand the particular application for which serial dilutions are a tool. Serial dilutions are often used in microbiology, biotechnology, and in chemistry classes, to name just a few. Therefore a clear, concise, and nonthreatening approach to the learning of this very important concept is essential.
Serial dilutions are usually made in increments of 1000, 100 or 10. The concentration of the original solution and the desired concentration will determine how great the dilutions need to be and how many dilutions are required. Important also is the total volume of solution needed. If only small quantities of solutions are needed then greater numbers of dilutions are necessary.
The most common examples deal with the concentration of cells or organisms, or the concentration of a solute. The approximate starting concentration should be known before the appropriate number and size of dilutions can be chosen. To reach the desired concentration, use a series of smaller dilutions rather than one big dilution. This method is not only cost effective but also allows small aliquots to be diluted instead of unnecessarily large quantities of materials.
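The arithmetic behind a single dilution step, and behind a chain of steps, can be summarized by the standard relations below (stated here for reference, not taken from the exercise itself):

```latex
C_1 V_1 = C_2 V_2
\qquad\text{and, for a chain of steps,}\qquad
\frac{C_{\mathrm{final}}}{C_{\mathrm{initial}}}
  = \prod_i \frac{V_{\mathrm{sample},\,i}}{V_{\mathrm{total},\,i}}
```

For example, transferring 10 µL into 990 µL of diluent gives a factor of 10/1000 = 1/100 for that step.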
This technique involves the removal of a small amount of an original solution to another container, which is then brought up to the original volume using the required buffer or water. In the example below, if you have 1 mL of your original solution, and you remove 10 µL and place it in a tube containing 990 µL of water or media, you have made a 1:100 dilution. If the original solution contained 5 x 10^8 organisms or cells/mL, we now have a concentration of 5 x 10^6 cells/mL, because we have simply divided our concentration by 100. Now, if we want to dilute this by a factor of 1:1000, we must remove 1 µL of the second solution and place it in a tube containing 999 µL of media. We have now diluted our secondary concentration by 1000, and would then divide our concentration by 1000 to yield a 5 x 10^3 cells/mL solution.
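A quick way to check this kind of bookkeeping is to multiply the per-step dilution factors. A small Python sketch follows; the function name is my own, and the example numbers simply repeat the worked example above:

```python
def diluted_concentration(initial_conc, steps):
    """Apply a series of dilution steps to a starting concentration.

    Each step is a (sample_volume, diluent_volume) pair in the same units;
    the dilution factor for a step is sample / (sample + diluent).
    """
    conc = initial_conc
    for sample, diluent in steps:
        conc *= sample / (sample + diluent)
    return conc

# 5 x 10^8 cells/mL diluted 1:100 (10 uL into 990 uL) and then 1:1000 (1 uL into 999 uL)
print(diluted_concentration(5e8, [(10, 990), (1, 999)]))   # 5000.0 cells/mL, i.e. 5 x 10^3
```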
- You are given a test tube containing 10 mL of a solution with 8.4 x 10^7 cells/mL. You are to produce a solution that contains less than 100 cells/mL. What dilutions must you perform in order to arrive at the desired result?
ANSWER: You should perform a series of three 1:100 dilutions to yield 84 cells/mL.
1 mL of original solution to 99 mL of water = 8.4 x 10^5 cells/mL.
1 mL of second solution to 99 mL of water = 8.4 x 10^3 cells/mL.
1 mL of third solution to 99 mL of water = 8.4 x 10^1 or 84 cells/mL.
- You have a microtube containing 1 mL of a solution with 4.3 x 10^4 cells/mL and you are to produce a solution that contains 43 cells/mL. What dilutions must you perform?
ANSWER: You could perform the following dilutions:
10 µL of original solution to 990 µL of water = 4.3 x 10^2 cells/mL.
100 µL of second solution to 900 µL of water = 4.3 x 10^1 or 43 cells/mL.
- You are given a container with 5 mL of a solution containing 5.1 x 10^3 cells/mL. You are to produce a solution that contains approximately 100 cells/mL.
ANSWER: You would perform the following dilutions:
0.5 mL of original solution to 4.5 mL of water = 5.1 x 10^2 cells/mL.
1 mL of second solution to 4 mL of water = 1.02 x 10^2 cells/mL, or 102 cells/mL.
- You are given a container of yeast cells for which Klett units have been determined on a Klett Summerson Colorimeter. The container contains a population whose concentration is 2.6 x 10^6 cells/mL. You are to prepare a suspension which, when you spread 1 mL of the suspension on appropriate media, will result in about 100 cells.
ANSWER: You would perform the following dilutions:
10 µL of original solution to 990 µL (or 1.0 mL) of sterile water = 2.6 x 10^4 cells/mL
10 µL of second solution to 990 µL (or 1.0 mL) of sterile water = 2.6 x 10^2 cells/mL
0.5 mL of third solution to 0.5 mL of sterile water = 1.3 x 10^2 or 130 cells/mL
0.77 mL of fourth solution to 0.23 mL of sterile water = 100 cells/mL
Note: Corrected 9 March 2005
Thomas R. Manney and Monta L. Manney. pp. 28-29.
Evelyn Morholt and Paul F. Brandwein pp. 458-460. Harcourt Brace Jovanovich, Inc.
David A. Micklos and Greg A. Freyer. pp. 244-245. Cold Springs Harbor | <urn:uuid:e9041cf8-bfda-4acf-859a-c6231c7c861a> | 4.0625 | 1,098 | Tutorial | Science & Tech. | 76.674 |
The Power of Symmetry
Why Beauty is Truth: A History of Symmetry. Ian Stewart. xiv + 290 pp. Basic Books, 2007. $26.95.
Symmetry is a fundamental concept pervading both science and culture. In popular terms, symmetry is often viewed as a kind of "balance," as when Doris Day's character in the 1951 movie On Moonlight Bay insists that if her beau kisses her on the right cheek, then he should kiss her on the left cheek too. But in mathematics, symmetry has been given a more precise meaning. In his new history of mathematical symmetry, Why Beauty Is Truth, Ian Stewart gives this definition: "A symmetry of some mathematical object is a transformation that preserves the object's structure." So a symmetrical structure looks the same before and after you do something to it. A butterfly looks the same as its mirror image. The (idealized) wheel of a car may look the same after being rotated on its axle by 90 degrees (or possibly by 72 or 120 degrees, depending on the particular design).
Although mathematical symmetry may bring to mind a regular polygon or other geometric pattern, its roots (pun unavoidable) lie in algebra, in the solutions to polynomial equations. Thus Stewart begins his account in ancient Babylon with the solution to quadratic equations. The familiar quadratic formula gives the two roots of the degree-two polynomial equation ax 2 + bx + c = 0. The Babylonians didn't have the algebraic notation to write down such a formula, but they had a recipe that was equivalent to it.
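For reference, the formula in question is the standard one: the roots of ax^2 + bx + c = 0 are

```latex
x = \frac{-b \pm \sqrt{b^{2} - 4ac}}{2a}, \qquad a \neq 0
```

It is built only from the coefficients, the four arithmetic operations and a square root, which is exactly the sense in which the quadratic is "solvable by radicals."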
By the 18th century, mathematics and mathematical notation had matured to the point of finding explicit formulas for the roots of general polynomials of degree three (cubic) and degree four (quartic). The formulas look complicated, but they are made of simple building blocks: addition, subtraction, multiplication, division and radicals—such as square roots, cube roots, fourth roots and so on. The existence of such formulas is summarized by the statement that all polynomials of degree four or less are solvable by radicals.
For polynomials of degree five (quintic) and higher, no such formula has been found, because none exists: Although some fifth-degree polynomials are solvable by radicals, other fifth-degree polynomials are not. This was proved by Niels Henrik Abel in 1823, although nearly correct proofs were proposed as far back as 1799. These days, the result is not considered surprising: The requirements for an equation to be "solvable by radicals" are very restrictive. With such a small vocabulary of operations, one would expect that most interesting numbers cannot be written in that form.
But Abel's proof was not the end of the story. A mystery remained: What distinguishes those polynomials of degree five that are solvable? The answer was determined in 1832 by Évariste Galois, who invented group theory to find it.
The official mathematical definition of a group does not explicitly mention symmetries; it concerns a set of elements that can be combined in pairs to form another element of the set, and certain axioms have to be satisfied. The precise definition of a group is not important for the purpose of this review; also, Why Beauty Is Truth does not provide one. Suffice it to say that the mathematical concept of a group captures the essence of symmetry in abstract terms. The focus is on the operation that reveals the symmetry. The collection of symmetries of any object is a group, and every group is the collection of symmetries of some object.
The symmetric objects of interest to mathematicians are not physical objects but mathematical entities. In Galois's study of polynomials, the object was the set of roots of the polynomial. For polynomials of degree five, this is just a list of five numbers. And for Galois, the operation was rearranging the list of roots.
Galois's great achievement was discovering that there are several different possibilities for which rearrangements of the set of roots can occur and that the specific collection of rearrangements determines whether or not the roots of the polynomial can be expressed as a simple formula involving square roots, cube roots and so on. Every algebraic equation has a symmetry group, called its Galois group, with an abstract structure that determines whether the roots of the polynomial can be expressed in terms of radicals. The Galois group is able to tell you that the result can be expressed as a finite formula involving radicals, but it does not actually provide the formula. Today there are computer programs that can compute the Galois group of a polynomial; they can also tell you a formula for the roots—if the polynomial is solvable by radicals.
The first half of Stewart's book takes us up to the time of Galois's work. After Galois, group theory became its own separate mathematical subject, and today only a small part of group theory concerns the Galois groups of polynomials. But there are still some major unsolved problems. It is not known, for example, whether every finite group can occur as the Galois group of a polynomial with integer coefficients.
Group theory is diverse. Stewart makes the case that symmetry, as captured by group theory, is all-pervasive and that much of our physical reality has group theory as its underlying explanation. Because he focuses on the subject as it applies to relativity and string theory, Stewart misses an opportunity to explain this connection in those areas of physics that most of his readers are likely to understand.
In 1915, Emmy Noether proved that all conservation laws arise from a symmetry of the physical system. For example, conservation of momentum is a consequence of the fact that the laws of physics are the same everywhere. That is, Newton's laws are invariant under translation. Physics students use this principle every time they analyze a collision by switching to the center-of-mass reference frame. Another example is conservation of energy, which is a consequence of the fact that Newton's laws look the same with time running in reverse. Stewart gives Noether's theorem short shrift: one sentence on page 165.
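As a one-line worked instance of Noether's result (standard textbook material, not taken from the book under review): for a particle with Lagrangian L(x, ẋ) that does not depend on position, the Euler-Lagrange equation immediately yields a conserved momentum,

```latex
\frac{\partial L}{\partial x} = 0
\;\Longrightarrow\;
\frac{d}{dt}\frac{\partial L}{\partial \dot{x}} = \frac{\partial L}{\partial x} = 0
\;\Longrightarrow\;
p \equiv \frac{\partial L}{\partial \dot{x}} \text{ is constant.}
```

Translation invariance of the laws gives conservation of momentum; the analogous calculation with time translation gives conservation of energy.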
Of course, modern physics provides wonderful examples of the power of symmetry. Special relativity is founded on the postulates that the laws of physics are the same in all inertial reference frames and the speed of light is the same for all observers. Both of those assumptions are symmetries in the mathematical sense of showing "invariance under an operation." Elementary particle physics and string theory rely on group theory more explicitly, with gauge groups such as SO(2) and SU(3). Stewart does not fully describe those groups in his book, although he does try to give a sense of the subject.
Why Beauty Is Truth is more about history than about mathematics, with a focus on the people who did the math. Stewart weaves in short biographies, which often begin with the mathematician's parents. We learn of their occupations and the career they planned for their child (usually not mathematics), and we also read about the mathematician's foibles and the quirks of his or her spouse. The various formulas, graphs and diagrams make it clear that this is a book about mathematics—not a math book. And that is exactly right: You won't really learn much math by reading it. It does, however, tell a reasonably entertaining story about the history of symmetry. | <urn:uuid:62d104d5-42a1-4c4b-8352-1de7fa03c0bd> | 3.5625 | 1,544 | Content Listing | Science & Tech. | 43.108552 |
UVa Astronomy News Picture Archive
This is a small extract from a huge mosaic of images of the deep universe. The original mosaic was assembled from hundreds of exposures taken with the new Wide Field Camera 3 (WFC3) and the Advanced Camera for Surveys (ACS) on the Hubble Space Telescope. More than 12 billion years of cosmic history are shown in this unprecedented, panoramic, full-color view of thousands of galaxies stretching back through most of the universe's history, covering a portion of the southern field of a large galaxy census called the Great Observatories Origins Deep Survey (GOODS). Such a detailed view of the universe has never before been assembled with this much color, clarity, accuracy, and depth. The closest galaxies, seen in the foreground, emitted their observed light about a billion years ago. The farthest galaxies, a few of the very faint red specks, are seen as they appeared more than 13 billion years ago, or roughly 650 million years after the Big Bang. Ultraviolet light taken by WFC3 shows the blue glow of hot, young stars in galaxies teeming with star birth. The orange light reveals the final buildup of massive galaxies about 8 billion to 10 billion years ago. The near-infrared light displays the red glow of very distant galaxies---in a few cases as far as 12 to 13 billion light-years away---whose light has been stretched, from ultraviolet light to longer-wavelength infrared light due to the expansion of the universe.
Credit: NASA, ESA, R. Windhorst, S. Cohen, M. Mechtley, and M. Rutkowski (Arizona State University, Tempe), R. O‘Connell (University of Virginia), P. McCarthy (Carnegie Observatories), N. Hathi (University of California, Riverside), R. Ryan (University of California, Davis), H. Yan (Ohio State University), and A. Koekemoer (Space Telescope Science Institute)
| <urn:uuid:d8556211-2d20-4d21-b794-216dbee92134> | 3.453125 | 413 | Truncated | Science & Tech. | 45.884474 |
maintenance of biological rhythm
The internal mechanism by which such a rhythmic phenomenon occurs and is maintained even in the absence of the apparent environmental stimulus is termed a biological clock. When an animal that functions according to such a clock is rapidly translocated to a geographic point where the environmental cycle is no longer synchronous with the animal’s cycle, the clock continues for a time to function...
study by Richter
...the faculty, both in 1921. In 1922 he became director of Johns Hopkins’ psychiatric clinic, a post that he held until becoming professor of psychobiology in 1957. He introduced the concept of the biological clock in a 1927 paper on the internal cycles that govern animals’ drinking, eating, running, and sexual behaviour. In studying the influence of learned behaviour on human biology, Richter...
| <urn:uuid:976cf377-eed2-4f65-9bbe-4ed21d0bcad8> | 3.3125 | 222 | Truncated | Science & Tech. | 48.998939 |
Facts about the Definition of the Element Zinc: The element Zinc is defined as a bluish-white, lustrous metallic element that is brittle at room temperature but malleable with heating. It is used to form a wide variety of alloys including brass, bronze, various solders, and nickel silver, in galvanizing iron and other metals, for electric fuses, anodes, and meter cases, and in roofing, gutters, and various household objects. A Zinc Reaction is a process in which Zinc is mixed with another substance and the two react to form something else.
Interesting Facts about the Origin and Meaning of the element name Zinc What are the origins of the word Zinc ? The name originates from the German word 'zin' meaning tin.
Facts about the Classification of the Element Zinc Zinc is classified as a "Transition Metal" which are located in Groups 3 - 12 of the Periodic Table. An Element classified as a Transition Metals is ductile, malleable, and able to conduct electricity and heat.
Brief Facts about the Discovery and History of the Element Zinc Zinc alloys have been used since ancient times by the Asians, Greeks, Chinese and Romans. Zinc was discovered by the chemist Andreas Marggraf in 1746. It was isolated two years earlier by Anton von Swab.
Occurrence of the element Zinc: Zinc is obtained from zinc blende & calamine. Zinc is the fourth most common metal in use.
Common Uses of Zinc: die castings for the automobile industry; forming a wide variety of alloys; galvanizing metals; electric fuses; anodes; rolled zinc as part of battery containers; zinc oxide in paints; zinc chloride as a deodorant and as a wood preservative; zinc sulfide in luminescents; medical use to treat rashes; meter cases; roofing; gutters; zinc phosphate.
The Properties of the Element Zinc
Name of Element: Zinc
Symbol of Element: Zn
Atomic Number of Zinc: 30
Atomic Mass: 65.39 amu
Melting Point: 419.58 °C (692.73 K)
Boiling Point: 907.0 °C (1180.15 K)
Number of Protons/Electrons in Zinc: 30
Number of Neutrons in Zinc: 35
Crystal Structure: Hexagonal
Density @ 293 K: 7.133 g/cm3
Color of Zinc: bluish-white
The element Zinc and the Periodic Table Find out more facts about Zinc on the Periodic Table which arranges every chemical element according to its atomic number, as based on the periodic law, so that chemical elements with similar properties are in the same column. Our Periodic Table is simple to use - just click on the symbol for Zinc for additional facts and info and for an instant comparison of the Atomic Weight, Melting Point, Boiling Point and Mass - G/cc of Zinc with any other element. An invaluable source for more interesting facts and information about the Zinc element and as a Chemistry reference guide.
Facts and Info about the element Zinc - IUPAC and the Modern Standardised Periodic Table: The Standardised Periodic Table in use today was agreed by the International Union of Pure and Applied Chemistry, IUPAC, in 1985, and includes the element Zinc. The famous Russian scientist, Dimitri Mendeleev, perceived the correct classification method of "the periodic table" for the 65 elements which were known in his time. The Standardised Periodic Table now recognises more periods and elements than Mendeleev knew in his day, all still fitting into his concept of the "Periodic Table", in which Zinc is just one element. | <urn:uuid:7a2a6cb1-547f-4185-bec1-4a63817c45f8> | 3.625 | 834 | Knowledge Article | Science & Tech. | 42.64742 |
The symbol ∞ is usually used in mathematics as a part of other symbols, as an indication of something that may be arbitrarily large. For example, the customary interval notation [a, ∞) refers to the set of all real numbers greater than or equal to the number a. That is, [a, ∞) is the set of numbers starting at a, and moving arbitrarily far to the right. Although we read the symbol "∞" by saying "infinity," we must be careful not to think of ∞ as a number.
Countably Infinite Sets
On the other hand, mathematicians frequently use the word "infinite" as an adjective, when describing the size, or "cardinality," of a set. Two sets A and B are said to have the same cardinality if there is a one-to-one correspondence between them. For example, the sets and have the same cardinality, since there is a very obvious one-to-one correspondence between them. A set A is called countably infinite if it has the same cardinality as the set of positive integers {1, 2, 3, ...}. That is, a set is countably infinite if it is possible to devise a systematic way of pointing to each of its elements in turn, and counting them: one, two, three, ... .
It is rather surprising that this can be done with the set of all rational numbers! Even though it appears that there are many more rational numbers than positive integers, it is nonetheless possible to devise a one-to-one correspondence between them. For simplicity, let's first consider the problem of devising a one-to-one correspondence between the positive integers and the set of positive rationals. (After we have done this, see if you can use this to devise a one-to-one correspondence between the positive integers and the set of all rational numbers.) Since our goal is effectively to count the positive rationals, let's make a systematic list of all of them. Recalling that a positive rational number can be expressed as a quotient of two positive integers, consider this listing:
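The listing itself is not reproduced in this excerpt; the standard construction arranges the fractions p/q in a grid and walks it diagonal by diagonal. A small Python sketch of that enumeration (the zig-zag by increasing p + q, with duplicates such as 2/4 skipped so each positive rational is counted exactly once):

```python
import itertools
from fractions import Fraction

def positive_rationals():
    """Enumerate every positive rational exactly once, walking the diagonals p + q = 2, 3, 4, ..."""
    seen = set()
    total = 2
    while True:
        for p in range(1, total):
            q = total - p
            r = Fraction(p, q)
            if r not in seen:        # skip duplicates such as 2/4, which equals 1/2
                seen.add(r)
                yield r
        total += 1

print(list(itertools.islice(positive_rationals(), 10)))
# [Fraction(1, 1), Fraction(1, 2), Fraction(2, 1), Fraction(1, 3), Fraction(3, 1),
#  Fraction(1, 4), Fraction(2, 3), Fraction(3, 2), Fraction(4, 1), Fraction(1, 5)]
```

Pairing the n-th rational produced with the positive integer n is exactly the one-to-one correspondence the text describes.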
What's Larger than Countably Infinite?
We have just seen that both the positive rationals and the set of all rational numbers are countably infinite. Is it true that every infinite set is countably infinite? That is, can every infinite set be put into one-to-one correspondence with the positive integers, or are some infinite sets "larger" than the set of positive integers in some sense? It turns out that there are indeed different "sizes" of infinite sets; not all infinite sets are of the same infinite "size"! An infinite set which cannot be put into one-to-one correspondence with the positive integers is, naturally enough, referred to as an uncountable set. A simple example of an uncountable set is the set of real numbers between 0 and 1, the unit interval [0, 1]. To see that [0, 1] is uncountable, we will use the technique of proof by contradiction, which was used in Vignette 2 to show the existence of infinitely many primes.
It is clear that [0, 1] is infinite, so let's suppose that [0, 1] were a countably infinite set. (If we show that this leads to a contradiction, then we will have proven that [0, 1] is uncountable.) Now the numbers in [0, 1] can all be expressed in decimal form, beginning with the whole number part 0. (Even the number 1 can be written this way, as 0.999... with the 9s repeating forever.) If the set of such numbers were countable, then we could list them in order of their correspondence with the positive integers:
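The list and the remainder of the argument are cut off in this excerpt; the standard completion is Cantor's diagonal construction, sketched here for completeness:

```latex
\text{Suppose } [0,1] = \{x_1, x_2, x_3, \dots\}, \quad x_n = 0.d_{n1}d_{n2}d_{n3}\dots
\qquad\text{Define } y = 0.e_1 e_2 e_3 \dots,\quad
e_n = \begin{cases} 5 & \text{if } d_{nn} \neq 5, \\ 6 & \text{if } d_{nn} = 5. \end{cases}
```

Then y lies in [0, 1] but differs from every x_n in the n-th decimal digit, so y is missing from the list, contradicting the assumption that the list contained every number in [0, 1]. Hence [0, 1] is uncountable.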
Is There More?
The branch of mathematics that deals with sets and their cardinality is known as set theory. Although Georg Cantor's discovery of the differences among the cardinalities of various infinite sets came as quite a shock to the mathematical community in the 1870's, it is now well understood that there is a whole new type of arithmetic based on the transfinite numbers, the cardinalities of infinite sets. | <urn:uuid:1d827838-e4cb-470c-9f6a-3756e6ba54f4> | 3.828125 | 763 | Knowledge Article | Science & Tech. | 41.989123 |
Do you enjoy walking through piles of autumn leaves and hearing the crunch underneath your feet? Those leaves you're stepping on might actually be home to a wide variety of plants and animals! Leaves, twigs and pieces of bark that have fallen to the ground make up leaf litter. Leaf litter is an important component of healthy soil. Decomposing leaf litter releases nutrients into the soil and also keeps it moist. It also serves as great nesting material, hiding places and protected spots for animals. This dead organic material provides the perfect habitat for a plethora of organisms, including worms, snails, spiders, and microscopic decomposers like fungi and bacteria. For this reason, leaf litter is considered very biodiverse.
"Biodiversity" is a concept that refers to the variety of different life forms, from the genetic level to the species level. Species diversity in particular is a subcategory of biodiversity that refers to the number of different species represented in a set. Where can biodiversity be seen in your everyday life? Just look outside your window! The different types of trees, flowers and insects are all examples of a biodiverse community.
Scientists use various mathematical formulas and indices to calculate biodiversity. The level of biodiversity in an ecosystem is believed to indicate how healthy and stable the ecosystem is. Generally, a higher biodiversity level indicates a healthy ecosystem that is capable of supporting life well. To learn more about biodiversity, you can examine leaf litter in your neighborhood!
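One index scientists commonly use is the Shannon diversity index. The activity below only asks you to count species, but if you want to turn your counts into a single number, a small Python sketch follows (the function name is my own and the species counts are made-up example data):

```python
import math

def shannon_index(counts):
    """Shannon diversity index H = -sum(p_i * ln(p_i)) over the species proportions p_i."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

# Hypothetical leaf-litter sample: 12 worms, 5 snails, 3 spiders, 1 beetle
print(round(shannon_index([12, 5, 3, 1]), 3))   # about 1.084
```

Higher values mean more species and a more even spread of individuals among them, which is one way of quantifying "high biodiversity."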
Most of the tiny animals that are found in leaf litter are invertebrates, meaning that they lack a backbone. Some of these little critters feed on the litter and break it up into smaller pieces. Microscopic organisms like bacteria and fungi then decompose the litter, converting it into beneficial chemicals and minerals that can be absorbed by plants.
Animals you may find living in leaf litter include slugs and snails, worms, animals with jointed legs (like millipedes and centipedes), spiders and beetles. The type and amount of organisms found varies with time of year. Some animals spend their entire lives in soil and leaf litter, while others are found there only at certain points in their lives. Some use the litter specifically for nesting or hibernating.
In this activity, you will be able to see the biodiversity levels of leaf litter in your neighborhood and how human activity has impacted these levels.
• A partner
• A trowel or short shovel
• Gloves (gardening gloves or winter gloves will do)
• One large tray (aluminum trays work well)
• One small tray
• Two magnifying glasses
• Meter stick
• Pencil and paper
• Field guide or identification key (optional)
• Locate a nearby public park or forest to conduct the activity. Acquire a map of the area.
• The best time to conduct this activity is in the autumn months.
• Travel to a nearby park or public forest. Locate an area that has been relatively undisturbed by human activity. There should be many fallen leaves, twigs and pieces of bark. Remember to have a partner with you at all times.
• Once you've found a suitable patch of leaf litter, measure out an area of one meter squared using the meter stick. Use the rope to build a rough frame. This will serve as a one-meter vegetation sampling frame. Have you seen any animals in your sampling area so far? How many types of animals do you think you will find?
• With the vegetation sampling frame on the ground, put on your gloves and use the trowel to collect all the leaf layer and soil within the frame (down to a depth of approximately two centimeters (cm) from the surface).
• Place all the leaf litter in the large tray. Using your fingers but keeping your gloves on, spread out the leaf litter so an even layer is created. Keep your eye out for little critters!
• With your magnifying glasses, you and your partner should examine the leaf litter for any worms, snails, spiders or other insects. Use your gloved fingers to gently sift through the litter.
• Using tweezers, gently place any animals found in the litter into the smaller tray. You should find specimens like snails, worms and spiders. Don't worry if you can't get all the insects.
• Using the magnifying glasses, examine the small animals. How many legs are there, if any? Do they have an external skeleton or a hard shell? Do their legs appear to be jointed? Scientists use questions like these to categorize animals into different groups.
• Look out for obvious differences like color, size, and shape to distinguish species. Record the number of different species you can see. If you see many different species, there is a high biodiversity. If there are only a few species present, there is a low biodiversity.
• Return all specimens and leaf litter material to where you originally found them. Disassemble your rope frame. Make sure you do not leave anything that you brought behind!
• Travel to another location with leaf litter. However, this time find an area that has more human interference. These are areas that people frequently use – places like walking and hiking trails.
• Reassemble the vegetation sampling frame and place on the ground.
• As with the previous location, gather the leaf litter and soil within the sampling frame (down to approximately two cm deep). With your gloved fingers, sort the leaf litter in the large tray. Transfer any specimens found into the smaller tray and examine them under the magnifying glass.
• Count and record the number of species you see in this set. If you see many different species, there is a high biodiversity. If there are only a few species present, there is a low biodiversity.
• Return all specimens and leaf litter material to where you originally found them. Again, make sure you do not leave anything you brought with you behind!
• Which area had the higher biodiversity level – the one with little human interference or the one with much more? What types of animals did you find? Remember that organisms like fungi and bacteria also live in leaf litter, and these organisms cannot be seen without a microscope.
• Extra: Instead of just counting the number of species, you can identify them like a real research scientist would! Using a dichotomous key like the ones listed in the "More to explore" section, you can narrow down your specimen to the family, genus or species level! | <urn:uuid:9ced01ac-9174-45ab-9081-a6da6c884204> | 3.9375 | 1,326 | Tutorial | Science & Tech. | 46.988821 |
Oxygen, O2, is a colourless, odourless, gaseous main group element which belongs to Group VIb of the periodic table.
Atmospheric oxygen is of vital importance for all aerobic organisms. For industrial purposes, oxygen is obtained by fractional distillation of liquid air. It is used in metallurgical processes, in high-temperature flames and in breathing apparatus.
- Atomic Number : 8
- Atomic Mass : 15.9994
- Melting Point : -218.4 degC
- Boiling Point : -183 degC
- Density : 1.429
The discovery of Oxygen was credited to Priestley in 1774 AD. However, a paper looking into alchemy by Richard Brzezinski, an expert in the history of science, and Zbigniew Szydlo, a chemistry lecturer, published in the authoritative magazine History Today, credits the discovery of Oxygen to a Polish alchemist called Michael Sendivogius, who found that heated saltpeter produced "the elixir of life" and who, in 1604, described his experiments in a book regarded as so authoritative that it found its way into every major scientific library in Europe. They say that Priestley would surely have had access to it. Cornelis Drebbel, a Dutch inventor employed by the English king James I, used Sendivogius's work in 1621, which was about 150 years before Priestley was credited with the discovery of Oxygen. Drebbel built a submarine, manned by 12 oarsmen, made of wood and waterproofed by a coat of greased leather. It successfully traveled along the River Thames from Westminster to Greenwich, at a depth of 15 ft. The trip, and the method used to keep the oarsmen alive, was subsequently verified by Robert Boyle.
Oxygen occurs in the free state as a gas, to the extent of 21 per cent by volume or 23 per cent by weight in the atmosphere. Oxygen occurs to a larger extent in the earth's crust than any other element. Combined Oxygen also occurs:
- in water,
- in vegetable and animal tissues,
- in nearly all rocks and
- in many minerals.
Because oxygen is a component of air, it has been studied extensively over the centuries and there is a large number of different methods for its preparation. The most convenient method for preparing oxygen in the laboratory involves either the catalytic decomposition of solid potassium chlorate or the catalytic decomposition of hydrogen peroxide.
Preparation of oxygen using potassium chlorate
Potassium chlorate decomposes at a low temperature if previously mixed with manganese dioxide, which is a catalyst for the decomposition. Only the potassium chlorate is decomposed, and no perchlorate is formed:
2 KClO3 ==> 2 KCl + 3 O2
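As a worked example of what this equation implies quantitatively, the sketch below estimates the volume of oxygen released from a given mass of potassium chlorate; the molar mass and molar volume values are rounded approximations and the function is purely illustrative:

```python
# 2 KClO3 ==> 2 KCl + 3 O2 : two moles of chlorate give three moles of oxygen gas
M_KCLO3 = 122.55      # g/mol, approximate molar mass of KClO3
MOLAR_VOLUME = 24.0   # L/mol, approximate molar volume of a gas at room temperature

def oxygen_volume_from_chlorate(grams_kclo3):
    """Approximate volume of O2 (in litres) from fully decomposing a given mass of KClO3."""
    moles_kclo3 = grams_kclo3 / M_KCLO3
    moles_o2 = moles_kclo3 * 3.0 / 2.0
    return moles_o2 * MOLAR_VOLUME

print(round(oxygen_volume_from_chlorate(10.0), 2))   # about 2.94 L from 10 g of KClO3
```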
Preparation of oxygen using hydrogen peroxide
The decomposition of hydrogen peroxide using manganese dioxide as a catalyst also results in the production of oxygen gas.
2 H2O2 ==> 2 H2O + O2
Preparation of oxygen by electrolysis of water
The electrolysis of acidified water is carried out in a Hofmann voltameter. Oxygen is evolved at the positive electrode in the electrolysis.
2 H2O ==> 2 H2 + O2
A solution of barium hydroxide with nickel electrodes may also be used. However, on prolonged electrolysis an explosive mixture of oxygen and hydrogen may be evolved at the positive electrode.
Preparation of oxygen by the chemical decomposition of water
Oxygen is obtained from water by passing a mixture of steam and chlorine through a strongly heated silica tube containing pieces of broken porcelain:
2 H2O + 2 Cl2 ==> 4 HCl + O2
The hydrogen chloride is removed by a wash-bottle containing sodium hydroxide solution and the Oxygen collected over water.
Preparation of oxygen by decomposition of oxides
Oxygen may be obtained by heating some metallic oxides.
Preparation of oxygen by the decomposition of salts
Some salts containing oxygen decompose and release oxygen gas on heating.
- Potassium nitrate melts on heating and at a slightly higher temperature decomposes, giving off bubbles of oxygen and forming potassium nitrite, which solidifies on cooling.
2 KNO3 ==> 2 KNO2 + O2
- Potassium chlorate crystals melt when heated in a hard glass tube at 360 degC and then decompose to form potassium chloride, releasing oxygen.
2 KClO3 ==> 2 KCl + 3 O2
- Potassium permanganate, which is a purple crystalline solid, decomposes without fusing on heating to 240 degC, forming a black powder consisting of a mixture of potassium manganate and manganese dioxide and releasing oxygen.
2 KMnO4 ==> K2MnO4 + MnO2 + O2
- Potassium permanganate explodes violently when heated with concentrated sulphuric acid. However, when a solution of hydrogen peroxide is mixed with a solution of the permanganate and diluted sulphuric acid is added, the two compounds decompose together, forming a nearly colourless solution, and oxygen is evolved.
2 KMnO4 + 3 H2SO4 + 5 H2O2 ==> K2SO4 + 2 MnSO4 + 8 H2O + 5 O2
- Chromic trioxide, which is a red crystalline solid, melts on heating at about 420 degC, leaving a green residue of chromic oxide and evolving oxygen.
4 CrO3 ==> 2 Cr2O3 + 3 O2
- Potassium dichromate, which is a bright-red crystalline solid, melts on heating and when strongly heated releases oxygen, leaving a mixture of yellow potassium chromate, which is soluble in water, and green chromic oxide, which is insoluble in water.
4 K2Cr2O7 ==> 4 K2CrO4 + 2 Cr2O3 + 3 O2
- Chromium trioxide and potassium dichromate, when heated with concentrated sulphuric acid, form chromic sulphate and release oxygen.
4 CrO3 + 6 H2SO4 ==> 2 Cr2(SO4)3 + 6 H2O + 3 O2
2 K2Cr2O7 + 10 H2SO4 ==> 4 KHSO4 + 2 Cr2(SO4)3 + 8 H2O + 3 O2
Preparation of oxygen from air
Oxygen may be obtained from the atmosphere in a chemical process, by heating mercury in a confined volume of air, when the oxygen reacts with the mercury to form mercuric oxide. The mercuric oxide so formed is then heated strongly, when it decomposes and pure oxygen is evolved.
In a similar process, if yellow lead monoxide is carefully heated in an iron dish and freely exposed to air, it takes up oxygen from the air and forms red lead.
6 PbO + O2 ==> 2 Pb3O4
On heating strongly, the red lead decomposes into lead monoxide and oxygen gas, which is evolved.
2 Pb3O4 ==> 6 PbO + O2
Various methods have been used for the large scale production of oxygen, but at present the two most widely used are the electrolysis of an aqueous solution of dilute sulphuric acid, and the fractional distillation of liquid air.
Manufacture from liquefied air
Oxygen may be obtained from the atmosphere by the liquefaction and fractional distillation of air. Liquid air is a mixture of liquid nitrogen, boiling point -196 degC, and liquid oxygen, boiling point -183 degC. The nitrogen is more volatile (i.e. it has a lower boiling point) and boils off first during evaporation. Because some oxygen evaporates with the nitrogen, separation of the two gases is brought about by fractionation (i.e. by letting the evolved gas mixture bubble through liquid air rich in oxygen in a tall rectifying column). The oxygen in the gas mixture condenses and almost pure nitrogen gas leaves the top of the column, leaving almost pure liquid oxygen which is then evaporated to give oxygen gas. The oxygen gas is distributed as a compressed gas in high pressure cylinders.
Oxygen is:
- a colourless gas, without smell or taste,
- slightly heavier than air,
- sparingly soluble in water,
- difficult to liquefy, boiling point -183 degC; the liquid is pale blue in colour and is appreciably magnetic.
At still lower temperatures, light-blue solid oxygen is obtained, which has a melting point of -218.4 degC.
Oxygen is essential for life and it takes part in processes of combustion; its biological functions in respiration make it important. Oxygen is sparingly soluble in water, but the small quantity of dissolved oxygen in it is essential to the life of fish.
Oxygen gas is used with hydrogen or coal gas in blowpipes and with acetylene in the oxy-acetylene torch for welding and cutting metals. Oxygen gas is also used in a number of industrial processes.
Medicinally, oxygen gas is used in the treatment of pneumonia and gas poisoning, and it is used as an anesthetic when mixed with nitrous oxide, ether vapour, etc. Carbon dioxide is often mixed with the oxygen as this stimulates breathing, and this mixture is also used in cases of poisoning and collapse for restoring respiration.
Liquid oxygen mixed with powdered charcoal has been used as an explosive.
Hypertext Copyright (c) 2000 Donal O'Leary. All Rights Reserved. | <urn:uuid:318b0ddd-2d0a-43ca-9efd-9731b5de4069> | 3.65625 | 2,143 | Knowledge Article | Science & Tech. | 33.634124 |
A group of ASP files that work together to perform some purpose is called an application. The Application object is used to tie these files together.
The Application object is used to store and access variables from any page, just like the Session object. The difference is that ALL users share ONE Application object (with Sessions there is ONE Session object for EACH user).
The Application object holds information that will be used by many pages in the application (like database connection information). The information can be accessed from any page. The information can also be changed in one place, and the changes will automatically be reflected on all pages.
The Application object's collections, methods, and events are described below:
Collections:
| Contents | Contains all the items appended to the application through a script command |
| StaticObjects | Contains all the objects appended to the application with the HTML <object> tag |
Methods:
| Contents.Remove | Deletes an item from the Contents collection |
| Contents.RemoveAll() | Deletes all items from the Contents collection |
| Lock | Prevents other users from modifying the variables in the Application object |
| Unlock | Enables other users to modify the variables in the Application object (after it has been locked using the Lock method) |
Events:
| Application_OnEnd | Occurs when all user sessions are over, and the application ends |
| Application_OnStart | Occurs before the first new session is created (when the Application object is first referenced) |
| <urn:uuid:3fe14597-de62-4cb0-bd6e-30a4129adc84> | 2.90625 | 467 | Knowledge Article | Software Dev. | 34.890513 |
BC8 Si has a small Fermi surface and thus electronically can be regarded as a semimetal as suggested by its band structure. Its fourfold coordination and brittleness suggest that it is predominantly held together by directional covalent bonds. To resolve this apparent contradiction, the valence charge density has been examined within BC8 and ST12. Figure 4.29 shows an electron density isosurface in ST12 silicon. It bears an uncanny resemblance to a ball and stick model of the structure, showing clearly that the electron density is concentrated into four `bonds' emanating from each atom - the covalent picture.
Figure 4.29: Three dimensional representation of a valence charge density isosurface in ST12 silicon.
This covalency is illustrated even more clearly in Figures 4.7, 4.8 and 4.15, which show the valence charge density in the 110 plane of BC8. Along the 111 direction are atoms separated alternately by two slightly different distances. Although these distances are similar, it can very clearly be seen that the slightly closer pairs are bonded, while the more distant pairs are not. This effect cannot be seen in diamond because there are no `second neighbours' as close. An interesting aspect of this structure is that if the topology of the crystal is defined by bonds, it requires six steps to get from an atom to its `second neighbour', and there is only one such second neighbour per atom. Under pressure the increase in x has the effect of pushing these second neighbours together, but there is still no increase in the charge density between the atoms.
It is therefore deduced that while the electronic properties of BC8 are dominated by its small Fermi surface, and hence it is regarded as a semimetal, the cohesion is dominated by covalent bonding of each atom to four neighbours. This observation has been used in constructing a simple empirical model potential for silicon, which will then apply to structural features of the semimetallic BC8 phase. Calculations using the empirical model are presented in Chapter 4.
ST12 is a semiconducting phase, and again the charge density plots suggest that a covalent picture for the bonding is appropriate. | <urn:uuid:3b65cc55-9f7d-47ce-86c0-e67cb37fa57a> | 3.234375 | 447 | Academic Writing | Science & Tech. | 39.779765 |
PL/SQL declares a cursor implicitly for all SQL data manipulation statements, including queries that return only one row. However, for queries that return more than one row you must declare an explicit cursor or use a cursor FOR loop. An explicit cursor is a cursor in which the cursor name is explicitly assigned to a SELECT statement via the CURSOR...IS statement. With an implicit cursor, Oracle handles the DECLARE, OPEN, FETCH and CLOSE steps for you. Explicit cursors are used to process multirow SELECT statements; an implicit cursor is used to process INSERT, UPDATE, DELETE and single-row SELECT...INTO statements.
The implicit cursor is used to process INSERT, UPDATE, DELETE, and SELECT INTO statements. During the processing of an implicit cursor,Oracle automatically performs the OPEN, FETCH, and CLOSE operations.
Whereas with explicit cursors, the processing is done in four steps: DECLARE the cursor, OPEN the cursor, FETCH from the cursor, and CLOSE the cursor.
IMPLICIT CURSOR: provided automatically by Oracle; it handles DML statements and queries that return only one row.
EXPLICIT CURSOR: defined by the user, for queries that return more than one row.
Explicit cursor: we are not able to handle the NO_DATA_FOUND exception.
Implicit cursor: we are able to handle the NO_DATA_FOUND exception.
| <urn:uuid:1efa0e6b-81da-4999-9755-4bdd24f939b2> | 3.09375 | 383 | Q&A Forum | Software Dev. | 43.32872 |
The rest of the examples in this section will assume that a file object called f has already been created.
To read a file's contents, call f.read(size), which reads some quantity of data and returns it as a string. size is an optional numeric argument. When size is omitted or negative, the entire contents of the file will be read and returned; it's your problem if the file is twice as large as your machine's memory. Otherwise, at most size bytes are read and returned. If the end of the file has been reached, f.read() will return an empty string ("").
>>> f.read()
'This is the entire file.\012'
>>> f.read()
''
f.readline() reads a single line from the file; a newline character (\n) is left at the end of the string, and is only omitted on the last line of the file if the file doesn't end in a newline. This makes the return value unambiguous; if f.readline() returns an empty string, the end of the file has been reached, while a blank line is represented by '\n', a string containing only a single newline.
>>> f.readline()
'This is the first line of the file.\012'
>>> f.readline()
'Second line of the file\012'
>>> f.readline()
''
f.readlines() uses f.readline() repeatedly, and returns a list containing all the lines of data in the file.
>>> f.readlines()
['This is the first line of the file.\012', 'Second line of the file\012']
f.write(string) writes the contents of string to the file, returning None.
>>> f.write('This is a test\n')
f.tell() returns an integer giving the file object's current position in the file, measured in bytes from the beginning of the file. To change the file object's position, use "f.seek(offset, from_what)". The position is computed from adding offset to a reference point; the reference point is selected by the from_what argument. A from_what value of 0 measures from the beginning of the file, 1 uses the current file position, and 2 uses the end of the file as the reference point. from_what can be omitted and defaults to 0, using the beginning of the file as the reference point.
>>> f=open('/tmp/workfile', 'r+')
>>> f.write('0123456789abcdef')
>>> f.seek(5)     # Go to the 5th byte in the file
>>> f.read(1)
'5'
>>> f.seek(-3, 2) # Go to the 3rd byte before the end
>>> f.read(1)
'd'
When you're done with a file, call f.close() to close it and free up any system resources taken up by the open file. After calling f.close(), attempts to use the file object will automatically fail.
>>> f.close()
>>> f.read()
Traceback (innermost last):
  File "<stdin>", line 1, in ?
ValueError: I/O operation on closed file
File objects have some additional methods, such as isatty() and truncate() which are less frequently used; consult the Library Reference for a complete guide to file objects. | <urn:uuid:79481f1d-f493-4e07-bcdb-16d3ee2f8b92> | 4.28125 | 715 | Documentation | Software Dev. | 72.861673 |
by Don Stewart
Haskell is a functional language built for parallel and concurrent programming. You can take an off-the-shelf copy of GHC and write high-performance parallel programs right now. This tutorial will teach you how to exploit parallelism through Haskell on your commodity multicore machine, to make your code faster. We will introduce key parallel programming models as implemented in Haskell,
and look at how to build faster programs using these abstractions. We will also look at the engineering considerations when writing parallel programs, and the tools Haskell provides for debugging and reasoning about parallel programs.
This is a hands-on tutorial session: bring your laptops; there will be code!
1st–4th June 2010 | <urn:uuid:8745b819-48c8-46c7-bb1b-5a53cdf18ea7> | 3.515625 | 145 | Content Listing | Software Dev. | 41.332655 |
This unnamed 740 m diameter crater has bouldery walls and is morphologically similar to many <1 km diameter craters in the mare. Image width is 847 m, LROC NAC M127328861L [NASA/GSFC/Arizona State University].
The LROC NAC has imaged thousands of blocky craters similar to this example found in Oceanus Procellarum. The numerous boulders may be fragments of bedrock, regolith breccias formed by the impact itself, or a combination of both. Since a crater's depth of excavation is roughly one tenth of its diameter, this small crater has probably distributed material from about 75 meters depth around its rim. Usually, ejecta material on the rim comes from the deepest part of the crater, and ejecta farther away from the crater comes from shallower depths. Thus astronauts can walk towards a crater rim, sampling material from greater depths as the rim is approached, in effect taking a radial cross-section of the ejecta blanket. As you look at these boulders, you are witnessing a history of the emplacement of this mare. If we could just pick up samples and bring them back to Earth, we could figure out how much time elapsed between mare basalt flows in this area and how much the composition changed with time.
The vast Oceanus Procellarum mare basalts are observed in this portion of LROC Wide Angle Camera monochrome M117895651M. Prominent across the scene are wrinkle ridges and secondary crater clusters. The arrow points to the blocky crater in the opening image [NASA/GSFC/Arizona State University]. | <urn:uuid:77f7b522-f40b-4b7c-9936-f58b946de627> | 3.203125 | 337 | Knowledge Article | Science & Tech. | 43.074955 |
This should enable exotic new states of matter and better quantum computers.
Nature - Orbital excitation blockade and algorithmic cooling in quantum gases
1. The researchers cooled atoms of rubidium with lasers. When set up properly, these beams can force atoms to glow in a way that makes them emit more energy than they absorb, thus making them colder.
When the atoms gave off light as a result of being hit with the laser, this exerted a slight pressure on them. The scientists took advantage of that pressure to control the atoms, either keeping them in place or moving them around, sometimes creating collisions.
2. The researchers then made the atoms even colder with evaporative cooling, in which matter gets cooled in much the same way as a cup of coffee loses its warmth — the hottest atoms are allowed to evaporate, leaving behind the colder ones.
3. The researchers used webs of lasers known as "optical lattices." When two atoms are made to collide within the optical lattice, the excitations of one suppress the excitations of the other, a phenomenon called "orbital excitation blockade." The excited atoms are then removed from the system -- taking away entropy, a measure of the system's disorder -- thus causing the remaining atoms to chill down.
Interaction blockade occurs when strong interactions in a confined, few-body system prevent a particle from occupying an otherwise accessible quantum state. Blockade phenomena reveal the underlying granular nature of quantum systems and allow for the detection and manipulation of the constituent particles, be they electrons, spins, atoms or photons. Applications include single-electron transistors based on electronic Coulomb blockade and quantum logic gates in Rydberg atoms. Here we report a form of interaction blockade that occurs when transferring ultracold atoms between orbitals in an optical lattice. We call this orbital excitation blockade (OEB). In this system, atoms at the same lattice site undergo coherent collisions described by a contact interaction whose strength depends strongly on the orbital wavefunctions of the atoms. We induce coherent orbital excitations by modulating the lattice depth, and observe staircase-like excitation behaviour as we cross the interaction-split resonances by tuning the modulation frequency. As an application of OEB, we demonstrate algorithmic cooling of quantum gases: a sequence of reversible OEB-based quantum operations isolates the entropy in one part of the system and then an irreversible step removes the entropy from the gas. This technique may make it possible to cool quantum gases to have the ultralow entropies required for quantum simulation of strongly correlated electron systems. In addition, the close analogy between OEB and dipole blockade in Rydberg atoms provides a plan for the implementation of two-quantum-bit gates in a quantum computing architecture with natural scalability.
| <urn:uuid:7016348e-28e8-4af8-8a9f-62212de1feb4> | 3.734375 | 590 | Truncated | Science & Tech. | 23.557922 |
First of all, the scheme of the CERN accelerator complex you posted contains not only that single chain which brings the protons to the LHC, but also several other chains which are used for many lower energy experiments conducted in parallel at CERN.
But let's focus on the LHC accelerator chain: why do we need several successive accelerators instead of a single one? The answer is very intuitive from the engineering point of view: technologically, it is much easier to construct several devices specialized for different but limited ranges of some physical parameter than to build a single device that exhibits excellent performance in the entire vast range.
An example in transportation: cars are good to travel at lower speed, airplanes are good for traveling fast. It is exceedingly difficult to build a vehicle that would have equally good performance in the entire range of speeds from 1 km/h to 1000 km/h. Another example is given by professional acoustic systems which usually include several speakers optimized for different frequency ranges.
The same is true for the accelerators. Here the key point is not the energy of the protons per se, but the quality of the beam at a given energy. Accelerating protons is (technically) a minor issue; the major issue is to keep the beam safe and well-behaved. If you dig a bit into the CERN website, you'll soon realize that these accelerators have quite distinct features and each of them was optimized for its own energy range and its own purposes. Some of them serve to accumulate particles, others are specialized in breaking the beams into bunches. At each stage you need to cool the beams down, damp oscillations, etc., and this is done in a different way at different energies.
In principle, you could think of an accelerator that would occupy the LHC ring and would take the protons at a very low energy (say, 1 MeV) and accelerate them up to multi-TeV. Since the protons must always circulate in the same ring, the magnetic field must be adjustable (with very high precision and very high spatial homogeneity) from a few microtesla to several tesla, in the range of six orders of magnitude. The beam monitoring system must also be adapted to accurately measure the proton currents differing by 6 orders of magnitude. Similar requirements are placed on the beam steering, orbit correcting and focusing magnets, on the kicker magnets, and on many other components of the accelerator. In short, although technically possible, it would be just too difficult and too expensive. A sequence of several accelerators (which existed before the LHC anyway) is a preferred solution. | <urn:uuid:dfb98e68-cf87-4ac7-a550-a5332d5efa2f> | 3.140625 | 531 | Q&A Forum | Science & Tech. | 34.684972 |
Photons are massless, therefore, in the Standard Model, the Higgs boson does not couple directly to photons. Instead, the decay of the Higgs into 2 photons is a loop mediated process. It happens to be dominated by two contributions: one, larger, from the W boson, and the other, 5 times smaller, from the top quark. Incidentally, these two contributions enter with opposite signs (in fact it's not quite an accident but a consequence of the theorem that links these contributions to the electromagnetic beta function). The other known charged particles are expected to contribute much less because their coupling to the Higgs is much smaller.
Now it is clear what tricks can be played to pump up the Higgs decay width into photons:
- Increase the Higgs couplings to W bosons. But then one needs to explain why no excess in the WW channel has been observed.
- Decrease the Higgs coupling to the top quark. But then the Higgs production rate would be decreased as well, and one needs to cook up new production channels.
- Introduce one (or more) new charged particle that contributes as much as the top quark, but with the opposite sign. | <urn:uuid:a8b19fb5-df30-42f1-a459-8fc9cc3113e1> | 3.109375 | 248 | Personal Blog | Science & Tech. | 54.719423 |
In this section, you will learn to convert Time to seconds. An hour has 3600 seconds and a minute has sixty seconds.
Description of program:
This example converts a Time value to seconds. First, hours are converted to minutes and then minutes to seconds, as shown below. Because we also want the Time displayed as a String, a TimetoStr constructor is used and the time value is passed to it. This produces the following output.
Output of the program:
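The program's output is not reproduced in this excerpt. As an illustration of the conversion only, here is a minimal sketch in Python; this is an assumption, since the tutorial's own language and its TimetoStr helper are not shown here.

def time_to_seconds(hours, minutes, seconds):
    # hours -> minutes -> seconds, as described above
    return (hours * 60 + minutes) * 60 + seconds

def time_to_string(total_seconds):
    # crude stand-in for the TimetoStr idea: return the value as a String
    return str(total_seconds) + " seconds"

print(time_to_string(time_to_seconds(1, 30, 15)))  # prints: 5415 seconds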
| <urn:uuid:3510fc0b-9a48-46bf-97b1-eb04e37b1a34> | 3.625 | 151 | Documentation | Software Dev. | 60.824237 |
Researchers have developed a technique that takes brain mapping to a new level, allowing them to label individual neurons in the brain in different colors. The technique, dubbed 'brainbow' by the researchers involved, could help scientists gain a better understanding of brain function than previous staining techniques allow.
"There are few tools neuroscientists can use to tease out the wiring diagram of the nervous system; Brainbow should help us much better map out the brain and nervous system's complex tangle of neurons," said Jeff Lichtman, one of the authors of a report on the technique published in the journal Nature. (Full disclosure: Lichtman is also the father of Science Friday digital media producer Flora Lichtman.)
In this segment, Ira talks with Lichtman about the technique and its potential applications to neuroscience.
Produced by Karin Vergoth | <urn:uuid:9f6651c2-f988-4ab1-bae4-abe04c83793f> | 3.34375 | 175 | Truncated | Science & Tech. | 31.799059 |
Global warming less extreme than feared?
Policymakers are attempting to contain global warming at less than 2°C. New estimates from a Norwegian project on climate calculations indicate this target may be more attainable than many experts have feared.
Internationally renowned climate researcher Caroline Leck of Stockholm University has evaluated the Norwegian project and is enthusiastic.
“These results are truly sensational,” says Dr Leck. “If confirmed by other studies, this could have far-reaching impacts on efforts to achieve the political targets for climate.”
Temperature rise is levelling off
After Earth’s mean surface temperature climbed sharply through the 1990s, the increase has levelled off nearly completely at its 2000 level. Ocean warming also appears to have stabilised somewhat, despite the fact that CO2 emissions and other anthropogenic factors thought to contribute to global warming are still on the rise.
It is the focus on this post-2000 trend that sets the Norwegian researchers’ calculations on global warming apart.
Sensitive to greenhouse gases
Climate sensitivity is a measure of how much the global mean temperature is expected to rise if we continue increasing our emissions of greenhouse gases into the atmosphere.
CO2 is the primary greenhouse gas emitted by human activity. A simple way to express climate sensitivity is to calculate how much the mean air temperature would rise if the atmospheric CO2 concentration were doubled compared with the world's pre-industrial level around the year 1750.
If we continue to emit greenhouse gases at our current rate, we risk doubling that atmospheric CO2 level in roughly 2050.
A number of factors affect how the climate develops. The complexity of the climate system is further compounded by feedback mechanisms, i.e. the ways in which factors such as clouds, evaporation, snow and ice mutually affect one another.
Uncertainties about the overall results of feedback mechanisms make it very difficult to predict just how much of the rise in Earth’s mean surface temperature is due to manmade emissions. According to the Intergovernmental Panel on Climate Change (IPCC) the climate sensitivity to doubled atmospheric CO2 levels is probably between 2°C and 4.5°C, with the most probable being 3°C of warming.
In the Norwegian project, however, researchers have arrived at an estimate of 1.9°C as the most likely level of warming.
Manmade climate forcing
“In our project we have worked on finding out the overall effect of all known feedback mechanisms,” says project manager Terje Berntsen, who is a professor at the University of Oslo’s Department of Geosciences and a senior research fellow at the Center for International Climate and Environmental Research – Oslo (CICERO). The project has received funding from the Research Council of Norway’s Large-scale Programme on Climate Change and its Impacts in Norway (NORKLIMA).
“We used a method that enables us to view the entire earth as one giant ‘laboratory’ where humankind has been conducting a collective experiment through our emissions of greenhouse gases and particulates, deforestation, and other activities that affect climate.”
For their analysis, Professor Berntsen and his colleagues entered all the factors contributing to human-induced climate forcings since 1750 into their model. In addition, they entered fluctuations in climate caused by natural factors such as volcanic eruptions and solar activity. They also entered measurements of temperatures taken in the air, on ground, and in the oceans.
The researchers used a single climate model that repeated calculations millions of times in order to form a basis for statistical analysis. Highly advanced calculations based on Bayesian statistics were carried out by statisticians at the Norwegian Computing Center.
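As a rough illustration of the idea only (this is not the CICERO/Norwegian Computing Center model), the Python sketch below updates a flat prior on climate sensitivity using a single crude equilibrium relation between forcing and warming. Every number in it is an assumption chosen for illustration, and ocean heat uptake is ignored.

import numpy as np

F_2XCO2 = 3.7         # W/m2 of forcing for doubled CO2 (standard round value)
forcing_now = 1.6     # W/m2, assumed net anthropogenic forcing to date (assumption)
obs_warming = 0.8     # degC, assumed observed warming since pre-industrial (assumption)
obs_sigma = 0.2       # degC, assumed observational/natural-variability noise (assumption)

S_grid = np.linspace(0.5, 6.0, 1000)        # candidate sensitivities, degC per doubling
prior = np.ones_like(S_grid)                # flat prior
predicted = S_grid * forcing_now / F_2XCO2  # crude equilibrium response, no ocean lag
likelihood = np.exp(-0.5 * ((obs_warming - predicted) / obs_sigma) ** 2)
posterior = prior * likelihood
posterior /= np.trapz(posterior, S_grid)    # normalise

best = S_grid[np.argmax(posterior)]
cdf = np.cumsum(posterior) * (S_grid[1] - S_grid[0])
low, high = S_grid[np.searchsorted(cdf, 0.05)], S_grid[np.searchsorted(cdf, 0.95)]
print(f"most likely sensitivity ~ {best:.1f} degC, 90% interval ~ [{low:.1f}, {high:.1f}] degC")

The real analysis replaces this toy relation with a full climate model run millions of times, as described above.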
2000 figures make the difference
When the researchers at CICERO and the Norwegian Computing Center applied their model and statistics to analyse temperature readings from the air and ocean for the period ending in 2000, they found that climate sensitivity to a doubling of atmospheric CO2 concentration will most likely be 3.7°C, which is somewhat higher than the IPCC prognosis.
But the researchers were surprised when they entered temperatures and other data from the decade 2000-2010 into the model; climate sensitivity was greatly reduced to a “mere” 1.9°C.
Professor Berntsen says this temperature increase will only be fully felt some time after we reach the doubled level of CO2 concentration (compared to 1750) and maintain that level for an extended time, because the oceans delay the effect by several decades.
The figure of 1.9°C as a prediction of global warming from a doubling of atmospheric CO2 concentration is an average. When researchers instead calculate a probability interval of what will occur, including observations and data up to 2010, they determine with 90% probability that global warming from a doubling of CO2 concentration would lie between 1.2°C and 2.9°C.
This maximum of 2.9°C global warming is substantially lower than many previous calculations have estimated. Thus, when the researchers factor in the observations of temperature trends from 2000 to 2010, they significantly reduce the probability of our experiencing the most dramatic climate change forecast up to now.
Professor Berntsen explains the changed predictions:
“The Earth’s mean temperature rose sharply during the 1990s. This may have caused us to overestimate climate sensitivity.
“We are most likely witnessing natural fluctuations in the climate system – changes that can occur over several decades – and which are coming on top of a long-term warming. The natural changes resulted in a rapid global temperature rise in the 1990s, whereas the natural variations between 2000 and 2010 may have resulted in the levelling off we are observing now.”
Climate issues must be dealt with
Terje Berntsen emphasises that his project’s findings must not be construed as an excuse for complacency in addressing human-induced global warming. The results do indicate, however, that it may be more within our reach to achieve global climate targets than previously thought.
Regardless, the fight cannot be won without implementing substantial climate measures within the next few years.
The project’s researchers may have shed new light on another factor: the effects of sulphur-containing atmospheric particulates.
Burning coal is the main way that humans continue to add to the vast amounts of tiny sulphate particulates in the atmosphere. These particulates can act as condensation nuclei for cloud formation, cooling the climate indirectly by causing more cloud cover, scientists believe. According to this reasoning, if Europe, the US and potentially China reduce their particulate emissions in the coming years as planned, it should actually contribute to more global warming.
But the findings of the Norwegian project indicate that particulate emissions probably have less of an impact on climate through indirect cooling effects than previously thought.
So the good news is that even if we do manage to cut emissions of sulphate particulates in the coming years, global warming will probably be less extreme than feared.
About the project
Geophysicists at the research institute CICERO collaborated with statisticians at the Norwegian Computing Center on a novel approach to global climate calculations in the project “Constraining total feedback in the climate system by observations and models”. The project received funding from the Research Council of Norway’s NORKLIMA programme. The researchers succeeded in reducing uncertainty around the climatic effects of feedback mechanisms, and their findings indicate a lowered estimate of probable global temperature increase as a result of human-induced emissions of greenhouse gases. The project researchers were able to carry out their calculations thanks to the free use of the high-performance computing facility in Oslo under the Norwegian Metacenter for Computational Science (Notur). The research project is a prime example of how collaboration across subject fields can generate surprising new findings.
- Written by: Bård Amundsen/Else Lie. Translation: Darren McKellep/Carol B. Eckmann
- h/t to Andrew Montford via Leo Hickman | <urn:uuid:76462052-2a23-4bc0-8dc7-787f02225343> | 3.765625 | 1,661 | Knowledge Article | Science & Tech. | 29.070356 |
joeberry at biosphere.Stanford.EDU
Sun Sep 28 22:58:57 EST 1997
Dear Photosynthesis Researchers,
I received an interesting question that might be a useful topic for
discussion on the photosynthesis net. The question comes from an
astronomer via Maxine Singer, President of the Carnegie Institution.
Allan Sandage at the Observatories sent me the following
question. Can you help with an answer? He doesn't 'do' email, so
email me the answer and I will send it on to him.
"Why are plants green?? (I suppose this means and not yellow or
blue or red) What evolutionary advantage does green have re
I would appreciate hearing the thoughts of other photosynthesis
researchers. I have included my answer and a response from Winslow
Carnegie Institution of Washington
Stanford, CA 94305
joeberry at biosphere.stanford.edu
Here is one way to look at it: Chlorophyll's absorption is at
wavelengths <700 and >400 nm. This "window" was probably prescribed by
the chemistry of the primordial oceans. These are thought to have
contained high concentrations of Fe+2 ion (which absorbs strongly at
wavelengths >700 nm) and dissolved organic compounds (which absorb in
the blue and near UV). Thus, chlorophyll is a pigment that "fits"
into a window of available light energy. In this sense, it is ideally
suited for photosynthesis. On the other hand, chlorophyll is green
because it doesn't completely fill the window. This is not an
advantage, and plants have evolved a number of accessory pigments to
fill the hole in the chlorophyll absorption spectrum. These pigments
donate absorbed photon energy to chlorophyll.
Subject: Re: (Fwd) Question
Author: "Winslow Briggs" <BRIGGS at andrew.stanford.edu> at Internet
Date: 9/25/97 11:26 AM
Let me add to Joe's comment:
There aren't any conjugated double bond pigments that I know that have
extremely broad absorption bands. Below 400nm, the increasing energy
of the photons raises the spectre of photochemical damage. Beyond 700
nm, the energy levels are sufficiently low that except in exceptional
cases they are insufficient for effectively driving photochemistry. A
compromise: an absorption band safely above the UV, and one
sufficiently down in the red that useful photochemistry is still
possible. My guess is that a single band in either wavelength region
would probably be selected against. The situation in higher plants is
not perfect, as Joe points out, and accessory pigments are made in
some algae to fill in the gaps. Even higher plants use carotenoids,
absorbing in the blue, to enhance energy capture, but these still do
not extend too far into the green window left by chlorophyll.
It seems to me that given the properties of conjugated double bond
systems in absorbing light energy, making a molecule with two major
bands within the biologically constrained wavelength range is not all
that simple, and chlorophyll is an ideal solution.
(Note the waving of hands!).
| <urn:uuid:e70d28ea-f5ec-4f0f-a4c9-eec09e018a7b> | 2.859375 | 704 | Comment Section | Science & Tech. | 48.41918 |
Global warming inspires a look at solar, wind energy
Published January 23, 2007
Now that many Americans accept the reality of global warming, they want to do something about it. In the Southwest, that desire is being harnessed into initiatives to improve energy efficiency and boost alternative forms of power, such as solar and wind energy.
The rising temperatures of recent decades trace back largely to emissions of greenhouse gases, mainly from the burning of fossil fuels like coal, gas, and oil. So the first step toward reigning in global warming involves reducing fossil fuel emissions.
The United States releases more greenhouse gases from fossil fuels than any other nation. Per-person emissions tally about six times higher in the United States than in China, the runner-up for title of world’s biggest producer of greenhouse gases. Yet the U.S. government has declined to join the international effort to reduce greenhouse gas emissions, known as the Kyoto Protocol.
Many states, cities, companies, and individuals are attempting to fill the void left by the federal government. New Mexico and Arizona are making efforts to reduce fossil fuel emissions by supporting alternative fuels and improving energy efficiency. The state efforts also affect cities, companies, and individuals, especially those interested in powering their homes and offices with solar energy.
“The governors are moving on this primarily because the federal government is not,” explained Sandra Ely, New Mexico’s Energy and Environment Coordinator. Ely served as the point person for the state’s Climate Change Advisory Group, which released an action report in December. Arizona released its action report in mid-2006. The groups identified major sources of greenhouse gases (Figure 1) and recommended ways to reduce them. (See links to these documents at the end of this article.)
In September, Arizona Governor Janet Napolitano responded to the report by issuing an executive order requiring the state to reduce greenhouse gas emissions to 2000 levels by 2020, and to 50 percent below 2000 levels by 2040. At the time, she noted that the proposed recommendations would actually save money, amounting to $5.5 billion through 2020 and more in subsequent years.
In New Mexico, Governor Bill Richardson had issued an executive order in 2005 setting up the advisory group and asking it to think of ways to reduce the state’s total greenhouse gas emissions to 2000 levels by the year 2012, to 10 percent below those levels by 2020 and to 75 percent below by 2050. To address the quotas, the advisory group decided to focus on the electricity consumed within the state, which represents roughly a quarter of the all the greenhouse gas emissions produced. The governor followed up with an order last month prescribing some actions, including making new buildings and cars more energy-efficient.
Both states face the challenge of trying to stabilize greenhouse gas emissions even as their populations explode. The number of Arizona residents rose by 40 percent during the 1990s, while New Mexico’s population increased by 20 percent. Population growth averaged 13 percent in the nation during this time frame.
Arizona’s population growth is translating directly into the country’s highest growth rates in greenhouse gas emissions, noted Kurt Maurer, an Arizona Department of Environmental Quality employee who helped organize Arizona’s Climate Change Advisory Group.
“Our growth rate is outpacing astronomically what other states are experiencing. We’re the fastest growing state in the country,” Maurer said.
In both states and the country as a whole, per-capita greenhouse gas emissions—measured in metric tons of carbon dioxide equivalent per person—have remained roughly stable since 1990.
New Mexico’s large coal industry coupled with its relatively small population help make the state’s per-capita greenhouse gas emissions about double the national average. The New Mexico advisory group targeted changes in this sector as one of the most effective ways to reduce overall emissions.
Arizona falls below the national average for greenhouse gas emissions per person, in part because the region’s mild winters demand less heating. Still, electricity demands for Arizona homes have quadrupled in recent years as developers build larger structures and air conditioners replace swamp coolers.
Energy use in buildings accounts for about two-fifths of greenhouse gas emissions in the Southwest, counting the lighting and cooling provided by electricity. This has inspired leaders in both states to push for more energy-efficient structures.
Governor Richardson has promised to move forward on several regulatory fronts that don’t need legislative approval. These include requiring contractors to follow the green building rating standards known as LEED, for Leadership in Energy and Environmental Design. This energy-efficient approach offers one of the best economic returns, Ely said.
“You may have some initial upfront costs of maybe 2 percent more, but you get so much back from that initial investment that you make the money back fairly quickly,” she noted. Although homeowners will pay a bit extra for the home, the longer-term energy savings would amount to about $12 per ton of carbon dioxide equivalent by 2020, the report projects.
Even existing homes sometimes can benefit from improvements in energy efficiency, noted Tom Goldtooth, executive director of the Indigenous Environmental Network. Some reservation homes even have ice building up in corners, a sign that energy is leaking out of the cracks, he explained during a December Tribal Lands Climate Conference held in Yuma.
Federal tax credits for home improvements, including insulation, continue through this year. (See links at the end of this article.)
Greenhouse gas emissions from vehicles rival the amount coming from energizing buildings in Arizona. Transportation accounts for about 39 percent of fossil fuel emissions in Arizona and 17 percent in New Mexico. Our nation’s driving habits account for about half of the auto emissions around the planet, a 2006 Environmental Defense study showed, in part because Americans favor large vehicles with low gas mileage.
New Mexico plans to shift into more stringent vehicle emission standards by adopting California’s Clean Car guidelines. California’s interest in reducing its greenhouse gas emissions and related air pollution inspired Fran Pavley and other legislators to set a quota for electrical cars and restrict the sale of vehicles with low fuel efficiency. Auto makers and their organizations have sued to keep the state from implementing the law.
Arizona is holding off on adopting the California standards until the lawsuit is settled, Maurer said. In the meantime, the governor issued an executive order requiring that departments purchasefuel-efficient or hybrid vehicles so that the official fleet will meet these standards by 2010.
Plans are also moving forward for Arizona Grain, Inc., to open an ethanol production plant in Maricopa by mid-year. The company plans to convert corn into 50 million gallons a year of a fuel blend containing 85 percent ethanol. Ethanol is an alternative to oil that emits fewer pollutants than a conventional system, including perhaps 20 percent fewer greenhouse gases.
However, some policy experts worry that its widespread adoption could worsen conditions for the world’s poor in the long run. Lester Brown, president of the Earth Policy Institute, has cautioned that a large-scale move to ethanol would force less developed countries to compete with wealthy countries for world grain supplies. Because of this risk, Brown instead promotes developing wind energy to power electric vehicles.
Whether cars reap the bounty of wind energy in the Southwest or not, utilities in both states will be employing more windmills to meet requirements that renewable energy comprise a greater share of their generating capacity. Existing laws require Arizona to meet 15 percent of its electrical needs from renewable sources by 2025, while New Mexico must obtain 10 percent by 2011.
New Mexico already has a 204- megawatt wind farm in House, with windmills dotting the landscape on private ranches amid grazing cattle, Ely pointed out.
“The ranchers love it. It’s a great utilization of their ranchland,” she added. The leases for windmills provide an ongoing source of income to ranchers with a livelihood that is subject to change with climate fluctuations.
According to Ben Luce, director of the New Mexico Coalition for Clean and Affordable Energy, New Mexico will need more electricity transmission lines to profit from its wind potential. The coalition supports adding transmission lines throughout eastern New Mexico, a windy area that could eventually supply 4,000 to 8,000 megawatts of wind power—enough to power the whole state, he said.
“A lot could change in this whole discussion in the next couple of years if we get this off the ground. Basically we could displace coal in the Southwest,” Luce ventured. “The beauty is this is all local technology, so it won’t hurt the economy. It could even help it.”
New Mexico could develop a wind turbine manufacturing plant in an Albuquerque railyard under one proposal on the table, Luce said. Discussions call for the plant to produce windmills that can generate between 1.5 megawatts and 4 megawatts of electrical power each.
A shortage of windmills threatens to derail some U.S. projects in the short term. Many experts consider the shortage temporary, soon to be relieved with upcoming windmill production plans.
China currently overwhelms the windmill market with its demand, but lately the nation of 1.3 billion people has been stepping up its own production of windmills in hopes of meeting its needs independently. An upswing in windmill production in China and other countries is expected to ease the shortage within a few years.
At 10 cents a kilowatt-hour and falling, wind energy prices compete directly with electricity produced from fossil fuels. This helps explain their growing popularity around the world (Figure 2). Creating solar-powered electricity, meanwhile, remains relatively expensive, although passive solar heating of water pays off quickly. As a result, solar electrical systems haven’t been keeping pace with wind except in rates of increase (Figure 3).
Both Arizona and New Mexico provide cash incentives to homeowners to supplement federal subsidies for renewable energy. As a result, the government covers about half the cost of rooftop panels using photovoltaic cells (PVCs). (See links to the right.)
A roof-mounted system from American Solar, which participated in Arizona’s climate change advisory group, would cost about $14,000 after cashing in federal and state credits, explained spokesman Tom Alston. A system this size would supply half the electrical needs of a typical Arizona home, he said.
American Solar’s systems run about $3 per watt of electrical energy installed, or $3,000 per kilowatt, Alston estimated. Electric bills come in kilowatt-hours, which measures the number of hours in which a system uses 1,000 watts of energy. Although solar energy is produced only while the sun is shining, Southwestern homeowners generally can sell their extra electricity to their utility companies at retail prices, then buy back what they need during the night.
The investment pays off before the 25-year warranty runs out, Alston said, noting it would yield a 6.5 percent return over its lifetime, assuming a modest increase of about 3 percent a year in electricity rates.
“But it’s also like buying an energy future,” Alston added, referring to the stock market tactic of banking on the likelihood of future price increases. “Every time the rates go up, the system becomes more valuable. I’m essentially ensuring my rates don’t go up for the next 25 years.”
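The payback arithmetic quoted above is easy to check. The short Python sketch below takes the article's figures as given; the first-year electricity saving of $900 is not stated in the article and is assumed here purely for illustration.

system_cost = 14000.0        # net cost after credits (from the article)
first_year_savings = 900.0   # assumed bill offset in year 1 (assumption)
rate_growth = 0.03           # ~3 percent a year electricity price increase (from the article)
years = 25                   # warranty period (from the article)

savings = [first_year_savings * (1 + rate_growth) ** t for t in range(years)]

# Simple payback: first year in which cumulative savings exceed the system cost
cumulative, payback_year = 0.0, None
for year, s in enumerate(savings, start=1):
    cumulative += s
    if payback_year is None and cumulative >= system_cost:
        payback_year = year

# Internal rate of return over the warranty period, found by bisection
def npv(rate):
    return sum(s / (1 + rate) ** (t + 1) for t, s in enumerate(savings)) - system_cost

low, high = 0.0, 0.20
for _ in range(60):
    mid = (low + high) / 2
    low, high = (mid, high) if npv(mid) > 0 else (low, mid)

print(f"payback in year {payback_year}, return over {years} years ~ {low:.1%}")

With these assumptions the system pays back in about 13 years and returns roughly 6 to 7 percent, consistent with the figures quoted.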
Arizona Public Service has one of the world’s largest electrical plants using solar power. Its Springerville, Arizona, plant hosts a 5-megawatt facility. American Solar also is finalizing plans for a 1-megawatt solar plant on the Gila River Indian Reservation south of Phoenix.
Luce hopes to lure PVC manufacturing plants into New Mexico, especially in places like Demming and Las Cruces where they could supply viable sunny sites nearby. An Albuquerque development known as Mesa del Sol might benefit from Sandia Laboratory efforts on a version of power known as concentrated solar power, he said.
With the concentrated approach, lens arrays follow the sun’s daytime passage through the sky, focusing the captured light onto PVCs, explained Roger Angel, director of The University of Arizona’s Mirror Lab. The Mirror Lab is researching concentrated solar power, applying its expertise in astronomy to the effort.
“It’s like many little telescopes looking at the sun,” said Angel. With the focused energy, fewer PVCs can yield more electricity compared to conventional solar. Angel has a team of investigators working to refine the materials and technique in the hope of bringing costs into the commercial range. “There’s no difficulty in making energy from the sun,” he said. “The key issue is can you do it for $1 a watt [installed], not $4 a watt.”
The cost of creating energy from PVCs remains relatively high for several reasons. Germany’s appetite for solar panels is helping to keep demand greater than supply. Also, a shortage of refined silicon, an essential material for PVCs, limits production. Concentrating solar power could help get past this barrier because it provides more energy per unit area of PVCs.
The Southwest is leading the way on concentrated solar, as befits the region with the lion’s share of the nation’s harvestable sunshine. An APS project in Red Rock, Arizona, is planning to use concentrated solar power to heat oil to generate power, Alston said.
By tapping into the power of the sun and wind and improving the energy efficiency of buildings and cars, officials hope to curb the growth of greenhouse gas emissions. This, in turn, could help stabilize climate and avoid some of the impacts of the ongoing global warming.
There’s still a long way to go, but government mandates are fueling a revived interest in alternative power and conservation. Those who buy into these efforts enjoy the satisfaction of knowing they’re doing their share to stabilize climate.
Arizona Climate Change Advisory Group http://www.azclimatechange.us/
New Mexico Climate Change Advisory Group http://www.nmclimatechange.us/
Database of State Incentives for Renewables & Efficiency http://www.dsireusa.org/
Energy Star on federal incentives http://www.energystar.gov/index.cfm?c=products.pr_tax_credits#1
New Mexico Coalition for Clean & Affordable Energy www.nmccae.org
Calculating individual greenhouse gas emissions http://www.cool-it.us/index.php?refer=&task=carbon | <urn:uuid:65569618-4dab-4c3b-94a2-7198c79498e0> | 2.984375 | 3,019 | Knowledge Article | Science & Tech. | 37.751773 |
Few topics in science excite the popular imagination more than antimatter, perhaps because the idea sounds like science fiction that you can really believe in. For the same reason, it should not be surprising that when a story about antimatter surfaces in the news cycle—as regularly happens—the real science sometimes gets parked on the shelf and media-promoted pseudoscience takes over.
To put antimatter pseudoscience in perspective, we first need to recall a few things about the genuine article. Atoms of the chemical elements that form the matter of the world around us are themselves constructed from just three varieties of fundamental particle: protons, which carry a positive electric charge; electrons, which carry an equal amount of negative charge; and neutrons, which have no charge at all, although, like the other two, they have magnetic properties. The hydrogen atom is the simplest of all. It consists of a single electron bound by electrical attraction to a nucleus consisting of a single proton. It is therefore electrically neutral. Atoms of heavier elements just have more electrons bound to their nuclei consisting of an equal number of protons and a variable number of neutrons. They are, then, normally neutral.
Even before all this was established, the idea was entertained that a kind of mirror matter might exist made of particles with reverse “signed” attributes like charge and magnetism. Thus Arthur Schuster (Schuster 1898), observing that electric charge plays an important role in nature, asked the rhetorical question, “If there is negative electricity, why not negative gold?” Some thirty years later, English physicist Paul Dirac showed that if quantum mechanics and special relativity—the two cornerstones of modern physics—are to hold simultaneously, it must be possible for counterparts of the three fundamental particles to exist that have the same mass as, but opposite electric, magnetic, and other properties to, the originals. These he called “antiparticles.” We might then legitimately ask: If there are antiparticles, why not antimatter?1 Foreseeing this possibility, Dirac (Dirac 1933) intimated: “We must regard it as an accident that the earth (and presumably the whole solar system) contains a preponderance of negative electrons and positive protons. It is quite possible that for some of the stars it is the other way about. . . . There would be no way of distinguishing them by present astronomical methods.” Positively charged antielectrons, produced in the atmosphere by cosmic rays, were duly discovered (and soon afterwards found emerging from some radioactive substances). A few decades later negatively charged antiprotons were produced in the laboratory by directing fast-moving protons into metal targets. Antineutrons, displaying opposite magnetic properties to neutrons, were almost immediately added to the list, and nuclei of heavy antihydrogen (which consist of one antineutron and one antiproton) followed ten years or so later. Very recently, a few antihelium nuclei have been seen.
Were this the whole story it would not be such an inviting field for pseudoscientific mumbo jumbo. However, Dirac had also shown that antiparticles and their corresponding particles annihilate each other on contact, thereby releasing, via Einstein’s famous equation E=mc2, the energy equivalent of twice their mass. And if antiparticle annihilation produces enormous quantities of energy, why not antimatter bombs and antimatter power generation? Clearly intrigued by the former possibility, members of the U.S. Air Force began to appear at scientific conferences on antimatter in the early 1990s. So did the late science fiction writer Robert L. Forward,2 who had a background in research and a reputation for including only reasonably hard science in his novels. In 1996 a few very fast-moving antihydrogen atoms—antielectrons electrically bound to antiprotons—were produced at the CERN laboratory in Geneva (CERN 1996). Could this breakthrough lead to limitless power sources or unbelievably destructive bombs? Some sections of the world media evidently thought so and exploded with headlines like “Scientists create the fuel of science fiction,” and “Antiworld flashes into view.” Even Sir Joseph Rotblat, world-renowned physicist, Nobel Peace Prize winner, and one of the architects of the 1963 nuclear test ban treaty, warned of antimatter bombs thousands of times more devastating than the hydrogen bomb (Rotblat 1996).
NASA and Hollywood Fantasies
Things ratcheted up several more notches when in 2004 the San Francisco Chronicle published the article “Air Force Pursuing Antimatter Weapons” (Davidson 2004). This concerned a talk given by a certain Kenneth Edwards, director of the U.S. Air Force’s Revolutionary Munitions team, at a conference organized by NASA as part of its Institute for Advanced Concepts (NIAC) program. His report—sprinkled with Old Testament biblical references and maps showing missiles zigzagging over the Middle East—talked enthusiastically of revolutionary antimatter energy sources, rocket propulsion systems, and hand-held (yes indeed!) antimatter weapons.
We haven’t heard much of Edwards in recent years. We have, however, heard a lot about the 2009 movie version of Dan Brown’s Angels and Demons. In Brown’s book a lump of antimatter produced by the CERN Large Hadron Collider (LHC) is stolen and used to create mayhem in the Vatican as an act of revenge against the church for its persecution of Galileo. CERN had had an embarrassing mishap the previous year when the LHC had to be switched off for serious repairs shortly after it was inaugurated. Evidently thinking that all publicity is good publicity, it allowed the Hollywood promotional machine to take over its premises and use its good name as a platform to sell the movie, the lead actors, and their preferred brands of fashionable consumer goods. It also, apparently, chose not to point out on that occasion that the LHC has exactly the wrong characteristics for making antimatter anyway. Rather than having too little energy to function as efficient producers of antiparticles, its colliding protons have far too much. Even the 1996 antihydrogen atoms, made from antiprotons emerging from less violent collisions in a much lower energy accelerator, were moving so fast that they zipped through the laboratory in nanoseconds and annihilated on encountering the first obstacle they met.
Collecting Antiparticles for Research, Fuel, and Bombs
Here is the big problem: Whether you just want to study antiparticles closely or use them in fuel cells or bombs, you must hold large numbers of them still rather than have them self-destruct in nanoseconds. For this you need some kind of bottle. An antiparticle striking the walls of any ordinary bottle will, of course, instantly annihilate. It is not too difficult, however, to construct “bottles” with “walls” made of electric and magnetic fields, which will send any approaching electrically charged antiparticle backward if it is moving slowly enough. Since antiparticles will also annihilate if they hit any gas molecules remaining in the bottle, its air must be pumped out to a level such that few will be lost in this way in the course of a given experiment. Even at one billionth of the pressure of the atmosphere, each cubic centimeter of air contains many billions of molecules, so this is not easy.
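The "many billions of molecules" figure follows directly from the ideal gas law; here is a quick Python check (room temperature is assumed).

k_B = 1.38e-23        # Boltzmann constant, J/K
T = 300.0             # room temperature, K (assumption)
P = 1e-9 * 101325.0   # one billionth of atmospheric pressure, Pa

n_per_m3 = P / (k_B * T)
print(n_per_m3 / 1e6, "molecules per cubic centimetre")   # ~2e10, i.e. tens of billions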
Particles and antiparticles of either charge can nevertheless now be bottled without much trouble. In late 2010, again at CERN and again without recourse to the LHC, simultaneously bottled antiprotons and antielectrons were once more induced to bind together into a small number of antihydrogen atoms (Hangst 2011, Yamazaki 2011). This time they were moving very slowly—only a few hundred meters per second. At such low speeds, a configuration of electric and magnetic fields could be found that was able to bottle even these electrically neutral entities, albeit for only a fraction of a second. This has triggered yet another wave of pseudoscientific speculation about antimatter fuel and weaponry.
Running the Numbers
How are we to deal with these fevered imaginings? In his book Superstition, Robert Park (Park 2008) demolished Gerard O’Neill’s 1974 fantasy of solving the world’s overpopulation problem by accommodating the surplus in space colonies. This he did by what he calls “running the numbers.” Let’s try to highlight the absurdity of these antimatter fantasies by doing the same.
First we can discard the idea of collecting antiatoms (antihydrogen, for example) for such purposes instead of their component antiparticles. Getting the latter to bind together presents enormous technical headaches, which is why it has taken so many years to synthesize a mere handful of antihydrogen atoms. Moreover, there is no concomitant gain in the resulting annihilation energy yield, since it is the unbound components that annihilate anyway. We can also reject antielectrons and antineutrons as fuel or bomb material since the former have only about 1/2000 the mass of antiprotons (and therefore produce that much less energy per annihilation)3 and the latter because after about fifteen minutes, they decay into antiprotons anyway. We are thus left with antiprotons as our fuel or high explosive.
Suppose then that we had bottled every antiproton ever produced at CERN since 1956, the year they were first observed. How much energy would be released if we opened the bottle and allowed them all to annihilate? A rough guess at CERN’s aggregated antiproton yield since 1956 is about one hundred trillion (10¹⁴). Current technology limits the largest number that can be bottled as described above to about 10 million (10⁷). Even with the best vacuum presently achievable, these will only survive against annihilation by air molecules for a few weeks. Let us nevertheless put our faith in technological advances, dismiss such objections as the product of insufficiently imaginative minds, and do our energy calculation as if we could indeed have bottled all 10¹⁴ antiprotons produced at CERN over the fifty-five years since 1956.
The energy equivalent of the proton and antiproton masses amounts to a few ten-billionths (3×10⁻¹⁰) of a joule.4 One joule will power a one-watt flashlamp bulb for one second. Our 10¹⁴ bottled antiprotons would therefore produce 30,000 joules, just about enough energy to light a sixty-watt bulb for eight or nine minutes. Trying to solve the world’s energy problems by antimatter annihilation evidently brings an entirely new dimension to the idea of doing things the hard way.
Now let’s calculate how long it would take to accumulate enough antiprotons to get the explosive power of a large hydrogen bomb, say ten megatons, with an energy equivalent of around forty thousand trillion (4.18×10¹⁶) joules. Presently we can bottle about two million antiprotons per minute, equivalent to six ten-thousandths (6×10⁻⁴) of a joule. The accumulation time needed for our 10 Mt bomb is therefore seventy million trillion (7×10¹⁹) minutes.
This is roughly 10,000 times the age of the universe—rather a tall order, you might think. Technophiles will nevertheless again argue that future improvements should be taken into account. Let’s be generous and allow them a factor of 20 billion or so to cover this, bringing the accumulation time down to a mere 7,000 years. So to have made such a bomb available now, with these quite impossible improvements, we would “only” have had to start accumulating around the dawn of recorded history. No time out for equipment maintenance or breakdowns; no holiday, meals, or comfort breaks for the operators of course; just continuous accumulation, day and night, week after week, month after month, year in, year out, century after century.
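For readers who want to reproduce the arithmetic, here is a short Python sketch using the same rounded constants as the text.

m_p = 1.67e-27                        # proton/antiproton mass, kg
c = 3e8                               # speed of light, m/s
E_per_annihilation = 2 * m_p * c**2   # ~3e-10 J per antiproton annihilated

# Every antiproton ever made at CERN (rough guess of 1e14), fully annihilated:
E_total = 1e14 * E_per_annihilation
print(E_total, "J, about", E_total / 60 / 60, "minutes of a 60 W bulb")   # ~8-9 minutes

# Time to bottle enough antiprotons for a ten-megaton device at today's rate:
E_bomb = 4.18e16                      # J
E_per_minute = 2e6 * E_per_annihilation
minutes_needed = E_bomb / E_per_minute
age_of_universe_minutes = 13.8e9 * 365.25 * 24 * 60
print(minutes_needed / age_of_universe_minutes, "times the age of the universe")
print(minutes_needed / 2e10 / (60 * 24 * 365.25), "years even with a 20-billion-fold speed-up")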
Still More Problems
There is one overriding consideration I haven’t yet mentioned: any antiparticles we make in the laboratory, we create out of energy itself (E=mc2 is here working backward, so to speak). So when they annihilate we only get back energy we originally used up. Not all of it by any means. Nature is not so kind as to give us back even the energy we expended, having decreed that antiparticle creation is a hopelessly inefficient way of storing energy.
What this suggests is that we would be better off going and getting a few bucketfuls of the stuff from one of those antimatter stars. Here I refer you once again to Robert Park’s book (Park 2008) in which he shows that travel to even a nearby star within a human lifetime would consume many thousands of times the entire annual energy production of Earth. And as if this wasn’t enough, astrophysicists, far and wide in the cosmos though they have looked, have never seen even a hint of any such stars.
Back to Real Science
Why this should be so is one of the great mysteries of modern physics. It is as if nature provided herself with two Lego kits for assembling different worlds but then left one of them in the box. To the best of our knowledge the instruction manuals—the laws of nature—are identical for the two kits. Have we perhaps misunderstood these laws?
When all else fails, read the manual is worthwhile advice for anyone faced with computer hardware behaving in similarly mysterious ways. Likewise, a guiding principle of real science is that our understanding of even well-established natural laws should always remain open to question. Careful re-reading of what might be called the fine print of these laws may yet reveal some minute asymmetry between matter and antimatter that has so far escaped our attention but that on a cosmic scale results in the world we see rather than the one we expected to see. Such is indeed the aim of current space-based (AMS 2011) and laboratory (Hangst 2011; Yamazaki 2011) experiments.5 But that, of course, is another matter.
1. The media normally make no distinction between antiparticles—the fundamental building blocks of antimatter—and antimatter itself (antiatoms, antimolecules, antistars etc.). In this article, I use antimatter in the latter, more correct sense.
2. Bob Forward was both likeable and knowledgeable, and I got on well with him on the few occasions we met. His pet project was neither power generation nor weaponry but an antimatter space propulsion system, in which annihilating antielectrons (which are usually and illogically known as positrons) heat a working rocket propellant instead of being used as a primary fuel.
3. NIAC, buried by NASA in 2007, was resurrected in 2011. Apparently following up Forward’s antielectron idea (www.nasa.gov/exploration/home/antimatter_spaceship.html), it nevertheless estimated in 2006 that “only” ten trillion trillion antielectrons (ten milligrams) would be needed for a manned trip to Mars along these lines. This is outside the scope of the present article, but their subsequent silence implies that even this relatively modest scheme got nowhere.
4. 2mc², with m = 1.67×10⁻²⁷ kg and c = 3×10⁸ m/s.
5. In antimatter research things are currently moving somewhat faster than a speeding bullet. Since March 2011 when I began to write this article, the Alpha Magnetic Spectrometer (AMS above) has been launched to the International Space Station, antihydrogen atoms have been bottled on the order of fifteen minutes, the antiproton has been “weighed” with precision equivalent to weighing the Eiffel tower on a machine sensitive enough to detect a sparrow landing on it, and a kind of Van Allen belt of trapped antiprotons has been detected near Earth. Not surprisingly, some reports of these items of science news have indulged in the usual hand-waving fantasies about weapons, power, and space propulsion. Discussion of these developments must, however, await a future article.
AMS 2011. http://ams.cern.ch/.
CERN. 1996. Atoms of antimatter. CERN Courier 36 (2): 1–3.
Davidson, K. 2004. Air force pursuing antimatter weapons, Program was touted publicly, then became official gag order. San Francisco Chronicle (Oct 4).
Dirac, P.A.M. 1933. Nobel Prize lecture. Available at http://nobelprize.org/nobel_prizes/physics/laureates/1933/dirac-lecture.html.
Hangst, J. 2011. ALPHA collaboration gets antihydrogen in the trap. CERN Courier 51(2): 13–15. Available at http://cerncourier.com/cws/article/cern/45129.
Park, Robert. 2008. Superstition: Belief in the Age of Science, Princeton University Press.
Rotblat, Sir J. 1996. Private view. Financial Times (Jan 13/14).
Schuster, A. 1898. Potential matter—A holiday dream. Nature 58: 367.
Yamazaki, Y. 2011. At the cusp in ASACUSA. CERN Courier 51(2): 17–19. Available at http://cerncourier.com/cws/article/cern/45130. | <urn:uuid:8d9509d5-97c8-4f33-94e7-66d97f6101c7> | 3.34375 | 3,643 | Knowledge Article | Science & Tech. | 42.148036 |
Changing locale-specific information by loading the appropriate DLL
In Symbian OS locale-specific information is built as a DLL. The user can change the locale information related to a specific country by loading the appropriate DLL.
The convention followed in naming the DLL is as follows:
elocl.language_index - The list of languages supported by Symbian OS is enumerated in TLanguage. Each value in the TLanguage enumeration uniquely identifies a language. TLanguage enumeration can be found in the e32Const.h header file.
ELangEnglish and DLL name is elocl.01.
ELangFrench and DLL name is elocl.02.
ELangGerman and DLL name is elocl.03.
ELangFinnish and DLL name is elocl.09.
Symbian OS provides the TLocale class to set the system-wide locale settings and to retrieve system-wide locale setting. The TLocale class provides methods for setting the following information: calendar settings, country code, currency format, date and time formatting, numeric values, time zone information, and units of distance.
In Symbian OS most applications do not need to change the locale settings; they just use the system setting instead.
The user can load the appropriate locale-specific DLL to fetch the locale data. The function given below is a way to fetch the locale-specific information.
TInt r = loader.Connect();
if(KErrNone == r)
//Load the language variant DLL
TInt size = KNumLocaleExports * sizeof(TLibraryFunction);
TPtr8 functionListBuf((TUint8*) data, size, size);
r = loader.SendReceive(ELoadLocale, TIpcArgs(0, KDllName, &functionListBuf ) );
if(KErrNone == r)
retVal = (TText*) aLocale.Ptr();
Setting the locale by writing a program in POSIX is easier compared to the locale settings in Symbian OS. In POSIX the user does not need to be aware of the locale information, whereas in Symbian OS the user should be aware of it.
Example: The code below sets the monetary information of Finland using POSIX:
//before fetching the monetray information it is assumed that locale is set to finnish
struct lconv* localeinfo = localeconv();
The lconv structure has the complete monetary information.
The code below sets the monetary information of Finland using Symbian C++:
//similarly other monetary information should be set individually. | <urn:uuid:5c0b50eb-f842-4446-94bc-edfb246aaf9e> | 3 | 553 | Documentation | Software Dev. | 36.364379 |
Lab-3: Properties of the Discrete Fourier Transform
The goal of this lab is to look at different properties of the Discrete
3.1: Spectral leakage
- Create a cosine of 0.1 second at a sampling rate of 44100 Hz, with
a frequency multiple of 44100/1024, for example
- x1 = cos(2*pi*f*t) where t = [0:1/44100:0.1-1/44100]
- Display 1024 samples of the cosine with the horizontal axis expressed in seconds.
- Compute an FFT of size N = 1024 of the signal x1 and display its magnitude spectrum (only the
positive part). Use the "*" mode for plot, plot(abs(y),"*"). The horizontal axis should show the
frequency ω in radians per second. This means that the frequency range goes from 0 to π.
- From the magnitude spectrum, read out the frequency of the cosine and compare it
with the original frequency you used when generating it. To calculate the frequency in
Hz use the formula f = (fs ω)/(2 π)
- Compute two other cosine functions x2 and x3 with frequencies at 10.25 and 10.50 times
44100/1024 and display 1024 samples of the cosines.
- Calculate the FFT of the two cosines x2 and x3, display the magnitude spectrum and
try to identify their frequencies. What has changed ? Try to explain it ?
3.2: DFT phases
The phases spectrum gives information about the exact position in time
of the different spectral components. In order to better visualize the
phase, the sound has to be centered at time zero, called zero-phase
windowing, otherwise it has a time-offset and that
means that there is a complex exponential factor in the phase. So,
before computing the FFT of size N of an array x of size x-length (which should be an odd value and smaller than N) you have to shift the array samples to time zero by:
- fftbuffer = zeros(N, 1);
- fftbuffer(1:(xlength+1)/2) = x((xlength+1)/2: xlength);
- fftbuffer(N-(xlength-1)/2:N) = x(1: (xlength-1)/2);
In Octave, the phases are computed using arg() (help arg)
and in Matlab using angle() (help angle). In order to remove the effect of the modulus 2π
of the phase spectrum use the function unwrap(). Be aware that the only
relevant phase values are the ones for which there is some energy at
the given spectral components you are looking at.
- Compute the fft of size 1024 of an odd number of samples (ex: 801) of sound x1, using zero-phase windowing
and unwrapping. Use plot(unwrap(arg(fft(fftbuffer, 1024)))).
- Compute the of the signal x1 with a different phase offset (ex: start the FFT at sample
5 of the vector x1 instead of sample 0). Display the phase of this FFT.
- Describe the phase relation between the negative and the positive frequency bins of the analyzed sinusoid.
- Compute and display the amplitude and the phases of signal x2 using subplot(2,1,1). The first plot
should show the amplitude and the second one the phase.
3.3: Time - Frequency duality
An impulse in the time domain corresponds to a constant offset in the frequency
domain (and the other way around).
A rectangular function in the time domain corresponds to a sinc (sin(x)/x) function in the frequency
- Create a vector z of 1024 zeros and set the first value
to 1. ( z = zeros(1,1024); z(1)=1 ). Display the magnitude, the real values and the complex values
of the FFT of the vector (use the functions real() and imag())
The convolution is defined by
- Set all values from index 1 to 30 of vector z to one and display the function and
its spectral content (the magnitude of the FFT).
What can you say about the resulting function ?
Set the values from index 1 to 60 to 1 and display its spectral content. What changed ?
One of the properties of the DFT is that multiplication in the spectral domain corresponds to
circular convolution in the time domain and the other way round. The convolution of a function
with an impulse function is equal to the function itself.
The inverse discrete Fourier transform (or IFFT) converts a signal from the spectral domain to
the time domain.
Multiply the cosine function from exercise 1 with the rectangular function you have just created.
Calculate the FFT from the result and display its magnitude. Explain the result.
a signal in the spectral domain by puting a value in the appropiate
spectral bin and convert it to the time domain using ifft(). The resulting
time-domain signal should be a cosine with frequency of 646 Hz. Choose
fft size that is a power of 2. Sampling rate is 44100 Hz. Display both,
the original spectrum
and the time domain signal. | <urn:uuid:d6e52654-cbee-4620-845c-2e769fcb71ba> | 3.796875 | 1,151 | Tutorial | Science & Tech. | 68.874118 |
The Center of Mass (center of gravity) of a solid is similar to the Centroid of Solid. However, calculating the centroid involves only the geometrical shape of the solid.
The center of gravity will equal to the centroid if the body is homogenous i.e. constant density.
Integration formulas for calculating the Center of Mass are: | <urn:uuid:23055cfe-34c5-4f72-9620-3e9e70039c1e> | 3.40625 | 74 | Knowledge Article | Science & Tech. | 42.884857 |
Sunlight and Clouds
A cloud is a visible mass of water droplets or frozen ice crystals suspended in the Earth's atmosphere above the surface of the Earth or other planetary body. On a cloudy day the surface under the clouds appears darker and cooler. Atmospheric scientists trying to pin down how clouds curb the amount of sunlight available to warm the earth have found that it depends on the wavelength of sunlight being measured. This unexpected result will help researchers improve how they portray clouds in climate models. Additionally, the researchers found that sunlight scattered by clouds — the reason why beach goers can get sunburned on overcast days — is an important component of cloud contributions to the earth's energy balance. Capturing such contributions will increase the accuracy of climate models, the team from the Department of Energy's Pacific Northwest National Laboratory reported in Geophysical Research Letters earlier this month.
The color of a cloud tells much about what is going on inside the cloud. Dense deep tropospheric clouds exhibit a high reflectance throughout the visible spectrum. Tiny particles of water are densely packed and sunlight cannot penetrate far into the cloud before it is reflected out, giving a cloud its characteristic white color. Cloud droplets tend to scatter light efficiently, so that the intensity of the solar radiation decreases with depth. As a result, the cloud base can vary from a very light to a very dark gray depending on the cloud's thickness and how much light is being reflected or transmitted back to the observer.
The role of clouds in regulating weather and climate remains a leading source of uncertainty in projections of global warming. This uncertainty arises because of the delicate balance of processes related to clouds, spanning scales from millimeters to planetary. The complexity and diversity of clouds, as outlined above, adds to the problem. On the one hand, white colored cloud tops promote cooling of the Earth's surface by reflecting shortwave radiation from the Sun. However radiation that makes it to the ground is reflected back in long wavelengths that are easily absorbed by water in the clouds resulting in a net warming at surface level.
Overall, most clouds have a net cooling effect, but atmospheric scientists need to accurately measure when they cool and warm to produce better climate models that incorporate clouds faithfully.
Fair-weather clouds are big puffy white objects that bounce a lot of light around. They can make the sky around them look brighter when they're there, but they float about and reform constantly. Cloud droplets and aerosol particles in the sky — tiny bits of dirt and water in the air that cause haziness — scatter light in three dimensions, even into cloud shadows.
To determine the net cloud effect, researchers need two numbers. First they need to measure the total amount of sunlight in a cloudy sky. Then they need to determine how bright that sky would be without the clouds, imagining that same sky to be blue and cloudless, when aerosols are in charge of a sky's brightness. The difference between those numbers is the net cloud effect.
Researchers have traditionally estimated the net cloud effect by measuring a broad spectrum of sunlight that makes it to the earth's surface, from ultraviolet to infrared. Clouds are white because water droplets within them scatter light of all colors almost equally in the visible spectrum, the part of the electromagnetic spectrum that includes the colors of the rainbow.
On the other hand, aerosols — both within clouds and in the open sky — bounce different-colored light unequally. Broadband measurements that fail to distinguish color differences might be covering up important details.
Instead of taking one broadband measurement that covers everything from ultraviolet to infrared, the new research wanted to determine how individual wavelengths contribute to the net cloud effect. To do so, the team used an instrument that can measure brightness at four different wavelengths of color — violet, green, orange, red — and two of infrared.
This instrument, a spectral radiometer, allowed the team to calculate what the brightness would be if the day sported a cloudless, blue sky. The spectral measurements taken by the radiometer can be converted into the amount and properties of aerosols. Then aerosol properties can be used to calculate clear blue sky brightness.
Comparing measured values for cloudy sky to the calculated values for clear sky, the researchers found that, on average, puffy fair-weather clouds cool down the earth's surface by several percent on a summer day. Although clouds cool overall, two components that the researchers looked at — from direct and scattered sunlight — had opposite effects.
The direct component accounts for the shade provided by clouds and cools the earth. The second component accounts for the sunlight scattered between and under clouds, which makes the sky brighter, warming the earth.
In the Oklahoma summer, the scattered-light effect measured by the researchers could be quite large. For example, if a cloud passed over the instrument, the measured cloudy sky brightness exceeded calculated clear sky value by up to 30 percent. Kassianov, one of the authors, attributes the large difference to scattered sunlight being "caught on tape" by the radiometer.
The team also found that the effect changed depending on the measured visible-spectrum wavelength, and whether the light was direct or scattered.
With direct light, the cooling caused by clouds was weakest on the violet end of the spectrum and strongest at infrared. With scattered light, warming caused by clouds was also weakest at violet and the strongest at infrared. Overall, the least cooling and warming occurred at violet, and the most cooling and warming occurred at infrared.
These results suggest that aerosols — which not only cause haziness but contribute to cloud formation as well — are responsible for the wavelength differences, something researchers need to be aware of as they study clouds in the sky.
For further information: http://en.wikipedia.org/wiki/Cloud | <urn:uuid:d72546df-d122-46ef-b8bf-65d9ff570bd2> | 4.34375 | 1,167 | Truncated | Science & Tech. | 40.549445 |
The availability of online planning applications, wildlife and forest data, and map resources makes planning conservation projects simple. Sharing your experiences and learning from others can spread awareness of conservation issues and increase understanding of how GIS technology can aid conservation efforts.
- Society for Conservation GIS
- Young People Use GIS for Conservation
- The Green Belt Movement
this organization founded by Wangari Maathai to conserve the environment and provide Africans with the resources needed for sustainable lifestyles. Read how members of the Green Belt Movement use GIS technology to achieve their conservation goals. Learn more.
- Green Map System
This organization provides tools to map local natural and cultural resources for cities, towns, and neighborhoods across the globe with the vision to develop local sustainable networks. The Green Map System Project in Westchester County, New York, empowers its residents to make green choices.
- Service at Sea
This organization is dedicated to providing GIS training to communities across the globe. The members of Service at Sea sail around the world, stopping at certain checkpoints to host training sessions, workshops, and other GIS experiences as well as work with local educators to implement GIS curricula, to help create sustainable practices. Follow the journey at the Service at Sea blog.
- The Trust for Public Land
The organization has designed a way to visualize conservation scenarios with Greenprinting. The geoenabled Web site GreenPrint.org, which will translate local data into a "priorities map" to address prescribed conservation goals, assign weights for prioritization, and create the ability to add alternate scenarios to a map, will be available online in the fall of 2007. The Web site will provide data and tools to local communities for planning purposes.
- The Orangutan Foundation International
The Orangutan Foundation International and Dr. Biruté Mary Galdikas use GIS technology to monitor the habitats and behaviors of the Borneo orangutan to ensure its conservation. Their primary task is to interpret satellite imagery to identify threats to the orangutans and their habitats to better focus conservation efforts on areas in need. Read more. | <urn:uuid:ebaf87e6-0c89-482e-9597-2ef066f6b9d1> | 3.453125 | 429 | Content Listing | Science & Tech. | 23.831769 |
Get flash to fully experience Pearltrees
by Gilles Brassard Département IRO, Université de Montréal.
The idea that a vote cast by a person remains the same after he submitted it is taken very seriously in any democracy. Voting is the right of the citizen, and it's how we choose the people who make important decisions on our behalf. When the security of the ballot is compromised, so, too, is the individual's right to choose his leaders.
Privacy is paramount when communicating sensitive information, and humans have invented some unusual ways to encode their conversations. In World War II , for example, the Nazis created a bulky machine called the Enigma that resembles a typewriter on steroids. This machine created one of the most difficult ciphers (encoded messages) of the pre-computer age. Even after Polish resistance fighters made knockoffs of the machines -- complete with instructions on how the Enigma worked -- decoding messages was still a constant struggle for the Allies [source: Cambridge University ]. As the codes were deciphered, however, the secrets yielded by the Enigma machine were so helpful that many historians have credited the code breaking as a important factor in the Allies' victory in the war.
Both the secret-key and public-key methods of cryptology have unique flaws. Oddly enough, quantum physics can be used to either solve or expand these flaws. The problem with public-key cryptology is that it's based on the staggering size of the numbers created by the combination of the key and the algorithm used to encode the message. These numbers can reach unbelievable proportions. What's more, they can be made so that in order to understand each bit of output data, you have to also understand every other bit as well.
Photons are some pretty amazing particles. They have no mass, they're the smallest measure of light , and they can exist in all of their possible states at once, called the wave function . This means that whatever direction a photon can spin in -- say, diagonally, vertically and horizontally -- it does all at once. Light in this state is called unpolarized .
Quantum cryptography uses photons to transmit a key. Once the key is transmitted, coding and encoding using the normal secret-key method can take place. But how does a photon become a key?
The goal of quantum cryptology is to thwart attempts by a third party to eavesdrop on the encrypted message. In cryptology, an eavesdropper is referred to as Eve . In modern cryptology, Eve (E) can passively intercept Alice and Bob's encrypted message -- she can get her hands on the encrypted message and work to decode it without Bob and Alice knowing she has their message. Eve can accomplish this in different ways, such as wiretapping Bob or Alice's phone or reading their secure e-mails .
Despite all of the security it offers, quantum cryptology also has a few fundamental flaws. Chief among these flaws is the length under which the system will work: It’s too short. The original quantum cryptography system, built in 1989 by Charles Bennett, Gilles Brassard and John Smolin, sent a key over a distance of 36 centimeters [source: Scientific American ]. Since then, newer models have reached a distance of 150 kilometers (about 93 miles). But this is still far short of the distance requirements needed to transmit information with modern computer and telecommunication systems. | <urn:uuid:814c5a3f-10db-468e-922c-522b0a9b416c> | 3.3125 | 697 | Knowledge Article | Science & Tech. | 40.009008 |
of the Swamps
How early vertebrates established a footholdwith all 10
Paleontologists have found the
earliest known vertebrate adapted to life on land.
Paton, R.L., T.R. Smithson, and J.A. Clack. 1999. An amniote-like
skeleton from the early Carboniferous of Scotland. Nature 398(April
Ahlberg, P.E., and A.R. Milner. 1994. The origin and early diversification
of tetrapods. Nature 368(April 7):507.
Daeschler, E.B., and N. Shubin. 1998. Fish with fingers? Nature
Zimmer, C. 1998. At the Water's Edge. New York: Free Press.
Robert L. Carroll
859 Sherbrooke Street
West Montreal, PQ H3A 2K6
University of Cambridge
University Museum of Zoology
Cambridge CB2 3EJ
Michael I. Coates
University College London
Department of Biological Sciences
London WC1E 6BT
Edward B. Daeschler
Academy of Natural Sciences of Philadelphia
1900 Benjamin Franklin Parkway
Philadelphia, PA 19103
University of Pennsylvania
Department of Biology
Philadelphia, PA 19104
Tim R. Smithson
Cambridge Regional College
Kings Hedges Road
Cambridge CB4 2QT
News, Vol. 155, No. 21, May 22, 1999, p. 328.
Copyright © 1999, Science Service. | <urn:uuid:e405ab67-107e-44bb-ae6a-c05673262589> | 3.21875 | 323 | Content Listing | Science & Tech. | 67.641181 |
Close-up of Castor and
Pollux showing the contrast in colors.
In March, Gemini - "The Twins"
- is high in the sky in the early evening. On closer inspection,
the twins, Castor and Pollux, are not identical. Castor is much
hotter than Pollux and as a result the color of castor is quite blue,
whereas Pollux is a yellowish tint. These colors are captured
with a digital camera aimed at the constellation. No telescope is
Saturn is very prominent in
Gemini this season. The Physical Science class has measured the
apparent motion of Saturn throughout December, January, and February
and observed that Saturn has been moving westward throughout the
season. The December 25 location is indicated.
fainter star Zeta Geminorum is an interesting star in that it varies in
brightness with a cycle of 10.2 days. During the past 6 weeks I
have been photographing Zeta Geminorum with the digital camera without
a telescope. Here is a graph of the brightness of Zeta as a
function of the fraction of a cycle. The brightness is measured
relative to the brightness of a nearby star to compensate for varying
atmospheric effects. The digital camera also permits a
quantitative measure of the color of stars. A plot of the color
shows a similar pattern because the temperature of a pulsating star
such as Zeta Geminorum oscillates. I am presenting these results
Saturday, Mar. 27 at the spring meeting of the NC Section of the
American Association of Physics Teachers. | <urn:uuid:f0d15e8a-1fc9-490f-b8a7-c5d9776e5c81> | 2.75 | 336 | Personal Blog | Science & Tech. | 47.205866 |
This week's cartoon describes compressed air, a potentially effective way to store excess energy from renewable sources and provide power on demand. Currently, renewable sources of energy, like wind and solar, can produce a lot of power, but only when the sun is shining or the wind is blowing. Using the energy when we have it to pressurize air, we can release the air gradually against a piston or a turbine to provide power on demand. As Jer once told Alex, you can think of compressed air like an "air battery." The idea of using air energy storage to heat or cool has been inspiring inventors for years. Some have even been working to find a way to propel a car using only air. More work still needs to be done, but it's an exciting set of ideas.
Editor's note: This post is part of a series featuring Worldchanging ally Andy Lubershane's original graphics. While many of the issues covered in the comics have been discussed on Worldchanging in the past, we hope that you'll be able to use this new medium in a different way … whether it's in your classroom, on your office wall, or to help explain ideas to friends and family.
Andy Lubershane researches writes and cartoons about sustainability from his home in Ann Arbor, MI. He is currently pursuing a master's degree at the University of Michigan School of Natural Resources and Environment. Check out more of his illustrations at www.earthlycomics.com.
Compressed air for energy storage is good.
In a lot of respects, the decision to base energy transmission and use on electricity was a questionable one.
Compressed air makes more sense for short range storage and use.
Motors for pneumatic systems are much more compact than electric ones.
Elevators, in particular, would be much more efficient if they were pneumatic since the difference between down-bound traffic and up-bound traffic can be captured, stored, and used between morning and evening rush hours.
If we just had a practical way to store light without converting it to another form of energy, we could be done with the electrical and petroleum age.
Using compressed air as energy storage is quite an old idea and used at a few places. But it is not suitable as a general storage system. For one thing, the energy capacity per kg weight is very low, which means that it's not suitable for mobile applications (or would you like to carry around a 1kg cellphone? ). For another, devices that contain gasses or fluids with a pressure higher than (IIRC) 4bar are subject to special security regulations and are regarded as hazardous materials. Hence they are prohibited in most public transport systems and their use in residential areas is restricted. | <urn:uuid:44fcdbf1-869b-4cb5-9a48-5b66e1a3c8ee> | 3.5 | 560 | Personal Blog | Science & Tech. | 45.757308 |
The 70 meter deep space station antenna named 'Mars', a.k.a. DSS-14
Good morning everyone.
Today I thought that we’d take a quick look at the network of instruments that send and receive data to our spacecraft, satellites, and space probes, including the Mars Science Laboratory, also known as the Curiosity rover. In particular, we’ll look at the Goldstone Deep Space Communications Complex and the radio antennas (antennae are found on critters, FYI) located there.
Goldstone is one of three facilities of the NASA Deep Space Network that are spaced approximately 120o around the world to allow constant communications with spacecraft as the Earth rotates. There is the Goldstone complex, also called the Goldstone Observatory, in the Mojave Desert in California; the Spanish Complex near Madrid; and the Australian Complex near Canberra.
From the NASA/JPL website:
“Each complex consists of at least four deep space stations equipped with ultrasensitive receiving systems and large parabolic dish antennas. There are:
Continue reading BCMs #15 – The Goldstone Deep Space Communications Complex
A KH-9 spy satellite, which has nothing to do with the article.
Last year, or possibly the year before, some of the bean-counting type of spooks that work for the National Reconnaissance Office (NRO) were evidently sorting through the excess hardware at one of their storage facilities when they found that they had a couple of telescopes that they didn’t need anymore. They were brand new and it seemed a shame to just chuck them out, so they considered who might want to take them off of their hands. NASA and its continually shrinking budget came to mind, so they gave NASA a call.
Continue reading The NRO Takes Pity on NASA, Gives Them Spare “Hardware”
The Robert C. Byrd Green Bank Telescope. Image courtesy of NRAO/AUI
Good morning everyone.
Today we’re going to take a quick look at radio telescopes in general and the Green Bank Telescope in particular.
A radio telescope is a type of steerable radio antenna that is used in astronomy for studying celestial radio sources. The same types of antennas are also used for tracking and communicating with satellites and spaces probes. When used as telescopes for astronomy, they collect electromagnetic radiation in the radio frequency spectrum, from ~3kHz to 300Ghz, as opposed to optical telescopes which collect visible light. They are used for the study of many celestial objects that optical telescopes either cannot or have difficulty in observing, such as the objects in the center of our galaxy. Radio telescopes are typically very large parabolic, or dish antennas, and are used singly or in arrays. The diameter of the antenna dish is called the aperture of the telescope, and just like optical telescopes, a larger aperture means that a telescope can detect and study fainter objects. Radio telescopes that are thousands of miles apart can be linked together in a technique called Very Long Baseline Interferometry which gives the resolution of a single telescope that is thousands of miles in diameter. Radio telescopes are the giant constructs of astronomy, the largest being the Arecibo Radio Telescope in Puerto Rico at 1,000 feet in diameter.
Continue reading Big, Complicated machines #12 – The Green Bank Telescope
Mirror cell for the 200 inch mirror at the Babcock & Wilcox factory
Good morning, everyone.
Today I’m going talk about the Hale telescope again, but not about the parts of the telescope that I was originally going to talk about, because while researching the part I was going to talk about, I found this really interesting other part of the telescope to talk about, so I’m going to talk about that instead of what I was going to talk about. I just want to be clear about that.
The last time we talked about the Hale telescope, we talked about how the mirror was cast, ground, and polished. The next step for the mirror would be its journey to Mt. Palomar, and its installation into the telescope structure. What we’re going to look at this time is the supporting structure for the 200 inch mirror, a collection of mechanisms that are more important and more complex than you might imagine.
Continue reading Supporting the 200 Inch Mirror in the Hale Telescope
X-ray image of Sagittarius A*
Good morning, everyone.
Just about everyone here has had some experience with X-rays; you get your teeth X-rayed at the dentist, if you break a bone, you get X-rayed at the hospital. Critical welds in gas pipelines are quality checked using X-rays (they are supposed to be, anyway). The inner workings of machines can be examined using X-rays, and even fossils of dinosaurs can be examined while still encased in rock. It seems that X-rays can be used to see through just about anything.
However, consider the photograph at the top of this article. It is an image of Sagittarius A*, the supermassive black hole at the center of our galaxy, and it was taken with the X-ray telescope on the Chandra X-Ray Observatory.
An X-ray telescope? How do you make a telescope to work with X-rays? How do you make a mirror or a lens to focus photons that are energetic enough to penetrate pretty much any material that we know of?
Continue reading X-Ray Telescopes
The back side of a JWST mirror segment
Good morning, everyone.
Today I’m going to talk more about the mirrors on the James Webb Space Telescope (JWST) and what went into making them. As you remember from the article on the journey that the mirror segments take during manufacturing (you all did read that article, correct?), the segments make 14 stops during the process. I’m not going write up each stage of the fabrication, otherwise we’d be here all day and I have other things that need to be done, plus my foot hurts.
Continue reading Making the James Webb Space Telescope’s Mirrors
An artist's rendering of the JWST
Good morning again, everyone.
It still is the morning, correct? My coffee is still hot and [opens curtains] it’s quite bright out. Morning it is.
Today we’re going to look at the New Technology Space Telescope, now known as the James Webb Space Telescope. As I’m sure you all know, James Webb was the second administrator of NASA, and evidently did some good things as a bureaucrat during his reign there. It looks to be an ill-omened name, however, considering all of the bureaucratic bungling and huge cost overruns that the project has had, and is still having. The project was originally estimated to cost $1.6 billion, but as development progressed, that grew to $5 billion by the time that construction was confirmed and scheduled to start in 2008, with a launch date of 2011. Because of the cost overruns, NASA shuffled the management, but that caused a big delay in the planned launch date, which was now pushed back to 2018 at least, and maybe out to 2020. Maybe longer. And the cost keeps going up. In July 2011, the cost had risen to $6.5 billion, and in August it rose to 8.7 billion for the cost of the telescope and 5 years of operation.
Eight point seven billion dollars. Jeebus H. Fooking Kleist on a ladder.
Continue reading A Look at the James Webb Space Telescope
Inspecting the 200 inch mirror blank
Today I’m going to talk about the making of the 200 inch mirror for the Hale telescope. I was originally going to talk about the telescope as a whole, but that would require an article that’s far too long for the bulk of our readers to deal with. There is a lot of history associated with the Hale telescope, in its creation and construction, and in the myriad discoveries that have been made using it. On top of that, I’ve been fascinated by the instrument for decades, and I know enough about the thing to bore a zombie into a stupor. What I’ve decided to do, is to write two or three articles about the telescope, with lots of references so that those of you who want to know more about the Hale telescope can do so, while the casual reader won’t be driven off by the extent of my voluminous verbosity. That’s the plan, anyway. Continue reading Making the 200 Inch Mirror for the Hale Telescope
A somewhat challenging coordinate system.
Today I want to talk about how celestial objects are mapped and located in the sky by astronomers and telescope wielding enthusiasts. It is really not all that complicated, it is much like terrestrial coordinates except mapped out to the sky onto what is called the celestial sphere. I’ll try to keep this as short as I can, but there are several terms that will need explaining. Yes, I’m going to require that you read again. Ah, I can hear the groans already.
Continue reading A Celestial Coordinate System
The 200 inch primary mirror for the Hale telescope, ready to be coated with aluminum.
Today I’m going to talk to you about reflecting telescopes, or telescopes that use a mirror to focus incoming light rather than a series of lenses, such as the refracting telescopes that we’ve already looked at. The reason for this is that I’m going to be showing you some of the great telescopes of our time in later articles, and the bulk of them are the reflector type. If I give you the basic background on those instruments now, then I won’t have to natter on and on in each article, explaining things over and over again, and forcing you to read, heaven forbid. It will also cut down on the amount of writing I’ll have to do, and that way we’ll all be happy. I will anyway, and that’s the important thing.
Continue reading Some Background on Reflecting Telescopes | <urn:uuid:780607d9-df52-4e8f-b08b-59a7280b430d> | 3.296875 | 2,097 | Content Listing | Science & Tech. | 50.298751 |
In what some are calling a major breakthrough for renewable energy, MIT chemists Daniel Nocera and Matthew Kanan discover a new catalyst that speeds up the splitting of water into oxygen and hydrogen. The discovery may heighten interest in pollution-free fuel cell vehicles, which generate energy by combining hydrogen and oxygen chemically, emitting only water. The catalyst, made from cheap materials and working in ordinary water, may also make it easier to convert sunlight into chemical fuels, storing solar energy in much the way plants do.
Chemical Explorers is a series of short videos about interesting developments in modern chemistry. These are not “instructional” videos meant only for the classroom; they're more like TV science magazine pieces, but delivered over the Internet instead of on television.The project is a collaboration among Moreno/Lyons Productions, the Chemical Heritage Foundation and the Filmmakers Collaborative. It is made possible by a grant from the Camille and Henry Dreyfus Foundation. What do you think?Have a comment on or a question about one of our videos? A suggestion for a future video? Please let us know what you think. Linkswww.chemheritage.orghttp://filmmakerscollab.org/www.dreyfus.org/ | <urn:uuid:c65d7bf2-baa7-4216-87ba-54fce994b195> | 3.734375 | 260 | Truncated | Science & Tech. | 36.423699 |
While there has been some minor solar activity lately, the sun has been actively evolving for hundreds of millions if not billions of years. On September 1, 1859 a massive solar flare known as the Carrington Event was felt around the globe. Beginning August 28 and lasting until September 2, sun spots and flares were recorded by British Astronomer Richard Carrington and independently noted by astronomer Richard Hodson; the resulting effects were felt by hundreds of millions around the world. Carrington noted the reports and recordings of disturbances in the Earth’s magnetosphere recorded at Kew Observatory and connected the solar activity with electro-magnetic disturbance on earth.
Following the Carrington Event, aurorae were seen all over the world, far further south than typically recorded-as far south as the Carribean. In the American Southwest, goldminers awoke in the middle of the night thinking it was dawn. At the dawn of the Electric Age, telegraphs world-wide were interrupted or showed disturbances ranging from disrupted signals to poles and equipment catching fire and emmitting sparks and electrical discharge. Scientists have calculated that such storms occur appoximately every 500 years.
On September 3, 1859, the Baltimore American and Commercial Advertiser reported, “Those who happened to be out late on Thursday night had an opportunity of witnessing another magnificent display of the auroral lights. The phenomenon was very similar to the display on Sunday night, though at times the light was, if possible, more brilliant, and the prismatic hues more varied and gorgeous. The light appeared to cover the whole firmament, apparently like a luminous cloud, through which the stars of the larger magnitude indistinctly shone. The light was greater than that of the moon at its full, but had an indescribable softness and delicacy that seemed to envelop everything upon which it rested. Between 12 and 1 o’clock, when the display was at its full brilliancy, the quiet streets of the city resting under this strange light, presented a beautiful as well as singular appearance.”
Image of Richard Carrington’s sunspot observations in the public domain, used courtesy Wikipedia.
Image of a solar flare courtesy NASA.
Excerpt from The Baltimore American in the public domain, used courtesy Wikipedia. | <urn:uuid:f0e575ad-131d-4429-ba2d-fcb472472e6d> | 3.59375 | 466 | Personal Blog | Science & Tech. | 29.922609 |
5.8 Refractive Index and Snells Law
For physicists, it is good to know that waves react, thought it is more helpful to know when, and by how much. Refraction can be quantified by relating the angle of incidence (to the boundary between the two media in question) to the angle of refraction.. These, and the refractive indexes of the two media can be used to precisely calculate the change in direction of a wave.
The refractive index of a medium (for a certain wave) is the ratio of the speed of the wave in unrestrained conditions (the absolute fastest speed) to the speed of the wave in that medium. The refractive index has symbol n, and, being a ratio, has no unit. In some cases, a single refractive index is given for the two materials involved, but this is simply the combined ratios of their two ns. However, in this unit, we will discuss refractive indexes for individual materials.
The following relates the refractive indices, n1 and n2, of two media with two more familiar terms, the angle of incidence i, and the angle of refraction, r:
This is known as Snells Law. However, since n, the refractive index is a ratio of the fastest possible speed of the wave to the speed in the medium, we can simplify to get one more equation:
If u is the maximum speed of the wave (e.g speed of light in a vacuum), and c1 and c2 are the speeds of the wave in their respective media 1 and 2,
Where c is the speed of the wave in the medium.
A special case of Snells Law is applied for Total Internal Reflection (T.I.R.), Unit 8. | <urn:uuid:14c57f8a-227d-4040-b76c-3ba1fa17cacd> | 4.625 | 361 | Knowledge Article | Science & Tech. | 60.991504 |
The average rainforest is very moist, receiving over 80 inches (203 centimeters) annually. Some rainforests can even receive more rain than the average rainforest. For instance, the Colombian Choc' gets more than 360 inches (914 centimeters) of rain a year. Cloud forests usually do not get much rain, however they do get sufficient amounts of water through constant, misty clouds.
The average temperature in rainforests are fairly constant. This is because the rainforests are located near the tropics where the sun sets and rises at the same time all year long. The average temperature in the Amazon basin during the dry season may be 82.2° F (27.9° C), and change only a few degrees to 78.5° F (25.8° C) during the wet, cloudy season.
Some rainforests only have two seasons: a dry and a wet season. Tropical rainforest areas furthest from the equator can experience two wet and two dry seasons. Rainforests closer to the equator experience no seasons, where it mainly rains all year long.
Rainforests are very humid. During the day the humidity averages eighty percent, which keeps the rainforest warm. During the night the thickness of the humidity stays in the rainforest, keeping it warm, as well.
The higher a rainforest gets in altitude, the lower the temperatures get. There is a set rule that every 1,000 feet (300 meters) a rainforest goes up a mountain, the temperature drops about 3° F (1.7° C) cooler. In very rare instances, temperatures in mountainous rainforests can drop below freezing.
Half of the rain that the rainforest receives comes from the Atlantic Ocean. The other half is made in the rainforest itself. The heat evaporates moisture out of many plants leaves. As this moisture rises from the forest and cools, it forms rain clouds. Through this process, the rainforest is responsible for most of the moisture it receives. | <urn:uuid:7c3a38a3-7a43-48c8-989b-7e43db3923dd> | 4.03125 | 413 | Knowledge Article | Science & Tech. | 57.97756 |
Part 2—Open and Explore Ozone Images
Step 1 – Open Images in ImageJ
- Launch ImageJ by double-clicking its icon on your desktop (Mac or PC) or by clicking the icon in the dock (Mac) or the Start menu (PC).
- Start ImageJ, choose File > Import > Image Sequence.... Browse to the location where you saved the TOMS images, then select the first image. Go with the default values in the Image Sequence dialog.
- Open the ImageJ folder and double-click ImageJ's application icon (looks like a microscope) to start the image processing software.
- From ImageJ's menu bar, choose File > Import > Image Sequence... then navigate to the folder where you stored the downloaded TOMS images. Select the first image (es961001), then click Open.
- Go with the default the values in the Sequence Options (don't change any of the settings) and click OK.
- Once the images are open in a stack, you should have the tools of ImageJ available.
- The Import Image Sequence command puts all the images into a single window named Stack. You can flip between the images by dragging the slider button along the bottom of the stack window, or by pressing the comma (,) or period (.) keys on your keyboard.
- Save the stack as a TIFF file.
- Choose File > Save As > TIFF...
- Select a place on your computer to store the stack (for instance, in your TOMS images folder).
- Give the stack a new name or use the default name.
Step 2 – Explore the Images
The images you've opened show the amount of ozone overhead, measured in Dobson Units (DU). The measurements can be thought of as showing the "thickness" of the ozone layer: if all of the ozone molecules overhead could be brought down to Earth's surface, the "layer" of ozone would only be about 3 millimeters thickabout the same height as two stacked pennies. This amount of ozone is assigned a value of 300 DU. The ozone "hole" is defined as the area where the total amount of ozone is less than 220 DU. The color scale in these images has a boundary at 225 DU, so you will use that value as the threshold value for the ozone hole.
- Look at the color scale below the first image to figure out where ozone is "thickest" and "thinnest." Can you identify the area that would be considered as part of the ozone hole?
- Use the , and . keys to flip through the stack of images and examine them. Watch how the ozone levels and the size of the low-ozone area change over time.
- To zoom in on the image, select the magnifying glass on the tool bar, then click any location on the image.
- To zoom out, double-click the magnifying glass on the tool bar.
- To scroll around the image while it is enlarged, select the scrolling tool (the hand) on the tool bar, then click and drag on the image.
- To read the x,y location or the value of the pixel under your cursor, look just below the tools in the status bar.
- To animate the stack of images, choose Image > Stacks > Tools > Start Animation. Choose Image > Stacks > Tools> Animation Options... to control the animation speed.
Images centered on the South Pole allow us to see the shape and size of the ozone hole in comparison with the rest of the globe. The surface area represented by each pixel in the images is not equal, but as all the yearly images use the same viewpoint, we can measure changes the hole by counting the number of pixels that are part of the hole each year. You won't have to count the pixels by handImageJ gives you a convenient way to highlight and measure the number of pixels that represent the hole each year. | <urn:uuid:bd065c2e-5ec6-4021-9a88-6ea07d4007da> | 3.078125 | 813 | Tutorial | Science & Tech. | 59.674043 |
Pseudo-Dipole Signal Removal from WMAP Data
It is discovered in our previous work that different observational systematics, e.g., errors of antenna pointing directions, asynchronous between the attitude and science data, can generate pseudo-dipole signal in full-sky maps of the cosmic microwave background (CMB) anisotropy published by The Wilkinson Microwave Anisotropy Probe (WMAP) team. Now the antenna sidelobe response to the Doppler signal is found to be able to produce similar effect as well. In this work, independent to the sources, we uniformly model the pseudo-dipole signal and remove it from published WMAP7 CMB maps by model fitting. The result demonstrates that most of the released WMAP CMB quadrupole is artificial. | <urn:uuid:3fd51e24-0fdd-4f51-80c3-7204879b0674> | 2.6875 | 165 | Academic Writing | Science & Tech. | 26.273007 |
Nitrate (NO3) is a compound that is comprised of nitrogen and oxygen. Nitrogen comes from decomposing organic materials like manure, plants, and human wastes. Often the nitrogen (N) is derived from ammonia (NH3) or ammonium (NH4).
Plant species need nitrogen to form amino acids and proteins, which are essential for plant cell growth, but plants cannot use organic nitrogen directly. Microorganisms in the soil convert the nitrogen locked in crop residues, human and animal wastes, and compost to ammonium (NH4). Another specific group of microorganisms convert ammonium to nitrate (NO3), and since nitrate is water soluble, excess nitrate not used by plants can leach through the soil and into the groundwater.
Nitrate is also present wherever biotic biproducts are breaking down or decomposing like animal waste, and septic system absorption fields or mounds.
On Friday, October 12th, our class took a trip out to the Municipal Wetlands in Springfield, Ohio for various sampling procedures; one of which involved the concentration of nitrate in the ground water. We used varying techniques of water acquisition depending on the state of the water (standing, flowing, or ground). For standing and flowing water, we collected a predetermined volume of water in vials. Once the sample was collected, a chemical indicator was added to produce a color reaction corresponding to a concentration of nitrate in the water of that particular locale. Using the color wheel on the measuring device, the specific color-concentration reading was obtained. For ground water samples, an additional step was needed to be performed before the chemical indicator was to be added. First, the area of interest was cored, allowing ground water to flow into the new opening. The water was collected, solid particles were allowed to settle out until the above testing procedures involving the indicator and color comparison were done.
- Evan A. | <urn:uuid:94e6bf1d-d90d-4b12-a1f9-17961857e905> | 4.28125 | 391 | Personal Blog | Science & Tech. | 31.53621 |
The Gravity Theory Of Gigantism and Extinction
by John Stojanowski 1/20/07
This document is the third in a series entitled “The Rise And Fall Of The Dinosaurs.” Parts I and II describe a new theory that explains the gigantism of dinosaurs during the Mesozoic Era and how it related to their eventual extinction. The theory posits that a gradual reduction in the Earth’s gravitational field occurred as the terrestrial continents coalesced to form the super-continent of Pangea over 200 million years ago. The subsequent gradual breakup and drifting apart of those continental land masses was accompanied by an increasing gravitational field. This increase in gravity led to the extinction of many land and sea animals, especially the dinosaurs. The dinosaur/mammal equilibrium was disrupted, allowing the mammals to eventually displace the dinosaurs.
This document supplements the prior two and introduces an additional factor to explain how the gravitational field strength could have been altered during the period in question.
II. OTHER GRAVITY THEORIES
Others have also come to the conclusion, based on the megafauna of the Mesozoic, that gravity must have been different during that time. Some have suggested that a large celestial body could have been in proximity to the Earth during that period and thereby acted to counter the gravitational field of the Earth. Some have suggested that the gravitational constant (the “G” in Newton’s Universal Law of Gravitation) had changed. They suggest that it instantaneously decreased prior to the advent of the dinosaurs and then increased around 65mya.
The Expanding Earth Theory, initially proposed by Samuel Warren Carey in 1956,1976 was also used as a basis for a gravitational change. Carey hypothesized that because the continental outlines seemed to mesh, the Earth must have been much smaller, with an expansion of 33% since 200mya. Those who supported this theory, in its relation to reduced gravity, attempted to explain dinosaur gigantism on the basis that a smaller Earth would entail weaker gravity. Since Alfred Wegener’s theory of continental drift has been almost universally accepted and explains the profile matching of the continents, the Expanding Earth Theory has been discarded by most scientists.
The theory described in this and the other two prior documents mentioned earlier offers a different explanation. And, that explanation is directly related to the formation and breakup of the super-continent of Pangea due to plate tectonic activity.
Could the consolidation of the continents cause a substantial change in the surface gravity on Pangea? Since there doesn’t seem to be another adequate explanation for the anomalous size of the Mesozoic fauna , one has to explain how the gravitational change could occur. One explanation was given in the prior two documents. Further study has added another factor to explain the gravitational change. The following section addresses that subject.
III. THE SHIFT OF THE EARTH’S CORE
The shift of the Earth’s solid inner core or both the solid inner core and the liquid outer core must be considered. With the consolidation of the continental land masses on a relatively confined surface area of the Earth, a shifting of the core away from Pangea within the equatorial plane could account for a lowering of the surface gravity of Pangea.
The shift of the core would act to maintain the rotational center of mass (not the center of mass) of the Earth at its axis. Figure 1 below is a representation of this situation. A rough estimate of the change in the gravitational force can be made using Newton’s Universal Gravity Law:
W=weight of an object of mass “m” on Earth.
M=mass of Earth
r=distance from center of core of Earth to
location of “m” on Earth’s surface
G= a constant
The resulting percentage decrease in gravitational force based on core-shift is, as shown in Fig. 1:
(Current radius of Earth) squared
(Distance from shifted core to surface) squared
Using a core-shift of 1000km would result in a ratio of about .75 (i.e. the weight of an object with the core-shift would be 75% of that without it at the equator) ignoring other factors. Again, this is only a crude estimate because the assumption being made is that the Earth’s mass is all concentrated at a single point.
A shift of 1000km of the inner core would represent a distance less than half of the inner core’s diameter.
Inner core diameter = 2400km
Outer core diameter =7000km
A shift of the inner core (and possibly outer core) would reduce the net surface gravity of Pangea. A core-shift would also help to explain another apparent anomaly described in the following section.
IV. SAUROPOD HABITATS
When studying sauropods, one is struck by a certain pattern which is hard to explain. The relevant literature states that dinosaurs inhabited every continent. Yet when one studies sauropods, there seems to be a high occurrence of sauropods in areas which were in lower latitudes (i.e., closer to the equator) during the Mesozoic Era. The literature also states that during the Mesozoic, tropical conditions existed on all land masses.
South America, Africa, mid-western United States, mid to southern Europe, India and southern China are places where abundant sauropod fossils have been found. Places like Canada, northern Europe and Asia, Greenland, Siberia and Antarctica seem to have a dearth or even complete absence of sauropods although other dinosaur remains have been found in those locations.
Is it possible that insufficient searching of those areas is the reason? Is it possible that conditions in those other areas were not conducive to preserving their remains? If that is not the case and there is no other reasonable explanation for their absence in the higher latitudes, then this would support the core-shift explanation embodied in the theory presented in this document.
It can be seen from Figure 1 that the distance to the equator, with the core-shift, is greater than the distance to both of the high (north and south) latitudes on Pangea and therefore the lowest gravitational values would be on land masses closest to the equator.
Figure 2 is a representation of Pangea during the late Jurassic Period. The small circles drawn on that map represent locations where sauropod remains have been found including prosauropods. It is only a partial list of sauropods but it serves to illustrate the point being made. The sauropods represented by the small circles are listed on the page following Figure 2.
SAUROPODS (and prosauropods) USED IN FIGURE 2
| NAME | LOCATION OF FOSSIL REMAINS |
| --- | --- |
| Isanosaurus | SE Asia (Thailand) |
| Antetonitrus | South Africa |
| Euskelosaurus | South Africa (Lesotho, Zimbabwe) |
| Blikanasaurus | South Africa (Lesotho) |
| Plateosaurus | Europe (France, Germany, Switzerland) |
| Lessemsaurus | South America (Argentina) |
| Riojasaurus | South America (Argentina) |
| Camelotia | Europe (England) |
| Melanorosaurus | South Africa |
| Amargasaurus | South America (Argentina) |
| Nigersaurus | Africa (Niger) |
| Cedarosaurus | USA (Utah) |
| Sauroposeidon | USA (Oklahoma) |
| Malawisaurus | Africa (Malawi) |
| Agustinia | South America (Argentina) |
| Phuwiangosaurus | SE Asia (Thailand) |
| Chubutisaurus | South America (Argentina) |
| Saltasaurus | South America (Argentina) |
| Rapetosaurus | Africa (Madagascar) |
| Jingshanosaurus | China (Lufeng, Yunnan) |
| Yunnanosaurus | China |
| Massospondylus | South Africa (Lesotho, Namibia, Zimbabwe) |
| Kunmingosaurus | China (Yunnan) |
| Kotasaurus | India |
| Vulcanodon | South Africa (Zimbabwe) |
| Barapasaurus | India |
| Omeisaurus | China |
| Shunosaurus | China |
| Mamenchisaurus | China, Mongolia |
| Datousaurus | China |
| Cetiosaurus | Europe (England, Portugal), Morocco |
| Amygdalodon | South America (Argentina) |
| Patagosaurus | South America (Argentina) |
| Giraffatitan | Africa (Tanzania) |
| Lusotitan | Europe (Portugal) |
| Camarasaurus | USA (N. Mexico to Montana) |
| Diplodocus | USA (Colorado, Utah, Wyoming) |
| Supersaurus | USA (Colorado) |
| Seismosaurus | USA (N. Mexico) |
| Barosaurus | USA (S. Dakota, Utah), Tanzania |
| Apatosaurus | USA (Colorado, Wyoming, Utah, Oklahoma) |
| Brachytrachelopan | South America (Argentina) |
| Eobrontosaurus | USA (Wyoming) |
| Dicraeosaurus | Africa (Tanzania) |
| Turiasaurus riodevensis | Europe (Spain) |
| Erketu ellisoni | Mongolia (Gobi desert) |
| Suuwassea emilieae | USA (Montana) |
An additional, and probably more important, source of a reduction in the Mesozoic gravitational force on Pangea could be a shift of the Earth's solid inner core, and possibly the liquid outer core as well. The breakup of Pangea and the drifting apart of the continents would have been accompanied by a shift of the core(s) to a more Earth-centric position, thus increasing the terrestrial gravitational force.
The abundance of sauropod fossils in lower latitudes (i.e., closer to the equator) and their scarcity or absence at higher latitudes would be explainable by a shift in the Earth’s core(s) prior to the Mesozoic Era. | <urn:uuid:0935d491-b886-4f53-9b0a-aba7d623c9c0> | 3.375 | 2,225 | Academic Writing | Science & Tech. | 23.584203 |
The ability to manipulate arrays in Perl is just beautiful. You can dynamically add/remove elements at the end of the array using PUSH/POP, and dynamically add/remove elements at the front of the array using UNSHIFT/SHIFT. Arrays can be indexed positively from the beginning of the array, or with negative indices from the end of the array. Building FIFOs, LIFOs, queues, and stacks is easy.
split and join are two built-in functions that allow for the creation and collapsing of arrays.
Splice gives us the power to remove and add elements within the array.
sort and reverse allow for ordering in both directions.
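A quick sketch of these operations (the data and variable names are invented for illustration):

```perl
#!/usr/bin/perl
use strict;
use warnings;

my @queue = (1, 2, 3);
push @queue, 4;             # add to the end:    (1, 2, 3, 4)
my $last  = pop @queue;     # remove from end:   4
unshift @queue, 0;          # add to the front:  (0, 1, 2, 3)
my $first = shift @queue;   # remove from front: 0
print $queue[-1], "\n";     # negative index: last element (3)

splice @queue, 1, 1, 9, 9;  # replace one middle element with two 9s
my @words = split /,/, "a,b,c";             # string -> array
my $csv   = join ",", reverse sort @words;  # sort, reverse, collapse
print "$csv\n";             # prints "c,b,a"
```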
Beginning programmers can easily be turned off by Perl's lack of support for multi-dimensional arrays. But the better way to create complex structures including multi-dimensional arrays is to use references (and you cannot fully exploit the power of any language without the use of references).
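For example, a two-dimensional structure is just an array of references to anonymous arrays; a minimal sketch:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# An "array of arrays" built from anonymous array references
my @matrix = (
    [ 1, 2, 3 ],
    [ 4, 5, 6 ],
);
print $matrix[1][2], "\n";           # 6 (row 1, column 2)

push @{ $matrix[0] }, 99;            # rows grow independently
print scalar @{ $matrix[0] }, "\n";  # 4
```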
Smart, High Level Language with Great Built-In Functions
Perl is a high level language. It is a smart language. It simplifies its variable types to scalar, array, and hash.
We never have to worry about variable types. Perl knows when to interpret a value as an integer or a string. The right-hand side knows whether the left-hand operand is asking for an array or a single value. And using "wantarray," you can write your own subroutine modules with similar smartness. Arrays, hashes, and structures can grow and shrink dynamically; Perl takes care of it automagically.
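A context-aware subroutine built on wantarray might look like this (a minimal, invented example):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Returns a list in list context, a count in scalar context
sub smart_list {
    my @items = qw(red green blue);
    return wantarray ? @items : scalar @items;
}

my @colors = smart_list();    # ('red', 'green', 'blue')
my $count  = smart_list();    # 3
print "@colors / $count\n";
```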
It also supports a large number of great built-in functions. Take a look at the Schwartzian transform to see the power of sorting complex structures using the map function. And I love the ability to look at a variable and determine whether it is a $scalar, @array, or %hash by the prefix.
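The Schwartzian transform is a decorate-sort-undecorate pipeline; for example, sorting words by length while computing length() only once per element:

```perl
#!/usr/bin/perl
use strict;
use warnings;

my @words = qw(kiwi banana fig apple);

# decorate with map, sort on the precomputed key, then strip the key
my @by_length =
    map  { $_->[1] }
    sort { $a->[0] <=> $b->[0] }
    map  { [ length($_), $_ ] } @words;

print "@by_length\n";   # fig kiwi apple banana
```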
Full Programming Language
Perl is more than a scripting language. It is a general-purpose language. It supports threads and recursion. In one application, I have 16 threads running diagnostics in parallel on a chassis in the factory. Perl supports object-oriented programming, and the new Moose library automates and simplifies much of the work. And, finally, GUI support is made possible by using Perl/Tk.
CPAN, Community, and Books
Perl is an open-source language with a strong community. The Comprehensive Perl Archive Network (CPAN) is a repository with more than 25,000 modules. My use of Perl today would be very limited without the ability to use modules like DBI, Excel, Telnet, FTP, Ping, and Win32 serial communications, to name a few of the modules critical to my field of work.
Learning Perl is made easy by the many free tutorials on the web. Simply start with www.perl.org to begin the discovery and learning process. In addition, Learning Perl, Intermediate Perl, and Programming Perl are three essential books for mastering the language. In this past year, new editions of these books were released to reflect language changes. Other good books worth mentioning are Effective Perl Programming and Perl Best Practices.
ActiveState deserves special mention here. For a small fee, you can purchase a compiler that creates a pseudo-executable. In deploying applications to remote sites, ActiveState's PDK allows programmers to compile an application into an executable. Simply drop an executable at one of the numerous factory test stations and you are done. You only need to maintain Perl libraries at your development workstation. There is no need to worry about loading the latest libraries at every test station.
The aforementioned strengths noted for Perl may well be the strengths of other competing languages. I am not in a position to compare Perl to Python or other similar languages.
The power of Perl as outlined here relates to several domains of work related to data, tools, and automation. My experience is testimony to Perl creator, Larry Wall's statement that Perl is a language for "making simple things easy and difficult things possible." And for that reason, I hope that Mr. Binstock is wrong in his contention that Perl is in a tailspin; it's a wonderful language!
Sammy Esmail is a data, tools, and automation developer. | <urn:uuid:d5289367-2163-4a53-a32e-fa97b724f40c> | 2.71875 | 861 | Personal Blog | Software Dev. | 47.99609 |
You cracked a puzzle about the structure of strange crystals called approximants that had gone unsolved for eight years. Tell us more.
Sven: Approximants are related to quasicrystals, which are ordered atomic structures, but with symmetries that were believed to be impossible, for example fivefold symmetry. The approximants we studied have fivefold and 10-fold symmetry.
The result was Linus's name on a paper that was published in Philosophical Transactions of the Royal Society A this month. What did you make of that?
Linus: It's rare and strange and cool. I don't know how many other 10-year-old kids have done this.
How did this father-son collaboration begin?
Linus: Me and my father did some sudoku. He was, like, "Let's put this number here and this number here", but I said that he was wrong. Then he was, like, "You're better at puzzles than me", and he asked if I wanted ...
| <urn:uuid:68c4de39-7837-420c-aa95-97063763bd34> | 2.734375 | 242 | Truncated | Science & Tech. | 60.088069 |
In this tutorial, we will introduce the readonly attribute of the <textarea> tag in HTML5. This attribute specifies that the textarea is for reading only: the user cannot type into or edit the text in the textarea.
The attribute is used in the <textarea> tag as follows:
<textarea readonly="readonly">Text Here</textarea>
readonly is a boolean attribute: it can be written bare (readonly) or with its own name as the value (readonly="readonly").
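A minimal page demonstrating the attribute (the surrounding markup is illustrative only):

```html
<!DOCTYPE html>
<html>
<head><title>readonly demo</title></head>
<body>
  <form>
    <!-- The user can focus, select, and copy this text, but not change it -->
    <textarea readonly rows="4" cols="40">This text cannot be edited.</textarea>
    <!-- Unlike a disabled control, a readonly field is still submitted with the form -->
  </form>
</body>
</html>
```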
Difference between HTML 4.01 and HTML5
There is essentially no difference: the readonly attribute of <textarea> already existed in HTML 4.01 and behaves the same way in HTML5.
| <urn:uuid:11c9ba55-cd40-42d9-a9d9-78b7db60170e> | 2.6875 | 159 | Documentation | Software Dev. | 48.443362 |
Richard Gross, a geophysicist at NASA's Jet Propulsion Laboratory who studies the earth's rotation, explains.
Scientific American: Did the undersea earthquake affect the earth's rotation?
Models predict that the earthquake should have affected rotation of the earth by shortening the length of a day by about three microseconds, or three millionths of a second. This happens because during the earthquake one of the tectonic plates [the India plate] subducted down beneath another plate [the Burma plate]. The downward mass movement of the plate changed the earth's rotation just like a spinning ice skater bringing her arms closer to her body increases her rotation. When the earth spins faster, the days are shorter.
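In symbols, this is conservation of angular momentum: with no external torque, I times omega is constant, so a fractional change in the moment of inertia I shows up as an equal fractional change in the length of day T. (The ΔI/I figure below is back-computed from the quoted three microseconds, not taken from the model itself.)

$$I_1\omega_1=I_2\omega_2\;\Rightarrow\;\frac{\Delta T}{T}=\frac{\Delta I}{I}\approx\frac{-3\times10^{-6}\ \mathrm{s}}{86{,}400\ \mathrm{s}}\approx-3.5\times10^{-11}$$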
SA: Has this shift been measured?
This rotation change is a prediction from a model, and the data [collected by ground- and space-based position sensors] is being analyzed to see if the predicted change actually occurred. The data comes in every day, but it will take a few weeks for the most accurate data to be received and analyzed.
SA: Is this change permanent, or will it shift again?
The length of the day changes all the time in response to many different processes such as changes in the atmospheric winds or ocean currents. Changes in winds have by far the greatest effect on the length of the day: their effect is actually about 300 times larger than that predicted to be changed by this earthquake.
SA: Did the tilt of the earth's axis change as well?
The earth wobbles as it rotates because its mass is not balanced about its rotation axis, just like a tire on a car will wobble as it rotates if the tire is not perfectly balanced. The size of the planet's wobble is usually about 33 feet. As the India plate subducted beneath the Burma plate, the mass of Earth was rearranged, not only causing the speed of rotation to change, which causes the length of the day to change, but also causing the wobbling motion of the planet to change by about an inch. The wobble is also affected by other influences, such as changes in atmospheric pressure. | <urn:uuid:ba534f4e-8239-4fb1-8cd7-85b0f90f81bb> | 4 | 439 | Audio Transcript | Science & Tech. | 50.299861 |
El Niño and Snow
Will El Niño bring us more snowfall this year?
It's been a record-setting December in terms of snowfall. North America's weather is being affected by El Niño. But is El Niño solely responsible for bringing us all this snow? I spoke to Jared Klein at the National Weather Service to find out.
He explains, "El Niño is typically part of a whole phase called the ENSO phase. El Niño is the warm phase of the ENSO phase, and it's typically an unusual warming in the tropical Pacific near the Equator. A lot of times there's cooler air located in the eastern part of the tropical Pacific near Peru, and during El Niño phases there are warmer than normal waters."
El Niño phases can occur every two to seven years, but most commonly they occur every three to four years. It lasts for about twelve to eighteen months.
Klein says, "Currently, since July, we've been in an El Niño phase. Right now we're in a moderate phase and it's borderline strong; so the forecast for El Niño this winter is either borderline moderate or strong."
El Niño has global effects. Here in North America it changes the typical subtropical jetstream which brings moisture from the Pacific Ocean, especially during the winter months.
Klein adds, "Usually the subtropical jet is very active so a lot of times we have wetter than normal conditions in the southern part of the U.S. And a lot of times that suppresses the cold air from the north, so we could have above normal temperatures in the central and northern part of North America and below average temperatures and wetter weather in the southern part of the U.S."
But does an El Niño phase guarantee us more snow? Klein says not really, but he notes in a recent report that more snowfall was observed in D.C. during moderate El Niño phases since 1950.
"We do find some influence in the D.C. area in the wintertime. Unfortunately it's very variable so we don't always get wetter than normal seasons, or we don't get above normal temperatures. It varies, but we have seen some big snowfall seasons. We've also had some very dry, less snowy winters. There's a lot of variability because El Niño is not the only determining factor for our climate," he says.
The official NOAA winter outlook has our region with below normal temperatures, but equal chances of seeing a wet or dry winter. Winter has only just begun, so we'll have to wait and see what the rest of the season has in store.
One more thing...I would like to wish everyone a very Merry Christmas and a Happy New Year! | <urn:uuid:f61b07a5-240c-47e1-a38b-acc753cb1d4c> | 3.296875 | 556 | Personal Blog | Science & Tech. | 61.193062 |
The Facts About Radiation Damage
How does radiation hurt a living cell?
Can cells repair themselves?
Radiation Dose Limits for Astronauts
Radiation Dose Limits for the Rest of Us
How Much Radiation Are You Exposed To?
Space Weather Impacts on Astronauts
What are lethal levels of radiation?
How serious can solar protons events be for astronauts?
How many space travelers have there been?
Space Station construction
Poor assumptions to make in a rocket suit.
In space no one can hear ice cream.
The source of this material is Windows to the Universe, at http://windows2universe.org/ from the National Earth Science Teachers Association (NESTA). The Website was developed in part with the support of UCAR and NCAR, where it resided from 2000 - 2010. © 2010 National Earth Science Teachers Association. Windows to the Universe® is a registered trademark of NESTA. All Rights Reserved. Site policies and disclaimer. | <urn:uuid:dfd235a7-ff08-485b-a5a7-c0af1ee90a62> | 3.34375 | 198 | Content Listing | Science & Tech. | 49.195393 |
ASTR 1040 Spring, 2005
1.a. Estimate the hydrogen-burning lifetime of a star of 15 solar masses and a luminosity 10,000 times greater than the solar luminosity.
This is easiest to do by comparing with the Sun, using the relationship t = t_o(M/L), where t_o is the Sun's lifetime of 10^10 years, and M and L are the mass and luminosity of the star, in solar units. Then we get t = 10^10(15/10,000) = 1.5 X 10^7 yrs.
b. For a star having a mass of 0.2 M_o and a luminosity of 0.008 L_o,
Using the same method yields
t = 10^10(0.2/0.008) = 2.5 X 10^11 yrs.
c. If 10 percent of the star's original mass is now in the form of helium in the core, calculate the helium-burning lifetime of the 15-solar mass star.
The mass converted to energy is the difference between the sum of the masses of three helium nuclei (i.e. 3 X 4.002603 u = 12.007809 u) and the mass of the resulting carbon nucleus (12.000000 u), which is 0.007809 u. This represents a fractional loss of mass of 0.007809/12.007809 = 0.00065 of the original mass. Thus 0.00065 of one-tenth of the star's mass of 1.99 X 10^30 kg will be converted into energy during triple-alpha burning, or E = mc^2 = 0.1(15)(1.99 X 10^30)(0.00065)(3 X 10^8)^2 = 1.8 X 10^44 J.
The helium-burning lifetime is t = E/L = 1.8 X 10^44/[100(3.83 X 10^26)] = 4.6 X 10^15 sec = 1.5 X 10^8 yrs.
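A quick numerical check of parts (a) and (c) in Python (not part of the original solutions; small rounding differences from the figures above are expected):

```python
# Sanity check for problem 1, with the constants used in the solutions
M_sun_kg = 1.99e30      # solar mass
L_sun_W  = 3.83e26      # solar luminosity
c        = 3.0e8        # speed of light, m/s
yr       = 3.156e7      # seconds per year

# (a) hydrogen-burning lifetime, t = t_o * (M/L) in solar units
print(1e10 * 15 / 10_000)               # 1.5e7 yr

# (c) triple-alpha: 3 He-4 -> C-12 fractional mass deficit
f = (3 * 4.002603 - 12.0) / (3 * 4.002603)
E = 0.1 * 15 * M_sun_kg * f * c**2      # J, burning 10% of the star's mass
t = E / (100 * L_sun_W)                 # s, at L = 100 L_sun
print(f, E, t / yr)                     # ~6.5e-4, ~1.75e44 J, ~1.4e8 yr
```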
2. Suppose that when the Sun becomes a red giant it will increase its radius by a factor of 100 while its surface temperature drops to half of its current value.
a. What will be the Sun's luminosity at that point, compared to its current luminosity (i.e., expressed as a ratio) and in basic units (i.e., watts)?
This is easiest to do if we take the ratio of the Stefan-Boltzmann equations representing the Sun as a main sequence star (subscript m, below) and as a red giant (subscript g). Then we get Lg/Lm = (Rg/Rm)^2(Tg/Tm)^4 = (100)^2(1/2)^4 = 625; i.e., as a red giant the Sun will have 625 times its main sequence luminosity. This is equivalent to L = 2.39 X 10^29 W.
b. How far away could this red giant be detected with a telescope whose limiting magnitude is mv = +20, if the star's bolometric correction is B.C. = -2.0?
This requires several steps, because we need to know the star's visual absolute magnitude before we can use the distance modulus relation. First we find the bolometric absolute magnitude and then apply the bolometric correction to find Mv. The easiest way to get Mbol is to use the magnitude equation to compare bolometric absolute magnitudes and luminosities of the Sun and the red giant: Mbol(m) - Mbol(g) = 2.5log(Lg/Lm), where again g represents the red giant and m the main sequence star (which has the Sun's bolometric absolute magnitude of +4.75). Solving for Mbol(g), we find Mbol(g) = Mbol(m) - 2.5log((Lg/Lm) = 4.75 - 2.5log(625) = -2.24.
The visual absolute magnitude is Mv = Mbol - B.C. = -2.24 -(-2.0) = -0.24.
Now we can use the distance modulus relation to find the distance to this star if its visual apparent magnitude is mv = 20:
mv - Mv = 5log(d/10) yields d = 10^[(m - M)/5 + 1] = 1.1 X 10^5 parsecs.
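Because this distance-modulus step recurs in Problems 3 and 4, here is a small Python helper for checking the arithmetic (a convenience sketch, not part of the original solutions):

```python
def distance_pc(m, M):
    """Distance in parsecs from apparent (m) and absolute (M) magnitudes."""
    return 10 ** ((m - M) / 5 + 1)

print(distance_pc(20.0, -0.24))   # ~1.1e5 pc (red giant, problem 2b)
print(distance_pc(20.0, 12.14))   # ~373 pc   (white dwarf, problem 3b)
print(distance_pc(20.0,  9.58))   # ~1210 pc  (neutron star, problem 4f)
```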
c. What will the surface gravity be (expressed as a ratio with the Sun's current surface gravity)?
Surface gravity is proportional to M/R^2; in this case M is constant while R increases by a factor of 100, so the new surface gravity is g = (1/100)^2 = 10^-4 of its main sequence value, i.e., 10^4 times less (this low surface gravity helps red giants to lose mass through winds; the outer layers are only very weakly bound to the star, and escape easily).
d. What will the escape speed be from the surface of the Sun as a red giant? Compare with the escape speed now.
Escape speed is given by v_e = sqrt[2GM/R]; for M = the solar mass and R = 100 times the solar radius, we get v_e = sqrt[2(6.67 X 10^-11)(1.99 X 10^30)/((100)(6.96 X 10^8))] = 61,800 m/sec = 61.8 km/sec.
When the Sun becomes a red giant, its radius will increase by a factor of 100; since the escape speed is inversely proportional to the square root of the radius, the escape speed in the red giant phase is sqrt[1/(100)] = 0.1 times as fast as the main sequence escape speed (hence the main sequence escape speed is 10 times the red giant escape speed, or 618 km/sec).
3. Now suppose that after red giant mass-loss phase the Sun consists of a core (a white dwarf) having a mass equal to half the original main sequence mass, a radius equal to 0.01 of the original radius, and a surface temperature equal to twice the original temperature of 5780 K.
a. What are the luminosity and bolometric absolute magnitude of the white dwarf remnant?
Compare with the Sun, using the Stefan-Boltzmann equation:
Lwd/Lm = (Rwd/Rm)^2(Twd/Tm)^4 = (1/100)^2(2)^4 = 0.0016.
In basic units, this means that the white dwarf's luminosity is 0.0016(3.83 X 10^26) = 6.13 X 10^23 W. To find the bolometric absolute magnitude, use the same technique as in Problem 2.b., above:
Mbol(wd) = Mbol(m) - 2.5log(L_wd/Lm) = 4.75 - 2.5log(0.0016) = 11.74.
b. If the star's bolometric correction is B.C. = -0.4, how far away can this remnant be detected with a telescope whose limiting visual magnitude is +20? Compare your answer with the distance you found in 2.b., above.
The visual absolute magnitude is Mv = Mbol - B.C. = 11.74 - (-0.4) = 12.11. Using the distance modulus relation (again, as we did in Problem 2.b.), d = 10[(m - M)/5] + 1 = 10[(20.00 - 12.11)/5] + 1 = 378 parsecs. This is a much smaller distance than we found in 2.b., showing that white dwarfs cannot be detected as far away as red giants, not surprisingly.
c. Calculate the average density (converted to units of grams/cm3) and the escape speed for the white dwarf.
Density is given by rho = M/V (mass divided by volume). For a sphere, the volume is V = 4(pi)R^3/3, so the density is rho = 3M/4(pi)R^3. For 0.5 solar mass and 0.01 solar radius, this becomes rho = 3(0.5)(1.99 X 10^30)/4(pi)[(0.01)(6.96 X 10^8)]^3 = 7.05 X 10^8 kg/m^3 = 7.05 X 10^5 gm/cm^3.
The escape speed is
ve = sqrt[2(6.67 X 10^-11)(0.5)(1.99 X 10^30)/((0.01)(6.96 X 10^8))] = 4.37 X 10^6 m/sec = 4370 km/sec.
d. Calculate the wavelength of maximum emission for the white dwarf. What kind of telescope would be best for observing it?
To find (lambda)max you need to use Wien's law: (lambda)max = 0.0029/T. In this case T = 2 X 5780 = 11,560 K, so
(lambda)max = 0.0029/11560 = 2.51 X 10^-7 m = 2510 Å.
This requires an ultraviolet telescope.
4. A star of initial mass 30 solar masses loses mass at a rate of 5 X 10^-6 solar masses per year during its main sequence lifetime of 3 X 10^6 years. Then it blows up in a supernova explosion, leaving behind a remnant whose mass is 1.6 solar masses. The absolute visual magnitude of the supernova at its peak is Mv = -19.0.
a. What is the star's mass just before it blows up?
The mass is the initial mass of 30 solar masses minus the mass that is lost at a rate of 5 X 10^-6 solar masses per year over a time of 3 X 10^6 years. Thus the final mass is M = 30 - (5 X 10^-6)(3 X 10^6) = 30 - 15 = 15 solar masses.
b. If the mass blown off in the explosion travels outward with an average speed of 2000 km/sec, what is the kinetic energy of the explosion?
Kinetic energy is given by E_kin = mv^2/2. In this case the mass in motion is 13.4 solar masses (i.e. the final mass of the star minus the 1.6 solar masses that remain behind in the remnant), and the speed is 2000 km/sec (= 2 X 10^6 m/sec), so the kinetic energy is
E_kin = (13.4)(1.99 X 10^30)(2 X 10^6)^2/2 = 5.3 X 10^43 J.
c. If the bolometric correction at peak brightness is B.C. = -1.0, and the duration of the peak is 3 days, how much energy is radiated away during this time? Compare this with the kinetic energy you found in part a.
To find the total luminous energy radiated during the 3 days (= 2.6 X 10^5 sec), we need to know the luminosity. We have the visual absolute magnitude and the bolometric correction, so we can get the luminosity by first finding the bolometric absolute magnitude and then comparing with the Sun to get the luminosity in solar units. The bolometric absolute magnitude is
Mbol = Mv + B.C. = -19.0 + (-1.0) = -20.0.
Using the Sun as a comparison, we get
Mbol(sun) - Mbol(SN) = 2.5log(LSN/LSun), or
(LSN/LSun) = 10^{[4.75 - (-20)]/2.5} = 7.9 X 10^9,
which means that the luminosity of the supernova is (7.9 X 10^9)(3.83 X 10^26) = 3.0 X 10^36 W. Now the total energy emitted during the three days is
Elum = Lt = (3.0 X 10^36)(2.6 X 10^5) = 7.8 X 10^41 J.
This is only about 0.015 times the kinetic energy of the explosion.
d. Assume that the energy released in the form of neutrinos is 100 times greater than the sum of the kinetic plus luminous energy of the explosion. How does the total energy (including all three forms) released in the supernova explosion compare with the total energy the Sun can produce over its entire hydrogen-burning lifetime?
The total energy is
Etot = 100(E_kin + E_lum) = 100(5.3 X 10^43 + 7.8 X 10^41) = 5.4 X 10^45 J.
In its lifetime of t = 10^10 yrs = 3.2 X 10^17 sec, the Sun will produce a total energy of
E = tL = (3.2 X 10^17)(3.83 X 10^26) = 1.2 X 10^44 J.
So the supernova's total energy is about a factor of 50 greater, and is released instantaneously. (Note: You could have calculated the Sun's total lifetime energy by using E = mc^2, assuming that 0.007 of the innermost 10 percent of the solar mass is converted to energy; this is a bit more tedious, but produces the same result.)
e. If the remnant has a radius of 10 km, calculate its average density (expressed in units of grams/cm3). Compare your value with those for the white dwarf and for the Sun before it became a red giant.
The mass is 1.6 solar masses and the radius is 10 km = 1 X 10^4 m. Hence the density is
rho = 3M/4(pi)R^3 = 3(1.6)(1.99 X 10^30)/4(pi)(1 X 10^4)^3 = 7.6 X 10^17 kg/m^3 = 7.6 X 10^14 gm/cm^3.
This is comparable to the density of an atomic nucleus!
f. If the remnant's surface temperature is 10^6 K, what is its luminosity? If its bolometric correction is B.C. = -3.0, how far away can this remnant be detected by our telescope whose limiting visual apparent magnitude is +20?
We can get the luminosity from the Stefan-Boltzmann equation:
L = 4(pi)R^2(sigma)T^4 = 4(pi)(1 X 10^4)^2(5.67 X 10^-8)(1 X 10^6)^4 = 7.1 X 10^25 W.
In solar units, this is Lns/Lsun = (7.1 X 10^25)/(3.83 X 10^26) = 0.185. Now we can use the magnitude equation to find the bolometric absolute magnitude of the neutron star:
Mbol(ns) = Mbol(m) - 2.5log(Lns/Lsun) = 4.75 - 2.5log(0.185) = 6.58.
Given the bolometric correction B.C. = -3.0, the visual absolute magnitude is
Mv = Mbol - B.C. = 6.58 - (-3.0) = 9.58.
Finally, using the distance modulus relation, we find the distance
d = 10^[(m - M)/5 + 1] = 10^[(20 - 9.58)/5 + 1] = 1210 parsecs.
This is comparable to the distance to which we found white dwarfs to be detectable (in Problem 3.b.), but we don't detect as many neutron stars directly (exactly 1 so far!), because they are far more rare, and they don't stay this hot for long.
| <urn:uuid:bc127e1d-86ec-4554-a4a6-be378e97e71a> | 3.40625 | 3,455 | Tutorial | Science & Tech. | 104.917132 |
In computer science, code generation is the process by which a compiler's code generator converts some intermediate representation of source code into a form (e.g., machine code) that can be readily executed by a machine.
Sophisticated compilers typically perform multiple passes over various intermediate forms. This multi-stage process is used because many algorithms for code optimization are easier to apply one at a time, or because the input to one optimization relies on the processing performed by another optimization. This organization also facilitates the creation of a single compiler that can target multiple architectures, as only the last of the code generation stages (the backend) needs to change from target to target. (For more information on compiler design, see Compiler.)
The input to the code generator typically consists of a parse tree or an abstract syntax tree. The tree is converted into a linear sequence of instructions, usually in an intermediate language such as three address code. Further stages of compilation may or may not be referred to as "code generation", depending on whether they involve a significant change in the representation of the program. (For example, a peephole optimization pass would not likely be called "code generation", although a code generator might incorporate a peephole optimization pass.)
In addition to the basic conversion from an intermediate representation into a linear sequence of machine instructions, a typical code generator tries to optimize the generated code in some way. The generator may try to use faster instructions, use fewer instructions, exploit available registers, and avoid redundant computations.
Tasks which are typically part of a sophisticated compiler's "code generation" phase include:
- instruction selection: deciding which machine instructions to use;
- instruction scheduling: deciding in which order to emit those instructions, a speed optimization that matters especially on pipelined machines;
- register allocation: assigning variables to processor registers;
- generation of debug data, if required, so the code can be debugged.
Instruction selection is typically carried out by doing a recursive postorder traversal on the abstract syntax tree, matching particular tree configurations against templates; for example, the tree W := ADD(X, MUL(Y,Z)) might be transformed into a linear sequence of instructions by recursively generating the sequences for t1 := X and t2 := MUL(Y,Z), and then emitting the instruction ADD W, t1, t2.
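As a sketch of that postorder walk, here is a minimal Python version (the tuple-based tree and the three-address output format are invented for illustration):

```python
from itertools import count

_temp = count(1)

def select(node, out):
    """Postorder walk: emit three-address code, return the value's name."""
    if isinstance(node, str):       # leaf: a variable name
        return node
    op, left, right = node          # interior node: (op, lhs, rhs)
    l = select(left, out)           # linearize the operands first...
    r = select(right, out)
    t = f"t{next(_temp)}"           # ...then emit into a fresh temporary
    out.append(f"{t} := {op} {l}, {r}")
    return t

code = []
result = select(("ADD", "X", ("MUL", "Y", "Z")), code)
code.append(f"W := {result}")
print("\n".join(code))
# t1 := MUL Y, Z
# t2 := ADD X, t1
# W := t2
```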
In a compiler that uses an intermediate language, there may be two instruction selection stages — one to convert the parse tree into intermediate code, and a second phase much later to convert the intermediate code into instructions from the instruction set of the target machine. This second phase does not require a tree traversal; it can be done linearly, and typically involves a simple replacement of intermediate-language operations with their corresponding opcodes. However, if the compiler is actually a language translator (for example, one that converts Eiffel to C), then the second code-generation phase may involve building a tree from the linear intermediate code.
When code generation occurs at runtime, as in just-in-time compilation (JIT), it is important that the entire process be efficient with respect to space and time. For example, when regular expressions are interpreted and used to generate code at runtime, a non-deterministic finite state machine is often generated instead of a deterministic one, because usually the former can be created more quickly and occupies less memory space than the latter. Despite its generally generating less efficient code, JIT code generation can take advantage of profiling information that is available only at runtime.
The fundamental task of taking input in one language and producing output in a non-trivially different language can be understood in terms of the core transformational operations of formal language theory. Consequently, some techniques that were originally developed for use in compilers have come to be employed in other ways as well. For example, YACC (Yet Another Compiler Compiler) takes input in Backus-Naur form and converts it to a parser in C. Though it was originally created for automatic generation of a parser for a compiler, yacc is also often used to automate writing code that needs to be modified each time specifications are changed.
Many integrated development environments (IDEs) support some form of automatic source code generation, often using algorithms in common with compiler code generators, although commonly less complicated. (See also: Program transformation, Data transformation.)
In general, a syntax and semantic analyzer tries to retrieve the structure of the program from the source code, while a code generator uses this structural information (e.g., data types) to produce code. In other words, the former adds information while the latter loses some of the information. One consequence of this information loss is that reflection becomes difficult or even impossible. To counter this problem, code generators often embed syntactic and semantic information in addition to the code necessary for execution. | <urn:uuid:e41989f1-e5e7-4125-a86d-df05699253f9> | 2.75 | 1,280 | Knowledge Article | Software Dev. | 26.295092 |
Why do we Need Dark Energy to Explain the Observable Universe?
Posted on September 27, 2009 Comments (2)
An accelerating wave of expansion following the Big Bang could push what later became matter out across the universe, spreading galaxies farther apart the more distant they got from the wave's center. If this did happen, it would account for the fact that supernovae were dim: they were in fact shoved far away at the very beginning of the universe. But this would've been an isolated event, not a constant accelerating force. Their explanation of the 1998 observations does away with the need for dark energy.
And Smoller and Temple say that once they have worked out a further version of their solutions, they should have a testable prediction that they can use to see if the theory fits observations.
Another interesting example of the scientific inquiry process at work in cosmology.
Shouldn’t the National Academy of Science (NAS), a congressionally chartered institution, promote open science instead of erecting pay walls to block papers from open access? The paper (by 2 public school professors) is not freely available online. It seems like it will be available 6 months after publication (which is good) but shouldn’t the NAS do better? Delayed open access, for organizations with a focus other than promoting science (journal companies etc.), is acceptable at the current time, but the NAS should do better to promote science, I think. | <urn:uuid:334566e3-90a0-4a10-8c4b-8d26dac42a9a> | 2.796875 | 299 | Comment Section | Science & Tech. | 43.888657 |
Suppose the elements of $A$ are $a_{ij}$. In other words, the element in row $i$, column $j$ is $a_{ij}$. Suppose also that the elements of $B$ are $b_{ij}$, and the elements of $P$ are $p_{ij}$.

Now, for each $i, j$: $b_{ij}$ is formed when the elements of row $i$ of matrix $P$ are combined (by multiplication and addition) with the elements of column $j$ of $A$. In other words:

$$b_{ij} = \sum_k p_{ik} a_{kj}$$

And $B$'s elements are a re-arrangement of $A$'s elements. So for each $i$, $b_{ij} = a_{\sigma(i)j}$ for some $\sigma(i)$. Therefore, from the above equation:

$$\sum_k p_{ik} a_{kj} = a_{\sigma(i)j}$$

So each row of $P$ contains a single $1$, and zeros. Moreover, each $\sigma(i)$ is equal to a different index, so no column of $P$ contains more than a single $1$. So $P$ is bistochastic, with elements $0$ and $1$. | <urn:uuid:32e0a67e-9139-4c21-8212-7501fc0b51d3> | 3 | 134 | Q&A Forum | Science & Tech. | 55.381985 |
Magnetic Field Penetration
Date: April 28, 2011
Will magnetic fields pass through a nonmagnetic metal and still be attracted to a magnet on the other side?
Yes. It happens all the time.
Just try putting an aluminum sheet between a magnet and some iron,
or between two magnets.
If the Aluminum is holding still, the blindfolded guy holding the magnets cannot tell the difference.
If your posted notes were made of paper-thin aluminum,
your refrigerator magnets would still work fine.
What the non-magnetic metal does do is act like magnetic molasses.
Not meaning sticky, but meaning gooey, as in viscous. Flows as needed, but not quickly.
Field lines through aluminum or copper do not want to change strength or move around fast.
If the aluminum sheet was kind of thick and it was slid sideways back and forth,
then the guy holding the magnets would get a hint that something was gently tugging sideways on his toys.
This is a very easy question to answer.
Think about hanging drawings on the refrigerator door and you should be
able to come up with your own answer.
Tennant Creek NT
A magnetic field is not "attracted to" a magnet. When a magnet comes in
contact with a magnetic field, the magnetic field pushes or pulls the
magnet. A metal that is not attracted to magnet does not produce a
magnetic field of its own. It does not interfere with the original
field, so the field does reach the magnet.
Dr. Ken Mellendorf
Illinois Central College
Andrew, think of a simple experiment to test it yourself.
See my simple experiment after your question.
| <urn:uuid:378e18b5-9de5-4ca9-bede-63e895638ff1> | 2.953125 | 370 | Q&A Forum | Science & Tech. | 60.706634 |
Forces and Photons
Country: United Kingdom
Date: Summer 2012
Forces between elementary particles are mediated via particles. For example, the electromagnetic force is mediated via photons. What I want to know is: how do the photons know where to go? They cannot see/feel the other particle. They cannot feel its effect, because they cause the effect.
Photons are incapable of knowing anything.
They are like balls on a pool table that just get knocked around by their environment.
Sometimes they go to beneficial places but most other times they just pass on out of the space not interacting with other things.
Like when I strike a match, light goes out all three dimensional directions. Some light rays hit a human retina, but most do not.
The photons do not know where to go, they just go. They are in motion and as they propagate, the E and B perpendicular wave functions move with them. Unless directed, they run into each other, reflect and scatter... go all over the place.
We have devices to focus their direction, but there is always scatter as they bump into atoms and particles. Focus a flashlight as best you may and there will always be loss. Even lasers in a vacuum deviate. These deviations result in spectra that are quite useful in measuring many factors, such as galaxy distance and the age of the Universe.
In photon direction experiments, we try to keep distances short, use lasers, vacuums and collimators... but it is still a bit like herding cats, the photons mostly will go in a general direction, but there are always escapees.
Hoping this helps, PEHughes, Ph.D. Milford, NH
| <urn:uuid:6b01b39b-f41e-4edb-9cab-f547d72136a9> | 3.296875 | 363 | Knowledge Article | Science & Tech. | 56.648173 |
To understand how refraction produces rainbows, first think of raindrops as falling prism-like objects... The same answer applies here - Refraction & Dispersion of light
(Wikipedia & Howstuffworks have pretty good articles on it...)
These images show the refraction and dispersion of light by raindrops (just like an optical prism). But there is a small difference from a prism: the refraction occurs twice inside a raindrop (due, I think, to its spherical shape). For a typical rainbow, the angle at which light exits the raindrop relative to the incoming sunlight is 40°-42° for the colors in the order violet-red. This 2° range is due to the increasing wavelengths of the different colors...
Howstuffworks shows a good comparison for the image below:
When A disperses light, only the red light exits at the correct angle (42°) to travel to the observer's eyes. The other colored beams exit at a lower angle, so the observer doesn't see them. The sunlight will hit all the surrounding raindrops in the same way, so they will all bounce red light onto the observer. But, B is much lower in the sky, so it doesn't bounce red light to the observer. At its height, the violet light exits at the correct angle (40°) to travel to the observer's eye. All the drops surrounding B bounce light in the same way. The raindrops in between A and B all bounce different colors of light to the observer, so the observer sees the full color spectrum.
Also, if you were above the ground (at some pretty high altitude), you would be able to see the rainbow as a full circle. On the ground, only the arc of the rainbow which is above the horizon is visible.
- A rainbow doesn't depend upon where you are... Even if you're above the ground, you would see the colors in the same order, VIBGYOR... It depends on whether the sun is at the horizon or not, and whether the environment has fog, mist or rain
But in the case of twinned rainbows, double reflection is supposed to happen... It depends upon the difference in size of the drops, and the angle of refraction would be within the 50°-53° range... A wonderful thing: for a twinned rainbow, an inverted rainbow would appear above the real one at some distance (each being separated by a common base)...
Supernumerary rainbows have a collection of bright and dark bands from violet to red due to interference. At angles very close to the required rainbow angle, two paths of light through the droplet differ only slightly, and so the two rays interfere constructively. As the angle increases, the two rays follow paths of substantially different lengths. When the difference equals half of the wavelength, the interference is completely destructive. And at even larger angles, the beams reinforce again. The result is a periodic variation in the intensity of the scattered light, a series of alternately bright and dark bands. | <urn:uuid:6b473243-04d4-494f-9a53-53c9823a4fd5> | 4 | 618 | Q&A Forum | Science & Tech. | 64.978406 |
An anonymous reader lets us know about the initial release of data from the Kepler spacecraft, launched in the spring of 2009, which has been hunting extrasolar planets. The instrument has found 752 candidates to examine in its first 43 days of operation. This is exciting news, because even if only half of the possibilities pan out as exoplanets (as the Kepler team expects) the results would still almost double the count of known planets. And some of the new ones could be Earth-sized, or not too much larger. Controversy has erupted however because NASA has decided to allow the Kepler team to withhold 400 of the best candidates for its own examination, releasing about 350 others to the worldwide community. The reasons for this are complicated and the New York Times does a good job of digging into the issue of proprietary vs. public data. Nature.com first reported two months ago on the decision to hold back some of the data. | <urn:uuid:9905e869-fdbb-4b11-84ed-5cd209e62fab> | 2.890625 | 187 | Comment Section | Science & Tech. | 51.638225 |
[Image: a surface tension demonstration using water that is being held in place by a metal loop.]
Space Chronicles #17
ISS Science Officer Don Pettit
I set out to explore one thing and discovered another. And what
was discovered was more interesting than the prosaic task I had
planned. What I had intended was to make a sphere of water about
50 millimeters in diameter and blow a bubble inside of increasing
size until a thin spherical shell was made. As the bubble grew in
diameter, would it center itself inside the sphere? I did not know
and intended to find out so our shower-hygiene area was once again
turned into a wet chemistry laboratory.
A new and improved
technique was used to make the water sphere. First, about 25 milliliters
of water were squirted into a small plastic bag. This was placed
over and then withdrawn from a 50 millimeter diameter wire loop
leaving a thin film. By squeezing extra water out as the bag was
withdrawn, a fat film of perhaps 4 millimeters thick was produced.
Water was then added to this film inflating it into an undulating
sphere. Small occluded air bubbles were sucked out with a coated
cannula attached to a syringe. After about five to ten minutes,
the sphere had settled down into a nice crystal ball. I marveled
at how simple a lens this made and how tiny details in my fingerprints
were seen with a somewhat aberrant view.
Next an air
bubble was injected inside using the cannula and syringe. I expected
a single bubble to grow on the end of the cannula as air was slowly
injected into the sphere. What happened was different. A bubble
formed on the cannula tip and after it grew somewhere between 15
to 20 millimeters in diameter, it moved from the tip and attached
to the side of the cannula, crawling towards the syringe until it
broke away from the tip. A new bubble immediately formed, grew and
detached from the tip. This process repeated leaving a number of
15 to 20 millimeter diameter bubbles. It seemed that a single large
air bubble did not want to form.
With some effort,
a single bubble of about 30 millimeters in diameter was finally
made by coaxing the smaller bubbles to coalesce. This bubble did
not want to remain centered. It preferred to be attached to the
outer surface. It could be centered with the cannula but then small
fluid motions within the sphere would bring it back to the surface.
After some experimenting, it was found that the best way to make
a single bubble and have it remain centered was to first stir the
sphere into a slow rotation and then inject the bubble along the
axis of rotation. The fluid dynamic forces associated with the rotation
allowed for a single bubble to cling to the cannula tip and grow
to the desired diameter while remaining nearly centered. After about
ten minutes the rotation stopped and the bubble stayed approximately
in the center of the sphere.
The next step
was to inflate the bubble to larger diameters. Submerging the cannula
tip in the water but holding it about 5 millimeters from the bubble,
air injected at a rate of about 50 milliliters per minute formed
small bubbles that quickly contacted and collapsed into the large
bubble. This process drove steady state oscillations in the large
bubble creating a series of standing waves with a wavelength of
about 6 to 10 millimeters. The bubble became warped into a football
where each end became a node point for the oscillations. When it
oscillated, it reminded me of squirming bark beetle grubs kicked
out from a rotting Douglas fir stump.
But the most
amazing phenomena appeared after the air injection stopped and the
oscillations died down. Created inside the air bubble were a half-dozen
or so small spherical droplets of water, one to four millimeters
in diameter, orbiting around like a miniature solar system. They
ricocheted off of the inner surface of the bubble and continued
to swirl around inside their little world. Perhaps once out of every
six to eight collisions with the interface, a small amount of water
was absorbed from the drop, resulting in a dramatic increase in
velocity and an abrupt change in direction of the remaining droplet.
It was as if a small rocket thruster propelled the droplet with
an impulsive force perpendicular to the interface. As the droplets
became smaller and smaller, their velocity continued to increase
until the subsequent collision caused complete absorption. This
motion appeared almost life-like so that for a minute I thought
we were looking through a magnifier at some new form of creatures
zooming around inside of a three-dimensional petri dish. I let out
such a loud shout of exclamation that a crewmate from the next module
came over to see if I was all right.
Soon we were
all marveling over the show inside this fluid crystal ball. Of all
the things on orbit I have seen to date, this is by far the most
amazing. If one were to postulate that this phenomenon would occur,
your colleagues would shake their heads and quickly find some excuse
to dash off to another laboratory. It is one more example that demonstrates
how nature has such a vivid imagination, more so than human beings
will ever possess, and the only way to discover what nature has
to offer is to seek it ourselves in the wilderness of the unknown.
We found that
the easiest way to produce these droplets was to directly inject
water into the bubble using a coated cannula. It was easy to penetrate
both the water sphere and the bubble without significantly disrupting
things. The droplets could readily be injected with various diameters
at different velocities and directions. If given a velocity tangential
to the bubble interface, the droplets moved around in circles, pressed
against the inside by the forces from the radial acceleration of
circular motion. Sometimes this radial force was sufficient to create
a small flat spot at the point of contact, particularly for larger
droplets of 6 to 8 millimeters in diameter. They either rolled,
or perhaps scooted - I did not know which at this time - for several
minutes before becoming absorbed. After the first absorption event,
the remaining portion of the droplet was propelled away from the
surface and ricocheted around inside of the bubble as if it were
a ball in a three-dimensional game of billiards.
Why do some
droplets undergo many collisions or roll on the interface for minutes
and then exchange a fraction of their volume while others may become
completely absorbed after one collision? It was almost as if there
is some active repulsion force distributed over the droplet surface
and when it wears thin absorption takes place. Perhaps there was
some induced charge on the outer surface that keeps the interfaces
at a distance sufficient to prevent coalescence. Like the fluid
mechanical version of scuffing your feet across a carpet and sparking
your dog on the nose, fluid friction at the wall of a non-conducting
tube can cause the liquid to become charged. Friction mechanically
separates electrons from their atomic hosts and a static charge
of hundreds to thousands of volts can be produced. If you are pumping
fuel into an airplane, this effect can create a spark and a fire,
thus refueling operations require a grounding wire to shunt away
the flow-induced static charge. This type of flow-induced charging
may cause the droplets to become charged, especially when injected
through a small diameter coated cannula (inside diameter of 200
micrometers). If present, this residual static charge could result
in a repulsion force across the interface and keep the droplets
at a distance sufficient to prevent immediate coalescence. When
this charge wears thin, then the process of coalescence can begin.
A gas bubble
or droplet has an internal pressure that arises inside due to surface
tension forces manifesting themselves across the curved interface.
This pressure differential is inversely proportional to the radius
of curvature so the smaller the droplet, the greater the internal
pressure. When the droplet becomes so small that the internal pressure
matches the vapor pressure of the liquid, the droplet is no longer
stable and evaporates in a flash. This critical radius was discovered
by Lord Kelvin and now bears his name.
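For reference, the pressure jump across a curved surface described here is the Young-Laplace relation; for a spherical droplet of surface tension $\gamma$ and radius $r$,

$$\Delta P = \frac{2\gamma}{r},$$

which indeed grows without bound as $r$ shrinks.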
The pressure inside these droplets has to be greater than in the surrounding
water across the bubble interface due to the droplet's smaller radius
of curvature. Thus when the exchange of water occurs it must be
from the droplet to the surrounding water. This exchange of mass
from the droplet across the interface imparts momentum to the droplet
like a miniature rocket thruster. The exchange of mass happens quickly;
from examining individual video frames taken at a rate of about
30 per second, the exchange happens in less than one frame. This
bounds the mass exchange to somewhere less than about 33 milliseconds.
The exchange of mass thus creates a force impulse and results in
a velocity component perpendicular to the interface. What remains
of the droplet shoots across the inside of the bubble as if it were
some sort of darting organism.
To observe the mixing in the water during the droplet mass exchange, food coloring
was diluted at a ratio of about 100 to 1 and injected as a droplet
into a bubble. It was obvious right away that the presence of the
food coloring affected the process of coalescence. The food-colored
droplets were completely absorbed after one to four collisions,
making it difficult to observe the process since it was over after
such a short time. Perhaps the addition of food coloring altered
the magnitude of the droplet's static charge. The water used for
this experiment came from our water reprocessor that makes high
resistivity water through an ion exchange process. The components
in food coloring may have added ions that decreased the resistivity
and thus prevented the static charge from forming.
As the food-colored
droplet rattled around inside the bubble, my jaw dropped as we saw
that each mass exchange shot a small vortex ring into the surrounding
water. The vortex ring's size and velocity depended on how much
water was transferred during a collision. Sometimes a series of
little tiny rings shot out and stopped within the body of the surrounding
water. Sometimes a large mushroom cloud formed that looked like
a miniature nuclear explosion. This was most definitely another
"wow" moment. Again, nature was tickling our imaginations.
Since the food
coloring dramatically shortened the droplet lifetime, we next tried
a suspension of tracer particles. This was a dilute solution of
water with 5 micrometer diameter mica flakes. These flakes are made
commercially by the ton for putting the sparkle in eye shadow and
other cosmetics. It is well known among fluid dynamists that they
also make great tracer particles for fluid experiments. Unfortunately,
this knowledge forever changes your view of sparkling eyelids. Tracer-laden
particle droplets lived through more collisions than those with
food coloring, however their lifetime was still significantly shorter
than for pure water. Each collision with mass exchange created a
small glittering vortex ring in the surrounding water. Again, the
effect was most definitely a "wow" moment as these vortex
rings shot out from the bubble into the water leaving iridescent trails.
Watching the tracer particles on a drop about 5 millimeters in diameter answered the question
of rolling or scooting when a droplet moves in circles. The tracer
particles revealed that this droplet was definitely rolling with
no observed slippage. It reminded me of a bowling ball rolling down
a never-ending circular alley in search of pins. Up until this observation,
I assumed the droplets were scooting on the interface more like
a hockey puck on ice. For the droplet to be rolling, there must
be some frictional force between the two bodies and this implies
some level of interesting interactions across the interface. Perhaps
this is a manifestation of the fluid dynamic no-slip boundary condition.
However, this interaction cannot be so strong, else it causes coalescence.
Left in the
wake of these observations were more questions than answers and
a mind full of wonder. I have found that this is not unusual for
someone engaged in exploration.
Will anything practical come from this? For my symphony of spheres, perhaps not.
Will this discovery tickle our imaginations and enrich our minds?
Most definitely, yes. Will it incite new ideas for future discoveries?
Maybe. My personal reward for undertaking new explorations is simple
and does not require a practical application.
When you enter
the realm of the unknown, you see things in a truly naive state
where prior knowledge is of little help; you can once again see
the world through the wondrous eyes of a child. | <urn:uuid:b26cfd69-3564-4ca9-a88f-b49c454641f3> | 3.203125 | 2,732 | Nonfiction Writing | Science & Tech. | 40.62563 |
[This is a slightly more considered summary of recent work on cryoprotectants, which appears as my Crucible column in the May issue of Chemistry World. The links to the papers discussed can be found in the previous blog entry below.]
When the going gets tough, the tough get sweet. There are many physiological responses to cold conditions, from goose pimples (useless for humans, handier for hairier beasts) to the famous antifreeze proteins of fish. But one of the common strategies for insects is to fill their cells with sugar. It’s still something of a mystery why this helps.
Cold poses diverse threats to life. Ice crystals in the body can simply rupture cell walls, which is why frozen strawberries thaw to a mush. And below about –20 degC protein molecules themselves start to unravel, a process called cold denaturation. That’s not well understood yet either, although a recent paper suggests it involves weakening of the force between hydrophobic (water-repelling) parts of proteins that normally binds the folded form in place.
Sugars such as fructose and trehalose, as well as polyols such as glycerol and ethylene glycol, are manufactured seasonally by insects as cryoprotectants, just as we put antifreeze in our car radiators as winter draws near. Over winter, up to a fifth of the mass of some insects may consist of these substances. One consequence of cell fluid rich in sugar is simple depression of water’s freezing point. But that won’t get you very far into a bitter winter – typically, the depression will be only a few degrees, whereas the Arctic willow gall, flooded with glycerol, will survive temperatures of –66 degC. It’s thought that the cryoprotectants are doing something else here too.
Surviving the cold isn’t always about not getting frozen; sometimes there’s no avoiding it, and cryoprotectants then seem to act as freeze-tolerance rather than freeze-avoidance agents. It’s not only insects that do this: some frogs can survive being frozen solid if they fill their cells with glucose. The sugars and polyols seem to interact with cell water to protect delicate proteins and membranes – but no one is sure how.
Minoru Sakurai of the Tokyo Institute of Technology and his coworkers have shed some light on this through studies of the African midge Polypedilum vanderplanki. They’ve studied not freezing as such, but an environmental stress more common in Africa which has similar consequences: dehydration. Dried larvae of the midge can enter a state called anhydrobiosis, in which they show no metabolic activity but can recover viability when water becomes available. They do this by generating trehalose.
There have been two suggestions for the protective mechanism: either the water is substituted by the sugar, or the sugar promotes the formation of a glassy cell matrix rather than ice crystals. The Japanese team thinks that in fact both are true. They find that the sugar forms hydrogen bonds with the lipids in cell membranes, replacing a shell of hydration water and preventing the membranes from becoming rigid. But the larvae also undergo a distinct glass transition as they are slowly dried. The glass is not pure trehalose, but is peppered with other components, such as proteins, that might help to disrupt crystallization.
How, though, does a shell of sugar or polyol protect a protein when water cannot? It seems that cryoprotectants can stabilize proteins against unfolding, but whether this comes from direct protein-sugar interactions or some kind of sugar-induced modification of water structure isn’t clear. Martina Havenith at the Ruhr University of Bochum and her colleagues recently reported signs that the latter might play a role. Using terahertz spectroscopy, they found that the dynamics of water molecules are disturbed a remarkably long distance away from dissolved sugars – up to about 5-7 Å for trehalose and lactose. These perturbations are stronger and longer-ranged for disaccharides than for the monosaccharide glucose, which would support the notion that cryoprotection (which disaccharides do better) is tied up with the sugar’s ability to slow down the water motions and promote a pseudo-glassy state.
Findings by Giovanni Strambini and coworkers at the Consiglio Nazionale delle Ricerche in Pisa, Italy, could be seen to lend support to this idea. The Italian team have asked how cryoprotectants do their job if ice actually begins to form. They used fluorescence spectroscopy to study the stabilization of a protein called azurin by sugars and polyols (sucrose, trehalose, sorbitol, glycerol) in ice-water mixtures. It seems none of these molecules offers strong protection against ice formation, although trehalose ‘tries hardest’: as ice appears, the protein is increasingly prone to unfold. So the cryoprotectants don’t make the native protein significantly more thermodynamically stable. Instead, the researchers think that they somehow cajole the protein to stay folded in the liquid until the whole system becomes a sluggish glass and unfolding is then simply too slow – a kinetic rather than thermodynamic effect.
So there is an emerging picture, albeit a complex one. The cryoprotectants could have a dual role. First they remodel biomolecular hydration shells, retarding water and maybe suppressing the loss of crucial protein-folding forces. Then they eventually promote the formation of a glassy matrix rather than an icy one, arresting the biomolecular structures in recoverable suspended animation. That’s clever work for a spoonful of sugar.
1. C. L. Dias et al., Phys. Rev. Lett. 100, 118101 (2008).
2. M. Sakurai et al., Proc. Natl Acad. Sci. USA 105, 5093-5098 (2008).
3. M. Heyden et al., J. Am. Chem. Soc. doi:10.1021/ja0781083.
4. G. B. Strambini et al., J. Phys. Chem. B 112, 4372-4380 (2008). | <urn:uuid:f5d775dc-70e7-422f-acd6-731b2e08f2af> | 3.21875 | 1,334 | Personal Blog | Science & Tech. | 49.881982 |
By Wade Wilbur
For low-volume databases, those that are predominantly read and not written to, or database tables that are designed to not
be updated or only updated under rare circumstances, a DBA or developer may be interested
in being notified whenever the data in a particular table is modified. Or you may have certain records in a table that
are assigned to a particular user and, upon that record being updated, that user should be notified of the change.
While this sort of logic can be implemented at the code level, Microsoft SQL Server has all of the technologies needed to
achieve this aim built directly into it. Triggers
can be used to perform some action when data is inserted, updated, or deleted from a table, and Microsoft SQL Server's
xp_sendmail extended stored procedure can be invoked to send an email to one or more recipients. Combining triggers with
xp_sendmail provides a means for
alerting specified users via email when the data in a particular table is modified.
The purpose of this article is to demonstrate how to create such a notification system. This code presented here is a simplified version from an action item application that emails the appropriate people when an item is updated. I noticed that some of my staff who were just learning SQL were intimidated by triggers and SQL Mail, so I wanted to come up with an example which was useful, yet easy to follow.
In this article we'll step through creating the table, some dummy data, and the trigger one step at a time. You can download the entire database script at the end of this article.
Creating the Table and Populating it with a Test Record
Let's start by creating a table which will hold comments that need to be updated:
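A minimal version of the table is sketched below; the exact column names and sizes are just one reasonable choice, since an integer identity key plus the comment text is all this example needs.

CREATE TABLE tstComments (
    CommentID int IDENTITY(1,1) NOT NULL PRIMARY KEY,   -- key referenced in the notification email
    Comment   varchar(500) NOT NULL                      -- the text we want to watch for changes
)
GO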
Next we'll insert a row to update:
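For instance, a single row whose text matches the "old comment" shown in the test email later in the article:

INSERT INTO tstComments (Comment)
VALUES ('this is the original comment')
GO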
Creating the Trigger
Now we create the trigger for the
tstComments table. Before we create the trigger, however, we should
first see if the trigger already exists and, if so, delete it. This can be accomplished by checking the
sysobjects table to see if a trigger with the specified name exists. If so, we can delete it by using
the DROP TRIGGER keyword.
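Along these lines (the trigger name tr_tstComments_Update is a placeholder used throughout the rest of this sketch):

IF EXISTS (SELECT name FROM sysobjects
           WHERE name = 'tr_tstComments_Update' AND type = 'TR')
    DROP TRIGGER tr_tstComments_Update
GO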
Once we've ensured that the trigger, if it already exists, has been deleted, we can go ahead and create the trigger.
A trigger is created with the CREATE TRIGGER keyword followed by your name for the trigger. In the CREATE TRIGGER statement you need to specify the trigger's name, the table it operates on, and the action (INSERT, UPDATE, or DELETE) it should fire for.
Now for the body of the trigger! Since we will be sending an email, we will need a variable to hold the body of the email (along with some other variables).
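The opening of the trigger, together with the variable declarations, might look like this (the fragments shown here and in the next few steps all belong to this one trigger):

CREATE TRIGGER tr_tstComments_Update
ON tstComments
FOR UPDATE
AS
    -- Variables for the email body and for the old and new column values
    DECLARE @Body       varchar(2000)
    DECLARE @CommentID  int
    DECLARE @OldComment varchar(500)
    DECLARE @NewComment varchar(500)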
One thing to keep in mind is that a trigger only fires once per INSERT, UPDATE, or DELETE statement. If we wanted to process all of the records changed by a single UPDATE statement, we'd need to use a cursor here. For this example, let's just assume that we only want to send off an email if exactly one record is updated. Therefore, our trigger won't send notifications if batch UPDATE statements are issued. In the trigger, the inserted table contains the record(s) that were just updated. Therefore, we can check this table to see how many records were updated.
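One way to enforce the single-row assumption is to count the rows in the inserted table and bail out otherwise:

    -- Only proceed when exactly one row was updated
    IF (SELECT COUNT(*) FROM inserted) <> 1
        RETURN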
We now need to assign values to the variables from the row that was just updated. The deleted table is a special table that holds the contents of the row as it was right before the update, so it has the pre-update values. We want to take these original values and store them in our variables. Then we want to assign values to the variables from the row after it was updated; to access those values we can use the inserted table.
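Continuing the trigger body, the assignments might look like this:

    -- Pre-update values come from the deleted table
    SELECT @CommentID  = CommentID,
           @OldComment = Comment
    FROM deleted

    -- Post-update values come from the inserted table
    SELECT @NewComment = Comment
    FROM inserted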
Set the body of the email by concatenating the strings and the variables. Notice the placement of the single quotes and carriage returns, which allows for formatting in the email body:
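For example, building the body with explicit carriage-return/line-feed characters (just one of several ways to get the line breaks):

    SET @Body = 'CommentID ' + CAST(@CommentID AS varchar(10)) + ' has been updated'
              + CHAR(13) + CHAR(10)
              + 'Old Comment: ' + @OldComment + CHAR(13) + CHAR(10)
              + 'New Comment: ' + @NewComment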
Finally, send the email using the
xp_sendmail extended stored procedure.
Since the procedure is in the
master database and we are in
pubs (or whatever database you may
be adding this trigger to), we need to specify which database the
xp_sendmail stored procedure is in.
Moreover, in order to use this extended stored procedure you must have SQL Mail set up. See
How to Configure SQL Mail for more information. Lastly,
you must grant EXECUTE permissions for the xp_sendmail procedure in the master database to the appropriate users or roles.
Assuming you've fulfilled all of these steps, the following statements will actually send the email to a specified account.
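The recipient address below is a placeholder; substitute a mailbox that your SQL Mail profile can actually reach.

    EXEC master..xp_sendmail
        @recipients = 'someone@example.com',
        @subject    = 'tstComments change notification',
        @message    = @Body
GO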
Testing the Trigger
After you have created the table, inserted the row and created the trigger, run this
UPDATE statement to test
out the trigger's functionality.
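For example (matching the values shown in the expected email below):

UPDATE tstComments
SET Comment = 'this is the updated comment'
WHERE CommentID = 1
GO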
If everything is configured correctly, you should receive the following email:
CommentID 1 has been updated
Old Comment: this is the original comment
New Comment: this is the updated comment

You can explore the SQL Server Books Online to find many other useful parameters to the xp_sendmail procedure as well as detailed descriptions of other types of triggers. Now that you've seen how easy triggers and SQL Mail are, you will no doubt find lots of uses for them!
From a distance, it looks like a parking lot filled with over-sized television antennae. In actuality, it is the High Frequency Active Auroral Research Program, or HAARP, a government research facility focused on the physical and electrical properties of the earth's ionosphere. Set in the beautiful Alaskan forest, HAARP is, to certain conspiracy theorists, neither a research program nor a TV antenna, but a weather control device, space weapon, or even a death ray.
Funded by DARPA, the United States Air Force, and the Navy, HAARP's projects involve superheating the ionosphere with high-frequency radio waves. This incited the suspicions of physicist Bernard Eastlund and a small group of other scientists in the 1990s, who expressed concern about HAARP's possible future use as a weapon. Russia also expressed concern and criticized HAARP as a "new integral geophysical weapon." The Russian government now operates a very similar facility known as the Sura Ionospheric Heating Facility.
Despite the criticism, or because of it, the researchers at HAARP have been very open about their research, stating unequivocally that "there are no classified documents pertaining to the HAARP." They are adamant that the site is in no way a danger to anyone. Among the stated goals of HAARP are studying how the earth's natural ionosphere affects radio signals, something of interest to both the commercial and military worlds.
While there is little evidence to suggest that HAARP has a potential use as a weapon or anything else nefarious, one of the stated aims of the project is to generate VLF and ELF (very low frequencies and extremely low frequencies) for communication with submarines (hence the Navy funding) and possible use in remotely searching for mineral content. HAARP recently bounced low frequency signals off the moon in the "lunar echo" experiment and invited amateur radio enthusiasts to listen in.
In the spirit of openness, the HAARP facility hosts an open house each year, inviting anyone who wants to tour the facility. The dates and times are announced in advance on the HAARP website.
Mount Rainier hasn't erupted in more than one
hundred years, yet ash-laden mudflows can still
run down it today.
This one, caught in action, was initiated by the
sudden release of water stored at the base of a
glacier. Outburst floods become debris flows by
picking up large quantities of ash and loose debris
as they move.
5. List at least two factors that
contribute to the formation of mudflows on volcanoes. | <urn:uuid:0df026aa-99b2-439a-a059-719675d7e768> | 4 | 102 | Tutorial | Science & Tech. | 62.626514 |
Blow Wind, Blow!
- Most people think that Chicago is the windiest city in
the U.S. It's not. Boston is the windiest city. The numbers
below indicate the miles per hour of the winds for Boston
for a 2-week period. Use the information to make a stem and
leaf plot. Then find the range, median and mode.
22 35 27 15 38 32 22 35 25 29 22 18
- Thirty-seven states use wind as a source of energy. What
fraction of the states does not use wind energy?
Twelve states in the midsection of the country, Texas to
North Dakota, contribute 90% of the wind electric potential.
What fraction of the states is this? What percentage?
Wind power plants in California produced over 3.1 billion
kWh of electricity during 1995, about 1.2% of the
electricity used by California. What % of California's
electricity does not come from wind?
Would you say that California depends on wind for most of
its electricity? Explain.
- Tornadoes and the threat of tornadoes are a key part of
the USA's spring weather because spring brings favorable
tornado conditions. Forecasters and researchers use a wind
damage scale called the F scale to classify tornadoes. The
ratings are based on the amount and type of wind damage. The
F-0: Light damage. Wind up to 72 mph.
F-1: Moderate damage. Wind 73-112 mph.
F-2: Considerable damage. Wind 113-157 mph.
F-3: Severe damage. Wind 158-206 mph.
F-4: Devastating damage. Wind 207-260 mph.
F-5: Incredible damage. Wind above 261 mph.
What is the range for an F-2? F-3?
What would the median wind be in an F-1 tornado?
If winds were clocked at 165 mph how would the tornado be
classified? What kind of damage could be expected?
- Damon loves to fly his kite on the weekends. If March
1st is on a Saturday, how many days will he get to fly his
kite in March?
If one weekend is a washout, what fraction of the weekend
days is this?
What fraction shows the days out of the month when he can
fly his kite?
- Who is the kiteflier, sailor, windsurfer and balloonist?
Kate, Wayne, Ben, and Sarah are all 20 years old. Here are
some clues to help you solve the problem:
No one is involved in the sport that has the same first
letter as his or her name.
Kate is not friends with the balloonist but she is best
friends with the sailor.
Wayne is a male chauvinist and he can't believe that a girl
could be a balloonist.
Ben is afraid of water.
- On one windy day in March, the combined winds of
Philadelphia and Pittsburgh totaled 68 mph. The winds in
Pittsburgh were 10 mph more than in Philadelphia. What were
the winds in each city?
- Look at the Wind-chill Factor chart below to answer the following questions:
What effect does a 5-mph wind have on temperature?
If the air temperature is above freezing what is the lowest
wind speed that will make the air feel like it is below freezing?
If the temperature is 15 degrees and the wind is 15 mph,
what temperature does it feel like out?
What wind speed makes a 25 degree air temperature feel like
it's below 0?
What air temperature feels like -24 when there is a 20-mph wind?
[Wind-chill factor chart: wind speed in miles per hour vs. air temperature (F); the table itself is not reproduced here]
NON-AGROBACTERIAL SPECIES FOR GENE TRANSFER TO PLANTS
P. Janaki Krishna
Agrobacterium tumefaciens is a common soil bacterium that causes crown gall disease by transferring some of its DNA to the plant host. This unique mode of action has enabled this bacterium to be used as a tool in plant breeding1. Many desired genes of agronomic importance are engineered into this bacterial DNA and thereby inserted into plant genomes. Though close relatives of Agrobacterium, such as Rhizobium trifolii and Phyllobacterium myrsinacearum, display the gall producing ability by harboring a Ti (Tumor-inducing) plasmid, no direct molecular evidence of gene transfer to plants by these bacteria has been reported. In fact, until now, the body of research has focused on using Agrobacterium as a vehicle for gene transfer. However, researchers are now attempting to use other Agrobacteria-related species, such as Sinorhizobium and Mesorhizobium, to augment gene transfer techniques.
A research team from CAMBIA (http://www.cambia.org/daisy/cambia/563) has investigated whether a non-Agrobacterial species of bacteria can competently transfer genes in plants2. To do this, a disarmed Ti plasmid (pEHA105) from a hypervirulent Agrobacterium strain was introduced into several species of bacteria. To facilitate transfer of this large plasmid, the origin of transfer (oriT) of a broad host range IncP plasmid was integrated into the Ti plasmid of EHA105 at two different locations (pTiWB1, pTiWB3). The modified plasmids were then mobilized into 1) a Rhizobium species (NGR234) that has an exceptionally broad host range, capable of nodulating over 100 different plants3; 2) the alfalfa-symbiont Sinorhizobium meliloti; and 3) Mesorhizobium loti, a representative of a different family (Phyllobacteriaceae). In order to check the genotype of engineered strains by PCR and confirm that the strains were free of contaminating Agrobacterium by selective plating, additional primers were developed. In addition, the transfer rate and replication potential were enhanced by incorporating two broad-spectrum replication origins (sites) to the disarmed Ti plasmid. To assay for gene transfer, three binary vectors were prepared: pCAMBIA1105.1R was introduced into the Rhizobia bacteria, and either pCAMBIA1105.1 or pCAMBIA1405.1 into Agrobacterium.
The plant transformation events were analyzed through GUS activity, Southern blotting, and PCR assays. First, the GUS assay tested the transformation rate in tobacco. The transformation rate using Sinorhizobium meliloti was about 25% that of Agrobacterium, and M. loti had a rate approximately one third of Sinorhizobium. However, that value is still significant enough to get the attention of researchers interested in plant transformation.
The researchers also tested the non-Agrobacterium bacteria for their ability to transform other plant species, namely Arabidopsis and rice. Arabidopsis was transformed with S. meliloti using the floral dip method, producing six transgenic plants from 70,000 T0 seeds, which is 5–10% of the normal efficiency of A. tumefaciens. Interestingly, in all cases, T-DNA was integrated in a manner identical to that of Agrobacterium. Also, an effort was made to increase the transformation efficiency of the floral dip technique by modifying the infiltration medium, whereby a four-fold improvement was obtained. In rice, the transformation efficiency was considerably lower (0.6%) when compared with Agrobacterium tumefaciens (50–80%). One transformed rice plant from a total of 695 calli was regenerated and rooted. T-DNA integration analysis in this rice plant revealed that the T-DNA had integrated into rice chromosome 11.
Thus, though the transformation efficiency was considerably lower when compared to Agrobacterium mediated transformation, the results confirm that all three non-Agrobacterium species, Rhizobium, Sinorhizobium, and Mesorhizobium strains, belonging to two families of bacteria, can transform plants. Of these, S. meliloti is the most competent to transfer genes into both monocots and dicots and into a range of tissues, including leaf tissue, undifferentiated calli, and immature embryo ovules.
Albeit, at a lower frequency, T-DNA transformation appeared to proceed normally, but most notable is the fact that transformation occurred at all using non-virulent, non-Agrobacterium microbes. Though a number of factors that reside on Ti plasmids (as acting genes and DNA elements) play an important role in DNA transfer, it has been noted that other transacting elements are located on Agrobacterial chromosomes4. In addition, the researchers suggest that if there are gene functions necessary for gene transfer that are not encoded by the Ti plasmid, they must have equivalents or homologues in multiple Rhizobial species. It is likewise possible that the small number of vir-related genes on the Ti plasmid is sufficient to confer gene transfer competence to any bacterium. It is also noted that homologues of these transacting Agrobacterium genes exist in other bacteria and it is suggested they could have evolved from DNA transfer to plants in the past.
This study is a breakthrough in research concerning the exploitation of non-Agrobacterial bacteria for gene transfer to plants. It appears that when it comes to acquiring or transferring genetic information, interestingly, bacteria always show promise.
In addition, it is heartening to note that this alternative technology is available to the public in a "protected technology commons," optimized and improved as a Bioforge project (http://www.bioforge.net).
1. Gelvin SB (2003) Agrobacterium-mediated plant transformation: the biology behind the "gene-jockeying" tool. Microbiol. Mol. Biol. Rev. 67, 16–37
2. Broothaerts W et al. (2005) Gene transfer to plants by diverse species of bacteria. Nature 433, 629–633
3. Pueppke SG & Broughton WJ (1999) Rhizobium sp. strain NGR234 and R. fredii USDA257 share exceptionally broad, nested host ranges. Mol. Plant Microbe Interact. 12, 293–318
4. Van Montagu M & Schell J. (2003) (1935–2003): Steering Agrobacterium-mediated plant gene engineering. Trends Plant Sci. 8, 353–354
P S Janaki Krishna | <urn:uuid:c3db2a10-a18d-49c5-89b6-c54b3576bd5a> | 3.359375 | 1,460 | Academic Writing | Science & Tech. | 32.716816 |
Guide for Beginning Fossil Hunters
Fossil Guide
Adapted from ISGS Geoscience Education Series 15, 2002
by Charles Collinson
Long before the first humans appeared on Earth and such familiar features as our lakes and rivers were formed, the Earth was inhabited by plants and animals.
Even though humans are the only creatures able to record their history, we know that plants and animals lived an incredibly long time before human beings were here to see them. We have evidence that single-celled organisms swarmed in the seas half a billion years ago. We know that after this small beginning animals grew bigger, more complex, and more varied, and that after millions of years such monsters as dinosaurs evolved. We can also prove that they in turn gave way to the mammals that today dominate the Earth.
We know these things because the prehistoric creatures left behind the telltale marks that we call fossils. Some fossils are merely foot tracks or worm holes. Others are impressions of an entire animal or plant. Many are bones or shells—or even skin and hair.
The materials in which the fossils are encased were not always rocks. At one time they were mud or sand on the floor of a sea or sand dunes on an ancient land. As time went on, these sediments were buried under more sand and mud. Layer after layer piled up, and the sediments with their enclosed fossils were cemented into rock.
Pentremites, a blastoid
The great numbers of fossils in the rocks represent only a small part of all life that has existed on our planet. For every fossil we see, millions of animals have lived, died, and been destroyed without leaving a trace. Nevertheless, by carefully collecting the fossils and recording the layers of rocks they came from, we can reconstruct hundreds of generations that have lived on both land and sea at one time or another.
Paleontologists devote their lives to seeking and studying fossil remains in order to interpret Earth history, but the search for fossils can be an adventure for almost anyone. It can be an excursion to an ancient beach or a plunge to the bottom of a long-vanished sea.
A trip to a quarry may yield fossil clams and corals; a search through a strip mine may produce tropical ferns; mastodons or snails may be the subject of a hunt along the river bluffs. All such excursions provide good outdoor fun—whether for an afternoon, a weekend, or an entire vacation.
In addition to outdoor adventure, a successful hunt provides interesting trophies for your collections. Many of science's most valuable fossil finds have been brought in by amateur hunters.
Common Types of Illinois Fossils
Several people contributed to this project. Christina Cleburn, Joseph A. Devera, Wayne T. Frankie, Dennis R. Kolata, Rodney D. Norby, and George L.H. Stone are thanked for their loan of fossil specimens. The manuscript reviews of Dennis R. Kolata, Rodney D. Norby, and Wayne T. Frankie are appreciated. Illustrations were prepared by Margaret L. Whaley, Charles Collinson, and Marie E. Litterer. Pamella K. Carrillo created the cover design and layout. Production editor was Cheryl K. Nimz. Photographs were taken by Joel M. Dexter. The Web version of this publication was created by Sally L. Denhart.
Purchase the Book
The printed version of Guide for Beginning Fossil Hunters can be purchased from the Shop ISGS Web site for $5.00 plus S&H.
Updated 09/23/2011 SLD | <urn:uuid:7143da5d-8af3-4dbd-9fd5-71286aad21df> | 4.03125 | 739 | Truncated | Science & Tech. | 57.383694 |
Reads and (mostly) writes seismic data from and to files written by Landmark's (a subsidiary of Halliburton) Geoprobe software. This implementation is based on reverse-engineering the file formats, and as such, is certainly not complete. However, things seem to work.
Dependencies
Requires python >= 2.5 and numpy. Some functions (e.g. horizon.toGeotiff) require gdal and its python bindings. The plotting functions in utilities (e.g. utilities.wiggles) require matplotlib.
Documentation
Documentation is in progress. For the moment, the only documentation is the docstrings (which are fairly extensive).
As a basic example:
>>>from geoprobe import volume
>>>vol = volume('/path/to/geoprobe/volume/file.vol')
>>>print vol.xmin, vol.ymin # Model coordinate min and max
>>>test_xslice = vol.data[0, :, :]  # first inline slice (assumes the volume exposes its samples as a numpy array via .data)
The Other Global Warming.
I'm Bob Karson with the discovery files--new advances in science and engineering from the National Science Foundation.
(Sound effect: volcanic eruptions) Early Triassic period, 250 million years ago, massive volcanic activity spews enough molten rock to cover millions of square miles. Enough greenhouse gases to dramatically increase the earth's temperature. Known as "the great dying," it chokes off seventy percent of all land vertebrates, ninety percent of marine species.
It took over five million years for species that did survive to fully recover. Researchers from Ohio State and the University of Cincinnati studied ancient sediments from this period, taken from a gorge in Iran and confirmed that the slow recovery was indeed due to something familiar to us: Global warming.
For five million years, carbon dioxide mixed with water to form acid rain, which eroded the rock filling the oceans with sediment. Oceans that were as warm as a modern hot tub, microbes thrived, everything else, not so much.
Though extreme, the team says we can learn from the period. That life doesn't just snap back. (Sound effect: ocean sounds) During current warming conditions, although the earth's oceans may only reach 80°, carbon dioxide, acid rain and ocean acidification are already a problem for ocean life.
Say the scientists: After "the great dying," it was as if life had a five-million-year hangover. And, they say, looking at the past provides an important perspective about the future.
"The discovery files" covers projects funded by the government's national science foundation. Federally sponsored research--brought to you, by you! Learn more at nsf.gov or on our podcast. | <urn:uuid:e9d83880-ab09-41f5-96de-767081a41c69> | 3.71875 | 352 | Audio Transcript | Science & Tech. | 47.059321 |
Deep Water Circulation
Early modeling studies by Peter Weyl and Claes Rooth in the 1960s indicated that small perturbations in the fresh water balance in the high North Atlantic could modulate the production of North Atlantic Deep Water and possibly affect global climate. The pioneering carbon isotope studies on benthic foraminifera by Jean-Claude Duplessy and Nick Shackleton during the 1970s hinted that North Atlantic Deep Water production rates varied dramatically during glacial-interglacial cycles. These results fueled a gold rush of speculations that NADW may be the mystery climate amplifier of the Milankovitch cycles. This sparked thirty years of NADW studies using an evolving arsenal of deep ocean circulation proxies. The picture is still confusing today but slowly there appears to be some concordance between the newer proxies although there remain stark differences in the details. We have worked on this problem for most of its evolution with five major foci. First we concentrated on statistically linking the North Atlantic surface water signal to the NADW signal. We then shifted to the Southern Ocean where we could unambiguously estimate the relative flux of NADW into the Southern Ocean. This obviated the chronic problem of falsely estimating NADW production changes by inadvertently measuring the migration of the Antarctic Bottom Water (AABW) and NADW mixing front that in many instances did not correspond to NADW production changes. Next we designed and built an automated stable isotope inlet system capable of analyzing single benthic foraminifera in order to continue working on many North Atlantic cores that were depleted in benthic foraminifera. We sorted out the significant fraction of the carbon isotope signal recorded in benthic foraminifera that was due to partial isotopic equilibration of seawater with the atmosphere in order to better understand the differences between the carbon isotope tracer and the Cd/Ca nutrient proxy. Using better-defined tracers from higher accumulation rate cores, we were able to estimate the changing fractions of southern source waters from the northern source waters including deep and intermediate waters.
Our current research uses radiocarbon measurements on 230Th/234U/238U dated deep-sea corals from the high North Atlantic in the Orphan Knoll region east of Newfoundland. In this region we can estimate the ventilation age of NADW arriving at Orphan Knoll during the last deglacial period. Along with our Canadian collaborators, we have proposed a research cruise in the coming year to sample deep-sea corals from 1700 meters to 4700 meters depth down the flanks of Orphan Knoll. In addition, recent work using the neodymium tracer of NADW may have broken the longstanding deadlock between the carbon isotope tracer and the Cd/Ca tracer history of the Southern Ocean.
- [PDF] Piotrowski, A.M., S.L. Goldstein, S.R. Hemming, and R.G. Fairbanks, 2005. Temporal relationships of carbon cycling and ocean circulation at glacial boundaries. Science, 307, 1933-1938.
- [PDF] Piotrowski, A.M., S.L. Goldstein, S.R. Hemming, and R.G. Fairbanks, 2004. Intensification of North Atlantic Deep Water through the last glacial termination. Earth & Planetary Science Letters, 225, 1-2, 205-220.
- [PDF] Lynch-Stieglitz, J., A. van Geen, R.G. Fairbanks, 1996. Inter-ocean exchange of glacial North Atlantic Intermediate Water: evidence from subantarctic Cd/Ca and carbon isotope measurements, Paleoceanography, 11, 191-201.
- [PDF] Naqvi, W.A. and R.G. Fairbanks, 1996. A 27,000 year record of Red Sea outflow: implications for timing of post-glacial monsoon intensification, Geophy. Res. Lett., 23, 12, 1501-1504.
- [PDF] van Geen, A., R. G. Fairbanks, P. Dartnell, M. McGann, J.V. Gardner, and M. Kashgarian, 1996. Ventilation changes in the northeast Pacific during the last deglaciation. Paleoceanography, 11, 5, 519-528.
- [PDF] Charles, C.D., J. Lynch-Stieglitz, U.S. Ninnemann and R.G. Fairbanks, 1996. Climate connections between the hemispheres revealed by deep sea sediment core/ice core correlations, Earth and Planetary Science Letters 142:19-28.
- [PDF] Lynch-Stieglitz, J., T.F. Stocker, W.S. Broecker and R.G. Fairbanks, 1995. The influence of air-sea exchange on the isotopic composition of oceanic carbon: observations and modeling. Global Biogeochemical Cycles, 9, 4, 653-665.
- [PDF] Lynch-Stieglitz, J., R.G. Fairbanks and C.D. Charles, 1994. Glacial-interglacial history of Antarctic intermediate water: Relative strengths of Antarctic vs. Indian Ocean sources. Paleoceanography, 9, 1, 7-30.
- [PDF] Naqvi, W.A., C.D. Charles and R.G. Fairbanks, 1994. Carbon and oxygen isotopic records of benthic foraminifera from the Northeast Indian Ocean: implications on glacial-interglacial atmospheric CO2 changes. Earth & Planet. Lett., 121, 99-110.
- [PDF] Lynch-Stieglitz, J. and R.G. Fairbanks. 1994. A conservative tracer for glacial ocean circulation from carbon isotope and palaeo-nutrient measurements in benthic foraminifera. Nature, 369, 308-310.
- Charles, C.D., J.D. Wright and R.G. Fairbanks, 1993. Thermodynamic influences on the marine carbon isotope record. Paleoceanography, 8, 6, 691-698.
- deMenocal, P.B., D.W. Oppo, R.G. Fairbanks and W. L. Prell, 1992. A 1.2 Myr record of mid-depth δ13C variability in the North Atlantic: implications for climate change, ocean circulation, and atmospheric CO2. Paleoceanography, 7, 229-250.
- [PDF] Charles, C.D. and R. G. Fairbanks, 1992. Evidence from Southern Ocean sediments for the effect of North Atlantic deep-water flux on climate. Nature 355, 416-419.
- Wright, J.D., K.G. Miller, R.G. Fairbanks, 1991. Evolution of modern deepwater circulation: evidence from the late Miocene southern ocean. Paleoceanography, 6, 275-290.
- Charles, C.D. and R.G. Fairbanks, 1990. Glacial to interglacial changes in the isotopic gradients of Southern Ocean surface water. in: Geologic History of Polar Oceans: Arctic versus Antarctic, Bleil and Theide, eds., NATO ASI Series, 308, 519-538, Kluwer Academic Publishers, Boston, Mass.
- Oppo, D.W., R. G. Fairbanks, A.L. Gordon and N.J. Shackleton, 1990, Late Pleistocene Southern Ocean δ13C variability: North Atlantic Deep Water modulation of atmospheric CO2. Paleoceaonography, 5, 43-55.
- Oppo, D.W and R.G. Fairbanks, 1990. Atlantic Ocean thermohaline circulation of the last 150,000 years: relationship to climate and atmospheric CO2. Paleoceanography, 5, 277-288.
- Oppo, D.W. and R.G. Fairbanks, 1989. Carbon isotope composition of phosphate-free surface water of the past 22,000 years. Paleoceanography, 4, 333-351.
- Duplessy, J.C., N.J. Shackleton, R.G. Fairbanks, L. Labeyrie, and D. Oppo, 1988. Deep water source variations during the last climatic cycle and their impact on the global deep water circulation. Paleoceanography, 3, 343-360.
- [PDF] Oppo, D.W. and R.G. Fairbanks, 1987. Variability in the deep and intermediate water circulation of the Atlantic Ocean during the past 25,000 years: Northern Hemisphere modulation of the Southern Ocean. Earth and Planet. Sci. Letters., 86, 1-15.
- Miller, K.G., R.G. Fairbanks and E. Thomas, 1986. Benthic foraminiferal carbon isotopic records and the development of abyssal circulation in the eastern North Atlantic. In Ruddiman, W.F., Kidd, R.B., Thomas, E., et al.,Init. Repts. DSDP, 94: Washington (U.S. Govt. Printing Office), 981-996.
- Miller, K.G. and R.G. Fairbanks, 1985. Oligocene to Miocene global carbon isotope cycles and abyssal circulation changes. in: Natural Variations in Carbon Dioxide and the Carbon Cycle (E.T. Sundquist and W.S. Broecker, ed.) American Geophysical Union _Monograph 32, 469-486.
- [PDF] Mix, A.C. and R.G. Fairbanks, 1985. North Atlantic surface-ocean control of Pleistocene deep-ocean circulation. Earth and Planet. Science Lett., 73, 231-243.
- [PDF] Miller, K.G. and R.G. Fairbanks, 1983. Evidence for Oligocene-Middle Miocene abyssal circulation changes in the western North Atlantic. Nature, 306, 250-253. | <urn:uuid:ec61299e-54e3-4763-b2c5-7f761eea60cb> | 3.234375 | 2,117 | Academic Writing | Science & Tech. | 67.31798 |
The American pickerels are two subspecies of Esox americanus, a species of freshwater fish in the pike family (family Esocidae) of order Esociformes: the Redfin pickerel, E. americanus americanus, and the Grass pickerel, E. americanus vermiculatus.
Both subspecies are native to North America. The Redfin pickerel’s range extends from the Saint Lawrence drainage in Québec down to the Gulf Coast, from Mississippi to Florida, while the grass pickerel’s range is further west, extending from the Great Lakes basin from Ontario to Michigan down to the western Gulf Coast, from eastern Texas to Mississippi.
The two subspecies are very similar, but the Grass pickerel lacks the Redfin’s distinctive orange to red fin coloration, its fins having dark leading edges and amber to dusky coloration. In addition, the light areas between the dark bands are generally wider than the dark bands on the grass pickerel and narrower than the dark bands on the redfin pickerel. These pickerels grow to a maximum overall length of 16 in (40 cm) and a maximum weight of 2.25 pounds.
The Redfin and Grass pickerels occur primarily in sluggish, vegetated waters of pools, lakes, and swamps, and are carnivorous, feeding on smaller fish. Larger fishes, such as the Striped bass (Morone saxatilis), bowfin (Amia calva), and Gray weakfish (Cynoscion regalis), in turn, prey on the pickerels when they venture into larger rivers or estuaries.
These fishes reproduce by scattering spherical, sticky eggs in shallow, heavily-vegetated waters. The eggs hatch in 11–15 days; the adults guard neither the eggs nor the young.
The E. americanus subspecies are not as highly prized as a game fish as their larger cousins, the Northern pike and Muskellunge, but they are caught by anglers.
Lesueur originally classified the grass pickerel as E. vermiculatus, but it is now considered a subspecies of E. americanus.
E. americanus americanus is sometimes called the Brook pickerel. There is no widely-accepted English common collective name for the two E. americanus subspecies; “American pickerel” is a translation of the systematic name and the French brochet d’Amérique. | <urn:uuid:6f1fcb8a-4e97-4fcf-84c2-e9de336a2cd6> | 2.859375 | 528 | Knowledge Article | Science & Tech. | 36.993798 |
Feb. 17, 2013 In our ongoing quest for alternative energy sources, researchers are looking more to plants that grow in the wild for use in biofuels, plants such as switchgrass.
However, attempts to "domesticate" wild-growing plants have a downside, as it could make the plants more susceptible to any number of plant viruses.
In a presentation at this year's meeting of the American Association for the Advancement of Science, Michigan State University plant biologist Carolyn Malmstrom said that when we start combining the qualities of different types of plants into one, there can be unanticipated results.
"Most wild plants are perennials, while most of our agriculture crops are annuals," Malmstrom said. "Sometimes when you mix the properties of the two, unexpected things can happen."
For example, annual domestic plants are made to grow quickly. "In agriculture we select more for growth," she said. "There is a reduced need for the plants to defend themselves because we have taken care of that."
If pest control measures aren't taken, these annual plants can serve as "amplifiers," producing lots of viruses and insects to move the viruses around.
In contrast, perennial plants in nature grow slower, but are usually better equipped to fight off invading viruses. When wild-growing perennials do get infected they can serve as reservoirs for viruses, Malmstrom said, "a place where viruses can hang out a long time."
In the domestication of wild plants for bioenergy, long-lived plants are being selected for fast growth like annuals. "Now you have a plant that could be a long-term reservoir, but it also happens to be faster growing and can serve as an amplifier for viruses. This all-in-one combination could increase virus pressure in crop areas unless mitigated."
Malmstrom said that plant virus ecology and the study of viral interactions between wild-growing plants and agricultural crops is an expanding field. In the last 15 years, disease ecology has really come to the fore as a basic science.
Most of what is known about plant viruses comes from studies of crops. To understand the complete ecology of viruses, researchers are now studying these tiny organisms in nature, too. "The mysteries of how plant viruses can play a role in ecosystem properties and processes in natural ecosystems are emerging more slowly," Malmstrom said.
Malmstrom said it's important to catch-up in our understanding of viral ecology, as there are any number of societal issues that need to be addressed in this area.
"Society wants us to be able to answer questions such as whether viruses can be used in agricultural terrorism, how to recognize a novel virus, and what happens if a virus is genetically modified and then let loose?"
Observations and results
Did the larger boat hull support a greater amount of weight? Did both hulls have a similar density right before sinking, which was roughly one gram per cubic centimeter—the density of water?
When you first put one of the boat hulls on the water, it should float because its total density is less than that of water. As you add pennies to the hull, its density increases and the hull floats lower. Eventually, when enough pennies are added, the hull's density roughly equals the water’s. This happens right before the penny that sinks the hull is added. The hull sinks because its density has finally become greater than that of water. Consequently, the hull’s density right before sinking should roughly equal the density of water, which is one gram per cubic centimeter. Even though the larger hull supports more weight, it also has a greater volume, and both hulls should roughly have a density of one gram per cubic centimeter right before sinking. (Your densities may not have been exactly this, but may have ranged between 0.7 to 1.3 grams per cubic centimeter. Sources of error that could be eliminated to give you an answer closer to the actual density of water include more accurately calculating the volume of each hull, using something smaller than pennies and including the hull's weight in your calculations.)
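To put numbers on it, suppose a hull encloses about 200 cubic centimeters and sinks when the 72nd penny is added. Modern U.S. pennies weigh roughly 2.5 grams each, so 72 pennies come to about 180 grams, and 180 grams ÷ 200 cubic centimeters ≈ 0.9 gram per cubic centimeter, just under the density of water, which is what you would expect right before the hull goes under. (These particular numbers are only an illustration; your own hull's volume and penny count will differ.)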
Be sure to recycle the aluminum foil when you are done testing your hulls.
More to explore
How does a boat float if it's heavy?, from the University of California, Santa Barbara, ScienceLine
Rainbows on Titan, from NASA Science News
Cartesian Diver, from PBS Kids DragonflyTV
How Much Weight Can Your Boat Float?, from Science Buddies | <urn:uuid:9945a605-2fa0-4167-bef5-8c6d9a15c6ac> | 3.53125 | 356 | Knowledge Article | Science & Tech. | 52.026234 |
A research team led by Australian engineers has created the first working "quantum bit" based on a single atom in silicon,
En-Lugal wrote:They made a hard-drive.
See more evidence you don't understand the science.
A quantum bit based on a single atom is going to be able to transfer more information at any one time than a photon of light.
En-Lugal wrote:What the Australian Team is working on now is irrelevant and has zero impact on the original article At1 posted. The American Team sent the information farther unless you can produce proof that they didn't.
You crack me up .... As always ... you have that back the front ..... what the American / European team did in At1 post has NO RELEVANCE AT ALL, on either what the Australian team is doing today .... nor on what they did 10 years ago.
nor does it even have any relevance to what the Japanese & Chinese scientific teams did all those years ago.
that fact is the European & American teams research has had no impact on any aspect of the science what so ever.
En-Lugal wrote:The American Team sent the information further and were recognized for it in the scientific community.
You claim that the scientific world sees the fact that they sent a beam of light a few hundred meters further than the Chinese did in 2011 as a major breakthrough ..................
I say where did you see that ..... all there is are a few reports in a few local newspapers & on a few geek websites ( who also don't know the history here ) ........ go check them out for yourself.
En-Lugal wrote:Take modern or even older machines for example. The information on them was stored on the HDD or another medium like floppy, flash-drive, CD, DVD, etc. Information couldn't be shared any other way before the modem was invented. In essence, that is what the American Team is working on.
You're so wrong in your facts .... You're just making stuff up .... you're such a liar. The US team is working on no such thing .....
They are still sending a beam of light via copper cables. You keep telling us they are sending the info further than the Australian team.
Now ... your trying to claim that the Americans have been working on the hard drive, The Quantum computer all this time.
You're such a liar.
En-Lugal wrote:I understand perfectly what's happening here and I recognize your distasteful, dishonest, biased "debating" style for what it is.
Look who's talking ... You don't know what your talking about so you just make it up.
En-Lugal wrote:All it proves is that the 2002 and 2011 are the same team. We already knew that, by the way. Pointing to this information and calling me stupid didn't prove anything. I and everyone else already knew they were the same team. This is more of that emotional biased arguing you do. It detracts from the real facts. Are you really this dense or is this purposefully done as I suspect?
& Yet again you make yourself look the fool. & Once more prove you don't get it & that you keep getting your facts wrong.
You keep copying things said in the articles, but when you type it, you get the facts all messed up.
Nobody was talking about the 2002 & 2011 teams being the same ( though they are )
I was pointing out that the 2002 & 2011 teams are the same team that is working on the Quantum computer.
The Quantum computer ( the hard drive ) is what makes the ability to transfer larger amounts of data then just a single photon of light.
Rath wrote:Scientists teleport Schrodinger's cat
Australian engineers write quantum computer 'qubit' in global breakthrough.
It's the same team, stupid ....... UNSW ..... University of New South Wales.
Clearly you didn't read anything, & your just pretending to. ( like AT1 )
& Clearly, your still not getting the point.
I find it intriguing that you linked this article. You managed to leave out the damning part that proves distance is, in fact, quite relevant.
From the article you linked:
The team’s greatest contribution is not necessarily the distance it made the data travel but the method it used to harness the 1.3-watt laser beam that carries it. The longer a beam of light travels, the more it spreads out, causing the photon to lose information and trail off course. To keep the beam on target, the researchers created a technique that focuses and steers the laser. Though beaming up humans and animals à la Star Trek is not on the agenda anytime soon, as the technology becomes more sophisticated, it will likely be applied to military communication.
I didn't leave it out, it was there all along .... & it just proves that i was / am right.
Like iv been saying all this time Distance is irrelevant ..... more so because the object is in both locations at the same time .... 0 & a 1 at the same time there but not at the same time .... cant travel faster than the speed of light, but yet it does @ the same time.
& also i pointed out that it was the method, speed, storage capacity that was far more important than the distance.
& what made the distance in Att1's post even less important, is that they only sent the photon a few hundred meters 20 miles further than the last team to do the same experiment.
En-Lugal wrote:This is what makes your whole asinine argument disintegrate, right here. The method to deliver the quantum information is via a laser. Distance is relevant as the further away the receiver, the more focused the beam has to be in order to receive anything intelligible on the other side.
Except, it's the technology ( the computer ) that determines the transfer rate & the strength of the beam. ( & I said that already ) But you & Att1 claimed it was wrong. ( & now you're both backtracking )
AGB (Asymptotic Giant Branch) stars generate a massive dust driven stellar wind at the end
of their lives. Ideally, this mass loss is spherical if the physical conditions are homogeneous
at the stellar surface (e.g. temperature) and the stellar vicinity (e.g. density). Indeed,
several physical processes induce deviations from these ideal conditions. This will affect
the condensation of dust and therefore the mass loss rate. Inhomogeneities can also be caused
by cool spots at the stellar surface. These temperature inhomogeneities can arise from a magnetic field or from a huge convection cell within the stellar envelope. Both options are possible at the surface of AGB stars.
This thesis introduces a model for the investigation of the mass loss above cool spots.
For that purpose a radiation hydrodynamic simulation (including a gas, a dust and a radiation
component) has been used and modified for the special purposes of this problem. A
geometry has been chosen which could have been produced by a magnetic field in the lower
stellar atmosphere. Finally, a discussion has been carried out about the creation of dense
knots in planetary nebula as a result of cool regions at the stellar surface.
The result supports the theory that stellar spots generate significant inhomogeneities of
the mass loss. But the formation of dense knots in planetary nebulae
(e.g. in Helix
Nebula or the Eskimo Nebula)
has to be interpreted
as a combination of inhomogeneities in the mass loss together with hydrodynamical
instabilities. The model investigated describes the formation of initial inhomogeneities
which can be later amplified by an interaction of the slow AGB wind with the fast tenuous
wind of the hot central star of the planetary nebula.
Know the Science Behind the Flying V
The temperature is dropping and the leaves are falling off the trees as nature is preparing itself for the change in seasons. From up above you hear the honking and see the familiar sight of the flying V. That's right, it's migration time for many different species of ducks and geese (also known as waterfowl). But why do they fly in the V shape?
Scientists have found that migrating waterfowl choose the V formation for two reasons. The first is to conserve energy. Each bird flies slightly above the one in front of him, resulting in a reduction of wind resistance. Each bird takes a turn flying in front of the flock and falls back when they get tired. This way the flock can fly a longer distance without getting tired and stopping to rest.
The second benefit of the V formation is that it's easier to keep track of each individual bird in the flock. Each bird can see one another which assists with communication and coordination. Why are there more birds on one side of the V than the other? The simplest answer in this case is the correct one. There are simply more birds on that side. For more information, click here.
Things That Go Bump in the Night
You know they're out there. You hear them. You've seen the evidence. But what are they? Join us for our Family Nite Hike through West Neck Creek Natural Area on Saturday, December 10 from 7 to 8 pm. We'll explore the area, examine the evidence and find out what exactly is active in our parks while we sleep. This family-oriented hike includes opportunities to explore and discover as we hike to the Whitehurst-Buffington House, take in the night sky and enjoy a warm cup of cocoa. Be sure to dress for the weather and bring your flashlight! There is no cost to attend, but registration is required. Register soon as spaces are filling fast!
Family Nike Hike | 12/10 | 7-8 pm | No cost | #101659 - Register!
The Life Cycle of an Aluminum Can
Just like clockwork, you place your full blue recycling bin on the curb for pick up on your assigned day. When you bring the bin back to your house, it's empty. We all know that the items have gone to be recycled, but have you ever given it a second thought beyond that? What happens to the materials that get collected?
For an example, let's take a look your aluminum cans. Did you know that an aluminum can is 100% recyclable? According to TFC Recycling, aluminum is one of the most sought after commodities for recyclers. After being dumped in the recycling truck, your cans travel to a recycling facility where they are sorted and separated from the other materials. The aluminum is compacted into bales, which are then loaded onto trucks and transported to manufacturing facilities. From there, they'll be shredded, melted down and eventually reshaped back into cans. The new cans are shipped to beverage manufacturing companies, refilled with your favorite liquid refreshment, readied for shipment back to store shelves and eventually make it back to your home. Within 60 days of leaving the recycling facility, that new can might be right back in your blue bin waiting by the curb and the cycle continues.
Learn more about the recycling process of aluminum in this article by TFC Recycling. | <urn:uuid:5b889aa4-d4b9-4b13-a309-f82db616206e> | 3.171875 | 692 | Content Listing | Science & Tech. | 62.026221 |
Ars Technica: A team of researchers from Caltech and the University of Rochester, New York, has created the world’s most sensitive accelerometer. Accelerometers are most familiar in reference to smartphones, which use them to recognize when the device has been rotated. Accelerometers are also used to trigger airbags in cars to inflate during a collision. The new accelerometer, however, is more likely to be used in research labs than in consumer products. The researchers etched a spring-like system out of a silicon nitride membrane, with a mass on the spring of just 10 pg. The resulting system had a resonance frequency of just under 30 kHz and could detect accelerations at a rate of about 15 kHz. That sensitivity is then enhanced further by pairing the mechanical resonator with an optical resonator. Next to the spring the researchers placed a zipper-like structure, which works like a pair of mirrors that bounce light back and forth. Oscillations in the spring stretch the structure, altering the frequency of the light in the optical resonator. The altered frequency allows for easy detection of the otherwise unnoticeable variations in the physical spring’s frequency. The researchers also showed that the device can detect such tiny accelerations that anything smaller would be undetectable because of quantum fluctuations.
Nature: Deposits of methane trapped in the sea floor could greatly increase the effects of global warming if released through drilling or natural processes. A newly discovered deposit of methane hydrate off the coast of Canada is the shallowest such deposit found to date, at just 290 m below sea level. Above 270 m, gas hydrates—crystalline solids of gas trapped in ice—are unstable. Because the deposit is so close to the boundary level, any warming in the water above the sea floor could cause the crystalline ice formation to melt and release the trapped methane. The newly discovered deposit is relatively small, says Charles Paull of the Monterey Bay Aquarium Research Institute in California, so its release would not have a significant impact on climate. However, the shallowness of the methane provides researchers an opportunity to study the nature of such deposits and the events that occur as they decompose.
Washington Post: A Chinese auto parts manufacturer, Wanxiang America, won the bidding for bankrupt lithium-battery manufacturer A123 Systems, based in Waltham, Massachusetts. Wanxiang will pay $256.6 million for A123’s technology, manufacturing facilities in the US and China, and contracts with utilities and automakers. A123’s government and military contracts, however, will go to a US company, Navitas System, in Woodridge, Illinois. Despite $249 million in grant money from the US Department of Energy, A123 was forced to declare bankruptcy last October after suffering a series of setbacks, including a major battery recall.
Science News: Dating from the period of the Nazca civilization, the famous large-scale drawings in the Peruvian desert have puzzled archaeologists for almost a century. Clive Ruggles of the University of Leicester and Nicholas Saunders of the University of Bristol, both in the UK, believe that at least one of the patterns is a walkable labyrinth. Because the pattern can only be seen clearly from the air, walkers would not have known what path they were taking. The path Ruggles and Saunders examined consists of 15 sharp turns, several large curves around hills, and even a spiral, before ending about 60 m away from where it began. They believe the total walking time would have been only about one hour. The two researchers had to reconstruct parts of the path that had been washed away by rain, and it took four years of fieldwork to piece together a map of the full labyrinthine pattern. Although they don’t know the reason why people would have walked the labyrinth, Ruggles and Saunders point to the lack of damage to the rocks lining the paths as evidence that the paths were well taken care of. | <urn:uuid:f37e3b11-3a67-4898-bd37-5bbf302e18c6> | 3.328125 | 808 | Content Listing | Science & Tech. | 34.019512 |
How to prove the existence of graviton, boson?
What is the smallest particle found?
How come a particle has no charge, no mass and no existence?
Gravitons are still theoretical.
The smallest particle with non-zero rest mass is still the electron.
The neutrino is a particle with zero rest mass (maybe) and no charge:
it carries only spin and momentum.
Update: June 2012 | <urn:uuid:471f018c-24ec-4975-9aa4-0fc224eeb032> | 2.890625 | 100 | Content Listing | Science & Tech. | 54.83 |
Fjords (pronounced fee-YORDS) are typically long, narrow valleys with steep sides that were created by advancing glaciers. As the glaciers receded they left deep channels carved into the earth with a shallow barrier, or narrow sill, near the ocean. The sill restricts water circulation with the open ocean and dense seawater seldom flows up over the sill into the estuary. Typically, only the less dense fresh water near the surface flows over the sill and out toward the ocean. These factors cause fjords to experience very little tidal mixing; thus, the water remains highly stratified. Fjords are found along glaciated coastlines such as those of British Columbia, Alaska, Chile, New Zealand, and Norway.
In the animation below, the blue-colored fresh water is seen flowing over the narrow sill of the fjord on the far right-hand side of the image into the ocean. Almost none of the green-colored seawater is able to make it over the sill into the estuary. | <urn:uuid:d5c1bb40-6d2d-432f-9f28-a4ea0226ac15> | 4.03125 | 227 | Knowledge Article | Science & Tech. | 42.655164 |
Lunar Research Station Design Challenge
By the Murphy Science Research 9 Classes
Peary Crater is the large crater closest to the lunar north pole (88.6° N, 33.0° E) and is about 73 km wide. From Earth the crater appears on the northern lunar limb and is seen nearly edge-on. It is nearly circular, with an outward bulge along the northeast rim. It has now been determined that four mountainous regions on the rim of Peary Crater appear to remain illuminated for the entire lunar day. These unnamed "mountains of eternal light" are possible because of the Moon's extremely small axial tilt, which also gives rise to permanent shadow at the bottoms of many polar craters.
The northern rim of Peary Crater is therefore considered a likely site for a future moon base: the steady illumination would provide both a relatively stable temperature and an uninterrupted solar power supply. The rim is also near permanently shadowed areas that may contain some quantity of frozen water (covered later).
Energy and Life Support
The first part deals with crop systems as the source of food. Plants would be grown under artificial light from LEDs (light-emitting diodes), which are efficient and use relatively little energy (although they would have to be special LEDs that emit only the wavelengths plants can use for photosynthesis), augmented by the natural solar insolation available at Peary Crater.
The greenhouse design includes fans to keep the air moving so that plants receive carbon dioxide, and mirrors to supplement the light. Loam would be the soil of choice; regolith could be used, but it would have to be enriched with nutrients. Soil particles should be 1-2 millimeters in size: in the Moon's low gravity (about one-sixth of Earth's) this particle size helps water distribute more evenly and lets the plants get both water and air.
Ideal crops should have high yield, few inedible parts, and resistance to disease; candidates include soybeans, sweet potatoes, carrots, tomatoes, peppers, lettuce, parsley, rice, grapes, and broccoli. The resulting diet is generally healthy, low in fat and cholesterol, but it lacks protein and minerals such as iron. These shortfalls would have to be made up with supplements, while the proteins and oils found in beans could serve as a replacement for meat.
In addition, the greenhouse facility serves as a waste management center. Feces would be put into a bioreactor and turned into fertilizer for the plants. The plants produce oxygen and absorb the carbon dioxide the crew exhales: a person needs about 0.63 kg of oxygen per day, a plant uses about 50 g of carbon dioxide per day, and a human produces about 1,000 g of carbon dioxide per day. The process of transpiration can also clean water, as shown in the diagram below.
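A rough balance check, using only the figures quoted above (the 50 g per-plant uptake is the project's own estimate, not a measured value), gives a sense of the planting scale required:

1,000 g CO2 exhaled per person per day ÷ 50 g CO2 absorbed per plant per day ≈ 20 plants per crew member, just to keep pace with respiration.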
The first plan is a method of obtaining water, oxygen, and hydrogen fuel from the lunar crust. The lunar surface is rich in a mineral called ilmenite, composed of FeTiO3. To obtain the necessary resources from this mineral, we will break it down in a series of decomposition reactions:
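The reactions themselves did not survive in this copy of the document. The standard hydrogen-reduction scheme for ilmenite, which is presumably what was intended here, runs roughly as follows:

FeTiO3 + H2 -> Fe + TiO2 + H2O    (hydrogen reduction of ilmenite at roughly 900-1000 °C)
2 H2O -> 2 H2 + O2                (electrolysis of the recovered water)

The hydrogen from electrolysis is recycled back into the first step, the oxygen is stored for life support or propellant, and part of the water can be tapped directly.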
The second method is the safer approach of simply shipping water from Earth. The drawback is that this is very expensive and somewhat inefficient: it may take weeks for water to reach the Moon, and it costs approximately $10,000 per kilogram sent. The benefit is that the water can be recycled in various processes (as shown above and in the diagram below), and a portion of the shipped water can be used to produce oxygen and hydrogen fuel.
Lastly, water recovery can be accomplished in several ways: from naturally occurring ice in craters (doubtful, and risky to depend on), by chemical extraction from regolith, or by importation from Earth at high cost (as noted above).
However, "waste water sources", which include humidity condensate, urine, and hygiene water, can be reclaimed by condensing heat exchangers, which also control cabin humidity. The moisture in the cabin air may be lightly contaminated with ammonia or other water-soluble organics such as carboxylic acids and alcohols, which are present at low concentrations.
Multifiltration is a process that has been tested by both Russian and American astronauts. Multifiltration (MF) removes dissolved contaminants by passing the waste water through an upstream filter followed by a series of adsorbent materials designed to remove specific chemicals. Particulate material is removed by filtration, and dissolved salts and organics are removed by various ion-exchange resins. Chemicals with low molecular weight, such as alcohols and urea, that are not effectively removed by sorption are destroyed in a catalytic oxidation post-treatment, in which oxygen, heat, and pressure chemically convert them into CO2 and H2O.
Lastly, iodine is used to disinfect the water: it is a biocide that keeps bacterial colonies in check when present in small amounts. A Microbial Check Valve (MCV) passes the water through an iodinated resin, and stored iodine solutions can be used to replenish the iodine level in the filter when necessary.
Many technologies could generate the energy we need, but solar energy is by far the best; the other options are too expensive or simply do not suit conditions on the lunar surface.
Solar heating is one possibility: insolation-heated hot-water systems use sunlight to heat water (or another working fluid) for domestic hot water or space heating. These systems consist essentially of solar thermal collectors and a storage tank.
Photovoltaics
Solar cells, also referred to as photovoltaic cells, are the most probable source of energy. These devices use the photovoltaic effect in semiconductors to generate electricity directly from received insolation. Their manufacture could even be pioneered on the Moon, given the abundance of suitable raw materials there, and they can also power orbiting communications satellites and other spacecraft.
Energy storage will be necessary: surplus electricity must be stored for use during hours of darkness on the lunar surface (although, again, Peary Crater has relatively few such hours). The following list includes both mature and immature techniques (bold indicates the types I believe are feasible and probably the best):
The others would serve as backup storage systems that can be implemented easily (though most work better on Earth than on the Moon).
Exploration and EVA activities:
A lunar base would provide an excellent site for any kind of observatory for deep-space exploration. Because the Moon's rotation is so slow, visible-light observatories could observe for days at a time, and the lack of an atmosphere means the view of the heavens is unobstructed, unlike on Earth. An infrared instrument would benefit from the very cold temperatures, and a radio telescope would benefit from being shielded from Earth's broad-spectrum radio interference.
A futuristic design of a lunar craft.
Lunar colonists will want the ability to move over long distances, to transport cargo and people between modules and spacecraft, and to carry out scientific study of a larger area of the lunar surface over long periods. Rovers would be useful where the terrain is not too steep or hilly. A design has been created for a crewed pressurized rover for two people, with an effective range of 396 km, which greatly aids EVA activity.
An experiment carried out in the 1950s at the U.S. Naval Research Laboratory tested a passive moon-relay system and demonstrated the Moon's ability to reflect radio signals in a coherent and predictable manner. Today we could implement something similar as a passive circuit in a radio-communication and satellite system at the Moon.
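One practical constraint worth stating explicitly (this figure is a back-of-the-envelope addition, not part of the original project description): any Earth-Moon link carries a noticeable propagation delay, since radio waves travel at the speed of light.

one-way delay ≈ 384,400 km ÷ 300,000 km/s ≈ 1.3 s, so a full round trip (command plus acknowledgement) takes about 2.6 s.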
Length: 107 m (358 feet)
Gross lift off weight: 3350 t (7.4 million lbs.)
Payload capacity: 130 t (287,000 lb) to low Earth orbit. After docking with the separately launched CEV (crew exploration vehicle, also known as Orion), the Earth Departure Stage (EDS) carried by the Ares V will be able to propel 65 t (143,000 lb) toward the Moon.
IBM Unknown Compound
Date added: 02 Aug 2010
In a pioneering research project, scientists at IBM and the University of Aberdeen have for the first time collaborated to "see" the structure of a marine compound from the deepest place on Earth using an atomic force microscope (AFM). The results open up new possibilities in biological research that could lead to faster development of new medicines. The experiment was the first successful use of an AFM to determine what was, at the time, an unknown molecular structure. The image shows a low-pass-filtered three-dimensional representation of the unknown compound. Image courtesy of Nature Chemistry.
Science Fair Project Encyclopedia
The wave equation is an important partial differential equation which generally describes all kinds of waves, such as sound waves, light waves and water waves. It arises in many different fields, such as acoustics, electromagnetics, and fluid dynamics. Variations of the wave equation are also found in quantum mechanics and general relativity.
The general form of the wave equation for a scalar quantity u is:
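The displayed equation appears to have been lost from this copy (it was probably an image in the original). In the usual notation it reads:

\[ \frac{\partial^2 u}{\partial t^2} = c^2 \, \nabla^2 u \]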
Here c is a fixed constant, the speed of the wave's propagation (for a sound wave in air this is about 330 m/s; see speed of sound). For the vibration of a string the speed can vary widely: on a spiral spring (a slinky) it can be as slow as a meter per second.
u = u(x, t) is the amplitude, a measure of the intensity of the wave at a particular location x and time t. For a sound wave in air, u is the local air pressure; for a vibrating string, it is the displacement of the string from its rest position. The symbol ∇² denotes the Laplace operator with respect to the location variable(s) x. Note that u may be a scalar or a vector quantity.
The basic wave equation is a linear differential equation, which means that the amplitude of two interacting waves is simply the sum of the individual waves. This also means that the behavior of a wave can be analyzed by breaking the wave up into components. The Fourier transform breaks a wave up into sinusoidal components and is useful for analyzing the wave equation.
The one-dimensional form can be derived by considering a flexible string stretched between two points on the x-axis. It is:
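The formula itself is missing here as well; the one-dimensional equation is:

\[ \frac{\partial^2 u}{\partial t^2} = c^2 \, \frac{\partial^2 u}{\partial x^2} \]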
The general solution to this is a Fourier series: an infinite sum of sine and cosine waves. If the domain of the equation is infinite with no boundary conditions, then D'Alembert's method can be used to solve it.
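For reference (the original likely displayed this formula), d'Alembert's solution expresses u as the superposition of a right-moving and a left-moving wave:

\[ u(x, t) = F(x - ct) + G(x + ct) \]

where the functions F and G are fixed by the initial displacement and initial velocity.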
In two dimensions, expanding the Laplacian gives:
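Again the displayed equation is missing; in Cartesian coordinates the expanded form is:

\[ \frac{\partial^2 u}{\partial t^2} = c^2 \left( \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} \right) \]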
An example of the solution to the 2-D wave equation is the motion of a tightly-stretched drumhead. In this case, rather than sinusoids, the solutions are combinations of Bessel functions.
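As an illustration (not part of the original text), a single vibrational mode of a circular drumhead of radius a, written in polar coordinates (r, θ), has the form:

\[ u(r, \theta, t) = J_m(k_{mn} r)\, \cos(m\theta)\, \cos(c\, k_{mn} t) \]

where J_m is a Bessel function of the first kind and k_{mn} is chosen so that J_m(k_{mn} a) = 0, which keeps the rim of the drumhead fixed.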
The wave equation is the prototypical example of a hyperbolic partial differential equation.
More realistic differential equations for waves allow for the speed of wave propagation to vary with the frequency of the wave, a phenomenon known as dispersion. Another common correction is that, in realistic systems, the speed also can depend on the amplitude of the wave, leading to a nonlinear wave equation:
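The displayed equation is missing from this copy; one simple form consistent with the description, in which the propagation speed depends on the local amplitude, is:

\[ \frac{\partial^2 u}{\partial t^2} = c(u)^2 \, \nabla^2 u \]

This is only one of several conventions for writing an amplitude-dependent wave equation.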
The elastic wave equation in three dimensions describes the propagation of waves in an isotropic homogeneous elastic medium. Most solid materials are elastic, so this equation describes such phenomena as seismic waves in the Earth and ultrasonic waves used to detect flaws in materials. While linear, this equation has a more complex form than the equations given above, as it must account for both longitudinal and transverse motion:
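The equation did not survive extraction; the standard form for an isotropic homogeneous medium (the Navier equation, which is presumably what appeared here) is:

\[ \rho\, \frac{\partial^2 \mathbf{u}}{\partial t^2} = \mathbf{f} + (\lambda + 2\mu)\, \nabla(\nabla \cdot \mathbf{u}) - \mu\, \nabla \times (\nabla \times \mathbf{u}) \]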
- λ and μ are the so-called Lamé moduli describing the elastic properties of the medium,
- ρ is density,
- f = f(x, t) is the source function (driving force),
- and u = u(x, t) is the displacement vector.
Note that in this equation, both force and displacement are vector quantities. Thus, this equation is sometimes known as the vector wave equation.
- Linear Wave Equations at EqWorld: The World of Mathematical Equations.
- Nonlinear Wave Equations at EqWorld: The World of Mathematical Equations.
The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License.