If the oceans eventually become too acidified to sustain most marine life and the jellyfish take over, we can at least take solace in the fact that we’ll have an abundant source of renewable energy. GFP (Green Fluorescent Protein), the same protein isolated from the jellyfish Aequorea victoria that earned three researchers the Nobel Prize in chemistry in 2008, has found a new lease of life in solar and fuel cells being developed by Zackary Chiragwandi at the Chalmers University of Technology in Sweden. Much like the dye found in cutting-edge dye-sensitized solar cells, GFP absorbs a specific wavelength of sunlight—in this case, ultraviolet light—to excite electrons that are shuttled off to an aluminum electrode to generate a current. After giving up their energy, the electrons are then returned to the GFP molecules, where they are ready for another round of stimulation (so to speak).
The cell’s design is simple: two aluminum electrodes are placed onto a thin layer of silicon dioxide, which helps to optimize light capture and energy conversion efficiency, and a single drop of GFP is deposited between them. Without prodding, the protein then self-assembles into strands that connect the electrodes and form a tiny circuit. While cheaper than conventional solar cells, dye-sensitized cells still require some costly materials and are hard to build, making these bio-inspired cells potentially a much more alluring proposition down the line. And because slightly different versions of GFP are found in a number of other marine species, there is the potential for an entire array of more finely tuned GFP cells.

Chiragwandi and his colleagues also employed the same basic components to fabricate a rudimentary fuel cell. A mixture of reagents that includes magnesium and luciferase, an enzyme used for bioluminescence, produces the light that activates GFP’s electrons and helps the device run—no direct sunlight needed. Because of its minuscule size and low power output, the fuel cell might be a good fit for a wide range of medical nanobots that could one day patrol our bloodstreams and treat our illnesses from within.
These devices are just the latest in a long line of renewable energy technologies that seek to bring down costs and boost efficiency by capitalizing on Mother Nature’s designs. Just a few weeks ago, 80beats’ Joe Calamia wrote about an Australian team’s discovery of chlorophyll f, a pigment that captures light in the infrared wavelength, in cyanobacteria. Since none of the current solar cells can absorb IR light, which accounts for over half of the sun’s rays, some researchers are already giddy at the prospects of harnessing this pigment for use in more efficient cells.
In the realm of science fiction, there’s also the “human” fuel cell (I almost hesitate to use the term since it inevitably brings to mind The Matrix’s silly human batteries) being developed by a group of French scientists, which Discovery News reported on several months back. This device would run on a combination of oxygen and glucose—theoretically ad infinitum—and could thus quite easily be implanted in not only humans but a variety of animals as well. Sure, it probably wouldn’t have much juice but, like the jelly-fuel cell, it could handily power those nanobots.
And if that’s all too highfalutin for you, there’ll always be the trusty poop-powered mobile.
In some past blogs I talked about the unfortunate lack of computer resources available to the National Weather Service (NWS), resulting in the U.S. trailing behind many international numerical weather prediction centers. This lack of computer resources undermines the ability of the NWS to run high-resolution weather models, to move effectively into probabilistic weather prediction, to enhance hurricane prediction, and much, much more. Far smaller nations with much more benign weather, such as England and South Korea, possess weather supercomputer facilities that dwarf ours.
The cost to the American people of inferior weather computer resources is substantial, both in dollars and in lives.
While U.S. operational weather prediction is provided inadequate computer resources, climate prediction, including studies of potential human-forced global warming, enjoys the availability of huge supercomputers, with capacities hundreds of times larger than what is available for weather prediction.
It is not a little ironic that great emphasis is placed on acquiring state-of-the-art petaflop supercomputers for climate change, while in a year of a dozen billion-dollar weather disasters, the NWS is not given critical tools needed to protect the American people. Someone has their priorities wrong.
Let me give you a few examples. As noted in an earlier blog, the National Weather Service operational computer system has 4992 processors for a total computational capacity of .07 petaflops (a petaflop is one quadrillion floating point operations per second). Keep this .07 petaflop number in mind.
[Image: The new 1.1 petaflop GAIA computer just acquired by NOAA]
Let's begin with the 1.1 petaflop GAIA computer recently acquired by NOAA, a machine sixteen times more capable than the NWS weather prediction computer. This machine will be dedicated to climate research (see an article here on this machine).
The National Center for Atmospheric Research, an entity mainly funded by the National Science Foundation, is now completing a 70 million dollar facility that includes the new Yellowstone computer, capable of 1.5 petaflops. This machine will be used mainly for climate research (article here). A great irony is that this machine uses HUGE amounts of power, power that will come from coal-fired power plants that emit lots of CO2.
[Image: NASA Pleiades supercomputer]
I could go on and on naming other supercomputers owned by the U.S. Department of Energy and others that are used for climate research--it would be a very long list. The bottom line is that the computational power available for climate simulations, for understanding and predicting climate change over the next few decades to a century, absolutely dwarfs what is available for predicting the weather and for understanding how weather systems work.
This makes no sense.
Imagine if one of the petaflop machines was made available for weather prediction. The forecast skill of the U.S. global weather models could be substantially increased--providing skillful forecasts further out in time. We could run models with enough resolution to get the fine scale structures of hurricanes and other storms. A new age of probabilistic weather prediction could begin, with high-resolution ensembles providing uncertainty information for local weather features. The impact would be huge, saving hundreds of millions or billions of dollars in economic impacts from severe weather, protecting lives, and improving the functioning of our air traffic control and highway systems. Real and profound benefits.
Now don't get me wrong. Understanding and modeling climate change is important. But there are dozens of supercomputers in the U.S. that are quite capable of this task--and remember that climate simulations don't have to be done within a set schedule like weather predictions. And there are many groups around the world doing the same type of global climate simulations--and quite frankly all the better models get essentially the same results. There is a vast overkill in pushing computer resources for climate prediction, while weather prediction is a very poor cousin. And consider the fact that with all the supercomputers available for climate prediction, the uncertainty of the predictions for the next century has remained essentially unchanged. The reason: adding more physics and interactions in the models adds uncertainty since the problem is getting more and more complex.
Why is this gross imbalance happening? That is something I will leave to the comment section of this blog. But it is clear that the leadership in NOAA, the Department of Commerce, and other Federal agencies has let this go on too long, to the detriment of the American people. Our congressional representatives and others need to intervene. The U.S. Office of Management and Budget (OMB) needs to evaluate this situation more carefully.
And don't forget that improved weather prediction is critical for dealing with climate change.
Mankind is doing very little to stop anthropogenic global warming--we are going to do the experiment. We are ALREADY doing the experiment. Thus, adaptation will be critical and what is more important for adaptation than improved weather forecasts? If climate extremes will increase under global warming we need to be able to predict them in the short-term to protect people and property. Furthermore, there is no better way to improve climate models than to improve weather models, since essentially they are the same. You learn about model weaknesses from daily forecast errors in a way you can't do in climate predictions.
[Image: Climate Computer Support]
[Image: Weather Computer Support]
We can do BOTH climate and weather prediction, but more balance in resource allocation is needed.
PS: Probcast, the UW high-tech probabilistic prediction system (www.probcast.com), is back up! The software is now on my department servers, so it should be far more dependable.
PPS: My lost dog was spotted in Mountlake Terrace (the person was pretty sure about this)--near the intersection of 238th Place SW and 52nd Ave. If you live in the area, let me know if you see her! (See right panel for more information.)
Welcome to Matter Anti-Matter, a site about nerd stuff. By day, I'm Head of Community at Kickstarter.
You can also find me here.
Researchers from the Australian National University have announced that they have built a device that can move small particles a meter and a half using only the power of light.
Physicists have been able to manipulate tiny particles over minuscule distances using lasers for years. Optical tweezers that can move particles a few millimeters are common.
Andrei Rhode, a researcher involved with the project, said that existing optical tweezers are able to move particles the size of a bacterium a few millimeters in a liquid. Their new technique can move objects one hundred times that size over a distance of a meter or more.
The science behind optical tweezers won’t work in the vacuum of space, but we’re gettin’ there. 1 meter at a time.
On today's Moment of Science, we're talking about the giant weta, a record holder of the insect world that needs our protection.
Giant penguin populations started to decline about twenty five million years ago.
How lancewood tree leaves change to defend against the moa, a flightless bird.
Scientists are studying the extinct giant moa bird and its environment. Find out what kinds of artifacts they are searching for to help their study!
You probably haven't heard of the Sooty Shearwater. This creature can travel 40,000 miles in just 200 days, which is the longest migration we have on record.
As you know, bats are the only flying mammals, and, as their bodies became increasingly specialized for flight over the course of evolution, most species lost the ability to walk. Only a couple of exceptions are known, including a species of bat that lives in New Zealand and the common vampire bat of South and Central America. While other bats can only shuffle, these bats use their wings as forelimbs, so they can walk around like any other four-legged animal. Learn more on this Moment of Science.
What role do hand gestures play in regular conversation? Find out on this Moment of Science.
Over the past few decades, the earthworm population has severely decreased. Since the New Zealand flatworm accidentally made its way to the British Isles in 1963, it has wreaked havoc on some of the land, devouring fields of earthworms.
In C and C++, you can write a number in three ways: a) decimal, b) hexadecimal, c) octal.
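For instance, a minimal C sketch (the variable names are mine, purely for illustration):

```c
#include <stdio.h>

int main(void)
{
    int dec = 42;    /* decimal literal                */
    int hex = 0x2A;  /* hexadecimal literal: 0x prefix */
    int oct = 052;   /* octal literal: leading zero    */

    /* All three spell the same value, so this prints: 42 42 42 */
    printf("%d %d %d\n", dec, hex, oct);
    return 0;
}
```

The leading-zero octal syntax is a classic trap: 052 is forty-two, not fifty-two.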
Windows comes with two types that represent a Boolean variable (TRUE or FALSE). Both represent FALSE if 0 and TRUE if non-zero.
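Presumably the two types meant here are BOOL and BOOLEAN from the Windows headers; a small sketch:

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    BOOL    a = TRUE; /* typedef int  BOOL;    from windef.h */
    BOOLEAN b = 1;    /* typedef BYTE BOOLEAN; from winnt.h  */

    /* Any non-zero value counts as TRUE, so test truthiness directly;
       comparing against TRUE (which is 1) can misfire on values like 2. */
    if (a && b)
        printf("both are true\n");
    return 0;
}
```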
A union is a memory location that is shared by two or more different types of variables. A union provides a way of interpreting the same bit pattern in two or more different ways (or forms).

In fact, unions share lots of characteristics with structures, like the way they are defined and marshaled. It might be helpful to know that, like structures, unions can be defined inside a structure or even as a single entity. In addition, unions can define complex types inside, like structures can.
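A short illustration (the type and member names are mine): the union below lets the same four bytes be read either as a float or as raw bytes.

```c
#include <stdio.h>

/* Both members start at the same address, so the union is only as large
   as its largest member, and writing one member rewrites the other. */
union FloatBits {
    float         f;
    unsigned char bytes[sizeof(float)];
};

int main(void)
{
    union FloatBits u;
    u.f = 1.0f;

    /* Re-interpret the float's bit pattern as raw bytes. */
    for (size_t i = 0; i < sizeof u.bytes; i++)
        printf("%02X ", (unsigned)u.bytes[i]);
    printf("\n");
    return 0;
}
```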
5.1. Morphological Types and Masses of Double Galaxies
In the preceding chapters we have viewed galaxies in pairs as elementary units, ignoring any differences of morphological type. By taking these differences into account we can find new laws concerning the origin and coupled evolution of double galaxies. The first version of the catalogue of pairs (Karachentsev, 1972) incorporated only broad divisions of double galaxies into ellipticals and spirals. In the present version of the catalogue (Appendix 1), we extend this to a more detailed structural classification of galaxies by Hubble type: E, S0, Sa, Sb, Sc, Sm (see section 2.4). The morphological types are based largely on high contrast prints. For a small fraction of the cases, about 20%, we also had large scale plates taken by the author. In determining types, we further considered estimates of their colours from comparison of the blue and red Palomar charts.
Morphological classification of faint galaxies on the Palomar charts is often rather uncertain because of the high contrast and overexposure of the central regions of galaxies, which applies particularly to objects of low surface brightness. A further difficulty arises for tight pairs, where the classical indicators of morphological type are strongly disturbed by the effects of interaction. To estimate the errors in classification we turn to the data of Eichendorf and Reinhart (1981), who compiled a sample of double galaxies which satisfy any of the criteria of Holmberg (1937), Karachentsev (1972) or Turner (1976a). For their sample they collected data on the apparent axial ratio, position angles and morphological type (E, S0, Sa, Sb, Sc, Irr) for 1105 pairs of galaxies. The distribution by Hubble type of 323 galaxies in common between our catalogue (K) and the sample of Eichendorf and Reinhart (ER) is presented in Table 11, from which we can draw some conclusions. The fractional number with identical classification (the sum of the diagonal elements of the matrix) is 41%. The number of cases with errors of one and two classes are respectively 41% and 13%. The number of galaxies for which the difference in type is ΔTy ≥ 3 does not exceed 5%. The mean quadratic error in determining the Hubble type is <ΔTy²>^1/2 = 1.18, with a mean value <Ty_K − Ty_ER> = −0.8 ± 0.07, showing that any classification errors are not large (4). Note, however, that this comparison was performed on a selection of bright objects, for which the structural types may be determined with more certainty than for galaxies closer to the catalogue limit.
Table 12 shows how the pairs in the catalogue are distributed by morphological types of the brighter and fainter components. Here and in what follows we will discuss only those pairs for which the orbital mass-to-luminosity ratio does not exceed 100 f. It follows from these data that there is no large difference between the structural types of the bright and faint components. The galaxies of a given Hubble class (excluding Sm) have an excellent chance of being either the brighter or the fainter member of a double system. The main property of this matrix of morphological types is the excess number of pairs along its diagonal. The tendency of galaxies located in pairs to share morphological types was remarked by Karachentsev and Karachentseva (1974) and confirmed by Noerdlinger (1979). To estimate the magnitude of this effect: if π_i is the relative number of Hubble type i (i = 1, 2, ..., 6) in a sub-sample of double galaxies, then the probability of encountering two galaxies in a double system with types i and j is given by the relative fractions p_ij = 2π_i π_j (i ≠ j) and p_ii = π_i². The observed and expected number of pairs are presented in matrix form in Table 13. As we see, double galaxies with the same structural type are encountered in the catalogue significantly more often than expected by chance. This excess even occurs for pairs with mixed Hubble types. Comparison of these two distributions using a χ² criterion shows that the assumption of random distributions of morphological types may be ruled out at the level p << 10⁻⁴. The largest effect is seen for galaxies at the extremes of the Hubble sequence, Sm and E.
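To make the expected-number bookkeeping concrete, here is a worked example with illustrative fractions (these are not the catalogue values from Table 14):

```latex
% Illustrative values: N = 487 pairs, \pi_E = 0.1, \pi_{Sb} = 0.3.
p_{EE} = \pi_E^{2} = 0.01, \qquad
p_{E\,Sb} = 2\,\pi_E \pi_{Sb} = 2(0.1)(0.3) = 0.06,
\quad\Rightarrow\quad
n_p(E{+}E) = N\,p_{EE} \approx 4.9, \qquad
n_p(E{+}Sb) = N\,p_{E\,Sb} \approx 29.2 .
```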
It is well known that the distributions of elliptical and spiral galaxies are coupled strongly to the local concentration of galaxies. Objects of early type, E and S0, concentrate in the central regions of rich clusters, while spirals and irregular galaxies are distributed on the edges of clusters, in smaller groups and throughout extra-galactic space. This universal law is just as visible in the simplest systems, the double galaxies. Table 14 presents the number n_i and relative fraction π_i of various Hubble types among catalogue double galaxies and single (isolated) galaxies from the Karachentseva (1973) catalogue. Also shown is a sub-sample of wide pairs in which the components have a projected mutual separation exceeding 50 kpc. It is apparent that the late types (Sb, Sc, Sm) are encountered less often among double galaxies than among isolated systems. This conclusion is supported by the statistics for the axial ratios of spiral galaxies in both catalogues (Karachentsev and Karachentseva, 1974), as well as by radio data on the abundance of neutral hydrogen (Balkowski and Chamaraux, 1981). Note that the photometric limits in the region of the sky covered by the catalogues of pairs and of isolated galaxies are identical, and therefore differences in the occurrence of early and late types cannot be artefacts of any sort of selection effect.
The morphological type segregation is also present among double systems as a function of separation. Wide pairs with X > 50 kpc contain a smaller number of elliptical and lenticular galaxies, approaching the values found among isolated systems, and objects of irregular types are almost absent among these wide pairs. These properties are apparently due to differences in the formation conditions for wide and close double systems.
In the literature one may encounter additional discussions about the dependence of the structural type of a galaxy on that of its nearest neighbour. One source of such an effect might be the heating of the disk of a spiral galaxy due to the passage of massive satellites. Numerical experiments by Gerhard and Fall (1983) show, however, that 'blowing up' a galaxy (increasing the stellar velocities in the z coordinate) is effective only when the velocity of relative motion of the two galaxies is more than three times the mean rotation velocity of the disk. This condition is satisfied in rich clusters, but in pairs and small groups this method is not sufficient to change the thin disk of a spiral into a thick one. Dressler (1980) examined a further mechanism for morphological segregation, according to which the formation of gaseous disks in spiral galaxies, and stellar sub-systems within the disks, takes place over a time t ≈ 2 × 10⁹ years. If galaxies have close neighbours during this period, then their presence may block or delay the formation of flat sub-systems.
We now return to the question of the correlation of structural types between galaxies in pairs. Let n_0(ΔTy) indicate the total observed number of double systems in the catalogue with Hubble type difference ΔTy = 0, 1, ..., 5, and let n_p(ΔTy) be the analogous expected number of systems for a random distribution of galaxies in pairs. These numbers are easily derived from the diagonal elements of the matrix in table 13. We introduce the correlation function by morphological type, Ω(ΔTy) = n_0(ΔTy) / n_p(ΔTy) − 1.
The values of this function for the sample of 487 pairs are presented in figure 33 as diamonds, the height of which indicates the statistical standard deviation in the mean. The data for 118 wide pairs are indicated by the points, with their associated standard deviation. NOTE: We will have to include the more recent Japanese work on nearest neighbours, incorporating radial velocities, at this point.
As is apparent, the correlation function has significant values only for identical and neighbouring Hubble types. For wide pairs, the amplitude Ω(0) is significantly smaller than for the entire sample. The excess relative number of 'twins' among tight pairs may follow from various processes which would tend to synchronize the evolution of double galaxies, such as the exchange of gas, a change in stellar orbits due to resonance effects, or bursts of star formation. On the other hand, the apparent match in structural types and their further dependence on separation may have their origin in simultaneous epochs of formation of the members of pairs in identical environments. Is there a morphological effect due to relict causes, or does it indicate evolutionary processes? The answer to this crucial question can still not be given.
We note here two circumstances. According to the data in figure 33 the correlation function Ω(ΔTy) is significant only for ΔTy ≤ 1. For a mean error in the type σ(Ty) ≈ 1, the error in classification should contribute significantly to the correlation function. Detailed estimates show that this will increase the amplitude of Ω(0) by 1.5 to 2 times.
These estimates of the probability of encountering galaxies of various morphological types in double systems characterize their catalogue, and not their spatial, distribution. This caution applies as well to the correlation function for Hubble types. The difference between the partial distributions of objects of two types, per unit volume and in the catalogue, depends on the difference in their luminosity functions. An attempt to go from the catalogue to the true distribution of galaxies by morphological type was made by Arakelian (1983b).
Table 15 illustrates the variation along the Hubble sequence of the mean absolute magnitude and mean linear diameter of double galaxies, along with the standard deviations of these quantities, σ(M) and σ(A). Excluding the irregular galaxies, the luminosity function and diameter function for pair members do not depend strongly on morphological type, in agreement with the results of Sandage et al. (1985). Because the luminosity of Sm galaxies is an order of magnitude lower than for other types, the true density of such objects per unit volume will be considerably higher than in the catalogue (magnitude limited) sample. Thus Sm galaxies comprise 45% of the 12 closest pairs (table 10), instead of the 6% characteristic of the entire catalogue.
We have already remarked that pairs of galaxies exhibit a morphological segregation which depends on separation. In Table 16 we present the mean values of the projected separation and radial velocity difference for double galaxies of various types. On passing from early to late Hubble types we note a tendency for increasing separation and decreasing radial velocity difference. However, double systems containing dwarf components of type Sm deviate from this property, due to selection effects (the criteria discriminate against wide pairs containing galaxies of low luminosity).
In the previous chapter we concluded that the orbital masses of double galaxies agreed very well with their individual masses measured from rotation curves. Do we observe such agreement for selected morphological types of galaxies? The data on this question have a somewhat contradictory history. Page (1952, 1960, 1961), who first measured radial velocities for double galaxies, gave estimates for orbital mass-to-luminosity ratios for elliptical galaxies two orders of magnitude greater than for spirals. Page considered that this result indicated the large virial values of f in clusters, in which elliptical and lenticular systems are concentrated. On the other hand, a strong difference between EE and SS galaxies in mass-to-luminosity ratio is not supported by measures of the stellar velocity dispersion in the central parts of elliptical galaxies. These results may be satisfied by the simple proposition that ellipticals concentrated in rich clusters and compact groups should include a significant number of false pairs where the galaxies are projected nearly in contact with one another and have radial velocity differences of order the virial velocity of the cluster. Further, excluding the false pairs by means of the isolation criterion, we may consider the ratio f_E/f_S. Using the sub-samples of Turner (1976b), Karachentsev (1977), and Peterson (1979b) this ratio is reduced to f_E/f_S ≈ 2. We now examine the dependence of f(Ty) on the properties of pairs for the entire sample catalogue.
The estimates in the literature on the mass-to-luminosity ratio for systems of various types are collected in table 17. The first column indicates the Hubble type of the galaxies; the second, the mean mass-to-luminosity ratio <f> and the standard deviation in the mean; the third indicates the number of observed objects n; and the final one the source for the data. The data as presented by various authors have been reduced to a standard system of apparent magnitude B_T^0 and standard radius R_25 for a Hubble constant H = 75 km/s/Mpc and an absolute solar magnitude M_☉ = 5.40. Within the bounds of each morphological type, the mass-to-luminosity ratios as obtained by various methods (stellar velocity dispersion, rotation curve, or 21-cm HI line width) showed satisfactory agreement. For galaxies of type Sm estimates of the mass from the 21-cm profile may be affected by systematic errors in measuring the width of weak lines and by neglect of turbulent velocities in objects seen almost face-on (Lewis, 1983).
The compilation of these data supports the general trend remarked by Faber and Gallagher (1979): with decreasing importance of the spheroidal component to the integrated luminosity of the galaxy, the mass-to-luminosity ratio decreases from f(E) ≈ 10 to f(Sm) ≈ 4. An analogous trend can be demonstrated for the mean ratio of orbital mass to luminosity for double galaxies. Of the 487 pairs having f < 100 we have selected the systems with identical structural types of the components. To reduce the role of radial velocity errors we excluded 15 pairs for which the error f_u (see section 4.2) exceeds 5. Averaging the orbital mass-to-luminosity ratios gives the results shown in the last column of table 15. Because the sample was unsatisfactorily small (there was only one S0+S0 pair), the mean for this type was calculated by including the 14 E+S0 pairs.
The reduced data are presented together in figure 34. The mean orbital mass-to-luminosity ratio for 107 pairs is shown as diamonds, the height of which indicates the standard deviation of the mean. The points show the mean for various types of galaxies according to the data in table 17. The mean individual mass-to-luminosity ratios for 209 individual components (Karachentsev, 1985) are shown as the crosses.
In agreement with the conclusions of the previous chapter, we may confidently assert that the orbital mass-to-luminosity ratio for double galaxies agrees with the individual f values not only in general but also for each individual morphological type. The contention often found in the literature that the masses of pairs of elliptical (Rood, 1976) and of dwarf (Lake and Schommer, 1984) galaxies are unusually high (f ≈ 100) is contradicted by the results in figure 34. The excessive estimates of pair masses by these authors are apparently due to the inclusion of a large number of false double systems. It must be stressed that the excellent agreement between orbital and individual masses for galaxies of every Hubble type would be very difficult to understand if the majority of the mass of double galaxies were located in invisible extensive haloes around the individual components of double systems.
4 For 76 galaxies in common with the catalogue of de Vaucouleurs et al. (1976) (V) we find <Ty_K − Ty_V> = −0.16 ± 0.10 and <ΔTy²>^1/2 = 0.87.
Mathematics is the study of patterns. Studying pattern is an opportunity to observe, hypothesise, experiment, discover and create.

Show that among the interior angles of a convex polygon there cannot be more than three acute angles.

Can you mark 4 points on a flat surface so that there are only two different distances between them?

Use the interactivity to listen to the bells ringing a pattern. Now it's your turn! Play one of the bells yourself. How do you know when it is your turn to ring?

ABC is an equilateral triangle and P is a point in the interior of the triangle. We know that AP = 3cm and BP = 4cm. Prove that CP must be less than 10 cm.

A useful visualising exercise which offers opportunities for discussion and generalising, and which could be used for thinking about the formulae needed for generating the results on a. . . .

Triangle numbers can be represented by a triangular array of squares. What do you notice about the sum of identical triangle numbers?
How many moves does it take to swap over some red and blue frogs? Do you have a method?
ABCDEFGH is a 3 by 3 by 3 cube. Point P is 1/3 along AB (that is AP : PB = 1 : 2), point Q is 1/3 along GH and point R is 1/3 along ED. What is the area of the triangle PQR?

A package contains a set of resources designed to develop pupils' mathematical thinking. This package places a particular emphasis on “visualising” and is designed to meet the needs. . . .

In the game of Noughts and Crosses there are 8 distinct winning lines. How many distinct winning lines are there in a game played on a 3 by 3 by 3 board, with 27 cells?

Use the interactivity to play two of the bells in a pattern. How do you know when it is your turn to ring, and how do you know which bell to ring?

A rectangular field has two posts with a ring on top of each post. There are two quarrelsome goats and plenty of ropes which you can tie to their collars. How can you secure them so they can't. . . .

How could Penny, Tom and Matthew work out how many chocolates there are in different sized boxes?

A huge wheel is rolling past your window. What do you see?

Seven small rectangular pictures have one inch wide frames. The frames are removed and the pictures are fitted together like a jigsaw to make a rectangle of length 12 inches. Find the dimensions of. . . .
ABCD is a regular tetrahedron and the points P, Q, R and S are the midpoints of the edges AB, BD, CD and CA. Prove that PQRS is a square.
Draw some isosceles triangles with an area of $9$cm$^2$ and a vertex at (20,20). If all the vertices must have whole number coordinates, how many is it possible to draw?
Imagine you are suspending a cube from one vertex (corner) and allowing it to hang freely. Now imagine you are lowering it into water until it is exactly half submerged. What shape does the surface. . . .
The diagram shows a very heavy kitchen cabinet. It cannot be lifted but it can be pivoted around a corner. The task is to move it, without sliding, in a series of turns about the corners so that it. . . .
The opposite vertices of a square have coordinates (a,b) and (c,d). What are the coordinates of the other vertices?
Draw a pentagon with all the diagonals. This is called a pentagram. How many diagonals are there? How many diagonals are there in a hexagram, heptagram, ... Does any pattern occur when looking at. . . .
These are pictures of the sea defences at New Brighton. Can you work out what a basic shape might be in both images of the sea wall and work out a way they might fit together?

Four rods, two of length a and two of length b, are linked to form a kite. The linkage is moveable so that the angles change. What is the maximum area of the kite?

You have 27 small cubes, 3 each of nine colours. Use the small cubes to make a 3 by 3 by 3 cube so that each face of the bigger cube contains one of every colour.

Square numbers can be represented as the sum of consecutive odd numbers. What is the sum of 1 + 3 + ... + 149 + 151 + 153?
The whole set of tiles is used to make a square. This has a green and blue border. There are no green or blue tiles anywhere in the square except on this border. How many tiles are there in the set?
Start with a large square, join the midpoints of its sides, you'll see four right angled triangles. Remove these triangles, a second square is left. Repeat the operation. What happens?
A right-angled isosceles triangle is rotated about the centre point of a square. What can you say about the area of the part of the square covered by the triangle as it rotates?
A half-cube is cut into two pieces by a plane through the long diagonal and at right angles to it. Can you draw a net of these pieces? Are they identical?
Can you cross each of the seven bridges that join the north and south of the river to the two islands, once and once only, without retracing your steps?
In how many ways can you fit all three pieces together to make shapes with line symmetry?

Blue Flibbins are so jealous of their red partners that they will not leave them on their own with any other blue Flibbin. What is the quickest way of getting the five pairs of Flibbins safely to. . . .
Choose any two numbers. Call them a and b. Work out the arithmetic mean and the geometric mean. Which is bigger? Repeat for other pairs of numbers. What do you notice?
A Hamiltonian circuit is a continuous path in a graph that passes through each of the vertices exactly once and returns to the start. How many Hamiltonian circuits can you find in these graphs?
Imagine starting with one yellow cube and covering it all over with a single layer of red cubes, and then covering that cube with a layer of blue cubes. How many red and blue cubes would you need?

Bilbo goes on an adventure, before arriving back home. Using the information given about his journey, can you work out where Bilbo. . . .
A tilted square is a square with no horizontal sides. Can you devise a general instruction for the construction of a square when you are given just one of its sides?
Show that all pentagonal numbers are one third of a triangular number.
Can you find a rule which connects consecutive triangular numbers?
On the graph there are 28 marked points. These points all mark the vertices (corners) of eight hidden squares. Can you find the eight hidden squares?
Use the animation to help you work out how many lines are needed to draw mystic roses of different sizes.
Can you find a way of representing these arrangements of balls?
Some students have been working out the number of strands needed for different sizes of cable. Can you make sense of their solutions?
What is the shape of wrapping paper that you would need to completely wrap this model?
When dice land edge-up, we usually roll again. But what if we. . . .
Have a go at this 3D extension to the Pebbles problem.
Here is a solitaire type environment for you to experiment with. Which targets can you reach?
A bus route has a total duration of 40 minutes. Every 10 minutes, two buses set out, one from each end. How many buses will one bus meet on its way from one end to the other end?
Euler discussed whether or not it was possible to stroll around Koenigsberg crossing each of its seven bridges exactly once. Experiment with different numbers of islands and bridges.
The Regions of Superdeformation
Superdeformed nuclei are beautiful examples of quantum rotors. They have been referred to as "nuclear pulsars" and indeed close analogies have been found between the fast rotation of the nucleus and that of a neutron star! Additionally we have made the stunning discovery that superdeformed nuclei are the best example of single-nucleonic motion in a deformed potential. Thus the physics of superdeformation involves a fascinating interplay between the microscopic (shell structure) and the macroscopic (e.g. surface and Coulomb energies) properties of the nucleus. Consequently, they provide a unique laboratory for testing nuclear models. Many superdeformed rotational bands have been observed throughout the nuclear chart (see accompanying figure), and they cover the full range of possible spins, from I = 0ħ right up to I = 70ħ close to the fission limit. GAMMASPHERE has played a leading role in the discovery and investigation of these extreme and unusual nuclear "shape isomers". For example, the recent observation of superdeformation in Zn nuclei using GAMMASPHERE has opened up a new region in which to investigate the properties of highly collective superdeformed states.
The high sensitivity of GAMMASPHERE has enabled for the first time extremely precise measurements of transition energies and decay probabilities. A full understanding of the decay of superdeformed bands, in terms of the mixing of superdeformed with normal deformed states, requires accurate knowledge of the transition rates (state lifetimes) at the point of decay. A Recoil Distance Method (RDM) experiment on 194Pb is shown opposite; the state lifetime is deduced from the ratio of the stopped (u) to moving (s) component of the gamma-ray peak, requiring spectra with very high statistics and low background.
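In outline (my notation, ignoring side-feeding corrections that real analyses include): a nucleus recoiling at velocity v over a target-to-stopper distance D decays in flight with a probability set by the flight time, which is what ties the u/s intensity ratio to the lifetime τ:

```latex
\frac{I_u}{I_u + I_s} = e^{-D/(v\tau)}
\quad\Longrightarrow\quad
\tau = \frac{D}{v \,\ln\!\left[(I_u + I_s)/I_u\right]} ,
```

so measuring the ratio at several distances D overdetermines τ and exposes systematic errors.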
At higher spins and transition energies the Doppler Shift Attenuation Method (DSAM) is used. GAMMASPHERE allows a simultaneous measurement of the lifetimes of states in multiple superdeformed bands, which greatly reduces the systematic errors which have plagued this type of measurement up until now. It is now possible to extract very accurate relative quadrupole moments (deformations) of different structures. The gamma-ray spectra of superdeformed bands in 192Hg and 194Hg (middle portion of the figure) are clearly identical. The question is, are other properties, such as level spins, or deformations, also identical? GAMMASPHERE data have recently shown that the states in 192Hg and 194Hg which decay with the same transition energies do not have the same spin values. However, they do have the same lifetimes (as seen by the identical fractional Doppler shift curves, right figure), and hence we conclude that these superdeformed nuclei have the same deformation. Knowing which properties of "identical" bands are the same (apart from transition energies) imposes strong constraints on possible explanations of this unexpected and as yet unexplained phenomenon.
News from the deep: Oceana's crew aboard the Ranger has discovered a previously undocumented coral reef in the Alboran Sea in the high seas of the Mediterranean.
The reef, which is located more than 1,300 feet below the surface and covers over 1 million square feet, is formed primarily by white coral. With this discovery, Oceana will be able to glean additional data from the reef to support our efforts to declare new marine protected areas in the Mediterranean.
Coral reefs are the backbone of many marine ecosystems, and deep-sea corals are among the most vulnerable. Tragically, many reefs are destroyed by bottom trawling, a fishing technique akin to clear-cutting that devastates coral reefs and creates seafloor wastelands devoid of life.
And coral reefs aren’t the only habitats that suffer. The area around the newly discovered reef is flourishing with other important habitats including gorgonian gardens and rare glass sponge fields. In order to protect this region, Oceana is planning to present the data to the Barcelona Convention in the near future, pressing officials to list it as a protected area.
This is part of a series of posts about our Pacific Hotspots expedition. Today's highlights: On their final day in Oregon, the crew ventures into uncharted territory and finds a variety of corals and fish.
Oregon Leg, Day 5
Friday was our last day aboard the R/V Miss Linda and it could not have been a better day for working on the ocean. We left the Charleston Marina at 7 AM bound for the nearshore reef south of Cape Arago and west of Seven Devils State Park.
As we were working in and out of Charleston today, we invited guests to join our expedition including Dr. Craig Young, the director of the University of Oregon’s Oregon Institute of Marine Biology and Dr. Jan Hodder from the Oregon Institute of Marine Biology.
The University of Oregon has been operating marine studies in the Charleston area since 1924 with year-round research programs beginning in 1966. Dr. Young and his graduate students have made hundreds of deep dives in submersibles and sailed on oceanographic ships in the Atlantic, Pacific and Indian Oceans. Yet surprisingly, nobody has ever been to the areas we went Friday with a Remotely Operated Vehicle (ROV) and underwater camera.
This is part of a series of posts about our Pacific Hotspots expedition.
Today, in beautiful Monterey, Oceana kicked off the first part of a three-week research cruise. This week we are aboard the research vessel Derek M. Baylis, focusing on Important Ecological Areas (ocean hotspots) in Monterey Bay.
Today’s goal consisted of conducting trial runs with the Remotely Operated Vehicle (ROV) called Video Ray Pro IV as well as allowing the Oceana crew from South America, Alaska, Oregon, and California to get our sea legs and refine our on-board duties. With a small High Definition camera on the ROV, we recorded about an hour of footage at each of the four sites we visited.
At the Monterey Shale Beds, at depths up to 125 feet, we observed a myriad of life in the nooks and crannies including sea cucumbers, anemones, gobies, juvenile rockfish, kelp rockfish, sculpins, gorgonian corals, an octopus, a wolf eel, and a metridium (an anemone that looks like white cauliflower). We watched a sunflower star feeding and a sheep crab that was not so ‘sheepish’ as it instigated a wrestling match with the ROV.
A uniform plank of weight 150N and of length 4.0m rests horizontally on two bricks. One of the bricks is at the end of the beam. The other brick is 1.0m from the other end of the plank.
a) Sketch the arrangement and calculate the support force on the plank from each brick
My answer for A was: 50N and 100N.
b) A child stands on the free end of the beam and just causes the other end to lift off its support. Sketch this arrangement and calculate the weight of the child.
I'm stuck on how to solve this question. The answer is supposed to be 150N.
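A sketch of the torque bookkeeping for both parts (my coordinates: x measured from the end resting on a brick, so the supports sit at x = 0 and x = 3.0 m, and the 150 N weight acts at the midpoint x = 2.0 m):

```latex
% Part (a): moments about x = 0 for the unloaded plank.
3.0\,F_2 = 150 \times 2.0 \;\Rightarrow\; F_2 = 100\ \text{N}, \qquad
F_1 = 150 - F_2 = 50\ \text{N}.

% Part (b): the child of weight W stands at the free end, x = 4.0 m.
% "Just causes the other end to lift" means F_1 = 0, so the plank
% balances on the single brick at x = 3.0 m:
150 \times (3.0 - 2.0) = W \times (4.0 - 3.0)
\;\Rightarrow\; W = 150\ \text{N}.
```

The child's weight equals the plank's weight here only because the two lever arms about the pivot happen to be equal (1.0 m each).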
Finding the Lazy Programmer's Bugs
Traditionally, developers and testers created huge numbers of explicit tests, enumerating interesting cases, perhaps biased by what they believed to be the current boundary conditions of the function being tested. Or at least, they were supposed to.
A major step forward was the development of property testing. Property testing requires the user to write a few functional properties that are used to generate tests, and requires an external library or tool to create test data for the tests. As such many thousands of tests can be created for a single property. For the purely functional programming language Haskell there are several such libraries; for example QuickCheck, SmallCheck and Lazy SmallCheck.
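Those libraries have their own Haskell APIs; purely to illustrate the idea in a language-neutral way, here is a hand-rolled sketch in C that checks one property (reversing an array twice restores it) against a thousand generated inputs:

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

#define N 16

static void reverse(int *a, int n)
{
    for (int i = 0, j = n - 1; i < j; i++, j--) {
        int t = a[i]; a[i] = a[j]; a[j] = t;
    }
}

/* Property: reverse(reverse(xs)) == xs, checked on random data. */
int main(void)
{
    srand(42);  /* fixed seed so failures are reproducible */
    for (int trial = 0; trial < 1000; trial++) {
        int original[N], copy[N];
        for (int i = 0; i < N; i++)
            original[i] = copy[i] = rand() % 100;

        reverse(copy, N);
        reverse(copy, N);

        for (int i = 0; i < N; i++)
            assert(copy[i] == original[i]); /* one property, many tests */
    }
    printf("1000 generated tests passed\n");
    return 0;
}
```

Real property-testing tools add the crucial extras this sketch lacks: systematic or shrinking counterexample search rather than blind random data.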
Unfortunately, property testing still requires the user to write explicit tests. Fortunately, we note there are already many implicit tests present in programs. Developers may throw assertion errors, or the compiler may silently insert runtime exceptions for incomplete pattern matches.
We attempt to automate the testing process using these implicit tests. Our contributions are in four main areas: (1) We have developed algorithms to automatically infer appropriate constructors and functions needed to generate test data without requiring additional programmer work or annotations. (2) To combine the constructors and functions into test expressions we take advantage of Haskell's lazy evaluation semantics by applying the techniques of needed narrowing and lazy instantiation to guide generation. (3) We keep the type of test data at its most general, in order to prevent committing too early to monomorphic types that cause needless wasted tests. (4) We have developed novel ways of creating Haskell case expressions to inspect elements inside returned data structures, in order to discover exceptions that may be hidden by laziness, and to make our test data generation algorithm more expressive.
In order to validate our claims, we have implemented these techniques in Irulan, a fully automatic tool for generating systematic black-box unit tests for Haskell library code. We have designed Irulan to generate high coverage test suites and detect common programming errors in the process.
Until very recently, the granddaddy solar flare of all time has been the Carrington Event of 1859, at the dawn of the telegraph age. The event was witnessed in real time by British astronomer Richard Christopher Carrington.
"Two patches of intensely bright and white light broke out," he later wrote. Carrington puzzled over the flashes. "My first impression was that by some chance a ray of light had penetrated a hole in the screen attached to the object-glass," he explained, given that "the brilliancy was fully equal to that of direct sun-light."Note that these flashes were so much brighter than the projected image of the sun in his dark room that he thought daylight was somehow getting into the room. The story itself is amazing. That evening, when the Coronal Mass Ejection hit, telegraph operators were able to run without batteries; the flare-induced voltages on their wires working better than batteries. The aurora display was global, even in the deep tropics.
If such a flare were to happen today, I believe it would take out the power grid on at least the hemisphere facing the sun when the CME hit. What we have in our favor is that our modern monitoring systems would allow grid operators to shut down some or all of their gear, forcing the world into a blackout so that the equipment could be reconnected when the storm was over.
Now comes a story that there appears to have been a flare that could have been 20 times stronger than the Carrington event.
Everybody loves a good “whodunit?” How else could you explain the number of television shows with the prefix “CSI”? So when a study in Nature identified a previously unknown (and very large) spike in carbon-14 around the year 774 AD, it raised a lot of eyebrows. This radioactive isotope of carbon is created when energetic particles from beyond the Earth transform atmospheric nitrogen to a form of carbon with two neutrons more than the most common isotope.

There are a couple of known mechanisms for creating C14 in the atmosphere; one is a massive solar flare. 774 AD was 600 years or so before the first telescopes were used, so there was no Carrington to be watching.
So when a college student from UC-SD found a record of a “red crucifix” in the skies over Britain in that year, Nature published his note.

The story diverges a bit here, because the original group who tried to calculate how big a flare would have to be to cause the measured amount of carbon-14 in tree rings made a mistake and ended up with a preposterous result, 1000 times bigger than Carrington's granddaddy flare.
A pair of researchers from Washburn University and the University of Kansas published a comment in Nature pointing out that the solar flare calculations included a rather fundamental error. Working backwards from the intensity required to produce the right amount of carbon-14 in Earth’s atmosphere, they mistakenly calculated the total size of the event as if the flare was emitted in all directions from the Sun, forming an expanding bubble of charged particles.

CMEs are fairly localized, so the energy spreads over far less area than that calculation assumed, and a much smaller flare than originally computed would suffice. These researchers derive a number closer to 20x the size of Carrington's flare.
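The geometry error is easy to state (my notation, as a rough sketch): a fluence F measured at Earth, a distance d from the Sun, implies a total energy of 4πd²F only if the ejection is isotropic; if it is confined to a solid angle Ω, the required energy drops by the beaming factor Ω/4π.

```latex
E_{\text{iso}} = 4\pi d^{2} F, \qquad
E_{\text{beamed}} = \Omega\, d^{2} F
  = \frac{\Omega}{4\pi}\, E_{\text{iso}} \;\ll\; E_{\text{iso}}
  \quad \text{for } \Omega \ll 4\pi .
```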
A flare this size is genuinely scary; I don't know how much bigger it would be than the one pictured here, from November 2003, the largest flare observed since the space age began, but it would fry an entire hemisphere's grids, and if the grids are connected better than I think, would plunge the entire world into darkness. I don't know if anyone on the Space Station could survive that.
But you know how a bunch of those UN Agenda 21 freaks want to kill off 95% of the humans on earth? This just might do it.
Normal cloud bottoms are flat. This is because moist warm air that rises and cools will condense into water droplets at a specific temperature, which usually corresponds to a very specific height. As water droplets grow, an opaque cloud forms. Under some conditions, however, cloud pockets can develop that contain large droplets of water or ice that fall into clear air as they evaporate. Such pockets may occur in turbulent air near a thunderstorm. Resulting mammatus clouds can appear especially dramatic if sunlit from the side. These mammatus clouds were photographed over ... during the past summer.

Image credit: Craig Lindsay, Wikipedia
Soufriere Hills volcano in Montserrat has been active for more than 14 years
Volcano expert Geoff Wadge explains how the International Charter on Space and Major Disasters helped manage a volcanic eruption in Montserrat.
The International Charter on Space and Major Disasters aims to provide timely and targeted satellite data during an emergency.
It gives disaster managers an official route for getting satellite sensors, run by cooperating space agencies, to obtain images of problem areas.
Experience from the small Caribbean island of Montserrat shows such data can be invaluable. They can prove crucial to planning an effective emergency response if local expertise is already focused on the problem and available to take swift action.
The Soufriere Hills volcano in Montserrat has been active for more than 14 years. While many months can go by with no hazards, there are often periods of very high risk.
The government manages this fluctuating risk by keeping the southern two thirds of the island uninhabited, while people still live in a zone that can be easily evacuated in the face of immediate danger.
So when the volcano exploded on 28 July last year (2008), showering fist-sized pumice fragments over the inhabited areas, the government ordered an evacuation of people closest to the volcano.
Scientists at the Montserrat Volcano Observatory (MVO) had already been measuring a swarm of earthquakes beneath Soufriere Hills, as new magma rose into the volcano.
So had the explosion made the volcano more dangerous? Or would it be less hazardous over the coming weeks? And if it was less dangerous, could the evacuation be revoked?
Shrouded in cloud
In the years running up to the explosion a large dome of lava a few hundred metres across had grown. If this collapsed, it would send deadly 'pyroclastic' flows of super-hot gas and rocks towards the occupied areas. Had the explosion made this dome more unstable?
To answer this, the scientists needed to observe the dome. But Soufriere Hills was covered in cloud.
As luck would have it, one of the MVO scientists had just been on a course in remote sensing for disaster management and knew of the international charter. Knowing my expertise in volcano remote sensing, he contacted me and, on 29 July, we activated the charter via the UK government (Montserrat is a UK Dependent Territory).
What we needed was detailed images of the lava dome beneath the cloud, to examine small features and tell what changes had occurred during and after the explosion. Only radar can see through cloud, and several satellite radars, available through the charter, were tasked to acquire images.
The devil's in the detail
But we were worried that these radars would not supply data at sufficiently high resolution. The highest resolution civilian radar, TerraSAR-X, is owned by the German space agency (DLR), which is not yet signed up to the charter.
Luckily, DLR nevertheless agreed to collect TerraSAR-X data over Soufriere Hills and we received new images on 2 August. More importantly, they also gave us earlier images acquired before the explosion that let us see the changes.
After processing the two sets of images to emphasise changes to the dome's surface, we had our answer the same day — the explosion had punched a relatively small new vent through the lava dome but had not made it less stable as a result.
The MVO submitted this conclusion to Montserrat's civil protection committee, which, on 6 August, used it as grounds to revoke the evacuation order and allow people back to their houses.
It wasn't until more than a week later (14 August) that the cloud cleared sufficiently to confirm the radar interpretation.
Charter, expertise... and luck
There is no doubt that satellite data accessed through the charter helped guide an effective response to this natural hazard. Although you could argue that the data that solved the problem did not actually come from the charter, it was the charter's acceptance of our case that convinced DLR to help.
But the charter itself was not the only factor underpinning our success.
Montserrat had specific expertise in remote sensing and was already monitoring the volcano's behaviour before the crisis. So scientists recognised the exact need and which satellites could help, and they knew the way to get data quickly (the charter). Starting 'from cold' may have taken longer or missed the route to the solution.
And there was another element of luck: TerraSAR-X had already acquired appropriate 'before' images of the volcano.
Could better be done next time? Our luck with the 'before' images highlights the need for a global archive of satellite imagery to help with future disasters. Having DLR as part of the charter would also formalise access to their valuable resource.
And while the UK government responded quickly, the contact details we needed to activate the charter are not easily available on the Internet. Only authorised users from Charter member countries can request images. Neither the British National Space Centre nor the Charter itself had the contact information available online.
On our side, we could have been better prepared had we asked ourselves what archival data we might need to be able to respond to the type of scenario we faced on 28 July 2008.
Geoff Wadge is a professor in the Environmental Systems Science Centre at University of Reading, United Kingdom and chairman of the Foreign and Commonwealth Office's Scientific Advisory Committee on Montserrat volcanic activity.
At 07:36 AM 10/24/97 PDT, you wrote:
>Pardon my ignorance, but what does the ".db" command do anyway? I don't
>think I have seen it mentioned in the "Online Zshell School" thing.
It's "define byte"; it can define a single byte, like in
.db 5
or it can be used to insert a string like
.db "42 is the answer!", 0
Wherever the .db appears, the bytes following it will be inserted.
You can use .db after your program's code to define text strings your prog
uses. If you use this in the middle of your program, the code before it
*must jump* past the strings, or else your prog will probably have serious
problems. This is also used to define bytes which are not part of the
program's code, but which the program has access to. When you do this it
is equivalent to defining variables in high level languages. .db for
chars, unsigned chars, and bools. .dw for ints, uints etc, and for floats,
stick with TI's scheme. Then you can use the multiplication and division
routines with the OP vars.
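To make that concrete, here is a sketch (the labels and values are made
up for illustration; they are not from ZShell itself):

     jr SkipData          ; jump past the data, or the CPU would
                          ; try to execute the string bytes as code
MyString:
     .db "Hello from .db!", 0
MyCount:
     .db 5                ; one byte, like a char variable
MyScore:
     .dw 1000             ; two bytes, like an int
SkipData:
     ld hl, MyString      ; point HL at the string for later use

The jr over the data is exactly the "must jump past the strings" rule
mentioned above.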
- A82: Re:
- From: email@example.com (Bennie R Copeland) | <urn:uuid:768c88d3-92da-49eb-bf83-aa106b11f434> | 3.3125 | 285 | Comment Section | Software Dev. | 81.924358 |
Whole Body Ozone Chemistry
In this activity, students will play the roles of various atoms and molecules
to help them better understand the formation and destruction of ozone in the stratosphere.
Ozone, a molecule containing three oxygen atoms, is made when UV light breaks
the bonds of oxygen molecules containing two oxygen atoms in the stratosphere.
The single oxygen atom is highly reactive and bonds with another oxygen molecule to form ozone.
By having students play the roles of various atoms and molecules, ideas of
basic chemistry in the atmosphere are made more concrete. For example, pairs
of students can represent diatomic oxygen while a trio is required for ozone.
This illustrates chemical reactions involved in the photochemistry of ozone
production and destruction, along with a catalyst that affects the rate of the reaction.
- Students will understand how ozone is formed in the earth's stratosphere
and will be able to explain the importance of stratospheric ozone.
- Students will be able to explain how ozone is destroyed in the stratosphere.
- Students will understand that some chemicals can speed up the breakdown
of ozone in the atmosphere.
- Students will be able to explain why chlorofluorocarbons (CFCs) are destructive
to the ozone layer.
Alignment to National Standards
National Science Education Standards
- Physical Science, Properties and Changes of Properties in Matter, Grades
5 to 8, pg. 154, Item #2: "Substances react chemically in characteristic
ways with other substances to form new substances (compounds) with different
characteristic properties. In chemical reactions, the total mass is conserved.
Substances often are placed in categories or groups if they react in similar
ways; metals is an example of such a group."
- Physical Science, Chemical Reactions, Grades 9 to 12, pg. 179, Item #2:
"Light can initiate many important chemical reactions such as photosynthesis
and the evolution of urban smog."
- Physical Science, Chemical Reactions, Grades 9 to 12, pg. 179, Item #3:
"Radical reactions control many processes such as the presence of ozone
and greenhouse gases in the atmosphere, the burning and processing of fossil
fuels, the formation of polymers, and explosions."
Benchmarks for Science Literacy, Project 2061, AAAS
- The Physical Setting, Structure of Matter, Grades 6 to 8, pg. 78, Item #1:
"All matter is made up of atoms, which are far too small to see directly
through a microscope. The atoms of any element are alike but are different
from atoms of other elements. Atoms may stick together in well-defined molecules
or may be packed together in large arrays. Different arrangements of atoms
into groups compose all substances."
- The Physical Setting, Structure of Matter, Grades 9 to 12, pg. 80, Item
#9: "The rate of reactions among atoms and molecules depends on how often
they encounter one another, which is affected by the concentration, pressure,
and temperature of the reacting materials. Some atoms and molecules are highly
effective in encouraging the interaction of others."
- Grade level: 6 to 9 (Note: care must be taken with the younger grades
to make the atomic concepts simple and clear. You may wish to eliminate the
more complex CFC reactions, for example.)
- Allow a minimum of 30 minutes to run the students through each simulation
and discuss the meaning of each.
- 8 1/2 by 11 sheets of paper or cardboard
- Hole punch
- Magic markers
- Clear red and blue plastic sheets to cover flashlight lens
- String (optional)
Note: As written, this activity requires that students hold hands. Younger
students may not have any problems with this, however, the self-consciousness
of adolescents may hinder the spontaneous movement and physical contact required
for this activity. If you think this will be problematic in your classroom,
cut 12-inch lengths of string for the students to hold to make the 'bonds.'
This activity should be done a step at a time, being sure the students understand
the analogy. Otherwise the analogy may be confusing or more difficult to understand
than the concepts being illustrated. It is essential to stop and discuss after each part.
Part 1: Modeling Oxygen in the Earth's Atmosphere
- Let 5 or 6 pairs of students represent oxygen molecules. Each student should
construct a sign using a piece of paper, writing a large O on it and attaching
a string to go around their neck, indicating they are oxygen atoms.
- Students in each pair should hold hands to simulate the bonding between
the atoms of oxygen in each molecule. Have these pairs of students move about
in a cleared area in the classroom to simulate molecular motion. It is appropriate
for them to bounce off a wall or collide with each other as they move about.
After moving about for a minute or so, stop to discuss what has been demonstrated.
Questions and Observations
- How are the moving pairs of students similar to what occurs in the air in
the room? (Oxygen in the air exists as two atoms to each molecule, and, like
all air molecules, oxygen is constantly in motion.)
- How is it different? (Obviously the pair of students is much larger than
one oxygen molecule. In addition, air has other gases - nitrogen, carbon
dioxide, and other trace gases.)
- What could be done to make the analogy better? (Some suggestions might include
having other students act as nitrogen atoms, carbon dioxide molecules, etc.
To make it more realistic, how many nitrogen molecules (N2)
should be used for each oxygen molecule (O2)?
About four, since air contains about 80% nitrogen and 20% oxygen.)
- What is oxygen called if it has two atoms per molecule? (Diatomic oxygen,
also known as molecular oxygen. A single O atom is known as atomic oxygen.)
Part 2: Simulating the Formation of Ozone in the Stratosphere
- Repeat the steps under modeling the earth's oxygen, but this time darken
or dim the lights in the room.
- Add a student who, with a flashlight, simulates solar radiation. Place a
clear blue plastic sheet over the lens of the flashlight to represent the
short ultraviolet wavelengths that are involved in the breakup of diatomic oxygen.
- Let pairs of students representing oxygen begin their motion as before.
When the student with the flashlight shines the light on a pair of students,
the bond between them breaks, and students let go of their partner.
- As the motion continues, these single atoms of oxygen move around until
they bump into a pair of oxygen atoms. Each of the single oxygen atoms combines
with the pair they bump into, forming a group of three oxygen atoms. These
three students hold hands, representing a molecule of ozone.
Questions and Observations
- How is this simulation similar to the way ozone is formed in the stratosphere?
(UV light breaks the bonds on oxygen molecules, and the free oxygen atom combines
with other oxygen molecules to produce ozone.)
- What is oxygen with three atoms per molecule called? (ozone)
- How many molecules of ozone can be formed by the breakup of one molecule
of diatomic oxygen by ultraviolet light? (2; the reactions are written out just after this list.)
- Why is ozone formed this way in the stratosphere and not in the air near
the earth's surface? (Much more ultraviolet light exists in the stratosphere
than near the earth's surface.)
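For reference, the process the students have just acted out can be written as reactions (a simplified picture: in the real stratosphere a third molecule must also carry away excess energy):

O2 + UV ----> O + O
O + O2 ----> O3 (this happens twice, once for each free oxygen atom)
Net: 3 O2 ----> 2 O3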
Part 3: Demonstrating How Ozone Breaks Down in the Stratosphere
- Have several groups of three students, each representing ozone, move about
the room. Pairs of students representing diatomic oxygen can be added as a
touch of realism.
- This time the lens of the flashlight should be covered with clear red plastic
to represent UV light of a longer wavelength.
- When this light is used to illuminate an ozone molecule, the ozone breaks
up to form a diatomic molecule (a pair of students) and an oxygen atom (a single student).
- This process is repeated by shining the light on a second ozone molecule,
producing another diatomic molecule and another single oxygen atom.
- The two single oxygen atoms should then combine to form a pair of atoms,
or a molecule of diatomic oxygen.
Questions and Observations
- How many molecules of diatomic oxygen are formed from the breakup of two
molecules of ozone? (3)
- How is the breakup of ozone in the stratosphere similar to its formation
there? (Both the formation and breakup of ozone involve UV light, but at different wavelengths - shorter wavelengths form ozone, longer wavelengths break it apart.)
Part 4: An Example of a Chemical that Speeds up the Breakdown of Ozone
Of all the chemicals involved in the breakdown of stratospheric ozone, none
have received more attention than the chlorofluorocarbons, or CFCs. The two
most common are CFC-11 (CFCl3) and CFC-12 (CF2Cl2). These compounds
can be modeled by letting students represent atoms of carbon (C), chlorine (Cl),
and fluorine (F). For example, a molecule of CFC-11 would be composed of one
student representing a carbon atom, another representing a fluorine atom, and
three students representing three chlorine atoms. The students should hold hands
to demonstrate how atoms are bonded in a molecule.
Graphic of the molecular structure of common CFCs
Questions and Observations
- The CFCs are inert, that is, they do not react with other materials under
most conditions. How can this be demonstrated using groups of students to
represent atoms of different elements? (The CFCs can move around together,
but students should lock elbows, showing that the bonds of these molecules
do not break apart easily.)
- The CFCs that enter the atmosphere at the earth's surface have found their
way into the stratosphere. How can this be demonstrated using students to
play the role of various gases in the air? (The CFCs can gradually move from
the place designated in the classroom as the earth's surface to the place
designated as the stratosphere. More ozone molecules should be in the stratosphere.
The student with the flashlight representing UV should be in the place designated
as the stratosphere.)
Part 5: The Role of Chlorine in the Breakdown of Ozone in the Stratosphere
UV light breaks down CFCs in the stratosphere, releasing chlorine atoms. This
can be demonstrated by having a student with a flashlight shine a light on a
group of students representing a molecule of CFC-11 or CFC-12. Let one student
representing a freed chlorine atom move amidst groups of students representing
ozone. The chlorine is involved in the breakdown of ozone as follows:
Cl + O3 ----> ClO + O2
ClO + O ----> Cl + O2
- A student representing chlorine pulls an oxygen atom away from an ozone
molecule to form chlorine monoxide (ClO).
- The two students representing ClO react with an oxygen atom.
- The two students representing oxygen combine to form an oxygen molecule.
- The student representing chlorine is then free to attack another molecule of ozone.
- Repeat these steps several times to show the chain reaction.
Questions and Observations
- What is a catalyst? (A chemical that promotes a chemical reaction but is
not used up in the reaction.)
- Does the chlorine act as a catalyst in this reaction? (Yes)
- Why is the involvement of chlorine in the breakdown of ozone called a chain
reaction? (Chlorine can cause the breakdown of many ozone molecules and the
chlorine is not altered or destroyed. The full cycle is written out below.)
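Written out in full, the catalytic cycle the students have just acted out is:

Cl + O3 ----> ClO + O2
ClO + O ----> Cl + O2
Net: O3 + O ----> 2 O2 (the chlorine atom comes out unchanged)

Because the chlorine emerges from each cycle intact, a single atom can destroy a great many ozone molecules - a figure of around 100,000 is often cited - before it is finally removed from the stratosphere.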
- Because this is a complex, multistep simulation, it would be difficult
for the teacher to informally observe or question each student during the
activities. We suggest instead that students keep a log of the discussion
questions and answers as they go, to be turned in and evaluated by the teacher.
- Draw an unlabeled set of simple "ball and stick" molecular pictures
on overheads illustrating each of the activities done by the students. Have
students copy the overhead drawings and label each molecule and process.
- Provide gumdrops or clay and toothpicks for students to build the molecular models.
Modifications for Alternative Learners
- The kinesthetic nature of the lesson will be easily followed by English
Language Learners, but the connection to the molecular processes may
be difficult. Use overhead illustrations liberally to connect the student
activities to the processes, rather than relying only on voice.
- Students with physical limitations could be given gumdrops or clay and toothpicks
to simulate molecular models.
| <urn:uuid:69e5c287-c622-4996-96ee-9786f4ad9029> | 4.15625 | 2,679 | Tutorial | Science & Tech. | 40.510975 |
401 Argyresthia laevigatella (Heydenreich, 1851)
Wingspan 9-13 mm.
One of the plainer species of Argyresthia, which can be confused with A. glabratella, though the latter species is generally slightly smaller.
The two species have different foodplants, laevigatella feeding on larch, usually European larch (Larix decidua), where the larvae feed internally on the terminal shoots, causing them to die off. It can be a pest in larch plantations where it is common.
The moths fly between May and July and can be attracted to light.
The species is distributed widely over much of the British Isles, and is fairly common in places. | <urn:uuid:236f2607-c7d6-4942-836d-7303cc2e4b03> | 2.875 | 161 | Knowledge Article | Science & Tech. | 51.045 |
I am endlessly fascinated by volcanoes — their power, the science behind them, and of course their terrible beauty. I’ve stood on a few (though never an active one — but that’ll happen someday!) and they are among the most amazing geological features on our planet.
In the past few years, we’ve started getting incredible high-resolution pictures of volcanoes from space, and they never cease to amaze me. One I saw recently really got to me: the south Pacific volcano Tinakula, located over 2000 km northeast of Australia:
Ye. Gads. [Click to hephaestenate.]
This shot was taken by the Earth Observing-1 satellite, and shows the volcanic island in the ocean. The colors are stunning: the deep green of the vegetation on the volcanic slope, and the bizarre silvery color of the ocean. This image is actually natural color; the silver is due to the way the sunlight is reflecting and glinting off the choppy water.
Tinakula is sporadically active, and you can see the plume of steam (probably with some ash mixed in) blowing out. You can also see the shadows on the water; the sunlight is coming from the right.
This is a sparsely populated region, and observations of the volcano are pretty rare. But from space, everything on the surface of the Earth is visible at some point. And while you can’t keep a constant eye on such things, even the occasional shot like this helps scientists understand what’s going on below the surface. This helps us understand volcanoes, of course, but also adds to the knowledge database of geologists, vulcanologists, and seismologists. And given the number of people who live near active volcanoes, this knowledge saves lives. It really is that simple: the better we understand the world — the Universe — around us, the better off we are.
Image credit: NASA/Jesse Allen and Robert Simmon (Earth Observatory)
[I love these satellite views of volcanoes from space, and I've collected quite a few into a gallery slideshow. Click the thumbnail picture to get a bigger picture and more information, and scroll through the gallery using the left and right arrows.]
Links to this Post
- The wonder of volcanoes at Bad Astronomy « The Volcanism Blog | March 5, 2012
- Earth Update – March 2012 « Earth « Science Today: Beyond the Headlines | March 6, 2012 | <urn:uuid:c05e0101-7c87-41bf-b607-99b51e7ec2c7> | 3.375 | 516 | Personal Blog | Science & Tech. | 47.597608 |
Camponotus sp. (worker)
More than 1,000 species
Carpenter ants are large (0.25 to 1 in, or 0.64 to 2.5 cm) ants indigenous to many parts of the world. They prefer dead, damp wood in which to build nests. Unlike termites, however, they do not consume the wood. Sometimes carpenter ants will hollow out sections of trees. The most likely species to be infesting a house in the United States is the black carpenter ant (Camponotus pennsylvanicus). However, there are over a thousand other species in the genus Camponotus.
All ants in this genus, and also some related genera, possess an obligate bacterial endosymbiont called Blochmannia. This bacterium has a small genome, and retains genes to biosynthesize essential amino acids and other nutrients. This suggests the bacterium plays a role in ant nutrition. Many Camponotus species are also infected with Wolbachia, another endosymbiont that is widespread across insect groups.
Carpenter ant species reside both outdoors and indoors in moist, decaying or hollow wood. They cut "galleries" into the wood grain to provide passageways for movement from section to section of the nest. Certain parts of a house, such as around and under windows, roof eaves, decks and porches, are more likely to be infested by carpenter ants because these areas are most vulnerable to moisture.
As pests
Carpenter ants can damage wood used in the construction of buildings. They can leave behind a sawdust-like material called frass that provides clues to their nesting location. Carpenter ant galleries are smooth and very different from termite-damaged areas, which have mud packed into the hollowed-out areas.
Control involves application of insecticides in various forms including dusts and liquids. The dusts are injected directly into galleries and voids where the carpenter ants are living. The liquids are applied in areas where foraging ants are likely to pick the material up and spread the poison to the colony upon returning.
Exploding ants
In at least nine Southeast Asian species of the Cylindricus complex, including Camponotus saundersi, workers feature greatly enlarged mandibular glands that run the entire length of the ant's body. They can release their contents suicidally by performing autothysis, thereby rupturing the ant's body and spraying toxic substance from the head, which gives these species the common name "exploding ants." The ant has an enormously enlarged mandibular gland, many times the size of a normal ant, which produces the glue. The glue bursts out and entangles and immobilizes all nearby victims.
Selected species
See List of Camponotus species for a complete listing of species and subspecies.
- Camponotus atriceps: Florida carpenter ant; one of the many kinds of carpenter ants that can bite and release a painful acid.
- Camponotus chromaiodes: red carpenter ant
- Camponotus compressus (Fabricius, 1787)
- Camponotus consobrinus: sugar ant
- Camponotus crassus Mayr, 1862
- Camponotus ferrugineus (Fab.): red carpenter ant
- Camponotus festinatus (Buckley, 1866)
- Camponotus flavomarginatus Mayr, 1862
- Camponotus floridanus: a species whose genome was sequenced
- Camponotus gigas (Latreille, 1802): Giant forest ant
- Camponotus herculeanus
- Camponotus kaura
- Camponotus ligniperda: an important species in Europe
- Camponotus nearcticus (Emery): smaller carpenter ant
- Camponotus novaeboracensis Region of Québec, black and red
- Camponotus pennsylvanicus (DeGeer): black carpenter ant
- Camponotus punctulatus (Mayr): Tacuru ant
- Camponotus saundersi: Malaysia
- Camponotus schmitzi Stärke, 1933
- Camponotus sericeus
- Camponotus silvestrii Emery, 1906
- Camponotus taino
- Camponotus universitatis Forel, 1890
- Camponotus vagus Scopoli, 1763
- Camponotus variegatus: Hawaiian carpenter ant
- Catseye Pest Control http://www.catseyepest.com
- Jones, T.H.; Clark, D.A.; Edwards, A.A.; Davidson, D.W.; Spande, T.F. and Snelling, Roy R. (2004): "The Chemistry of Exploding Ants, Camponotus spp. (Cylindricus complex)". Journal of Chemical Ecology 30(8): 1479–1492. doi:10.1023/B:JOEC.0000042063.01424.28
- Emery, Carlo (1889). Viaggio di Leonardo Fea in Birmania e regioni vicine. XX. Formiche di Birmania e del Tenasserim raccolte da Leonardo Fea (1885–87). Annali del Museo Civico di Storia Naturale Giacomo Doria (Genova) 2 7(27): 485–520. [PDF]
- "Utahn enters world of exploding ants". Deseret News. September 11, 2002. University of Utah graduate student Steve Cook explained "They've been called kamikaze ants by other researchers because they tend to explode or self-destruct when they're attacked or harassed in any way."
- Vittachi, Nury (June 6, 2008). "The Malaysian ant teaches us all how to go out with a bang". Daily Star (Dhaka).
- Ridley, Mark (1995). Animal Behaviour (Second ed.). Blackwell Publishing. p. 3. ISBN 0-86542-390-3. Retrieved 2009-09-26.
- Robert S. Anderson, Richard Beatty, Stuart Church (2003-01). Insects and Spiders of the World 9. p. 543. ISBN 978-0-7614-7334-3.
Further reading
- Mayr, Gustav (1861): Die europäischen Formiciden. Vienna. PDF—original description of p. 35
- McArthur, Archie J (2007): A Key to Camponotus Mayr of Australia. In: Snelling, R.R., B.L. Fisher and P.S. Ward (eds). Advances in ant systematics (Hymenoptera: Formicidae): homage to E.O. Wilson – 50 years of contributions. Memoirs of the American Entomological Institute 80. PDF—91 species, 10 subspecies
- University of Kentucky Extension Fact Sheet
- Ohio State University Extension Fact Sheet
- Carpenter Ant Fact Sheet from the National Pest Management Association with information on habits, habitat and prevention
- Black Carpenter Ants Diagnostic large format photographs, information
- compact carpenter ant, Camponotus planatus on the UF / IFAS Featured Creatures Web site
- Florida carpenter ant complex, Camponotus floridanus and C. tortuganus on the UF / IFAS Featured Creatures Web site
- Carpenter Ants An online supplemental to "Carpenter Ants: Biology and Control" by Laurel Hansen, Ph.D. of Spokane Falls Community College
- "Species: Camponotus saundersi". antweb.org. Retrieved 2009-09-26. Photos of the exploding ant. | <urn:uuid:c5ac105c-3615-4372-8072-eae077a698e4> | 3.5 | 1,676 | Knowledge Article | Science & Tech. | 45.443013 |
The power of our oceans has long been seen as potential source of energy.
The UK has some of the best wave and tidal resources in the world, but it's only in the last decade that harnessing them has become more than a pipe dream. By 2020, the UK could produce 2 GW of energy from wave and tidal power - enough to supply 1.4 million homes*
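As a rough sanity check on that figure (the capacity factor here is our assumption, not RenewableUK's): 2 GW of installed capacity running at a typical marine-energy capacity factor of around 35% would generate about 2 GW x 8,760 h x 0.35 ≈ 6,000 GWh a year. Spread across 1.4 million homes, that is roughly 4,300 kWh per home per year - close to typical UK household electricity consumption, so the headline numbers hang together.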
Wave energy is carried in the movement of water near the surface of the sea. Wind blowing across the sea surface sets this water in motion, creating waves. The greatest amount of energy is available in deeper, well-exposed waters offshore, and there are many different ways of capturing this energy.
22 November 2011
Anniversary of wave energy machine project marks next phase of wave farm reality in Orkney waters
The project was also recently shortlisted for the best project award at the Scottish Green Energy Awards to be held in December 2011.
Click here to learn more (PDF, 29.4KB)
Harnessing Energy From Our Oceans (PDF, 3.6MB)
Pelamis Wave Energy Project Information Sheet (PDF, 978KB)
*Source: RenewableUK - Marine Renewable Energy, State of industry report, March 2010 | <urn:uuid:3d459e54-6ded-4397-8846-48360d707e1e> | 3.234375 | 258 | Content Listing | Science & Tech. | 51.620648 |
Science news articles about 'development timeline'
... in 1962, the seventh edition of the IISD Sustainable Development Timeline highlights key meetings, events, publications and other milestones that have paved the path toward sustainability ...
NEW HAVEN, Conn., May 12 (UPI) -- A Moroccan fossil trove suggests soft-bodied distant relatives of crustaceans lived beyond the Cambrian period, U.S. and Canadian researchers say.
Researchers have not only created a new material for high-speed organic semiconductors, they have also come up with a new approach that can take months, even years, off the development timeline.
A five-year development timeline unveiled Thursday by the Chinese government could establish it as a major rival in space at a moment when the American program is in retreat.
| <urn:uuid:44b2688b-3143-4665-a951-5e6b83337507> | 2.75 | 186 | Content Listing | Science & Tech. | 42.05771 |
Prometheus (Saturn XVI)
Orbit: 139,350 km from Saturn
Diameter: 91 km (145 x 85 x 62)
Discovered by: S. Collins and others in 1980, from Voyager photos.
Prometheus is the inner shepherd satellite of the F ring.
Prometheus has a number of ridges and valleys and several craters about 20 km in diameter but appears to be less cratered than the neighboring moons Pandora, Janus and Epimetheus.
From their very low densities and relatively high albedos, it seems likely that Prometheus, Pandora, Janus and Epimetheus are very porous icy bodies. (Note, however, that there is a lot of uncertainty in these values.)
The 1995/6 Saturn Ring Plane Crossing observations found that Prometheus was lagging by 20 degrees from where it should have been based on Voyager 1981 data. This is much more than can be explained by observational error. It is possible that Prometheus's orbit was changed by a recent encounter with the F ring, or it may have a small companion moon sharing its orbit.
Pandora (Saturn XVII)
Pandora ("pan DOR uh") is the fourth of Saturn's known satellites.
Orbit: 141,700 km from Saturn
Diameter: 84 km (114 x 84 x 62)
Discovered by: Collins and others in 1980, from Voyager photos.
More heavily cratered than nearby Prometheus, Pandora has at least two large craters 30 km in diameter. But it shows no linear ridges or valleys.
| <urn:uuid:b1cf8661-7c9e-418f-b588-cba18d51756b> | 3.84375 | 384 | Knowledge Article | Science & Tech. | 60.202155 |
6.1.1 Ground-Based Selection
Ground-based surveys have used morphological criteria, colour selection and emission-line selection. Lists of blue compact galaxies were pioneered by Zwicky, followed by Fairall and others, who isolated objects by their anomalously high surface brightness as seen on the Palomar Sky Survey. Spectroscopic follow-ups have revealed a large proportion of H II galaxies and AGNs (Kunth et al. 1981).
The colour selection proceeds by searching for blue or ultraviolet-excess objects, using techniques such as very low-dispersion images or multiple-colour direct images. Dispersed images have been used by the First and Second Byurakan Surveys (FBS, SBS) by Markarian (1967) on IIaF emulsion, and later by the University of Michigan survey (UM, MacAlpine et al. 1977) and the Case survey (Pesch and Sanduleak 1983) with IIIaJ emulsion. The second method was pioneered by Haro (1956) and extensively developed by the Kiso Observatory Survey (Takase and Miyauchi-Isobe 1984). Low-resolution slitless spectroscopy makes it possible to detect the [O II] 3727, Hβ, [O III] 4959, 5007 and Hα lines, depending on the chosen emulsion or filter. Good seeing and excellent guiding are prerequisites to avoid trailing and loss of detectivity. These techniques face a trade-off between the dispersion and the spectral range covered. The higher the dispersion, the easier it becomes to detect weak emission lines against the continuum, while a narrow spectral range cuts the sky background significantly at the expense of the redshift range. The recent surveys conducted by Gallego et al. (1997) and Salzer (1999) use the Hα line, which can be bright even in low-excitation or very metal-poor objects. Because each technique involves specific observational biases, modern surveys tend to combine various approaches. The use of large CCD arrays equipped with scanning Fabry-Perot interferometry or slitless spectroscopy offers deeper limits at the expense of a reduced field of view. In the future, these combinations will probe distant H II galaxy populations. The most difficult problem these surveys face is that of follow-up observations (Terlevich et al. 1991). Getting even a rough oxygen abundance for an object fainter than 17th magnitude requires long telescope time and suggests the use of multi-object spectroscopy. | <urn:uuid:3cd4bda8-d66c-40cc-bc54-5090863c186c> | 3.296875 | 510 | Academic Writing | Science & Tech. | 38.0092 |
I did not write for the Globe and Mail in 1993 let alone about climate!
global av temp (ignoring pinatubo drop) is about 0.2C above 1991 level after 22 yrs – so I was spot on so far!
As you can see, the graph he cites shows 0.5 degrees of warming since he made his prediction, so it seems that he is applying a 0.3 degree correction for Pinatubo. Which brings us to Ridley’s next column, published in The Sunday Telegraph on 30 Jan 1994 (one month after his column with the failed prediction):
The satellites, however, tell a very different story about the 1980s (their data do not go further back). Orbiting the planet from north to south as the Earth turns beneath them, they take the temperature of the lower atmosphere using microwave sensors. By the end of 1993 the temperature was trending downwards by 0.04 of a degree per decade.
The satellite’s masters explain away this awkward fact by subtracting two volcanic eruptions (Mount Pinatubo in 1991 and El Chichon in 1982) and four El Ninos (sudden changes in the circulation of the water in the Pacific). Since they assume that all these would have cooled the atmosphere, they conclude that the 1980s did see a gradual warming of the air by 0.09 degrees: still less than a third of that recorded by the old method.
Even with this sleight of hand (and when I was a scientist I was trained not to correct my data according to my preconceptions of the result), the startling truth remains that the best measure yet taken of the atmosphere has found virtually no evidence of global warming.
So according to Matt Ridley in 1994, Matt Ridley in 2013 used a “sleight of hand”, something that he was trained not to do. If we hold Matt Ridley to the standard he declared at the time of his prediction there has been 0.5 degrees of warming since he predicted that there would be just one degree by 2100.
But if we do want to know what the long term warming trend is, it is not a “sleight of hand” to remove the short term effects of volcanoes and El Nino/La Nina. It is, however, a sleight of hand for Ridley to just correct for Pinatubo and not El Nino/La Nina. Here is the graph from Foster and Rahmstorf (2011) that shows what temperature records look like if the short term effects are removed:
Using Ridley’s preferred UAH data set we see that there has been 0.4 degrees of warming since he made his prediction.
Any way you slice it, there has been much more warming than Ridley predicted. I hope this information will help him reach stage 5, acceptance. | <urn:uuid:1fef19a5-412b-4b0a-906e-6e4b9e71a418> | 2.8125 | 580 | Personal Blog | Science & Tech. | 64.851355 |
December 2nd, 2010 | Ted Dhillon
Temperatures reached record levels in several regions during 2010, the World Meteorological Organization (WMO) says, confirming the year is likely to be among the warmest three on record. Full article at http://www.bbc.co.uk/news/science-environment-11903397
You can’t argue with those numbers, skeptics! If you happened to be living in Canada, you will attest to the warm November we’ve had so far. I’m not complaining; maybe Canada will become one of the best places (climatically) to live in 25 years from today. But what about the Bangladeshes and the Sudans and the Maldives of the world, where rising sea waters and increasing desertification are creating climate refugees even today.
All the political wranglings and debates notwithstanding, climate change is happening faster than we expected. So what are we going to do about it? Can the proclaimed “dead” Cancun COP turn out something viable? Let’s wait and see. | <urn:uuid:a3e968b3-aa04-419e-8602-58e119a5e504> | 2.78125 | 227 | Personal Blog | Science & Tech. | 54.769167 |
"We scientists made a mistake.
We thought that the next cycle was going to be quiet. Some of our data was off by a 20 and that's why we are issuing this alert now. We made a mistake.
The next cycle will be much more serious than we previously thought," said Dr. Michio Kaku, one of the world's leading experts in theoretical physics
in an interview on Fox News.
Scientists have discovered leaks in Earth's magnetic field.
Earth's magnetic field, which acts as our protective shield in space, has a hole in it. That could be a problem, because a weakened field could leave Earth vulnerable to solar storms.
Every 11 years the Sun releases a shockwave, a tsunami of radiation that could wipe out our communication, weather satellites, GPS, spy satellites, internet, television and more.
All this could be wiped out when we have the peak of the sunspot cycle. That's when the Sun's magnetic field flips. North pole and south pole flip, releasing a shockwave of radiation that hits the Earth, potentially wiping out a lot of our satellite communications.
Solar radiation can wipe out communication, weather satellites, GPS and more.
"Lets hope that nothing happens. However, what if our communication systems are wiped out?
A massive solar storm can have devastating consequences on our global
economy," says Dr. Kaku.
"Imagine large cities without power for a week, a month, or a year.
The losses could be $1 to $2 trillion, and the effects could be felt for years," says Daniel Baker, of the University of Colorado's
Laboratory for Atmospheric and Space Physics.
A massive solar storm of the kind scientists are predicting could force much of the population to shift from white-collar work to skilled trades, with the focus moving to training more farmers, EMS workers and other tradespeople to survive weeks without power or food.
The storm would also hit everyday items, including sat-navs, causing huge problems for drivers and emergency services.
Pete Riley, a senior scientist at Predictive Science in San Diego, California, says there is a 12 per cent chance of being struck by a solar megaflare.
"Even if it's off by a factor of two, that's a much larger number than I thought," said Riley after publishing his estimate in Space Weather on February 23.
"A geomagnetic storm could shatter nations all over the earth. We cannot wait for disaster to spur us to action," said former US government defence adviser Dr Avi Schnurr.
A massive solar storm can affect everything from home freezers to car sat navs.
"The sun has an activity cycle, much like hurricane season. It's been hibernating for four or five years, not doing much of anything,"
said Tom Bogdan, director of the Space Weather Prediction Center in Boulder, Colorado.
"Now the sun is waking up. The individual events could be very powerful.'
If Earth is hit by the same force as the worst recorded solar storm in history, 1859's Carrington Event, it would be devastating.
Giant Jupiter - Our "Would-Be" Friend With Secrets From The Past
It knocked a giant planet into deep space and swallowed up a smaller rival before it could grow any bigger.
Careful observers saw dark spots rapidly changing places and, almost a century ago, a luminous protuberance was reported on the eastern edge of Jupiter, on the equatorial side of the north equatorial belt.
Unusual Pulsar Or Alien Signals?
The pulse timing of this object is considered unusual.
What kind of phenomenon is related to this object?
It is the first time this kind of phenomenon has been observed by astronomers.
The "Cloaked" Star Was Difficult To Find
An object obscured by dust, and buried in a two-star system enshrouded by dense gas, is not easy to find.
A "cloaked" star was discovered after it ate a little of its neighbor. The meal must have given the star a bit of indigestion, because it "burped" with a blast of high-energy radiation, which gave it away.
Radio Emission From Ultracool Dwarf Detected By Arecibo Telescope
The Arecibo Telescope in Puerto Rico has discovered sporadic bursts of polarized radio emission from the T6.5 brown dwarf J1047+21.
Because Arecibo is a single, fixed-dish telescope, it has a restricted practical sensitivity to weak, quiescent emission from radio sources...
Invader From Another Galaxy
This alien intruder from another galaxy is in many ways different from other exoplanets observed by astronomers.
Located about 2000 light-years from Earth in the southern constellation of Fornax (the Furnace), the Jupiter-like planet orbits a dying star of extragalactic origin and risks being engulfed by it.
"Pillars Of Creation" Are Gone
Every time you look at the beautiful and famous image of the Pillars of Creation taken by Hubble back in 1995, you are actually admiring something that no longer exists.
In fact, the Pillars of Creation were already long gone by the time the image was captured! | <urn:uuid:24983257-0d5a-4db9-8d64-7c3572acffd2> | 2.71875 | 1,118 | Content Listing | Science & Tech. | 49.976764 |
First there was SETI@home, allowing the public to help search for artificial radio signals in radio observatory data by donating spare computing time on their PCs. Then Stardust@home was born, in which people scrutinise images of the Stardust spacecraft's dust collector in order to help locate valuable bits of space dust.
Now, you can help astronomers in a task of truly cosmic proportions. The Galaxy Zoo project is encouraging members of the general public to help classify the shapes of galaxies from images in a massive online database.
The goal is to determine whether the observable universe is skewed along a particular direction playfully named the "axis of evil".
As it turns out, the project was spurred by a New Scientist story that described a study claiming that the axes of rotation of galaxies tended to line up with the axis of evil.
New Scientist reporter Zeeya Merali alerted astronomer Kate Land to the claim in the course of writing the story. Land, along with Joao Magueijo, published an analysis of the possible axis in 2005, on the basis of an apparent alignment of spots in the radiation field left over from the big bang, and dubbed it the axis of evil.
When Land heard about the galaxy alignment study, she was highly intrigued, but wanted to analyse a larger sample of galaxies to verify whether the alignment was real.
The result was the Galaxy Zoo project. By identifying the type of galaxy (spiral or elliptical) in each image, and finding the direction of rotation for the spirals, users will help astronomers determine whether galaxy rotation axes really do line up along the axis of evil. The original study looked at 1660 galaxies, but Galaxy Zoo aims to analyse more than a million.
Most of the galaxies served up by the Galaxy Zoo project have never been seen before by human eyes, so volunteers can experience the thrill of being a pioneer as well as the satisfaction of contributing to scientific research.
Says the project's Chris Lintott: "What the Stardust team achieved was incredible, but our galaxies are much more interesting to look at than their dust grains."
I've looked at both and I have to agree. Due to high demand, the site is currently quite slow. But when that lets up a bit, I aim to try my hand at classifying galaxies myself.
David Shiga, online reporter (Image: NASA/ESA/S Beckwith/HUDF Team)
Labels: axis of evil, galaxy zoo | <urn:uuid:b25b715f-1290-45e5-b3fb-57ccc984922f> | 3.484375 | 513 | Personal Blog | Science & Tech. | 38.828122 |
|Feb19-13, 12:58 PM||#1|
Variation in radioactive decay rates
I would like to hear opinions on the variation in decay rates described by Fischbach and coworkers, and how (if at all) this will affect radiometric dating. Does this phenomenon indeed exist, or is it the result of errors in experimental technique?
|Feb19-13, 01:27 PM||#2|
Evidence against correlations between nuclear decay rates and Earth–Sun distance (arXiv version)
I don't think there is anything new. Maybe new insights into how measurement errors can be avoided.
|Feb19-13, 03:14 PM||#3|
Thanks! Any other comments will be appreciated.
|Feb19-13, 11:58 PM||#4|
Variation in radioactive decay rates
FAQ: Do rates of nuclear decay depend on environmental factors?
There is one environmental effect that has been scientifically well established for a long time. In the process of electron capture, a proton in the nucleus combines with an inner-shell electron to produce a neutron and a neutrino. This effect does depend on the electronic environment, and in particular, the process cannot happen if the atom is completely ionized.
Other claims of environmental effects on decay rates are crank science, often quoted by creationists in their attempts to discredit evolutionary and geological time scales.
He et al. (He 2007) claim to have detected a change in rates of beta decay of as much as 11% when samples are rotated in a centrifuge, and say that the effect varies asymmetrically with clockwise and counterclockwise rotation. He believes that there is a mysterious energy field that has both biological and nuclear effects, and that it relates to circadian rhythms. The nuclear effects were not observed when the experimental conditions were reproduced by Ding et al. [Ding 2009]
Jenkins and Fischbach (2008) claim to have observed effects on alpha decay rates at the 10^-3 level, correlated with an influence from the sun. They proposed that their results could be tested more dramatically by looking for changes in the rate of alpha decay in radioisotope thermoelectric generators aboard space probes. Such an effect turned out not to exist (Cooper 2009). Undeterred by their theory's failure to pass their own proposed test, they have gone on to publish even kookier ideas, such as a neutrino-mediated effect from solar flares, even though solar flares are a surface phenomenon, whereas neutrinos come from the sun's core. An independent study found no such link between flares and decay rates (Parkhomov 2010a). Laboratory experiments[Lindstrom 2010] have also placed limits on the sensitivity of radioactive decay to neutrino flux that rule out a neutrino-mediated effect at a level orders of magnitude less than what would be required in order to explain the variations claimed in [Jenkins 2008]. Despite this, Jenkins and Fischbach continue to speculate about a neutrino effect in [Sturrock 2012]; refusal to deal with contrary evidence is a hallmark of kook science. They admit that variations shown in their 2012 work "may be due in part to environmental influences," but don't seem to want to acknowledge that if the strength of these influences in unknown, they may explain the entire claimed effect, not just part of it.
Jenkins and Fischbach made further claims in 2010 based on experiments done decades ago by other people, so that Jenkins and Fischbach have no first-hand way of investigating possible sources of systematic error. Other attempts to reproduce the result are also plagued by systematic errors of the same size as the claimed effect. For example, an experiment by Parkhomov (2010b) shows a Fourier power spectrum in which a dozen other peaks are nearly as prominent as the claimed yearly variation.
Cardone et al. claim to have observed variations in the rate of alpha decay of thorium induced by 20 kHz ultrasound, and claim that this alpha decay occurs without the emission of gamma rays. Ericsson et al. have pointed out multiple severe problems with Cardone's experiments.
In agreement with theory, high-precision experimental tests show no detectable temperature-dependence in the rates of electron capture[Goodwin 2009] and alpha decay.[Gurevich 2008]
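To connect this back to the radiometric dating question in post #1, here is a back-of-envelope estimate (mine, not from the papers cited below). For exponential decay, N(t) = N0 e^(-λt), so the inferred age is t = ln(N0/N)/λ, and a fractional error in the decay constant maps directly into the same fractional error in the age: Δt/t ≈ Δλ/λ. Even if the claimed ~10^-3 modulations were real, they would therefore shift a radiometric age by at most about 0.1% - and since the claimed effect is periodic (annual), it would largely average out over geologic time anyway. Either way it is negligible compared with other systematic uncertainties in dating.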
He YuJian et al., Science China 50 (2007) 170.
YouQian Ding et al., Science China 52 (2009) 690.
Jenkins and Fischbach (2008), http://arxiv.org/abs/0808.3283v1, Astropart.Phys.32:42-46,2009
Jenkins and Fischbach (2009), http://arxiv.org/abs/0808.3156, Astropart.Phys.31:407-411,2009
Jenkins and Fischbach (2010), http://arxiv.org/abs/1007.3318
Parkhomov 2010a, http://arxiv.org/abs/1006.2295
Parkhomov 2010b, http://arxiv.org/abs/1012.4174
Cooper (2009), http://arxiv.org/abs/0809.4248, Astropart.Phys.31:267-269,2009
Lindstrom et al. (2010), http://arxiv.org/abs/1006.5071 , Nuclear Instruments and Methods in Physics Research A, 622 (2010) 93-96
Sturrock 2012, http://arxiv.org/abs/1205.0205
F. Cardone, R. Mignani, A. Petrucci, Phys. Lett. A 373 (2009) 1956
Ericsson et al., Comment on "Piezonuclear decay of thorium," Phys. Lett. A 373 (2009) 1956, http://arxiv4.library.cornell.edu/abs/0907.0623
Ericsson et al., http://arxiv.org/abs/0909.2141
Goodwin, Golovko, Iacob and Hardy, "Half-life of the electron-capture decay of 97Ru: Precision measurement shows no temperature dependence" in Physical Review C (2009), 80, 045501, http://arxiv.org/abs/0910.4338
Gurevich et al., "The effect of metallic environment and low temperature on the 253Es α decay rate," Bull. Russ. Acad. Sci. 72 (2008) 315.
| <urn:uuid:5c2c230e-48d6-4bd1-ac05-bf98ae9dc80c> | 2.71875 | 1,495 | Comment Section | Science & Tech. | 58.938722 |
Head Tilt Mouse
Ever since Mario Capecchi, Martin Evans, and Oliver Smithies created the first knockout mouse in 1989, genetically engineered animals have steadily increased in popularity for all kinds of biology research: simply pick a gene, turn it off in the mouse, and see what happens.
Knockout mice are undoubtedly helpful animal models for many human genetic disorders. But there is still plenty of potential for discovery in mice that go through spontaneous mutations: indeed, these natural mutations can have high levels of complexity and diversity, leading to surprising phenotypes that give insight into human genetic disorders. Last week, at Jackson Laboratory's 50th annual Short Course in Medical and Experimental Mammalian Genetics in Bar Harbor, Maine, Popular Science got a peek at 150 of the Lab's 10,000-plus mutant mouse strains.
Jackson Lab has one of the largest populations of spontaneous mutant mice in the world: out of the 10,000 or so total strains, over 500 came from natural mutations. The lab also has around 1,500 targeted strains - knockout mice that were genetically engineered - and various flavors of regular, healthy mice.
For over 20 years, researchers here have conducted "deviant searches" at the lab, which help link specific genetic mutants to an appropriate area of research. Every two weeks, mice that differ physically or behaviorally from their littermates -- usually anywhere from five to a few dozen -- are pooled together. Jackson scientists look for mice that exhibit phenotypes (the observable characteristics of their genotypes) that are common to whatever disease or disorder they are studying, and take those mice back to their lab. This process has led to the discovery of hundreds of genes associated with genetic diseases or disorders. Some classic examples include diabetes and obesity mouse models, which led to modern-day metabolism research.
The characteristics of some strains are hard to detect in a photograph or video, like the Social Anxiety mouse, which hangs back from its brothers and sisters and has a hard time getting a date (poor things are often too afraid of contact to mate). But others have obvious physical or behavioral traits, like those in our gallery.
| <urn:uuid:e96a63fc-5468-4868-a299-71c29bb6340c> | 3.125 | 483 | Truncated | Science & Tech. | 29.563013 |
In the course of discussing light physics and vision, the question has been asked how the actual location of a planet, say Mars, is determined: does the calculation use the apparent location as a variable, and does it correct for the light-speed time delay? If we can get an actual formula and a brief explanation, that would be most helpful.
I think this question is fairly elementary, but our interlocutor at another board remains dissatisfied with our answers, and would like input from experts. Here is the actual question:
"When sending rockets to other planets, do scientists take into account that the image of a planet produced by a telescope shows a delayed true location, due to the finite speed of light?"
Although redundant, I have been asked to include this addendum
"does this delay factor into the equation which determines the actual location of the planet?"
Thanks in advance for any input. | <urn:uuid:cd20c3dd-1bea-4162-8a42-4f0455ca37b6> | 3.109375 | 183 | Comment Section | Science & Tech. | 38.158683 |
You are on a carousel with a linear speed , such that
Let denote the tangential component of the Coriolis force exerted on your body at time t, and let its magnitude equal 1. Calculate
The normal component of angular velocity is constant.
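(For reference, since the symbols above were lost in formatting, the general formula is F = -2m Ω × v. With Ω vertical and the motion in the plane of the carousel, the tangential component of the Coriolis force comes from the radial part of the velocity, |F_tan| = 2m Ω |v_r|, while the radial component comes from the tangential part.)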
I have been working on this problem for >6 hours to no avail. I have tried to get a detailed explanation of how to do this, but no luck. There are only 4 sample questions in the textbook dealing with the Coriolis effect, and I can solve none of them.
I know that the velocity vector is tangent to the circle of rotation, and that the angular velocity is perpendicular to the plane of rotation, as well the force is directed away from the carousel, in the same plane as the velocity. | <urn:uuid:0fd4ac6d-c6b0-459b-8f8e-dcb19207522c> | 2.953125 | 160 | Q&A Forum | Science & Tech. | 48.427861 |
Did you know there were principles of forecasting? I don't mean the positions of the planets, which for time spans of tens of thousands of years are fairly mechanical. The kind of forecasting I'm talking about involves events that are less deterministic than the motions of the planets. And yet there are principles.
The first is to classify the methodology. Are you starting with numbers or guesses? Which is to say how good is your data base? If you have numbers, what kind of precision is attached? Do you use the numbers directly? Or do you use statistical methods to tease out "useful" information?
OK. You have some data. Now you have to select a method of analysis that is both suitable to the data and the purpose for which it will be used. Is this an investment decision? Or just a report on something to keep an eye on? Do you have a business plan in hand or just a casual "this seems like a good idea"?
The above pages are full of annotated charts with little pop-up explanation boxes to help you understand the charts.
And if that isn't enough the authors of these pages and the accompanying book will give you free help if you describe your problem(s) to them.
We have come a ways and surely it can't be just to talk about forecasting methods. Well yes and no. I want to talk about climate. Climate forecasting.
J. Scott Armstrong, of the Wharton School, University of Pennsylvania, and Kesten C. Green, of the Business and Economic Forecasting Unit, Monash University have done a short audit of IPCC climate science [pdf] based on the forecasting principles outlined above.
I think it would be good to start with the title which really gets to the heart of the matter.
Global Warming: Forecasts by Scientists versus Scientific Forecasts
Naturally they have some points to make.
In 2007, a panel of experts established by the World Meteorological Organization and the United Nations Environment Programme issued its updated, Fourth Assessment Report, forecasts. The Intergovernmental Panel on Climate Change’s Working Group One Report predicts dramatic and harmful increases in average world temperatures over the next 92 years. We asked, are these forecasts a good basis for developing public policy? Our answer is “no”.
Then they have a devastating word about the "consensus".
Much research on forecasting has shown that experts’ predictions are not useful. Rather, policies should be based on forecasts from scientific forecasting methods. We assessed the extent to which long-term forecasts of global average temperatures have been derived using evidence-based forecasting methods. We asked scientists and others involved in forecasting climate change to tell us which scientific articles presented the most credible forecasts. Most of the responses we received (30 out of 51) listed the IPCC Report as the best source. Given that the Report was commissioned at an enormous cost in order to provide policy recommendations to governments, the response should be reassuring. It is not. The forecasts in the Report were not the outcome of scientific procedures. In effect, they present the opinions of scientists transformed by mathematics and obscured by complex writing. We found no references to the primary sources of information on forecasting despite the fact these are easily available in books, articles, and websites. We conducted an audit of Chapter 8 of the IPCC’s WG1 Report. We found enough information to make judgments on 89 out of the total of 140 principles. We found that the forecasting procedures that were used violated 72 principles. Many of the violations were, by themselves, critical. We have been unable to identify any scientific forecasts to support global warming. Claims that the Earth will get warmer have no more credence than saying that it will get colder.
• Agreement among experts is weakly related to accuracy. This is especially true when the experts communicate with one another and when they work together to solve problems. (As is the case with the IPCC process.)
• Complex models (those involving nonlinearities and interactions) harm accuracy because their errors multiply. That is, they tend to magnify one another. Ascher (1978), refers to the Club of Rome’s 1972 forecasts where, unaware of the research on forecasting, the developers proudly proclaimed, “in our model about 100,000 relationships are stored in the computer.” (The first author was aghast not only at the poor methodology in that study, but also at how easy it was to mislead both politicians and the public.) Complex models are also less accurate because they tend to fit randomness, thereby also providing misleading conclusions about prediction intervals. Finally, there are more opportunities for errors to creep into complex models and the errors are difficult to find. Craig, Gadgil, and Koomey (2002) came to similar conclusions in their review of long-term energy forecasts for the US made between 1950 and 1980.
• Given even modest uncertainty, prediction intervals are enormous. For example, prediction intervals expand rapidly as time horizons increase so that one is faced with enormous intervals even when trying to forecast a straightforward thing such as automobile sales for General Motors over the next five years.
They have lots more where that came from. What it boils down to is a warning in the wash room. Keep your eye on this. It is not worth a meeting. Let alone a report to the investment committee.
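A toy example of the prediction-interval point (my numbers, not Armstrong and Green's): for a plain random walk with step standard deviation σ, the h-step-ahead forecast error has standard deviation σ√h, so a 95% interval is roughly ±1.96σ√h. Forecast 100 periods ahead instead of 1 and the interval is 10 times wider - and a random walk is about the simplest forecasting model there is; add trends, feedbacks, or nonlinearities and the intervals blow up faster still.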
In electronics we can work with very complex systems because the interactions are strictly limited. How is this done? A marvelous Bell Labs invention called the transistor. It isolates as well as performing other useful functions.
The electronics guys, with lots of knowledge and isolation plus simple models, are real happy when their predictions of what will happen next in a circuit come within 5%. The climate guys say they can do better than 1%. What are the odds?
When you have lots of things or some very complex things interacting, prediction gets hard. As a very great Yogi is reputed to have said: "Prediction is very difficult, especially about the future."
Cross Posted at Classical Values
When it's hot out, you'll often see dragonflies perched on a stem in the sun with their long bodies (abdomens) sticking straight up towards the sky. It looks like they are doing some sort of insect handstand, but they are really working on thermoregulation, and their strange posture is called obelisking. Not all dragonflies obelisk to cool their bodies: some drop their abdomens downward, some shade themselves with their wings, some circulate hemolymph through their abdominal sections, and some dive into the water...
A male Blue Dasher (Pachydiplax longipennis) dragonfly obelisking in the hot sun to regulate his body temperature.
By raising their abdomens straight up, dragonflies reduce the surface area heated by the sun, which helps them cool their bodies. Blue Dashers are famous for obelisking. They often take the stance even when the temperatures are not that high, and males also seem to use the posture as a threat display when defending their territory. Additionally, if the sun is low in the sky and it's cooler, they use the obelisk posture to heat themselves by exposing more of their abdomen to the sun's warming rays.
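The surface-area argument is easy to quantify. The back-of-envelope sketch below (my own, with made-up abdomen dimensions) approximates the abdomen as a thin cylinder and computes the area it presents to an overhead sun at different body angles.

    import math

    L, D = 0.030, 0.004            # assumed abdomen length and diameter, in meters
    for angle_deg in (0, 45, 90):  # body axis angle away from vertical, sun overhead
        a = math.radians(angle_deg)
        # the cylinder's side projects as L*D*sin(a); the end cap as (pi*D^2/4)*cos(a)
        area_mm2 = (L * D * math.sin(a) + math.pi * D**2 / 4 * math.cos(a)) * 1e6
        print(f"{angle_deg:2d} deg from vertical: {area_mm2:6.1f} mm^2 facing the sun")

On this rough estimate, pointing straight up exposes only the small end cap (about 13 mm^2) versus roughly 120 mm^2 broadside, so the obelisk posture cuts solar heating nearly tenfold when the sun is overhead.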
A male Blue Dasher dragonfly has several distinguishing field marks--a powder-blue abdomen tipped in black, amazing turquoise-green eyes in a white face, brownish areas on the wings, and very noticeable stripes on its thorax.
Blue Dashers are common in numbers but not in looks! With powdery blue abdomens and bright turquoise-green eyes, it's hard to pass them by without a second look!
Blue Dashers are "perching" predators. They like to perch in one place and fly out to catch their prey, returning to the same perch to eat it. Because they spend so much time sitting and waiting in one place without moving, thermoregulation by adjusting their posture works well for them (source: Obelisk posture, Wikipedia). Blue Dashers are formidable predators and will eat all sorts of insects including mosquitoes, flies, butterflies, moths, mayflies, flying ants and termites (source: Idaho State Univeristy).
Even as a naiad (the nymph form that lives in the water), Blue Dashers are "sit and wait" predators, hiding behind rocks and logs until the prey goes past.
...an interesting fact: Blue Dasher naiads can tolerate low levels of oxygen in the water, so just as lichens are an indicator species of a healthy environment, a lot of Blue Dasher naiads in relation to other species in an area can indicate low water quality (source: Idaho State University).
(I photographed this guy on 6/13/2010 on Pinckney Island in Hilton Head, SC. It was really hot that day and beautiful. The field guide I use to help me identify dragonflies is "Dragonflies and Damselflies of Northeast Ohio," by Larry Rosche, Judy Semroc, and Linda Gilbert)
Thank you very much
This article is really helpful for me and anyone who is learning Java at a beginner level like me :-). Again, thank you, and I wish you all the best.
Architecture of application is also a matter of point of view.
Thank you very much.
Ladislav... Architecture of application: Hello, I would like to know your ideas... components:
- Desktop client application (multi-user, multi-role) - Swing
Thank you so much - JSP-Servlet
Thank you so much: Respected Sir/madam,
By the grace of Sri Guru Raghavendra, I have successfully got the output for my image selection problem.
Lord Raghavendra has sent your team to help me in time.
Custom Converter Example in JSF: JSF provides a very flexible architecture that lets developers use the converters provided, so that applications can be designed well with greater maintainability. JSF also uses the MVC pattern. The "JSF Architecture" article on roseindia.net explains the architecture of a JSF application. Read about the topic at
Thank U - Java Beginners
Thank you very much, Sir. It's very, very useful for me.
Three-tier architecture of the application:
... the architecture of the application and the different components that make up the application
... to understand what the MVC design pattern is, how MVC helps to design an application well, and how we can make our web application easy to maintain.
The MVC design pattern
JSP Architecture, JSP Model 1 architecture, JSP Model 2 architecture:
... generating the view for the application. The JSP Model 1 architecture is good for a very small application, but it's not a good solution for a big enterprise application. The JSP Model 1 architecture does not separate the view, business
JSF architecture: What is JSF architecture?
In this tutorial you will learn about the Hibernate architecture.
Here we will understand the architecture of Hibernate using a diagram.
The diagram given below is a very high-level view of Hibernate
This story is in the news again, so I’ve reposted my description of the paper from 3½ years ago. This is an account of the discovery of soft organic tissue within a fossilized dinosaur bone; the thought at the time was that this could actually be preserved scraps of Tyrannosaurus flesh. There is now a good alternative explanation: this is an example of bacterial contamination producing a biofilm that has the appearance of animal connective tissue.
Read GrrlScientist’s explanation and Greg Laden’s commentary and Tara Smith’s summary of the recent PLoS paper that tests the idea that it is a biofilm.
Look! A scrap of soft tissue extracted from dinosaur bone:
It has been reported in Science this week that well-preserved soft tissues have been found deep within the bones of a T. rex, and also within some hadrosaur fossils. This is amazing stuff; fine structure has been known to be preserved to this level of detail before, but these specimens also show signs of retaining at least some of their organic composition. What the authors have done is to carefully dissolve away the mineral matrix of the bone, exposing delicate and still flexible scraps of tissue inside.
Here, for example, is a piece of endothelial tissue, or the tubelike epithelia that line blood vessels and form capillaries. It is compared to a similarly prepared piece from fresh ostrich bone; you can tell the T. rex fragment has undergone some changes, but it’s comparable in size and organization to the bird sample.
Looking more closely with a scanning electron microscope, here’s a similar piece of T. rex blood vessel that has ruptured, spilling out its contents. Maybe those cells don’t look perfectly preserved, but they’re darned close.
And lastly, here’s a closeup of the surface of that epithelia, compared with an ostrich epithelium. The cells here are very, very flat, and the nuclei are the thickest part, bulging up and giving the surface a pebbled appearance. The T. rex epithelium has a similar pebbly look, suggesting that just maybe there is even some subcellular structure preserved.
How could this be? Here’s the authors’ explanation.
…we demonstrate the retention of pliable soft-tissue blood vessels with contents that are capable of being liberated from the bone matrix, while still retaining their flexibility, resilience, original hollow nature, and three-dimensionality. Additionally, we can isolate three-dimensional osteocytes with internal cellular contents and intact, supple filipodia that float freely in solution. This T. rex also contains flexible and fibrillar bone matrices that retain elasticity. The unusual preservation of the originally organic matrix may be due in part to the dense mineralization of dinosaur bone, because a certain portion of the organic matrix within extant bone is intracrystalline and therefore extremely resistant to degradation. These factors, combined with as yet undetermined geochemical and environmental factors, presumably also contribute to the preservation of soft-tissue vessels. Because they have not been embedded or subjected to other chemical treatments, the cells and vessels are capable of being analyzed further for the persistence of molecular or other chemical information.
So, basically, these cells were entombed in a thick mineral sarcophagus, protected from bacteria and other external insults. There have to have been other factors at play—cells are full of enzymes that trigger a very thorough self-destruct sequence at death—so I’m definitely looking forward to the molecular analysis. Even if their form was preserved, I expect these cells to be denatured monomer soup on the inside.
Schweitzer MH, Wittmeyer JL, Horner JR, Toporski JK (2005) Soft-Tissue Vessels and Cellular Preservation in Tyrannosaurus rex. Science 307(5717):1952-1955.
The Dayton Daily News notes an overnight frost quake centered around Darke County, Ohio on February 10, 2011.
The quake, or cryoseism as it’s known in scientific circles, occurs when moisture soaks into the soil and a quick freeze causes a sudden, even violent expansion and contraction. Darke County’s 911 director Brandon Redmond said the quakes erupted for eight hours Thursday, starting at 1 a.m. The heaviest reports were between 5:30-7:30 a.m.
A similar ice quake happened on Lake Mendota near the University of Wisconsin in 2008, close enough for a geology department seismograph to pick up a reading.
The Jan 31, 1986 This Week in NOAA describes the analysis of frost quake events in Maine.
The most thorough analysis I've found is "Frost quakes as a particular class of seismic events: Observations within the East-European platform," A. A. Nikonov, Izvestiya, Physics of the Solid Earth 46(3):257-273, DOI: 10.1134/S1069351310030079. Alas, subscription only; the abstract:
The group of quakes, which are caused by fast freezing of water-saturated soils or rocks at abrupt drop of winter temperatures often occurring in the middle and high latitudes of Eurasia, is considered. The review of little-known literature is given; the statistical data on the distribution of earthquakes in seasons and the time of day in various regions of Eurasia are presented. Special attention is paid to the East European Platform; using the data for this platform, with thorough consideration of reference quakes along with the weather conditions, the signs of a specific class of nontectonic seismic events are determined. The question concerning the necessity of the frost quakes’ discrimination in compilation of tectonic earthquake catalogues in certain regions is stated.
Translated from the original Russian, which is transliterated as Morozoboinye sotryaseniya kak osobyi klass seismicheskikh yavlenii (po materialam Vostochno-Evropeiskoi platformy). The dedication is
The tremors come, the tremors go,
They love the wintry weather,
With periods fast and periods slow –
Perplexing altogether
C.B. Hammond, "The Song of the Seismologist," 1911.
The original "Song of the Seismologist" is from the Bulletin of the Seismological Society of America.
More frost quake accounts, from 1908 in Connecticut
How Things Work: Ring Laser Gyros
- By Linda Shiner
- Air & Space magazine, September 2002
“What causes the light to stretch? The fact that it had to go farther. Because when it comes back, it has to come back exactly the same way it left,” says Koper. “It has to resonate.”
Sagnac’s counter-rotating beams of light are analogous to beams in a linear cavity. If the turntable rotates clockwise, the beam traveling clockwise has farther to go to catch its starting point; the path of the counterclockwise beam is shorter.
In a given medium, “light travels at a constant velocity,” Koper says. “Einstein says you can’t change that. We definitely know that the beam going clockwise takes longer to get there than the beam going counterclockwise.”
In a ring laser gyroscope, the two counter-rotating beams are channeled to a photo detector. If the vehicle is not rotating, the beams remain in phase. If rotation is occurring, one beam continuously changes phase with respect to the other. A diode translates that moving interference pattern into digital pulses, each pulse representing an angle of rotation (typically .0005 degree per pulse, according to Koper). The rate at which the pulses are produced is also a measure of the rate of rotation.
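As a rough illustration of the counting arithmetic described above, here is a short Python sketch; the 0.0005-degree-per-pulse figure comes from Koper's description, while the function name and sample readings are invented for the example.

    DEG_PER_PULSE = 0.0005  # angle of rotation represented by one detector pulse

    def rotation_from_pulses(pulse_count, interval_s):
        """Turn a pulse count over a sampling interval into angle and rate."""
        angle_deg = pulse_count * DEG_PER_PULSE
        return angle_deg, angle_deg / interval_s

    # e.g. 3000 pulses in half a second: 1.5 degrees turned, at 3.0 deg/s
    print(rotation_from_pulses(3000, 0.5))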
In the JSOW glide bomb guidance package, Koper’s company also includes GPS receivers to update the ring laser gyros, which are arranged to measure yaw, pitch, and roll. Though the gyros are necessary for the constant feedback required for flight controls, the GPS system corrects any errors that inevitably build up in inertial systems, making them dependent, if only temporarily, on something outside the instruments in the closet.
Science Fair Project Encyclopedia
Generally, the members of the phylum Ectoprocta are colonial aquatic animals (also known as moss animals). Although the individual members are microscopic, colonies can grow up to one foot in length. They can reproduce both sexually and asexually. The Ectoprocta are one of the few classical phyla from which no members have been found in the Cambrian; they seem to have evolved in the Ordovician. Currently, there are about 5,000 living species in this phylum.
The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License.
Specific resistance of the material of a wire using a meter bridge
A known length (L) of a wire is connected in one of the gaps (P) of a metre bridge, while a ‘resistance box’ is inserted into the other gap (Q). The circuit is completed by using a battery (B), a rheostat (Rh), a key (K) and a galvanometer (G).
[Figure: meter bridge circuit for measuring the specific resistance of the material of a wire]
The balancing length (l) is found by closing key K and momentarily connecting the galvanometer until it gives zero deflection (null point). Then P/Q = l/(100 − l) (using the expression for the meter bridge at balance).
[Here, P represents the resistance of the wire while Q represents the resistance in the resistance box. The key K is kept open when the circuit is not in use.]
The resistance of the wire is P = ρL/πr², so the specific resistance is ρ = πr²P/L,
where ρ is the specific resistance of the material, r is the radius of the wire, and L is the length of the wire (r is measured using a screw gauge while L is measured with a scale).
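Putting the two formulas together, a worked example in Python might look like the following; all of the readings (box resistance, balancing length, wire length and radius) are invented for illustration.

    import math

    Q = 10.0      # ohms, resistance set on the resistance box
    l = 40.0      # cm, balancing length on the 100 cm bridge wire
    L = 0.50      # m, length of the test wire (measured with a scale)
    r = 0.25e-3   # m, radius of the wire (measured with a screw gauge)

    P = Q * l / (100.0 - l)        # bridge balance: P/Q = l/(100 - l)
    rho = math.pi * r**2 * P / L   # specific resistance: rho = pi*r^2*P/L
    print(f"P = {P:.2f} ohm, rho = {rho:.2e} ohm m")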
By Alana Range
Sea level rise — we've heard about it, but what does it actually mean, and how will it affect you, and the community where you live?
Independent research has already predicted that by 2100, sea level may rise by one meter, due to a combination of the melting of land-based ice sheets and the warming oceans (when water warms, it expands). That gives us about 89 years until high tide is lapping at the door of many residents of Virginia Beach, New Orleans, and Miami, where some may need to find new homes on higher ground. But we already knew that.
A new study published by Climate Central's Ben Strauss and the University of Arizona's Jeremy Weiss and Jonathan Overpeck, takes a closer look at the problem of sea level rise. According to their research, rising sea levels could threaten an average of nine percent of the land within 180 U.S. coastal cities by 2100.
Of the 180 cities the study examines, 20 cities have populations above 300,000, and 160 cities have populations between 50,000 and 300,000. For each city, the scientists reported the percentage of the city's land under 1-6 meters in elevation (about 3-20 feet). Only cities directly connected to the sea were included. What did they find?
"Nationally on average, approximately nine percent of the area in these coastal municipalities lies at or below one meter. This figure rises to 36 percent when considering area at or below six meters."
An interesting aspect of this paper is that it factors in the amount of sea level rise we're "committing" ourselves to this century by maintaining our current output of greenhouse gases, such as carbon dioxide (CO2). With no change in our emissions, the paper suggests that we could see oceans rise as much as six meters in the coming centuries.
In the slide show above, we've found photos to represent the 20 largest cities likely to be affected by sea level rise, according to this study. In addition to the population of each city, we've included the percentage of land under the six meter elevation point.
ACTIVITY DESCRIPTION AND TEACHING MATERIALS
Watch Slideshow >> 20 Big U.S. Cities that Should Worry about Sea Level Rise
TEACHING NOTES / CONTEXT FOR USE
This is a collection of images of cities that are at risk of even slight sea level rise. Is your city one of them?
Assessment is at the discretion of the educator as to how the slideshow is applied and the expectations after viewing it.
The left-hand image is a Dawn FC (framing camera) image, which shows the apparent brightness of Vesta’s surface. The right-hand image is based on this apparent brightness image, which has had a color-coded height representation of the topography overlain onto it. The topography is calculated from a set of images that were observed from different viewing directions, which allows stereo reconstruction. The various colors correspond to the height of the area. The white and red areas in the topography image are the highest areas and the blue areas are the lowest areas. Publicia crater is the large, sharp rimmed crater in the bottom right of the image. Publicia is bowl shaped, which can be seen in the topography image. There is a large mound of material in the center of the crater. This material was probably deposited here after Vesta’s gravity caused it to slump down the crater walls to the center, which is the crater’s lowest point. A number of small impact craters have formed on this material. The apparent brightness image shows that there is a fine-scale intermingling of bright and dark material around the rim of Publicia crater.
These images are located in Vesta’s Lucaria Tholus quadrangle, in Vesta’s northern hemisphere. NASA’s Dawn spacecraft obtained the apparent brightness image with its framing camera on Oct. 14, 2011. This image was taken through the camera’s clear filter. The distance to the surface of Vesta is 700 kilometers (435 miles) and the image has a resolution of about 70 meters (230 feet) per pixel. This image was acquired during the HAMO (high-altitude mapping orbit) phase of the mission. These images are presented in a Lambert azimuthal map projection.
The Dawn mission to Vesta and Ceres is managed by NASA’s Jet Propulsion Laboratory, a division of the California Institute of Technology in Pasadena, for NASA’s Science Mission Directorate, Washington D.C. UCLA is responsible for overall Dawn mission science. The Dawn framing cameras have been developed and built under the leadership of the Max Planck Institute for Solar System Research, Katlenburg-Lindau, Germany, with significant contributions by DLR German Aerospace Center, Institute of Planetary Research, Berlin, and in coordination with the Institute of Computer and Communication Network Engineering, Braunschweig. The framing camera project is funded by the Max Planck Society, DLR, and NASA/JPL.
More information about Dawn is online at http://dawn.jpl.nasa.gov.
Image credit: NASA/JPL-Caltech/UCLA/MPS/DLR/IDA
Northern Florida Swamp Snake
Scientific name: Seminatrix pygaea pygaea (COPE 1871)
* Currently accepted name
* scientific names used through time
- Contia pygaea – COPE 1871
- Tropidonotus pygaeus – BOULENGER 1893
- Seminatrix pygaea – DOWLING 1950
- Seminatrix pygaea pygaea – DOWLING 1950
Description: Average adult size is 10-15 inches (25-38 cm), record is 18.5 inches (47 cm). Adults are shiny and black. The belly is red with a black line at the base of each ventral scale extending 1/3 the distance from the edge to the center. It has more than 117 belly scales. The scales on the back are smooth and there are 17 dorsal scale rows at midbody. The pupil is round. Juvenile color is similar to adults.
[Figures A-E: top of the head; smooth scales; elongated scales below the tail (subcaudal scales), typically divided; front (face) view of the head; side of the head]
Range: In Florida, the Northern Florida Swamp Snake occurs from the central peninsula north into the panhandle. Outside of Florida, it occurs from southern Alabama to eastern Georgia.
Habitat: Commonly found in a variety of aquatic environments such as cypress swamps, marshes, prairies, lakes, ponds, slow moving streams and rivers, willow heads, hyacinth-choked canals, and estuaries.
Comments: HARMLESS (Non-Venomous). The Northern Florida Swamp Snake is aquatic and prefers to inhabit areas with dense vegetation where it burrows into sphagnum moss, and bottom and floating vegetation. It is rarely encountered away from water, but sometimes can be found under logs or debris near water, in crayfish burrows, or crossing roads during rain. It feeds on small fishes, frogs, tadpoles, salamanders, sirens, amphiumas, and invertebrates including leeches, and worms. It is live-bearing, with 2-11 young born in the summer months.
Comparison with other species: The Southern Florida Swamp Snake (Seminatrix pygaea cyclas) has less than 117 ventral scales with black edges. The Redbelly Snake (Storeria occipitomaculata) is grayish-brown with a light colored neck. The Mud Snake (Farancia abacura) has lateral pinkish-red bands or bars and a black and red checkerboard patterned belly.
The pink and orange gamete bundles that look like caviar eggs but are actually hermaphroditic clusters of eggs and sperm are migrating up the polyps toward the oral cavities, the corals' single, multipurpose orifices. Each night these bundles have been growing and stretching the polyps until they resemble nothing so much as minuscule pregnant bellies. On this night, the fourth night after the full moon of the austral springtime, the gametes are beginning to crown, like human heads in their birth canals.
I check my watch. When I glance back to the coral, the ocean is transformed. I blink, thinking I'm seeing things. But it's really here—the black water engulfed in a pink and orange blizzard flowing toward the surface. Within seconds, countless billions of magenta and tangerine gamete bundles have been birthed from their polyps and are floating upward on the buoyancy of the fatty eggs.
Those of us underwater at this moment are also transformed by the bundles, which collect under the folds and angles of our wet suits, buoyancy control devices, dive masks, and regulators. Colorful gametes tangle in our hair. If we could breathe water, we'd be breathing them. The rate of the blizzard amplifies until the light from the strobes blinds us. Clicking to a lower setting, I see eruptions of milky white sperm pulsing rhythmically from nearby sponges and sea cucumbers, polychaete worms and giant clams.
On this night, as many as half of the reef-building corals—perhaps 150 species—plus a host of other invertebrates inhabiting the 1,200 miles of Australia's Great Barrier Reef, are spawning. It's an ancient ritual, maybe as old as the 200 million-plus years that scleractinian corals have been alive. These corals emerged in the darkest days after the Permian-Triassic extinction, when the planet was impoverished nearly beyond repair by massive global climate change, and when almost all life died in hot, dry, and iceless conditions. Since then they have survived two subsequent mass extinctions, including the one that killed the non-avian dinosaurs.
Already, manta rays with six-foot wingspans are sailing into view, mouths open, filtering the eggs from the water. At the outer range of our strobes, reef sharks are circling, preparing to gorge on those that have come to feast. From the cold and perpetually dark reaches of the deep known as the mesopelagic, fish that glow in the dark, and live a mile or more below the tidal, lunar, and seasonal influences that trigger the mass spawning, are rising toward it now, preparing to devour the bonanza they have perceived in ways we can't.
No modern human knew of the mass spawning of corals on the Great Barrier Reef before 1982, when marine biologists accidentally happened upon it. Since then, other spawnings adhering to their own unique schedules have been discovered on many reef systems. Somehow, spineless, brainless, eyeless, earless, immotile marine animals that meet all our criteria for zero intelligence manage to synchronize their activities to ensure survival. Otherwise, all their gametes—a year's investment in energy—would launch off into open water without ever finding suitable partners. While an individual animal might survive such behavior for the term of its natural life, the species could not.
12 ASTEROIDS AND EVOLVING INTO WISDOM
IN 2004, JOHN SCHELLNHUBER, distinguished science adviser at the Tyndall Centre for Climate Change Research in the United Kingdom, identified 12 global-warming tipping points, any one of which, if triggered, will likely initiate sudden, catastrophic changes across the planet. Odds are you've never heard of most of these tipping points, even though your entire genetic legacy—your children, your grandchildren, and beyond—may survive or not depending on their status.
Why is this? Is it likely that 12 asteroids on known collision courses with earth would garner such meager attention? Remarkably, we appear to be doing what even the simplest of corals does not: haphazardly tossing our metaphorical spawn into a ruthless current and hoping for a fertile future. We do this when we refuse to address global environmental issues with urgency; when we resist partnering for solutions; and when we continue with accelerating momentum, and with what amounts to malice aforethought, to behave in ways that threaten our future.
A 2005 study by Anthony Leiserowitz, published in Risk Analysis, found that while most Americans are moderately concerned about global warming, the majority—68 percent—believe the greatest threats are to people far away or to nonhuman nature. Only 13 percent perceive any real risk to themselves, their families, or their communities. As Leiserowitz points out, this perception is critical, since Americans constitute only 5 percent of the global population yet produce nearly 25 percent of the global carbon dioxide emissions. As long as this dangerous and delusional misconception prevails, the chances of preventing Schellnhuber's 12 points from tipping are virtually nil.
So what will it take to trigger what we might call the 13th tipping point: the shift in human perception from personal denial to personal responsibility? Without a 13th tipping point, we can't hope to avoid global mayhem. With it, we can attempt to put into action what we profess: that we actually care about our children's and grandchildren's futures.
Science shows that we are born with powerful tools for overcoming our perilous complacency. We have the genetic smarts and the cultural smarts. We have the technological know-how. We even have the inclination. The truth is we can change with breathtaking speed, sculpting even "immutable" human nature. Forty years ago many people believed human nature required blacks and whites to live in segregation; 30 years ago human nature divided men and women into separate economies; 20 years ago human nature prevented us from defusing a global nuclear standoff. Nowadays we blame human nature for the insolvable hazards of global warming.
The 18th-century taxonomist Carolus Linnaeus named us Homo sapiens, from the Latin sapiens, meaning "prudent, wise." History shows we are not born with wisdom. We evolve into it.
CLIMATE CLIQUES AND NAYSAYERS
LEISEROWITZ'S STUDY OF risk perception found that Americans fall into "interpretive communities"—cliques, if you will, sharing similar demographics, risk perceptions, and worldviews. On one end of this spectrum are the naysayers: those who perceive climate change as a very low or nonexistent danger. Leiserowitz found naysayers to be "predominantly white, male, Republican, politically conservative, holding pro-individualism, pro-hierarchism, and anti-egalitarian worldviews, anti-environmental attitudes, distrustful of most institutions, highly religious, and to rely on radio as their main source of news." This group presented five rationales for rejecting danger: belief that global warming is natural; belief that it's media/environmentalist hype; distrust of science; flat denial; and conspiracy theories, including the belief that researchers create data to ensure job security.
We might wonder how these naysayers, who represent only 7 percent of Americans yet control much of our government, got to be the way they are. A study of urban American adults by Nancy Wells and Kristi Lekies of Cornell University sheds some light on environmental attitudes. Wells and Lekies found that children who play unsupervised in the wild before the age of 11 develop strong environmental ethics. Children exposed only to structured hierarchical play in the wild—through, for example, Boy Scouts and Girl Scouts, or by hunting or fishing alongside supervising adults—do not. To interact humbly with nature we need to be free and undomesticated in it. Otherwise, we succumb to hubris in maturity. The fact that few children enjoy free rein outdoors anymore bodes poorly for our future decision-makers.
Another study, this one from the Earth Institute at Columbia University, found an ominous silence when it comes to educating American K-12 students on the relationship between our personal behavior and our environment: that the size and inefficiency of our cars, homes, and appliances, our profligate fuels, our love of disposables, and the effects of buying more than we need actually undermine our prospects on earth. Slightly more time is spent teaching kids how the environment can affect us, overpowering humanity with floods, droughts, storms, earthquakes, climate change. But in our overall failure to illuminate the interdependence between Homo sapiens and earth we withhold critical knowledge from those whose lives depend upon it most.
Many of today's kids recreate in the unwilderness of the shopping mall, where messages of prudence and wisdom are overwhelmed by the consumerism that feeds global warming. We send our kids to the mall because we fear the dangers outside. We could hardly be more wrong in our assessment of risk.
THE ALARMISTS AND THE ACROBAT
ON THE OTHER END of Leiserowitz's spectrum of perception regarding global warming is an interpretive community he calls the alarmists, generally comprised of individuals holding pro-egalitarian, anti-individualist, and antihierarchical worldviews, who are supportive of government policies to mitigate climate change, even so far as raising taxes. Members of this group are likely to have taken personal action to reduce greenhouse gas emissions. Collectively, alarmists compose 11 percent of Americans, with the remaining interpretive communities falling considerably closer to the alarmists than the naysayers in the spectrum—suggesting the gap might be cinched by sustained public education on the neighborhood dangers likely to arise in a changed global climate.
Hurricane Katrina provided a wake-up call for how bad it can get in the neighborhood, and may prove a tipping point itself. Yet long before its rampage, American kids were coloring pictures of the first icon of global environmentalism, the Amazon. Its billion-plus acres of rivers and rainforest—its trees collecting and containing excessive greenhouse gases from the atmosphere—were our primer for the revolutionary notion that the earth's neighborhoods are interdependent.
Today Amazonia is the most famous of Schellnhuber's tipping points. For a generation, kids have grown up learning that the Amazon is at risk from massive deforestation. But even if clearcutting were to halt, climate models forecast that a warming globe will convert the wet Amazonia forest into savanna within this century, and the loss of trees will render the region a net CO2 producer, further accelerating global warming.
Amazonia's tipping point might be fast approaching. The year 2005 saw the driest conditions in 40 years, with wildfires raging unabated, and 2006 is looking worse, raising alarms that environmental synergism is already in play as changes become self-sustaining and reinforce one another. Dan Nepstad of the Woods Hole Research Center in Massachusetts questions whether the warming of the Atlantic (the tropical North Atlantic rose 1.7 degrees Fahrenheit above the 1901-1970 average in 2005) is affecting airflow over the Amazon, leading to drier and fierier conditions there.
Changes in the currents of the North Atlantic constitute another tipping point. As the Atlantic warms, ice caps melt, diluting the ocean and potentially shutting down its thermohaline circulation (THC), the oceanic river currently delivering the thermal equivalent of 500,000 power stations' worth of warmth to Europe. A 2005 study published in Nature found that after 50 years of monitoring, a critical component of the THC had suddenly slowed by 30 percent.
The fate of this circulation is closely linked to one of Schellnhuber's more notorious tipping points, the Greenland Ice Sheet. Encompassing 6 percent of the earth's freshwater supply, this ice, if melted, would raise sea levels by about 23 feet worldwide—not counting ice loss from the rest of the Arctic and the Antarctic. A study by NASA and the University of Kansas showed the decline of Greenland's ice unexpectedly doubled between 1996 and 2005, as glaciers surged into the sea with unpredicted speed. More worrying, the area of melt shifted 300 nautical miles north during the last four years of the study, indicating the warmth is spreading rapidly.
One tipping point affects the other in a balance as delicate as that of an acrobat's spinning plates. Greenland's increasing freshwater flow into the North Atlantic will certainly impact the THC. Warm water recirculating within the central Atlantic may further rearrange airflow over the Amazon, accelerating its dry-down and tree loss, and potentially freeing as much carbon dioxide from its enormous reservoir as the 20th century's total fossil fuel output. A sudden Amazonian release would surely melt whatever of Greenland hadn't already melted, crashing the THC and drastically cooling Europe—in the worst-case scenario, freezing it solid. Although we like to compartmentalize, nature does not. Biology and climatology are the indivisible warp and weft of earth's living fabric.
An object in the icy Kuiper belt has been found orbiting the Sun backwards, compared to most other objects in the solar system. It may help explain the origin of an enigmatic family of comets typified by Comet Halley.
The new object, called 2008 KV42, lies in the Kuiper belt, a ring of icy bodies beyond Neptune. Its orbit is inclined 103.5° to the plane of the Earth's orbit, or ecliptic. That means that as it orbits the Sun, it actually travels in the opposite direction to the planets.
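A quick way to see why an inclination above 90 degrees means a backwards orbit: the component of the orbit's angular momentum along the ecliptic pole scales with cos(i), and that quantity changes sign at 90 degrees. The Python check below is my own illustration, not part of the researchers' analysis.

    import math

    for i_deg in (5.0, 90.0, 103.5):  # 103.5 degrees is 2008 KV42's inclination
        h_z = math.cos(math.radians(i_deg))  # sign of the pole-ward angular momentum
        if abs(h_z) < 1e-9:
            sense = "polar"
        elif h_z > 0:
            sense = "prograde"
        else:
            sense = "retrograde"
        print(f"i = {i_deg:5.1f} deg -> cos(i) = {h_z:+.3f} ({sense})")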
Researchers led by Brett Gladman of the University of British Columbia first spotted the maverick object in May. Observations suggest it is about 50 kilometres across and travels on a path that takes it from the distance of Uranus to more than twice that of Neptune (or between 20 and 70 astronomical units from the Sun, with 1 AU being the Earth-Sun distance).
Its orbit appears to have been stable for hundreds of millions of years, but astronomers say it may have been born elsewhere. "It's certainly intriguing to ask where it comes from," says Brian Marsden of the Minor Planet Center in Cambridge, Massachusetts.
Gladman says it was probably born in the same place as Halley-type comets. These comets also travel on retrograde or highly tilted orbits, with periods between 20 and 200 years, but they come closer to the Sun.
It has been unclear where such comets come from. Computer models suggest they do not arise in either of the two birthplaces of other types of comets - the Kuiper belt or the much more distant Oort cloud, a shell of icy bodies lying between 20,000 and 200,000 AU from the Sun.
Gladman's team calculates that 2008 KV42 arises beyond the Kuiper belt but closer than the Oort cloud, in a region thought to lie between 2000 to 5000 AU from the Sun. Some astronomers call the zone the inner Oort cloud.
A gravitational disturbance likely kicked 2008 KV42 out of the inner Oort cloud and to its present orbit. And Gladman says it might one day be pushed out of that orbit and into one that brings it closer to the Sun, making it a possible "transition object" on its way to becoming a Halley-type comet.
Gladman's team has found more than 20 other Kuiper belt objects with steeply inclined orbits while surveying the sky well away from the ecliptic - but no others with a retrograde orbit.
Have your say
Fri Sep 05 01:15:49 BST 2008 by Toddsherman
Hey 2008 KV42, your doing it wrong!
Fri Sep 05 01:35:12 BST 2008 by Ade
LOL! Leave 2008 kv42 alone Todd, it's just minding its own business. :)
Fri Sep 05 04:21:58 BST 2008 by Dann
Perhaps Drac dated Venus for a while, and picked up some bad habits?
Fri Sep 05 05:04:34 BST 2008 by W.
Syntax??--- "it actually travels in the opposite direction as the planets." Opposite AS ? Sentence fails even elementary high school English.
Fri Sep 05 15:25:54 BST 2008 by Michael Marshall
Fair point W. We've fixed that.
Fri Sep 05 09:25:07 BST 2008 by Werthers Stoolbase
I was wondering where that had got to. 2008 KV42 is merely my missing sandwich box that I haven't seen since I departed an Intercity 110 back in 1976. I left it on the carriage after I was momentarily distracted by a billboard containing a giant image of Jimmy Saville's face.
I would like to think that my dearly departed wife's cucumber and chutney sandwiches have remained fairly fresh under the 50 km wide case of ice it has gathered although, to be honest, they weren't ever that fresh anyway, the silly dead bitch.
Editor's note: This article is the second of a three-part series by John Carey. Part 1, posted on June 28, is "Storm Warning: Extreme Weather Is a Product of Climate Change".
Extreme floods, prolonged droughts, searing heat waves, massive rainstorms and the like don't just seem like they've become the new normal in the last few years—they have become more common, according to data collected by reinsurance company Munich Re (see Part 1 of this series). But has this increase resulted from human-caused climate change or just from natural climatic variations? After all, recorded floods and droughts go back to the earliest days of mankind, before coal, oil and natural gas made the modern industrial world possible.
Until recently scientists had only been able to say that more extreme weather is "consistent" with climate change caused by greenhouse gases that humans are emitting into the atmosphere. Now, however, they can begin to say that the odds of having extreme weather have increased because of human-caused atmospheric changes—and that many individual events would not have happened in the same way without global warming. The reason: The signal of climate change is finally emerging from the "noise"—the huge amount of natural variability in weather.
Scientists compare the normal variation in weather with rolls of the dice. Adding greenhouse gases to the atmosphere loads the dice, increasing odds of such extreme weather events. It's not just that the weather dice are altered, however. As Steve Sherwood, co-director of the Climate Change Research Center at the University of New South Wales in Australia, puts it, "it is more like painting an extra spot on each face of one of the dice, so that it goes from 2 to 7 instead of 1 to 6. This increases the odds of rolling 11 or 12, but also makes it possible to roll 13."
Why? Basic physics is at work: The planet has already warmed roughly 1 degree Celsius since preindustrial times, thanks to CO2 and other greenhouse gases emitted into the atmosphere. And for every 1-degree C (1.8 degrees Fahrenheit) rise in temperature, the amount of moisture that the atmosphere can contain rises by 7 percent, explains Peter Stott, head of climate monitoring and attribution at the U.K. Met Office's Hadley Center for Climate Change. "That's quite dramatic," he says. In some places, the increase has been much larger. Data gathered by Gene Takle, professor of meteorology at Iowa State University in Ames, show a 13 percent rise in summer moisture over the past 50 years in the state capital, Des Moines.
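The roughly-7-percent figure can be checked against a standard empirical formula for saturation vapor pressure (the Bolton 1980 approximation); the article quotes no equation, so treat this Python snippet as a consistency sketch rather than the researchers' method.

    import math

    def e_sat_hpa(T_c):
        """Saturation vapor pressure (hPa) at T_c degrees Celsius (Bolton 1980)."""
        return 6.112 * math.exp(17.67 * T_c / (T_c + 243.5))

    for T in (10.0, 20.0, 30.0):
        growth = e_sat_hpa(T + 1.0) / e_sat_hpa(T) - 1.0
        print(f"at {T:.0f} C, one more degree raises moisture capacity by {growth:.1%}")

This prints close to 7 percent per degree at 10 degrees C, easing toward 6 percent at 30 degrees C, consistent with Stott's figure.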
The physics of too much rain
The increased moisture in the atmosphere inevitably means more rain. That's obvious. But not just any kind of rain, the climate models predict. Because of the large-scale energy balance of the planet, "the upshot is that overall rainfall increases only 2 to 3 percent per degree of warming, whereas extreme rainfall increases 6 to 7 percent," Stott says. The reason again comes from physics. Rain happens when the atmosphere cools enough for water vapor to condense into liquid. "However, because of the increasing amount of greenhouse gases in the troposphere, the radiative cooling is less efficient, as less radiation can escape to space," Stott explains. "Therefore the global precipitation increases less, at about 2 to 3 percent per degree of warming." But because of the extra moisture, when precipitation does occur (in both rain and snow), it's more likely to be in bigger events.
Iowa is one of many places that fits the pattern. Takle documented a three- to seven-fold increase in high rainfall events in the state, including the 500-year Mississippi River flood in 1993 and the 2008 Cedar Rapids flood, as well as the 500-year event in 2010 in Ames, which inundated the Hilton Coliseum basketball court in eight feet (2.5 meters) of water. "We can't say with confidence that the 2010 Ames flood was caused by climate change, but we can say that the dice are loaded to bring more of these events," Takle says.
Classic blueschists. The slight blue tinge results from the mineral glaucophane (an amphibole), which here forms the rather stubby needles. This rock started life as a volcanic rock of basic composition, part of the old ocean floor of Tethys. Blueschists are commonly thought to be diagnostic of former subduction zones, because they imply relatively high pressure conditions relative to the temperature (compared to normal geothermal gradients).
Small local hunting communities in Siberia are very distant from any governmental control. Hunted waterbird species, including globally and regionally threatened species, rely for their well-being on the self-regulation of remote hunting communities. Interviewed hunters showed a profound knowledge of the Baikal Teal, its population status, and the causes of its past decline. Whether this knowledge is shared by other communities in the region and beyond in Northern Siberia needs verification.
This study demonstrates the utility of carbon isotope discrimination in describing genetic adaptation to arid environments, although it is probably most useful in detecting differentiation when the strategy of the species under investigation is to increase water use efficiency, rather than drought-avoidance. The results suggest that populations on the eastern and western sides of the Andes should be treated as separate management units for the purposes of conserving the genetic resource of this species.
Resource Type: Journal Papers
In the past few years, a number of analyses have been undertaken to measure progress towards the 2010 and 2012 CBD targets. This report demonstrates how the measurement of progress is influenced by decisions on which protected areas are included (for instance, whether internationally designated sites, or sites without an assigned IUCN category, are included) and which biogeographic datasets are used (for instance, which mountain dataset is chosen), and highlights the need for standardised methods and datasets.
Resource Type: Journal Papers
Protected areas can act as a case study for REDD: lessons can be learnt from their success or otherwise in reducing deforestation and supporting local livelihoods. Further research into the most effective management and governance frameworks for achieving goals on carbon emissions, biodiversity and communities, and the extent to which protected areas reduce (or merely displace) deforestation within national boundaries, would be useful in informing REDD implementation.
Resource Type: Journal Papers
Through the Convention on Biological Diversity (CBD), the world’s governments recently adopted a target to protect at least 17% of the global land area by 2020. This paper evaluates current levels of protection for mountains at multiple scales. It shows that the CBD’s 17% target has already been almost met at a global scale: 16.9% of the world’s mountain areas outside Antarctica fall within protected areas. However, protection of mountain areas at finer scales remains uneven and is largely insufficient, with 63% (125) of countries, 57% (4) of realms, 67% (8) of biomes, 61% (437) of ecoregions and 53% (100) of Global 200 priority ecoregions falling short of the target. The CBD target also calls for protected areas to be focussed “especially [at] areas of particular importance for biodiversity”. Important Bird Areas and Alliance for Zero Extinction sites represent existing global networks of such sites. It is therefore of major concern that 39% and 45% respectively of these sites in mountain areas remain entirely unprotected. Achievement of the CBD target in mountain regions will require more focused expansion of the protected area network in addition to enhanced management of individual sites and the wider countryside in order to ensure long term conservation of montane biodiversity and the other ecosystem services it provides.
Resource Type: Journal Papers
A table is provided of 122 bird species with restricted breeding distributions and for which Nepal may hold significant populations. Habitat threats and population changes are detailed for 33 species for which Nepal may be especially important. The vital importance of Nepal's forests to Nepal's avifauna is emphasised.
Resource Type: Journal Papers
We generated biodiversity surfaces for both present-day and pre-human landscapes to map spatial patterns of change in a diverse ecological community and to calculate the combined biodiversity impacts of habitat loss and fragmentation in a way that accounts for the exact spatial pattern of deforestation. Our spatially-explicit, landscape-scale index of community change shows how the fine-scale configuration of habitat loss sums across a landscape to determine changes in biodiversity at a larger spatial scale. After accounting for naturally occurring within-forest heterogeneity, we estimate that the conversion of 43% of forest to grassland in a 1300 km2 landscape in New Zealand resulted in a 47% change to the beetle community.
Resource Type: Journal Papers
We made a complete survey of all the extant populations in Djibouti and collected samples for genetic analysis with a view to conserving the palm for the future.
Our survey revealed that there were a total of 314 adults, 20 juveniles, 134 rosettes, 210 small rosettes (more than 6 leaves) and 465 seedlings (<3 leaves) living in the Bankouale area of Djibouti. These are distributed unequally amongst three valley systems. 65% of the adults, 85% of the juveniles, 75% of the rosettes, 76% of the small rosettes, and 93% of the seedlings were found in the Bankouale valley.
Consider the different ways of representing state in Prolog. First, there's the functional way, in which the state of the entire program is passed around from predicate to predicate in the form of additional arguments. Although burdensome, this approach interacts well with backtracking: when a failure occurs, changes to the program's state are automatically rolled back.
Another way to represent state in Prolog is to use the built-in database manipulation predicates assert and retract. This imperative approach has the advantage of not requiring predicates to take additional arguments, but it unfortunately does not interact well with backtracking. This is because assert and retract are destructive operations, i.e., they survive backtracking. So when a failure occurs, the programmer must make sure that each assert is retracted and vice versa, which is not easy to get right.
If only the programmer had access to special variants of assert and retract that automatically undo their side effects when backtracked over, the imperative approach would enable some really interesting programming styles that you could never get away with in a "real" imperative language. Interestingly, Tim Menzies shows us how to implement these variants in just a few lines of standard Prolog. (I had never come across this trick before, but Tim's nonchalance makes me think that it's well known in the Prolog community.)
Here is a variant of the assert predicate that automatically retracts itself upon backtracking:
assert2(X) :- assert(X).
assert2(X) :- retract(X), fail.

The first clause above makes assert2 behave like a regular assert if the query in which it is used succeeds. But if the query fails, the program will eventually backtrack to the point where assert2 was used and try the second clause, which retracts the fact asserted by the first clause.
Similarly, we might define retract2 as follows:
retract2(X) :- retract(X).
retract2(X) :- assert(X), fail.

But consider what happens if retract2's argument is not already in Prolog's database. The first clause (which uses retract) will fail, so the program will try the second clause, which will assert a fact that was never retracted in the first place! We can fix this problem by refactoring retract2 into two predicates, as shown below:

retract2(X) :- X, reallyRetract2(X).

reallyRetract2(X) :- retract(X).
reallyRetract2(X) :- assert(X), fail.

And that's that.
Location: in the constellation Leo
Distance: 75 million light-years
Mass: 210 million times the mass of the Sun
Size: diameter slightly less than the size of the orbit of Jupiter
At first glance, NGC 3608 is an unremarkable galaxy. It is an elliptical galaxy, so it looks like a faint, fuzzy football with no discernible features other than its bright core. Yet evidence suggests the galaxy has undergone a fairly recent encounter with a neighboring galaxy, NGC 3607. The encounter has stirred up the galaxy's core, which is rotating in the opposite direction from the stars around it.
The core appears to harbor a supermassive black hole. Measurements of the orbital speeds of stars reveal that the stars are being pulled by a large, dark mass at the center of NGC 3608. Early studies suggested a mass of about 100 million times the mass of the Sun, but subsequent studies have increased the mass. A study in 2011, for example, found a range of 140 million to 320 million times the mass of the Sun, with a most likely value of 210 million solar masses.
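The "orbit of Jupiter" comparison follows from the Schwarzschild radius, r_s = 2GM/c^2. The short Python sketch below uses standard physical constants; the comparison itself is my own check, not the article's calculation.

    G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
    c = 2.998e8        # speed of light, m/s
    M_SUN = 1.989e30   # solar mass, kg
    AU = 1.496e11      # astronomical unit, m

    M = 2.1e8 * M_SUN               # 210 million solar masses
    r_s = 2 * G * M / c**2          # Schwarzschild radius
    print(f"event-horizon diameter: {2 * r_s / AU:.1f} AU")
    print(f"Jupiter's orbital diameter: {2 * 7.785e11 / AU:.1f} AU")

The horizon works out to roughly 8 AU across, a bit smaller than Jupiter's roughly 10-AU-wide orbit, matching the description above.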
The galaxy is "radio quiet," however, suggesting that the black hole itself is quiet. If an accretion disk encircles the black hole it may be fairly small and thin, and it is not producing the "jets" of charged particles seen shooting from near many supermassive black holes.
For 20 years, field scientists participating in CI’s Rapid Assessment Program (RAP) have been exploring some of the world’s most abundant, mysterious and threatened tropical ecosystems; to date, they’ve discovered more than 1,300 species new to science. Entomologist and RAP Director Leeanne Alonso describes a day in the field on a recent RAP expedition in the forests of southern Suriname.
I sit on a rotting log in the quiet forest, looking for signs of movement. Nothing but branches falling and a few small birds swooping past. In the movies, filmmakers portray the rainforest as teeming with dangerous animals — snakes slithering down tree trunks, ocelots prowling after monkeys or peccaries, and scorpions at every turn, waiting to sting you. These animals are definitely out there, but they’re hard to find and even harder to see, blending in with the dense vegetation and hiding out in caves and crevices.
My RAP colleagues are out in the forest right now looking for these elusive animals, putting out traps of all kinds: small boxes with oat-peanut butter bait to tempt the mice and opossums; mist nets to catch birds and bats in flight; camera traps to take clandestine photos of large mammals and birds that pass by. Just last night a camera about 200 feet [61 meters] from our camp took a photo of a jaguar walking along our trail. I’m sitting very near to that spot right now. The animal probably moved on when it smelled humans; then again, it could be watching me right now, so I keep a lookout for yellow eyes within the sea of green.
Traps do some of the work for us, but we find most species by actively looking for them. The RAP bird team walks for miles at dawn and dusk to record and identify the birds by their songs. The herpetologists hunt for frogs and snakes along creeks at night, following the frog calls to find their hiding spots. Botanists measure out plots and count all the trees and estimate their mass. They have it easier — the trees don’t run away and are easy to find. However, they can be harder to identify, as accurate identification requires finding the flower or fruit.
I look down, where the action is — at least in my mind. The leaf litter — the layer of dead and decaying leaves, twigs and fruits that covers the soil — will become the organic soil of the forest from which the trees and shrubs obtain their nutrients. The hot, humid climate causes the leaf litter to decay quickly so that carbon, nitrogen and other nutrients are transferred to the soil where they are quickly taken up again by the trees. This process of soil formation provides the foundation for all plant life, including the crops that sustain us.
Within this leaf litter is a whole world of fascinating, tiny creatures that we know very little about. Springtails, mites, amphipods, beetles, pseudoscorpions, and of course my favorite creatures: ants. I get out the tools of my trade: a trowel, a sifter and tray, forceps, vials filled with ethanol, and leather work gloves. I look around me for a spot with some good litter — it should be fairly thick, not too wet, with a fair number of leaves and twigs that look like they’ve been there for a while. Litter at the base of a tree or around exposed tree roots usually houses a good variety of ants.
I find a spot and scoop up some leaf litter, twigs and soil into my sifting tray. I use gloves and trowel to avoid being stung by stinging ants, scorpions, wasps and other creatures, as well as to avoid being jabbed by a spine from a palm or another plant. I watch for movement among the sifted litter in the tray — ants are restless and always on the move. I spot a large black Pachycondyla running across the tray, dashing to make its escape. I look closely for the tiny ants — little specks walking slowly over the soil particles. To me, these are the real gems — the species that perhaps no scientist has seen before.
As you might guess from my profession, I usually have a good relationship with ants. However, just last night I had a bad experience with my little friends. I had put up my tent while it was still light and then went to help prepare dinner. When I returned to my tent in the dark and got in for the night, I heard a crinkling sound in the far corner of my tent.
I grabbed my headlamp and shone it in the corner. A small swarm of ants had chewed holes in the floor of my tent and were beginning to stream inside. These were a species of Neivamyrmex, a nocturnal army ant that emerges from underground during the night to forage. I had unfortunately placed my tent right on top of their hole. I was fortunate that I had returned to my tent when I had; if I had been a bit later they would have been all over my tent — in my sleeping bag, in my clothes — and it would have been almost impossible to get all of them out. These ants sting something fierce; if they had reached me in my sleep, it would have been a true nightmare.
When I return to the camp, I show the other RAP scientists the ants I've found. They in turn show me beautiful creatures they have found: frogs, katydids, fishes, plants and bats. Some people may think we are just cataloguing the diversity of life on Earth before it's gone so that we'll have something to remember — that it's too late for most of these species. I hope that's not the case; I can't just sit by and watch that happen. So I head back to the forest to look for more ants, taking a local forest ranger and a university student with me. They also play an essential role in protecting these forests, and we need all the help we can get.
Leeanne Alonso is the director of CI’s Rapid Assessment Program (RAP). To learn more about 20 years of RAP achievements, check out the new book “Still Counting,” or read other posts in our series commemorating RAP’s 20th anniversary. | <urn:uuid:0e823ebf-67db-4685-97bc-c62ade836ded> | 4.25 | 1,352 | Personal Blog | Science & Tech. | 61.283638 |
Firehose-Like Jet Observed In Action
The Chandra images in this montage show the erratic variability of a jet of high energy particles that is associated with the Vela pulsar, a rotating neutron star. These images are part of a series of 13 images made over a period of two and a half years that has been used to make a time-lapse movie of the motion of the jet.
Much like an untended firehose, the jet bends and whips about spectacularly at half the speed of light. Bright blobs move in the jet at similar speeds.
The jet is half a light year (3 trillion miles) in length and is shooting out ahead of the moving neutron star. The extremely high-energy electrons or positrons that compose the jet were created and accelerated by the combined action of the fast rotation of the neutron star and its intense magnetic field. These particles produce X-rays as they spiral outward around the magnetic field of the jet.
Over its entire length, the width of the jet (about 200 billion miles) remains approximately constant. This suggests that the jet is confined by magnetic fields generated by the charged particles flowing along the axis of the jet. Laboratory studies of beams of particles confined in this manner have shown that they can change rapidly due to an effect called the "firehose instability". This is the first time such behavior has been observed in astrophysical jets.
To picture how the firehose instability works, imagine a firehose lying on the ground. When the water is turned on, different parts of the hose will kink and move rapidly in different directions, pushed by the increased pressure at the bends in the hose. The Vela jet resembles a hose made of magnetic fields, which confines the charged particles. The bright blobs in the jet are thought to be a manifestation of the increased magnetic field and particle pressure at the kinks in the jet.
The instability could be triggered by the strong headwind created as the pulsar moves through the surrounding gas at a speed of about 200,000 miles per hour. The activity of the Vela pulsar jet could also help to understand the nature of the enormous jets coming from supermassive black holes. Those jets may also vary, but on time scales of millions of years, instead of weeks as in the Vela pulsar jet. | <urn:uuid:3d4d1b33-2206-490a-80a7-de9df928556a> | 4.0625 | 476 | Knowledge Article | Science & Tech. | 45.825051 |
Test Precisely and Concretely
Although it is important to test for desired, essential behavior rather than incidental behavior of an implementation, this should not be construed or mistaken as an excuse for vague tests. Tests need to be both accurate and precise.
Something of a tried and tested (and testing) classic, sorting routines provide an illustrative example. Implementing a sorting algorithm is not necessarily an everyday task for a programmer, but uses for sorting are sufficiently commonplace that the expectations for a sorting routine are familiar. This familiarity, however, often brings with it a false sense of knowledge.
When programmers are asked what they would test for if they were to implement a sorting routine, the answer most commonly given is that the resulting sequence of elements is sorted, i.e., the elements are in non-descending order. While this is not wrong, it is also not completely correct. When prompted for a more precise condition, many programmers add that the resulting sequence should be the same length as the original. Although correct, this is still insufficient. For example, given the sequence of values [3, 1, 4, 1, 5, 9], the sequence [3, 3, 3, 3, 3, 3] satisfies a postcondition of being sorted in non-descending order and having the same length as the original sequence. It is also precisely the result of an error taken from real production code (fortunately caught before it was released), where a simple slip of a keystroke or momentary lapse of reason led to an elaborate mechanism for populating a whole array with the first element of the passed array.
The full postcondition is that the result is sorted and that it holds a permutation of the passed values. This appropriately constrains the required behaviour. The fact that the result length is the same as the input length comes out in the wash and doesn't need restating.
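As a minimal sketch of what checking this full postcondition might look like (my illustration, not code from the essay), the permutation property can be verified by comparing multisets of values, which makes the length property hold automatically:

from collections import Counter

def is_sorted(seq):
    """True if seq is in non-descending order."""
    return all(a <= b for a, b in zip(seq, seq[1:]))

def check_sort(sort_fn, values):
    result = sort_fn(list(values))
    assert is_sorted(result), "result is not sorted: %r" % (result,)
    # Equal multisets means the result is a permutation of the input,
    # which implies equal lengths -- no need to restate that separately.
    assert Counter(result) == Counter(values), "not a permutation: %r" % (result,)

check_sort(sorted, [3, 1, 4, 1, 5, 9])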
Even stating the postcondition in the way described is not enough to give you a good test. A good test should be readable. It should be comprehensible and simple enough that you can see readily that it is correct (or not). Unless you already have code lying around for checking that a sequence is sorted and that one sequence contains a permutation of values in another, it is quite likely that the test code will be more complex than the code under test. As Tony Hoare noted:
- There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies and the other is to make it so complicated that there are no obvious deficiencies.
Using concrete examples eliminates this accidental complexity and opportunity for accident. For example, the result of sorting [3, 1, 4, 1, 5, 9] is [1, 1, 3, 4, 5, 9]. No other answer will do.
Concrete examples help to illustrate general behavior in an accessible and unambiguous way. The result of adding an item to an empty collection is not simply that it is not empty; it is that the collection now has a single item, and that the single item held is the item added. Two or more items qualify as not empty, and also as wrong. A single item of a different value is also wrong. The result of adding a row to a table is not simply that the table is one row bigger; it also entails that the row's key can be used to recover the row added. And so on. In specifying behavior, tests should not simply be accurate: they also need to be precise.
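A concrete test in this spirit might look like the following sketch (again my illustration, not the essay's code):

def test_sort_concrete_example():
    assert sorted([3, 1, 4, 1, 5, 9]) == [1, 1, 3, 4, 5, 9]

def test_add_to_empty_collection():
    collection = []
    collection.append(42)
    # Not merely "not empty": exactly one item, and it is the item added.
    assert collection == [42]

test_sort_concrete_example()
test_add_to_empty_collection()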
This work is licensed under a Creative Commons Attribution 3.0 license.
Back to 97 Things Every Programmer Should Know home page | <urn:uuid:5638af15-9089-41e3-8bf9-152e66ed676e> | 3.0625 | 743 | Knowledge Article | Software Dev. | 48.274432 |
Date: Sat, 13 Sep 2003 17:40:46 -0400
Author: "Fred Stein"
Subject: Re: Latent Heat
Another way to demonstrate latent heat to students as young as middle school is to heat a beaker of water that contains a large test tube of pure paraffin wax until the wax melts completely. Each beaker and tube will contain a thermometer. Have students record the temperatures of the water and wax on a temperature vs. time graph as the apparatus cools. Both curves will be quite close until the wax plot goes horizontal for a while and then rejoins the water cooling curve. Ask, "What happened?"
Dr. Fredrick M. Stein
Director of Education and Outreach
American Physical Society
One Physics Ellipse
College Park, MD 20740-3844
(301) 209-0865 fax
>>> email@example.com 09/12/03 15:37 PM >>>
The only one I have on my shelf at the moment is Latent Heat of
Crystallization. Sodium Acetate handwarmer that goes from liquid to solid
with heat produced.
On Fri, 12 Sep 2003, Gerald Zani wrote:
> How do people demonstrate latent heat?
> Gerald Zani e-mail: Gerald_Zani@brown.edu
> Manager of Demonstrations phone: (401) 863-3964
> Department of Physics FAX: (401) 863-2024
> Brown University Providence, RI 02912-1843 USA
> URL http://www.physics.brown.edu/users/staff/zani/index.html
> URL http://www.physics.brown.edu/Studies/Demo/
> Do a little more of that work which you have confessed to be good,
> Which you feel that society and your most Just Judge rightly demand of you.
> Cultivate the tree which you have found to bear fruit in your soil.
> If you have any experiments you would like to try, try them.
> Now's your chance.
> Henry David Thoreau, Journal entry, 1850.
| <urn:uuid:87c775bf-348b-4606-8ed4-06d6bbc72c98> | 3.1875 | 473 | Comment Section | Science & Tech. | 76.939921 |
The pipes module defines a class to abstract the concept of a pipeline — a sequence of converters from one file to another.
Because the module uses /bin/sh command lines, a POSIX or compatible shell for os.system() and os.popen() is required.
The pipes module defines the following class:

class pipes.Template
An abstraction of a pipeline. Example:
>>> import pipes
>>> t = pipes.Template()
>>> t.append('tr a-z A-Z', '--')
>>> f = t.open('/tmp/1', 'w')
>>> f.write('hello world')
>>> f.close()
>>> open('/tmp/1').read()
'HELLO WORLD'
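The same Template can also be used as a read filter, or to copy one file to another through the pipeline. The open() mode 'r' and the copy() method are real parts of the Template API, but the session below is my own illustration rather than part of the original documentation:

>>> t2 = pipes.Template()
>>> t2.append('tr a-z A-Z', '--')
>>> f = t2.open('/tmp/1', 'r')   # read /tmp/1 through the pipeline
>>> f.read()
'HELLO WORLD'
>>> f.close()
>>> sts = t2.copy('/tmp/1', '/tmp/2')   # file-to-file through the pipeline
>>> open('/tmp/2').read()
'HELLO WORLD'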
Template objects have the following methods:
Append a new action at the end. The cmd variable must be a valid bourne shell command. The kind variable consists of two letters.
The first letter can be either of '-' (which means the command reads its standard input), 'f' (which means the command reads a given file on the command line) or '.' (which means the command reads no input, and hence must be first).
Similarly, the second letter can be either of '-' (which means the command writes to standard output), 'f' (which means the command writes to a given file on the command line) or '.' (which means the command writes no output, and hence must be last). | <urn:uuid:3c2da9cc-accf-4508-bd7d-e0b686012cb7> | 3.328125 | 286 | Documentation | Software Dev. | 73.624894 |
Appendix A. A model for locating regenerated trees.
The pattern of trees was retained over time by first fitting a statistical model to the empirical tree pattern, and then locating new trees using the fitted model. More specifically, we first analyzed the empirical tree pattern using the K-function for point patterns (Diggle 1983, Bailey and Gatrell 1995, Venables and Ripley 1999). The estimated K-function is defined as

Kest(r) = (A / n²) Σi Σj≠i Ih(dij) / wij,

where r is the radius of a circle with the centre at a randomly chosen point i, n is the observed number of points, A is the study area, Ih(dij) is an indicator function which is 1 if dij ≤ r and 0 otherwise, and wij is the proportion of the circumference of this circle which lies within A, i.e., an edge correction.
For a random pattern with density n/A, the expected number of neighbors within a distance r from an arbitrary point of the pattern is πr²n/A. The benchmark of complete randomness is the Poisson process, for which Kest(r) = πr². For an aggregated pattern, the points have more neighbors than expected under the null hypothesis, hence Kest(r) > πr²; conversely, for a regular pattern the points have fewer neighbors, and Kest(r) < πr².
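As an illustration only (not the authors' code), the estimator can be written in a few lines of Python; for brevity this sketch omits the edge correction, i.e., it sets wij = 1:

import numpy as np

def k_estimate(points, r, area):
    """Naive Ripley's K estimate at radius r, without edge correction."""
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    within = (d <= r) & ~np.eye(n, dtype=bool)  # count pairs, excluding i == j
    return area * within.sum() / (n * n)

rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 100.0, size=(200, 2))        # CSR on a 100 m x 100 m plot
print(k_estimate(pts, r=10.0, area=100.0 * 100.0))  # near pi * 10**2 ~ 314 for CSR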
We constructed 95% confidence envelopes for random patterning of the trees by distributing n trees at random within A 99 times, estimating Kest(r) for each randomization, and then plotting the highest (upper envelope) and lowest (lower envelope) Kest(r) values (Diggle 1983). This plot (not shown) revealed that trees at Valkrör were spatially aggregated up to a scale of 25 meters.
We next fitted a model to the empirical tree pattern (coordinate system in meters), assuming an underlying Poisson cluster process, according to Diggle (1983). The fitted model was used for locating the new trees ("offspring"), which thereby showed the same scale of aggregation as their "parent" trees. We fitted the Poisson cluster model by minimizing

D(θ) = ∫₀^r0 [Kest(r)^c − K(r, θ)^c]² dr,

where K(r, θ) is the theoretical K-function with the parameter vector θ, and r0 and c are "tuning constants" chosen to provide desirable estimation properties. We chose r0 = 45, corresponding to the observed scale of tree aggregation, and c = 0.25, suggested by Diggle (1983) for aggregated patterns. Diggle (1983) showed that for a specific Poisson cluster process, with a Poisson number of offspring per parent, and where the probability density function for the distance between offspring and parent is a radially symmetric normal distribution,
K(r, θ) = πr² + (1 − exp(−r²/(4σ²))) / ρ.
The parameter σ² is the variance of the normal distribution, and ρ is the density of the random parent Poisson process. We found that D(θ) was minimized with σ² = 41.9 and ρ = 0.0019. For each simulation time step, we thus located two new trees using this model for the Poisson cluster process, K(r, θ). An offspring was not allowed to be located closer than 0.1 meter to an existing tree. This procedure retained the scale of tree aggregation during the simulated 100 years.
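For concreteness, locating new trees from such a fitted process can be sketched as follows. This is an illustration using the fitted values above, not the authors' code; the study-area dimensions and the mean number of offspring per parent (mu) are made-up placeholders:

import numpy as np

def simulate_thomas(rho, sigma2, mu, width, height, rng):
    """Poisson(rho * area) parents placed uniformly; each parent receives a
    Poisson(mu) number of offspring displaced by N(0, sigma2) in x and y."""
    n_parents = rng.poisson(rho * width * height)
    parents = rng.uniform([0.0, 0.0], [width, height], size=(n_parents, 2))
    offspring = []
    for p in parents:
        n_off = rng.poisson(mu)
        offspring.append(p + rng.normal(0.0, np.sqrt(sigma2), size=(n_off, 2)))
    return np.vstack(offspring) if offspring else np.empty((0, 2))

rng = np.random.default_rng(1)
trees = simulate_thomas(rho=0.0019, sigma2=41.9, mu=2.0,
                        width=200.0, height=200.0, rng=rng)
print(len(trees), "simulated tree locations")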
Bailey, T. C., and A. C. Gatrell. 1995. Interactive spatial data analysis. Longman Group Limited, Essex, UK.
Diggle, P. J. 1983. Statistical analysis of spatial point patterns. Academic Press, London, UK.
Venables, W. N., and B. D. Ripley. 1999. Modern Applied Statistics with S-PLUS. Third edition. Springer-Verlag, New York, New York, USA. | <urn:uuid:96bee133-0b63-48b9-a27f-51c28996e02f> | 2.703125 | 783 | Academic Writing | Science & Tech. | 63.269383 |
Glories - My Pictures
A glory is a circular disc or ring of light seen at the antisolar point when looking downwards either from an aircraft or a mountain towards cloud or fog. The shadow of the observer appears at the centre of the disc and there are usually coloured fringes resembling those seen in a rainbow. It appears that droplets of liquid water are required and that ice crystals (which make up many types of cloud) do not produce glories.
These pictures were taken over the North Atlantic on 21st April 2001. The glory appears in the lower left of the pictures. The glory is much more visible by eye from the aircraft than in the photographs. I have tried to enhance the visibility by using software, and what you see here is the best I can get.
The most interesting feature is that the angular diameter of the glory is not fixed (all pictures were taken with the same lens and are shown to the same scale).
Click on any of the pictures below for a larger version.
The contrail from the aircraft I was in can be seen ending in the centre of the glory.
The straight horizontal lines are the shadows of contrails of other aircraft flying on the same route.
Read my review and discussion of the physics of the phenomenon (with links).
More pictures from around the Web.
Last updated: 20 March 2002; © Lawrence Mayes, 2001 | <urn:uuid:0279258f-84e6-43ef-aa68-d531505c93ae> | 2.75 | 283 | Personal Blog | Science & Tech. | 58.485681 |
A "Chembow" Observed in Prince Edward County, Ontario, July 3 2010
" Have you noticed anything unusual in the sky lately? If you're over 30 or so (as of this writing), you may remember what a clear blue sky looks like. Otherwise, you may not even realize that these long, persistent trails coming out of the backs of planes are not normal contrails. Prior to the mid '90s, airplane contrails did not have the tendency to persist as long as they do now. Observe how the trails form wispy, persistent clouds, gradually dispersing into haze. You may see rainbows (as seen here) and pink and green dichroism (2-colors). Diffraction and dichroism are strong indicators for the presence of metallic aerosol. "
THIS IS NOT A REGULAR RAINBOW -- the key difference is that it is reflected, not refracted. A normal rainbow is seen regardless of clouds, but usually after rain when there are droplets of moisture in the sky; however a chembow is seen superimposed on a cloud (containing metallic aerosols), even when it hasn't rained.
| <urn:uuid:164d7c5d-f2e4-4bb6-9528-775f802b7d51> | 2.703125 | 280 | Knowledge Article | Science & Tech. | 50.188444 |
Eric Louis Mann, Ph.D.
University of Connecticut, 2005
"As students progress through the educational system their interest in mathematics diminishes. Yet there is an ever increasing need within the workforce for individuals who possess talent in mathematics. The literature suggests that mathematical talent is most often measured by speed and accuracy of a student's computation with little emphasis on problem solving and pattern finding and no opportunities for students to work on rich mathematical tasks that require divergent thinking. Such an approach limits the use of creativity in the classroom and reduces mathematics to a set of skills to master and rules to memorize. Doing so causes many children's natural curiosity and enthusiasm for mathematics to disappear as they get older. Keeping students interested and engaged in mathematics by recognizing and valuing their mathematical creativity may reverse this tendency.
In "Rising Above The Gathering Storm: Energizing and Employing America for a Brighter Economic Future" (Committee on Science, Engineering, and Public Policy), 2005) members of the National Academy of Science developed a list of recommended actions needed to ensure that the United States can continue to compete globally. The top recommendation was to increase America's talent pool by vastly improving K-12 mathematics and science education (pp. 91-110).
One of the strengths of the United States economic growth has been the creativity of its citizens. Inherent in the recommendations above is the need for growth and innovation, both of which are fueled by creativity. This study investigates several means of identifying mathematical creativity as a first step in identifying and nurturing this talent.
As students progress through the educational system their interest in mathematics diminishes. The U.S. Department of Education (2003) reports that 81% of fourth graders have a positive or strongly positive attitude towards mathematics but four years later only 35% of eighth graders share that attitude. At the post-secondary level less than 1% of degree-seeking baccalaureate students choose mathematics as their major field of study (National Center for Educational Statistics, 2005). Current emphases on convergent thinking and rapid response have failed to reverse the trend. Limiting the use of creativity in the classroom reduces mathematics to a set of skills to master and rules to memorize. Doing so causes many children's natural curiosity and enthusiasm for mathematics to disappear as they get older, creating a tremendous problem for mathematics educators who are trying to instill these very qualities (Meissner, 2000). Keeping students interested and engaged in mathematics by recognizing and valuing their mathematical creativity may reverse this tendency.
It is hoped that by finding simpler ways to identify creative potential, an increase in the recognized talent pool of future mathematicians can be achieved at a younger age. It is also hoped that identifying mathematical creativity in students will encourage teachers to nurture this aspect of mathematical talent; an aspect that is perhaps the most important one for mathematicians who will make significant contributions to the field.
This chapter provided a rationale for this study and identified the research questions that guided the investigation. The negative trend in individual interest in mathematics was noted, as was the failure of traditional classroom emphasis on convergent thought and computational speed in reversing this trend. An understanding of mathematics is needed in almost every occupation and the need to find and develop talent is in the best interest of both the individual and society as a whole. The rationale for expanding the effort to find mathematical talent beyond those who are academically gifted was discussed." | <urn:uuid:855fedc0-5bb2-41ea-bbf8-dc1c91326015> | 3.28125 | 688 | Academic Writing | Science & Tech. | 21.860254 |
I've come across an example that I can't understand. Can somebody explain what is happening here?
So, I get that if I can get the base to be something that is equal to 1 (mod 3), then I know that the overall answer is also 1 mod 3. What I don't understand is the simplification that is used.
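For instance, I can see why the principle itself works: since 4 ≡ 1 (mod 3), the multiplication rule (ab) mod m = ((a mod m)(b mod m)) mod m gives 4^k ≡ 1^k ≡ 1 (mod 3) for any k ≥ 0.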
The example just shows one more step before jumping to the final answer. Here is what it shows:
I can see that they got the base to be 4, which is 1 (mod 3), but I just can't follow the math for how they were able to make this leap. Can anybody tell me the exponentiation rule used? | <urn:uuid:db349e74-9f79-4daf-ad30-a2719d52ecfc> | 2.796875 | 139 | Q&A Forum | Science & Tech. | 75.225 |
The reasons you should care about this are laid out better in this video, which explains a lot more of the details:
At 3:26 they nicely explain the reduced voltage required (which means it's more efficient): 1.6 volts instead of 2.3 volts, compared with the 1.2 volts you could potentially recover in a fuel cell.
At 3:55 they explain that there are no precious metals involved, which means it could be scaled quickly once they tweak it.
At 5:15 they explain why they started experimenting with cobalt compounds to avoid precious metals
At 5:45 they explain the nifty catalyst they set out to make and investigate
At 6:05 - it didn't work... serendipity occurs instead
At 6:55 - they don't know how it works... but they are willing to learn
So... here we have a new way of efficiently converting excess electrical power to hydrogen and oxygen. This is a critical part of the cycle required to store energy for later use.
I consider this the modern physics equivalent of inventing the first granary. It's a new place to securely store a harvest from the sun. | <urn:uuid:7afedafb-5e27-4eea-a653-fe6101b44119> | 2.734375 | 238 | Personal Blog | Science & Tech. | 73.897273 |
Forams live today in all the world's oceans, but they first appeared in the rock record approximately 525 million years ago. Between that long ago time and today, the size, shape and construction of foram tests changed significantly. Tests might be single-chambered or multi-chambered, right coiling or left coiling, larger or smaller, or they may be constructed from scavenged grains or self-produced CaCO3, depending on the conditions and time period in which the foram lived.
Traditionally, forams have been grouped into species on the basis of their characteristic test shape, and their distribution has been studied. Scientists discovered that some species existed only during narrow geologic time bands, while other species survived much longer. Some were restricted in their geographic distribution, while others were global in extent; and some species were found exclusively in shallow water environments, while others preferred deeper water.
Over time, scientists have painstakingly pieced together the geologic time period, geographic distribution and preferred environment for many individual foram species, and from this work some major events in foram evolution are evident.
Go to Major Events in Foram Evolution to find out more!
Revised on: January 15, 2005 | <urn:uuid:115e44a8-cbf5-45de-86d0-c0ac19333d5c> | 3.84375 | 251 | Knowledge Article | Science & Tech. | 23.306683 |
Like many islands, Hawaii is incredibly dependent on petroleum. It meets 85% of its energy needs by burning petroleum, according to State figures. The distinguishing factor between Hawaii and the continental United States is that Hawaii relies on petroleum for electricity generation. Whereas the US as a whole meets only 3% of its electricity needs this way, Hawaii is at nearly 80%.
Oil-dependence is a common characteristic of islands. Fiji, for example, meets over 50% of its electricity through petroleum and Puerto Rico, over 90%. The use of petroleum for electricity is a common trait amongst islands because it is easily transportable and provides a firm source of power.
A model state
Although Hawaii is the most oil-dependent State in the US, the US Department of Energy recently shifted its view regarding Hawaii’s energy portfolio from being an “outlier” to a “model” — for the US and beyond. With the highest electricity rates in the country and immense potential for a suite of renewable energies including solar, wind, geothermal, and ocean energy, Hawaii was chosen as an ideal location to test and embark on transformative clean energy projects.
In January 2008, the Hawaii Clean Energy Initiative (HCEI) was launched by the US Department of Energy and the State of Hawaii. HCEI aims to achieve a 70% clean energy economy by 2030. The goals for electricity are to achieve 30% energy efficiency and 40% renewable energy by the year 2030, and both of these goals are now codified within State law.
The State and the primary electric utility, the Hawaiian Electric Company, have entered into a voluntary agreement to expand a wide range of renewable energy types. Large wind farms, which represent the dominant indigenous energy source within the agreement, are intended for the islands of Lanai and Molokai and planning is underway to bring wind power to the most populated island, Oahu.
The idea of building an undersea cable, bringing power from less populated islands with greater renewable energy potential to the urban core of Honolulu on the island of Oahu (which holds roughly three-quarters of the resident population), was first debated in the 1970s. Although a 30-megawatt geothermal plant has been operating for more than two decades, started as a pilot project, geothermal was never brought to its initially intended scale of nearly 500 megawatts, in part because it was seen as a violation of Hawaiian cultural and religious beliefs.
Current plans to lay down cable have raised similar issues regarding the cost and environmental impacts, as well as the impact of wind turbines on local communities. This discussion has also renewed the idea of large-scale geothermal and the concomitant cultural dialogue.
Bioenergy master plan
The State recently completed a Bioenergy Master Plan. Like conventional fossil fuels, bioenergy is appealing because it is a firm source of power and is easily transportable between islands. Although Hawaii has a 150-year history of growing sugarcane and was once one of the world’s largest pineapple exporters, the plantations began a rapid decline in the second half of the 20th century. The last sugarcane plantation on the island of Kauai is scheduled to close in August 2010.
A local biofuel industry is seen as potentially renewing currently idle agricultural lands. However, indigenous sources of biofuels, such as ethanol from sugarcane, are estimated to be relatively quite expensive and are unlikely to develop without substantial government support. A 10% ethanol blending mandate for motor fuels was implemented in 2006, for example, and has to date been met with imported sources. As such, early discussion of and support for biofuels have waned considerably, due to a reluctance to switch from one imported fuel source to another.
The island of Oahu is likely to have more potential for solar photovoltaic, given the extensive rooftop real estate. To nurture the development of solar projects, the State gives a 35% tax credit in addition to the 30% federal subsidy for the cost and installation of units. There is, however, currently a restrictive net-metering cap (that limits the ability to sell power back to the electric utility) of 1%, which may impede large projects.
Nonetheless, there are new rules being considered by the State’s Public Utilities Commission to set a fixed price for renewable energy purchased by the electric utility (i.e., feed-in tariff) and to decouple the sale of electricity from utility profit margins. These efforts are crucial to changing the incentive structure of the utility away from making money on the sale of kilowatt-hours to a fixed fee for transmission and distribution.
Already investors are feeling an attraction, with two companies recently signing a letter of intent committing to the process of structuring a solar project financing fund that could be as large as $50 million.
“With Hawaii’s feed-in tariff expected to go into effect this year, we believe that a dedicated pool of project capital would provide real competitive advantage in Hawaii’s rapidly evolving PV market,” said the lead company’s chairman/CEO.
Hawaii’s potential for ocean energy is also strong and a number of projects are underway regarding wave energy and ocean thermal energy conversion (OTEC). To-date however, none are yet commercially viable.
Setting standards for transportation fuels and technologies presents a more amorphous challenge. There is a general move towards the electrification of ground transportation. With limited driving distances, electric vehicles may have particular promise. The 2009 state legislative session passed a law requiring large parking lots to incorporate charging stations, and the first opened a few weeks ago in Honolulu.
This is a small step toward establishing the necessary infrastructure for electric vehicles. Several companies, including Better Place and Phoenix Motorcars, have expressed interest in launching operations in Hawaii.
In Hawaii, like anywhere else, one of the largest challenges will be how to manage intermittent energy sources like solar and wind power within the existing electric grid. This issue is particularly pertinent to areas, like islands, with relatively small and isolated electricity grid systems.
Hawaii Natural Energy Institute (HNEI) is modeling electricity grids on several islands to better understand how they need to be upgraded to accept increasing levels of intermittent renewable energies, as well as resiliency measures in the move toward electric vehicles. HNEI is also running several demonstration projects with technologies to help integrate renewable energies — such as “smart grid”, electric vehicles, and battery storage.
Climate change and clean energy
While the State’s efforts have been largely focused on increasing Hawaii’s energy security, greenhouse gas emissions reductions should likewise be considered a co-benefit of HCEI. Fearing Hawaii’s increasing reliance on oil resulting from the loss of biomass from the declining agricultural sector, a concerted effort was made to diversify Hawaii’s electricity portfolio in the early 1990s — and a coal plant was opened on the island of Oahu.
With hindsight and considerably more knowledge of the deleterious impacts of burning fossil fuels, it is imperative to constantly assess greenhouse gas emissions impacts within energy policy.
Like many of its Pacific island neighbors, Hawaii will be affected by the impacts of climate change; particularly in rising sea levels, ocean acidification and native forests threatened by invasive species. HCEI establishes lofty goals for the State — with high hopes of breaking free of the typecast on islands and oil dependence. | <urn:uuid:3b649dd8-c21a-4225-9409-a64cfc078ef6> | 3.515625 | 1,517 | Knowledge Article | Science & Tech. | 23.889305 |
Reproducibility forms one of the cornerstones of physics; independent scientists need to corroborate a finding before it's widely accepted in the scientific community.
But sometimes the window of observation only lasts for several hours twice every hundred years or so. That makes reproducibility fairly difficult.
Earlier this summer, Venus passed in front of — or transited — the sun for the last time this century. While the astronomical event amazed viewers across the world, a group of physicists were re-creating an observation from over 250 years ago: the discovery of Venus' atmosphere. At the same time, they've stoked the fire in a debate over who first made this discovery.
The entire Venus transit of 2012 in one image. Image courtesy of NASA.
In 1761, Mikhail Lomonosov, a Russian astronomer, watched as Venus completed one of its extremely rare transits of the sun. While hundreds of astronomers across the globe were anticipating this historic event, many did not expect what Lomonosov would soon observe.
As Venus transited the sun, Lomonosov noticed that the sun's light seemed to form a bulge around Venus. This observation suggested that something must have been changing the direction of the sun's light. Who was the culprit? A Venusian atmosphere, according to Lomonosov.
Lomonosov argued that molecules in a hypothetical atmosphere surrounding Venus could change the light waves' direction through refraction. When light passes through a new medium (e.g. water or an atmosphere), it will bend in a new direction. This phenomenon causes sticks to appear bent when submerged in water and leads to the distorted reflections seen in water droplets.
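To put rough numbers on that bending (my own illustrative figures, not values from the article), Snell's law n1·sin(θ1) = n2·sin(θ2) gives the change in direction when light crosses between media:

import math

def refracted_angle_deg(n1, n2, incident_deg):
    """Angle (in degrees) of a ray after crossing from index n1 into n2."""
    s = n1 * math.sin(math.radians(incident_deg)) / n2
    return math.degrees(math.asin(s))

# Air (n ~ 1.00) into water (n ~ 1.33): a 45-degree ray bends to ~32 degrees.
print(round(refracted_angle_deg(1.00, 1.33, 45.0), 1))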
Lomonosov made this discovery with some bare-bones equipment: a 4.5-foot-long telescope made in the 18th century. But not everyone thinks Lomonosov should get the credit for this discovery. During the 2004 transit of Venus, scientists who used slightly more sophisticated instruments than those available to Lomonosov had trouble reproducing his results. Consequently, Fermilab physicist Vladimir Shiltsev and his colleagues decided to re-create this experiment in June with the closest replicas of Lomonosov's telescope that they could find.
One of the telescopes used for the June, 2012 re-creation of Lomonosov's original observation. Image courtesy of Koukarine et al. via their arXiv article.
Shiltsev and his team pored over Lomonosov's original texts to create two new telescopes that would closely match what he had in 1761. Lomonosov's original telescope was lost during a bombardment in WWII.
After finding the necessary supplies and creating their rudimentary telescopes, two team members positioned themselves in Illinois and California, respectively, for the 2012 transit. Although clouds obscured their view for part of the transit, they both saw the same refraction that Lomonosov observed over 250 years ago. Apparently, Lomonosov didn't make it up! Or at least he had the right tools to make his claim.
The characteristic light "whisker" indicating that light has been refracted as observed from the Illinois telescope during the June, 2012 transit. Image courtesy of Koukarine et al. via their arXiv article.
Scientists, of course, had already verified the existence of Venus' atmosphere through other means. But now we know that Lomonosov likely could have observed Venus' atmosphere through refraction back in 1761, just as he reported.
Now there's more evidence to support Lomonosov as the original discoverer of a Venusian atmosphere. I doubt we've heard the end of this story, though. Stay tuned for the next transit in 2117 for new developments, or maybe tell your grandchildren to watch for you.
The full arXiv preprint article can be found here
For more background on Lomonosov and his original discovery, take a look at this conference talk (PDF) by Mikhail Marov from 2004.
If you want to keep up with Hyperspace, AKA Brian, you can follow him on Twitter | <urn:uuid:dcefd9a2-7006-47d9-9d9f-5c5edaf17b40> | 3.875 | 853 | Nonfiction Writing | Science & Tech. | 39.138979 |
Originally published in:
Encyclopedia of Planetary Sciences, edited by J. H. Shirley and R. W. Fainbridge,
905-907, Chapman and Hall, New York, 1997.
Venus is sometimes characterized as Earth's 'twin' because of its close proximity in solar system location (~ 0.72 AU heliocentric distance compared to 1.0 AU) and its similar size (~ 6053 km radius compared to - 6371 km radius), but other close resemblances are few. Besides the more obvious atmospheric composition and pressure differences, and the related extreme temperatures at the surface described elsewhere in this volume, events in the history and evolution of the interior of Venus have left that planet with practically no intrinsic magnetic field. The consequences for the space environment and atmosphere are numerous, ranging from the presence of an 'induced' magnetotail in the wake, to an ionosphere and upper atmosphere that are constantly being scavenged by the passing solar wind.
Venus, like the other terrestrial planets, was presumably accreted from iron and silicate-bearing planetesimals some 4.5 billion years ago. These new planets are all likely to have differentiated in a similar manner, so that they have the common feature of a molten iron-rich core of about half the planet's radius, covered by a crust of the remaining (mainly silicate) material. Only indirect information is available about these cores, but seismic measurements on the surface of Earth tell us that a solid inner core, with a size depending on the size of the planet and on its thermal history, may also be a common feature. Since no seismic measurements have been obtained on the surface of Venus, we cannot be as certain about its interior; however, the large value of the mean density of ~ 5.25 g cm⁻³ derived from satellite orbits suggests that Venus contains an Earth-like core. Essentially, all other deductions about the interior of Venus are based on models of Earth-like planets with internal temperatures and pressures adjusted for the slightly different radius and possible compositional differences. One of these models has led to the hypothesis that the core of Venus may be completely solid or 'frozen' today, while others propose that core solidification has not yet commenced or has stopped at some time in the past (e.g. Stevenson, Spohn and Schubert, 1983). In all cases, evidence cited in support of these hypotheses always includes the known weakness of the intrinsic magnetic field.
When Mariner 2 flew by Venus in 1962 at a distance of 6.6 planetary radii (Rv), it did not detect any evidence of an Earth-size magnetosphere. Mariner 5, passing within 1.4 Rv in 1967, detected the signatures in the solar wind of deflection around an 'obstacle' at Venus. The small inferred size of that obstacle placed an upper limit on the magnetic dipole moment of Venus of ~ 10⁻³ that of Earth. Later Venera 4 made magnetic measurements down to 200 km altitude, still detecting no planetary field but providing data that reduced this estimate by about an order of magnitude. In a 1974 flyby, Mariner 10 merely confirmed the existence of a small, nearly planet-size obstacle. Venera 9 and 10 were put into orbit around Venus in 1975, but did not approach Venus closer than ~ 1500 km. Nevertheless, the data that these spacecraft obtained in the wake of the planet provided the first evidence that an Earth-like magnetotail was absent, and that instead a structure related to the interplanetary magnetic field occupied that region of space. The most definitive measurements of the magnetic moment of Venus were obtained during the Pioneer Venus Orbiter mission in its first years of operation (1979-1981). Repeated low-altitude (~ 150 km) passes by that spacecraft over the antisolar region, coupled with dayside observations to the same altitude, proved the insignificance of a field of internal origin in near-Venus space. The observed fields for the most part could be explained as solar wind interaction-induced features, to be described below. The new upper limit on the dipole moment obtained from the Pioneer Venus Orbiter wake measurements placed the Venus intrinsic magnetic field at ~ 10⁻⁵ times that of Earth.
Of course, the weakness of the present measurement does not imply that Venus has always been bereft of an intrinsic field. Theories of the dynamos operating in the liquid cores of the newly accreted terrestrial planets suggest that there was a magnetic moment of Venus of the same order as Earth's for about the first billion years of Venus' life. During that time, thermal convection from the heat left over from accretion drove the dynamo. However, after that energy source diminished, there was apparently no source to replace it. While solid core formation in Earth's interior maintains its dynamo to this day by virtue of the related 'stirring' of the molten core around it, Venus appears to either lack the necessary internal ingredients (chemical or physical) for solid core formation, or to have ceased such processes at an earlier time if they resulted in complete core solidification or arrested core solidification. It is important to note that, contrary to popular belief, dynamo theory does not credit the smallness of the magnetic moment to the slow rotation of Venus (a Venus day of ~ 243 Earth days is almost equal to the length of its year of ~ 224 days, and its sense of rotation is retrograde). It is also notable that Venus would not have maintained any remanent crustal magnetic fields from its proposed early period of dynamo activity because the temperatures in the crust are expected to be above the Curie point (below which such fields could persist in rocky materials).
The 'magnetosphere' of Venus that was detected by spacecraft is now known to be an example of an 'induced' magnetosphere. In an induced magnetosphere, the solar wind interacts directly with the planetary ionosphere. The fields and plasmas that are observed are generally of solar wind or ionospheric origin. There are no belts of trapped radiation such as Earth's Van Allen belts, and there is no 'magnetotail' composed of fields of planetary origin. The basic features of an induced magnetosphere are shown in Figure 1. The ionospheric obstacle to the solar wind is defined by a surface called the ionopause. At the ionopause, pressure balance exists between the solar wind dynamic pressure on the outside and the thermal pressure of the ionospheric ions and electrons on the inside. Outside of the ionopause the solar wind interaction has all of the features characteristic to a planetary magnetosphere. A bow shock forms upstream of the obstacle. An interesting feature of the Venus bow shock is that it appears to have a location that varies with the solar cycle. The 'nose', or subsolar position, of the bow shock at sunspot maximum is near 1.5 Rv, but the terminator location moves in to - 2.1 Rv. Inside of the bow shock the solar wind plasma is deflected around the obstacle in a magnetosheath region, which is sometimes referred to as an ionosheath since the obstacle is an ionosphere. The embedded interplanetary magnetic field is compressed and draped around the obstacle in the magnetosheath region in the usual way.
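To make this pressure balance concrete, here is a rough back-of-envelope calculation. The upstream solar wind values and ionospheric temperatures below are typical figures I have assumed for illustration; they are not measurements quoted in this article:

m_p = 1.67e-27  # proton mass, kg
k_B = 1.38e-23  # Boltzmann constant, J/K

# Assumed upstream solar wind near Venus: n = 10 cm^-3, v = 400 km/s.
n_sw = 10.0e6   # number density, m^-3
v_sw = 400.0e3  # bulk speed, m/s
p_dyn = n_sw * m_p * v_sw**2  # dynamic pressure, Pa (~2.7e-9)
print("solar wind dynamic pressure ~ %.2e Pa" % p_dyn)

# Ionospheric thermal pressure p = n * k_B * (Te + Ti); solve for the plasma
# density needed to stand off the solar wind, assuming Te = Ti = 2000 K.
Te = Ti = 2000.0
n_iono = p_dyn / (k_B * (Te + Ti))
print("required ionospheric density ~ %.1e m^-3" % n_iono)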
Fig. 1. Illustration of the major features of the solar wind interaction with the ionosphere of Venus. The solid dots represent the neutral atmosphere, while the circled plus symbols represent ionized atmosphere. The ionized atmosphere above the ionopause is removed by the solar wind.
Inside of the ionopause the plasma changes from solar wind-dominated to ionospheric in origin. During the primary Pioneer Venus mission, which occurred at a time of high solar activity when the planetary ionospheres are densest, this boundary between the solar wind and ionosphere proper occurred at an average altitude of about 300 km, flaring to ~ 800 km average altitude near the terminator. The boundary, which moves up and down in response to changing external (solar wind) pressure, was typically thin at a few tens of kilometers, although it increases in thickness as its altitude decreases. The observations indicated that the magnetic fields of interplanetary origin in the magnetosheath generally remain confined above the ionosphere proper, although small-scale (dimensions of a few kilometers) field increases (to ~ 100 nT) of still-unknown origin were observed. These field intrusions appeared to have twisted internal structures and so were dubbed 'flux ropes'. The exception to this behavior occurred on the rare occasions (about 15% of the time) when the solar wind pressure was high enough to drive the pressure-balance boundary to altitudes of ~ 250 km or less (the ionospheric thermal pressure increases as altitude decreases down to about 190 km altitude). At these times it appeared as if the interplanetary magnetic field in the magnetosheath penetrated the ionosphere to at least the spacecraft minimum altitude of ~ 150 km. Its magnitude in the ionosphere can reach -150 nT.
The nightside solar wind interaction features at altitudes below several hundred kilometers also show a dichotomy with solar wind pressure. When the conditions for the 'unmagnetized' dayside ionospheres prevail, the nightside ionosphere is supplied by planetary plasma flowing across the terminator from the dayside. The observed nightside ionospheric magnetic fields are fluctuating and weak (~ 10 nT) at these times, and do not appear to be twisted like the dayside flux ropes. Near midnight, however, steady, almost vertical magnetic fields of a few tens of nanotesla were observed in conjunction with ionospheric density depletions called 'holes' by their discoverers (Brace et al., 1982). These features are up to a significant fraction (~ 1/4) of the planetary radius in horizontal scale, and they appear to have a field 'polarity' (e.g. sunward or antisunward) that depends on the interplanetary magnetic field orientation and the associated draped field in the magnetosheath. Their origin and nature remain controversial. The 'holes' disappear when the solar wind pressure is high. The nightside counterpart of the large-scale magnetosheath field penetration into the dayside ionosphere appears to be a large-scale horizontal field of somewhat smaller magnitude (tens of nanotesla) throughout the nightside. Its relationship to the dayside field is still poorly understood. It should be noted that the high solar wind pressure scenario is expected to be common during solar minimum, when the ionospheric pressure is always weaker than at solar maximum.
The high-altitude wake of Venus is permeated with structured magnetic fields that generally point sunward or antisunward and often exhibit a 'double-lobed' structure like an intrinsic planetary magnetotail. However, examination of the polarities of the fields in the lobes shows them to be coupled closely to the interplanetary field and resulting draped magnetosheath field orientations. As shown in Figure 1, this 'induced' magnetotail can be pictured as an extension of the magnetosheath, with the draped interplanetary fields sinking into the ionospheric obstacles' wake. The draping of the field in the induced magnetotail is observed to be enhanced beyond that in the surrounding magnetosheath. This enhancement has been attributed to the 'mass loading' of those interplanetary flux tubes that pass closest to the ionopause and form the magnetotail by virtue of heavy ionospheric ion production on those passing flux tubes. In this sense, Venus can be likened to a comet, which has an induced magnetotail of similar origin.
The authors are supported for work on this subject by NASA grant NAGW 2-501 through the Pioneer Venus project.
Brace, L. H., Theis, R. F., Mayr, H. G. et al. (1982) Holes in the nightside ionosphere of Venus. J. Geophys. Res., 87, 199.
Hunten, D. M., Colin, L., Donahue, T. M. and Moroz, V. I. (eds) (1993) Venus. Tucson: University of Arizona Press.
Luhmann, J. G. (1986) The solar wind interaction with Venus. Space Sci. Rev., 44, 241.
Russell, C. T. (ed.) (1991) Venus Aeronomy. Space Sci. Rev., 55, London: Kluwer Academic Publishers.
Russell, C. T. (1987) Planetary magnetism in Geomagnetism, Vol. 2 (ed, J. A. Jacobs) London: Academic Press, pp. 457-523.
Stevenson, D. J., Spohn, T. and Schubert, G. (1983) Magnetism and thermal evolution of the terrestrial planets. Icarus, 54, 466. | <urn:uuid:fe07e68a-b447-49ec-87a0-050c983c99a0> | 4.1875 | 2,674 | Academic Writing | Science & Tech. | 40.174597 |
In astronomy, many stars are referred to simply by catalogue numbers. There are a great many different star catalogues which have been produced for different purposes over the years, and this article covers only some of the more frequently quoted ones. Most of the recent catalogues are available in electronic format and can be freely downloaded from NASA's Astronomical Data Center and other places (see links at end).
Although no longer in serious use, mention should be made of Ptolemy's star catalogue published in the 2nd century as part of his Almagest, which lists 1,022 stars visible from Alexandria. It was the standard star catalogue in the Western and Arab worlds for over a thousand years. Ptolemy's catalogue was based almost entirely on an earlier one by Hipparchus from the 2nd century B.C. (Newton 1977; Rawlins 1982). An even earlier star catalogue was that of Timocharis of Alexandria, which was written about 300 B.C. and later used by Hipparchus.
Two systems introduced in historical catalogues remain in use to the present day. The first system comes from Bayer's Uranometria and is for bright stars. These are given a Greek letter followed by the genitive case of the constellation in which they are located; examples are Alpha Centauri or Gamma Cygni. See Bayer designation for more information. The major problem with Bayer's naming system was the number of letters in the Greek alphabet. It was easy to run out of letters before running out of stars needing names, particularly for large constellations such as Argo Navis.
The second system comes from John Flamsteed's Historia coelestis Britannica . It kept the genitive-of-the-constellation rule for the back end of his catalog names, but used numbers instead of the Greek alphabet for the front half. Examples include 61 Cygni and 47 Ursae Majoris; see Flamsteed designation for more information.
- Newton, Robert R. (1977). The Crime of Claudius Ptolemy. Baltimore: Johns Hopkins University Press.
- Rawlins, Dennis (1982). An investigation of the ancient star catalog. Pub. Astron. Soc. Pacific 94, 359.
Bayer and Flamsteed covered only a few thousand stars between them. In theory, full-sky catalogues try to list every star in the sky. There are, however, literally hundreds of millions, even billions of stars resolvable by telescopes, so this is an impossible goal; these kinds of catalogs generally try to include every star brighter than a given magnitude.
HD / HDE
The Henry Draper Catalogue was published in the period 1918–1924. It covers the whole sky down to about ninth or tenth magnitude, and is notable as the first large-scale attempt to catalogue spectral types of stars. The catalogue was compiled by Annie Jump Cannon and her co-workers at Harvard College Observatory under the supervision of Edward Pickering, and was named in honour of Henry Draper, whose widow donated the money required to finance it.
HD numbers are widely used today for stars which have no Bayer or Flamsteed designation. Stars numbered 1–225300 are from the original catalogue and are numbered in order of right ascension for the 1900.0 epoch. Stars in the range 225301–359083 are from the 1949 extension of the catalogue. The notation HDE can be used for stars in this extension, but they are usually denoted HD as the numbering ensures that there can be no ambiguity.
The Smithsonian Astrophysical Observatory catalogue is a photographic atlas of the sky, complete to about ninth magnitude, as a result of which there is considerable overlap with the Henry Draper catalogue. The epoch for the position measurements in the latest edition is J2000.0. The SAO catalogue contains one more major piece of information than Draper, the proper motion of the stars, so it is often used when that fact is of importance. The cross-references with the Draper and Durchmusterung catalogue numbers in the latest edition are also useful.
Names in the SAO catalogue start with the letters SAO, followed by a number. The numbers are assigned following 18 ten-degree bands in the sky, with stars sorted by right ascension within each band.
The Bonner Durchmusterung and follow-ups were the most complete of the pre-photographic star catalogues.
As it covered only the northern sky and some of the south (being compiled from the Bonn observatory), this was then supplemented by the Südliche Durchmusterung (SD), which covers stars between declinations -1 and -23 degrees (1886, 120,000 stars). It was further supplemented by the Cordoba Durchmusterung (580,000 stars), which began to be compiled at Córdoba, Argentina in 1892 under the initiative of John M. Thome and covers declinations -22 to -90. Lastly, the Cape Photographic Durchmusterung (450,000 stars, 1896), compiled at the Cape, South Africa, covers declinations -18 to -90.
Astronomers preferentially use the HD designation of a star, as that catalogue also gives spectroscopic information, but as the Durchmusterungs cover more stars they occasionally fall back on the older designations when dealing with one not found in Draper. Unfortunately, a lot of catalogues cross-reference the Durchmusterungs without specifying which one is used in the zones of overlap, so some confusion often remains.
Star names from these catalogues include the initials of which of the four catalogues they are from (though the Southern follows the example of the Bonner and uses BD; CPD is often shortened to CP), followed by the angle of declination of the star (rounded down, and thus ranging from +00 to +89 and -00 to -89), followed by an arbitrary number as there are always thousands of stars at each angle. Examples include BD+50°1725 or CD-45°13677.
The Catalogue astrographique (Astrographic Catalogue) was part of the international Carte du ciel programme designed to photograph and measure the positions of all stars brighter than magnitude 11.0. In total, over 4.6 million stars were observed, many as faint as 13th magnitude. This project was started in the late 1800s. The observations were made between 1891 and 1950. To observe the entire celestial sphere without burdening only a handful of institutions, the sky was divided among 20 observatories, by declination zones. Each observatory exposed and measured the plates of its zone, using a standardized telescope so each plate photographed had a similar scale of approximately 60 arcsecs/mm. The U.S. Naval Observatory took over custody of the catalogue, now in its 2000.2 edition.
USNO-B1.0 is an all-sky catalog created by researchers at the U.S. Naval Observatory that presents positions, proper motions, magnitudes in various optical passbands, and star/galaxy estimators for 1,042,618,261 objects derived from 3,643,201,733 separate observations. The data were obtained from scans of 7,435 Schmidt plates taken for the various sky surveys during the last 50 years. USNO-B1.0 is believed to provide all-sky coverage, completeness down to V = 21, 0.2 arcsecond astrometric accuracy at J2000.0, 0.3 magnitude photometric accuracy in up to five colors, and 85% accuracy for distinguishing stars from non-stellar objects.
- New general catalogue of double stars within 120 deg of the North Pole (1932, R. G. Aitken).
This lists 17,180 double stars north of declination -30 degrees.
BS / BSC / HR
First published in 1930 as the Yale Catalog of Bright Stars, this catalog contained information on all stars brighter than visual magnitude 6.5 in the Harvard Revised Photometry Catalogue. The list was revised in 1983 with the publication of a supplement that listed additional stars down to magnitude 7.1. The catalog detailed each star's coordinates, proper motions, photometric data, spectral types, and other useful information.
GJ / Gliese / Gl
The Gliese (later Gliese-Jahreiss ) catalogue attempts to list all stars within 20 parsecs of Earth (later editions expanded the coverage to 25 parsecs). Numbers in the range 1.0–965.0 are from the second edition, which was
- Catalogue of Nearby Stars (1969, W. Gliese).
Apparently, the integers represent stars which were in the first edition, while the numbers with a decimal point were used to insert new stars for the second edition without destroying the desired order. This catalogue is referred to as CNS2, although this name is never used in catalogue numbers.
Numbers in the range 9001–9850 are from the supplement
- Catalogue of Stars within Twenty-five Parsecs of the Sun (1970, R. Woolley, E. A. Epps, M. J. Penston and S. B. Pocock).
Numbers in the ranges 1000–1294 and 2001–2159 are from the supplement
- Nearby Star Data Published 1969–1978 (1979, W. Gliese and H. Jahreiss).
The range 1000–1294 represents nearby stars, while 2001–2159 represents suspected nearby stars.
Numbers in the range 3001–4388 are from
- Preliminary Version of the Third Catalogue of Nearby Stars (1991, W. Gliese and H. Jahreiss).
Although this version of the catalogue was termed "preliminary", it is still the current one as of September 2001, and is referred to as CNS3. It lists a total of 3,803 stars. Most of these stars already had GJ numbers, but there were also 1,388 which were not numbered (plus the Sun, which needs no number). The need to give these 1,388 some name has resulted in them being numbered 3001–4388, and data files of this catalogue now usually include these numbers. An example of a star which is often referred to by one of these unofficial GJ numbers is GJ 3021 (see Extrasolar planet).
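Put together, the ranges above fully determine which component a GJ number came from. A small Python sketch, assuming the quoted ranges are exhaustive (the function name is mine):

def gliese_edition(number):
    """Map a GJ catalogue number onto the component listed above."""
    if 1.0 <= number <= 965.0:
        # Whole numbers were in the first edition; numbers with a
        # decimal fraction were interleaved for the second (CNS2).
        return "Catalogue of Nearby Stars (CNS2, 1969)"
    if 1000 <= number <= 1294:
        return "Nearby Star Data Published 1969-1978 (nearby stars)"
    if 2001 <= number <= 2159:
        return "Nearby Star Data Published 1969-1978 (suspected nearby stars)"
    if 3001 <= number <= 4388:
        return "Preliminary Third Catalogue (CNS3, 1991), unofficial numbers"
    if 9001 <= number <= 9850:
        return "supplement covering 9001-9850 (see above)"
    raise ValueError("GJ number outside the documented ranges: %r" % (number,))

print(gliese_edition(3021))   # GJ 3021 -> CNS3, unofficial numbers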
The General Catalogue of Trigonometric Parallaxes, first published in 1952 and later superseded by the New GCTP (now in its fourth edition), covers nearly 9,000 stars. Unlike the Gliese, it does not cut off at a given distance from the Sun; rather it attempts to catalogue all known measured parallaxes. It gives the co-ordinates for epoch 1900, the secular variation, the proper motion, the weighted average absolute parallax and its standard error, the number of parallax observations, the quality of interagreement of the different values, the visual magnitude and various cross-identifications with other catalogues. Auxiliary information, including UBV photometry, MK spectral types, data on the variability and binary nature of the stars, orbits when available, and miscellaneous information to aid in determining the reliability of the data, is also listed.
- William F. van Altena, John Truen-liang Lee and Ellen Dorrit Hoffleit, Yale University Observatory, 1995.
The Hipparcos catalogue was compiled from the data gathered by the European Space Agency's astrometric satellite Hipparcos, which was operational from 1989 to 1993. The catalogue was published in June 1997 and contains 118,218 stars. It is particularly notable for its parallax measurements, which are considerably more accurate than those produced by ground-based observations.
Proper motion catalogues
- Ross, Frank Elmore, New Proper Motion Stars, eleven successive lists, Astrophysical Journal, Vol. 36 to 48, 1925-1939
- Wolf, Max, "Katalog von 1053 stärker bewegten Fixsternen", Veröff. d. Badischen Sternwarte zu Heidelberg (Königstuhl), Bd. 7, No. 10, 1919; and numerous lists in Astron. Nachr. 209 to 236, 1919-1929
Willem Jacob Luyten later produced a series of catalogues:
L - Luyten, Proper motion stars and White dwarfs
- Luyten, W. J., Proper Motion Survey with the forty-eight inch Schmidt Telescope, University of Minnesota, 1941 (General Catalogue of the Bruce Proper-Motion Survey)
LFT - Luyten Five-Tenths catalogue
- Luyten, W. J., A Catalog of 1849 Stars with Proper Motion exceeding 0.5" annually, Lund Press, Minneapolis (Mn), 1955
LHS - Luyten Half-Second Catalogue
- Luyten, W. J., Catalogue of stars with proper motions exceeding 0"5 annually, University of Minnesota, 1979 ()
LTT - Luyten Two-Tenths catalogue
- Luyten, W. J., Catalogue of stars with proper motions exceeding 0"2 annually, Univ. of Minnesota, 1980 ()
LP - Luyten Palomar proper-motion catalogue
- Luyten, W. J., Proper Motion Survey with the 48 inch Schmidt Telescope, University of Minnesota, 1963-1981
Later, Henry Lee Giclas took over, again with a series of catalogues:
- NASA Astronomy Data Center
- Centre de Données astronomiques de Strasbourg
- Sloan Digital Sky Survey
- IAU FAQ on "Naming Stars"
- Name a Star? The Truth about Buying Your Place in Heaven
The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License. | <urn:uuid:c8568388-1b12-4293-8cda-4a62dca5d8ad> | 3.875 | 2,863 | Knowledge Article | Science & Tech. | 50.714547 |
"There is no joy more intense than that of coming upon a fact that cannot be understood in terms of currently accepted ideas."
What are the stars made of? The answer to this fundamental question of astrophysics was discovered in 1925 by Cecilia Payne and explained in her Ph.D. thesis. Payne showed how to decode the complicated spectra of starlight in order to learn the relative amounts of the chemical elements in the stars. In 1960 the distinguished astronomer Otto Struve referred to this work as “the most brilliant Ph.D. thesis ever written in astronomy.”
Cecilia Payne (1900–1979) was born in Wendover, England. After entering Cambridge University she soon knew she wanted to study a science, but was not sure which one. She then chanced to hear the astronomer Arthur Eddington give a public lecture on his recent expedition to observe the 1919 solar eclipse, an observation that proved Einstein’s Theory of General Relativity. She later recalled her exhilaration: “The result was a complete transformation of my world picture. When I returned to my room I found that I could write down the lecture word for word.” She realized that physics was for her.
Later, when the Cambridge Observatory held an open night for the public, she went and asked the staff so many questions that they fetched “The Professor.” She seized the opportunity and told Professor Eddington that she wanted to be an astronomer. He suggested a number of books for her to read, but she had already read them. Eddington then invited her to use the Observatory’s library, with access to all the latest astronomical journals. This simple gesture opened the world of astronomical research to her.
England, though, was not in Payne’s professional future. She realized early during her Cambridge years that a woman had little chance of advancing beyond a teaching role, and no chance at all of getting an advanced degree. In 1923 she left England for the United States, where she lived the rest of her life. She met Harlow Shapley, the new director of the Harvard College Observatory, who offered her a graduate fellowship.
Harvard had the world’s largest archive of stellar spectra on photographic plates. Astronomers obtain such spectra by attaching a spectroscope to a telescope. This instrument spreads starlight out into its “rainbow” of colors, spanning all the wavelengths of visible light. The wavelength increases from the violet to the red end of the spectrum, as the energy of the light decreases. A typical stellar spectrum has many narrow dark gaps where the light at particular wavelengths (or energies) is missing. These gaps are called absorption “lines,” and are due to various chemical elements in the star’s atmosphere that absorb the light coming from hotter regions below.
The study of spectra had in fact given rise to the science of astrophysics. In 1859, Gustav Kirchhoff and Robert Bunsen in Germany heated various chemical elements and observed the spectra of the light given off by the incandescent gas. They found that each element has its own characteristic set of spectral lines—its uniquely identifying "fingerprint." In 1863, William Huggins in England observed many of these same lines in the spectra of the stars. The visible universe, it turned out, is made of the same chemical elements as those found on Earth.
In principle, it seemed that one might obtain the composition of the stars by comparing their spectral lines to those of known chemical elements observed in laboratory spectra. Astronomers had identified elements like calcium and iron as responsible for some of the most prominent lines, so they naturally assumed that such heavy elements were among the major constituents of the stars. In fact, Henry Norris Russell at Princeton had concluded that if the Earth’s crust were heated to the temperature of the Sun, its spectrum would look nearly the same.
When Payne arrived at Harvard, a comprehensive study of stellar spectra had long been underway. Annie Jump Cannon had sorted the spectra of several hundred thousand stars into seven distinct classes. She had devised and ordered the classification scheme, based on differences in the spectral features. Astronomers assumed that the spectral classes represented a sequence of decreasing surface temperatures of the stars, but no one was able to demonstrate this quantitatively.
Cecilia Payne, who studied the new science of quantum physics, knew that the pattern of features in the spectrum of any atom was determined by the configuration of its electrons. She also knew that at high temperatures, one or more electrons are stripped from the atoms, which are then called ions. The Indian physicist M. N. Saha had recently shown how the temperature and pressure in the atmosphere of a star determine the extent to which various atoms are ionized.
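In modern notation Saha's relation can be written compactly (a standard textbook form, not a quotation from Payne's thesis): the balance between ionization stages i and i+1 of an element is

\frac{n_{i+1}\, n_e}{n_i} = \frac{2 g_{i+1}}{g_i} \left( \frac{2 \pi m_e k T}{h^2} \right)^{3/2} e^{-\chi_i / k T}

where n_e is the electron density, g_i and g_{i+1} are statistical weights, \chi_i is the ionization energy, and T the temperature. For a fixed element the right-hand side depends only on temperature and electron pressure, which is why the strength of a spectral line can vary enormously from star to star without the underlying abundance changing at all — the key to Payne's analysis.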
Payne began a long project to measure the absorption lines in stellar spectra, and within two years produced a thesis for her doctoral degree, the first awarded for work at Harvard College Observatory. In it, she showed that the wide variation in stellar spectra is due mainly to the different ionization states of the atoms and hence different surface temperatures of the stars, not to different amounts of the elements. She calculated the relative amounts of eighteen elements and showed that the compositions were nearly the same among the different kinds of stars. She discovered, surprisingly, that the Sun and the other stars are composed almost entirely of hydrogen and helium, the two lightest elements. All the heavier elements, like those making up the bulk of the Earth, account for less than two percent of the mass of the stars.
Most of the mass of the visible universe is hydrogen, the lightest element, and not the heavier elements that are more prominent in the spectra of the stars! This was indeed a revolutionary discovery. Shapley sent Payne’s thesis to Professor Russell at Princeton, who informed her that the result was “clearly impossible.” To protect her career, Payne inserted a statement in her thesis that the calculated abundances of hydrogen and helium were “almost certainly not real.”
She then converted her thesis into the book Stellar Atmospheres, which was well received by astronomers. Within a few years it was clear to everyone that her results were both fundamental and correct. Cecilia Payne had shown for the first time how to "read" the surface temperature of any star from its spectrum. She showed that Cannon's ordering of the stellar spectral classes was indeed a sequence of decreasing temperatures and she was able to calculate the temperatures. The so-called Hertzsprung-Russell diagram, a plot of luminosity versus spectral class of the stars, could now be properly interpreted, and it became by far the most powerful analytical tool in stellar astrophysics.
Payne also contributed widely to the physical understanding of variable stars. Much of this work was done in association with the Russian astronomer Sergei Gaposchkin, whom she married in 1934.
From the time she finished her Ph.D. through the 1930s, Payne advised students, conducted research, and lectured—all the usual duties of a professor. Yet, because she was a woman, her only title at Harvard was “technical assistant” to Professor Shapley. Despite being indisputably one of the most brilliant and creative astronomers of the twentieth century, Cecilia Payne was never elected to the elite National Academy of Sciences. But times were beginning to change. In 1956, she was finally made a full professor (the first woman so recognized at Harvard) and chair of the Astronomy Department.
Her fellow astronomers certainly came to appreciate her genius. In 1976, the American Astronomical Society awarded her the prestigious Henry Norris Russell Prize. In her acceptance lecture, she said, “The reward of the young scientist is the emotional thrill of being the first person in the history of the world to see something or to understand something.” As much as any astronomer, she had fully experienced that most important of all scientific rewards. | <urn:uuid:2e356855-8533-4ff5-b57c-ba7e5290fee3> | 3.703125 | 1,632 | Knowledge Article | Science & Tech. | 41.04949 |
This case study compares dynamic light scattering to other methods such as transmission electron microscopy and x-ray diffraction for the determination of particle size for Cadmium Selenide...
http://www.azonano.com/article.aspx?ArticleID=1098 | 20 Jan 2005
Researchers have succeeded in fabricating quantum dot sensitized solar cells (QDSSC) by the electrophoretic deposition of semiconductor such as CdSe quantum dots onto conducting electrodes coated with...
http://www.azonano.com/article.aspx?ArticleID=2940 | 19 Oct 2011
This article describes some applications of fluorescence instruments from HORIBA Scientific to nanophotonics, e.g., singlewalled carbon nanotubes (SWNTs), quantum dots, and organic light-emitting...
http://www.azonano.com/article.aspx?ArticleID=1624 | 6 Jul 2006
Can nanotechnology overcome the cost burdens of solar cells and come up with a clean, green, highly efficient renewable electricity source?
http://www.azonano.com/article.aspx?ArticleID=1815 | 19 Dec 2006
Specially prepared crystalline semiconductors that emit different colours of light when illuminated by lasers allow scientists to watch the inner workings of a living cell. The process consists of...
http://www.azonano.com/article.aspx?ArticleID=1173 | 12 Apr 2005
Spectrofluorometers and spectrophotometers can be used to measure the properties and behaviour of quantum dots. These can be in the design of devices such as optoelectronics, biosensing and...
http://www.azonano.com/article.aspx?ArticleID=1358 | 17 Aug 2005
Quantum Dots (QDs) have unique optical and electronic properties that make them suitable for breakthrough treatments such as the detection and destruction of cancer cells. This article is a...
http://www.azonano.com/article.aspx?ArticleID=1726 | 13 Sep 2006
By Cameron Chai
Lawrence Berkeley National Laboratory (Berkeley Lab) of the U.S Department of Energy (DOE) has developed artificial semiconductor nanocrystal molecules and observed them working...
http://www.azonano.com/news.aspx?newsID=22881 | 4 Jul 2011
Strem Chemicals, Inc., a manufacturer of specialty chemicals for research and development, is pleased to announce the addition of CANdot Quantum Dots to its portfolio of nanomaterials....
http://www.azonano.com/news.aspx?newsID=24367 | 28 Feb 2012
Nanotechnology today is growing very rapidly and has infinite applications in almost everything we do. The medicine we take, food we eat, chemicals we use, car we drive and much much... | <urn:uuid:9ba317df-7174-419b-82cf-a14558d67148> | 2.828125 | 597 | Content Listing | Science & Tech. | 50.035066 |
The ultimate purpose of firing is to achieve some measure of bonding of the particles (for strength) and consolidation or reduction in porosity (e.g., for impermeability to fluids). In silicate-based ceramics, bonding and consolidation are accomplished by partial vitrification. Vitrification is the formation of glass, accomplished in this case through the melting of crystalline silicate...
| <urn:uuid:17ac26ec-4668-44bd-9a80-3d16ecf356f0> | 3.359375 | 134 | Knowledge Article | Science & Tech. | 35.477036 |
The first experimental tower to be erected is to be 665 feet high, with 200-foot turbine wheels, and located near Berlin.
- Jun 1932
Seems a bit Rube Goldberg by today's standards. --Ed.
Currents in Upper Air Form Unfailing Source of Power for Windmills of Future
Wind, at the surface of the earth, is proverbially uncertain; but recent researches show that, a thousand feet or more above the ground, wind is comparatively steady and unfailing. This has given new life to the hope of finding a substantial source of natural power, even more universally available than water power; and the designs illustrated here have been prepared by a German engineer, Honnef, the erector of several huge radio towers. As shown here, the structure carrying the power plant would be higher than any other building man has yet been able to erect.... | <urn:uuid:bd4234cc-e861-476a-a6b6-07f5d86ae77e> | 3.1875 | 185 | Comment Section | Science & Tech. | 50.1825 |
Our group uses one of the two 10 Hz laser systems at CUOS. At 10 Hertz, a pulse occurs every 0.1 seconds and each pulse in our system lasts for 100 femtoseconds (1 fs = 0.000000000000001 seconds = 1 millionth of a billionth of a second).
The system is Ti:sapphire-based which means that the short pulses are created and amplified by sapphire crystal rods that are doped with titanium (Ti). So the rods are mostly sapphire, but have a small concentration (doping) of titanium throughout the rod.
Ti:sapphire rods can be excited (pumped) by green light and they will give off light (fluoresce) in the near-infrared -- a color between red and infrared.
They are used for short pulse systems because they can support a large color range (bandwidth) which is essential to produce the short pulses.
(There is an uncertainty relation in physics that says that for a laser pulse to be short in time it must be composed of lots of color and vice versa -- for a laser pulse to be monochromatic, composed of one color, it must be long in time or even continuous, not pulsed.)
The center wavelength (color) of our system is 780 nanometers (nm) with a total spread (bandwidth) of 20 nm. Only a small portion of the bandwidth is in the visible range (red), with the rest being in the infrared (IR). Therefore, most of the beam is invisible to the eye. IR cards, IR viewers, and CCD cameras are used to follow the beam from place to place. CCD (charge-coupled device) cameras are sensitive to IR light so they can be used for alignment and photographs. (Most digital cameras use CCDs.)
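That trade-off can be checked against the numbers just quoted. For a transform-limited Gaussian pulse, duration and spectral width obey Δt·Δν ≈ 0.441; a rough Python sketch (illustrative only):

C = 2.998e8            # speed of light, m/s
wavelength = 780e-9    # centre wavelength, m
bandwidth = 20e-9      # spectral width, m

# Convert the wavelength spread into a frequency spread.
dnu = C * bandwidth / wavelength**2      # ~9.9e12 Hz, i.e. about 10 THz

# Shortest Gaussian pulse this bandwidth can support.
dt_min = 0.441 / dnu                     # ~4.5e-14 s, i.e. about 45 fs

print("bandwidth: %.1f THz" % (dnu / 1e12))
print("transform limit: %.0f fs" % (dt_min / 1e-15))

So 20 nm of bandwidth at 780 nm supports pulses down to roughly 45 fs — comfortably shorter than the 100 fs this system produces, consistent with the rule that a short pulse needs a broad colour range.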
Laser Components: Our 10 Hz system consists of the following:
Laser Output: The final output of this system is a 10 Hertz train of 100-fs pulses with an energy per pulse as high as 100 mJ and a contrast as high as 10^7 (peak-to-background ratio).
Chirped-Pulse Amplification: When short pulse lasers were first developed, there was a limit to the energy per pulse that could be achieved. This limit came from the fact that a short pulse in time has a high power (power is energy divided by time) or high intensity (intensity is power divided by area). The high power or high intensity of these lasers will damage the material used to amplify it. The method to avoid this damage was first developed by our CUOS director Gerard Mourou and his graduate student Donna Strickland in 1985. The method, chirped-pulse amplification (CPA), involves stretching the pulse in time, then amplifying it, then re-compressing it to a short pulse. After stretching the pulse, the pulse has a power or intensity low enough to avoid damage. The trick is to stretch the pulse in time such that it can be re-compressed after amplification. This stretching method is to add a "chirp" to the pulse and is described below under Stretcher. A chirped pulse is a pulse whose frequency or color changes during the pulse time. Just as a bird's chirp changes in pitch during the bird's song (in time), the pulse's color changes during the pulse (in time). The compressor then removes this chirp to regain the short pulse, but now amplified.
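The damage argument is easy to put in numbers. Using the output figures quoted above (100 mJ in 100 fs) and the factor-of-10,000 stretch described under Stretcher below, a rough Python sketch:

energy = 100e-3     # J per pulse
t_short = 100e-15   # compressed pulse duration, s
stretch = 10000     # stretch factor (see Stretcher below)

p_compressed = energy / t_short              # 1e12 W: one terawatt
p_stretched = energy / (t_short * stretch)   # 1e8 W: a tenth of a gigawatt

print("peak power, compressed: %.0e W" % p_compressed)
print("peak power, stretched:  %.0e W" % p_stretched)

Amplifying at a ten-thousand-fold lower peak power is what keeps the amplifier rods and other optics below their damage threshold.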
An Argon laser operating at 5 Watts is used to pump the oscillator Ti:sapphire crystal. Four Nd:YAG lasers operating at a doubled frequency of 532 nm (green) are used to pump the four amplifier Ti:sapphire crystals. (Nd:YAG is another crystal, Yttrium Aluminum Garnet, doped with Neodymium, that is a common laser medium. It produces 1064-nm light which is infrared. If you use a doubling crystal to double the frequency or halve the wavelength, you get 532-nm light which is green.) The Nd:YAG lasers produce 10-ns pulses and are triggered to run at 10 Hz. The laser system as a whole cannot be run faster than 10 Hz because 0.1 seconds is necessary for the Ti:sapphire crystals to cool down after being pumped by the Nd:YAG lasers. The properties of Ti:sapphire change with temperature. Thus, if the Ti:sapphire crystals were still warm when the next pulse came through, the optical properties would be different and the laser system would vary from pulse to pulse. So, to produce an amplified short pulse, we use a continuous Argon laser to create it and four "long-pulse" lasers to amplify it. (Each pump laser will be described with its respective amplifier below.)
Oscillator: The oscillator that creates the ultrashort pulse is a commercial laser from Clark-MXR. It contains a Ti:sapphire rod that is pumped by an Argon laser. The optics are set up in a V-shaped design to create the laser cavity. The laser can operate in continuous wave (CW) mode which means it produces continuous near-IR light with a narrow color range (bandwidth). More importantly, the laser can be mode-locked to produce 100-fs pulses with a bandwidth of 30 nm at a repetition rate of 100 MHz (one pulse every 10-millionth of a second). Mode-locked means that the entire color range is amplified in the laser cavity, and thus, all the modes are "locked" in the laser cavity. Normally, only a small range of colors can be amplified which gives the standard CW laser. (I have a link to a detailed explanation of mode-locking which I will add here soon.)
Pre-Amplifier: For the cleanest pulses, the pre-amplifier is used. (It can be bypassed for simplicity when the cleanest pulses are not necessary.) The pre-amplifier is a multi-pass amplifier which allows 6 passes through the Ti:sapphire rod. It is pumped by a Nd:YAG laser which produces 10-ns pulses with 200 mJ of energy at 532 nm. This amplification happens before the stretcher and regenerative amplifier (regen). The goal is to reduce the number of round-trips in the regen and reduce the background level to produce cleaner pulses (pulses with a higher peak-to-background contrast ratio). To avoid the damage problems mentioned above in the Chirped-Pulse Amplification section, the pre-amplifier is limited in the maximum level it can amplify the pulse.
Stretcher: The stretcher uses a grating setup to spread out the colors (bandwidth) of the short pulse in space. In the photo to the right, you can see the initial short pulse beam hit the grating. After it reflects off the grating, its color will be spread out in space. A mirror in the stretcher reflects the spread-out beam back to the grating, below the original beam. A flashlight is shown in the grating to the right to show that the grating does indeed spread the colors. (Since the flashlight is white light, it is made up of the rainbow of visible colors.)
Once the color of the short pulse is spread out in space, the design of the stretcher allows certain colors to travel a shorter path than others. A last bounce off the grating puts the colors back together in space, but now the colors at one end of the bandwidth arrive later than the colors at the other end. So now the short pulse has been stretched in time to a "long" pulse. With our design, the original 100-fs pulse is stretched in time by a factor of 10,000 to a final duration of 1 ns. Since the different colors are spread across the pulse in time, the pulse is said to have a "chirp". This is the basis for chirped-pulse amplification (CPA), as mentioned above. Whether the pulse has a positive chirp or a negative chirp depends on whether the shorter or the longer wavelengths arrive first.
The stretcher, due to the way gratings work, can only transmit at best half of the energy that enters. So a 2-nJ short pulse coming from the oscillator leaves the stretcher as a 1-nJ long pulse.
Regenerative Amplifier: The regenerative amplifier (regen) is the first amplification stage after the stretcher and can amplify the pulse by almost a factor of one million. Its Ti:sapphire rod (shown above in the first photo on this page) is pumped by 5 mJ split off from a higher-energy Nd:YAG laser which produces a 10-ns pulse at 532 nm. The regen is actually a laser cavity and, as such, can produce near-IR laser pulses on its own -- so-called "free-running" laser pulses (pulses as long as the Nd:YAG laser: 10 ns). However, if a near-IR pulse is injected into the regen at the right time, it can amplify itself by stimulating emission of its own radiation. (On the other hand, the free-running laser will stimulate emission of noise, creating so-called ASE: amplified spontaneous emission.) With each round trip in the regen cavity, the near-IR pulse is increasingly amplified. The amplification is very fast for the first 10-15 round trips in the cavity. After this, the amount of amplification begins to saturate. After saturation, the pulse energy actually decreases each round trip. However, the pulse energy is constant from pulse to pulse after saturation, whereas it can vary by as much as 50% before saturation. Thus, the best time to switch out the amplified pulse is one round trip after saturation. At this point, the pulse energy is both large in amplitude and stable from pulse to pulse. The final result is that a 1-nJ pulse from the stretcher can be amplified to a 0.5-mJ pulse.
2-Pass Amplifier: The 2-pass amplifier is the simplest of the multi-pass amplifiers. It consists of a Ti:sapphire rod pumped by green (532 nm) laser light, with the short pulse being amplified by 2 passes through the rod. The same Nd:YAG laser that pumps the regenerative amplifier is used to supply a 120-mJ, 10-ns pulse to the 2-pass amplifier. The near-IR pulse extracts enough energy from the crystal to amplify itself by a factor of 40 -- from 0.5 to 20 mJ.
4-Pass Amplifier: The 4-pass amplifier is very similar to the 2-pass amplifier. However, due to the larger energies involved, its Ti:sapphire crystal is pumped by two more powerful Nd:YAG lasers, one from each side of the crystal. Each Nd:YAG laser produces 10-ns, 600-mJ light at 532 nm. With four passes through the excited crystal, the near-IR pulse amplifies itself by a factor of 10 -- from 20 to 200 mJ.
Compressor: As the final step in the chirped-pulse amplification scheme, the amplified (chirped) pulse must be compressed to remove the chirp and regain the short pulse in time. The compressor consists of a pair of gratings that spread out the color range (bandwidth) of the amplified pulse. To undo the effect of the stretcher, the opposite end of the bandwidth (as the one in the stretcher) travels a shorter path than the other end of the bandwidth. All the colors in the bandwidth of the amplified pulse are once again traveling together to give a short, unchirped pulse. As in the stretcher, only half the energy can be transmitted through the compressor (due to the way gratings work). The entire compressor is placed in a vacuum chamber because a high-energy, short pulse will become distorted in air (at worst, it will spark, causing most of the energy to be lost in the spark). The final output from this laser system leaves the compressor chamber through a vacuum pipe and enters a "turning box" which uses a mirror to direct the laser beam to whatever experiment is desired. From the time the laser beam enters the compressor chamber, the laser beam is kept inside vacuum pipes to avoid distortions to the beam. In addition, any material through which the beam passes will distort it, so all optics must be reflective. For example, to focus the laser beam, a curved mirror must be used instead of a lens.
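Chaining the per-stage figures above gives the whole energy budget. A Python sketch using the numbers quoted in the preceding sections (the two factor-of-two grating losses are stated above):

pulse = 2e-9    # J: 100-fs pulse from the oscillator

pulse *= 0.5    # stretcher: gratings pass about half the energy -> 1 nJ
pulse *= 5e5    # regenerative amplifier -> ~0.5 mJ
pulse *= 40     # 2-pass amplifier -> ~20 mJ
pulse *= 10     # 4-pass amplifier -> ~200 mJ
pulse *= 0.5    # compressor: another factor-of-two grating loss

print("final pulse energy: %.0f mJ" % (pulse * 1e3))   # ~100 mJ

The result matches the 100 mJ quoted under Laser Output — an overall gain of about 5x10^7 from oscillator to experiment.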
Single-Shot Auto-Correlator: This laser system produces electromagnetic pulses shorter than anything that can be produced electrically (light is an electromagnetic wave). As such, it is not possible to measure the pulse width (in time) by electrical means. Instead, the laser pulse is used to measure itself through a technique called an auto-correlation. An autocorrelation is a mathematical operation where a function is split into two pieces which are delayed with respect to each other, multiplied, and then integrated (summed) over time. This operation is implemented in the following manner (see the photo to the right): a) the pulse is split in two pieces by a beam-splitter; b) the pulses are delayed with respect to each other by using a translation stage to create a longer path for one of the pulses; c) the pulses are multiplied by overlapping them in a doubling crystal; and d) the multiplied pulses are integrated by looking at the total light on a detector.
The doubling crystal produces frequency-doubled light which means for near-IR, 780-nm light, the crystal produces blue, 390-nm light. (Since frequency and wavelength are inversely related, double the frequency means half the wavelength.) For two pulses entering the crystal, the intensity of the blue light produced depends on the multiplication of the pulses in time and space. The detector looks only at the blue light and integrates (sums) it for 0.1 seconds (the time between laser pulses). Since the autocorrelation of a function of time is a function of delay time (between the pulses), the detected light must be measured for a range of delay times. To shorten the measurement time and simplify the setup, a single-shot auto-correlator is used. By using geometry, the two pulses are overlapped in the doubling crystal such that the delay time is mapped to the position on the detector. Thus, with a properly calibrated time-to-space mapping on the detector, the autocorrelation can be measured with a single pulse (single shot).
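Numerically, the optics implement an intensity autocorrelation. A Python sketch (illustrative; for a Gaussian pulse the autocorrelation is wider than the pulse by √2, the "deconvolution factor" used when quoting a pulse width):

import numpy as np

fwhm = 100e-15                                   # assumed pulse width, s
t = np.linspace(-500e-15, 500e-15, 4001)
sigma = fwhm / (2 * np.sqrt(2 * np.log(2)))
intensity = np.exp(-t**2 / (2 * sigma**2))       # Gaussian intensity profile

# Overlap the pulse with a delayed copy and integrate: exactly what
# the doubling crystal plus slow detector do for each delay.
ac = np.correlate(intensity, intensity, mode="same")
ac /= ac.max()

dt = t[1] - t[0]
fwhm_ac = np.count_nonzero(ac >= 0.5) * dt
print("autocorrelation FWHM / pulse FWHM = %.3f" % (fwhm_ac / fwhm))  # ~1.414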
3rd-Order Auto-Correlator: A 3rd-order auto-correlator uses the tripled-frequency to measure pulse width, as compared to the doubled-frequency used by the single-shot auto-correlator (a 2nd-order auto-correlator). It is used to measure asymmetries in the pulse since a 2nd-order auto-correlator will always give a symmetric measurement in time. So, you can find out what is happening before and after your pulse. In many experiments and applications, what happens before the pulse is much more important than what happens afterwards. This is because a messy pre-pulse structure can modify things before the main pulse arrives, thereby changing the effect you're looking for. A messy post-pulse structure arrives after the main pulse arrives and is often too late to affect things. A 3rd-order autocorrelation will show you the asymmetries so you can clean up whatever is most important to you.
In addition, a 3rd-order autocorrelation can give a better measurement of the contrast of the laser pulse. The intensity contrast ratio (ICR) of a laser pulse is the ratio between the peak intensity and the background intensity. The background can be composed of multiple structures. So, the range (in time) over which the ICR is measured is important to note.
Copyright © Center for Ultrafast Optical Science, University of Michigan | <urn:uuid:dbdd1403-2977-47d0-93cb-471e76879012> | 3.71875 | 3,356 | Knowledge Article | Science & Tech. | 53.48416 |
Animal fact sheets
Think you know a lot about Australian wildlife? See if you can answer the questions below, then check out the fact sheets to see if you were right.
Australian brush turkey
QUESTION: How do brush turkeys keep their eggs warm until they hatch?
QUESTION: Why do magpies sometimes swoop at people during spring? What should you do if it happens to you?
QUESTION: What of the following does a bandicoot NOT eat - worms, spiders, grubs, tadpoles, fungus, or roots?
QUESTION: What might you find in a satin bowerbird's bower - a pink hairclip, a blue clothes peg, or a silver bottletop?
QUESTION: What's the best way to help out brush-tailed possums that live near your house?
QUESTION: How long have dingos lived in Australia - 3500 years, 35,000 years, or three million years?
QUESTION: How can an echidna eat termites, if it doesn't have any teeth?
QUESTION: How does an emu defend itself when attacked?
QUESTION: Frog populations are declining around the world - what's killing them?
QUESTION: What distance can a yellow-bellied glider cover in a single leap - 30 metres, 85 metres or 140 metres?
Kangaroos & wallabies
QUESTION: Where would you find a red kangaroo - in the Minnamurra Rainforest, on the Western Plains, or up in the Snowy Mountains?
QUESTION: How big is a newborn koala - as big as your fingernail, as big as your finger, or as big as your hand?
QUESTION: How does a kookaburra 'soften up' a snake before eating it?
QUESTION: Where's the only known mainland colony of little penguins in NSW - Coffs Harbour, Nelson Bay or Sydney Harbour?
Lord Howe Island woodhen
QUESTION: Why can't Lord Howe Island woodhens fly?
QUESTION: Can a lyrebird sound like a chainsaw?
QUESTION: Who looks after a malleefowl chick when it hatches?
QUESTION: A lorikeet's tongue has a brush-like tip. What is this used for?
QUESTION: Is a platypus poisonous?
Purple copper butterfly
QUESTION: The larvae of this butterfly are protected and 'shepherded around' by ants. Why do the ants do this?
QUESTION: Why are large numbers of dead shearwaters sometimes found washed up on the beach?
QUESTION: Do snakes have ears?
QUESTION: What's longer - your arm, or the wing of a wedge-tailed eagle?
QUESTION: Which whale blows up a 'V' shaped plume of spray - the humpback whale, or the southern right whale?
QUESTION: Who's heavier - a fully-grown wombat, or you?
| <urn:uuid:ff365952-2cd2-4889-90b1-ced2852a5bc5> | 3.21875 | 637 | Content Listing | Science & Tech. | 69.401913 |
HTML versions of the experiments are added as they are assigned in class. Many have links to other pages that either provide background information or specific information related to techniques involved in the experiment itself, such as how to use an electronic balance correctly.
Exp. 1: The Quality of Laboratory Measurements
Exp. 2: Chemical Properties of Pure Substances
Exp. 3: Is It Really 3% Hydrogen Peroxide?
Exp. 4: Gravimetric Analysis of Silver in an Alloy
Exp. 5: The Identification of an Unknown Solution
Exp. 6: What is the Oxidation State of Nitrogen?
Exp. 7: Reduction of Permanganate
Exp. 8: Winning A Metal From Its Ore
Exp. 9: Determination of Molar Mass of a Solid From Freezing Point Depression
Exp 10: Determining The Activity Series of Metals
Exp. 11: Some Reactions of Cations
Exp 12: Heat Effects & Calorimetry
Exp 13: Calorimetry and Hess's Law
Exp 14: The Spectroscope and Line Spectra
Exp 15: Hydrogen Bonding, Solubility, and Polarity - An Approach to Bond Classification
Exp. 16: Molecular Models
Exp 17: Homogeneous Equilibrium
Exp 18: Determination of an Equilibrium Constant
Exp 19: Titration of Phosphoric Acid
Exp 20: Determining the Rate of a Chemical Reaction Using the Iodine Clock Reaction
Exp 21: What Color is My Tee-Shirt? | <urn:uuid:21b031dd-3d48-4d84-87ce-975285b291e0> | 3.515625 | 315 | Structured Data | Science & Tech. | 45.511535 |
Mutually exclusive events vs Independent events
In regards to probability, are Mutually exclusive events the same as Independent events?
If not, how are they different?
Events [tex]A[/tex] and [tex]B[/tex] are mutually exclusive if [tex]A \cap B = \emptyset,[/tex] which means they cannot both occur. Those events are independent if [tex]P(A \cap B) = P(A)P(B),[/tex] which means one occurring does not affect the other occurring.
If [tex]P(A) > 0[/tex] and [tex]P(B) > 0[/tex] then those events cannot be both mutually exclusive and independent. This is because mutually exclusive implies [tex]P(A \cap B) = 0[/tex] while independence implies [tex]P(A \cap B) = P(A)P(B) > 0.[/tex]
mutually exclusive events have no outcomes in common.
ex: probability of rolling doubles or a sum of 6 with two dice?
these events are not mutually exclusive, cause you can roll a sum of 6 with two 3's, which are also doubles. you solve this by adding the probability of rolling doubles (6/36) to the probability of rolling a sum of 6 (5/36) and then taking away the outcome in common (1/36). answer is 10/36, about .28
ex: probability of rolling doubles or an odd number?
these events are mutually exclusive because there are no outcomes in common (doubles always give an even sum). you solve this by adding the probability of rolling doubles (6/36) to the probability of rolling an odd sum (18/36). answer is 24/36, about .67
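(Both figures are easy to verify with a quick simulation — a Python sketch, not from the original thread:)

import random

trials = 1000000
doubles_or_sum6 = doubles_or_odd = 0

for _ in range(trials):
    a, b = random.randint(1, 6), random.randint(1, 6)
    if a == b or a + b == 6:
        doubles_or_sum6 += 1          # events overlap at (3, 3)
    if a == b or (a + b) % 2 == 1:
        doubles_or_odd += 1           # mutually exclusive events

print(doubles_or_sum6 / trials)   # ~0.278 = 10/36
print(doubles_or_odd / trials)    # ~0.667 = 24/36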
"Two events are said to be independent if the result of the second event is not affected by the result of the first event."
http://www.mathgoodies.com/lessons/vol6 ... vents.html
google is your friend for generic questions
I just wanna say thank you...you are both nice. | <urn:uuid:d8fab256-7941-497d-b698-fb38d6290d99> | 2.828125 | 426 | Q&A Forum | Science & Tech. | 69.248508 |
FASM provides a useful macro that saves us having to explicitly declare and initialise a string. Instead of providing a pointer to the variable which contains our string, we can just type in the string itself.
Thus the line that writes to the console becomes
invoke WriteConsole,[outhandle],"Hello World!",12,numwritten,0
Of course the problem with defining a string this way is that we can only use it once. However, if this is not a problem, it is a useful time saving device for us. Internally, the macro just replaces the string with a pointer to a string which is then declared elsewhere without us seeing it.
Each call to WriteConsole simply writes to the screen immediately following the text just written. To go to a new line, we need to send a newline sequence to the console. On the Windows console this is the carriage return/line feed pair, ASCII codes 13 and 10, so we declare such a sequence as follows:
endline DB 13,10
To output this sequence to the console, we just output it as a two-byte string, just as we would any other string: | <urn:uuid:07a4bb9e-ce86-4b51-82f8-3ee49dfacf29> | 3.125 | 221 | Documentation | Software Dev. | 49.089006 |
|Mineral class||Silicates : Nesosilicates : Zircon - tantalite group.|
|Habitus||Short prismatic dipyramidal crystals or irregular grains.|
|Cleavage||Imperfect, uneven to conchoidal fracture.|
|Color||Colorless, red, brown, yellow, green or gray. Transparent to opaque.|
|Luster||Vitreous, adamantine to greasy.|
|Occurance||In granites, pegmatites and nefeline syenites.|
|Notes||Often radioactive due to small amounts of thorium and uranium (up to 14%).|
Zircon is often metamict (no crystal structure) because its own radioactivity has destroyed the internal structure.
Zircon is a very good mineral for measuring the age of rocks. It encapsulates the uranium and its decay product, lead, so it is possible to measure the age of different layers in a zircon crystal. The oldest zircon found in Sweden is 3300 million years old.
Fluorescent in UV-light.
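The dating method rests on a single formula: if the crystal trapped no lead when it formed, the age follows from the measured lead-206 to uranium-238 ratio as t = ln(1 + Pb/U) / λ. A Python sketch (the half-life is the standard value for 238U; the ratio below is a made-up illustration):

import math

HALF_LIFE_U238 = 4.468e9               # years
LAMBDA = math.log(2) / HALF_LIFE_U238  # decay constant, per year

def u_pb_age(pb206_per_u238):
    """Age in years from the 206Pb/238U atom ratio, assuming no
    initial lead and a closed system since crystallisation."""
    return math.log(1 + pb206_per_u238) / LAMBDA

# A ratio of about 0.67 gives roughly the 3300-million-year age
# quoted above for the oldest Swedish zircon.
print("%.0f million years" % (u_pb_age(0.67) / 1e6))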
|Locations|| A common mineral in microscopic amounts.|
| <urn:uuid:70de2835-0644-4092-9248-52a244d017bf> | 3.203125 | 285 | Knowledge Article | Science & Tech. | 34.828575 |
NOCTILUCENT CLOUDS: MYSTERY IN THE HIGH ATMOSPHERE
About two years after the catastrophic eruption of Krakatoa in 1883, sky watchers in Europe began seeing splendid displays of clouds in the evening sky that apparently had not been seen before. These wispy filaments of electric blue and white cloud were unique because they only became visible when the sun was well below the horizon. First described in the open literature by the English observer Thomas W. Backhouse, they were given the name noctilucent cloud, meaning "night shining." The term noctilucent clouds (Leuchtende Nachtwolken) likely originated with O. Jesse of the Berlin Observatory. However, records indicate that Thomas Romney Robinson, Director of the Armagh Observatory in Northern Ireland, may have been the first to document these clouds, at least 35 years earlier than Backhouse. On 1 May 1850, he noted in his observational diary: "strange luminous clouds in NW, not auroral."
Observations of these special clouds after Backhouse's discovery suggested they were only visible north of about 50 degrees latitude. Scientists first speculated that the dust particles from Krakatoa, that circled the Earth at a height of about 80 km (50 miles), caused noctilucent clouds. But when the dust of Krakatoa finally settled, observers continued to see noctilucent clouds at high latitudes during the summer.
Over the past century, noctilucent clouds have been considered as infrequent residents of the sky. But in the past decade or so, sightings have become more frequent, and the clouds have even been seen at much lower latitudes. Have they truly become more common? Or are they, like many other natural phenomena, noticed more because more eyes are looking? Robinson's observations suggest that the clouds were there and seen well before Krakatoa, but that the attention played to the sunset/sunrise skies following the eruption resulted in regular sightings of noctilucent clouds.
One website that has featured sightings and discussions of noctilucent clouds is Space Weather (http://www.spaceweather.com/) compiled by Dr Tony Phillips of NASA. The site, which I first visited when working on my piece on the aurora and space weather, has featured observations of noctilucent clouds for some time and maintains a Noctilucent Cloud Gallery 2007 of photos submitted by readers. The full gallery has photos dating back to 2003, and provides a good primer in what to look for if you decide to seek the illusive noctilucent.
I See Them
As I now live above 50 degrees N (52o 46' N, 119o 15' W), I began watching the evening sky for noctilucent clouds (also known by the acronym NLCs) this summer. I became more vigilant when photos of them were submitted to Space Weather by folks in northern Washington and elsewhere in British Columbia in early July. Unfortunately, at the time, we were plagued with evening cloudy skies, and the nearby mountains reduce my horizon-line view.
However, on the evening of 6 July 2007, I witnessed a display of noctilucent clouds for the first time. I noticed the clouds beginning at 10:59 pm (PDT) and they were still visible after 11:21 pm, but falling below the horizon. (Local sunset was at 9:22 pm.) I ran to get my digital camera and snapped a series of photographs (collected here). To make sure I did not miss the display, I did not take the time to search for my tripod, so the first series are a bit shaky as the exposure time required is long. After documenting the display, I located my tripod and used it to snap the last two photos.
I have looked each night for a return display, but so far, no luck, though we have again been in a very rainy period and the prime season for viewing them is nearing its end. Indeed, the night I saw them I was also treated to a good thunderstorm and at first did not expect the lower skies to clear enough to see any noctilucent. When I did look out just before 11 pm, I was surprised that the skies had cleared sufficiently to see the display.
My NLCs were more a veil pattern than some of the wispy filament observations seen over Europe this summer. They also were whiter looking, at least on the photograph than the striking electric blue of other photos. My camera was set with a three-second exposure time, a night setting available on my Fujifilm FinePix S700.
Much about noctilucent clouds remains a mystery. What is known is that they form in the high atmosphere between about 75 and 85 km (47-53 miles). They are thought to be small ice-coated particles, though some favour dust for their composition. They are extremely tenuous, so transparent that they only reflect 1/1000 of the incident solar light they receive, and thus NLCs are only visible when illuminated from below against a dark background.
Their height, in the upper stratosphere to lower mesosphere but well above the troposphere where most weather occurs, allows them to catch the sunlight when the sun is well below the horizon, usually 6 to 16 degrees, generally about 30 minutes after sunset. For this reason they are usually seen in the western sky. The clouds themselves typically range from 15-20 degrees above the horizon, along the edge of the twilight arch.
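The depression-angle window follows from simple geometry: a cloud stays lit only while it pokes above the Earth's shadow. For a cloud at height h directly over the observer, the shadow edge reaches it when the solar depression angle δ satisfies cos δ = R/(R + h). A Python sketch (spherical Earth, atmospheric refraction ignored):

import math

R = 6371.0   # Earth radius, km

def max_depression(height_km):
    """Solar depression angle (degrees) at which a cloud at the given
    height, directly overhead, falls into the Earth's shadow."""
    return math.degrees(math.acos(R / (R + height_km)))

for h in (75, 80, 85):
    print("%d km cloud lit until the sun is %.1f deg below the horizon"
          % (h, max_depression(h)))

An 80-km cloud overhead stays sunlit until the sun is roughly 9 degrees down; clouds toward the sunward horizon stay lit at still larger depression angles, which is why displays linger along the twilight arch out to 16 degrees or so.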
There don't seem to be many reports from before sunrise (in the information I have seen), but I would think they should appear in the eastern sky before sunrise as well, unless they form only after some solar heating of the upper atmosphere. A few recent scientific studies using short observational periods suggest nearly equal distribution in time of occurrence before and after local midnight. Perhaps the preponderance of evening sightings just reflects the summer sleep habits of the observers; dawn at these high latitudes comes very early.
The annual peak of NLC observations occurs a few weeks either side of the summer solstice with Northern Hemisphere observations ranging from mid-May to mid-August, just about the range of solar summer (most solar light and longest days). The most likely region for viewing is between 50 and 65 degrees latitude. Further north, we are in the realm of the midnight sun or at least strong midnight twilight which makes the sky too bright. South of there, twilight is not long enough as the sun dips below the horizon into night conditions more quickly. That does not mean that they cannot be seen further south as observations this year in Utah and Colorado attest. In the Southern Hemisphere, ground-based observations are infrequent due to the fact that hemisphere has little land area within the favoured band.
The optimum viewing geometry for noctilucent clouds. Sunlight scattered by tiny ice crystals in NLCs is what gives the clouds their characteristic blue color. Courtesy NASA
An early observer of NLCs, Robert Leslie described them as "...weird small cloud forms, at times very regular, like ripple marks in sand, or the bones of some great fish or saurian embedded on a slab of stone." Observations over the years confirm Leslie's description and tell us NLCs can appear as complex streaks or thick veils with a white, electric blue, or pearly blue hue. They may have a fishnet appearance or exhibit bands, waves or whirls. NLCs often look like daylight cirrus but experienced observers tell us they show finer details under binocular or other magnification whereas cirrus tend to be nebulous when so viewed.
Electric blue noctilucent clouds viewed from the ISS. Photo credit: Don Pettit and NASA TV.
NLCs have also been viewed from space by the astronauts in the International Space Station. Astronaut Don Pettit viewed a display over the Southern Hemisphere in early 2003 (late January to early February). "We routinely see them when we're flying over Australia and the tip of South America," he said. While a staff scientist at the Los Alamos National Laboratory between 1984 and 1996, Pettit had studied NLCs that were seeded by high-flying sounding rockets. "Seeing these kinds of clouds [from space] ... is certainly a joy for us on the ISS," he said on NASA TV.
A good overview of NLCs and identification manual, Observing Noctilucent Clouds by M. Gadsden and P. Parviainen, can be downloaded in pdf format from the International Association of Geomagnetism and Aeronomy.
At present, there are two main theories as to the cause and formation of NLCs. The dust theory proposes that the clouds form from either volcanic or cosmic (meteorite) dust. One of the arguments for this theory cites the first observations after Krakatoa as evidence and supplement that with the fact that most observations fall around the time of summer meteor showers. There are several important flaws in this theory and it is generally rejected, though high-altitude dust may provide the seeds for ice crystals.
The main theory holds that the NLCs are composed of water ice nucleated by small dust particles or ions in the mesopause region. Since this region is very dry ("one hundred million times drier than air over the Sahara desert"), the temperatures must be very low to produce supersaturation of the local air. How ice crystals form in the arid air is the essential mystery of noctilucent clouds.
The sliver of the setting moon and noctilucent clouds caught the eye of astronaut Ed Lu aboard the International Space Station (ISS) on 27 July 2003 when the ISS was over central Asia. Courtesy NASA
Intuitively, one would think it gets colder as one ascends higher into the atmosphere, based on our observations of the lower atmosphere. But this is not the case. The continual drop in temperature stops as we pass through the tropopause into the stratosphere. After an isothermal layer, the stratosphere begins warming in its upper regions due to the absorption of solar ultraviolet (UV) radiation by ozone in the ozone layer. The hottest region of the atmosphere above the troposphere's surface layer is actually found around the stratosphere-mesosphere boundary. (The thermosphere, where atmosphere and outer space become indistinguishable is technically hotter still.)
So how does this region cool down sufficiently for NLCs to form? The answer may be found in gravity waves, the term used by upper-air meteorologists (aeronomers) to describe buoyancy waves generated by differences in air density (a wave on the ocean is also a gravity wave). These waves can be caused by the jet stream, towering thunderstorm clouds, or mountain ranges. The waves force air upward, but in doing so, the air cools which dampens the wave. The full mechanism is a complex interaction between solar radiation, local chemistry of the ozone layer and fluid dynamics forces. I'll leave out most of the details on the process here (see http://www.atm.helsinki.fi/~tpnousia/nlcgal/nlcinfo.html for more).
Atmospheric layers: the mesopause is at the boundary of the mesosphere and thermosphere.
The motion of the wave through the mesosphere results in an extreme cooling of the local temperature field, by as much as 60 K (60 Celsius degrees) below the non-sun winter values, down to around 130 K (-143°C / -225°F). Interestingly, this process can only happen during the summer months in the mesosphere, and only some properly oriented gravity waves can produce such changes.
These gravity waves may also provide the water vapour needed to form the clouds by moving it upward from the stratosphere. The waves also provide the brush by which NLCs derive their characteristic textures and shapes. Water vapour can also be injected through the breakup of methane gas molecules by solar energy at these high altitudes. The chemical process involves ozone and free oxygen also present at these altitudes. But water is not immune to the energetic solar beam and is also broken up. Estimates give a water molecule an average lifetime of 3-10 days in the mesopause region (around the boundary between the mesosphere and the thermosphere).
Satellite observations show that NLCs are also present poleward of 60 degrees, but the "midnight sun" of summer obscures their sighting from the ground. Often the whole polar region is covered with these high clouds which are referred to in this context as polar mesospheric clouds.
Based on surface observations, the particles making up NLCs are likely very small, less than 50 nanometres across (one nanometre is one-billionth of a metre, or about 4x10^-8 of an inch). To understand more about NLCs, researchers at NASA and Hampton University in Virginia have undertaken the AIM Mission.
Finally, some scientists believe there is a connection between the increase in NLC sightings and climate change. They suggest that the increase in greenhouse gases has raised the temperature in the lower atmosphere but has lowered the temperature in the upper atmosphere. A reduction in the ozone layer could also change the nature of the region resulting in lower temperatures. The atmospheric content of methane has also been rapidly increasing over the past centuries, and some of this may be reaching the upper atmosphere where it can be photo-dissociated and increase the water content.
The AIM Mission
The extreme altitude and limited extent of NLCs makes the probing of these clouds from the Earth's surface difficult at best. In order to determine the nature of NLCs, NASA in conjunction with Virginia's Hampton University and other research groups launched the Aeronomy of Ice in the Mesosphere (AIM) mission to provide the first detailed exploration of these unique and elusive clouds sitting literally on the "edge of space." The launch of the AIM satellite came on 25 April 2007 aboard a Pegasus XL Rocket from Vandenberg Air Force Base in California.
AIM Mission Poster
Takeoff of the L-1011 carrier aircraft, Stargazer, with the Pegasus XL rocket and NASA's AIM spacecraft. Image credit: NASA
Over the two-year mission, AIM will document for the first time the entire complex life cycle of polar mesospheric clouds (aka noctilucent clouds). AIM will take wide angle photos of NLCs to see their structure. The satellite instruments will also measure their temperatures and chemical abundances, monitor dusty aerosols, and count the meteoroids raining down on Earth. With this data, scientists hope to unravel many of the mysteries about what NLCs are and how they form.
On 11 June 2007 AIM's satellite cameras sent its first visual data documenting noctilucent clouds over the Arctic regions of Europe and North America.
White and light blue represent noctilucent cloud structures. Black indicates areas where no data is available.
Credit: Cloud Imaging and Particle Size Experiment, University of Colorado Laboratory for Atmospheric and Space Physics
Once the summer season ends in the Northern Hemisphere — mid- to late August — the satellite's attention will turn to the Southern Hemisphere in late November. Following southern summer, the focus will again return to the north. The project team has planned for two complete seasons of measurement in both hemispheres.
Keep track of the AIM Mission news at their websites: NASA (http://www.nasa.gov/mission_pages/aim/index.html)
AIM Project Data Center, Hampton University (http://aim.hamptonu.edu/index.html).
The AIM folks even have a mission song, Noctilucent Cloud, written by Patricia Boyd and sung by The Chromatics on their AstroCappella CD.
| <urn:uuid:a8b6989b-3bd4-47f1-b80f-219f17ea887b> | 3.5 | 3,294 | Personal Blog | Science & Tech. | 45.144366 |
Simple API for XML
David Megginson, principal of Megginson Technologies, led the development of the Simple API for XML (SAX), a widely-used specification that describes how XML parsers can pass information efficiently from XML documents to software applications. SAX was originally implemented in Java, but is now supported by nearly all major programming languages.
Unlike most XML-related specifications, SAX does not come from a formal committee at a standards body or industry consortium. David put together a proof-of-concept implementation during the holidays in December 1997 and presented it to the xml-dev mailing list in January 1998. Through collaborative online discussion, the xml-dev members developed the proof-of-concept into SAX 1.0, which immediately received industry-wide acceptance from both commercial and free software developers.
Pros and cons
SAX is a streaming interface — applications receive information from XML documents in a continuous stream, with no backtracking or navigation allowed. This approach makes SAX extremely efficient, handling XML documents of nearly any size in linear time and near-constant memory, but it also places greater demands on the software developer's skills. Tree-based interfaces, like the Document Object Model (DOM), make exactly the opposite trade-off: they are much simpler for developers, but at the cost of significant time and computer resources. | <urn:uuid:d173e415-3323-4e8f-8fdc-a2014d44c488> | 2.953125 | 273 | Knowledge Article | Software Dev. | 23.527426 |
Scientific name: Gavia immer
Length: 27 - 35 inches
Wingspan: around 60 inches
Weight: males larger; anywhere from 3.5 to 17 pounds
Plumage: black, white and gray; darker during the breeding seasons; in winter, evenly gray; mainly distinguished by straight, black (and sharp) bill
Range: mostly found in northern United States and Canada lakes, waterways
Breeding grounds: Greenland, Canada and most northern United States including Alaska; on coasts and in-land lakes
Reproduction: breed once a year, near spring; will lay an average of 2 eggs
Behavior: forager, bird of prey
Diet: carnivorous; fish, shrimp, lobster, crabs, mollusks, insects, etc.
Average lifespan: some researchers believe up to 9 years; others, 30 years
Threats: marine mammals like otters or larger birds of prey to adults; turtles, gulls, raccoons, and other predators are known to take eggs and young loons
Conservation status: threatened; loss and degradation of habitat, pollution; man is their biggest threat
The common loon is an aquatic migratory bird that has some rather strange characteristics. Its most intriguing quality is the unusual calls it makes - which vary from long wails to an almost maniacal laughter (known as the tremolo) to a yodel and a hoot. So loud and eerie are these calls that the well-known naturalist John Muir once remarked that they are “one of the wildest and most striking of all wilderness sounds.”
The loon is also known for being extremely awkward. Its legs are placed far back on its body, which makes it a powerful swimmer. This arrangement, however, doesn't help when on land: loons are rather clumsy on their own two feet.
Wintering grounds are chosen based on whether or not the water will freeze. This is why this migratory bird prefers the Great Lakes and North America's Pacific and Atlantic coasts instead of the warmer climates many other migratory birds prefer. These spots are also abundant in the crabs, lobsters, shrimp, fish and gulf menhaden it prefers to eat.
Though we call them crazy, the common loon has the smarts to survive despite its disadvantages on the ground - and it's not just about its lack of ability when it comes to walking. Common loons are considered threatened in many of their natural habitats due to habitat loss and degradation. Pollution, particularly industrial waste, has also affected this aquatic bird.
Although man is the common loon's biggest threat, we are also the answer when protecting this species. Keeping habitats safe from land development and other human interferences will keep the loon coming back to its breeding and wintering grounds. | <urn:uuid:b8545e32-f906-4513-8c97-dea40a9d25b8> | 3.59375 | 580 | Knowledge Article | Science & Tech. | 43.376421 |
Basics of optically pumped astrophysical lasers
Optical (visible and near-IR) astrophysical lasers are pumped by selective photoexcitation through a ‘photoexcitation by accidental resonance’ (PAR) process, introduced by Kastner and Bhatia. This chapter considers some known examples of PAR and how population inversion is created by the PAR effect under astrophysical conditions.
| <urn:uuid:2b4a9ca9-d026-4a8c-b7db-cba766e4e6a4> | 2.84375 | 144 | Truncated | Science & Tech. | 23.820542 |
MAKING WAVES: Waves are the result of wind traveling over water. Imagine a breeze blowing gently across the surface of a lake, creating small waves. The waves arise from the surface tension of water. The molecules on the water's surface hold together and form a sort of 'skin', which makes the surface stretchy, and therefore 'sticky.' As more air passes over that sticky surface, it grabs some molecules and pushes them into molecules ahead, which push on other molecules, and so on, so that the wave travels to the opposite end of the shore. The water mostly stays in place; it's the disturbance caused by the wind that is moving across the water. In strong wind, the waves become choppy. The stronger the wind, the larger the waves, because as the waves move, they run into each other and merge-- adding their energy together to become bigger and move faster.
The American Geophysical Union, the American Meteorological Society, the American Association of Physics Teachers, and the American Physical Society contributed to the information contained in the TV portion of this report. | <urn:uuid:9a697d21-d08f-4535-b990-421f8d4e37d3> | 4.28125 | 220 | Knowledge Article | Science & Tech. | 35.48847 |
Well, we've been discussing air masses over the past few blogs, and yesterday I told you a front is the "front line," so to speak, of these air masses.
Today we'll start learning more about fronts.
Cold fronts, stationary fronts, warm fronts, back door cold fronts, occluded fronts...you may have never realized there are so many terms out there.
It all just depends on what is happening at the boundary between two air masses.
Let's start with a stationary front.
This is just a front that has no movement. Neither air mass is strong enough to overtake the other.
Waves of lower pressure, or disturbances, may ride along the front, bringing unsettled weather to a region, sometimes for several days.
Eventually either the two air masses kind of just equal out and blend into one another, sometimes described by a t.v. weatherman as the front "washing out".
Or other times a new weather system may come along and give one air mass the strength it needs to push on and win the battle.
On a colored weather map, the stationary front is drawn as an alternating red and blue line.
The semicircles point toward the colder air on the red line.
The triangles point toward the warmer air on the blue line.
(Kind of opposite of what one might think -- welcome to meteorology!!)
Winds at the surface tend to blow parallel to the stationary front, and in opposite direction on either side of it.
Up above in the atmosphere, winds also tend to blow parallel to the front.
(Remember you now have to think 3-D -- the front extends from the surface up into the atmosphere)
Monday we will talk about the types of weather you can expect along a stationary front. | <urn:uuid:60da42b6-c5ef-4caa-9e71-84969bcf18e4> | 3.96875 | 366 | Personal Blog | Science & Tech. | 65.080775 |
wcwidth - number of column positions of a wide-character code
#include <wchar.h>
int wcwidth(wchar_t wc);
The wcwidth() function determines the number of column positions required for the wide character wc. The value of wc must be a character representable as a wchar_t, and must be a wide-character code corresponding to a valid character in the current locale.
The wcwidth() function either returns 0 (if wc is a null wide-character code), or returns the number of column positions to be occupied by the wide-character code wc, or returns -1 (if wc does not correspond to a printing wide-character code).
No errors are defined.
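An illustrative sketch (not part of the original manual page); note that the locale must be set before calling wcwidth():

#include <locale.h>
#include <stdio.h>
#include <wchar.h>

int main(void)
{
    setlocale(LC_ALL, "");        /* wcwidth() depends on the current locale */
    wchar_t wc = L'A';
    int width = wcwidth(wc);      /* expect 1 for a narrow printing character */
    if (width == -1)
        printf("not a printing wide-character code\n");
    else
        printf("column positions: %d\n", width);
    return 0;
}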
See attributes(5) for descriptions of the following attributes: | <urn:uuid:2047b449-3f3b-4afc-af01-fc201d8a836b> | 2.71875 | 168 | Documentation | Software Dev. | 45.447345 |
Hydrogen (H2), the lightest element, has a gaseous specific gravity of 0.0695 and a boiling point of -423 F (-252.8 C) at atmospheric pressure. It is a colorless, odorless, tasteless, flammable gas found at concentrations of about 0.0001% in air. Hydrogen is produced by several methods, including steam/methane reforming, dissociation of ammonia, and recovery from by-product streams from chemical manufacturing and petroleum reforming. Hydrogen can be stored and transported as either a gas or a cryogenic liquid.
In the welding industry, hydrogen is used as a fuel in underwater oxy-hydrogen torches, and for metal welding and brazing. Hydrogen is widely used in petroleum refining processes such as hydrotreating, catalytic reforming, and hydro-cracking. It is used in the food industry for turning inedible grease into soaps and animal feeds. It is a raw material for innumerable chemical processes ranging from the manufacturing of high-density polyethylene and polypropylene resins to the hydrogenation of food-grade oils. Hydrogen is also used as a reducing gas in metals processing operations. Applications in the electronics industry are found in the manufacture of silicon wafers and computer chips. Rocket engine fuel is another major use for hydrogen since weight and energy considerations are paramount to its success.
DOT Name: Hydrogen
DOT Hazard Class: Flammable Gas
DOT Label: Flammable Gas
DOT ID No.: UN1049
CAS No.: 1333-74-0
Valve Outlet: CGA 350, LB-CGA 110/170
Physical State in High Pressure Cylinder: Gas
Fire Potential: Flammable
Physical Properties of Hydrogen
Molecular Weight: 2.016 lb/lbmol
Specific Volume at 70°F and 1 atm: 192.0 ft3/lb (11.99 m3/kg)
Specific Heat: 6.87 BTU/lbmol-deg F @ 70 deg. F
Specific Gravity: 0.069
Gas Density: 0.005210 lb/ft3 @ 70 deg. F, 14.7 PSIA
Boiling Point: Temperature: -423.0 deg. F (-252.8 deg. C)
Liquid Density: 4.43 lb./ft3
Latent Heat: 95.0 BTU/lb.
Critical Point: Temperature: -400.3 deg. F
Pressure: 187.51 PSIA
Melting Point: Temperature: -434.8 deg. F
Pressure: 1.021 PSIA
Discovered in 1766, the hydrogen atom is the simplest atom that can possibly exist. The most common isotope is composed of a single proton and an electron. There is relatively little hydrogen gas in the earth’s atmosphere, but there are plenty of hydrogen atoms in compounds like water. Consider that every molecule of water in all the seas, lakes and streams contains two hydrogen atoms; hydrogen is one of the ten most abundant elements on the earth.
Hydrogen is a light colorless gas, which has no smell or taste when pure. It burns explosively in air or oxygen to form water, H2O. It combines directly with nonmetals to form compounds. With reactive metals such as lithium, sodium, and calcium, hydrogen forms metal hydrides. These decompose in water and liberate hydrogen gas. The cation H+, is characteristic of acids in aqueous solution.
The principal industrial sources of hydrogen are listed below (balanced equations for two of these follow the list):
- Electrolysis of aqueous solutions of sodium chloride, table salt,
- Reaction of carbon with water to form carbon monoxide and hydrogen gas,
- Cracking processes in oil refineries, and
- Reaction of methane, CH4 (in natural gas), or other simple hydrocarbons with steam.
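As a quick sketch (my summary, not part of the original list), balanced equations for the carbon and methane routes are:

C + H2O → CO + H2 (the water-gas reaction)
CH4 + H2O → CO + 3 H2 (steam reforming)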
There are three isotopes of hydrogen:
1. 1H, occurs in nature at 99.9985% abundance,
2. 2H, or D, also known as deuterium or heavy hydrogen, occurs in nature at 0.015% abundance, and
3. 3H, or T, also known as tritium, is found only in trace amounts.
The best-known deuterium compound is deuterium oxide, D2O, or heavy water. This name is appropriate because the deuterium atom is twice as heavy as 1H. Since deuterium occurs naturally, roughly one hydrogen atom in every 7000 in ordinary water is a deuterium atom. Heavy water can be prepared by prolonged electrolysis of ordinary water. Approximately 100,000 gallons of water have to be carefully electrolyzed to produce a single gallon of pure heavy water. Considering the cost of the electrical energy involved in such a process, heavy water is generally regarded as a scarce commodity. Heavy water is a suitable and convenient moderator in nuclear reactors.
Tritium, 3H, is extremely rare in nature. It occurs in ordinary water, but only in proportions of one atom for every 10^18 atoms of 1H. Tritium is more effectively produced by nuclear reactions than by separation from water by electrolysis.
How to Think Like a Computer Scientist
source ref: ebookit.html
I haven't said much about it, but it is legal in Java to make more than one assignment to the same variable. The effect of the second assignment is to replace the old value of the variable with a new value.
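A minimal sketch of such a program (a plausible reconstruction, not necessarily the book's original listing):

int fred = 5;
System.out.print(fred);
fred = 7;
System.out.print(fred);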
The output of this program is 57, because the first time we print fred his value is 5, and the second time his value is 7.
This kind of multiple assignment is the reason I described variables as a container for values. When you assign a value to a variable, you change the contents of the container, as shown in the figure.
When there are multiple assignments to a variable, it is important to distinguish between an assignment statement and a statement of equality. Because Java uses the = symbol for assignment, it is tempting to interpret a statement like a = b as a statement of equality. It is not!
First of all, equality is commutative, and assignment is not. For example, in mathematics if a = 7 then 7 = a. But in Java a = 7; is a legal assignment statement, and 7 = a; is not.
Furthermore, in mathematics, a statement of equality is true for all time. If a = b now, then a will always equal b. In Java, an assignment statement can make two variables equal, but they don't have to stay that way!
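A minimal sketch of the example being described (reconstructed; the variable names follow the text):

int a = 5;
int b = a;    // a and b are now equal
a = 3;        // a and b are no longer equal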
The third line changes the value of a but it does not change the value of b, and so they are no longer equal. In many programming languages an alternate symbol is used for assignment, such as <- or :=, in order to avoid this confusion.
Although multiple assignment is frequently useful, you should use it with caution. If the values of variables are changing constantly in different parts of the program, it can make the code difficult to read and debug.
One of the things computers are often used for is the automation of repetitive tasks. Repeating identical or similar tasks without making errors is something that computers do well and people do poorly.
We have already seen programs that use recursion to perform repetition, such as nLines and countdown. This type of repetition is called iteration, and Java provides several language features that make it easier to write iterative programs.
The two features we are going to look at are the while statement and the for statement.
Using a while statement, we can rewrite countdown:
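A sketch of the rewritten method (a reconstruction consistent with the description that follows):

public static void countdown(int n) {
    while (n > 0) {
        System.out.println(n);
        n = n - 1;           // count down toward zero
    }
    System.out.println("Blastoff!");
}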
You can almost read a while statement as if it were English. What this means is, "While n is greater than zero, continue printing the value of n and then reducing the value of n by 1. When you get to zero, print the word 'Blastoff!'"
More formally, the flow of execution for a while statement is as follows:
1. Evaluate the condition, yielding true or false.
2. If the condition is false, exit the while statement and continue execution at the next statement.
3. If the condition is true, execute each of the statements in the body and then go back to step 1.
This type of flow is called a loop because the third step loops back around to the top. Notice that if the condition is false the first time through the loop, the statements inside the loop are never executed. The statements inside the loop are sometimes called the body of the loop.
In general, the body of the loop should change the value of one or more variables so that, eventually, the condition becomes false and the loop terminates. Otherwise the loop will repeat forever, which is called an infinite loop. An endless source of amusement for computer scientists is the observation that the directions on shampoo, "Lather, rinse, repeat," are an infinite loop.
In the case of countdown, we can prove that the loop will terminate because we know that the value of n is finite, and we can see that the value of n gets smaller each time through the loop (each iteration), so eventually we have to get to zero. In other cases it is not so easy to tell:
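A sketch of the loop in question (reconstructed from the description below):

public static void sequence(int n) {
    while (n != 1) {
        System.out.println(n);
        if (n % 2 == 0) {    // n is even
            n = n / 2;
        } else {             // n is odd
            n = n * 3 + 1;
        }
    }
}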
The condition for this loop is n != 1, so the loop will continue until n is 1, which will make the condition false.
At each iteration, the program prints the value of n and then checks whether it is even or odd. If it is even, the value of n is divided by two. If it is odd, the value is replaced by 3n+1. For example, if the starting value (the argument passed to sequence) is 3, the resulting sequence is 3, 10, 5, 16, 8, 4, 2, 1.
Since n sometimes increases and sometimes decreases, there is no obvious proof that n will ever reach 1, or that the program will terminate. For some particular values of n, we can prove termination. For example, if the starting value is a power of two, then the value of n will be even every time through the loop, until we get to 1. The previous example ends with such a sequence, starting with 16.
Particular values aside, the interesting question is whether we can prove that this program terminates for all values of n. So far, no one has been able to prove it or disprove it!
One of the things loops are good for is generating and printing tabular data. For example, before computers were readily available, people had to calculate logarithms, sines and cosines, and other common mathematical functions by hand.
To make that easier, there were books containing long tables where you could find the values of various functions. Creating these tables was slow and boring, and the result tended to be full of errors.
When computers appeared on the scene, one of the initial reactions was, "This is great! We can use the computers to generate the tables, so there will be no errors." That turned out to be true (mostly), but shortsighted. Soon thereafter computers (and calculators) were so pervasive that the tables became obsolete.
Well, almost. It turns out that for some operations, computers use tables of values to get an approximate answer, and then perform computations to improve the approximation. In some cases, there have been errors in the underlying tables, most famously in the table the original Intel Pentium used to perform floating-point division.
Although a "log table" is not as useful as it once was, it still makes a good example of iteration. The following program prints a sequence of values in the left column and their logarithms in the right column:
Looking at the values this program prints, can you tell what base the log function uses by default?
Since powers of two are so important in computer science, we often want to find logarithms with respect to base 2. To find that, we have to use the change-of-base formula: log2(x) = log(x) / log(2).
Changing the print statement to System.out.println(x + "   " + Math.log(x) / Math.log(2.0)); gives the base-2 logarithms.
We can see clearly that 1, 2, 4 and 8 are powers of two, because their logarithms base 2 are round numbers. If we wanted to find the logarithms of other powers of two, we could modify the program like this:
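A sketch of the modified loop (reconstructed; the upper bound here is an assumption):

double x = 1.0;
while (x < 100.0) {          // assumed loop bound
    System.out.println(x + "   " + Math.log(x) / Math.log(2.0));
    x = x * 2.0;             // multiply instead of add: 1, 2, 4, 8, ...
}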
Now instead of adding something to x each time through the loop, which yields an arithmetic sequence, we multiply x by something, yielding a geometric sequence. The result is a table of successive powers of two alongside their base-2 logarithms.
Log tables may not be useful any more, but for computer scientists, knowing the powers of two is! Some time when you have an idle moment, you should memorize the powers of two up to 65536, which is 2^16.
A two-dimensional table is a table where you choose a row and a column and read the value at the intersection. A multiplication table is a good example. Let's say you wanted to print a multiplication table for the values from 1 to 6.
A good way to start is to write a simple loop that prints the multiples of 2, all on one line.
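A sketch of that loop (reconstructed to match the line-by-line description that follows):

int i = 1;
while (i <= 6) {
    System.out.print(2 * i + "   ");   // the value 2*i followed by three spaces
    i = i + 1;
}
System.out.println("");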
The first line initializes a variable named i, which is going to act as a counter, or loop variable. As the loop executes, the value of i increases from 1 to 6, and then when i is 7, the loop terminates. Each time through the loop, we print the value 2*i followed by three spaces. Since we are using the print command rather than println, all the output appears on a single line.
As I mentioned in Section 2.4, in some environments the output from print gets stored without being displayed until println is invoked. If the program terminates and you forget to invoke println, you may never see the stored output.
The output of this program is: 2   4   6   8   10   12
So far, so good. The next step is to encapsulate and generalize. Encapsulation usually means taking a piece of code and wrapping it up in a method, allowing you to take advantage of all the things methods are good for. Generalization means taking something specific, like printing multiples of 2, and making it more general, like printing the multiples of any integer.
Here's what such a method would look like:
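A sketch of the method (reconstructed from the surrounding description):

public static void printMultiples(int n) {
    int i = 1;
    while (i <= 6) {
        System.out.print(n * i + "   ");
        i = i + 1;
    }
    System.out.println("");
}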
To encapsulate, all I had to do was add the first line (and the last squiggly-brace) to declare the name, parameter, and return type. To generalize, all I had to do was replace the value 2 with the parameter n.
If I invoke this method with the argument 2, I get the same output as before. With argument 3, the output is 3   6   9   12   15   18, and with argument 4, the output is 4   8   12   16   20   24.
By now you can probably guess how we are going to print a multiplication table, by invoking printMultiples repeatedly with different arguments. In fact, we are going to use another loop to iterate through the rows.
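A sketch of that outer loop (reconstructed):

int i = 1;
while (i <= 6) {
    printMultiples(i);   // each call prints one row of the table
    i = i + 1;
}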
First of all, notice how similar this loop is to the one inside printMultiples. All I did was replace the print statement with a method invocation.
The output of this program is
1   2   3   4   5   6
2   4   6   8   10   12
3   6   9   12   15   18
4   8   12   16   20   24
5   10   15   20   25   30
6   12   18   24   30   36
which is a (slightly sloppy) multiplication table. If the sloppiness bothers you, Java provides methods that give you more control over the format of the output, but I'm not going to get into that here.
In the last section I mentioned "all the things methods are good for." About this time, you might be wondering what exactly those things are. Here are some of the reasons we have seen so far that methods are useful.
To demonstrate encapsulation again, I'll take the code from the previous example and wrap it up in a method:
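A sketch of the wrapped-up method (reconstructed):

public static void printMultTable() {
    int i = 1;
    while (i <= 6) {
        printMultiples(i);
        i = i + 1;
    }
}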
By the way, the process I am demonstrating is a common development plan. You develop code gradually by adding lines to main or someplace else, and then when you get it working, you extract it and wrap it up in a method.
The reason this is useful is that you usually don't know, when you start writing, exactly how to divide the program into methods. This approach lets you design as you go along.
About this time, you might be wondering how we can use the same variable i in both printMultiples and printMultTable. Didn't I say that you can only declare a variable once? And doesn't it cause problems when one of the methods changes the value of the variable?
The answer to both questions is "no," because the i in printMultiples and the i in printMultTable are not the same variable. They have the same name, but they do not refer to the same storage location, and changing the value of one of them has no effect on the other.
Variables that are declared inside a method definition are called local variables because they are local to their own methods. You cannot access a local variable from outside its "home" method, and you are free to have multiple variables with the same name, as long as they are not in the same method.
It is often a good idea to use different variable names in different methods, to avoid confusion, but there are good reasons to reuse names. For example, it is common to use the names i, j and k as loop variables. If you avoid using them in one method just because you used them somewhere else, you will probably make the program harder to read.
As another example of generalization, imagine you wanted a program that would print a multiplication table of any size, not just the 6x6 table. You could add a parameter to printMultTable:
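A sketch of the parameterized version (reconstructed):

public static void printMultTable(int high) {
    int i = 1;
    while (i <= high) {
        printMultiples(i);
        i = i + 1;
    }
}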
I replaced the value 6 with the parameter high. If I invoke printMultTable with the argument 7, I get a table with seven rows, which is fine, except that I probably want the table to be square (same number of rows and columns). That means I have to add another parameter to printMultiples, to specify how many columns the table should have.
Just to be annoying, I will also call this parameter high, demonstrating that different methods can have parameters with the same name (just like local variables):
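A sketch of both methods with the new parameter (reconstructed):

public static void printMultiples(int n, int high) {
    int i = 1;
    while (i <= high) {
        System.out.print(n * i + "   ");
        i = i + 1;
    }
    System.out.println("");
}

public static void printMultTable(int high) {
    int i = 1;
    while (i <= high) {
        printMultiples(i, high);   // high columns in each of high rows
        i = i + 1;
    }
}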
Notice that when I added a new parameter, I had to change the first line of the method (the interface or prototype), and I also had to change the place where the method is invoked in printMultTable. As expected, this program generates a square 7x7 table.
When you generalize a method appropriately, you often find that the resulting program has capabilities you did not intend. For example, you might notice that the multiplication table is symmetric, because ab = ba, so all the entries in the table appear twice. You could save ink by printing only half the table. To do that, you only have to change one line of printMultTable: change the invocation printMultiples(i, high); to printMultiples(i, i); and you get a triangular table in which row i contains only i entries.
I'll leave it up to you to figure out how it works. | <urn:uuid:77ed1b27-26ad-4f9b-9835-cfa51986056d> | 3.9375 | 2,729 | Documentation | Software Dev. | 52.312219 |
Hello... I do not understand this question and was hoping that someone could guide me through it.
A flat circular disc, of radius R, can be modelled as a thin disc of negligible thickness.
It has a surface mass density function given by f(r,θ) = k(1 - r^2/R^2),
where k is the surface density at the centre and r is the distance from the centre of the disc.
Using an area integral in plane polar coordinates, calculate the total mass of the disc, in kg, when R = 0.15 m and k = 12 kg/m^2 | <urn:uuid:37569ebc-6e9b-4c46-8297-4fe0eff7242c> | 2.9375 | 127 | Q&A Forum | Science & Tech. | 87.238041 |
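A sketch of one way to set it up (my working, not from the original thread): in plane polar coordinates the area element is r dr dθ, so

$$M = \int_0^{2\pi}\!\int_0^R k\left(1 - \frac{r^2}{R^2}\right) r\,dr\,d\theta = 2\pi k\left[\frac{r^2}{2} - \frac{r^4}{4R^2}\right]_0^R = \frac{\pi k R^2}{2}.$$

With k = 12 kg/m^2 and R = 0.15 m, this gives M = π(12)(0.15)^2/2 ≈ 0.42 kg.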
These researchers are proposing that aircraft can function more efficiently using superconducting turbines, which are powered by superconducting magnets. I'm not kidding you.
A superconducting magnet, however, would be much more efficient and powerful for its size. When chilled to 77 kelvins (–321 degrees Fahrenheit) or colder, so-called high-temperature superconductors such as the ceramic YBCO (yttrium barium copper oxide) begin to carry electricity without resistance, which produces a strong magnetic field without wasting energy.
Just because YBCO becomes a superconductor at liquid nitrogen temperature does not mean all the problems are solved. It is a Type II superconductor, which means that there will be magnetic vortices migrating all over the place when it is within a magnetic field of sufficient strength. This diminishes its "zero resistance" property. Furthermore, the reason why these high-Tc superconductors are not used in all those superconducting magnets being installed at the LHC is that they can't tolerate high magnetic fields: the low supercurrent density tends to quench superconductivity at sufficiently high fields.
But more importantly, they seem to ignore the fact that (i) you have to carry the cryogenics and (ii) you have to maintain the cryogenics. This adds weight AND requires extra power consumption. Did they take this into account when they estimated all of these "efficiency" figures?
To be fair, I should read the paper they published rather than rely on some silly news report, especially one from Sci-Am that lacks any exact citation. So I'll try to get hold of it and see if I change my mind afterwards. | <urn:uuid:7080780b-dc55-4766-b35c-9cf9465c6b0c> | 3.34375 | 353 | Personal Blog | Science & Tech. | 35.040897 |
Wednesday December 7, 2011 05:55
Before starting PHP: Important Things
Posted by pradeep as PHP
What is PHP?
PHP is a programming language, mostly used for the web.
Many people get confused about how to start learning PHP. Studying all the theory and staying purely theoretical isn't entertaining. The best way to learn is practically: build up concepts and edit pre-existing PHP code.
Years ago this was one of my questions too. I tried to learn PHP by browsing Google for about six months but got nothing. I had previously been working with Java (J2ME), and one company asked me to make a mobile game which sends high scores to a PHP page, saves them in a database, and displays them in the game. Well, that was the day I actually got a useful sense of what PHP means. I browsed other people's source code, edited it, and learned many things, and some days later I was able to develop the concepts in my mind. We don't need to memorize the syntax, but at least you should know what operations and actions each construct performs, because we can find syntax anywhere, and making software and websites is something practical, not a theory exam.
* Some questions in mind? As a beginner I had one question: HTML pages run in my browser, but why not PHP?
Well, we need to install server software on our local machine, since PHP code runs on servers. For that you can download XAMPP or WAMP; browse these keywords in Google and download the necessary software. After you install and run the server, you can access any PHP page at localhost.
* I have a PHP page; how can I make it available online to the world?
Well, there are a few things to start with: a domain, web hosting, etc. A domain is a unique name; think of it as your appearance. Web hosting is the place where we store our files; think of it as your brain, heart, etc.
Every web host has name servers. While registering a domain we provide the name servers, and anyone who browses that domain will see the files on the web hosting to which the name servers are set. For example, www.pradeepsofts.com is my domain. While registering it I supplied my hosting provider's name servers, ns1.hostwebworld.com and ns2.hostwebworld.com. Now when you type this URL, the name server forwards you to my web hosting, where I have all kinds of files.
So you need a domain and a hosting account. For trials you can register at co.cc, and for hosting use free hosts like 000webhost (dot) com, Freehostia (dot) com, etc. Upload your files to your web hosting and they will be seen worldwide via the domain.
* Some extra tips.
- As we go on developing, the code expands and gets more buggy. So make separate pages and include them wherever you need them. For example, if you need to connect to the database in 2 PHP files, make one extra PHP file which holds the database connection code and include it in both PHP pages using a simple command like include 'db.php';. This helps keep the code neat, human-understandable, and less buggy (see the sketch below).
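A minimal sketch of the idea (the file names, credentials, and query here are made up for illustration):

<?php
// db.php -- shared database connection, written once and reused
// (hypothetical host/user/password/database values)
$db = new mysqli('localhost', 'user', 'password', 'mydatabase');
if ($db->connect_error) {
    die('Connection failed: ' . $db->connect_error);
}
?>

<?php
// page1.php -- any page that needs the database just includes db.php
include 'db.php';
$result = $db->query('SELECT score FROM highscores ORDER BY score DESC');
?>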
- While doing anything new (changing code, updating, etc.), be sure you have good backups of the old working version. Otherwise, if you face an error in the middle and you can't solve it, you get depressed. Trust me, it happened to me many times. So make backups.
- Use good software like Dreamweaver, Expression Web, FrontPage, etc., which have a working Ctrl+Z. They will make you smarter than plain Notepad.
- Every error you face means you learn more. Since Google is there, just type the error into Google and you get a solution. Solve it and go on developing, and if there are any more? Google is there.
- Please do not expect to be a programmer in one night; that's pretty nonsensical. Try working with code, gain some ideas and experience, and have a bit of patience. If you do this regularly every day, you will gain a lot in a month.
- Always have your algorithms set up as rough sketches, or in your mind, to reduce puzzles and bugs. Every new thing we do adds bugs, so plan first.
- More experimenting means more learning. So give it your good time.
Hope this helped you. I would love to see some people making good scripts starting from my blog. These tips are enough to make you smart.
Best of luck. | <urn:uuid:0773af18-7773-4e0c-a493-f872193803c0> | 2.921875 | 964 | Personal Blog | Software Dev. | 68.852632 |
What is heat energy?
Without getting into the picky points about how the words are used precisely, heat energy is the energy involved in all the little random, small-scale jigglings of things. For example, molecules bounce around randomly and squish into each other. The motion has kinetic energy and the squishing has potential energy, like a compressed spring.
(published on 10/22/2007)
| <urn:uuid:d33b4aea-a020-407a-9145-88f5bb9c8cec> | 3.015625 | 104 | Q&A Forum | Science & Tech. | 52.48 |
As the European spacecraft Beagle 2 (named after Darwin's ship) heads for Mars, scientists are looking at the results of an earlier search for life on Mars, the NASA Viking landings in 1976. Former mission scientist Gil Levin says he has evidence proving that we really did find signs of life on Mars during that mission.
Biology experiments during the Viking mission detected strange signs of activity in the Martian soil that could have been made by microbes giving off gas. NASA found this hard to believe, so they carried out a search for the organic matter that could be producing the gas, but never found it, so they announced there's no life on Mars.
Levin was one of the three scientists taking part in the experiments, and he believes Viking did find living organisms in Martian soil. He continued the experiments on his own, in what he called LR (labeled release) work. He says, "The (NASA) organic analysis instrument was shown to be very insensitive, requiring millions of micro-organisms to detect any organic matter, versus the LR's demonstrated ability to detect as few as 50 micro-organisms." He says he has a new experiment that "could unambiguously settle the argument." However, both NASA and the European Space Agency refuse to do it. | <urn:uuid:90ead99e-3222-4575-8fac-7b6d3b267c6a> | 3.421875 | 268 | Comment Section | Science & Tech. | 45.259536 |
Wasps vs. Beetles: Battle Wages to Save Ash Trees
In the forests of Wisconsin, a quiet but fierce battle to save the state’s ash tree population has begun. In response to the infestation of invasive emerald ash borers, entomologists from the University of Wisconsin-Madison (UWM) have released hundreds of parasitic wasps to help rid the area’s forests of the destructive beetles.
Native to China, the emerald ash borer was first discovered in the United States in 2002, likely having been transported via shipping containers. Since then, the beetles have wreaked havoc on ash forests across the country by burrowing into the trees, where they eat the wood and hatch their larvae. After about two years of this, the trees finally die. The death toll of ash trees has climbed into the tens of millions.
While chemical treatments have proven to keep the beetles at bay in residential areas, greater steps must be taken to stop the infestation from spreading farther into forests. The UWM team of entomologists and researchers headed to China for a solution. There they discovered three types of parasitic wasps that are known to prey on the beetles. The team recently released hundreds of the wasps in isolated areas that have seen some of the greatest destruction from the beetles.
Though the team is optimistic about the wasps’ capabilities to slow or even stop the spread of the beetles, they aren’t quite certain what the long-term effects of releasing another invasive species into the wild will be. If all appears to be going well, the team plans to release another round of the beetle-hungry wasps later this summer. | <urn:uuid:6c4e8797-ac2c-41d1-b01f-4409478d8f70> | 3.453125 | 345 | Knowledge Article | Science & Tech. | 46.665587 |
The World Is Involved In Biomass Energy
The entire world is now involved in the production of Biomass Energy. Being able to take products that grow naturally in the environment and turn them into energy to supply the needs of the industrial world is exciting. Biomass is the nomenclature for organic materials such as plants, wood, municipal waste, and other products that have received solar energy from the sun.
The sun was recognized as a power source from the very beginning. Some ancient civilizations even worshiped it. Without this energy source everything on the planet would die. It was in 1901 that the first patent was recorded regarding a machine to capture its power.
In 1905 Albert Einstein wrote about the potential of electricity from sunlight, and in 1913 a patent was given for a solar cell. 1916 was the first time a person was able to produce electricity using the sun's rays. It wasn't until the 1950's that NASA became involved with using a solar energy platform in connection with spacecraft once they were in orbit.
Although connecting organic growth with the sun's energy had been discussed for years, it was not until the 1970's that much interest was shown outside of scientific circles. In 2000 the government became interested in the production of energy using biomass but it was not very fruitful. It was not until the world started being concerned about damage to the atmosphere from fossil fuel emissions that the idea of using this kind of energy became popular.
Transferring the solar energy stored in plants into energy for the earth's population has been the lifetime study of some scientists. Today it has the attention of the world. Worldwide conferences have been and are being held, and awards given. At the current time there have been 90 nominations, representing all continents, for the World Bioenergy Award 2010.
Seven individuals from different countries have been nominated to receive this award. They all made contributions to the development of bioenergy in many different areas. The countries and their work with biomass energy are interesting and show that work is being pursued everywhere to take advantage of this replacement of fossil fuels.
Brazil presented research on short-rotation eucalyptus utilizing high-density technologies; Finland reported moving the country from total dependency on fossil fuels in 1970 to currently using biomass for over a quarter of its energy; India reported the establishment of a research laboratory for biogas production from cattle waste; New Zealand worked on bioenergy research and pioneered bioenergy on a worldwide scale; the USA worked with an African project using liquid biofuel stoves; Sweden is developing a biogas industry; and Canada developed the transport of wood pellets to Europe for use as biofuel.
Every day a new idea is being presented to capture the solar energy present in natural resources and turn it to daily use by mankind. These resources are not only renewable but are available in one form or another over the entire planet. There is now serious involvement in Biomass Energy development, and the prospect of eventually using this source for all the world's needs seems to be a very good possibility. | <urn:uuid:4b5ec647-d6be-478c-b493-68f8a929f3d6> | 3.4375 | 605 | Knowledge Article | Science & Tech. | 36.788071 |
While the tight coupling of the PVM and perl's front-end does cause some problems, there is still a clearly defined intermediate representation (IR) that is generated by perl's front-end. This section discusses that IR.
The perl IR consists of a parse tree. In the context of perl, this IR is commonly referred as the ``OP-tree''. This OP-tree is an acyclic directed graph representing the flow of the Perl program. There are twelve different OP classes, and a total of 346 different OPs.
Each OP has a number of flags and options. These flags and options control the behavior of the OP. Care must be taken when using the IR, as these options and flags can often change the semantics of the OP-tree evaluation.
Certain types of OPs also have additional fields that might refer to other OPs, or to internal perl data structures. For example, the LISTOP, which is used to group other OPs into a list, contains a field for its child OPs. The SVOP, which is used to refer to a scalar variable, contains a field that points to perl's internal representation of that scalar variable.
The next two subsections contain examples of Perl programs, and their equivalent OP-trees.
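As an aside (my note, not part of the thesis), you can inspect the OP-tree that perl's front-end builds for a program with the B::Concise compiler backend that ships with perl:

# dump the OP-tree for a one-line program
perl -MO=Concise -e 'print $a + $b;'
# each line of the dump is one OP (e.g. add, print, gvsv) with its
# flags and its place in the execution order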
Copyright © 2000, 2001 Bradley M. Kuhn.
Verbatim copying and distribution of this entire thesis is permitted in any medium, provided this notice is preserved. | <urn:uuid:0d518044-68f8-488a-8048-8aa6390afa39> | 3 | 296 | Academic Writing | Software Dev. | 62.198317 |
Increase in Groundwater Use and Sea-Level Rise
As aquifers are pumped out around the world, the water ultimately makes it to the oceans.
Groundwater depletion will soon be as important a factor in contributing to sea-level rise as the melting of glaciers other than those in Greenland and Antarctica, scientists say.
That's because water pumped out of the ground for irrigation, industrial uses, and even drinking must go somewhere after it's used—and, whether it runs directly into streams and rivers or evaporates and falls elsewhere as rain, one likely place for it to end up is the ocean.
To find out how much of an effect this has on sea level, a team of Dutch scientists led by hydrologist Yoshihide Wada, a Ph.D. researcher at Utrecht University, divided the Earth's land surface into 31-by-31-mile (50-by-50 kilometer) squares on a grid to calculate present and future groundwater usage. To make the calculation as precise as possible, they used not only current groundwater-use statistics from each country, but also economic growth and development projections. They also took into account the impact of climate change on regional water needs, considering "all the major factors that contribute," Wada said.
Because aquifers can be refilled, the scientists also used climate, rainfall, and hydrological models to calculate the rate of groundwater recharge for each region. From this, they projected the net rate of groundwater depletion.
Indian man drives bullock team to turn water wheel, near Ranakapur, India via Shutterstock.
Read more at National Geographic.com. | <urn:uuid:f7d04256-851e-4089-b990-a7007182c2ac> | 3.5625 | 336 | Truncated | Science & Tech. | 38.813197 |
Wayne Stollings wrote:
They warrant investigation, but without some evidence of a mechanism the only uncertainty is whatever one wishes to believe. I can believe there is a Martian heat ray being used by a spaceship cloaked from detection and claim uncertainty because I have created it in my mind. There is no evidence to support that uncertainty, so in the realm of reality it does not exist yet. If and when there is evidence then the real uncertainty can exist.
plenty of evidence, though.
Like of a mechanism? Wait, the CERN experiments showed there was insufficient production even with the compound multipliers.
CO2 is a GHG, which is proven in experimentation.
GHGs retain energy and warms our planet, which is proven in many ways.
CO2 levels have risen dramatically over the last 150 years, which is proven by measurements
Humanity has released more sequestered CO2 than required for the increase in the atmosphere.
And I can do the same "proof" with the sun.
The sun's activity has correlated with temperatures in the past over various timescales (see Figure 1), and is known to impact temperatures through increases in solar radiation and a decrease in cloud cover from cosmic rays.
What? Where is the evidence of the mechanism causing that cloud decrease? Not a correlation, but evidence of the specific mechanism.
Increased Solar radiation warms the planet, which is a basic fact.
Solar activity has dramatically risen over the past 150 years, which is proven by proxies and observations. (See Figure 1)
Except for the recent period where there has been a decrease in solar activity and no corresponding decrease in temperature. Unless there is some other factor for positive temperature gain, such as GHGs, you have a problem.
The increase in solar activity will have a profound impact on the atmospheric processes, since in the past it was a powerful climatic driver. (See Figure 2)
(Figure 1) From Figure 6 of Scafetta and West 2007. The strong coherency between solar activity and temperature changes can be observed over the last 400 years.
Note the temperature starts to diverge after the start of the 20th century and is mainly above the solar curve for the final decade or so.
(Figure 2) From Figure 2 of Neff et al. 2001
The authors note that "The similarity between the smoothed δ18O and Δ14C time series, both in their general patterns and in the number of peaks, is extremely strong. Even millennial-scale trends and relative amplitudes correspond. Furthermore, the high-resolution interval between 7.9 and 8.3 kyr BP also reveals a close correspondence between the two curves. The parallel evolution of δ18O and Δ14C seems very unlikely to have occurred by chance. Rather, the high correlation provides solid evidence that both signals are responding to the same forcing. Variations of Δ14C were attributed to changes in the production rate in the stratosphere, induced by solar wind modulation of the cosmic ray flux. Maxima of 10Be concentrations in polar ice cores that are synchronous with maxima in Δ14C further reinforce this interpretation. The high resolution and dating precision of the δ18O record of H5 make it possible to perform a reliable frequency analysis. Spectral analyses of the untuned δ18O record are given in Fig. 4a and b. The δ18O results show statistically significant periodicities centred on 1,018, 226, 28, 10.7 and 9 years. Two broader sets of cycles are centred between 101-90 years and 35-26 years. These cycles are close to the periodicities of the tree-ring Δ14C record (206, 148, 126, 89, 26 and 10.4 years), which are assigned to solar modulation."
Natural temperature variations in that period of history should correlate very well. The problem is the recent period.
So in other words, what you have just done is presented a correlation and not causation, and a proof that CO2 is not causing zero warming. That is nice, but a better proof would be a proof that CO2 is causing most of the warming observed.
A correlation with an evidenced mechanism is evidence of causation in such a case. Unless you have a duplicate uninhabited planet to use as a control, that is as close as it can get.
They have been proven to absorb and re-radiate energy causing the retention of said energy as heat. There are various other lines of evidence to support this theory, such as warming more during night hours.
That is not a fingerprint of AGW due to greenhouse gases, as many things, including warmer oceans and urbanization, reduce the Diurnal Temperature Range.
Not in areas which are not coastal, not immediately impacted by coastal weather patterns, or are not urban.
Yes, it is. If it is insufficient to account for the observed nucleation, it is insufficient to account for any impact connected with that nucleation.
No, it does not say that GCRs do not cause a significant change in the nucleation rate in the troposphere or the boundary layer, so your logic is quite flawed.
It clearly states the measured nucleation rates were insufficient to account for the observed rates in the boundary layer. Thus, the observed nucleation rate and the impact from clouds formed by the observed nucleation rate cannot be attributed to the GCRs. There may be an impact, but not of the magnitude you have claimed. The logic is quite sound. | <urn:uuid:f9b229bd-f2a1-4d7c-96eb-2f97c53f1fb8> | 2.765625 | 1,176 | Comment Section | Science & Tech. | 48.158288 |
Search our database of handpicked sites
Looking for a great physics site? We've tracked down the very best and checked them for accuracy. Just fill out the fields below and we'll do the rest.
You searched for
We found 16 results on physics.org and 78 results in our database of sites (78 are Websites, 0 are Videos, and 0 are Experiments)
Search results on physics.org
Search results from our links database
A brief description of how Planet Hunting works. This site is part of Marshall Brain's HowStuffWorks.com. Description of how planets form. Definition of a planet.
An elementary quiz based on the position and orbits of the planets.
Read how spectroscopy is used to determine what makes up the atmosphere of planets
Details of how large planets have been detected around three other solar type stars. Understanding Doppler wobble aided by an exercise.
A selection of the "best images" of planets (minor planets etc., and instruments) from NASA.
Planet Science contains lots of great articles and activities in all areas of science plus some brilliant little games.
A really interesting data visualisation of all the planets Kepler has found, complete with information about their size, temperature, and distance and period of orbit.
Info on this type of star and links to related topics.
A site containing lots of images of the planets in the solar system and the machines used to take the pictures. This site also contains a comprehensive glossary of space terms.
Beginner's information on different types of star from University of Illinois astronomy department. Covers pulsars, supernovae, dwarf stars etc.
Showing 1 - 10 of 78 | <urn:uuid:3d31c393-a598-492a-9aa4-a722c0e1fd07> | 2.984375 | 349 | Content Listing | Science & Tech. | 51.415333 |
In the double-slit experiment, without which-path information available, the diffraction pattern is usually shown as an even function with respect to the displacement from the midpoint of the slits: something like sin(ay)/y. (This is the case in Feynman's lectures, and many others.) The question was posed as to whether this result is valid for spin-aligned electrons. The most direct reply stated that it is indeed valid.
This seems to contradict these UC Berkeley course notes, which state that if one of the paths is increased in length to make it out of phase with the other by 2π radians, then the interference pattern will go to zero, because the state function of the longer path will be multiplied by -1 (or exp(iπ)). It is stated in the notes that this effect was experimentally confirmed with neutrons (another fermion). This would not be the case for photons, which would constructively interfere.
If this is indeed correct, then wouldn't it also be true that if the electron source were symmetrically placed between the two slits, there would be places in the diffraction pattern that would differ from photons?
So who's right? The Berkeley professor or Feynman?
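One point worth separating (my note, not from the thread): a 2π phase from extra path length multiplies the amplitude by exp(i2π) = +1, while a 2π rotation of the spin (for example, precession in the field B0 mentioned below) multiplies a spin-1/2 state by -1:

$$U(\theta) = e^{-i\theta\,\hat{n}\cdot\vec{\sigma}/2}, \qquad U(2\pi) = -\mathbb{1},$$

so the two setups need not give the same interference pattern.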
Note: It appears the Berkeley notes refer to a magnetic field, B0, which may have something to do with it. | <urn:uuid:d019a59c-4489-44f4-8f1e-91ea8c19d81a> | 2.75 | 287 | Comment Section | Science & Tech. | 51.2695 |
Jan. 10, 2013 - Chris Martin has bred more than 3,000 hybrid fish in his time as a graduate student in evolution and ecology at UC Davis, a pursuit that has helped him create one of the most comprehensive snapshots of natural selection in the wild and demonstrated a key prediction in evolutionary biology.
"We can see a surprisingly complex snapshot of natural selection driving the evolution of new specialized species," said Martin, who with Professor Peter Wainwright published a paper on the topic in the Jan. 11, 2013, issue of the journal Science.
The "adaptive landscape" is very important for evolutionary biology, but rarely measured, Martin said. He's been fascinated with the concept since high school.
An adaptive landscape takes variable traits in an animal or plant, such as jaw size and shape, spreads them over a surface, and reveals peaks of success (what evolutionary scientists call fitness) where those traits become most effective, or adaptive.
It is a common and powerful idea that influences thinking about evolution. But while the concept is straightforward, it is much harder to map out such a landscape in the wild.
For example, about 50 species of pupfish are found across the Americas. The tiny fish, about an inch or so long, mostly eat algae on rocks and other detritus. Martin has been studying species found only in a few lakes on the island of San Salvador in the Bahamas, where some of the fish have evolved different-shaped jaws that allow them to feed on hard-shelled prey like snails or, in one case, to snatch scales off other fish.
In a paper published in 2011, Martin showed that these San Salvadoran fish are evolving at an explosively faster rate than other pupfish.
Martin brought some of the fish back to the lab at UC Davis and bred hybrids with fish with different types of jaws. He created about 3,000 hybrids in all, which were measured, photographed and tagged. Martin then took about 2,000 of the fish back to San Salvador.
"It was the craziest thing I've done," Martin said. "I was leaning on the stack of them in the middle of Miami airport."
Martin released the young fish into enclosures in the lakes of their grandparents. Three months later, he returned to check on the survivors and plotted them out on the adaptive landscape.
Most of the surviving fish were on an isolated peak adapted to a general style of feeding, with another peak representing fish adapted for eating hard-shelled prey. Competition between the fish had eliminated the fish whose jaws put them in the valleys between those peaks. The scale-eating fish did not survive.
The results explain why most pupfish species in America have pretty much the same diets, Martin said. The generalists are essentially stranded on their peak -- variants that get too far out fall into the valley and die out before they can make it to another peak.
"It's stabilizing selection," he said. An early burst of variation when fish entered a new environment with little competition could have allowed the shell-eaters and scale-eaters to evolve on San Salvador.
The work was supported by grants from the National Science Foundation.
- Christopher H. Martin, Peter C. Wainwright. Multiple Fitness Peaks on the Adaptive Landscape Drive Adaptive Radiation in the Wild. Science, 11 January 2013, Vol. 339, No. 6116, pp. 208-211. DOI: 10.1126/science.1227710
| <urn:uuid:3d6a4f30-3ad5-4082-bf07-07d516899908> | 3.609375 | 731 | Truncated | Science & Tech. | 54.751139 |
Often, a software project will have one or more central repositories, directory trees that contain source code, or derived files, or both. You can eliminate additional unnecessary rebuilds of files by having SCons use files from one or more code repositories to build files in your local build tree.
It's often useful to allow multiple programmers working on a project to build software from source files and/or derived files that are stored in a centrally-accessible repository, a directory copy of the source code tree. (Note that this is not the sort of repository maintained by a source code management system like BitKeeper, CVS, or Subversion.)
You use the Repository function to tell SCons to search one or more central code repositories (in order) for any source files and derived files that are not present in the local build tree:
env = Environment()
env.Program('hello.c')
Repository('/usr/repository1', '/usr/repository2')
Multiple calls to the Repository function will simply add repositories to the global list that SCons maintains, with the exception that SCons will automatically eliminate the current directory and any non-existent directories from the list. | <urn:uuid:e8e1baec-f604-42b7-b95e-be6f22e3410d> | 3.375 | 257 | Documentation | Software Dev. | 21.785135 |
Tantalum: the essentials
Tantalum is a greyish silver, heavy, and very hard metal. When pure, it is ductile and can be drawn into fine wire, which can be used as a filament for evaporating metals such as aluminium. Tantalum is almost completely immune to chemical attack at temperatures below 150°C, and is attacked only by hydrofluoric acid, acidic solutions containing the fluoride ion, and free sulphur trioxide. The element has a melting point exceeded only by tungsten and rhenium.
Tantalum: historical information
Tantalum was discovered in 1802 by Anders Gustaf Ekeberg, but many chemists thought niobium and tantalum were one and the same. Some felt that perhaps tantalum was an allotrope of niobium. Later, Rose, in 1844, and Marignac, in 1866, showed that niobic and tantalic acids were two different acids.
The first relatively pure ductile tantalum was produced by Werner von Bolton in 1903.
Tantalum: physical properties
Tantalum: orbital properties
Isolation: isolation of tantalum appears to be complicated. Tantalum minerals usually contain both niobium and tantalum. Since they are so similar chemically, it is difficult to separate them. Tantalum can be extracted from the ores by first fusing the ore with alkali, and then extracting the resultant mixture into hydrofluoric acid, HF. Current methodology involves the separation of tantalum from these acid solutions using a liquid-liquid extraction technique. In this process tantalum salts are extracted into the ketone MIBK (methyl isobutyl ketone, 4-methyl pentan-2-one). The niobium remains in the HF solution.
After conversion to the oxide, metallic tantalum can be made by reduction with sodium or carbon. Electrolysis of molten fluorides is also used.
| <urn:uuid:9cde50fe-0429-4800-adcc-c9bdc26a88e3> | 3.359375 | 437 | Knowledge Article | Science & Tech. | 29.158774 |
Simultaneity - Albert Einstein and the Theory of Relativity
Uploaded on May 5, 2007
Imagine two observers, one seated in the center of a speeding train car, and another standing on the platform as the train races by. As the center of the car passes the observer on the platform, he sees two bolts of lightning strike the car - one on the front, and one on the rear. The flashes of light from each strike reach him at the same time, so he concludes that the bolts were simultaneous, since he knows that the light from both strikes traveled the same distance at the same speed, the speed of light. He also predicts that his friend on the train will notice the front strike before the rear strike, because from his vantage point on the platform the train is moving to meet the flash from the front, and moving away from the flash from the rear.
But what does the passenger see? As her friend on the platform predicted, the passenger does notice the flash from the front before the flash from the rear. But her conclusion is very different. As Einstein showed, the speed of the flashes as measured in the reference frame of the train must also be the speed of light. So, because each light pulse travels the same distance from each end of the train to the passenger, and because both pulses must move at the same speed, she can only conclude one thing: if she sees the front strike first, it actually happened first.
Whose interpretation is correct - the observer on the platform, who claims that the strikes happened simultaneously, or the observer on the train, who claims that the front strike happened before the rear strike? Einstein tells us that both are correct, within their own frame of reference. This is a fundamental result of special relativity: From different reference frames, there can never be agreement on the simultaneity of events.
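A sketch of the underlying algebra (my note, not part of the original video description): for two events separated by Δt in time and Δx in space on the platform, the Lorentz transformation gives the separation in the train frame as

$$\Delta t' = \gamma\left(\Delta t - \frac{v\,\Delta x}{c^2}\right), \qquad \gamma = \frac{1}{\sqrt{1 - v^2/c^2}},$$

so events simultaneous on the platform (Δt = 0) but at different places (Δx ≠ 0) are not simultaneous on the train (Δt' ≠ 0).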
| <urn:uuid:6f3a5dd6-6c01-432c-b050-202cb176c930> | 3.78125 | 911 | Truncated | Science & Tech. | 42.570217 |