Using Satellites to Monitor Deformation: Radar Interferometry
Examples of Interferograms
Satellite-based technique captures overall deformation "picture"
For years, scientists dreamed of a "geodetic camera" capable of snapping a picture that would show in exquisite detail how much the ground near a volcano was moving. The dream has become reality: images of deforming volcanoes are being produced in breathtaking color from data acquired by spacecraft! For example, when the ground is uplifted by 10 cm (left, top sketch), satellite images of the area recorded before and after the uplift can be combined to generate a colorful pattern of fringes (left, bottom image). Each of the three fringes (from violet to red) represents a change in the satellite-to-ground distance of about 3 cm. Until recently, all of the techniques we used to measure volcano deformation (for example, electronic distance measurements, tiltmeters, and the Global Positioning System) were based on detecting changes at specific points on the ground surface. The amount and direction of movement of these points enabled us to piece together the overall pattern of deformation on a volcano. The situation is analogous to trying to discern the pattern of an assembled jigsaw puzzle after 99% of the pieces have been removed. By choosing the locations of benchmarks, tiltmeters, and GPS stations carefully, we can usually track recurring patterns of deformation reasonably well, especially over short periods of time (minutes to days). But we can never be sure that we are seeing the whole picture, or that we aren't missing small-scale deformation that slipped through the cracks, so to speak, of our monitoring networks. Under favorable conditions, satellite radar interferometry promises to show us the whole deformation picture.
Technique gains recognition after 1992 Landers earthquake in eastern California
About 10 years ago, a remarkable new technique for measuring ground deformation from Earth orbit burst on the scene with all the drama of a major earthquake. Using a series of radar images acquired by the European Space Agency's ERS satellites, Didier Massonnet and others (1993) produced a striking image of ground displacements caused by the magnitude 7.3 Landers earthquake, which struck about 150 km east of Los Angeles on 28 June 1992. Geodesists around the world were struck by the remarkable detail visible in the image, which resembled the displacement pattern predicted by theoretical models of such an earthquake. The pattern had never before been fully observed in the field, because conventional ways of measuring ground deformation were capable of filling in only a few pieces of the puzzle. Suddenly, all of the pieces fell into place and the race was on to apply the amazing new technique, called satellite radar interferometry, to other sources of ground deformation, including volcanoes. One important advantage of using radar rather than visible or infrared light to image the Earth's surface: radar waves penetrate most weather clouds and are equally effective in darkness. So our amazing geodetic camera can "see" through clouds and at night! Average displacement along the fault rupture (black line, above) was 3-4 m; maximum displacement was 6 m! More information about the Landers earthquake is available from the Southern California Earthquake Data Center. The interferogram shows that the deformation extended well beyond the immediate area of the surface rupture.
Each cycle of interference colors (red through blue) represents an additional 2.8 cm of ground motion in the direction of the satellite.
How satellite radar interferometry works
Return signal from satellite holds the key
The technical details of how and why radar interferometry works are rooted in physics and radar engineering, but for our purposes a much simpler explanation will suffice. A pulse of radar energy is successively emitted from a satellite (left), scattered by the Earth's surface, and recorded back at the satellite (right). The radar energy received by the satellite contains two important types of information.
Figures from T. Freeman, Jet Propulsion Laboratory
The first type of information is encoded in the strength or amplitude of the return signal, which is influenced by various physical properties of the surface, including ground slope, particle size (i.e., sand versus boulders), and soil moisture. The ERS satellites record return signal strength from a continuous swath of the Earth's surface about 100 km (60 mi) wide, and scientists on the ground assemble this information in the form of a radar image. The image is a portrayal of the surface that resembles a conventional photograph in some ways, but not entirely. Think of the difference between a conventional photo and an infrared image, which shows warm areas as bright irrespective of their brightness in visible light. Radar images differ from conventional photos in a similar way. The second type of information contained in the return radar signal has to do with the round-trip distance from the satellite to the ground and back again. We can think of a radar pulse as an invisible tape measure calibrated in units of the radar wavelength. We call the fractional part of the round-trip distance the phase of the return signal. For the ERS satellites, the radar wavelength is 5.66 cm (2.2 inches). If we were able to acquire two radar images at different times from exactly the same vantage point in space and compare them, any movement of the ground surface toward or away from the satellite would show up as a phase difference between the images. For example, if a point on the ground moved toward the satellite (mostly upward) by one-half wavelength, the phase of the return signal from that point would increase by one full wavelength relative to the first image. It isn't possible to steer a satellite accurately enough to return it to exactly the same point in space on different orbits, but it's relatively easy to get within a few hundred feet and then do the necessary geometric corrections.
Combining or "interfering" images from different satellite passes
It turns out that the most accurate way to measure small phase changes is to combine two images together after all of the necessary corrections have been made. This process is sometimes called "interfering" the images, because combining two waves causes them to either reinforce or cancel one another, depending on the relative phases. For example, you may have observed interference between two sources of water waves on a pond. Now imagine that we can keep track of all the places where two radar images reinforce one another, and all the places where they cancel one another. We'll represent the first case as a red pixel in a new image that we'll create, and the second case as a blue pixel. Intermediate cases will be represented as intermediate colors of the spectrum from red to blue. The resulting image is called an interferogram.
The properties of waves are such that we can't tell the difference between waves that reinforce one another because they are exactly in phase with one another, or out of phase by any number of whole wavelengths (1, 2, 3...). As a result, an interferogram for an area that domed upward during the time interval between two radar images would show a concentric pattern of color bands, called fringes, not unlike the contours on a topographic map (left). In this case, though, each fringe would represent just one-half wavelength of surface movement toward the satellite--nearly 3 cm for ERS (just over an inch). To determine the total amount of movement, we only have to count the number of fringes. Our geodetic camera is ready to track volcano deformation from space! Massonnet, D., Rossi, M., Carmona, C., Adragna, F., Peltzer, G., Feigl, K., and Rabaute, T., 1993, The displacement field of the Landers earthquake mapped by radar interferometry: Nature, v. 364, p. 138-142.
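As a quick illustration of the fringe counting described above, the short sketch below converts a fringe count into line-of-sight displacement using the ERS wavelength quoted in this article (one fringe per half wavelength). It is my own illustrative snippet, not part of the original article.

# Illustrative sketch: ground displacement from interferogram fringes (assumes ERS C-band)
ERS_WAVELENGTH_CM = 5.66                    # radar wavelength quoted in the article
CM_PER_FRINGE = ERS_WAVELENGTH_CM / 2.0     # each fringe = half a wavelength of motion

def line_of_sight_displacement_cm(fringe_count):
    """Total ground motion toward (or away from) the satellite, in centimetres."""
    return fringe_count * CM_PER_FRINGE

print(line_of_sight_displacement_cm(3))     # 3 fringes -> about 8.5 cm of motion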
<urn:uuid:f3dbf029-cb31-44ef-b3ce-d84240bccee3>
3.859375
1,620
Knowledge Article
Science & Tech.
36.488221
Visualizing the Electron Wind Force in Nanostructures Ellen D. Williams [This is an invited article based on a recently published work by the authors -- 2Physics.com] Authors: Chenggang Tao, William G. Cullen, and Ellen D. Williams Affiliation: Materials Research Science and Engineering Center & Department of Physics, University of Maryland, USA Link to the Williams Lab >> As electronic devices get smaller and smaller, they are more susceptible to effects of the charge carriers flowing through them. The charge carriers (electrons in metals) can push atoms around by collisions. For some specific types of atomic structures (for example, atomic “steps”, where the surface height changes by one layer of atoms), the scattering force is much stronger than had been thought . These structures are ubiquitous for the surfaces of solid materials, and this becomes very important for nanoscale electronics where surfaces make up a much bigger fraction of the material. W. G. Cullen (left) and Chenggang Tao (right) A very careful measurement is needed to directly observe the forces that electrons exert on the atoms of the material which they are passing through. Yet over a long time, the effects of this force accumulate and can lead to failure of wires which connect components in integrated circuits - a process known as electromigration [2, 3]. In our experiment, we carefully created different types of nanoscale structures on top of a very thin wire of silver. One type of structure consists of single-atom high “islands” that contain between 100 and 100,000 atoms. Another type consists of single-atom high “steps” decorated by C60 buckyballs. We then used a scanning tunneling microscope to watch the structures move or change shape when we ran current through the wire. Amazingly, when we changed the direction of the current, we could move the structures back and forth. The force exerted by the electrons on island edge atoms is up to 20 times larger than previous theoretical calculations had predicted. However, when we decorate the island edge with a chain of C60 molecules (which tend to mildly withdraw electrons locally from the silver atoms, and also change their local configuration) we find that the force is reduced by over a factor of 10. This indicates that the force is very dependent on the local environment of the atoms which comprise the step and island boundaries. Fig. 1 Schematic of the experimental setup; inset shows STM image of silver wire surface. The fundamentally interesting idea here is that all the different ways that electrons can move through the wire can be described by how easily an electron can be “transmitted”. Most atomic structures in a solid allow easy transmission, but the defect sites impede the transmission. This results in a local “resistivity dipole” which means that the defect sites have a local resistance. Our measurements detected the motion of atomic-scale surface structures which results from forces exerted by the passing electrons – as the atoms resist the electron flow, they in turn feel a larger “push” from the electrons. Fig. 2A-B: Island pushed by moving electrons. The current direction is downward, and the island displacement is upward. Here we have demonstrated that nanoscale surface structures can be moved (and even turned around) using the scattering force from electrons. Further, the scattering force can be significantly reduced by attaching C60 molecules to the structures. 
On the other hand, a particularly exciting implication is the use of this effect to move atoms around intentionally in nanoelectronic devices, or to harness it to do work . This effect might be used to self-assemble or to create structures that could be cycled through different structures under an alternating current. Our work was supported by the NSF Materials Research Science and Engineering Center at the University of Maryland, including the use of shared experimental facilities. Additional support was provided by the University of Maryland NanoCenter and the Center for Nanophysics and Advanced Materials. Chenggang Tao, W. G. Cullen, E. D. Williams, “Visualizing the electron scattering force in nanostructures”, Science 328, 736–740 (2010). Abstract. P. S. Ho and T. Kwok, “Electromigration in metals”, Reports on Progress in Physics, 52, 301 (1989). Abstract. H. Yasunaga and A. Natori, “Electromigration on semiconductor surfaces”, Surface Science Reports 15, 205 (1992). Abstract. D. Dundas, E. McEniry and T. N. Todorov, “Current-driven atomic waterwheels”, Nature Nanotechnology, 4, 99 (2009). Abstract.
<urn:uuid:00e3a91e-fc13-4019-a252-de787416eec2>
3.46875
981
Academic Writing
Science & Tech.
42.43353
Sea gooseberry (Pleurobrachia pileus)
Sea gooseberry description
Members of the phylum Ctenophora are known as sea-gooseberries or comb-jellies, and are startlingly beautiful marine invertebrates. They are commonly mistaken for jellyfish, but belong to their own group that is totally unrelated to jellyfish (3). Pleurobrachia pileus has a transparent spherical body bearing two feathery tentacles, which can be completely drawn back into special pouches. The name comb-jelly refers to the eight rows of hair-like cilia present on the body, which are known as comb-rows. The rhythmic beating of these cilia enables the animal to swim, and also refracts light, creating a multi-coloured shimmer (2).
Sea gooseberry biology
Despite their delicate, almost ghostly appearance, sea-gooseberries are voracious predators, feeding on fish eggs and larvae, molluscs, copepod crustaceans, and even other sea-gooseberries (5). Prey is caught by the long tentacles, which act as a net and bear adhesive cells known as colloblasts. The tentacles are then ‘reeled in’ and the prey is passed to the mouth (2). This species is hermaphroditic. Breeding occurs from spring to autumn; the eggs and sperm are released into the water and fertilisation therefore occurs externally. The larva, known as a ‘cydippid larva’, is free-swimming. Most individuals die following spawning. This species may be preyed upon by fish and other sea-gooseberries (2).
Sea gooseberry range
Sea gooseberry habitat
Sea gooseberry status
Not threatened (2).
Sea gooseberry threats
This species is not threatened.
Sea gooseberry conservation
Conservation action is not required for this common species.
Find out more
For more information on the sea gooseberry, see:
- Fish, J.D. and Fish, S. (1989) A Student's Guide to the Seashore. Unwin Hyman Ltd., London.
For further information on comb-jellies, see: Microscopy UK: Comb-jellies.
This information is awaiting authentication by a species expert, and will be updated as soon as possible. If you are able to help please contact:
Glossary
- Copepods: Large and diverse group of minute marine and freshwater crustaceans belonging to the subclass Copepoda. They usually have an elongated body and a forked tail.
- Crustaceans: Diverse group of arthropods (a phylum of animals with jointed limbs and a hard chitinous exoskeleton) characterised by the possession of two pairs of antennae, one pair of mandibles (parts of the mouthparts used for handling and processing food) and two pairs of maxillae (appendages used in eating, which are located behind the mandibles). Includes crabs, lobsters, shrimps, slaters, woodlice and barnacles.
- Hermaphroditic: Possessing both male and female sex organs.
- Larva: Stage in an animal's lifecycle after it hatches from the egg. Larvae are typically very different in appearance to adults; they are able to feed and move around but usually are unable to reproduce.
- Pelagic: Inhabits the open oceans.
References
- National Biodiversity Network Species Dictionary (September, 2003)
- Fish, J.D. and Fish, S. (1989) A Student's Guide to the Seashore. Unwin Hyman Ltd, London.
- Microscopy UK: Comb-jellies (November, 2003)
- Gibson, R., Hextall, B. and Rogers, A. (2001) Photographic Guide to the Sea and Shore Life of Britain and North-west Europe. Oxford University Press, Oxford.
- University of Bangor.
The Distribution and Abundance of Ctenophores in the Menai Straits and Eastern Irish Sea in Comparison to the Distribution and Abundance of Their Copepod Prey (November, 2003)
<urn:uuid:c607b231-e295-4a18-96e2-2a0c01bb586f>
3.640625
958
Knowledge Article
Science & Tech.
41.496301
Stony coral (Porites lutea)
Stony coral description
Porites corals form some of the largest of all coral colonies, with some reaching an incredible eight metres in height (3). The growth rate of Porites coral is very slow, perhaps only nine millimetres a year; these giant colonies may therefore be up to 1,000 years old, putting them among the oldest life forms on earth (3). Coral colonies are composed of many individual coral polyps, which are basically anemone-like animals that secrete a skeleton. The many polyps of a colony are joined together at the base of their skeletons (4). The colonies of Porites corals may form flat, branching, spherical or hemispherical structures; some hemispherical colonies may be over five metres across (4). The coral polyps possess tentacles which, in most species, are extended only at night, when they give the coral a furry appearance (3).
Stony coral biology
Like other reef-building corals, the polyps of Porites corals have microscopic algae (zooxanthellae) living within their tissues. Through photosynthesis, these symbiotic algae produce energy-rich molecules that the coral polyps can use as nutrition. In return, the coral provides the zooxanthellae with protection and access to sunlight (4). Porites colonies also commonly house a wide variety of other fauna (4). The majority of corals are hermaphrodite, and thus colonies possess both male and female reproductive organs. However, Porites corals have separate male and female colonies. With a few exceptions, fertilization is internal and therefore depends on free-swimming sperm from male colonies reaching the polyps of female colonies. The fertilised eggs then develop into larvae within the female polyp's body cavity (3). When released, the larvae settle quickly close to the parent colony. Whilst this means that, unlike spawning corals, the coral is not easily dispersed, brooding corals have the advantage of their young settling in an environment that has already proved suitable for successful reproduction (4). Most of the spherical and hemispherical Porites species are tolerant of sedimentary environments, partly because they protect themselves with a thick film of mucous (4).
Stony coral range
Occurs in the Indian and Pacific Oceans, from South Africa up to the Red Sea and across to southern Japan, northern Australia and central America (4).
Stony coral habitat
Porites corals can be found in a wide range of coral reef environments. Many Porites species are very common in shallow water, and most species are tolerant of areas where sediment accumulates (4).
Stony coral status
Stony coral threats
Porites corals face the many threats that are impacting coral reefs globally. It is estimated that 20 percent of the world's coral reefs have already been effectively destroyed and show no immediate prospects of recovery, and 24 percent of the world's reefs are under imminent risk of collapse due to human pressures. These human impacts include poor land management practices that are releasing more sediment, nutrients and pollutants into the oceans and stressing the fragile reef ecosystem. Overfishing has ‘knock-on’ effects that result in an increase of macro-algae that can out-compete and smother corals, and fishing using destructive methods physically devastates the reef. A further potential threat is the increase of coral bleaching events as a result of global climate change (5).
The predatory starfish Acanthaster planci, or ‘crown-of-thorns starfish’, feeds on a wide range of coral species. For little-understood reasons, outbreaks of this starfish occur at regular intervals, and large numbers of starfish can have devastating effects on the reef. They can eat so much that they kill most of the living coral in a region, from which the reef may take up to fifteen years to fully recover (6). Due to the exceptionally slow growth rate of Porites corals, these species may not be able to recover fully before the next starfish outbreak, and thus may be sent into a period of prolonged decline (7). An additional potential threat arises from collection for the coral trade. Porites is one of four genera that constitute the majority of the dead coral trade, for ornaments and jewellery. Live Porites are also collected at a lower level for the aquarium industry, and have previously been traded for biomedical purposes. This trade, which probably supplied a specialised market for the use of coral in bone grafts, peaked in 1992 but has since declined to extremely low levels (8).
Stony coral conservation
Porites corals are listed on Appendix II of the Convention on International Trade in Endangered Species (CITES), which means that trade in this species should be carefully regulated (2). Indonesia and Fiji have export quotas for Porites corals (2). Porites corals will form part of the marine community in many marine protected areas (MPAs), which offer coral reefs a degree of protection, and there are many calls from non-governmental organisations for larger MPAs to ensure the persistence of these unique and fascinating ecosystems (5).
Find out more
For further information on stony corals, see: Veron, J.E.N. (2000) Corals of the World. Vol. 3. Australian Institute of Marine Science, Townsville, Australia.
For further information on the conservation of coral reefs see:
This information is awaiting authentication by a species expert, and will be updated as soon as possible. If you are able to help please contact:
Glossary
- Colonial: Relating to corals: corals composed of numerous genetically identical individuals (also referred to as zooids or polyps), which are produced by budding and remain physiologically connected.
- Larvae: Relating to corals: the stages of development before settlement on the reef. Larvae are typically very different in appearance to adults; they are able to feed and move around but usually are unable to reproduce.
- Photosynthesis: Metabolic process characteristic of plants in which carbon dioxide is broken down, using energy from sunlight absorbed by the green pigment chlorophyll. Organic compounds are produced and oxygen is given off as a by-product.
- Polyp: Typically sedentary soft-bodied component of Cnidaria (corals, sea pens etc.), which comprises a trunk that is fixed at the base; the mouth is placed at the opposite end of the trunk, and is surrounded by tentacles.
- Spawning: Relating to corals: the release of eggs and sperm into the water, where fertilization takes place externally.
- Symbiotic: A relationship in which two organisms form a close association; the term is now usually used only for associations that benefit both organisms (a mutualism).
References
- IUCN Red List (October, 2009)
- CITES (October, 2009)
- Veron, J.E.N. (1986) Corals of Australia and the Indo-Pacific. Angus & Robertson Publishers, London, UK.
- Veron, J.E.N. (2000) Corals of the World. Vol. 3. Australian Institute of Marine Science, Townsville, Australia.
- Wilkinson, C. (2004) Status of Coral Reefs of the World.
Australian Institute of Marine Science, Townsville, Australia.
- Moran, P. (1997) Crown-of-Thorns Starfish: Questions and Answers. Australian Institute of Marine Science, Townsville, Australia.
- Done, T.J. (1987) Simulation of the effects of Acanthaster planci on the population structure of massive corals in the genus Porites: evidence of population resilience? Coral Reefs, 6: 75-90.
- Green, E. and Shirley, F. (1999) The Global Trade in Corals. World Conservation Press, Cambridge, UK.
<urn:uuid:0cc2cbb8-fe33-4b27-af7a-59f814d83baf>
3.796875
1,740
Knowledge Article
Science & Tech.
37.141273
Sensitivity of ecosystems
When an oil spill reaches the shoreline, or occurs very near the coast, the soiling and coating of surfaces in oil can have an impact on the populations of the intertidal zone and on the various human activities which take place by the sea. Marine birds and mammals are obvious victims, such as the numerous species of birds feeding on the foreshore at low tide and nesting on the seafront, or marine mammals resting on the shore. But the algae, fish and shellfish which live in coastal pools, on the rocks and in the sand or mud are also inevitably affected. Depending on the type of shoreline, the impact can range from relatively limited to, at the other end of the spectrum, extremely dramatic. The sensitivity of different substrates to oil varies considerably, from rocky coasts to pebble beaches, gravel, coarse-grained sand, fine-grained sand, marshland, coral reefs, and so on.
Epithelial tissue from the gills of a control fish (left) and an intoxicated fish (right). The intoxicated tissue shows a reduction in the thickness of the gill epithelium and the destruction of certain cells.
Rocky coast polluted by oil
Aerial view of a polluted rocky coast
<urn:uuid:163603a7-a7f9-4a0c-9549-49528ebd3fc9>
3.515625
256
Knowledge Article
Science & Tech.
33.592
Interviewee: Frederick Sanger. Frederick Sanger talks about the differences between sequencing proteins and sequencing DNA. (DNAi Location: Manipulation > Techniques > Sorting and sequencing > Interviews > Sequencing proteins and DNA)
You know, at first we tried to use the methods we'd used for proteins, and to some extent these worked. But it turned out to be a very different problem, because the proteins have twenty components, twenty amino acids, all different, whereas the nucleic acids have just the four components, the mononucleotides, and you have to work out sequencing on that. And the methods used for proteins were not generally applicable. For instance, the amino acids were all very different chemically, so you could work out methods for separating them, whereas the nucleic acids had only the four components, which were rather similar, and so you had to use quite different methods.
Two sequencing techniques were developed independently in the 1970s. The method developed by Fred Sanger used chemically altered "dideoxy" bases to terminate newly synthesized DNA fragments at specific bases (either A, C, T, or G). These fragments are then separated by size so that the sequence can be read. The sequencing method developed by Fred Sanger forms the basis of automated "cycle" sequencing reactions today. Fluorescent dyes are added to the reactions, and a laser within an automated DNA sequencing machine is used to analyze the DNA fragments produced.
<urn:uuid:fc31f59d-5eb7-4a39-8eb6-cb22ee4749ec>
3.234375
335
Audio Transcript
Science & Tech.
25.226318
Contact: Douglas Pierce-Price Caption: This artist's impression shows the disc of gas and cosmic dust around a brown dwarf. Rocky planets are thought to form through the random collision and sticking together of what are initially microscopic particles in the disc of material around a star. These tiny grains, known as cosmic dust, are similar to very fine soot or sand. Astronomers using the Atacama Large Millimeter/submillimeter Array (ALMA) have for the first time found that the outer region of a dusty disc encircling a brown dwarf -- a star-like object, but one too small to shine brightly like a star -- also contains millimeter-sized solid grains like those found in denser discs around newborn stars. The surprising finding challenges theories of how rocky, Earth-scale planets form, and suggests that rocky planets may be even more common in the Universe than expected. Credit: ALMA (ESO/NAOJ/NRAO)/M. Kornmesser (ESO) Usage Restrictions: None Related news release: Even brown dwarfs may grow rocky planets
<urn:uuid:ce5a7eb5-026f-4e9a-9fde-d450d54d4ef5>
3.796875
227
Knowledge Article
Science & Tech.
34.713508
You can also watch it on YouTube: Today we'll look at block closures in VA Smalltalk. To get started, you should browse class Block, and take a look at the hierarchy. Typically, when you create a block, it will be an instance of the first subclass. Blocks represent a deferred bit of code - a loose method, if you will. They encapsulate pre-compiled behavior that can be passed around and executed later, by using #value (or one of the variants that take arguments). Below is the code we'll be using to explore blocks:

"Blocks"
block1 := [10 + 1].
block2 := [:input | input + 1].
block3 := [:a :b :c :d :e :f | a, b, c, d, e, f, ' - concatenated'].

val1 := block1 value.           "11"
val2 := block2 value: 1.        "2"
val3 := block3 valueWithArguments: #('one ' 'two ' 'three ' 'four' ' five' ' six').
val3 := block3
    valueWithArguments: #('one ' 'two ' 'three ' 'four' ' five' ' six')
    onReturnDo: [:returnVal | Transcript show: 'Answer was: ', returnVal].

To create a block, simply encapsulate the desired code in square brackets. As you can see above, using the [:arg1 :arg2 | ] notation, you can specify arguments to the block. To execute, you use:
- #value - no arguments
- #value: - one argument (up to three arguments with #value:value:value:)
- #valueWithArguments: - passing an array of arguments
You can also specify an action block to execute when the block returns, and this block can (but does not have to) take one argument - the return result from the first block. Blocks, like methods, return the result of the last expression executed. To see that last part in action, try executing the last statement above - you should see something like the following in the Transcript: Answer was: one two three four five six - concatenated. Just try executing each line in the code above, inspecting or displaying the results - make sure you understand how each one of them works, then try a few examples of your own. Need more help? There's a screencast for other topics like this which you may want to watch. Questions? Try the "Chat with James" Google gadget over in the sidebar. [st4u183-iPhone.m4v ( Size: 7305084 )]
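One more place where blocks shine is the standard collection protocol. The few lines below are my own illustrative additions, not from the original post or screencast; they use common ANSI Smalltalk collection messages that VA Smalltalk also understands:

"Illustrative only - passing blocks to collection methods"
squares := #(1 2 3 4) collect: [:each | each * each].               "#(1 4 9 16)"
evens := #(1 2 3 4) select: [:each | each even].                     "#(2 4)"
sum := #(1 2 3 4) inject: 0 into: [:total :each | total + each].     "10"

Each of these messages simply sends #value: (or #value:value:) to the block you hand it for every element, which is exactly the deferred-execution idea described above.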
<urn:uuid:7135934d-b8cb-44b2-87b4-e131eda49eb6>
3.4375
569
Tutorial
Software Dev.
60.866998
Lab 6: Fourier Analysis
A steady musical tone from an instrument or a voice has, in most cases, quite a complicated wave shape. The oscillations repeat themselves f times a second, where f is called the fundamental frequency. We have learned that f is related to the pitch of the tone. Tones played on different instruments sound different — musicians say that the tones have different timbre or different tone color. How does one describe wave shape? In Fourier Analysis we represent the complex wave shape as a sum of sine waves (or a sum of "partials"), each of a different amplitude. If the wave shape is periodic, the frequencies of the partials are multiples of the fundamental frequency and are called the "harmonics" of the tone being played. If the frequency of the musical tone is, for example, 200 Hz, the fundamental (also called the "first harmonic") has a frequency of 200 Hz; the second harmonic (also called the first overtone) has a frequency of 400 Hz; the third harmonic (or second overtone) has a frequency of 600 Hz; and so on. Many musical instruments, including voices, have ten or more overtones. A Fourier Analyzer is a device that tells us how much of the various overtones are present in the sound that is being analyzed, i.e. it calculates and displays a graph of the amplitude and the frequency of the various harmonics. Expressed in popular terms, the Fourier Analyzer gives the "voice print" or the "sound spectrum" of any periodic wave shape you feed into it. Our Fourier synthesizer produces a fundamental mode of a given frequency and higher harmonics. The amplitude and phase of each of these waves can be adjusted. Extra features of the synthesizer are in the right column. Study the regions on the Fourier synthesizer. An oscilloscope display at the top allows you to inspect the synthesized wave shape. Below that, the amplitude of each harmonic can be adjusted between 0 and 1 and the phases can range from -180° to +180° shift. To listen to changes in the tone quality, you use a small speaker or headphones.
1. Two Sine Waves of the Same Frequency
You will notice that the Fourier synthesizer has the ability to save a waveform and to show the current waveform (red) along with the saved waveform (blue) and a superposition of the two (green). This gives us the opportunity to study the wave shape that results when two waves are added — the question of superposition. The simplest case adds two sine waves of the same frequency but different phase and different amplitude. A point to remember when you are adding two sine waves of the same frequency is that the result of the superposition will depend on the relative phase of the two components being added. If they are in phase, the resultant sine wave will have large amplitude (the maximal resultant amplitude we can get). If the two superposed sine waves are out of phase, though, the resultant will have smaller amplitude. Exactly how much smaller depends upon exactly how much out of phase the waves are.
- Click the Sine button at the top of the right column and then the Save Waveform button near the bottom. Make sure the checkboxes for Show Saved Waveform and Show Superposition are both checked. Vary the amplitude and phase of the current waveform. Listen to the sound.
- What conclusion do you reach about the wave shape when two sine waves of the same frequency are added?
- Can you get the two waves to cancel one another?
2. Building a Square Wave from Sine Waves
The next part of the game is to build a square wave by adding harmonics.
Look at it as a puzzle.
- Uncheck the checkboxes for Show Saved Waveform and Show Superposition to display only the current wave on the oscilloscope. Start with the fundamental (Harmonic 1). Rather than just playing with the many different harmonics in the hope of making a square wave by luck, it is better to draw a big square wave in your notebook and next draw in the fundamental sine wave, such that it resembles the square wave as closely as possible.
- Next ask yourself what higher harmonic should be added to get closer to the square wave — would the second or the third harmonic do a better job, and how should their phases be adjusted compared to the fundamental? One thing to keep in mind is that the square wave is mirror symmetric about an axis (can you point it out?), and the waves you use to build up the square wave should be mirror symmetric too.
- Try it out! The oscilloscope pictures in Figure 2 of your Lab Notes might give you some clues. Use the mouse to adjust levels first to draw a rough square wave. Then, positioning the mouse over the amplitude you would like to adjust, use the up/down arrows (hold down the shift/ctrl keys for finer adjustments) to tune the components.
- Once you've created a decent square wave, use the table of the components to try to figure out the pattern of harmonics that creates a square wave.
3. Does One Hear Phase?
Checking the sound box allows you to hear the waves as you change their properties. Using this, answer the following questions.
- Add two or more sine waves (for instance, harmonics 1, 2, and 3) of similar amplitudes.
- Change the phase of one of them.
- Does the wave shape change?
- Does the sound you hear change?
- Does the Fourier spectrum of the tone change?
This experiment shows why the Fourier spectrum is more useful for specifying the tone "color" than the wave shape itself.
1. Fourier Analysis of Sine Waves
In this part of the lab, we will analyze preset functions and also the signal picked up by a microphone when you sing a steady tone. To learn how to use the equipment, the preset functions are more convenient than your voice because they produce a steady output whose frequency we can set accurately. We can also use the Fourier synthesizer to produce complex waveforms.
- Click the Sine wave function button in the right column.
- Look at the Fourier spectrum of the sine wave (amplitudes section below the waveform). Does it look like it should? (Note: Since it is a simple sine wave, there has to be just one component in the Fourier spectrum.) Read off the frequency and the amplitude of this component.
2. Fourier Spectrum of the Square Wave
- Now, click the Square wave function button. The Fourier spectrum of this square wave is displayed.
- From the screen, measure the amplitude and frequency of each harmonic and write them down in a table in your notebook. Can you observe some regularity in the amplitudes and frequencies of the harmonics? Recall what you did in the first part of the lab, where you generated the square wave with the help of the synthesizer.
3. Fourier Analysis of Your Voice
This is done with a different program. Download this applet and run it on your computer. Be sure a microphone is connected to the input of your computer.
- Sing a steady tone into the microphone, for instance a tone like "aah" in father. Watch the signal in the upper half of the screen and the Fourier spectrum of the tone in the lower half.
Since it is hard to sustain the sound for the length of time needed to measure the spectrum, you can make use of another feature of the software. By clicking anywhere in the Fourier analysis window, you can freeze or release the waveform and frequency spectrum. You can then make measurements at your own pace. You may have to right-click (ctrl-click with one-button mice) to bring up a popup menu in order to zoom in on the chosen part of the Fourier spectrum to analyze the data. Using the cursor, try to figure out the fundamental frequency of your voice. (Note: The fundamental is not always the first peak in the Fourier spectrum, nor is it always the highest! The oral cavity might amplify some overtones more than it does the fundamental. The fundamental frequency is determined by the rate of oscillations in your vocal cords, but only those overtones that are amplified by the oral cavity produce audible sound. Therefore, use the fact that if the harmonics are all multiples of the fundamental, they have to be equally spaced, with the spacing in frequency between them being equal to the fundamental! Remember, we use vocal cords, that is vocal strings, to produce sound.)
- A frequency range amplified by your oral cavity is called a "formant." In what frequency range is your formant when you sing "aah"?
- Change the pitch of the "aah" and observe the change in the Fourier spectrum. Does the fundamental frequency change? Does the formant region change? To more easily observe the difference between two spectra, you can choose to save a waveform from the popup menu.
- Ask your TA to explain (once again) what timbre is!
- A more elaborate study of your voice, such as the analysis of different vowels, like "eeh" or "ooh," can be used to find out what patterns of overtones make one vowel different from another.
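If you would like to check the harmonic pattern you discovered in the square-wave part of the lab numerically, the short sketch below (an illustrative aside, not part of the lab handout) builds an approximate square wave from odd harmonics whose amplitudes fall off as 1/n:

import numpy as np

fundamental_hz = 200.0                              # example fundamental frequency
t = np.linspace(0.0, 2.0 / fundamental_hz, 2000)    # two periods of the tone

square = np.zeros_like(t)
for n in range(1, 20, 2):                           # odd harmonics only: 1, 3, 5, ...
    square += (1.0 / n) * np.sin(2.0 * np.pi * n * fundamental_hz * t)

# 'square' now approximates a square wave; adding more odd harmonics sharpens the
# corners, mirroring what you should see on the synthesizer's oscilloscope display.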
<urn:uuid:2710f5a7-7937-4f4b-bd32-d9d96502c11d>
4.5625
1,959
Tutorial
Science & Tech.
51.312151
How do holographic images work? Visa cards are printed with little holographic doves as forgery protection, and I've seen similar holographic images printed on things no thicker than a piece of construction paper. Soon there will be chocolate bars with holographic decorations etched on the surface (this according to Scientific American). How are these little holographic pictures made and how do they fool the eye into seeing depth where there really is none? As is often the case with technical subjects, Susannah, we are presented with an unfortunate choice: an explanation that is accurate but incomprehensible, or comprehensible but wrong. Being a journalist and therefore shameless, we naturally opt for the latter. What follows is the Ollie North explanation of holography — it might get you past a congressional committee, but don't try it on your Ph.D. board. A reflection hologram, the kind found on a credit card, is a high tech version of those plastic novelty pictures we used to buy at the dimestore — the kind where the image changes when you tilt it. The hologram's surface is an emulsion that can be thought of as consisting of many tiny facets, each containing a fraction of a larger image. As you look at the hologram you see a set of facets that together constitutes one perspective of the holographed scene. As you tilt the hologram, a different set of facets comes into view showing the scene from a slightly different perspective. The changing perspective creates the illusion of three dimensions. Simple, no? OK, now for a Jack Anderson-like expose of the many lies and omissions in the preceding. L&O #1. There aren't really any tiny facets. Actually what you've got is a set of quasihyperboloidal interference fringes. Interference fringes reflect a percentage of the light that strikes them. Amounts to the same thing as tiny facets, but they look a lot different and from the standpoint of conceptual grabbiness they're strictly from hunger. L&O #2. The change of perspective isn't the only thing that creates the 3-D effect. There's also parallax shift. Your eyes, being two inches apart, look at the scene from slightly different angles, and thus see two different sets of "facets." Your brain combines the two images to create one scene with the illusion of depth, just as with a stereoscopic viewer. L&O #3. I didn't tell you anything about lasers, wavefronts, or coherent light. Do I hear anybody complaining? I didn't think so. However, for those who must know, lasers are essential to creating holograms because they're the only known way to create the requisite interference fringes. Memorize the preceding sentence so you can mutter it next time some would-be expert (e.g., your precocious eight-year-old) starts quizzing you on the subject. We may not explain everything in this column, but we give you enough to get by.
<urn:uuid:a0471925-06a0-4c6c-b9e4-b2d6ef6200bd>
3.3125
622
Personal Blog
Science & Tech.
52.770659
Many beginners associate REST urls with a url that carries some identifier for the resource, even if the identifier is passed as a url parameter. Here the url seems to identify a resource (product) with an identifier (productid) of 1, for example: /product?productid=1. However, technically this is not a RESTful url. The REST way of looking at resources says that the web is a web of information resources on which you can take standard actions. HTTP is a good example of REST, with resources identified by urls and the standard actions on those resources being GET, POST, PUT, and DELETE. The most predominant operations in HTTP are to get the resource representation using GET and to make updates to it using POST. A resource-oriented RESTful url must serve as a unique identifier without additional parameters, meaning the url itself should contain the identifier, something like this: /product/1. Another important distinction is that the url must not include any verb, only nouns, because the verbs (the actions that can be performed) are already defined as standard actions. For example, have a look at a url such as /getproducts/1. Some frameworks might use this kind of url to map to a method "getproducts" in the controller and pass it the identifier 1. However, this cannot be taken as a true resource identifier, because it includes a part, "getproducts", which has nothing to do with identifying a resource. So in essence a RESTful url should be a resource-oriented url which uniquely identifies a resource.
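To make the contrast concrete, here is a small hypothetical sketch (the framework choice, route names and data are my own illustration, not from the article) in which the url carries only nouns and an identifier, and the standard HTTP actions carry the verbs:

from flask import Flask, jsonify, request

app = Flask(__name__)
products = {1: {"name": "widget"}}   # toy in-memory store, illustrative only

# GET /products/1 -> fetch the representation of product 1
@app.route("/products/<int:product_id>", methods=["GET"])
def get_product(product_id):
    return jsonify(products.get(product_id, {}))

# PUT /products/1 -> update product 1; the url stays the same, only the verb changes
@app.route("/products/<int:product_id>", methods=["PUT"])
def update_product(product_id):
    products[product_id] = request.get_json()
    return jsonify(products[product_id])

A verb-style url such as /getproducts/1 merely duplicates what the GET method already expresses, which is exactly the distinction drawn above.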
<urn:uuid:ff93e5ed-d2ec-4b8f-9738-4780566a9db8>
2.921875
303
Tutorial
Software Dev.
41.009364
Let's try and understand why we should care more about this in the first place. This requires a little bit of historical explanation. We all know Moore's Law. It says the number of transistors on integrated circuits doubles approximately every two years. The prediction has been working pretty well since the late '70s. A couple of things started changing around the middle of the last decade (around 2005):
- Moore's law started failing, since hardware makers hit a limit on how far they could keep increasing the clock speed, and,
- Secondly, there has been an exponential growth of data from around that time.
The first point will help us understand our current topic of discussion, and the second leads to another interesting discussion, about the rising trajectory of the NoSQL landscape and what the problem was with traditional RDBMS. Will discuss the second part later.
…Unable to increase the clock speed, companies like Intel started adding multiple cores to the same machine. And we got Dual Core, Quad Core … machines. Now, the existing languages, like C, C++ and Java, are designed to use threads to handle concurrency. Parallelism is different from concurrency. Simply put, concurrency is how we handle multiple request-response cycles, and parallelism is sharing a large CPU-intensive piece of work across multiple processors. There is a further problem: even with the threaded model, it's difficult to write thread-safe code that keeps working over time. Livelocks and deadlocks become part of the daily affair of maintaining a large application written with threaded code. This is one of the reasons I like Node.js so much; concurrency is handled by the event loop and you don't have to worry about locking and synchronization. Mutability becomes a nightmare when you have to share your mutable state. So, if something does not change and you share it, you do not have to protect it, which means you don't have to worry about safety and synchronization when you share immutable data. This is one of the great aspects of functional programming: immutability. This is what makes your code able to run on multiple processors. It's not free, but it's trivial to make your code run on multiple cores. Generally it's achieved with immutable collections of objects in Scala. Look at this code in Scala:
val list = (1 to 100000).toList
list.map(_ + 42)
To make the operation run in parallel, one must simply invoke the par method on the sequential collection, list. awesome !!
list.par.map(_ + 42)
Another important aspect of FP is that functions are first class; they are not second-class citizens like in C++ or Java. You can treat them as any other variable. Functions are pure: they exhibit idempotent behavior, are side-effect free, and can be higher-order. You can pass a function to a function and you can return a function. Closures are very much derived from this. You take an object and transform it to something else, you don't change it. Monads !! Scala harnesses all the power of functional programming and combines it with Object Oriented Programming. It's a JVM language and fully interoperable with Java libraries. SPARK is written in Scala, and what Scala does to your code on a multi-core machine, SPARK does across machines in a cluster. Parallelism !!
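Since the post mentions passing and returning functions without showing it, here is a tiny illustrative Scala sketch of my own (not from the original article):

// Functions as values: stored, passed in, and returned (illustrative only)
val plus42: Int => Int = n => n + 42                       // a function held in a val
def applyTwice(f: Int => Int, x: Int): Int = f(f(x))        // takes a function as an argument
applyTwice(plus42, 0)                                       // 84

def scaleBy(factor: Int): Int => Int = n => n * factor      // returns a closure over factor
val triple = scaleBy(3)
triple(14)                                                  // 42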
<urn:uuid:9c31957b-2f0e-4b67-af44-e596e983b46c>
3.09375
716
Personal Blog
Software Dev.
54.775543
Let the Earth do its own lifting
We hear a lot about what we can do to look after the earth, but clearly not enough about how the earth looks after itself. The media have started using the great idea and word "resilience", but they don't know what it means (it was my profession, but few there be that practice and teach it now). This neglected and complex vantage point (it's easier to subdivide a problem than to see it "in context") is critical to the future of our world, because the earth can do the job (say, recycling of water and nutrients) a lot more efficiently than we can. How different is our anthropomorphic view from that of our forefathers, really? This reality of limited perspective is not widely talked about by engineers, who are used to methodically building from the bottom up (which is fine) and who have been seeking (politically) more control in environmental matters for decades. (The problem for the poor is that our engineering solutions are often simply too expensive too!) Newcomers to conservation are also prone to take a control mode that doesn't work. Nature may be subtle and even slow, but it can be, and often is, more substantial (solving ten problems and not just one) and more efficient. The point is, you have to study this to know this! This post is a stub I will finish later, but trawl through the site for examples for now. I have done a lot of soil and water engineering, but I think planning with the environment is a more important focus for all innovative and thinking peoples. Some of the most efficient waste water engineering, for example, is best seen as involving soils and natural soil processes such as infiltration, runoff and reprocessing by the millions of bacteria in soils.
<urn:uuid:45e00efc-a1fa-46e8-950e-1f7ee7817e09>
2.828125
361
Personal Blog
Science & Tech.
46.911173
This species is widespread and common throughout its range.
- Bombus terrestris and lucorum (Laura Brodie, University of Aberdeen, www.bumblebee.org)
- Importation of Non-Native Bumble Bees into North America: Potential Consequences of Using Bombus terrestris and Other Non-Native Bumble Bees for Greenhouse Crop Pollination in Canada, Mexico, and the United States (K. Winter, L. Adams, R. Thorp, D. Inouye, L. Day, J. Ascher, and S. Buchmann, North American Pollinator Protection Campaign, August 2006)
- Buff-tailed bumble bee - Bombus terrestris - Family: Apidae (Natural England)
<urn:uuid:96cd1853-0491-47a0-8013-dbf4de165cf6>
3.03125
163
Knowledge Article
Science & Tech.
40.394226
Utah Monitor of 'Cold Fusion' Casts Doubt on Its Validity
The New York Times, March 29, 1990
A physicist at the University of Utah says a highly publicized ''cold fusion'' experiment at the school failed to produce any evidence of a nuclear reaction when he monitored it. The scientist, Michael H. Salamon, said last fall that he had been unable to find gamma rays when he monitored the experiment last May and June in the laboratory of Dr. B. Stanley Pons, a chemist at the university. Dr. Salamon gave details of his measurements in a paper being published today in the British journal Nature. ''We did not see a peep,'' said Dr. Salamon, who measured the nuclear output of cold fusion gear for five weeks. ''There was not an iota, not a sniff, of conventional fusion occurring,'' he said. ''We saw no neutrons or gamma rays that could be attributed to a fusion process.'' His findings appear to be another blow to the already widely questioned announcement last March by Dr. Pons and Dr. Martin Fleischmann of the University of Southampton in England that they had achieved nuclear fusion at room temperature in a jar of water. The report raised hopes of a revolutionary new source of energy from nuclear fusion, which powers the sun. ''It's another nail in the coffin,'' said Ronald Parker, director of the plasma fusion center at Massachusetts Institute of Technology. ''They did a very careful search for fusion effects and they came up empty.'' In Salt Lake City yesterday, Dr. Pons said most of the allegations in the paper were not true. Dr. Pons said the physicists had ignored energy cells that were producing large amounts of heat and instead had monitored cells that were making only low amounts. He also said the monitoring equipment had been placed at an angle and had missed evidence of nuclear activity. ''They were embarrassed that a chemist had fallen into a nuclear reaction so simply,'' Dr. Pons said. ''Their outside colleagues were putting tremendous pressure on them.'' Dr. Pons also accused Nature of trying to undermine his work by publishing negative studies while ignoring supporting evidence. Fritz G. Will, director of the state-financed National Cold Fusion Institute at the University of Utah, said small changes in experimental conditions, including humidity, could affect whether or not Dr. Pons's fusion cells produce heat. At the time Dr. Salamon checked for signs of fusion, Dr. Will said, ''experimental conditions prevailing in those experiments were not suitable to finding the phenomenon.''
<urn:uuid:d7829260-3f42-4c3a-a750-88098ea616e2>
2.8125
673
Truncated
Science & Tech.
45.917051
Practise your skills of proportional reasoning with this interactive haemocytometer.
How many generations would link an evolutionist to a very distant ancestor?
Work with numbers big and small to estimate and calculate various quantities in biological contexts.
Analyse these beautiful biological images and attempt to rank them in size order.
Work with numbers big and small to estimate and calculate various quantities in biological contexts.
When a habitat changes, what happens to the food chain? Is this eco-system sustainable?
How would you go about estimating populations of dolphins?
Maths is everywhere in the world! Take a look at these images. What mathematics can you see?
How does shape relate to function in the natural world?
Could nanotechnology be used to see if an artery is blocked? Or is this just science fiction?
Do you know which birds are regular visitors where you live?
A problem about genetics and the transmission of disease.
What biological growth processes can you fit to these graphs?
What is the chance I will have a son who looks like me?
Simple models which help us to investigate how epidemics grow and die out.
Build a mini eco-system, and collect and interpret data on how well the plants grow under different conditions.
<urn:uuid:6d46620e-9421-422b-9561-5c1fdde2b16b>
3.453125
257
Content Listing
Science & Tech.
46.602929
Hot Water and Hurricanes What is the ocean's role in powering hurricanes? Why do some storms experience rapid intensification, a rapid increase of wind speeds, as they move over warm waters? Why do the storms become weaker once they move onto land? In this lab, you'll use Hurricane Katrina as a case study to explore where the power to fuel its winds and rains came from. You'll look at visualizations of sea surface temperature and sea surface height to understand how this energy is available to hurricanes. After completing this investigation, you should be able to: - calculate the amount of heat energy absorbed by a given volume of water as its temperature changes; - interpret sea surface temperature images and animations to identify warm water ocean currents; - use image processing software to apply different color tables to an animation of the Loop Current; and - interpret image data that show various measures of heat in the Gulf of Mexico before and after Hurricane Katrina; and - access and interpret current Tropical Cyclone Heat Potential images. Keeping Track of What You LearnThroughout these labs, you will find two kinds of questions. - Checking In questions are intended to keep you engaged and focused on key concepts and to allow you to periodically check if the material is making sense. These questions are often accompanied by hints or answers to let you know if you are on the right track. - Stop and Think questions are intended to help your teacher assess your understanding of the key concepts and skills you should be learning from the lab activities and readings.
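Before starting, it may help to see what the first learning objective above amounts to in practice. The short sketch below is an illustrative example of my own (not part of the lab materials); the seawater density and specific heat values used are assumed typical figures:

# Heat absorbed by a volume of seawater as its temperature changes: Q = m * c * dT
def heat_absorbed_joules(volume_m3, delta_temp_c,
                         density_kg_per_m3=1025.0,          # assumed typical seawater density
                         specific_heat_j_per_kg_c=3850.0):  # assumed typical value for seawater
    mass_kg = volume_m3 * density_kg_per_m3
    return mass_kg * specific_heat_j_per_kg_c * delta_temp_c

# Example: one cubic kilometre of surface water warming by 1 degree C
print(heat_absorbed_joules(volume_m3=1.0e9, delta_temp_c=1.0))   # roughly 4e15 joules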
<urn:uuid:5d8968f7-d787-415e-a6ad-c49ad5e464e9>
4.0625
308
Tutorial
Science & Tech.
30.672842
The Glory of Geckoes
Posted by Richard Conniff on September 24, 2012
When I first held the Giant Leaf-tailed gecko (U. fimbriatus) in my hand after catching it in the rainforest of northern Madagascar, it felt as if I were holding a living, breathing beanie baby. It was the size of a small puppy, and its skin was velvet-soft and warm. The gecko’s hands grasped my fingers the way a newborn holds its parent’s finger – softly but firmly at the same time. Having this animal sit in my hand was one of the most pleasant tactile experiences of my life. Giant leaf-tailed gecko (Uroplatus fimbriatus), proudly displaying its mouth full of teeth (Photo by Piotr Naskrecki) Of course, leaf-tailed geckos are not sweet, fuzzy toys – like all geckos, they are efficient killing machines, predators capable of catching and swallowing remarkably large prey. The smiley face of the gecko hides incredibly sharp teeth, and lots of them. Leaf-tailed geckos have the highest number of teeth of any amniote (a group that includes most terrestrial vertebrates) – their lower jaw can carry 97-148 teeth, while the upper jaw holds between 112 and 169. That’s, potentially, 317 teeth! To put it in perspective, other geckos have between 100 and 180 teeth, while our puny human jaws carry only 32. Leaf-tailed geckos are masters of camouflage, and this is one of the reasons why so little is known about their biology (Photo by Piotr Naskrecki) Why so many? Nobody knows for sure, because virtually nothing is known about their feeding behavior in the wild. In captivity they will eat almost anything that moves, but in their native habitat, the wet and humid forests of Madagascar, they may be targeting frogs, which are a remarkably species-rich and abundant group in that part of the world. A huge number of small, sharp teeth is likely to help hold such slippery prey. A higher than usual number of teeth may also be very useful in capturing moths, whose scale-covered bodies are as difficult to grasp as those of wet amphibians.
Every few hundred thousand years Earth's magnetic field dwindles almost to nothing for perhaps a century, then gradually reappears with the north and south poles flipped. This phenomenon is known as a magnetic field reversal. Such reversals happen at intervals ranging from tens of thousands to many millions of years, with an average interval of approximately 250,000 years. The last reversal is believed to have occurred some 780,000 years ago and is referred to as the Brunhes-Matuyama reversal. The magnetic field deflects particle storms and cosmic rays from the sun, as well as even more energetic subatomic particles from deep space. Without magnetic protection, these particles would strike Earth's atmosphere, eroding the already beleaguered ozone layer. At present, the overall geomagnetic field is becoming weaker at a rate which would, if it continues, cause the field to disappear, albeit temporarily, by about 3000-4000 AD. The rapid deterioration began at least 150 years ago and has accelerated in the past several years, with a total decrease of 10-15% over these 150 years.
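As a rough check of that timescale, a simple linear extrapolation of the quoted figures (10-15% lost over roughly 150 years) lands in the same range. The sketch below assumes a constant rate of decay, which is only an illustration; the real field does not decay linearly.

# Crude linear extrapolation of the figures quoted above: a 10-15% decrease in
# field strength over the last ~150 years. Illustration only.
START_YEAR = 1850          # roughly "at least 150 years ago"
YEARS_ELAPSED = 150

for fraction_lost in (0.10, 0.15):
    rate_per_year = fraction_lost / YEARS_ELAPSED    # fraction of the field lost per year
    years_to_zero = 1.0 / rate_per_year              # time for 100% loss at that rate
    print(f"{fraction_lost:.0%} lost: field reaches zero around AD {START_YEAR + years_to_zero:.0f}")
# Prints roughly AD 3350 and AD 2850, broadly consistent with the 3000-4000 AD estimate above.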
Experts in RNA Probes
An RNA probe is RNA, usually prepared by transcription from cloned DNA, which complements a specific mRNA or DNA sequence. It is generally used for studies of virus genes, the distribution of specific RNA in tissues and cells, the integration of viral DNA into genomes, transcription, and so on. Whereas DNA probes are preferred for use at a more macroscopic level, for detection of the presence of DNA/RNA from specific species or subspecies, RNA probes are preferred for genetic studies. Conventional labels for RNA probes include the radioisotopes 32P and 125I and the chemical label biotin. RNA probes may be further divided by category into plus-sense RNA probes, minus-sense RNA probes, and antisense RNA probes.
Warty tree frog (Scinax boulengeri), Costa Rica. © Piotr Naskrecki
Scientists believe that ancient amphibians were the first vertebrates to leave the water and colonize our planet's shores millions of years ago. Modern-day amphibians – frogs, toads, newts, salamanders and caecilians – also constitute an important link between terrestrial and aquatic ecosystems, perpetuating nutrient cycling in their environments. They may be a food source for larger animals, but amphibians themselves also control pests, including insects known to be vectors of human diseases like malaria. Their moist, permeable skin makes amphibians particularly susceptible to environmental changes; they are consequently considered to be good indicators of ecosystem health. In recent years, habitat loss, widespread disease and climate change have caused drastic crashes in amphibian populations across the globe. IN DEPTH: Global Search for 'lost' frogs yields few findings, important warnings Here are just a few of the many fascinating amphibian species that have not been seen for over a decade. Some may be lost forever, while others may still exist, hidden under rocks in a remote stream, waiting to be rediscovered.
HTTPS SERVER FOR USER PASSWORD CHANGE
This server provides a safe and friendly way for users to change their password from a web browser. The server is simply a front end to commands or scripts that perform the real password change. It can be used with commands like passwd, yppasswd, smbpasswd, ldappasswd, vncpasswd, ...
Why is this useful
This server was designed for environments where it is not easy to persuade users to log in to a Linux server and run a command to change their passwords. One case where this is useful is when MS-Windows users have home directories on Samba servers but don't log in to the domain. In this situation some clients don't provide a way for users to change their passwords on the Samba server. This service also makes it possible for users to change their passwords from anywhere on the internet.
The server acts as follows:
- Send the form to the client (web browser).
- When a POST is received (fields "username"; "password"; "newpass1"; "newpass2"), the PAM user authentication is checked using the "username" and "password" fields.
- If authentication is accepted, the server UID/GID are changed to match the user and the external commands are executed (in a pseudo-terminal) to change the user password.
Requirements: OPENSSL, PAM, others(?)
- Untar passwdd.tgz and go to the directory passwdd just created.
- Run "make" (sorry: no configure available for now).
- Run "make install"; this will generate a 512-bit RSA key and the certificate, and you will be prompted for some local data. Then several files will be installed:
- /usr/local/sbin/passwdd (the server binary)
- /usr/local/etc/passwdd.conf (the server configuration file)
- /usr/local/etc/passwdd.prikey (RSA private key)
- /usr/local/etc/passwdd.cert (RSA public key certificate)
- /usr/local/etc/passwdd_form.html (the form to be presented to the user)
- /usr/local/etc/passwdd_ok.html (html page saying the password was changed)
- /usr/local/etc/passwdd_ko.html (html page saying the operation failed)
- /usr/local/etc/passwd.gif (sample icon)
- Configure /usr/local/etc/passwdd.conf (see below).
- Make the server available, either in standalone mode or using inetd/xinetd:
- STANDALONE: run "/usr/local/sbin/passwdd -D"; later you will place this in a startup script like "rc.local".
- INETD/XINETD: configure inetd/xinetd/services to run the command "/usr/local/sbin/passwdd".
- Now you can use a web browser to test the service. The server sends messages to the system logger so you can see what is going on.
Command line options
passwdd [-D] [-C filename]
-D - run in standalone mode (in background); the default is to run in inetd/xinetd mode.
-C filename - use configuration file "filename"; the default is /usr/local/etc/passwdd.conf
The sample configuration file has some comments about the available options. All options must start in the first column and are upper case:
- PORT number - defines the decimal port number to be used when the service is run in standalone mode. Defaults to the standard https port (443).
- PAM string - PAM service name for user authentication. Defaults to system-auth.
- FORM filename - html file with the form to be presented to users; the form must use the POST method and must contain fields named "username", "password", "newpass1" and "newpass2". Default file is /usr/local/etc/passwdd_form.html.
- OK filename - html file to be presented when the operation is successful. Default is /usr/local/etc/passwdd_ok.html.
- KO filename - html file to be presented when the operation fails.
Default is /usr/local/etc/passwdd_ko.html.
- SRC filename - makes the file "filename" (full path required) available to the browser. All filenames will be available at the root of the server (no path). Up to 100 SRC options may be used. Default is no SRC options.
- MINLEN value - sets the minimal password length accepted. Default is 6.
- MINUID value - users with UIDs below this value can't change their passwords. Default is 100.
- Options related to external commands: these options must be placed in the correct order (command sequence) and have no defaults. The first option is always COMMAND command-filename and will make the server run the named command (full path required, arguments allowed). The next options deal with the command output and input:
- ASKUSER string - wait for the command to print the string as a prompt for the username, then send the username to the command.
- ASKPASSWD string - wait for the command to print the string as a prompt for the current password, then send the password to the command.
- ASKNEWPASSWD string - wait for the command to print the string as a prompt for the new password, then send the new password to the command.
- SAYSUCCESS string - wait for the command to print the string which means the command was successful. This option terminates (closes) a COMMAND sequence.
The command-filename to be used and the string arguments are up to you: you must check what the command prompts for and match those prompts with the ASKUSER/ASKPASSWD/ASKNEWPASSWD options. Finally you must check the output of the command on success and match that with the SAYSUCCESS option. All matches are case sensitive and may be partial; the string argument may be a sub-string of the command output.
Changing multiple passwords
You can use multiple command sequences; in that case they will be performed in the order specified. With multiple command sequences the operation is considered a success only if all command sequences succeed. This may lead to some inconsistency: if the first command is successful and the second fails, the user will be told the operation failed even though the password related to the first command has changed. For now, if you require this usage you should place the commands that fail more often first.
Changing the HTML files to meet your preferences
All 3 html files can be changed at will. Be careful with the form file: it must have a form using the POST method and containing 4 fields named "username", "password", "newpass1" and "newpass2". The html files can have images and references to other documents that may be provided by this server if the SRC option is used.
- The only files with a static location are the private key and the public certificate; this will be fixed in the next release.
- Implementation of an automatic undo for the situation where multiple passwords are changed and the first commands are successful but then one fails.
- Create a configure script.
- In the current version, whatever goes wrong, the user will always get the same message (this is the safer way). Possibly in some situations the user should get other messages.
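Once the server is running, the POST interface described above can also be exercised from a script instead of a browser. The sketch below is only an illustration: the host name is a placeholder, and pointing the verify option at the server's self-signed certificate is an assumption about your installation.

# Minimal sketch: submit the password-change form programmatically.
# "passwd.example.org" is a placeholder host; the field names are the four
# fields the server requires in the POSTed form.
import requests

resp = requests.post(
    "https://passwd.example.org/",
    data={
        "username": "jdoe",
        "password": "old-secret",
        "newpass1": "new-secret",
        "newpass2": "new-secret",
    },
    verify="/usr/local/etc/passwdd.cert",   # trust the server's self-signed certificate
)
print(resp.status_code)
print(resp.text)   # body is passwdd_ok.html on success, passwdd_ko.html on failure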
An abiotic factor is any of a number of the non-living components of a habitat. Abiotic factors can be grouped into the categories of meteorology, soil, air pollution, micro-topographic features, water availability and water quality. In terms of meteorological factors, the primary abiotic factors can be construed to be temperature, precipitation, wind velocity, solar insolation and humidity. It should be noted that statistical variation and seasonal variation of these basic parameters can be important elements of the habitat description, as well as the temporal correlation of these variables; for example, for certain amphibian species, it is not only the average annual rainfall which is important to reproductive success, but especially the timing of rainfall that occurs in the breeding season or the rainfall that occurs within the temperature optima for breeding. In addition, the timing of pond thawing can also be significant. Edaphic or soil factors include variables such as soil granularity, soil chemistry and nutrient content, as well as nutrient availability. These factors are made more complex in that there may be interactions of the appropriate concentrations of minerals or nutrients with the timing of precipitation; furthermore, the vertical profile of soil chemistry can also be significant. Air pollution factors can be significant for both plants and animals. In the case of fauna, the presence of such gases as carbon monoxide and sulfur dioxide can lead to degradation of circulatory or pulmonary function, and for high and prolonged concentrations, even death. In the case of vascular plants, air pollutants can enter stomatal openings; upon such penetration, many air pollutants can impair metabolic function, particularly photosynthesis. Air pollutants can also damage leaf, stem and flowering structures. For lichens the interference with metabolism can be even more pronounced, since most chemical uptake is via air; in the case of certain lichens, there is little tolerance for excesses of air pollutant concentration. The sensitivity of a lichen to air pollution is directly related to the energy needs of the mycobiont, so that the stronger the dependency of the mycobiont on the photobiont, the more sensitive that lichen species is to air pollutants. See main article: Meteorology Meteorological factors can strongly influence the functioning of an ecosystem. Even though large-scale processes of the atmosphere involve interactions with the Earth's crust, oceans and outer space, microscale meteorology is an inherent part of any terrestrial or aquatic ecosystem. The chief meteorological parameters that comprise abiotic factors of ecosystems are temperature, sunlight, wind velocity, barometric pressure, humidity and the gradients and interactions of each variable, as well as their temporal variability. Meteorology and hydrology compose the interdisciplinary field of hydrometeorology, which is also a significant set of abiotic factors. Meteorological abiotic factors may be simply the prevailing climatic features that define an ecosystem's atmospheric abiotic character; in some cases, the meteorological factors may be episodic or even catastrophic events that define major transformations of an ecosystem. Examples of such abiotic upheavals are windthrow from hurricanes and tornadoes; torrential floods that scour and uproot large amounts of vegetative cover; and prolonged drought which may alter the plant association and animal ecology.
See main article: Soil In a broad sense soil should be considered to be not only the mineral components commonly deemed inherent in the name, but also air, water and even dead organic material; however, since air and water are treated as separate abiotic factors in this treatment, only the mineral and geometric aspects of soil will be discussed here. Moreover, the dead organic material can be considered as a component of the soil, since it is not living, even though organic in origin. Soil may thus be considered as a complex variety of mineral particles plus the dead organic matter. It is also important to consider the voids within soil as a property of the soil, since the packing density and shape of soil particles affect the resulting characteristics of water and plant root penetration, as well as the hosting of organisms from micro-organisms to large animal burrows. Technically soils can be as impermeable as solid non-porous rock, or as highly pervious as coarse sand. The granularity of soils is generally a function of the weathering of the local earth crust over geologic time, as well as the depositional history of fluvial, marine and aeolian processes. The resulting soil permeability plays an important role in determining the plant palette that can adapt to a given habitat. Loosely packed or highly pervious soils generally are poor in near-surface water retention, but effective in encouraging downward percolation of water, with the result of enhancing local groundwater basins and thus sustaining water supply in the wider basin. Such loose soils are also hospitable to root penetration and thus plant growth, provided that rainfall or runoff is sufficient to supply the needs of the plants in an environment of marginal surface soil water retention. The coarsest such soils are gravels, which may be quite ineffective in supporting plant growth if there are no intervening finer soil particles. In the opposite extreme of dense, closely packed soils, one may see such examples as solid rock or hardpan clay. Such extremes in impervious soils are not supportive of plant life, since downward percolation of water is impeded and high surface runoff is encouraged. On the other hand, hardpan soils may foster longer-term retention of surface waters that manage to accumulate, in the case of level terrain, micro-depressions or high-precipitation environments. An example of such a specialized habitat is the vernal pool, which manifests strong seasonal variation in plant growth and encourages flora specialized to this dramatic variation in surface soil moisture. The presence of large quantities of dead organic material typically enhances the water retention of surface soils, and generally provides more hospitable growing environments for plants, as well as water storage for many faunal species. In any case the soil texture and dead organic content play a key role in determining not only the plant association but also the life support system for animals in a given habitat. See main article: Air pollution The term air pollution is applied here, since the majority components of air (nitrogen, oxygen, and carbon dioxide) do not typically have great variability over large spatial regimes and hence are not important habitat determinants.
On the other hand, sulfur dioxide, carbon monoxide, reactive hydrocarbons, oxides of nitrogen, heavy metals, particulate matter and other man-produced chemicals have considerable spatial variability and hence can play a key role in determining the outcome of plant association and faunal fitness in a given habitat. Certain chemicals such as sulfur dioxide have potent adverse impacts upon both vegetative metabolism and animal health. Commonly occurring localized levels of sulfur dioxide from man-made sources can readily reduce plant productivity by about 30 to 50 percent, and they can severely affect respiration function, metabolism and mortality of many faunal species, including humans. Many molecular gases can actually enter the stomatal openings of plants and directly interfere with photosynthesis; particulate matter, on the other hand, can clog stomatal openings and reduce the gross intake of carbon dioxide by plants. In the case of heavy metals, the pathway of impact to the ecosystem is typically deposition to soils and subsequent uptake by plant roots. Cadmium, for example, is highly toxic to most plants at concentrations as low as three parts per billion in soils. The presence of heavy metals in soils thus can severely inhibit plant development and effectively reduce the diversity of the plant palette. Correspondingly, herbivores and carnivores higher in the food chain can concentrate such trace elements, with generally adverse results for animal fitness and reproduction. See main article: Topography Micro-topography can have important influences upon habitat definition, both as to adaptations of plants and animals, as well as bacteria and other organisms. With respect to plant life, topography interacts with meteorology in producing a variety of wind shear, turbulence, and thermocline effects that can influence plant growth and even plant selection for a given habitat. Topography interacts with soil type by influencing the ratio of surface runoff to downward percolation following precipitation; in fact, micro-topography shapes the fundamental ponding that leads to surface water retention and vernal pool formation, factors significant in determining plant viability and selection. With respect to animal life, topography influences the suitability of habitat for burrows, for nests, for hiding from predators (and conversely for stalking by predators) and for transport efficiency with respect to animal movement capability (speed and traction). As nesting examples, certain birds have a clear preference for cliffside nesting sites, requiring extreme verticality in micro and macro topography; puffins and many penguins have a slope preference for their burrows, a preference that is somewhat dependent on the exact soil type. As an example of movement restriction, migrating salamanders have a maximum slope tolerance for micro-topography, which is also soil type dependent. Besides the consideration of water introduced by precipitation or condensation, the availability of water throughout a habitat is a fundamental determinant of which plants and animals can adapt locally. The chief water availability parameters are controlled by soil and topography, but deserve discussion as a separate topic due to the importance of water availability. Thus the subject of water availability can be defined by the issues of surface runoff characteristics, downward percolation, water retention in the upper soils, evapotranspiration, groundwater flows and surface water characteristics.
At an operational level the key factors are the pathways of water availability to plants and the storage/timing of water availability to animals. The extreme conditions of aridity or flooding define special habitats, which are respectively deserts or wetlands. See main article: Water pollution As important as water availability is the quality of water within a habitat. This topic embraces not only the concentrations of chemicals present in natural water systems, but also human-introduced chemicals. Significant naturally occurring constituents include nutrients and trace minerals used in organism metabolism; among nutrients, nitrate, phosphate and potassium are some of the most fundamental ions taken up by plants and animals. With regard to man-produced pollutants, some of the chief components are petroleum hydrocarbons, pesticides, herbicides and heavy metals. Trace minerals that are often important to metabolic function include zinc, magnesium and iron; each trace mineral that is beneficial to organisms can be classed as a pollutant if human-produced discharges to the environment accumulate to a high level. In many cases heavy metals and complex organic materials may accumulate in plant and animal tissue, subsequent to uptake from the environment. Organisms adapted to high aquatic mineralization conditions are examples of extremophiles; brine shrimp (genus Artemia) and brine flies (genus Ephydra) are genera with such specialized habitat requirements. Lake Urmia in Iran and Mono Lake in California are locations where such hypersaline lake habitats are found. One of the earliest mathematical models addressing chemical dissolution in runoff and resulting transport was developed in the early 1970s under contract to the United States Environmental Protection Agency (EPA). This computer model formed the basis of mitigation research that led to strategies for subsequent land use and chemical handling controls. See main article: pH pH is actually a component of water quality, but it is sufficiently important to be treated as a separate parameter within the abiotic factors. The pH of water is a measure of the concentration of hydrogen ions (H+) in an aqueous solution; it is also used to measure the acidity or alkalinity of soil. The lower case p in pH stands for "power of", with H being the symbol for the element hydrogen. Mathematically, pH is the negative logarithm of the molar concentration of hydrogen ions in a solution; for example, a solution with a hydrogen ion concentration of 10⁻⁵ mol/L has a pH of 5. For chemists, the term hydronium ion (H3O+) is often substituted for hydrogen ion. Water undergoes dissociation into hydrogen ions (H+) and hydroxyl ions (OH-). When the concentrations of these two ions are equal, the solution is considered neutral. If the concentration of hydrogen ions is larger than the concentration of hydroxyl ions, the solution is acidic. If hydroxyl ions are in greater concentration, the solution is considered alkaline or basic. Pure water at room temperature will have a neutral pH of 7.00. Values of pH below 7.00 are found in acidic solutions, while values above 7.00 characterize basic solutions. Each organism has an optimum range of pH tolerance, since pH governs most basic metabolic processes within cells, as well as molecular transport through cell walls. The pH of a solution controls mineral solubility, the solubility and structure of organic molecules, and protein structure.
- C. Michael Hogan. 2008. Rough-skinned Newt (Taricha granulosa). Globaltwitcher, ed. N. Stromberg.
- I. H. Beltman, L. J. de Kok, P. J. K. Kuiper and P. R. van Hasselt. 1980. Fatty acid composition and chlorophyll content of epiphytic lichens and a possible relation to their sensitivity to air pollution. Oikos 35 (3): 321–26.
- David John Hoffman. 2003. Handbook of Ecotoxicology. CRC Press. 1290 pages.
- S. W. Buol. 2003. Soil Genesis and Classification. Wiley-Blackwell. 494 pages.
- Shashi Bhushan Agrawal and Madhoolika Agrawal. 2000. Environmental Pollution and Plant Responses. CRC Press. 393 pages.
- H. N. Verma. 2006. Air Pollution and Its Impacts on Plant Growth. New India Publishing. 249 pages.
- Joseph A. Chapman and John E. C. Flux. 1990. Rabbits, Hares and Pikas: Status Survey and Conservation Action Plan. IUCN. 168 pages.
- Hans Lambers and Francis Stuart Chapin. 2008. Plant Physiological Ecology. 604 pages.
- C. M. Hogan, Leda Patmore, Gary Latshaw, Harry Seidman et al. 1973. Computer Modeling of Pesticide Transport in Soil for Five Instrumented Watersheds. United States Environmental Protection Agency Southeast Water Laboratory, Athens, Ga., by ESL Inc., Sunnyvale, California.
- Peter Hague Nye and Philip Bernard Tinker. 1977. Solute Movement in the Soil-Root System. University of California Press. 342 pages.
The study attempts to determine the adsorption characteristics of bed sediments of rivers for the control of metal pollution. In particular, it looks at the adsorption of zinc ions on bed sediments of the river Ganga at Hardwar. In the natural conditions of river water, suspended loads and sediments have an important function of buffering higher metal concentrations in the water, particularly by adsorption or precipitation. The effect of various operating variables, viz. solution pH, sediment dose, contact time and particle size, on the adsorption of zinc has been studied. The optimum contact time needed to reach equilibrium is of the order of 60 minutes and is independent of the initial concentration of zinc ions. The adsorption curves are smooth and continuous, leading to saturation, suggesting the presence of monolayer coverage of zinc ions on the surface of the adsorbent. The extent of adsorption increases with an increase of pH. Furthermore, the adsorption of zinc increases with increasing adsorbent dose and decreases with adsorbent particle size. The important geochemical phases, iron and manganese oxides, act as the active support material for the adsorption of zinc ions. The adsorption data have been analysed with the help of the Langmuir and Freundlich adsorption models to determine the mechanistic parameters associated with the adsorption process. An attempt has also been made to obtain thermodynamic parameters of the process, viz. free energy change, enthalpy change and entropy change. The negative values of free energy change indicate the spontaneous nature of the adsorption of zinc on the bed sediments, and the positive values of enthalpy change suggest the endothermic nature of the adsorption process. The study of the adsorptive properties of the sediments can provide valuable information relating to the tolerance of the system to the added heavy metal load.
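As a generic illustration of the kind of isotherm analysis mentioned above (this is not the report's data or code), the Langmuir model q_e = q_max * K * C_e / (1 + K * C_e) can be fitted to equilibrium measurements in a few lines; the concentrations and uptakes below are invented for demonstration only.

# Minimal sketch: fitting a Langmuir isotherm to hypothetical equilibrium data.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(C_e, q_max, K):
    """Equilibrium uptake q_e (mg/g) as a function of equilibrium concentration C_e (mg/L)."""
    return q_max * K * C_e / (1.0 + K * C_e)

C_e = np.array([2.0, 5.0, 10.0, 20.0, 40.0])   # hypothetical residual Zn, mg/L
q_e = np.array([1.8, 3.6, 5.2, 6.4, 7.1])      # hypothetical uptake, mg/g

(q_max, K), _ = curve_fit(langmuir, C_e, q_e, p0=[8.0, 0.1])
print(f"q_max = {q_max:.2f} mg/g, K = {K:.3f} L/mg")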
Command File Syntax
Epsilon's command files appear in a human-readable format, so you can easily modify them. Parentheses surround each command. Inside the parentheses appear a command name, and optionally one or more arguments. The command can be one of several special commands described in the next section, or most any EEL subroutine. See the next section for details. Each argument can be either a number, a string, or a key list (a special type of string). Spaces separate one argument from the next; each command is thus a parenthesized command name followed by its arguments. You can include comments in a command file by putting a semicolon or hash sign ("#") anywhere an opening parenthesis may appear. Such a comment extends to the end of the line. You cannot put a comment inside a string. For numbers, you can include bases using a prefix of "0x" for hexadecimal, "0o" for octal, or "0b" for binary, or use an EEL-style character constant. A few commands take key lists as arguments; for these you can also use a special key syntax. In addition to the above command syntax with commands inside parentheses, command files may contain lines that define variables, macros, key tables or bindings. Epsilon understands all the different types of lines generated by the list-all, list-customizations, import-customizations, and similar commands, and uses these same formats when it records customizations in your customization file. Besides listing variables, macros, key tables, and bindings, the above commands also create lines that report that a particular command or subroutine written in Epsilon's EEL extension language exists. These lines give the name, but not the definition, because command files can't define EEL functions. When Epsilon sees a line like that, it makes sure that a command or subroutine with the given name exists. If not, it reports an error. Epsilon does the same thing with variables that have complicated types (pointers or structures, for example).
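As an informal illustration of the parenthesized format described above (this is not Epsilon or EEL code, and the command name in the example is made up), a toy reader in Python might look like this:

# Toy sketch of reading "(command arg1 arg2 ...)" lines as described above.
# Not Epsilon code; the command name in the example call is hypothetical.
import shlex

def parse_command_line(line: str):
    line = line.strip()
    if not line or line[0] in ";#":                     # comment or blank line
        return None
    if not (line.startswith("(") and line.endswith(")")):
        raise ValueError(f"not a parenthesized command: {line!r}")
    tokens = shlex.split(line[1:-1])                    # honours double-quoted strings
    name, args = tokens[0], tokens[1:]
    return name, args                                   # numbers and bases left as strings here

print(parse_command_line('(set-some-variable "a string argument" 0x10)'))
# -> ('set-some-variable', ['a string argument', '0x10'])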
When IDL starts, it sets the values of a variety of system variables. System variables are a special class of predefined variables that are available to all IDL program units; they are described in detail in System Variables. The values of some system variables can be specified by the user when IDL starts, either via operating system environment variables or via preferences specified within the IDL Development Environment. In order to set these system variables, IDL examines a number of environment variables and preferences when it starts up. The process used to set environment variables varies depending on the operating system you are using. On UNIX systems, environment variables are generally specified in a file read by your shell program at startup. Syntax for setting environment variables varies depending on the shell you are using, as does the file you use to specify the variables. If you are unsure how to set environment variables on your system, consult the system documentation or a system administrator. For example, to set the environment variable IDL_PATH to the value /usr/local/idl when using a C shell (csh), you would add the following line to your shell startup file:
setenv IDL_PATH /usr/local/idl
Similarly, to set the same variable when using a Bourne shell (sh), you would add the following line:
IDL_PATH="/usr/local/idl" ; export IDL_PATH
On Microsoft Windows systems, environment variables are set in the Environment Variables dialog, which is accessible from the System Control panel. Some Windows versions allow you to set environment variables either only for the user you logged in as ("user variables") or for all users ("system variables") - setting IDL environment variables as user variables means that other users who log on to the computer will not have access to your environment variable values. The following environment variables are checked on all platforms. IDL uses the value of the $HOME environment variable when storing user-specific information in the local file system. On Windows, IDL uses the USERPROFILE environment variable for this purpose (typically C:\Documents and Settings\username, where username is the login name of the current user). If USERPROFILE is not set, IDL uses the value of the first of the following it finds: the TEMP environment variable, the TMP environment variable, the Windows system directory. Set this environment variable to a value greater than 0 to specify the number of threads IDL should use in thread pool computations instead of defaulting to the number of CPUs present in the underlying hardware. This defines the number of threads used by IDL when thread pool usage is not otherwise specified. Setting the CPU procedure's TPOOL_NTHREADS keyword, or routine-specific thread pool keywords at the time of execution, overrides this environment variable setting. !CPU provides details on the state of the system processor and of IDL's use of it. Multithreading in IDL provides information on situations when limiting the number of threads used by IDL may be beneficial. Set this environment variable equal to the name of the default IDL graphics device. Setting this value is the same as setting the value of the corresponding IDL system variable. Set this environment variable equal to the path to the main IDL directory. Setting this value is the same as setting the value of the corresponding IDL system variable. Set this environment variable equal to the path to the directory or directories containing IDL dynamically loadable modules.
At startup, IDL uses the value of this environment variable, if it exists, to initialize the corresponding IDL system variable. Set this environment variable equal to the path to the directory or directories containing IDL help files. At startup, IDL uses the value of this environment variable, if it exists, to initialize the corresponding IDL system variable. Set this environment variable equal to the path to the directory or directories containing IDL library (.sav) files. At startup, IDL uses the value of this environment variable, if it exists, to initialize the corresponding IDL system variable; include <IDL_DEFAULT> in the path you specify if you want IDL's default libraries to be included in the resulting path. Create this environment variable to disable IDL's path caching mechanism. The existence of this variable is sufficient to disable path caching; the specific value of the variable is unimportant. Set this environment variable equal to the path to an IDL batch file that contains a series of IDL statements which are executed each time IDL is run. See Startup Files for further details. IDL, and code written in the IDL language, sometimes need to create temporary files. The location where these files should be created is highly system-dependent, and local user conventions are often different from standard practice. By default, IDL selects a reasonable location based on operating system and vendor conventions. Set the IDL_TMPDIR environment variable to override this choice and explicitly specify the location for temporary files. The GETENV system function handles IDL_TMPDIR as a special case, and can be used by code written in IDL to obtain the temporary file location. See GETENV for more information. The following environment variables are used by IDL for UNIX or MacOS X. As with any X Windows program, IDL uses the DISPLAY environment variable to choose which X display is used to display graphics. IDL uses the standard UNIX environment variable TERM to determine the type of terminal in use when IDL is in command-line mode. IDL's FlexLM-based license manager uses the value of its license file environment variable to determine where to search for valid license files. Consult the license manager documentation for details.
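Before launching IDL it can be handy to see which of the variables quoted above are actually set. The sketch below lists only names mentioned explicitly in the text; a given IDL release may consult others as well.

# Quick check, from outside IDL, of the environment variables quoted in the text above.
import os

for name in ("HOME", "USERPROFILE", "TEMP", "TMP", "IDL_PATH", "IDL_TMPDIR", "DISPLAY", "TERM"):
    print(f"{name:12s} = {os.environ.get(name, '<not set>')}")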
Whirligig beetle (Gyrinus natator), royalty-free science image. Ref: 12640
Description: Whirligig beetle. Coloured scanning electron micrograph of the head of a whirligig beetle, Gyrinus natator. Whirligig beetles are small, shiny black insects that skim round and round on the surface of still or slow-moving water. Each compound eye is divided into two parts. The upper part of the eye is for looking over the water surface and the lower part is for seeing down into the water. Between the two eye parts are the short antennae. At centre are the beetle's strong mouth parts used for biting prey. Two sensory palps (drumstick-like) descend from near the mouth. Magnification x24 at 6x6cm size.
Short Summaries of Articles about Mathematics in the Popular Press
"Secrets of an Acid Head," by Dana Mackenzie. New Scientist, 23 June 2001, pages 26-30.
In the 1920s, a University of Chicago neuroscientist categorized hallucinations into four types: tunnels, spirals, cobwebs, and honeycombs. Today, mathematician and neuroscientist Jack Cowan, also at Chicago, is trying to mathematically model brain activity that could produce hallucinations. Cowan and co-workers focused on modeling neural activity in the visual cortex to see what kind of activity could bring about hallucinations. One kind of model was used for tunnels and spirals, and another for cobwebs and honeycombs. "LSD users see spirals and tunnels because those are the real-world objects that fit the patterns of neural firing in their cortex," Mackenzie writes. --- Allyn Jackson
Discover the cosmos! Each day a different image or photograph of our fascinating universe is featured, along with a brief explanation written by a professional astronomer.
January 25, 1997
Explanation: The Whirlpool Galaxy is a classic spiral galaxy. At only 15 million light years distant, M51, also cataloged as NGC 5194, is one of the brighter and more picturesque galaxies on the sky. The smaller galaxy appearing here above and to the right is well behind M51, as can be inferred by the dust in M51's spiral arm blocking light from this smaller galaxy. Astronomers speculate that M51's spiral structure is primarily due to its gravitational interaction with this smaller galaxy.
The method of least squares used to invert an orbit problem Bannister, R. (2003) The method of least squares used to invert an orbit problem. American Journal of Physics, 71 (12). pp. 1268-1275. ISSN 0002-9505 Full text not archived in this repository. To link to this article DOI: 10.1119/1.1613270 Six parameters uniquely describe the orbit of a body about the Sun. Given these parameters, it is possible to make predictions of the body's position by solving its equation of motion. The parameters cannot be directly measured, so they must be inferred indirectly by an inversion method which uses measurements of other quantities in combination with the equation of motion. Inverse techniques are valuable tools in many applications where only noisy, incomplete, and indirect observations are available for estimating parameter values. The methodology of the approach is introduced and the Kepler problem is used as a real-world example. (C) 2003 American Association of Physics Teachers.
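As a generic illustration of the least-squares idea the abstract describes (inferring parameters you cannot measure directly from noisy, indirect observations), and not the paper's actual orbit inversion, a minimal sketch might look like this:

# Generic least-squares sketch: estimate unknown parameters x from noisy
# observations y = A @ x_true + noise. The numbers are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
x_true = np.array([1.5, -0.7])                 # hypothetical "orbital" parameters
A = rng.normal(size=(50, 2))                   # how each observation depends on the parameters
y = A @ x_true + 0.05 * rng.normal(size=50)    # noisy, indirect measurements

x_est, residuals, rank, _ = np.linalg.lstsq(A, y, rcond=None)
print("estimated parameters:", x_est)          # close to x_true despite the noise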
Materials is an attribute available on polygon mesh objects. It is an array of strings containing the names of materials that can be used on polygons in that mesh, such as "Sources.Materials.DefaultLib.Blue". The material names are resolved at render time, so undefined material names will not show as errors in the ICE tree. If a material name is not available at render time, the material applied directly to the object in the usual non-ICE way is used instead. MaterialID is an integer attribute per polygon. It specifies which element in the Materials array to apply to each polygon. A value of 0 specifies to use the material applied to the object itself. A value of 1 uses the first material in the array (array index 0), a value of 2 uses the second material in the array (array index 1), and so on. Any polygons with values that are larger than the Materials array use the object's material. To apply different materials to different polygons, you populate an array with the names of materials and use it to set the mesh's Materials attribute. After that, you calculate and set each polygon's MaterialID attribute to specify the material you want based on whatever criteria you choose. The node used to build the array of material names allows you to enter material names in the property editor, or connect string values. It is especially useful when copying and pasting material names. You can add more names by right-clicking on a port and choosing one of the Insert commands. You can remove unused ports if you want — any unresolved materials are replaced by the object's material.
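The MaterialID lookup rule described above is easy to mirror in a few lines of ordinary code. The sketch below is plain Python, not the ICE or Softimage API, and the material names are placeholders.

# Standalone sketch of the MaterialID resolution rule: 0, negative, or
# out-of-range values fall back to the material applied to the object itself.
materials = ["Sources.Materials.DefaultLib.Blue",   # MaterialID 1
             "Sources.Materials.DefaultLib.Red"]    # MaterialID 2
object_material = "Sources.Materials.DefaultLib.Scene_Material"   # placeholder name

def resolve_material(material_id: int) -> str:
    if 1 <= material_id <= len(materials):
        return materials[material_id - 1]
    return object_material

for mid in (0, 1, 2, 5):
    print(mid, "->", resolve_material(mid))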
Alpheus is the most diverse shrimp genus. Some of their defining characteristics are orbital hoods and an oversized claw which they use for defense and for killing prey. When this claw is snapped shut it shoots out a jet of water, which creates a low-pressure cavitation bubble behind it. The bubble reaches temperatures of about 5,000 kelvin, comparable to the temperature of the sun's surface (4). When the bubble collapses it makes a loud sound and produces a flash of light. The shockwave from the collapse stuns or kills the shrimp's prey. The shrimp have developed orbital hoods to protect their eyes from the shockwave. However, these orbital hoods lead to poor vision. In order to make up for this shortcoming, some shrimp have developed a symbiotic relationship with gobies. The gobies protect the shrimp by watching for predators. When a goby detects a predator it moves its tail in a way that warns the shrimp; the shrimp feels the movement with its antennae and both go hide. The shrimp and goby live in close proximity.
- (1) Smithsonian Tropical Research Institute, Alpheid Shrimp Database, http://biogeodb.stri.si.edu/bioinformatics/alpheus/GenusAlpheus.html. November 19, 2010.
- (2) ESA, The Sharp Shooter of Marine Life, http://www.esa.org/esablog/tag/pistol-shrimp/. November 19, 2010.
- (3) Chris Brumbaugh, The Pistol Shrimp, http://users.soe.ucsc.edu/~cbrumbau/BME200/. November 19, 2010.
- (4) How Snapping Shrimp Snap (and Flash), http://stilton.tnw.utwente.nl/shrimp/. November 19, 2010.
Given that there are thousands of asteroids and probably a hundred thousand million comets, these small bodies must be considered essential components of the solar system. Certainly objects closely similar to the small bodies that remain today were involved in the agglomeration of the larger planets and satellites some 4.5 billion years ago, and much of the importance of the small bodies today derives from the clues that they may contain about the processes that took place in the early solar system. This importance is magnified when we realize that asteroid-like parent bodies are the only solar system objects (other than Earth and the Moon) of which we have samples for detailed laboratory studies. Although our understanding of small bodies is relatively limited, we know enough to realize that geologically these objects are best studied separately from the larger bodies, such as Earth and the Moon. For one thing, gravity is so much smaller on these bodies that it is difficult to extrapolate our experiences with surface processes on larger objects with any great confidence. For another, many of the small objects are irregular and call for mapping and geodetic techniques quite distinct from those commonly used for the larger (usually almost spherical) planets and satellites. 7.1. What is a Small Body? It is not easy (nor is it necessary) to give a rigorous definition of a small body. Certainly implicit in the term is that the object has a low surface gravity and small escape velocity. Rather arbitrarily, we can take the largest small body to be the size of the biggest asteroid, Ceres, which has a diameter of some 1000 km. Most small bodies are considerably smaller; the two satellites of Mars, Phobos (21 km) and Deimos (12 km), are more representative. For an object the size of Phobos, surface gravity is only about 1 cm sec⁻², and the escape velocity is some 10 m sec⁻¹. Weak gravity has several important implications. Since such bodies cannot have atmospheres, their regoliths are immune to weathering processes involving the presence of an atmosphere. On the other hand, they are directly exposed to the whole spectrum of meteoroidal impacts, cosmic rays, solar radiation, and the solar wind. Low gravity also makes it impossible for the body to achieve or retain a spherical shape during its history, and many small bodies tend to be irregular in shape. Additionally, low gravity affects the development of the surface under meteoroidal bombardment. Craters probably tend to remain deeper, ejecta become more dispersed, and the proportion of strongly shocked material retained is smaller than on larger bodies. Furthermore, the chances that an asteroid-like small body will suffer a catastrophic, or nearly catastrophic, impact during its history are non-negligible. The study of meteorites has provided incontrovertible proof that some small parent bodies underwent differentiation (Dodd, 1981). In addition, there is strong evidence of subsurface aqueous processes in some parent bodies (Kerridge and Bunch, 1979) and of surface eruptions of lavas on others (Drake, 1979). The realization of the importance of short-lived nuclides such as 26Al as possible heat sources early in the solar system's history has made it quite plausible that some small bodies should have had early histories of melting and other internal activity (Sonett and Reynolds, 1979). Thus, whereas some small bodies (comet nuclei?)
may have had dull evolutionary histories and may rightly be regarded as primitive, others have probably experienced histories almost as complex and certainly as interesting as some larger objects. The solar system's small bodies can be divided conveniently into three broad categories: (1) rocky objects (asteroids and some small satellites), (2) icy objects (mostly small satellites, but perhaps including such objects as Chiron), and (3) comet nuclei. The inventory of known small bodies includes thousands of asteroids in the main belt, as well as about 60 Amor, Apollo, and Aten objects. Only about 35 asteroids are larger than 200 km across, although physical measurements have been made of objects as small as 200 meters (Gehrels, 1979). None has yet been studied by spacecraft. The inventory also includes the small satellites of Mars and of the outer planets. Phobos and Deimos, the two tiny satellites of Mars, are the only very small bodies that have been investigated sufficiently by spacecraft (Mariner 9 and Viking) to permit meaningful discussions of surface geologic processes (Veverka and Thomas, 1979). Jupiter has at least a dozen small satellites. Except for a few low-resolution images of Amalthea obtained by Voyager, we know almost nothing about the geology of these bodies. There are also at least 70 known Trojan asteroids near the libration points of Jupiter's orbit, and speculations exist that some of Jupiter's outer satellites may be related to them (Degewij and van Houten, 1979). Recent Earth-based and Voyager observations have greatly expanded our list of Saturn's small satellites, and at least in the case of Mimas and Enceladus, the Voyager data are adequate to support geologic investigation. Beyond Saturn, most of the satellites of Uranus, Neptune's Nereid, and Pluto's Charon probably fall within our definition of small bodies. However, it will be at least 1986 before any spacecraft data on any of these objects are available. It is worthwhile to stress that the above list is almost certainly incomplete and that new small bodies will continue to be discovered. In addition, there are indications that small, so far undetected satellites are associated with the rings of Uranus and perhaps those of Saturn and Jupiter as well. Comets are the most abundant small bodies in the solar system: one estimate is that some 10¹¹ exist in the Oort cloud at the fringes of the solar system (Wilkening, 1982). From the geologic point of view, it is only the nuclei of comets that are of interest and not the comas and tails that develop when the nucleus approaches close enough to the Sun for its surface ices to vaporize. Most comet nuclei are believed to be bodies of rock and ice less than 10 km across, but very little direct information about them exists. None has been studied by spacecraft yet. They could be the parent bodies of some volatile-rich meteorites, and there may be an evolutionary connection between them and some asteroids. For example, it has been suggested that some Apollo asteroids are the remnants of extinct short-period comets (Shoemaker and Helin, 1977; Kresak, 1979). In summary, three facts about small bodies must be kept in mind: (1) their vast number, (2) their great diversity, and (3) our lack of knowledge concerning them. The next two decades of solar system exploration should remedy our current lack of information about small bodies. We cannot gain a true understanding of the solar system's evolution by ignoring them.
They are of interest not only in their own right, but as the solar system's most abundant projectiles, they have influenced, in some cases probably dramatically, the evolution of the surfaces of the larger planets and satellites. 7.3. Why Study Small Bodies? At least four major reasons for studying small bodies in the geologic context can be given: It could also be argued that another important reason for studying small bodies is that their geologic record may extend further back in time than that preserved on the surfaces of the larger bodies. Also, many small bodies (including satellites) probably are collisional fragments of large bodies and in some instances could provide accessible information on the differentiation of large parent bodies. 7.3.1. Effects of Small Bodies on Larger Objects Surfaces in the solar system continue to be modified by impacts, and there is abundant evidence that during the first half billion years of the solar system's existence, the surfaces of planets and satellites were influenced dramatically by collisions with small bodies. From the geologic point of view, we are interested in the time history of the flux and population (size and composition) of the impacting objects at different distances from the Sun. The early fluxes appear to have had a profound influence on the evolution of the crusts of larger bodies, and subsequent fluxes are important in determining relative chronologies of different surface units (chapter 3). The actual nature of the impacting bodies (whether volatile-rich or volatile-poor) may have played a role in determining the evolution of some atmospheres and perhaps even of subsequent weathering processes. For instance, it has been proposed that a significant fraction of some gases in the atmospheres of the terrestrial planets were brought in by comets. Some of the important questions to be addressed are: In the above, the term "flux" should be understood to mean not only total flux of bodies of all sizes (or masses) but also information about the relative fluxes of bodies of various sizes (or masses). A vigorous program of searching for Apollo, Aten, and Amor asteroids, as well as for comets, can answer the first of these questions. The second and third questions are more difficult, but considerable progress is being made in addressing some aspects of them by theoretical calculations. A closely related issue involves the orbital evolution of the various classes of impacting objects (origin, lifetime, and eventual fate). For example, how do objects end up in Apollo orbits? How long do they stay? What happens to them? 7.3.2. Unique Surface Features and Processes Not surprisingly, there are processes that are important on small bodies but impossible to predict from an extrapolation of our terrestrial or lunar experience. In fact, it is sometimes even difficult to predict a priori what form a well-known process will take in the small-body environment. For example, a decade ago, there was a legitimate discussion about whether or not there would be recognizable craters on bodies as small as the satellites of Mars. A more serious debate developed about whether appreciable regoliths would form on such small objects. Although we have now learned the answers to such rudimentary questions, we cannot pretend to fully understand the process of cratering and regolith formation on small bodies (Cintala et al., 1978; Housen and Wilkening, 1982).
For example, we have no convincing explanation for the grossly different appearance of the surfaces of Phobos and Deimos. Why is it that the surface of the smaller Deimos appears to have retained considerably more regolith than that of the larger Phobos? Our very limited experience in exploring small bodies has already confirmed that unique and unexpected surface features and processes come into play. No one anticipated the existence of grooves on Phobos, yet this type of feature may well be a common one on many small bodies (Thomas and Veverka, 1979). There is every reason to expect that additional, important surface features and processes will be discovered as our exploration of small bodies proceeds, especially in the cases of small icy satellites and the nuclei of comets. 7.3.3. Small Bodies as Natural Laboratories Due to their great diversity in size and composition, small bodies provide ideal testing grounds for studying various processes, especially those involving cratering. In principle, one can find small bodies of similar surface gravity but drastically different surface composition (rock versus ice), or bodies of similar composition but very different surface gravity, to test the importance of such variables on crater morphology, ejecta patterns, etc. Much could be learned by comparing surface features and regolith characteristics on three small asteroids of similar surface gravity but of different composition (carbonaceous, stony, or metallic). As a next step, one could investigate the effects of rotation rate on regolith characteristics by comparing two asteroids that are identical in all bulk characteristics except their spin rates. Full exploitation of such possibilities would require an aggressive program of future solar system exploration. 7.3.4. Evolution and Interrelationship There is ample evidence that some small bodies have had complicated evolutionary histories that involved processes of high interest to planetary geologists. The meteorite record proves that some parent bodies experienced internal differentiation, aqueous metamorphism, and even the eruption of lava onto their surfaces (Dodd, 1981). In many cases, very mature and very complex regoliths were developed (Housen and Wilkening, 1982). Understanding the geologic evolution of such interesting bodies is not only worthy in its own right, but would improve our understanding of the possible interrelationships among small bodies and between the small bodies and larger planets. First, there are questions of the following type to be considered: what styles of eruption and what types of volcanic constructs would one expect on a body as small as Vesta? Or, what kinds of structure control the local emission of gases from a comet nucleus? Second, there are the interrelationship questions; for example, is it geologically reasonable that a comet nucleus can evolve into something like an Apollo asteroid or that some volatile-rich carbonaceous chondrites could come from comets? Unfortunately, in many cases we still lack key observational data to address such important questions meaningfully. The small bodies of the solar system are of great intrinsic geologic interest that goes beyond their original role as building blocks of planets and their subsequent role as projectiles. They are characterized by vast numbers and by their diversity. So far, their geologic study has been hampered by a lack of first-hand information of the sort that can be obtained only by direct spacecraft exploration.
Even after Viking and Voyager, our inventory of small objects about which enough is known to carry out detailed geological investigations is very meager. It is restricted to a few icy satellites of Saturn and to the two rocky moons of Mars. We have yet to carry out a geologic reconnaissance of an asteroid or a comet nucleus. Although our accumulated knowledge may be adequate to guess what asteroid surfaces may be like in a general way, we really know next to nothing about comet nuclei. Thus, a first-order requirement for progress in our understanding of small bodies is the exploration of at least one asteroid and one comet nucleus during the coming decade. Some important questions, however, can be addressed only by studying a variety of objects. In the meantime, it is important to continue the ongoing active programs of Earth-based observations of small bodies as well as related laboratory and theoretical investigations. It is especially crucial to continue monitoring the neighborhood of Earth's orbit for small comets and asteroids, since there is no other way of obtaining adequate statistics on the population of such objects. In terms of data analysis and interpretation, there are enough unresolved questions concerning the small satellites of Mars and of the outer planets to justify a healthy program of analysis of Viking and Voyager data in these areas. For example, the Viking IRTM * measurements of Phobos and Deimos must be fully correlated with imaging data to gain information on regolith characteristics. We must also develop techniques for mapping irregular satellites and making accurate measurements of their topography and volume. We should make a special effort to apply the many lessons we have learned from comparative planetology during the past two decades to considerations of surface and near-surface processes on small bodies. Such extrapolations from our experience with larger bodies will have to be done judiciously, but the effort should prove beneficial to our general understanding of the solar system. *Infrared Thermal Mapper.
Harnessing nuclear fusion, the energy that powers the sun and the stars, has been a goal of physicists worldwide since the 1950s. It is essentially inexhaustible and it can be created using hydrogen isotopes — chemical cousins of hydrogen, like deuterium — that can readily be extracted from seawater. […] Fusion energy generates zero greenhouse gases. It offers no chance of a catastrophic accident. It can be available to all nations, relying only on the Earth’s oceans. When commercialized, it will transform the world’s energy supply. There’s a catch. The development of fusion energy is one of the most difficult science and engineering challenges ever undertaken. Among other challenges, it requires production and confinement of a hot gas — a plasma — with a temperature around 100 million degrees Celsius. […] But potential solutions to these daunting technical challenges are emerging. In one approach, known as magnetic fusion, hot plasma is confined by powerful magnets. A second approach uses large, intense lasers to bombard a frozen pellet of fusion fuel (deuterium and tritium nuclei) to heat the pellet and cause fusion to occur in a billionth of a second. Whereas magnetic fusion holds a hot plasma indefinitely, like a sun, the second approach resembles an internal combustion engine, with multiple mini-explosions (about five per second). Once a poorly understood area of research, plasma physics has become highly developed. Scientists not only produce 100 million-degree plasmas routinely, but they control and manipulate such “small suns” with remarkable finesse. Since 1970 the power produced by magnetic fusion in the lab has grown from one-tenth of a watt, produced for a fraction of a second, to 16 million watts produced for one second — a billionfold increase in fusion energy. Seven partners — the European Union, China, India, Japan, Russia, South Korea and the United States — have teamed up on an experiment to produce 500 million watts of fusion power for 500 seconds and longer by 2020, demonstrating key scientific and engineering aspects of fusion at the scale of a reactor. However, even though the United States is a contributor to this experiment, known as ITER, it has yet to commit to the full program needed to develop a domestic fusion reactor to produce electricity for the American power grid. Meanwhile other nations are moving forward to implement fusion as a key ingredient of their energy security. […] What has been lacking in the United States is the political and economic will. We need serious public investment to develop materials that can withstand the harsh fusion environment, sustain hot plasma indefinitely and integrate all these features in an experimental facility to produce continuous fusion power. This won’t be cheap. A rough estimate is that it would take $30 billion and 20 years to go from the current state of research to the first working fusion reactor. But put in perspective, that sum is equal to about a week of domestic energy consumption, or about 2 percent of the annual energy expenditure of $1.5 trillion.
Let's Explore Electricity! Electricity is the most widely used form of energy. Its uses range from the miniature batteries that operate your wristwatch to huge motors that power trains and ships. Electricity operates our lights, runs our refrigerators and powers motors. It first must be changed to other forms of energy such as heat, light or mechanical to be useful. You can't see electricity but you can see what it does like when you turn on a light. How It Works Atoms make up all things. You can't see atoms because they're so tiny but you can imagine what they look like. Each atom is made up of PROTONS, NEUTRONS and ELECTRONS. Protons have a positive (+) charge, electrons have a negative (-) charge and neutrons have no charge. The protons and neutrons make up the NUCLEUS or center of the atom. The electrons circle around the nucleus like the planets orbit around the sun. If an atom has the same number of protons and electrons it is balanced and has a neutral charge. When an electron gets knocked out of its orbit then it is called a free electron. This means the atom has a positive charge. The free electrons then may join another balanced atom giving it a negative charge. Atoms with the same charge move away from each other. But atoms with different charges attract each other. The free electrons may be attracted to atoms where there is an electron missing. When this happens continuously, the jumping of the electrons makes electrical energy we call current. When you touch a doorknob after you shuffle across the carpet, you feel a shock or static electricity. Your movement across the carpet causes you to lose some electrons. They start jumping around from one to another and you feel a shock when you make contact with the doorknob.
Here are some circle bugs to try to replicate with some elegant programming, plus some sequences generated elegantly in LOGO. Explore this how this program produces the sequences it does. What are you controlling when you change the values of the variables? Can you puzzle out what sequences these Logo programs will give? Then write your own Logo programs to generate sequences. Can you continue this pattern of triangles and begin to predict how many sticks are used for each new "layer"? Make new patterns from simple turning instructions. You can have a go using pencil and paper or with a floor robot. How many different sets of numbers with at least four members can you find in the numbers in this box? What are the next three numbers in this sequence? Can you explain why are they called pyramid numbers? Explore the different tunes you can make with these five gourds. What are the similarities and differences between the two tunes you Investigate what happens when you add house numbers along a street in different ways. While we were sorting some papers we found 3 strange sheets which seemed to come from small books but there were page numbers at the foot of each page. Did the pages come from the same book? Polygonal numbers are those that are arranged in shapes as they enlarge. Explore the polygonal numbers drawn here. Investigate the numbers that come up on a die as you roll it in the direction of north, south, east and west, without going over the path it's already made. Investigate these hexagons drawn from different sized equilateral In this section from a calendar, put a square box around the 1st, 2nd, 8th and 9th. Add all the pairs of numbers. What do you notice about the answers? If I use 12 green tiles to represent my lawn, how many different ways could I arrange them? How many border tiles would I need each If the numbers 5, 7 and 4 go into this function machine, what numbers will come out? Three beads are threaded on a circular wire and are coloured either red or blue. Can you find all four different combinations? Find the next number in this pattern: 3, 7, 19, 55 ... EWWNP means Exploring Wild and Wonderful Number Patterns Created by Yourself! Investigate what happens if we create number patterns using some simple rules. Ben’s class were making cutting up number tracks. First they cut them into twos and added up the numbers on each piece. What patterns could they see? I've made some cubes and some cubes with holes in. This challenge invites you to explore the difference in the number of small cubes I've used. Can you see any patterns? Let's suppose that you are going to have a magazine which has 16 pages of A5 size. Can you find some different ways to make these pages? Investigate the pattern for each if you number the pages. These sixteen children are standing in four lines of four, one behind the other. They are each holding a card with a number on it. Can you work out the missing numbers? There are ten children in Becky's group. Can you find a set of numbers for each of them? Are there any other sets? Have a go at this 3D extension to the Pebbles problem. July 1st 2001 was on a Sunday. July 1st 2002 was on a Monday. When did July 1st fall on a Monday again? Your challenge is to find the longest way through the network following this rule. You can start and finish anywhere, and with any shape, as long as you follow the correct order. In this investigation, you are challenged to make mobile phone numbers which are easy to remember. 
What happens if you make a sequence adding 2 each time? Liitle Millennium Man was born on Saturday 1st January 2000 and he will retire on the first Saturday 1st January that occurs after his 60th birthday. How old will he be when he retires? A story for students about adding powers of integers - with a festive twist. Investigate the successive areas of light blue in these diagrams. An environment which simulates working with Cuisenaire rods. What is the remainder when 2^2002 is divided by 7? What happens with different powers of 2? Place four pebbles on the sand in the form of a square. Keep adding as few pebbles as necessary to double the area. How many extra pebbles are added each time? In this activity, the computer chooses a times table and shifts it. Can you work out the table and the shift each time? Can you find a way to identify times tables after they have been shifted up? According to an old Indian myth, Sissa ben Dahir was a courtier for a king. The king decided to reward Sissa for his dedication and Sissa asked for one grain of rice to be put on the first square. . . . "Tell me the next two numbers in each of these seven minor spells", chanted the Mathemagician, "And the great spell will crumble away!" Can you help Anna and David break the spell? Explain why the arithmetic sequence 1, 14, 27, 40, ... contains many terms of the form 222...2 where only the digit 2 appears. Watch these videos to see how Phoebe, Alice and Luke chose to draw 7 squares. How would they draw 100? Formulate and investigate a simple mathematical model for the design of a table mat. Explore one of these five pictures. Three people chose this as a favourite problem. It is the sort of problem that needs thinking time - but once the connection is made it gives access to many similar ideas. Here is a machine with four coloured lights. Can you develop a strategy to work out the rules controlling each light? Alison, Bernard and Charlie have been exploring sequences of odd and even numbers, which raise some intriguing questions... Powers of numbers behave in surprising ways. Take a look at some of these and try to explain why they are true. A introduction to how patterns can be deceiving, and what is and is not a proof. There are lots of ideas to explore in these sequences of ordered Choose any 4 whole numbers and take the difference between consecutive numbers, ending with the difference between the first and the last numbers. What happens when you repeat this process over and. . . . Take any two positive numbers. Calculate the arithmetic and geometric means. Repeat the calculations to generate a sequence of arithmetic means and geometric means. Make a note of what happens to the. . . .
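One computational way in to the "remainder when 2^2002 is divided by 7" teaser in the listing above is sketched below in Python. This is an illustration of mine, not the site's intended solution: it looks at the remainders of successive powers of 2 and spots the repeating cycle.

```python
# Remainders of 2^k modulo 7 for k = 1..12: the pattern repeats every 3 steps.
remainders = [pow(2, k, 7) for k in range(1, 13)]
print(remainders)        # [2, 4, 1, 2, 4, 1, ...]

# Since the cycle has length 3 and 2002 = 3*667 + 1, 2^2002 behaves like 2^1.
print(pow(2, 2002, 7))   # 2
```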
HTML Object Element

The HTML 4.0 <OBJECT> element is designed for extending HTML. The <OBJECT> element allows an author to download external data or programs into the current page. This element can be used to download Java applets, ActiveX controls, Scriptlets, or other types of information. The long-term goal for the <OBJECT> element is to replace the <APPLET> and <IMG> elements with a single way to embed data. Part of the design goal of the <OBJECT> element is to ensure that it degrades gracefully in browsers that do not support the element, or possibly the object type or data. When a browser does not support the <OBJECT> element or a particular use of the element, the contents within the element are displayed instead. For example: In the above example, we defined an <OBJECT> element that would manipulate the data. When the object element works correctly, it can run an applet or control, or display a specific file type. The object below displays our home page. This object is defined as follows: If you are familiar with the <IFRAME> element, the above use of <OBJECT> should look very similar. The single difference is that the <OBJECT> element cannot navigate independently from the containing document. This means that clicking on a link in the <OBJECT> always replaces the entire document, while with an <IFRAME> it is possible to have a document navigate and appear within the same <IFRAME> element.

Copyright © 1997-2008 InsideDHTML.com, LLC. All rights reserved.
Earth’s Clouds Are Getting Lower, NASA Satellite FindsFebruary 22, 2012 Earth’s clouds got a little lower — about one percent on average — during the first decade of this century, finds a new NASA-funded university study based on NASA satellite data. The results have potential implications for future global climate. Scientists at the University of Auckland in New Zealand analyzed the first 10 years of global cloud-top height measurements (from March 2000 to February 2010) from the Multi-angle Imaging SpectroRadiometer (MISR) instrument on NASA’s Terra spacecraft. The study, published recently in the journal Geophysical Research Letters, revealed an overall trend of decreasing cloud height. Global average cloud height declined by around one percent over the decade, or by around 100 to 130 feet (30 to 40 meters). Most of the reduction was due to fewer clouds occurring at very high altitudes. Lead researcher Roger Davies said that while the record is too short to be definitive, it provides a hint that something quite important might be going on. Longer-term monitoring will be required to determine the significance of the observation for global temperatures. Continue at Science Daily
For 140 million years, sauropods roamed Earth--but today we have just fossils to tell us about what these animals looked like from the outside. How big were they? How big were their young? What did their skin look like? How fast could they move? And while fossils can't answer every question--like what color the animals were--they do reveal an astonishing amount of information that helps paleontologists understand these massive creatures. All dinosaurs reproduced by laying eggs, just as living birds and many modern reptiles do. But surprisingly, the babies that hatched out of sauropod eggs were generally no bigger than a modern adult goose. Sauropods didn't start out extremely big--they just grew very, very fast. As amazing as it seems, we have footprints of sauropods on nearly every continent, left during their 140-million-year stint on Earth. And those footprints, many in long trackways, provide some of the best data on the animals' daily life. With them, scientists can tackle such questions as: How fast could these animals walk? Did they travel in groups? Did young and old move together? Sauropods came in different sizes--most of them big. An adult female Mamenchisaurus would have weighed about 13 tons (12,000 kilograms). That may sound big, but it's actually below average for sauropods. In humans, skin is the biggest organ--an average adult's skin weighs as much as a gallon of milk. The skin of an adult Mamenchisaurus weighed about as much as a small car.
Markov process

Markov process, sequence of possibly dependent random variables (x1, x2, x3, …)—identified by increasing values of a parameter, commonly time—with the property that any prediction of the next value of the sequence (xn), knowing the preceding states (x1, x2, …, xn − 1), may be based on the last state (xn − 1) alone. That is, given the present state, the future value of such a variable is independent of its past history. These sequences are named for the Russian mathematician Andrey Andreyevich Markov (1856–1922), who was the first to study them systematically. Sometimes the term Markov process is restricted to sequences in which the random variables can assume continuous values, and analogous sequences of discrete-valued variables are called Markov chains. See also stochastic process.
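A minimal simulation makes the defining property concrete. The Python sketch below uses an invented two-state weather chain (the states and transition probabilities are purely illustrative): the next state is drawn using only the current state, never the earlier history.

```python
import random

# Hypothetical two-state Markov chain; the transition probabilities are made up.
transitions = {
    "sunny": {"sunny": 0.8, "rainy": 0.2},
    "rainy": {"sunny": 0.4, "rainy": 0.6},
}

def next_state(current):
    """Draw the next state from the current state alone (the Markov property)."""
    r, cumulative = random.random(), 0.0
    for state, p in transitions[current].items():
        cumulative += p
        if r < cumulative:
            return state
    return state  # guard against floating-point rounding

state, history = "sunny", []
for _ in range(10):
    state = next_state(state)
    history.append(state)
print(history)
```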
Chemguide: Support for CIE A level Chemistry Learning outcome 9.6(d) This statement is about the uses of ammonia, and other nitrogen containing compounds produced from it. Before you go on, you should find and read the statement in your copy of the syllabus. This is seriously boring stuff, and you probably already know enough from work you did before you started to do A level chemistry. Uses of ammonia Uses of nitric acid I suspect you would be safe if you just learnt that both ammonia and nitric acid are used in the manufacture of fertilisers; that ammonia is also used to make nitric acid; and that nitric acid is also used to make explosives. © Jim Clark 2011
Graphene and Other Two-Dimensional Materials

We found a new class of materials which is now referred to as 2D atomic crystals. Such crystals can be seen as individual atomic planes "pulled out" of bulk, 3D crystals. Despite being only one atom thick and unprotected from the immediate environment, these materials are stable under ambient conditions, exhibit high crystal quality and are continuous on a macroscopic scale.
- Carbon flatland. Summer Science Exhibition 2011
- Science Perspectives on Graphane (January 2009)
- Nanowerk on Graphene-based Liquid Crystal Display (April 2008)
- BBC News (April 2008)
- Scientific American (April 2008)
- Scientific American (November 2005)
- Physics Web (November 2005)
- Physics Web (July 2005)
- BBC News (October 2004)
Discussion about math, puzzles, games and fun. Useful symbols: ÷ × ½ √ ∞ ≠ ≤ ≥ ≈ ⇒ ± ∈ Δ θ ∴ ∑ ∫ • π ƒ -¹ ² ³ ° You are not logged in. Post a reply Topic review (newest first) what is muirheads' inequality? Try a = 2, b = 2, c = 1. You are okay after A) Mistake made in B. It was a pretty good idea though. It is okay. I am not sure about the problem either. Ok, my mistake. Not if you have This is for grade 9 and the other two were fairly straight forward. Could the question be: By Muirheads inequality: Therefore it is not mandatory that this can be done by the AMGM. So put these together and you get what you wanted. Now for (3). edit: Many hours later. I cannot do this one yet. I've put out a general request for more brains. but we know from the case that m and 1 + m > 0 so Now consider the alternative case that m < 0 That means we can replace |m| with -m this time we know 1 - m > 0 as m < 0 so If a product is negative then one factor must be + and one must be - so either m > 0 and (m + 1) < 0 but this contradicts the assumption that m < 0 or m < 0 and (m + 1) > 0 which leads to -1 < m < 0 Now to look at number (2). See next post. Hope that helps
Discussion about math, puzzles, games and fun. Useful symbols: ÷ × ½ √ ∞ ≠ ≤ ≥ ≈ ⇒ ± ∈ Δ θ ∴ ∑ ∫ • π ƒ -¹ ² ³ ° You are not logged in. Post a reply Topic review (newest first) So far all I can say is the Taylor theorem already states that somewhere so you are using a theorem to prove the linear approximation. Good to learn, thanks! That depends on a lot of things. Least squares is for discrete data. To fit to continuous data or functions can be done using collocation or Taylor series or Fourier series. Yes. This is what I meant when I said "But that's just one way of looking at the error". You can "define" your error to be whatever you want (as long as it's logical). You can define the error to be the sum of the absolute values (like I first said) of the differences but then you run into non-differentiability problems because of the absolute value and hence the minimization-maximization tools of calculus cannot be applied, which is why one uses least squares so that it is differentiable. I think that has already been done. You take the sum of the squares of the errors, this is called least squares. They use the squares because it is simpler than using the absolute value for computation. Me neither. I was just shooting ideas of what came to mind. But that's just one way of looking at the error, I guess. First of all I didnt say "the error OF the whole interval", I said "the error ON the whole interval". My idea was: I know what the error between F and L is at a single point. Then I wondered: How do I measure the error between F and L over the whole interval I? A plausible answer (although of course it may be wrong, hence why I'm asking here) was to take the error between F and L at every single point of the interval and add them up. (this is exactly the integral over I of |F - L|). This is then what I will call the error between F and L over the interval I. Then by looking at this integral, im assigning an "error" over the whole interval I to the line L. That is exactly how I do it. To be simple, take the case of a function from R to R, and a point c in R. What does it mean exactly that the derivative is THE best linear approximation? I guess it means that in the set of all lines passing through (c,f(c)), the line having slope = f'(c) , ie, the tangent line, is the one that provides a better approximation around a small interval I at c. However, how do we decide when a line is a "better approximation" than another line? Of course, we must speak of the error between the function values and the values of the line in the interval I, but how do we formalize this?
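As an illustration of the idea debated above (my own sketch, not part of the original thread): one can score a line L through (c, f(c)) by integrating (f - L)^2 over a small interval around c, then check numerically that the slope closest to f'(c) gives the smallest error. The function, point, and interval below are arbitrary choices for the sake of the example.

```python
import numpy as np

f = np.exp          # example function f(x) = e^x, so f'(0) = 1
c, h = 0.0, 0.1     # point of approximation and interval half-width
xs = np.linspace(c - h, c + h, 2001)

def integrated_squared_error(slope):
    """Integral over [c-h, c+h] of (f - L)^2 for the line L through (c, f(c))."""
    line = f(c) + slope * (xs - c)
    return np.trapz((f(xs) - line) ** 2, xs)

for slope in (0.5, 0.9, 1.0, 1.1, 1.5):
    print(f"slope {slope:3.1f}: error = {integrated_squared_error(slope):.2e}")
# The slope closest to f'(c) = 1 (the tangent line) scores best, and its
# advantage over the other lines grows as h shrinks.
```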
Angles, Interior and Corresponding Angles, Parallel

Two cars pass each other, going in opposite directions on separate and parallel roadways. A crazy driver breaks through all the guard rails and narrowly misses the other two drivers, Pip and Porter. Fortunately, Pip and Porter are outstanding math students, and once their hearts stopped pounding, they analyzed the situation. They first noticed that the crazy driver creates a transversal, which is a line that intersects two parallel lines. The situation looks like this:

Supplementary angles are angles that form a linear pair (add up to 180 degrees). Examples of supplementary angles are angles 1 and 3, 1 and 2, 2 and 4, 3 and 4, etc. Corresponding angles are angles that are in the same relative position that the two parallel lines make with the transversal. Examples of corresponding angles are angles 2 and 6, 1 and 5, etc. These angles have the same measure. Exterior angles are angles that are on the outside of the parallel lines that are cut by the transversal. Examples of exterior angles are angles 2 and 8, 1 and 7. These pairs add up to 180 degrees. Interior angles are the angles formed on one side of the transversal that cuts the parallel lines. Angles 4 and 6 as well as angles 3 and 5 are interior angles. These pairs add up to 180 degrees.

The parallel lines property creates the above relationships. The Parallel Lines Property states that if two lines are cut by a transversal such that alternate interior or exterior angles have equal measure, then the two lines are parallel. It also states that two lines cut by a transversal are parallel if and only if corresponding angles have the same measure.
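A small numerical sketch (mine, and not tied to the lesson's particular angle numbering, since the original figure is not reproduced here): when a transversal crosses two parallel lines, all eight angles take only two values, x and 180 - x degrees, so corresponding and alternate pairs are equal while same-side interior pairs sum to 180.

```python
def transversal_angles(x):
    """Given one angle x (in degrees) at the first intersection,
    return the two values that all eight angles take."""
    assert 0 < x < 180
    return x, 180 - x

x, y = transversal_angles(65)
print(f"corresponding and alternate pairs: {x} deg each (equal in measure)")
print(f"same-side interior pair: {x} + {y} = {x + y} deg")
```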
Joined: 16 Mar 2004 |Posted: Wed Apr 02, 2008 3:37 pm Post subject: Return of the nanostripes |4 July 2007 NanoTechWeb Return of the nanostripes A research team in Germany has discovered nanoscale stripe-like structures in high-temperature superconductors made from samarium, barium, copper and oxygen. Many physicists believe that such structures are responsible for the ability of these superconductors to carry currents without resistance, but others think they aren't. The new result could not only open up this debate again, but also shed more light on the origins of high-temperature superconductivity, which has been one of the biggest mysteries in physics for the last two decades. All high-temperature superconductors consist of parallel planes of copper oxide, with other elements sandwiched in between these layers. The copper atoms lie on a square lattice and the charge is carried by "holes" sitting on oxygen sites. Previous X-ray scattering measurements on yttrium barium copper oxide superconductors revealed spectra containing diffuse features that were attributed to the formation of stripes in the copper oxide planes. Many physicists believe that these stripes serve as channels along which the super-current can flow. In 2004, however, researchers in Germany found that these features had their origins in oxygen defects instead. Meanwhile, an independent team in the US observed "nanodomains", suggesting the same superstructures seen by the German group. These results meant that the stripes of charge might not be responsible for the ability of high-temperature superconductors to carry current without resistance after all. Now, Michael Koblischka and colleagues at Saarland University in Saarbruecken have observed nanoscale stripe-like structures in samarium barium copper oxide (SmBCO). The stripes are sometimes parallel over several microns and sometimes wavy. The researchers say that the structures may act as effective "pinning centres" thanks to their small-scale periodicity, which is typically 10–60 nm. This is the ideal pinning-centre size for these materials to achieve high critical current densities even at elevated temperatures of around 77 K. Koblischka and co-workers saw the stripes in single crystals of SmBCO grown by the so-called top-speed pulling technique and in melt-textured samples. Detailed atomic-force and scanning-tunnelling-microscopy measurements revealed that the nanostripes are formed by chains of individual nanoclusters from unit cells of the samarium-rich phase, Sm1+xBa2–xCu3Oy. "The higher transition temperature, Tc (of 93.5 K) and the larger critical current densities, Jc (of around 38,000 A/cm2 at T = 77 K and 2 T applied field) make these SmBCO materials interesting for bulk applications, such as levitation," Koblischka told nanotechweb.org. "Although the reason for their improved Jc is not yet clear, the appearance of the nanostripes may be the key." The researchers reckon that controlling these pinning structures, which run through the whole sample volume, could help improve the Jc further – especially at high external magnetic fields. Story posted: 5th July 2007
THE genetically engineered crops of tomorrow could be created in orbit. Away from the Earth's gravity, it becomes much easier to insert foreign genes into soya beans, claim biotechnologists in Indiana. Millions of hectares in the US are already planted with genetically modified soya beans. But creating these crops was hard work because soya is notoriously reluctant to accept new genes. "It's a major stumbling block," says Rick Vierling of Purdue University in West Lafayette. Like many plant biotechnologists, Vierling and his colleagues use the natural pathogen To deliver its genetic cargo,
BCH5425 Molecular Biology and Biotechnology Dr. Michael Blaber The restriction/modification system in bacteria is a small-scale immune system for protection from infection by foreign DNA. W. Arber and S. Linn (1969) Plating efficiencies of bacteriophage lambda (l phage) grown on E. coli strains C, K-12 and B, when plated on these bacteria: |E. coli strain on which parental phage had been grown| Thus, this combination of a specific methylase and endonuclease functioned as a type of immune system for individual bacterial strains, protecting them from infection by foreign DNA (e.g. viruses). Such endonucleases are referred to as "restriction endonucleases" because they restrict the DNA within the cell to being "self". The combination of restriction endonuclease and methylase is termed the "restriction-modification" system. Of course, this type of protective system is beaten if the attacking phage was previously grown on the same strain as that which it is infecting. In this case the phage will have its DNA already methylated at the appropriate sequence, and will be recognized as "self" (see the table above). E. coli strain 'C' (above) is strain which has no known restriction-modification system. We will discuss DNA replication later, but it should be mentioned that: Structural and biochemical studies have indicated that for the common R/M systems (so called type II), the methylase recognizes and methylates one strand of the DNA duplex, whereas the restriction endonuclease recognizes both strands of the DNA (i.e. both strands must be non-methylated for recognition). It is able to do this because it is a homo-dimer protein. Since different bacterial strains and species have potentially different R/M systems, their characterization has made available over 200 endonucleases with different sequence specific cleavage sites. |Alu I||Arthrobacter luteus|| | | 5' A G C T 3' 3' T C G A 5' | |"Four cutter". Leaves blunt ends to the DNA.| |Bfa I||Bacteroides fragilis|| | | 5' C T A G 3' 3' G A T C 5' | |"Four cutter". Leaves 5' overhang.| |Nci I||Neisseria cinerea|| | | C 5' C C G G G 3' 3' G G C C C 5' G | |"Five cutter". Middle base can be either cytosine or guanine. Leaves 5' overhang. Different recognition sites may have non-complementary sequences.| |Eco R1||Escherichia coli|| | | 5' G A A T T C 3' 3' C T T A A G 5' | |"Six cutter". Leaves 5' overhang. Behaves like a "four cutter" ('star' activity) in high salt buffer. $44 for 10,000 units.| |Hae II||Haemophilus aegyptius|| | | 5' Pu G C G C Py 3' 3' Py C G C G Pu 5' | |"Six cutter". Pu is any purine, Py is any pyrimidine. Leaves 3' overhang.| |EcoO109I||Escherichia coli|| | | 5' Pu G G N C C Py 3' 3' Py C C N G G Pu 5' | |"Seven cutter". Pu is any purine, Py is any pyrimidine, N is any base. Leaves 5' overhang. Different recognition sites may have non-complementary sequences.| |Bgl I||Bacillus globigii|| | | 5' GCCN NNNNGGC 3' 3' CGGNNNN NCCG 5' | |"Six cutter with interrupted palindrome". Leaves 5' overhang. Different recognition sites may have non-complementary sequences.| |Bsa HI||Bacillus stearothermophilus|| | | 5' G Pu C G Py C 3' 3' C Py G C Pu G 5' | |"Six cutter". Different recognition sites will be complementary.| |Aat II||Acetobacter aceti|| | | 5' G A C G T C 3' 3' C T G C A G 5' | |"Six cutter" with 3' overhang. Same recognition sequence as Bsa HI, but different cleavage position.| |Bpm I||Bacillus pumilus|| | | 5' C T G G A G N16 3' 3' G A C C T C N14 5' | |Non-palindrome, distal cleavage. Leaves 3' overhang. 
$50 for 50 units.| |Not I||Nocardia otitidiscaviarum|| | | 5' G C G G C C G C 3' 3' C G C C G G C G 5' | |"Eight cutter". Leaves 5' overhang.| |Bsm I||Bacillus stearothermophilus|| | | 5' G A A T G C N 3' 3' C T T A C G N 5' | |"weird". Leaves 3' overhang.| |Four||Alu I||256 (0.25 Kb)| |Five||Nci I||1024 (1.0 Kb)| |Six||EcoR I||4096 (4.1 Kb)| |Seven||EcoO109I||16384 (16.4 Kb)| |Eight||Not I||65536 (65.5 Kb)| Thus, on average, any given DNA will contain an Alu I site every 0.25 kilobases, whereas a Not I site occurs once about every 65.5 kilobases. The assortment of DNA fragments would represent a specific "fingerprint" of the particular DNA being digested. Different DNA would not yield the same collection of fragment sizes. Thus, DNA from different sources can be either matched or distinguished based on the assembly of fragments after restriction endonuclease treatment. These are termed "Restriction Fragment Length Polymorphisms", or RFLP's. This simple analysis is used in various aspects of molecular biology as well as a law enforcement and genealogy. For example, genetic variations which distingish individuals also may result in fewer or additional restriction endonuclease recognition sites. Restriction endonucleases are supplied in various concentrations with activities that are based upon cleavage rates of "standard" DNA samples. The reference DNA may actually have one or more recognition sites for the nuclease in question. DNA's used as "standard" samples may include phage l DNA, or the plasmid pBR322. The endonuclease hydrolysis is a spontaneous reaction and does not, for example, require addition of ATP. Reaction buffers for restriction endonucleases usually contain a buffer component (typically 10 mM TRIS buffer around pH 8.0), magnesium salt (often 10 mM MgCl2), a reducing agent (usually 1mM dithiothreitol, or DTT), a protective carrier protein (typically 100 ug/ml bovine serum albumin, or BSA), and salt (sodium chloride). The biggest determinant of enzyme activity is typically the ionic concentration (NaCl content) of the buffer. Although there are hundreds of different restriction endonucleases, the majority of them can exhibit between 30-100% activity using a simple system of three buffers, containing either low (20 mM), medium (100 mM) or high (250 mM) salt (NaCl) concentrations in the above described buffer. Enzyme digests are typically performed for 1-2 hours at 37 °C. However, quantitative digestion can sometimes only be achieved after extended incubation (i.e. overnight). 1998 Dr. Michael Blaber
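The 4^n scaling in the fragment-size table above is easy to check computationally. The Python sketch below is my own illustration (the "plasmid" sequence is random, not a real one): it prints the expected average spacing for enzymes with different recognition-site lengths, then scans a random 100 kb sequence for EcoRI sites and compares the count with the expectation.

```python
import random

random.seed(0)

# Expected spacing for an n-base recognition site, assuming equal base
# frequencies: one site every 4**n bases on average.
for name, site in [("Alu I", "AGCT"), ("EcoR I", "GAATTC"), ("Not I", "GCGGCCGC")]:
    print(f"{name:7s} ({len(site)}-cutter): one site every {4**len(site):,} bp on average")

# Scan a made-up random 100 kb sequence for EcoRI recognition sites (GAATTC).
seq = "".join(random.choice("ACGT") for _ in range(100_000))
site = "GAATTC"
positions = [i for i in range(len(seq) - len(site) + 1) if seq[i:i + len(site)] == site]
print(f"EcoRI sites found in 100 kb of random sequence: {len(positions)}")
print(f"expected: about {100_000 / 4**len(site):.0f}")
```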
Some people claim this story is based on a real event. But of course, it’s just an urban myth. The creation/evolution controversy is not immune to “urban myths.” Recent research has helped unravel one such “myth,” namely the belief that the vertebrate retina exhibits bad design. In 1996, evolutionary biologist Richard Dawkins wrote in regards to the vertebrate retina, Any engineer would…laugh at any suggestion that the photocells might point away from the light, with their wires departing on the side nearest the light. Yet this is exactly what happens in all vertebrate retinas. Each photocell is, in effect, wired backwards.1 The retina is a thin layer of light-responsive neural cells lining the interior back wall of the eye. It consists of photoreceptor cells that generate an electrical signal when light impinges upon them. At first glance the retina appears to be based on a questionable design. The light-sensitive region of photoreceptor cells orients away from the source of light. Furthermore, the nerve cell conduits to the optic nerve lie between the light source and photosensitive region of the photoreceptor cells—a design that would make any self-respecting engineer cringe. Thus, vertebrates’ “backward-wired” retina became an exemplar of bad design. Evolutionary biologists like Dawkins consider faulty designs in biological systems as prima fascia evidence that life stems from undirected mechanistic processes, not from the activity of a Creator. Debunking the myth But further research into the construction and function of the vertebrate eye has unraveled the bad design myth. Most recently, a team of Israeli physicists performed modeling studies on the optical properties of radial glial cells. Their results confirm previous work by German scientists.2 In 2007, a team of German researchers demonstrated that radial glial cells associated with the retina act as optical fibers.3 That is, the radial glial cells (star-shaped cells that help maintain the structure of nervous tissue and transport nutrients to neurons) form fibers oriented in the direction of light propagation through the retina. This allows them to efficiently transmit light from the surface of the retina to the photoreceptors. Radial glial cells have a higher refractive index than the surrounding tissue matrix, serving as a low-scattering conduit for light, and thus transmitting images capably and with little distortion. The work by the Israeli physicists supports the conclusion that these optical fibers compensate for the retina’s “bad design.” In light of these new insights, it is easy to see that the inverted retina is a well-designed system, worthy of the Creator. The Snopes.com website, devoted to debunking urban myths, has an entry for the “blind pilot,” but the last time I checked there was nothing about the vertebrate retina’s bad design. Looks like it’s time to update the site.
So, NASA sent a new satellite up at Vandenburg Air Force Base today here in Southern California. The Aquarius observatory - their materials promise - will, within a few months, "collect as many sea surface salinity measurements as the entire 125-year historical record from ships and buoys." They expect to cover the earth's surface once every seven days. It's so accurate it can measure a pinch of salt in a gallon of water. Aquarius has research related goals, too: • The water cycle - 86% of global evaporation and 78% of global precipitation occur over the ocean; thus SSS is the key variable for understanding how fresh water input and output affects ocean dynamics • Ocean circulation - With temperature, salinity determines seawater density and buoyancy, driving the extent of ocean stratification, mixing, and water mass formation • Climate - As computer models evolve, Aquarius will provide the essential SSS data needed to link the two major components of the climate system: the water cycle and ocean circulation "Data from this mission will advance our understanding of the ocean and prediction of the global water cycle," said Michael Freilich, director of NASA's Earth Science Division in the Science Mission Directorate at agency headquarters in Washington. "This mission demonstrates the power of international collaboration and accurate spaceborne measurements for science and societal benefit. This would not be possible without the sustained cooperation of NASA, CONAE and our other partners." In addition to Aquarius, the observatory carries seven instruments that will monitor natural hazards and collect a broad range of environmental data. Other mission partners include Brazil, Canada, France and Italy. "This mission is the most outstanding project in the history of scientific and technological cooperation between Argentina and the United States," said CONAE Executive and Technical Director Conrado Varotto. "Information from the mission will have significant benefits for humankind." It's an interesting time to remember that not everybody likes a satellite as much as they do. A few months ago in Congress, the sprawling budget debate leaked briefly into environmental news, when House Republicans demanded cuts to NOAA. House Republicans, who have been looking for ways to shave $61.5 billion from the 2011 federal budget, stress that they don't want to specifically cut either of the warning centers -- a network of ocean buoys and deep-water sensors that alert scientists to changes in ground movement and tide levels and could indicate a tsunami is on the way. They just think the parent agency has some fat to trim. The Republicans are also going after federal funding for climate change research and other environmental projects in their proposed cuts. "Look, I think that all of us need to be tempered by the fact that we've got to stop spending money we don't have," said House Majority Leader Rep. Eric Cantor, R-Va., during a press briefing. "I mean, essentially what you're saying is, go borrow money from the Japanese so we can go and spend it there to help the Japanese." Now, okay. NOAA and NASA aren't the same. Budget debates are't good referendums on how people feel about satellites, exactly. Weather buoys got caught in the crossfire. But it's worth remembering that the bipartisan discussion is haunted by another spectre, too: climate. 
An article in Popular Science earlier this year documented how satellite contractors are selling climate sensing satellites based on their weather sensing capabilities. Even NOAA's Jane Lubchenco doesn't like saying the c-word: “Not having satellites and not applying their latest capabilities could spell disaster,” said NOAA's Lubchenko. “We are likely looking at a period of time a few years down the road where we will not be able to do severe storm warnings and long-term weather forecasts that people have come to expect today.” Anyway, this all reminded me of a few months ago, when Mike Conathan, who directs ocean policy at the Center for American Progress, wrote this: Environmental satellites are not optional equipment. This is not a debate about whether we should splurge on the sunroof or the premium sound system or the seat warmers for our new car. Today’s environmental satellites are at the end of their projected life cycles. They will fail. When they do, we must have replacements ready or risk billions of dollars in annual losses to major sectors of our economy and weakening our national security. Infrastructure, man. The boringest subject you never stopped needing to pay attention to. But maybe it's bold of NASA to admit it's interested in measuring something related to climate? Or maybe it was just too late to stop 'em.
I tend to avoid the creationist blogs. Every time I get sucked into that vortex of pseudoscience, I find the exact same debunked claims that were bunk when I was 12. There are better bloggers out there who have the energy and patience to systematically dissect the same tired old rubbish day after day, but I’m not one of them. This claim, however, is special. There’s nothing new in the rhetoric behind it, it’s just another “how could this commensalism/symbiosis/mutualism evolve? It must be magic!” mantra. And the analysis isn’t terribly sophisticated, anyone could do the basic googling to find out why every argument in it is either wrong or deceptive. What’s special is that it’s about one of my favorite critters, Osedax – the bone eating worm. The author misses the mark from the very beginning. He claims that: “Bone worms have specialized features that enable them to bore holes through whale bones. Ecologically, they serve to recycle whale bones back into the undersea environment.” While it is true that Osedax were first discovered on whale carcasses, since then they’ve been found on a host of vertebrate skeletons, including seals, pigs, and cows, and have been hypothesized to occur on dolphins, shark cartilage, and even large fish. Osedax occurrence is not limited exclusively to whales. That fact alone essential makes the rest of the article flawed. They also don’t recycle entire whale bones back into the ecosystem. Osedax extract certain compounds from the bones and (with the help of autotrophic endosymbionts) convert those compounds into energy. But they leave plenty of material behind, most notably calcium carbonate. Also, it should go without saying that the “purpose” of Osedax is not to recycle whale bones, the “purpose” is to survive and reproduce another generation of viable individuals and they’ve developed a way to occupy a specific ecologic niche in order to do that. At this point I should make clear that, while the author seems to be of the impression that there is only one species of Osedax, there are more than 20, each with different life-histories, distributions, and carcass preferences. The author goes on to make this bizarre claim: “The very fact that a creature as large as a whale became fossilized in the first place testifies to a uniquely terrible watery disaster in earth’s past. A whale would simply swim away from any of today’s local-level disturbances or tsunamis.” I guess we never see beached whales, predation, or illness, and whales must be immortal, since they clearly can’t die from old age. In fact, the only thing that could kill a whale is a catastrophic global event. Or maybe he’s arguing that whales wouldn’t be fossilized, since they couldn’t be buried quickly enough? In that case, isn’t he disproving his own point? Whale carcasses on the sea-floor can last for decades, providing a food source to a huge community. Osedax can only colonize the carcass once the bones are exposed. Fossil whale bones colonized by Osedax would never occur in a rapidly buried carcass. Further, the whale bones examined in the study had not yet been entirely recycled. They were largely intact yet riddled with bore holes. This shows that the bones were preserved before decomposition–a process that takes months, not millions of years–could be completed. Yet more confusion about what Osedax does. I’ll admit, “bone-eating worm” is a bit of a misleading name. Although the feeding ecology is not completely understood, Osedax do not eat the entire skeleton. 
They bore into hydrocarbon-rich bones and convert hydrocarbons and collagen into energy. They do not utilize calcium carbonate. So you would absolutely expect to see fossil bones with bore-holes. So when the author makes the argument that: The energy associated with inundating a whale is on an order of magnitude consistent with a global flood. And the relative lack of decay during the whale’s fossilization indicates a rapid, not lengthy, time for its burial and preservation. Is simply inconsistent with reality. His final statement: But evolutionary paleontologists ought to also be less happy about this co-occurrence of Osedax with whales, since their presence from the very beginning of the whale’s fossil record is consistent with biblical history. If God created them both on the same day or during the same week, it would stand to reason that there would be no time gap between the origin of whales and the origin of whale-bone-eating Osedax worms. Is contradicted by his own report that “The PNAS study concluded that Osedax was ‘at least 30 million years old.”‘ The earliest whale fossils date back nearly 65 million years. The seven sources at the bottom of this page go on to provide more details about these amazing animals. This is a prime example of pseudoscience distorting science under the assumption that readers won’t bother to fact-check their claims. ~Southern Fried Scientist Jones WJ, Johnson SB, Rouse GW, & Vrijenhoek RC (2008). Marine worms (genus Osedax) colonize cow bones. Proceedings. Biological sciences / The Royal Society, 275 (1633), 387-91 PMID: 18077256 Rouse, G., Wilson, N., Goffredi, S., Johnson, S., Smart, T., Widmer, C., Young, C., & Vrijenhoek, R. (2008). Spawning and development in Osedax boneworms (Siboglinidae, Annelida) Marine Biology, 156 (3), 395-405 DOI: 10.1007/s00227-008-1091-z Vrijenhoek, R., Collins, P., & Van Dover, C. (2008). Bone-eating marine worms: habitat specialists or generalists? Proceedings of the Royal Society B: Biological Sciences, 275 (1646), 1963-1964 DOI: 10.1098/rspb.2008.0350 Glover AG, Kemp KM, Smith CR, & Dahlgren TG (2008). On the role of bone-eating worms in the degradation of marine vertebrate remains. Proceedings. Biological sciences / The Royal Society, 275 (1646) PMID: 18505721 Haag, A. (2005). Marine biology: Whale fall Nature, 433 (7026), 566-567 DOI: 10.1038/433566a Goffredi SK, Orphan VJ, Rouse GW, Jahnke L, Embaye T, Turk K, Lee R, & Vrijenhoek RC (2005). Evolutionary innovation: a bone-eating marine symbiosis. Environmental microbiology, 7 (9), 1369-78 PMID: 16104860 Kiel, S., Goedert, J., Kahl, W., & Rouse, G. (2010). Fossil traces of the bone-eating worm Osedax in early Oligocene whale bones Proceedings of the National Academy of Sciences, 107 (19), 8656-8659 DOI: 10.1073/pnas.1002014107
Carbon Trapping or Is It Carbon Wasting? The EPA (Environmental Protection Agency) announced that they have concluded a first draft of a carbon trapping rules. These rules would require smokestack owners to filter the emissions from their smokestacks and create liquified CO2 that could be stored underground. Granted, preventing pollution is a must, but in these troubled economic times in America, isn’t there another option besides costing business more money? When the price of doing business goes up, we all know the worker is the first disposable asset to be hurt. Carbon Trapping or Carbon Sequestration as it is sometimes called, seems to be too expensive and includes no economic incentive. Last time we checked, plants were pretty great at converting CO2 into oxygen and trapping carbon in the structure of the plant. Why not turn the geseous CO2 pollution into CO2 tanks that can be used by professional plant growers? People who grow plants use CO2 tanks to promote accelerated growth and are accustomed to paying for their CO2 tanks. What about requiring businesses that are producing CO2 pollution to also grow plants in greenhouses, using the C02 created by the smokestack to feed the plants? This way energy producers and smokestack factories would still be spending money on preventing air pollution but they could also consider that expense as an investment into an agricultural product. Even if the greenhouses grew non edible plants that could be used to create biofuels, wouldn’t this solution create an economic incentive as part of the regulation of smokestack industries? The biofuel created could also be sold for additional profits. An additional benefit in this manufacturing structure would also be the fact that you can use the excess waste heat created by the smokestack to control the temperature in the greenhouses, allowing for the production of plants that normally do not grow in colder climates. Selling tropical fruits in a place in the world where those plants need to be imported from far away is a great way to make money on a cash crop. One other idea which might merit further exploration would be nanocarbon. Recently, inventors have used a super resilient material called nanocarbon in vehicles and other manufactured products. Is it possible to use the trapped carbon to create nanocarbon materials? Sticking something in your closet is one way to get it out of your face, but it is still taking up space in your storage space and doing nothing positive. You will have to expend more energy moving it out of your closet at some point anyways, so why not figure out something productive to do with it in the first place?
Newfoundland Predator Project CSI Newfoundland: molecular determination of caribou calf predators Student: Matt Mumma Visit Matt's research page The Newfoundland woodland caribou (Rangifer tarandus) population has decreased by greater than 66% since 1996. Habitat loss and overgrazing are likely implicated, but current recruitment is low due to high calf predation by black bears (Ursus americanus) and invasive coyotes (Canis latrans). Previously, kill site observations and necropsy results, when available, were used to assign the predator species to calf predation mortalities, but 26% of kills were unable to be assigned to a predator species. We used molecular techniques to identify the predator species at caribou calf kill sites in Newfoundland, Canada. In 2010, radio-telemetry collars were placed on 1 to 3 day-old caribou calves and calves were monitored from June through October. When a mortality signal was detected, the collar location was investigated to determine if predation had occurred. Calf mortality sites were searched for predator scat and hair. Calf carcasses were inspected for killing bite wounds as evidenced by hemorrhaging. An ethanol-soaked cotton swab was used to sample killing bite wounds for predator saliva cells. Other non-killing wounds were also sampled. In the absence of a carcass, bones, hide, and/or the collar were swabbed. Scat, hair, and swab samples were analyzed using several DNA species identification tests to distinguish among black bears, coyotes, lynx (Lynx canadensis) and red foxes (Vulpes vulpes). Molecular techniques identified a predator species at 92% of kill sites. None of these kill sites identified more than one predator species. 70% were attributed to coyotes and 30% were attributed to black bears. There was a 100% success rate in identifying the predator species from killing bite wound swabs. 75% were identified as coyotes and 25% were identified as black bears.
NASA satellites pinged the surface of the Earth with 2.5 million carefully positioned laser pulses to measure the height of the world's forests. The map above shows the peak height of the canopy with a spatial resolution of 0.6 miles. The tropics, as you might expect, have a high canopy height, with large areas of red for South America, south-central Africa, Indonesia, and Papua New Guinea. (Perhaps California redwoods also appear red.) Note that the deserts of Africa and Australia have an absence of trees. NASA scientists hope the map will assist in estimating the carbon sinks formed by the forests. It will also be a valuable record for measuring climate change.
In this image provided by NASA Tuesday Sept. 13, 2005 the Hubble Space Telescope "caught" the Boomerang Nebula in these new images taken with the Advanced Camera for Surveys. This reflecting cloud of dust and gas has two nearly symmetric lobes (or cones) of matter that are being ejected from a central star. Over the last 1,500 years, nearly one and a half times the mass of our Sun has been lost by the central star of the Boomerang Nebula in an ejection process known as a bipolar outflow. The nebula's name is derived from its symmetric structure as seen from ground-based telescopes. Hubble's sharp view is able to resolve patterns and ripples in the nebula very close to the central star that are not visible from the ground.
5. Diffusion and the κ mechanism To determine the frequencies of modes of oscillation for a star requires only that we solve the adiabatic equations. Solving the full nonadiabatic equations of stellar oscillation allows us to calculate the growth rates of the modes, and hence to determine which of the modes are overstable; also, by considering the work integral we can investigate the contributions of the different parts of the star to the excitation and damping of the mode. The nonadiabatic oscillation package used was generously provided to us by W. Dziembowski and follows the procedure first described by Dziembowski (1977). We are mainly concerned here with excitation via the κ mechanism, on which abundance variations have a direct impact. We note, however, that the present calculations lack a good modeling of the effect of convection; this must be kept in mind in the analysis of the results. The physics of the κ mechanism has been reviewed extensively (e.g. Cox 1980; Unno et al. 1979; Gautschy & Saio 1995). As a quick reminder, generating pulsations in a star requires that the energy gained by an oscillation mode over a complete cycle be larger than the energy lost. We are then looking for a positive net work over the entire star over one cycle. In the case of the κ mechanism, the energy is transferred from the outward radiation flux to the oscillation mode via the opacity κ. A mode becomes overstable by this mechanism if the opacity profile and its derivatives have the right features. Following Unno et al. (1979), from the definition of the work W as the variation of the kinetic energy E over a cycle, the work can be written as an integral over the star of the Lagrangian perturbations of the temperature, the nuclear energy generation rate and the fluxes; here δ denotes Lagrangian perturbations, σ is the (angular) oscillation frequency, T is temperature, m_r is the mass interior to the radius r, ε_N is the nuclear energy generation rate, and F_R and F_C are the radiative and convective fluxes. If one neglects the contribution from the nuclear (ε_N) and convective (F_C) terms, and only keeps the perturbation of the radiative flux (δF_R), one can isolate the contribution of the κ mechanism to the driving of a given mode of oscillation. To obtain a simple estimate of this contribution to the work integral we make the quasi-adiabatic approximation (i.e. evaluate the work integral by means of adiabatic eigenfunctions), and furthermore assume that the adiabatic thermodynamic derivatives are constant. Then the work done by the κ mechanism is governed by the run of the opacity and its derivatives: local increases in the logarithmic derivatives of κ are necessary for driving, and a decrease in Γ3 − 1 in the partial ionization zones of a dominant species (H or He) is helpful; regions with these features contribute to the excitation. It also follows that regions where the gradients of these quantities are negative contribute to damping of the pulsation. The κ_T term usually dominates over the other term. The numerical results reported in the text, including the growth parameters and work integrals, are computed using the full nonadiabatic procedure of the Dziembowski code. In order that the excitation by the κ mechanism should not be cancelled by damping elsewhere, it is necessary that the driving region lie in the so-called transition zone between the quasi-adiabatic and nonadiabatic regimes; in that case, the oscillations are strongly nonadiabatic outside the driving region, and this part of the star therefore does not contribute to the damping, giving rise to net driving.
This leads to an approximate relation between the period Π of a given mode of pulsation and the position of the transition region in a star (Cox 1980): Π ≈ ⟨c_V T⟩ ΔM / L, where ΔM is the mass outside the transition region, ⟨c_V T⟩ is the average over that part of the star, c_V being the specific heat, and L is the luminosity. The normalized growth rate is defined as η = W / ∫ |dW/dr| dr. In this formulation, η varies from +1, if there is driving in the entire star, to −1, if there is damping in the entire star. The value of zero defines neutral stability. Diffusion affects the κ mechanism by decreasing driving from helium in favour of driving from metals. As a consequence of the period relation above, the pulsation period of the unstable modes depends on the depth of the driving region. During a star's evolution the helium ionization zone gradually shifts deeper in the star, thereby increasing the period of the observed pulsation modes. Additionally, as the driving in the deeper iron-peak driving region increases while the driving due to helium decreases as a result of diffusion, one might expect the observed pulsation periods to shift to even longer periods. The effect of abundance variations on the opacity profiles is discussed below for selected models (see Fig. 3).
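To make the normalized growth rate concrete, the following minimal Python sketch (my own illustration, not part of the paper; the dW/dr profile is an invented toy array) integrates a work-integral density over the star and evaluates η = W / ∫|dW/dr| dr, so that η > 0 flags an overstable mode.

```python
import numpy as np

# Toy radial grid and an invented work-integral density dW/dr:
# positive values drive the mode, negative values damp it.
r = np.linspace(0.0, 1.0, 500)   # fractional radius (placeholder grid)
dW_dr = np.exp(-((r - 0.85) / 0.05) ** 2) - 0.4 * np.exp(-((r - 0.5) / 0.2) ** 2)

W = np.trapz(dW_dr, r)                    # net work over one cycle
eta = W / np.trapz(np.abs(dW_dr), r)      # normalized growth rate, bounded by [-1, +1]

print(f"net work W = {W:.3f}, normalized growth rate eta = {eta:.3f}")
# eta > 0: net driving (overstable mode); eta < 0: net damping; eta = 0: neutral stability.
```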
Today I found out that one Calorie is equivalent to one gram of TNT in terms of energy. First of all, let's clarify what one Calorie means. A Calorie is the amount of energy required to raise the temperature of one kilogram of water by one degree Celsius. One Calorie is also approximately 4.184 kilojoules, or about 1.16 watt-hours. TNT (trinitrotoluene) is a chemical compound. This yellow-colored solid is sometimes used as a reagent in chemical synthesis, but it is best known as a useful explosive material with convenient handling properties. Unlike a Calorie, TNT is an actual substance, and its explosive yield is considered the standard measure of the strength of bombs and other explosives, with 1 ton of TNT equaling 4.184 gigajoules. So 1 kg of TNT equals 4.184 megajoules, and thus a single gram of TNT is equivalent in energy to one Calorie. For further comparison, 1 kg of gunpowder will produce about 3 megajoules of energy when exploding (roughly 0.7 kg of TNT), 1 kg of dynamite contains about 7.5 megajoules when exploding (roughly 1.8 kg of TNT), and 1 kg of gasoline produces about 47.2 megajoules (roughly 11.3 kg of TNT), though of course it requires an oxidant. Now I know... Originated from Today I Found Out
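As a quick check of that arithmetic, here is a small Python sketch (my own, not from the original post) that converts an energy release into its TNT equivalent using the 4.184 GJ-per-ton convention.

```python
# By convention, 1 ton of TNT = 4.184 GJ, so 1 kg of TNT = 4.184 MJ
# and 1 g of TNT = 4184 J, i.e. exactly one (food) Calorie.
TNT_MJ_PER_KG = 4.184

def tnt_equivalent_kg(energy_mj: float) -> float:
    """Mass of TNT (kg) whose explosive yield matches the given energy (MJ)."""
    return energy_mj / TNT_MJ_PER_KG

for name, energy_mj in [("gunpowder", 3.0), ("dynamite", 7.5), ("gasoline", 47.2)]:
    print(f"1 kg of {name} ({energy_mj} MJ) ~ {tnt_equivalent_kg(energy_mj):.1f} kg of TNT")
```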
As I move around the states looking at the differences between the GISS station temperatures, the USHCN homogenized temperatures and the original raw temperature data (albeit modified by a correction for the Time of Observation (TOBS)), we have returned to Kansas, which I had originally written about before the TOBS data became available, and then written about again looking at that data. Since then the style of the presentations has changed a little bit, so I am going to redo the Kansas post, and just put the combined temperature reference into the list down the right side of the site, so as to bring it (and later some of the other states) into the same format as I have now evolved. So to begin, Kansas has 31 USHCN stations that are reasonably evenly distributed around the state: USHCN station locations in Kansas (USHCN). Back when I did the initial TOBS data analysis I was still sufficiently naïve that I did not realize how, by manipulation of the station picks, GISS could manipulate the temperatures that it reports. Chiefio has explained how the current selection raises the overall U.S.A. station record by 0.6 deg C over the USHCN average, by selective deletion of stations, for 2008, for example. There is also a new set of 59 GISS stations that have been added, but these do not have the historic data that I have been comparing to, so I am going to ignore those additions for the rest of this series. However, there is one additional point that this study has brought out about how NASA, shall we say, “fudges” their data. I first mentioned, when looking at Idaho’s temperatures, the GISS habit of marrying a short-term station with a long-term one. E.M. Smith (Chiefio) has done a detailed post explaining how, by using this type of combination, it is possible to generate a trend that does not exist in the initial data. What has since been interesting, as I have continued with this series, is finding just how many state temperature situations that applies to. And the first one that I noticed, though without then realizing the reason, was Kansas, the second state that I had looked at. There are 5 GISS stations on the list, including Wichita, Topeka, Concordia, Dodge City and Goodland. It is Goodland that only has data since 1948. Goodland KS GISS station data. Because of the small number of stations, the impact of the partial temperature record on the relative difference between the GISS and USHCN average values is significant (using the homogenized USHCN data for this). Difference between the average GISS station temperature and that of the homogenized USHCN stations. Turning to the TOBS data, and looking at the average temperature change in the state over time, the temperature rise is 0.85 degrees per century (the homogenized data would suggest a rise of 1.2 degrees). Average TOBS temperatures for Kansas (USHCN). The geography of Kansas is that it is 400 miles long and 210 miles wide, running from roughly 94.5 deg W to 102 deg W, and 37 deg N to 40 deg N. The highest point is at 1,231 m, and the average elevation is 609.6 m. (The average USHCN station is at 511 m and the average GISS station at 586 m.) Looking therefore at the variation in temperature with the geographical parameters of the state: Effect of station latitude on recorded temperatures in Kansas. This is a state where the effect of latitude is clearly defined.
Given that the state elevation rises steadily to the west as we move towards the Rockies, it is not surprising that we see that the stronger, and real, correlation is with the changing elevation. There was not, in the original search for data, much problem in finding population numbers. Effect of current population size on average station temperature for Kansas. Note that for the curve above, because it is the only source of information for the larger cities in the state, I have included the GISS station data. And finally there is the difference that shows the effect of USHCN homogenization of the data relative to that recorded and adjusted only for time of observation. I am replacing the original two posts on Kansas in the comprehensive list with this post, so as to give greater consistency to the formatting.
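The state-average trends quoted above (0.85 degrees per century from the TOBS data versus roughly 1.2 degrees from the homogenized data) are ordinary least-squares slopes of annual mean temperature against year. A minimal Python sketch of that calculation, using invented placeholder values rather than the actual USHCN series, might look like this:

```python
import numpy as np

# Placeholder annual state-average temperatures; substitute the real
# TOBS-adjusted or homogenized USHCN series to reproduce the quoted trends.
years = np.arange(1895, 2009)
temps = 12.0 + 0.0085 * (years - 1895) + np.random.normal(0.0, 0.6, years.size)

slope_per_year, intercept = np.polyfit(years, temps, 1)   # least-squares linear fit
print(f"trend: {slope_per_year * 100:.2f} degrees per century")
```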
Coronet Cluster: A region of star formation about 420 light years from Earth. Caption: The Corona Australis region is one of the nearest and most active regions of ongoing star formation in our Galaxy. At only 420 light years away, the Coronet is 3.5 times closer than the Orion Nebula Cluster. The Coronet contains a loose cluster of a few dozen known young stars with a wide range of masses at various stages of evolution. The central area of the star-forming region contains the densest clustering of very young stars, embedded in dust and gas. This composite image shows the Coronet in X-rays (Chandra, purple) and infrared emission (Spitzer, orange, green, and cyan). By studying the variability in different energies, scientists hope to better understand the evolution of very young stars. Scale: Image is 16.8 arcmin per side. Chandra X-ray Observatory ACIS Image
Because for truly reliable information on a subject matter as extensively researched as earth's climate, what you should be looking for are statements of large and independent scientific organizations of good reputation. In the case of climate change, that would be national or international bodies of geologists, climatologists, biologists, geophysical unions, stuff like that. Once you check out what they have to say, you will find out that there really isn't much of a debate: An increasing body of observations gives a collective picture of a warming world and other changes in the climate system [...] There is new and stronger evidence that most of the warming observed over the last 50 years is attributable to human activities. - Climate Change 2001: Working Group I: The Scientific Basis, IPCC, January 2001. That is just one of the many statements of prominent organizations. Even those statements should be checked against others of equal standing: That is the level on which we determine whether there is a consensus or an actual controversy. In the case of climate change, there is an overwhelming consensus. Not a single scientific body of national or international standing has maintained a dissenting opinion. There is no controversy; just some big money trying to dispute the facts. Kinda like creationism, only without the weird rituals.
St. Helens might be part of a supervolcano. IS A supervolcano brewing beneath Mount St Helens? Peering under the volcano has revealed what may be an extraordinarily large zone of semi-molten rock, which would be capable of feeding a giant eruption. Magma can be detected with a technique called magnetotellurics, which builds up a picture of what lies underground by measuring fluctuations in electric and magnetic fields at the surface. The fields fluctuate in response to electric currents travelling below the surface, induced by lightning storms and other phenomena. The currents are stronger when magma is present, since it is a better conductor than solid rock. Graham Hill of GNS Science, an earth and nuclear science institute in Wellington, New Zealand, led a team that set up magnetotelluric sensors around Mount St Helens in Washington state, which erupted with force in 1980. The measurements revealed a column of conductive material that extends downward from the volcano. About 15 kilometres below the surface, the relatively narrow column appears to connect to a much bigger zone of conductive material. I remember when St. Helens blew in 1980; besides all the general noise, for months afterward every time the thing hiccupped- or seemed about to- the voice on the Civil Defense phone would put out a warning. Assuming it actually is one part of a supervolcano base, if it ever blew for real there'd probably be one warning- "Ok, folks, we're all screwed"- and that's it.
An object is hashable if it has a hash value which never changes during its lifetime (it needs a __hash__() method), and can be compared to other objects (it needs an __eq__() or __cmp__() method). Hashable objects which compare equal must have the same hash value. Hashability makes an object usable as a dictionary key and a set member, because these data structures use the hash value internally. All of Python’s immutable built-in objects are hashable, while no mutable containers (such as lists or dictionaries) are. Objects which are instances of user-defined classes are hashable by default; they all compare unequal, and their hash value is their id(). Definition from http://docs.python.org/glossary.html
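A short interactive sketch (mine, not part of the quoted glossary entry) showing these rules in action: immutable built-ins work as dictionary keys, mutable containers raise TypeError, and instances of a plain user-defined class are hashable by default.

```python
# Immutable built-ins are hashable, so they can serve as dict keys and set members.
d = {(1, 2): "tuple key", "text": "string key", frozenset({1, 2}): "frozenset key"}

# Mutable containers such as lists are not hashable.
try:
    d[[1, 2]] = "list key"
except TypeError as err:
    print("lists are unhashable:", err)

# Instances of user-defined classes are hashable by default: they compare
# unequal to each other and their hash is derived from their identity.
class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

p, q = Point(1, 2), Point(1, 2)
print(p == q)                # False: default comparison is by identity
print({p: "ok", q: "fine"})  # both work as distinct dictionary keys
```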
The blind cavefish (Astyanax mexicanus) is a sightless version of a popular aquarium species, the Mexican tetra. They live in 29 deep caves scattered throughout Mexico, which their sighted ancestors colonised in the middle of the Pleistocene era. In this environment of perpetual darkness, the eyes of these forerunners were of little use and, as generations passed, they disappeared entirely. They now navigate through the pitch-blackness by using their lateral lines to sense changes in water pressure. But there is a deceptively simple way of restoring both the eyes and sight that evolution has taken away, and Richard Borowsky from New York University’s Cave Biology Research Group has found it. You merely cross-breed fish from different caves. Unlike their completely eyeless parents, the hybrids develop eyes, albeit ones that are smaller than those of their relatives on the surface. More amazingly still, many of them could actually see, as shown by their ability to reflexively follow a series of moving stripes. In the most successful inter-cave cross, over a third of the offspring had working eyes. And if the blind fish were bred with surface ones, every single one of their offspring could see. Not bad for a lineage that hasn’t seen light for over a million years! The hybrids’ restored eyes are a reflection of the genetic changes of their parents. Eyes are very complicated structures and their development is governed by a whole suite of genes. In a previous study, Borowsky found eye genes in twelve different places around the genome of one cavefish population. Mutating any of these could interfere with the production of a working eye, which means that there are many ways of evolving blindness. Fish populations from different caves have each taken their own individual route, involving changes to different combinations of genes. Based on his new data, Borowsky thinks that this happened on at least three independent occasions, with each group losing their eyes through changes in three or four of the twelve key sites. But in the hybrids, every faulty gene from one parent was compensated for by the working version from the other. Borowsky also found that two fish had a greater chance of producing a blind hybrid if they hailed from closer caves. That suggests that fish from neighbouring caves are more closely related than those from distant ones and have more similar genes underlying their blindness. Image courtesy of Richard Borowsky. Reference: Borowsky, R. (2008). Restoring sight in blind cavefish. Current Biology, 18(1), R23-R24.
Eternal Life of Stardust, by Staff Writers, Pasadena CA (SPX), Sep 01, 2006. A new image from NASA's Spitzer Space Telescope is helping astronomers understand how stardust is recycled in galaxies. The cosmic portrait shows the Large Magellanic Cloud, a nearby dwarf galaxy named after Ferdinand Magellan, the seafaring explorer who observed the murky object at night during his fleet's historic journey around Earth. Now, nearly 500 years after Magellan's voyage, astronomers are studying Spitzer's view of this galaxy to learn more about the circular journey of stardust, from stars to space and back again. "The Large Magellanic Cloud is like an open book," said Dr. Margaret Meixner of the Space Telescope Science Institute, Baltimore, Md. "We can see the entire lifecycle of matter in a galaxy in this one snapshot." Meixner is lead author of a paper on the findings to appear in the November 2006 issue of the Astronomical Journal. The vibrant false-color image, a mosaic of approximately 300,000 individual frames, shows a central blue sea of stars amidst lots of colorful, choppy waves of dust. Space dust is important for making stars, planets and even people. The tiny particles -- flecks of minerals, ices and carbon-rich molecules -- are everywhere in the universe. Developing stars and solar systems are constantly consuming dust, while older stars shed dust back into space, where it will one day provide the ingredients for new generations of stars. Spitzer, an infrared observatory orbiting the sun, is extremely sensitive to the infrared glow of dust that arises when stars heat it up. The observatory's unprecedented view of the Large Magellanic Cloud offers a unique look at three stops on the eternal ride of dust through a galaxy: in collapsing envelopes around young stars; scattered about in the space between stars; and in expelled shells of material from old stars. "The Spitzer observations of the Large Magellanic Cloud are giving us the most detailed look yet at how this feedback process works in an entire galaxy," said Meixner. "We can quantify how much dust is being consumed and ejected by stars." In addition to dust, Spitzer's view reveals nearly one million never-before-seen objects, most of which are stars in the Large Magellanic Cloud. The hidden stars, both young and old, are embedded in layers of dust that block visible starlight but shine in infrared. The Large Magellanic Cloud is one of a handful of dwarf galaxies that orbit our own Milky Way. It is located near the southern constellation Dorado, about 160,000 light-years from Earth. About one-third of the whole galaxy can be seen in the Spitzer image. Astronomers believe that approximately six billion years ago, not long before our solar system formed, this dwarf galaxy was shaken up via a close encounter with the Milky Way. The resulting chaos triggered bursts of massive star formation similar to what is thought to occur in more primitive galaxies billions of light-years away. This and other distant-galaxy traits, such as an irregular shape and low abundance of metals, make the Large Magellanic Cloud the perfect nearby target for studying the faraway universe. This research is part of a Spitzer Legacy program called Surveying the Agents of a Galaxy's Evolution, also known as Sage. The international Sage team includes more than 50 astronomers spread over the globe from Japan to the United States.
The main data centers are located at: the Space Telescope Science Institute, Baltimore, Md., led by Meixner; University of Arizona, Tucson, led by Gordon; and University of Wisconsin, Madison, led by Dr. Barbara Whitney. NASA's Jet Propulsion Laboratory, Pasadena, Calif., manages the Spitzer Space Telescope mission for NASA's Science Mission Directorate, Washington. Science operations are conducted at the Spitzer Science Center at the California Institute of Technology, also in Pasadena. Caltech manages JPL for NASA. Spitzer's infrared array camera and multiband imaging photometer captured the new image. The camera was built by NASA's Goddard Space Flight Center, Greenbelt, Md. Its principal investigator is Dr. Giovanni Fazio of the Harvard-Smithsonian Center for Astrophysics. The photometer was built by Ball Aerospace Corporation, Boulder, Colo.; the University of Arizona; and Boeing North American, Canoga Park, Calif. Its principal investigator is Dr. George Rieke of the University of Arizona, Tucson. Original Text: Space Daily 01 Sept: The Eternal Life Of Stardust Portrayed General Relativity Survives Gruelling Pulsar Test: Einstein At Least 99.95 Percent Right An international research team led by Prof. Michael Kramer of the University of Manchester's Jodrell Bank Observatory, UK, has used three years of observations of the "double pulsar", a unique pair of natural stellar clocks which they discovered in 2003, to prove that Einstein's theory of general relativity - the theory of gravity that displaced Newton's - is correct to within a staggering 0.05%. Their results are published on the14th September in the journal Science and are based on measurements of an effect called the Shapiro Delay. Here's a depiction of the double pulsar system currently being tracked by the international team of radio astronomers who discovered it, including Dr. Duncan Lorimer and Dr. Maura McLaughlin of West Virginia University. The pulsars are the remnants of two massive stars that burned out by way of supernova explosions. They measure just 12 miles across, but each weighs more than our own Sun. Note the "bend" in the space-time fabric from the sheer mass of the two bodies. Image courtesy of West Virginia University The double pulsar system, PSR J0737-3039A and B, is 2000 light-years away in the direction of the constellation Puppis. It consists of two massive, highly compact neutron stars, each weighing more than our own Sun but only about 20 km across, orbiting each other every 2.4 hours at speeds of a million kilometres per hour. Separated by a distance of just a million kilometres, both neutron stars emit lighthouse-like beams of radio waves that are seen as radio "pulses" every time the beams sweep past the Earth. It is the only known system of two detectable radio pulsars orbiting each other. Due to the large masses of the system, they provide an ideal opportunity to test aspects of General Relativity: Gravitational redshift: the time dilation causes the pulse rate from one pulsar to slow when near to the other, and vice versa. Shapiro delay: The pulses from one pulsar when passing close to the other are delayed by the curvature of space-time. Observations provide two tests of General Relativity using different parameters. Gravitational radiation and orbital decay: The two co-rotating neutron stars lose energy due to the radiation of gravitational waves. This results in a gradual spiralling in of the two stars towards each other until they will eventually coalesce into one body. 
By precisely measuring the variations in pulse arrival times using three of the world's largest radio telescopes, the Lovell Telescope at Jodrell Bank, the Parkes radio-telescope in Australia, and the Robert C. Byrd Green Bank Telescope in West Virginia, USA, the researchers found the movement of the stars to exactly follow Einstein's predictions. "This is the most stringent test ever made of General Relativity in the presence of very strong gravitational fields -- only black holes show stronger gravitational effects, but they are obviously much more difficult to observe", says Kramer. Since both pulsars are visible as radio emitting clocks of exceptional accuracy, it is possible to measure their distances from their common centre of gravity. "As in a balanced see-saw, the heavier pulsar is closer to the centre of mass, or pivot point, than the lighter one and so allows us to calculate the ratio of the two masses", explains co-author Ingrid Stairs, an assistant professor at the University of British Columbia in Vancouver, Canada. "What's important is that this mass ratio is independent of the theory of gravity, and so tightens the constraints on General Relativity and any alternative gravitational theories," adds Maura McLaughlin, an assistant professor at West Virginia University in Morgantown, WV, USA. Though all the independent tests available in the double pulsar system agree with Einstein's theory, the one that gives the most precise result is the time delay, known as the Shapiro Delay, which the signals suffer as they pass through the curved space-time surrounding the two neutron stars. It is close to 90 millionths of a second and the ratio of the observed and predicted values is 1.0001 +/- 0.0005 - a precision of 0.05%. A number of other relativistic effects predicted by Einstein can also be observed. "We see that, due to its mass, the fabric of space-time around a pulsar is curved. We also see that the pulsar clock runs slower when it is deeper in the gravitational field of its massive companion, an effect known as "time dilation". A key result of the observations is that the pulsars' separation is seen to be shrinking by 7 mm/day. Einstein's theory predicts that the double pulsar system should be emitting gravitational waves - ripples in space-time that spread out across the Universe at the speed of light. "These waves have yet to be directly detected," points out team member Prof. Dick Manchester from the Australia Telescope National Facility, "but, as a result, the double pulsar system should lose energy, causing the two neutron stars to spiral in towards each other by precisely the amount that we have observed - thus our observations give an indirect proof of the existence of gravitational waves." Michael Kramer concludes: "The double pulsar is really quite an amazing system. It not only tells us a lot about general relativity, but it is also a superb probe of the extreme physics of super-dense matter and strong magnetic fields, and it is helping us to understand the complex mechanisms that generate the pulsar's radio beacons." He adds: "We have only just begun to exploit its potential!" Science Daily releases: 14th Sept 2006 Source: PPARC
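The see-saw argument Stairs describes is easy to put into numbers: with both pulsars orbiting their common centre of mass, m_A * a_A = m_B * a_B, so the mass ratio is just the inverse ratio of their distances from the pivot. A tiny Python sketch (the orbit sizes below are illustrative placeholders, not the measured values) shows the idea.

```python
# Balanced see-saw about the common centre of mass: m_A * a_A = m_B * a_B,
# so the mass ratio follows directly from the two measured orbit sizes.
a_A = 1.00   # distance of pulsar A from the centre of mass (placeholder units)
a_B = 1.07   # distance of pulsar B from the centre of mass (placeholder)

mass_ratio = a_B / a_A   # m_A / m_B
print(f"m_A / m_B = {mass_ratio:.3f} (the heavier pulsar sits closer to the pivot)")
```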
How do we know about the different climates that Britain has experienced in the past? Geologists find evidence in all sorts of places. The characteristics of different rocks depend on the environment in which the sediments were deposited. Some sands and gravels are dropped by glaciers as they melt and they become a distinctive rock called till. Where till is found there must have been glaciers and therefore it must have been cold. Rocks that form in a hot desert environment are often coloured red with iron deposits. Also at high temperatures, sea water can evaporate quickly leaving behind a layer of salt on the ground which becomes preserved in the rocks and is another indicator of a hot climate. Different species of plants and animals need different conditions to survive. Some plants and animals can be very sensitive to climate and do not adapt easily to change. For example coral reefs live in tropical waters. They need a particular temperature, a specific depth of water and the right amount of light. If the depth of the water changes just a fraction, they cannot survive. Therefore where fossil corals are found it is possible to estimate fairly precisely the environment they lived in by assuming that they needed the same conditions as those that thrive today. Plants produce pollens and spores that are particularly useful in helping to determine climate. They are tiny with a resistant outer case and are produced in millions. This means that they can be covered in mud quickly and are more easily preserved as fossils than large animals. Each plant has different shaped pollen or spores so when the fossil is put under a microscope it is possible to identify the type of plant it came from. Different plants are adapted to different climates therefore looking at all the types of pollen present in a layer of rock can be a good indication of the climate at the time when they were living. Microfossils are tiny fossils, usually smaller than 4 mm in size. They can be the remains of minuscule life forms or microscopic parts of plants and animals, for example tiny bones from larger animals. Like other plants and animals they may require specific living conditions, but they are more abundant and are found in many types of sedimentary rocks. Because of this they can be a useful indicator of how climate has changed over a period of time. Diversity trends and bioevents of Cytherelloidea in the boreal North Sea, by stage (youngest first):
- Third phase of diversification associated with rising temperatures
- Extinction related to the onset of cooler climate
- Second phase of diversification during the warmer late Aptian
- Extinction related to deteriorating climatic conditions
- First phase of diversification during the warmer Hauterivian
- First consistent appearance in the North Sea
- Earliest appearance in the Cretaceous North Sea
Today's landscape gives clues to the climate of the past. Glaciers, for instance, left telltale signs of their activity. As glaciers moved slowly down river valleys in the UK, they often carved out a semicircular shape, similar to the deep U-shaped valleys of the Norwegian fjords. Glaciers also pick up pieces of rock and gravel and, as they move forwards, this debris scratches grooves called 'striations' into the rocks on the valley floor. When the ice melts, the very large pieces of rock they have been carrying are dumped, often many miles from where they were picked up. These abandoned boulders are called 'erratics'.
All of these kinds of erosion and deposition clearly tell of temperatures much colder than our current climate. There is more than one type of oxygen, and these different types are called isotopes. Some of these isotopes are ‘heavier’ than others, and their mass is determined by the number of neutrons the isotopes contain. Oxygen-16 (containing 8 protons and 8 neutrons in each atomic nucleus) is lighter than oxygen-18 (8 protons and 10 neutrons). In water the relative amount of each type of oxygen varies with the temperature. Rain and ice contain water high in oxygen-16, so when there is lots of ice the ocean water is left with a lot of oxygen-18. Shells and sediments carry the isotopic signature of the water in which they were formed. Therefore, by analysing the ratio of oxygen-16 to oxygen-18 recorded in rocks, fossils, ice and sediments we can find evidence of the climate at the time they were formed.
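As a small illustration of how that ratio is used in practice (my own sketch, not part of the original article), the oxygen-18 to oxygen-16 ratio of a sample is usually quoted as a delta value in parts per thousand relative to a standard; higher values in marine shells generally indicate more water locked up in ice.

```python
def delta_18O(ratio_sample: float, ratio_standard: float) -> float:
    """Express an 18O/16O ratio as a delta value (per mil) relative to a standard."""
    return (ratio_sample / ratio_standard - 1.0) * 1000.0

# Hypothetical 18O/16O ratios; the standard is roughly that of modern ocean water.
R_STANDARD = 0.0020052
r_shell = 0.0020092   # made-up measurement from a fossil shell

print(f"delta-18O = {delta_18O(r_shell, R_STANDARD):+.2f} per mil")
```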
Bio-Rock is a technology that uses low voltage electrical current on artificial underwater structures to encourage growth of Corals and other reef life. Experiments with the technology worldwide have shown that it can help counteract some of the difficult environmental factors affecting coral growth. In conjunction with Save Koh Tao, Big Blue and a consortium of other dive schools launched a pilot project a few years ago to see if the technology would be successful here. The pilot project has been so successful that a new larger Bio-Rock was constructed in 2008. Now that the structure is in situ, regular dives are scheduled to the site for two main reasons. The first is to continue to plant broken Coral pieces on the structure and the second is to continue to monitor the growth of the test subjects. When the structure was first finished, test subjects were placed on the structure and tagged so that their growth could be monitored. This information is collated by Marine Conservation Koh Tao for scientific purposes. When a dive group visits the structure, various data including photographs are taken and then forwarded to Marine Conservation Koh Tao. For more information on the technology visit www.biorock.net.
NASA Arctic Observatory & Sea Ice in the Polar Regions. Field of Science or Math, grades 9, 10, 11, 12. Arctic Observatory - Interactively addresses Arctic phenomena and processes, allowing students to ask and answer questions about interrelationships among several physical aspects of the Arctic system. A printable teacher's guide is included on the CD. Sea Ice in the Polar Regions - Describes sea ice classification, observation, and climate impacts. Earth System Science and Global Change: http://www.usra.edu/esse/learnmod.html
The basics of embedded software testing: Part 1 Test is the last step in traditional software development. We gather requirements, do high level design, detailed design, create code, do some unit testing, then integrate and, finally, start final test. Since most projects run late, what do you think gets cut? Test, of course. The implication is that we deliver bug-ridden products that infuriate our customers and drive them to competitive products. Best practice development includes code inspections. Yet inspections typically find only 70% of a system’s bugs, so a fabulous test regime is absolutely essential. Test is like a double-entry bookkeeping system that ensures mistakes don’t leak into the deployed product. In every other kind of engineering, testing is considered fundamental. In the USA, every Federally funded bridge must undergo extensive wind tunnel tests, for instance. Mechanical engineers subject spacecraft to an almost bizarre series of evaluations. It’s quite a sight to see a 15-foot-high prototype being nearly torn to pieces on a shaker, which vibrates at a rate that puts a thousand-Hertz tone into the air. The bridge prototype, as well as that of the shaken spacecraft, is discarded at great expense, but in both cases that cost is recognized as a key ingredient of proper engineering practices. Yet in the software world test is the ugly stepchild. No one likes to do it. Time spent writing tests feels wasted, despite the fact that test is a critical part of all engineering disciplines. Many segments of the embedded systems design community have thankfully embraced test as a core part of their processes, and advocate creating tests synchronously with writing the code, realizing that leaving such a critical step till the end of the project is folly. Application versus embedded testing. Embedded systems software testing shares much in common with application software testing. Thus, much of this two-part article is a summary of basic testing concepts and terminology. However, some important differences exist between application testing and embedded systems testing. Embedded developers often have access to hardware-based test tools that are generally not used in application development. Also, embedded systems often have unique characteristics that should be reflected in the test plan. These differences tend to give embedded systems testing its own distinctive flavor. This article covers the basics of testing and test case development and points out details unique to embedded systems work along the way. Before you begin designing tests, it’s important to have a clear understanding of why you are testing. This understanding influences which tests you stress and (more importantly) how early you begin testing. In general, you test for four reasons: • To find bugs in software (testing is the only way to do this) • To reduce risk to both users and the company • To reduce development and maintenance costs • To improve performance To Find the Bugs. One of the earliest important results from theoretical computer science is a proof (related to the Halting Theorem) that it is impossible, in general, to prove that an arbitrary program is correct. To Reduce Costs. The classic argument for testing comes from Quality Wars by Jeremy Main. In 1990, HP sampled the cost of errors in software development during the year. The answer, $400 million, shocked HP into a completely new effort to eliminate mistakes in writing software.
The $400M waste, half of it spent in the labs on rework and half in the field to fix the mistakes that escaped from the labs, amounted to one-third of the company’s total R&D budget and could have increased earnings by almost 67%. The earlier a bug is found, the less expensive it is to fix. The cost of finding errors and bugs in a released product is significantly higher than during unit testing, for example (Figure 2-1 below). Figure 2-1: The Cost to Fix a Problem. Simplified graph showing the cost to fix a problem as a function of the time in the product life cycle when the defect is found. The costs associated with finding and fixing the Y2K problem in embedded systems are a close approximation to an infinite cost model. To Improve Performance. Testing maximizes the performance of the system. Finding and eliminating dead code and inefficient code can help ensure that the software uses the full potential of the hardware and thus avoids the dreaded “hardware re-spin.”
Paleo Slide Set: The Ice Ages. Earth's axial tilt, adapted from Pisias and Imbrie [1986/1987]. Croll was aware that the obliquity of earth's axis varied through time, but Leverrier's astronomical calculations only allowed Croll to include eccentricity and precession in his theory. Milankovitch, on the other hand, benefited from innovations in astronomy that made it possible to incorporate changes in tilt into his calculations. Earth's axial tilt varies from 24.5 degrees to 22.1 degrees over the course of a 41,000-year cycle. Changes in axial tilt affect the distribution of solar radiation received at the earth's surface. When the angle of tilt is low, polar regions receive less insolation. When the tilt is greater, the polar regions receive more insolation during the course of a year. Like precession and eccentricity, changes in tilt thus influence the relative strength of the seasons, but the effects of the tilt cycle are particularly pronounced in the high latitudes where the great ice ages began. With Pilgrim's new calculations as his guide, Milankovitch embarked on an exhaustive series of calculations. Without a computer or even a calculator, the task was arduous indeed. While the calculations were complex, the reasoning behind them was quite simple. Croll had argued that winter insolation was the key factor in understanding the ice ages, but Milankovitch thought that summer insolation was more important. During periods of lower summer temperatures, he reasoned, less of the previous winter's snow would melt. Glaciation would soon begin after the snows of several winters piled up. Milankovitch set out to determine how variations in precession, eccentricity, and obliquity affected the amount of solar radiation received during the summer at particular latitudes.
A tenuous layer of gas, tens of kilometres above our heads, is an essential part of the "life support system" of Planet Earth. Without the ozone layer, it is doubtful that there would be any life on land EARTH is unique among the planets of our Solar System in having an atmosphere that is chemically active and rich in oxygen. Chemical reactions ought to reduce this atmosphere in a few thousand years to an unreactive state, with oxygen becoming locked up in stable compounds such as carbon dioxide and water. This does not happen because life on Earth is constantly renewing the oxygen content of the atmosphere. All the other planets that have atmospheres are surrounded by blankets of inert gases, such as carbon dioxide, hydrogen and methane. No interesting chemical activity can be going on there, because there are no active chemicals available for reactions. Alien explorers visiting our ...
Using the Style Attribute
How to use the style attribute in an HTML tag
To help clarify this, let's look at an example. If you want the color of some text to look red, the style attribute would look like this: style="color:red". The style sheet property is "color". The value of the color is "red". Notice there is a colon in between color and red, not an equal sign, and there are no extra quote marks. Now, you just insert this into an HTML tag, such as the <DIV> tag. DIV is just a division on a page. Remember to close the tag when you are through or you will have more red text than you bargained for! Here is the sample code: <DIV style="color:red">Wow, I am totally red!</DIV> This will give you the text rendered in red. You can also apply more than one property in the style attribute. Place a semicolon after your first property and value, and add another. So if we want the text to be red and to be italic, we would do the following: <DIV style="color:red; font-style:italic">I'm some red-hot italic text!</DIV> Now we will have text that is both red and italic. In this way, you can add any number of properties to the section of text. Just separate them with semicolons: <DIV style="color:red; font-style:italic; font-weight:bold; font-family:Arial">Now I'm also bold and have an Arial font!</DIV> Don't worry about learning all of these properties right now; we'll get to examples of all of the CSS properties in later tutorials. This is just showing you how to use the style attribute for now. If you want to see a page with a listing of the CSS properties, check out the Css Properties Tables and browse through the listings. You can even use the style attribute now to test some of them out if you'd like. Well, if you want to move on to the next section, you can see how to add the style properties to tags by using the <STYLE></STYLE> tags in the head section of your page. So, let's move on to: Declaring Styles in the Head Section.
By: John Pollock
Larger (smaller) velocity means larger (smaller) acceleration. Two cars are travelling down a road. At 12:15 pm the two drivers of the cars look at their speedometers. The driver of one car reads 30 mph, the other driver reads 60 mph on their speedometer. Which car has a greater acceleration? Explain your answer. If you are unable to determine the answer, explain why you can't. The answer cannot be determined, since there is no information about whether (or how) the cars are speeding up or slowing down. This website was written by Tom Brown and Jeff Crowder.
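To make the distinction concrete, here is a small Python sketch (my own illustration, not part of the original exercise). Acceleration is the rate of change of velocity, so a single speedometer reading says nothing about it; you need at least two readings separated in time.

```python
def average_acceleration(v1_mph: float, v2_mph: float, dt_s: float) -> float:
    """Average acceleration (m/s^2) from two speed readings (mph) taken dt_s seconds apart."""
    mph_to_ms = 0.44704          # 1 mph in metres per second
    return (v2_mph - v1_mph) * mph_to_ms / dt_s

# A car holding a steady 60 mph has zero acceleration...
print(average_acceleration(60, 60, 10.0))   # 0.0
# ...while a car going from 30 to 45 mph over the same 10 s is accelerating.
print(average_acceleration(30, 45, 10.0))   # about 0.67 m/s^2
```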
Principal ideal domains are thus mathematical objects which behave somewhat like the integers with respect to divisibility: any element of a PID has a unique decomposition into prime elements (so an analogue of the fundamental theorem of arithmetic holds); any two elements of a PID have a greatest common divisor. A principal ideal domain is a specific type of integral domain, and can be characterized by the following (not necessarily exhaustive) chain of class inclusions: commutative rings ⊃ integral domains ⊃ integrally closed domains ⊃ GCD domains ⊃ unique factorization domains ⊃ principal ideal domains ⊃ Euclidean domains ⊃ fields. An example of an integral domain that is not a PID is Z[X], the ring of polynomials with integer coefficients: it is not principal because the ideal generated by 2 and X is an example of an ideal that cannot be generated by a single polynomial. If M is a free module over a principal ideal domain R, then every submodule of M is again free; this does not hold for modules over arbitrary rings. All Euclidean domains are principal ideal domains, but the converse is not true. An example of a principal ideal domain that is not a Euclidean domain is the ring Z[(1 + √−19)/2]. Every principal ideal domain is a unique factorization domain (UFD). The converse does not hold, since for any field K, K[X,Y] is a UFD but is not a PID (to prove this, look at the ideal generated by X and Y: it is not the whole ring, since it contains no polynomials of degree 0, but it cannot be generated by any one single element). Every principal ideal domain is Noetherian, every nonzero prime ideal in a PID is maximal, and every PID is integrally closed; these three statements give the definition of a Dedekind domain, and hence every principal ideal domain is a Dedekind domain. So PID ⊆ Dedekind ∩ UFD. However, there is another theorem which states that any unique factorisation domain that is a Dedekind domain is also a principal ideal domain. Thus we get the reverse inclusion Dedekind ∩ UFD ⊆ PID, and this shows equality: Dedekind ∩ UFD = PID. (Note that the integrally-closed condition is redundant in this equality, since all UFDs are integrally closed.)
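In the integers the defining property is easy to exhibit computationally: the ideal generated by two numbers a and b is exactly the set of multiples of gcd(a, b), so every such ideal is principal. Here is a small Python sketch (my own illustration) that uses the extended Euclidean algorithm to produce the single generator together with a Bezout witness.

```python
def extended_gcd(a: int, b: int):
    """Return (g, x, y) with g = gcd(a, b) and g = a*x + b*y."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

a, b = 84, 30
g, x, y = extended_gcd(a, b)
print(f"The ideal ({a}, {b}) in Z is principal, generated by gcd = {g}")
print(f"Bezout witness: {g} = {a}*({x}) + {b}*({y})")
```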
The Expect package contains a program for carrying out scripted dialogues with other interactive programs. First, fix a bug that can result in false failures during the GCC test suite run: patch -Np1 -i ../expect-5.43.0-spawn-1.patch Now prepare Expect for compilation: ./configure --prefix=/tools --with-tcl=/tools/lib \ --with-tclinclude=$TCLPATH --with-x=no The meaning of the configure options: This ensures that the configure script finds the Tcl installation in the temporary tools location instead of possibly locating an existing one on the host system. This explicitly tells Expect where to find Tcl's source directory and internal headers. Using this option avoids conditions where configure fails because it cannot automatically discover the location of the Tcl source directory. This tells the configure script not to search for Tk (the Tcl GUI component) or the X Window System libraries, both of which may reside on the host system but will not exist in the temporary environment. Build the package: make To test the results, issue: make test. Note that the Expect test suite is known to experience failures under certain host conditions that are not within our control. Therefore, test suite failures here are not surprising and are not considered critical. Install the package: make SCRIPTS="" install The meaning of the make parameter: This prevents installation of the supplementary expect scripts, which are not needed. Now remove the TCLPATH variable: unset TCLPATH The source directories of both Tcl and Expect can now be removed.
On March 14, Reuters shipped a story about rapid recession of the Himalayan glaciers—the largest nonpolar ice mass in the world. They quoted from a World Wildlife Fund press release stating “Himalayan glaciers are among the fastest retreating glaciers globally due to the effects of global warming. WWF timed its press release before a two-day “Energy and Environmental Ministerial Conference” in London. At this meeting the United States was (predictably) blasted because it won’t commit economic suicide by adopting the Kyoto Protocol on global warming. This is one of those repeating news stories, like “Strife in Haiti” or “Irish Unrest.” It goes like this. To wit: “The (glaciers, polar bears, butterflies) of (anywhere) are in dramatic decline because of global warming. Unless the (US, US, US) signs on to the Kyoto Protocol, their continued decline is assured.” Well, here at World Climate Report we have our own repeating news story. To wit: “It appears that the (UN, World Wildlife Fund, Sierra Club) forgot to check the temperature histories where the (glaciers, polar bears, butterflies) are in decline, and the (US, US, US) isn’t going along with counterfactual nonsense produced by agenda-driven environmentalists.” We offer this evidence. WWF was especially interested in the Gangotri glacier, which they said is retreating at an average rate of 23 meters per year. Glaciers are in steady-state when the annual snowfall and the summer melting rate are roughly in balance. This is actually a rare case. When they melt too much in the summer, they retreat, and if it snows more in the winter than it used to, they advance. The United Nations Intergovernmental Panel on Climate Change (IPCC) publishes historical temperature records around the planet. They are averages for 5 X 5 degree latitude/longitude rectangles. They used these somewhat large areas so that, in general, many local records are averaged up to form a regional picture. The Gangotri Glacier, which feeds the Ganges River, is in the 30-35N, 75-80E box. High altitude glaciers melt during the summer. Figure 1 shows the IPCC June-August temperatures back to the beginning of the record in 1875. The net decline in temperature over the last 130 years is striking. In fact, it is one of the largest summer coolings of any grid box in the entire world that is so close to the equator. Figure 1. Summer (June, July, August) temperature history from the IPCC gridcell containing the Gangotri glacier. No one doubts that the Gangotri glacier is receding. It was far expanded beyond where it was today when the cooling record starts over a century ago. Interestingly, temperatures reached their nadir in 1990 and have popped up to their long-term average since then. Perhaps this has something to do with Gangotri’s recent more rapid retreat. But the fact that it has been in such a decline as overall century-scale temperatures have cooled tells us a lot about the long-term fate of glaciers away from polar regions: they are relics of the ice age, destined to melt. Another place with the same ice history is our own Glacier National Park in Montana. 150 years ago, near the start of the Gangotri temperature record, there were 147 glaciers in the park. Today there are only 37. What happened to summer temperatures? Unlike the case of Gangotri, they didn’t cool. They merely have remained fairly constant, with no statistically significant warming since records began in 1895. 
Most scientists think that the mid-19th century marks the end of a multi-century period known as the “Little Ice Age,” although there is a small but vocal core of skeptics who maintain a view known as the “Hockey Stick” history—one in which temperatures do not change for nearly a millennium and then shoot up in the last 100 years (a plot that indeed looks like a hockey stick). This view has been seriously challenged in a number of papers in the scientific literature over the last year. Indeed, Glacier’s glaciers went into retreat at the end of this cold period. Gangotri still receded even though temperatures there bucked the Little-Ice-Age model and continued to decline. Incidentally, the Northern Hemisphere’s largest ice mass—the Greenland icecap—is in retreat in the southern part of the island (continent?), where temperatures are cooling. All of this leads to an obvious conclusion. Southern Greenland, Glacier National Park and the Himalayan glaciers are on their way out, with little or no nudging needed from people. They’re relics of the big ice age that ended 11,000 years ago. It’s too bad, though, that in the fight to hype global warming, the truth is also rapidly becoming another relic.
When it Comes to Accelerators, What is Cold? Heather Rock Woods. Superconductivity arises in special materials at super cold temperatures. At these temperatures—a few degrees above absolute zero—the materials’ electrical resistance virtually vanishes. A Fermilab technician works on an array of superconducting niobium cavities at Fermilab. (Photo courtesy of Fermilab) Superconducting technology will be used to accelerate electrons and positrons into extremely energetic collisions in the proposed International Linear Collider (ILC). This summer, the International Technical Recommendation Panel (ITRP) decided that the international physics community should design the ILC linear accelerators (linacs) with cold technology, rather than the warm technology espoused by SLAC and other institutions. The panel stressed that both technologies were mature and viable. SLAC has strongly promoted the project, independent of the technology choice, and is now refocusing its efforts to optimally design and achieve a machine with cold technology (see TIP, September). SLAC is thoroughly familiar with warm technology: a lower-energy version runs the linac here. The particles travel through the center of copper cavities kept at 113 degrees Fahrenheit. Does Cold Work? Loew (DO), who spent two years steeped in warm and cold details as chair of the ILC Technical Review Committee, explained the cold technology. Cavities (roughly seven inches in diameter with a hole through the middle) are cooled to 1.8 Kelvin (271 degrees Celsius below the freezing point of water). The super-cooled cavities are made of a metal called niobium that looks like stainless steel. At that temperature, niobium is superconducting. The electrons in the niobium material (not the electrons being accelerated through the niobium cavity holes) flow with virtually no resistance, like pairs of skaters on perfectly smooth ice. The ILC design calls for two linacs (one for electrons, one for positrons) pointed at each other. Whether a linac is warm or cold, particles get accelerated by microwave power that is injected into the array of cavities. The microwave power generates longitudinal electric fields and cylindrical magnetic fields. The electric field attracts (or pulls) the particles traveling through the cavity, giving them an energy boost. Because they have almost no resistance, the superconducting cavities can hold the microwave power longer. In warm technology, some of the power ends up heating the copper cavity walls—which have some resistance. Cost vs. Energy Reach. “Superconducting cavities allow you to store microwave energy very efficiently for a long time,” said David Burke (ILC). The linacs accelerate particles in ‘trains’ with multiple cars—bunches of particles—containing a cargo of 20 billion particles per bunch for the cold design, or 7.5 billion particles per bunch for the warm design. Because cold cavities store microwave energy longer, each microwave pulse can accelerate longer bunch trains. Cold trains carry 15 times more bunches than warm trains, but arrive less frequently—five times each second compared to 120 times each second for warm. In the end, the two designs generate similar luminosities, or event rates. Compared to warm technology, cold technology uses less electricity from the power company while accelerating longer energy-efficient trains. That is like adding loaded railroad cars to a steam train without needing more coal to power the train. However, some of what superconducting technology gains by saving power, it loses in particle energy.
Niobium surrenders its superconductivity when exposed to too strong a magnetic field (see sidebar). Accordingly, the cold linacs will use lower magnetic and electric fields, which means particles will get a smaller tug, and gain less energy, for every meter traveled. Thus, a cold machine needs to be longer than a warm machine to reach the final beam energy of 250 giga electron volts (GeV) for each beam. The machine length will be determined by the design, but will be at least 20 miles from the end of one linac to the end of the other.

Designing a cold machine won’t be completely new to SLAC, Loew pointed out. In the late 1960s, people at SLAC and Stanford explored how to equip the linac with superconducting cavities, but it proved unfeasible at the time. Now the time appears right for super-energetic cold linacs.

For more information on the linear collider, see:
<urn:uuid:a0720c36-74dc-4c32-ab9d-a58a004311e8>
4.03125
1,000
Knowledge Article
Science & Tech.
32.517731
The Code wrote: Did the spiral in our galaxy have a hidden, unseen reason? Was the gravity at the center of our galaxy really stronger at some point, which makes it look like it does? If so, could our sun have another point of creation? If not, explain the spiral, please.

<<The first acceptable theory for the spiral structure was devised by C. C. Lin and Frank Shu in 1964. They suggested that the spiral arms were manifestations of spiral density waves; they assumed that the stars travel in slightly elliptical orbits, and that the orientations of their orbits are correlated, i.e. the ellipses vary in their orientation (one to another) in a smooth way with increasing distance from the galactic center. This is illustrated in the diagram. It is clear that the elliptical orbits come close together in certain areas to give the effect of arms. Stars therefore do not remain forever in the position that we now see them in, but pass through the arms as they travel in their orbits.>>

The Code wrote: We all come from somewhere. If you wind the clock back far enough, we all come from the same place. Sometime about 4.5 billion years ago, the sun was born, and a disk of debris swirling around it soon coalesced into Earth and the rest of the planets. But where did that happen? Where was the sun born? http://blogs.scientificamerican.com/obs ... continues/

<<Messier 67 (also known as M67 or NGC 2682) is an open cluster in the constellation of Cancer. It was discovered by Johann Gottfried Koehler in 1779. Age estimates for the cluster range between 3.2 and 5 billion years, with the most recent estimate (4 Gyr) implying stars in M67 are younger than the Sun. Distance estimates are likewise varied and typically range between 800-900 pc. M67 is not the oldest known open cluster, but there are few Galactic clusters known to be older, and none of those are as close as M67. M67 is an important laboratory for studying stellar evolution, since the cluster is well populated, obscured by negligible amounts of soot, and all its stars are at the same distance and age, except for approximately 30 anomalous blue stragglers, whose origins are not fully understood. M67 is the nearest old open cluster, and thus has become a standard example for studying stellar evolution. It is probably the second best observed open cluster after the Hyades cluster, which is amongst the nearest open clusters and younger than M67. M67 is one of the most-studied open clusters, yet estimates of its physical parameters such as age, mass, and number of stars of a given type vary substantially. Richer et al. estimate its age to be 4 Gyr, its mass to be 1080 solar masses, and the number of white dwarfs to be 150. Hurley et al. estimate its current mass to be 1400 solar masses and its initial mass to be approximately 10 times as great. M67 has more than 100 stars similar to the Sun, and countless red giants. The total star count has been estimated at well over 500. The ages and prevalence of Sun-like stars contained within the cluster had led astronomers to consider M67 as the possible parent cluster of our own Sun. However, computer simulations have suggested that this is highly unlikely to be the case. The cluster contains no main sequence stars bluer than spectral type F, other than perhaps some of the blue stragglers, since the brighter stars of that age have already left the main sequence.
In fact, when the stars of the cluster are plotted on the Hertzsprung-Russell diagram, there is a distinct "turn-off" representing the stars which have terminated hydrogen fusion in the core and are destined to become red giants. As the cluster ages, the turn-off moves progressively down the main sequence. It appears that M67 does not contain an unbiased sample of stars. One cause of this is mass segregation, the process by which lighter stars (actually, systems) gain speed at the expense of more massive stars during close encounters, which causes the lighter stars to be at a greater average distance from the center of the cluster or to escape altogether.>>

The Code wrote: We all come from somewhere. If you wind the clock back far enough, we all come from the same place. Sometime about 4.5 billion years ago, the sun was born, and a disk of debris swirling around it soon coalesced into Earth and the rest of the planets. But where did that happen? Where was the sun born?

Chris Peterson wrote: All you can really talk about is what other bodies might have been neighbors of our system when it was formed. But since everything in the galaxy is moving, and nothing is following exactly the same path, there's a natural scrambling of position over time. We may never identify another object that formed from the same stellar nursery as the Sun. That information may be hopelessly lost.

rstevenson wrote: I seem to recall, perhaps last year and perhaps posted here, there was some work being done to try to identify stars that may have formed in the same cloud of material from which our Solar System formed. They thought there were some good candidate stars very similar to our own Sun and not too far away. (Can't recall if they were using spectrographic analysis or something else to define "similar".) The work was at that point inconclusive though.
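The Lin–Shu picture quoted above, in which correlated elliptical orbits crowd together to form apparent arms, is easy to sketch numerically. The eccentricity and the rate at which the orbit orientation twists with radius in this sketch are arbitrary illustration values, not fitted to any galaxy.

```python
# Sketch of the Lin-Shu density-wave picture quoted above: nested, slightly
# elliptical orbits whose major-axis orientation twists smoothly with radius.
# The eccentricity and twist rate are arbitrary illustration values.
import numpy as np
import matplotlib.pyplot as plt

theta = np.linspace(0, 2 * np.pi, 400)
for i, a in enumerate(np.linspace(1.0, 5.0, 25)):   # semi-major axes
    b = 0.85 * a                                     # mild ellipticity (assumed)
    twist = 0.5 * i                                  # orientation grows with radius
    x = a * np.cos(theta) * np.cos(twist) - b * np.sin(theta) * np.sin(twist)
    y = a * np.cos(theta) * np.sin(twist) + b * np.sin(theta) * np.cos(twist)
    plt.plot(x, y, 'k-', linewidth=0.5)

plt.gca().set_aspect('equal')
plt.title("Crowding of correlated elliptical orbits mimics spiral arms")
plt.show()
```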
<urn:uuid:e5971abb-0086-4265-a703-021d3657e120>
4.0625
1,138
Comment Section
Science & Tech.
60.471527
We read about global warming raising sea levels. Does this hold true for Lake Michigan as well? It does not, because Lake Michigan and all the Great Lakes, unlike the oceans, are not the final repository of the world’s water. The Great Lakes drain into the Atlantic Ocean via the St. Lawrence River, and their volume of outflow increases when the level of the lakes rises. It’s a different story with the oceans. Water, once it finds its way into the world’s oceans, can’t drain out; it can escape the oceans only by evaporation. Researchers believe sea-level rise will be the result of melting land ice (mountain glaciers and the Greenland ice cap) and accelerated “drainage” of ice into the oceans from the icecaps of Greenland and Antarctica. Thermal expansion of warming ocean water will also contribute to sea-level rise.
<urn:uuid:6ce0d0d4-35c6-4f0d-837d-5c3e81b0b310>
3.8125
188
Knowledge Article
Science & Tech.
53.526321
This composite image contains X-rays from Chandra (blue) and optical data from the VLT (gold) of the galaxy NGC 3115. Using the Chandra data, the flow of hot gas toward the supermassive black hole in the center of this galaxy has been imaged. This is the first time that clear evidence for such a flow has been observed in any black hole. The new Chandra data also supports the previous optical observations that suggest that NGC 3115's black hole has a mass of about two billion times that of the Sun. This would make NGC 3115 the host of the nearest billion-solar-mass black hole to Earth.
<urn:uuid:b76d27ca-26de-4024-9838-406857951e6d>
3.15625
143
Truncated
Science & Tech.
61.371739
The thermite reaction

This demonstration shows the highly exothermic reaction between aluminium and iron(III) oxide that produces molten iron. This is a competition reaction, showing aluminium to be a more reactive metal than iron. A redox reaction takes place. The reaction is violent but safe provided the procedures are followed exactly. Some teachers have had accidents when performing the procedure outside in a strong breeze; the powders blew into the flame, caught fire and caused burns to the hand and/or face. Siting the demonstration in a fume cupboard has caused damage to the cupboard. The method described here is performed on a laboratory bench and does not produce many fumes. Do NOT do this demonstration in a fume cupboard or out of doors. It produces a result within seconds of setting it off because the water cools the iron down very quickly. A rehearsal is essential if this experiment has not been done before. There have been occasional reported explosions when using methods similar to this. It is essential not to exceed the stated quantities and that the demonstrator and students are protected by safety screens. The bench should be clear of combustible materials and protected with a sheet of hardboard or heat resistant mats. Pupils should not look directly at the glare of the burning magnesium but cover their eyes with their fingers slightly apart. The demonstrator must have room to move quickly away to a safe distance. The demonstration takes about 10 minutes to carry out if the apparatus is set up and the solid reagents are weighed in advance.

The quantities given are for one demonstration.
Thermite mixture (Note 1):
- Aluminium powder (medium grade) (HIGHLY FLAMMABLE), 3 g
- Iron(III) oxide, 9 g
Igniter mixture (Note 2):
- Barium nitrate(V) (OXIDISING, HARMFUL), 2 g
- Magnesium powder (HIGHLY FLAMMABLE), 0.2 g
- Magnesium ribbon, 10 cm length
Refer to Health & Safety and Technical notes section below for additional information.
Eye protection: Safety glasses for observers, goggles or face shield for the demonstrator.
For one demonstration:
- Filter papers, 12 cm diameter, 2
- Pipe-clay triangle (or similar)
- Beaker, thick-walled (1 dm3)
- Dry sand (see diagram)
- Heat resistant mats
- Small bar magnet

Health & Safety and Technical notes

A face shield or goggles and a laboratory coat (it can become messy at the end) should be worn by the demonstrator. Safety screens must be used to surround the apparatus. Students should stand further than 4 m from the reaction and wear eye protection.
- Aluminium powder, Al(s), (HIGHLY FLAMMABLE) - see CLEAPSS Hazcard.
- Iron(III) oxide, Fe2O3(s) - see CLEAPSS Hazcard.
- Barium nitrate(V), Ba(NO3)2(s), (OXIDISING, HARMFUL) - see CLEAPSS Hazcard.
- Magnesium powder (HIGHLY FLAMMABLE) and magnesium ribbon, Mg(s) - see CLEAPSS Hazcard.

Note 1: It is important that the iron(III) oxide used in this demonstration is absolutely dry. An hour or so in a warm oven, or heating in an evaporating dish over a Bunsen flame, should suffice. The oxide should be allowed to cool completely before mixing.
The weighed quantities of iron(III) oxide (9 g) and aluminium (3 g) may be thoroughly mixed beforehand by repeatedly pouring the mixture to-and-fro between two pieces of scrap paper, and then stored for the demonstration in a suitable container labelled ‘Thermite mixture’.

Note 2: The weighed quantities of magnesium powder (0.2 g) and barium nitrate (2 g) may also be thoroughly mixed beforehand using the same method as described in Note 1, and then stored for the demonstration in a suitable container labelled ‘Igniter mixture’. The demonstrator may wish (or be persuaded by the audience) to do a repeat demonstration. In this event it is important to keep the second set of materials well away from the first demonstration site.

a Fold two 12 cm diameter circles of filter paper into fluted cones and place one inside the other.
b Into a 1 dm3, thick-walled beaker, pour dry sand until it is one-third full and then add water until it is two-thirds full.
c Cover an area of the bench with several heat resistant mats and place the beaker in the centre. Set up the equipment as shown in the diagram above and surround it with safety screens. Add the Thermite mixture (see Note 1) to the fluted filter paper cone sitting in the pipe clay triangle.
d Make a depression in the Thermite mixture with a spatula and place the igniter mixture (see Note 2) into it.
e Insert a magnesium ribbon fuse upright into the igniter mixture. It must extend above the fluted filter paper. Light the magnesium fuse with a Bunsen burner flame and retreat to a safe distance behind the safety screens. A very vigorous reaction should follow, with some sparks flying upwards. The very hot residue containing molten iron will fall through into the water.
f Once the reaction has stopped, remove the beaker and decant the water into the sink. Retrieve the iron formed with a magnet. Wash the iron under running water.

The reaction is: iron(III) oxide + aluminium → aluminium oxide + iron
This shows that aluminium is above iron in the reactivity series. The ‘Thermite’ mixture is stable until strong heating is applied, hence the need for an initiating reaction between the barium nitrate and magnesium powder. Once underway, the reaction is highly exothermic, rapidly reaching temperatures as high as 2000 °C, well in excess of the melting point of iron (1535 °C). The practical use of this reaction to weld railway lines together should be mentioned – see web link below. Do not use potassium manganate(VII) and hot glycerol as an alternative to initiate the reaction in this version because the filter papers catch fire. Do not use any other metal oxides, such as copper oxides, chromium(VI) oxide, lead oxides or manganese(IV) oxide. However, chromium(III) oxide and Mn3O4 can be used.

Health & Safety checked March 2009

There are several experiments in this series on Practical Chemistry which use the competition principle: There are many video clips of Thermite reactions on the internet, some carried out on a scale and in a manner which is extremely hazardous. Two clips of reactions carried out safely using a different procedure to that outlined here can be found at: Note that in the following reaction a much coarser mixture of the solids, as in commercial Thermite charges, is used. Using powdered solids on this scale would be extremely hazardous: Details and pictures of the thermite welding of railway tracks can be found at: Page last updated on 31 July 2012
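For the quantities specified above (9 g of iron(III) oxide and 3 g of aluminium), a quick stoichiometry check gives the theoretical iron yield, assuming the idealised reaction Fe2O3 + 2 Al → Al2O3 + 2 Fe goes to completion; the mass actually recovered with the magnet will be lower.

```python
# Back-of-the-envelope yield for the quantities above (9 g Fe2O3 + 3 g Al),
# assuming the ideal reaction Fe2O3 + 2 Al -> Al2O3 + 2 Fe goes to completion.
# Real yields are lower; this is only an illustration of the stoichiometry.

M_Fe2O3, M_Al, M_Fe = 159.69, 26.98, 55.85   # molar masses, g/mol

n_Fe2O3 = 9.0 / M_Fe2O3          # mol iron(III) oxide
n_Al = 3.0 / M_Al                # mol aluminium

# Reaction extent limited by whichever reagent runs out first
extent = min(n_Fe2O3, n_Al / 2)  # mol of Fe2O3 actually consumed

iron_mass = 2 * extent * M_Fe
print(f"Fe2O3: {n_Fe2O3:.4f} mol, Al: {n_Al:.4f} mol")
print(f"Theoretical iron yield: {iron_mass:.1f} g")
```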
<urn:uuid:85cbc1c2-319f-4f08-8105-1b7a4ae97700>
4.09375
1,490
Tutorial
Science & Tech.
43.506239
The SQL type system is used by the language compiler to determine the compile-time type of an expression and by the language execution system to determine the runtime type of an expression, which can be a subtype or implementation of the compile-time type. Each type has associated with it values of that type. In addition, values in the database or resulting from expressions can be NULL, which means the value is missing or unknown. Although there are some places where the keyword NULL can be explicitly used, it is not in itself a value, because it needs to have a type associated with it. The syntax presented in this section is the syntax you use when specifying a column's data type in a CREATE TABLE statement.
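A minimal illustration of declared column types and the behaviour of NULL, here using SQLite driven from Python. The table and column names are invented for the example, and SQLite's type handling differs in detail from the type system this documentation describes.

```python
# Minimal illustration of declaring column types and the behaviour of NULL,
# using SQLite from Python. Table and column names are invented for the example,
# and SQLite's type affinity rules differ in detail from other SQL dialects.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sensor (id INTEGER PRIMARY KEY, reading REAL, label TEXT)")
conn.execute("INSERT INTO sensor (reading, label) VALUES (3.14, 'ok')")
conn.execute("INSERT INTO sensor (reading, label) VALUES (NULL, 'missing')")

# NULL means "missing or unknown", so it never compares equal to anything;
# use IS NULL rather than = NULL.
rows = conn.execute("SELECT id, label FROM sensor WHERE reading IS NULL").fetchall()
print(rows)   # -> [(2, 'missing')]
```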
<urn:uuid:a0f04f62-d282-4f98-a4c0-8b1fede3d556>
3.203125
145
Documentation
Software Dev.
35.713116
Photograph by Deshakalyan Chowdhury/AFP/Getty Images
Tsunamis can wreak havoc on coastal populations and landscapes. The December 26, 2004, tsunami in the Indian Ocean claimed some 150,000 lives and cleared the landscape on millions of acres of oceanfront terrain. Here are some measures you can take to avoid trouble if you're caught in a tsunami.
- When in coastal areas, stay alert for tsunami warnings.
- Plan an evacuation route that leads to higher ground.
- Know the warning signs of a tsunami: rapidly rising or falling coastal waters and rumblings of an offshore earthquake.
- Never stay near shore to watch a tsunami come in.
- A tsunami is a series of waves. Do not return to an affected coastal area until authorities say it is safe.
<urn:uuid:8e11ca7d-fd09-4a17-a7cb-4e72192fc109>
3.78125
301
Listicle
Science & Tech.
47.244711
First, I have to give them props for using the term "forecast" rather than "predict". I know a number of volcanologists who are touchy about using "predict", because any conclusions about what a volcano may or may not do are necessarily based on probabilities, just like a weather forecast. When people start thinking that we can say for certain when a volcano will erupt, we get the blame when it doesn't, and they have to deal with the consequences of precautionary evacuations - or if it erupts sooner and takes everyone by surprise. It's very important that people know that no scientist can be 100% sure when a volcano will erupt and what it will do - volcanoes are simply too complicated.

(ASTER image of Bezymianny volcano lava flow from NASA Visible Earth image archive)

Ramsay and Carter (together with U.S. scientists at the University of Alaska-Fairbanks and Russian experts at Kamchatka's Institute of Volcanology and Seismology) worked at Bezymianny volcano, using the FLIR to record temperature increases in the lava dome just days prior to an eruption. They were able to correlate their ground data with data from NASA's ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer) instrument, which records several bands of thermal infrared data. This is particularly significant because it means that for volcanoes with fairly well defined thermal precursors to explosive eruptions, scientists can use satellites to monitor them, rather than traveling to remote locations with expensive, cumbersome equipment.

There are a few caveats. Any thermal satellite image records not only the temperature at any one time in a location, but the temperature history. Thermal emissivity varies with the amount of energy a surface absorbs, and with the rate that the surface re-emits that energy. Some surfaces absorb lots of energy but lose it quickly; some absorb energy and emit it slowly; others don't absorb much at all. All of this shows up in a thermal image, which should be treated more as a time exposure than a single snapshot. The article sums it up pretty well: "Because the satellite images capture an average temperature reading for the entire volcano at a given moment, the scientists knew the reading in some areas was probably many times higher." This must be taken into account when analyzing satellite data. A single "bright" ASTER image can mean that temperature increased suddenly, or that temperatures increased steadily over the course of hours or even days, depending on how quickly the lava's surface loses heat. Several images, taken hours or days apart, however, would prove very useful.

Unfortunately, obtaining even one line of ASTER data - or data from any satellite that records in the thermal band - is expensive, and requires a special requisition process. Additionally, most thermal data comes from instruments on satellites that also serve other purposes; finding a way to dedicate any satellite entirely to recording thermal imagery of volcanoes would be difficult and expensive (again). Still, it would be extremely useful - and it's exciting research even without that. The research is being partially supported by the National Geographic Society's Committee for Research and Exploration. A Bulletin of Volcanology article about their work can be found here.
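A toy calculation of the "time exposure" ambiguity described above: a brief hot spike and a slow steady warming can leave comparable time-averaged thermal emission, so a single bright image cannot distinguish them. The temperatures and durations below are invented, and emission is idealised with the Stefan–Boltzmann law.

```python
# Toy illustration of the ambiguity described above: a short, hot spike and a
# slow, steady warming can leave similar time-averaged thermal emission.
# Temperatures and durations are invented; emission uses the Stefan-Boltzmann law.
import numpy as np

SIGMA = 5.67e-8                 # Stefan-Boltzmann constant, W m^-2 K^-4
t = np.linspace(0, 24, 1000)    # hours

spike = np.where(t < 1, 900.0, 400.0)          # 1-hour spike to 900 K, then 400 K
gradual = 400.0 + (520.0 - 400.0) * t / 24.0   # steady rise from 400 K to 520 K

for name, T in [("sudden spike", spike), ("gradual rise", gradual)]:
    mean_flux = np.mean(SIGMA * T**4)
    print(f"{name:12s}: mean emitted flux ~ {mean_flux/1000:.1f} kW/m^2")
```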
<urn:uuid:741efca8-c156-4f35-af6f-69cab56492d8>
3.296875
663
Personal Blog
Science & Tech.
24.019638
Experience indicates that most of the matter in the Universe is composed of ionized or partially ionized gas permeated by magnetic fields. Celestial objects are magnetized, and magnetic fields of significant strength are found everywhere in the interstellar space and, over small and very large scales, in the extragalactic universe. In general, small compact objects have the largest magnetic field strengths, while larger low-density objects have weaker magnetic fields. The Earth has a bipolar field of about 0.5 G at its surface, originating from an idealized current due to the charged fluid motion going circularly in a ring inside the liquid molten metallic core. On the Jupiter surface the magnetic field is about 4 G, owing to the fast Jupiter rotation. In the interplanetary space of the solar system the magnetic fields are of the order of 50 µG. On the Sun, the magnetic field is about 10 G at the poles, while localized sunspots on the surface near the equatorial zone of the Sun, and more generally of a star, can have magnetic field strengths of 2000 G. In protostellar envelopes and protostars, fields are of ~ 1 mG. A bipolar field is "frozen" into the gas of a star during the contraction from a normal star to a degenerate star. It will remain bipolar-shaped but its intensity will increase as r^-2; thus magnetic fields of pulsars and neutron stars are of the order of 10^12 G, while those of white dwarfs are around 10^6 G. A widespread field of ~ 5 µG, characterized by a spiral shape, is present in the Galaxy. At the Galaxy nucleus, highly organized filaments with strength of ~ 1 mG are detected. Fields in other spiral galaxies are ~ 10 µG on average, with values up to ~ 50 µG in starburst galaxies and ~ 30 µG in massive spiral arms. Fields at the ~ µG level are found in the radio emitting lobes of radio galaxies. Fields of similar or weaker strength are detected in the intracluster medium of clusters of galaxies, and in more rarefied regions of the intergalactic space. Upper limits of 10^-8 to 10^-9 G have been obtained for the cosmological fields at large redshift.

In this review large-scale magnetic fields in clusters of galaxies will be analyzed. In recent years the presence of cluster magnetic fields has been unambiguously proven and the importance of their role has been recognized. The study of cluster magnetic fields is relevant to understand the physical conditions and energetics of the intracluster medium. Cluster magnetic fields provide an additional term of pressure and may play a role in the cluster dynamics. They couple cosmic ray particles to the intracluster gas, and they are able to inhibit transport processes like heat conduction, spatial mixing of gas, and propagation of cosmic rays. They are essential for the acceleration of cosmic rays and allow the cosmic ray electron population to be observed through its synchrotron radiation. Despite many observational efforts to measure their properties, our knowledge of cluster magnetic fields is still poor. Overviews on observational and theoretical arguments can be found in the literature [1, 2, 3, 4, 5, 6]. The focus of this review is primarily observational; however, we present the basic theory needed for the interpretation of the data. We analyze some of the main issues that have led to our knowledge on magnetic fields in clusters of galaxies and discuss some of their limitations. An outline of the review is as follows: In Sec. 2 we summarize some general properties of clusters of galaxies.
Sec. 3 is devoted to theoretical background related to the detection of cluster magnetic fields and to the estimate of their strengths. We recall the basic theory concerning synchrotron radiation, inverse Compton radiation and Faraday rotation. These are the main observed features which provide information on the cluster magnetic fields. The observational results of cluster magnetic fields through synchrotron radio and inverse Compton hard X-ray emissions are described in Secs. 4 and 5. In Sec. 6 we give the results obtained by analyzing rotation measures of radio galaxies located within or behind clusters of galaxies. In Sec. 7 we present cluster magnetic fields detected through the study of cold fronts. In Sec. 8 we report the evidence for a radial decline of cluster magnetic fields. In Sec. 9 we discuss how magnetic field values obtained with different approaches can be reconciled. In Sec. 10 we summarize the results of a numerical technique which can significantly improve our interpretation of the data and thus the knowledge of the strength and structure of magnetic fields. In Sec. 11 we briefly review the current knowledge on the cluster magnetic field origin and amplification. Throughout this paper we assume the ΛCDM cosmology with H_0 = 71 km s^-1 Mpc^-1, Ω_m = 0.3, and Ω_Λ = 0.7, unless stated otherwise.
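The flux-freezing scaling mentioned above (field strength growing as r^-2 during contraction) can be checked with rough numbers. The ~100 G progenitor field and the stellar radii used below are generic illustrative values, not figures taken from this review; they reproduce the quoted ~10^6 G (white dwarf) and ~10^12 G (neutron star) orders of magnitude.

```python
# Flux-freezing estimate for the r^-2 scaling mentioned above: if the field is
# frozen into the gas, B scales as (R_initial / R_final)^2 during contraction.
# The progenitor field (~100 G) and the radii are rough illustrative numbers,
# not values taken from this review.

def contracted_field(B_initial, R_initial_km, R_final_km):
    return B_initial * (R_initial_km / R_final_km) ** 2

B_star = 100.0          # G, assumed magnetized progenitor
R_star = 7.0e5          # km, roughly a solar radius

print(f"white dwarf (7000 km): {contracted_field(B_star, R_star, 7.0e3):.1e} G")
print(f"neutron star (10 km):  {contracted_field(B_star, R_star, 10.0):.1e} G")
```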
<urn:uuid:bd379e26-fae3-4f94-bf6c-174bc197f6c9>
3.203125
992
Academic Writing
Science & Tech.
47.408932
The creatures of the deep are different—so different that some don’t even look like creatures. (Images: A couple of shots from the Hawai`i Undersea Research Laboratory deep sea creature collection include the outrageously spiky crab Lethodidae neolithodes and the impressively patterned starfish Pentaceraster cumingi. Credit: HURL.) To help understand the amazing diversity of deep ocean life around the Islands, the Hawai`i Undersea Research Laboratory (HURL), which has been photographing them for more than 30 years, has created an online identification guide. The resource was once access-limited to scientists—it is loaded onto iPads that submersible staffs can take along for critter identification--but is now publicly available, here. You’ll find there more than 1,500 images, some of them still shots and others taken from videos, that have been collected during deep dives by the University of Hawai`i’s various manned and remotely operated submersibles. A press release with more details on the program is available here. © Jan TenBruggencate 2012
<urn:uuid:5b87549a-a46e-4762-901d-d31a9b3c0d5c>
3.40625
234
Personal Blog
Science & Tech.
23.551737
How can I prove to my boyfriend that lead is more dense than water? And I have forgotten how density is defined...molecules are closer together something something...? - Andi (age 36) Kansas City, MO

Get a lead brick and drop it into a bucket of water. If the brick sinks it is denser than water. If the brick floats then it is less dense than water. If this doesn't convince your boyfriend then find a new one who is not as dense. By the way, density is defined as the mass per unit volume; for example, the density of water is very close to 1 gram per cubic centimeter.
p.s. It may be hard to find a lead brick. Maybe some lead shot or an old fishing weight would be easier to find.
Mike W. (published on 10/04/2008)
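A numerical version of the same test, using handbook densities (lead about 11.34 g/cm^3, water about 1.00 g/cm^3); the masses and volumes below are just example numbers.

```python
# Simple numerical version of the kitchen test above: compare densities directly.
# Handbook values: lead ~11.34 g/cm^3, water ~1.00 g/cm^3.

def density(mass_g, volume_cm3):
    """Density = mass per unit volume."""
    return mass_g / volume_cm3

lead = density(mass_g=113.4, volume_cm3=10.0)    # a 10 cm^3 lead weight
water = density(mass_g=10.0, volume_cm3=10.0)    # the same volume of water

print(f"lead:  {lead:.2f} g/cm^3")
print(f"water: {water:.2f} g/cm^3")
print("lead sinks" if lead > water else "lead floats")
```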
<urn:uuid:40445d8d-9f21-4a6d-ab81-b231f4d489db>
3.046875
188
Q&A Forum
Science & Tech.
81.173921
One of the biggest pieces of this argument has been that the RNA, which now helps DNA produce proteins, is too complex to arise from undirected chemical reactions. New research disproves this claim. The new findings map out a series of simple, efficient chemical reactions that could have formed molecules of RNA, a close cousin of DNA, from the basic materials available more than 3.85 billion years ago, researchers report online May 13 in Nature. . . The new research lends support to the idea that RNA-based life-forms were the first step toward the evolution of modern life. Called the RNA world hypothesis, the idea was first proposed some 40 years ago. But until now, scientists couldn’t figure out the chemical reactions that created the earliest RNA molecules. Today, DNA encodes the genetic blueprint for life — excluding some viruses, for those who consider viruses living — and RNA acts as an intermediary in the process, making protein from DNA. But most scientists think it’s unlikely that DNA was the basis of the origin of life, says study coauthor John Sutherland of the University of Manchester in England. Information-bearing DNA holds the code needed to put proteins together, but at the same time, proteins catalyze the reactions that produce DNA. It’s a chicken-or-egg problem. Scientists don’t think that DNA and proteins could have come about independently — regardless of which came first — and yet still work together in this way. It’s more plausible that the first life-forms were based on a single molecule that could replicate itself and store genetic information — a molecule such as RNA (SN: 4/7/01, p. 212). RNA world proponents speculate modern DNA and proteins evolved from this RNA-dominated early life, and RNA in cells today is left over from this early time. RNA molecules are chains with units that include "a sugar, a base and a phosphate group." The breakthrough involves combining precursor molecules that are neither sugars nor bases, but are instead a hybrid of the two. These are more reactive than stand alone sugars or bases.
<urn:uuid:25aa2390-022c-439b-bb65-19785c8d21c7>
4.15625
429
Personal Blog
Science & Tech.
50.294295
The oceanic food chain begins with microscopic drifting plants called phytoplankton. Phytoplankton are found close to the surface of the water where there is adequate sunlight for photosynthesis. Phytoplankton are eaten by tiny floating animals known as zooplankton. Zooplankton include the larvae of crabs, jellyfish, corals and worms, as well as adult animals like tiny shrimps, copepods and euphausiids (krill). They keep buoyant with the help of gas-filled chambers and oil droplets which reduce their density. Moving up the food chain, zooplankton provide food for fish. Big fish eat smaller fish and at the very top of the food chain are large predatory fish like sharks, mammals like seals, and seabirds. A very large fish, the whale shark, and some very large mammals, the baleen whales, feed directly on zooplankton. Millions of people on all continents depend on fish for food. That is why it is so important that fish populations are conserved. Overfishing by huge modern fishing fleets is threatening the entire ocean food chain.
<urn:uuid:00e4be8e-8a2a-49b8-8870-70b62ae20af4>
3.9375
249
Knowledge Article
Science & Tech.
47.554796
Where will the next Perseid meteor appear? Sky enthusiasts who trekked outside for the Perseid meteor shower that peaked over the past few days typically had this question on their mind. Six meteors from this past weekend are visible in the above stacked image composite, including one bright fireball streaking along the band of the background Milky Way Galaxy. All Perseid meteors appear to come from the shower radiant in the constellation of Perseus. Early reports about this year's Perseids indicate that as many as 100 meteors per hour were visible from some dark locations during the peak. The above digital mosaic was taken near
<urn:uuid:30c84791-5d4e-4fe5-9661-da4c2f8f7dcc>
2.734375
143
Truncated
Science & Tech.
41.561
FIGURE E1. Surface temperature anomalies (°C, top) and surface temperature expressed as percentiles of the normal (Gaussian) distribution fit to the 1971–2000 base period data (bottom). Analysis is based on station data over land and on SST data over the oceans (top). Anomalies for station data are departures from the 1971–2000 base period means, while SST anomalies are departures from the 1971–2000 adjusted OI climatology. (Smith and Reynolds 1998, J. Climate, 11, 3320-3323). Regions with insufficient data for analysis in both figures are indicated by shading in the top figure only.
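A sketch of the percentile mapping described in the caption: fit a normal distribution to base-period values at a station, then express a new observation as a percentile of that fit. The station values below are synthetic, not data from the figure.

```python
# Sketch of the percentile mapping described in the caption: fit a normal
# distribution to 1971-2000 base-period values at a station, then express a new
# observation as a percentile of that distribution. The numbers are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
base_period = rng.normal(loc=14.0, scale=0.6, size=30)   # synthetic base-period means

mu, sigma = base_period.mean(), base_period.std(ddof=1)  # Gaussian fit
observed = 15.1                                          # synthetic new value

anomaly = observed - mu
percentile = 100 * stats.norm.cdf(observed, loc=mu, scale=sigma)
print(f"anomaly: {anomaly:+.2f} C, percentile of normal fit: {percentile:.1f}%")
```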
<urn:uuid:86b2a3c4-2c8a-4f6f-8709-a3a60cb31a38>
3.03125
143
Documentation
Science & Tech.
43.451579
We hear about hurricanes every year, but how do hurricanes work? Most hurricanes that form in the Atlantic Ocean start as thunderstorms off the coast of Africa. As they travel across the tropical waters around the equator, they pick up moisture and energy, eventually crashing into land where they quickly fall apart. Before they break up though, hurricanes create a lot of problems for people in their path. The sustained winds in a category 1 hurricane start at around 75 miles per hour, and in a category 5 they can be upwards of 155 miles per hour! These winds can cause huge amounts of damage, storm surges (giant walls of water up to 18 feet tall), and flying debris. Those three factors (wind, water and debris) mixed together are a recipe for danger. Tornadoes often spin out of hurricanes as an added bonus. The spinning action of a hurricane is the result of the Coriolis effect. Because hurricanes are so massive in size, the relatively tiny Coriolis effect is capable of accumulating rapidly, forcing the storm into a rotating pattern. There is a long standing myth that the Coriolis effect has a direct effect on the direction of the water flowing down your sink or toilet. This is only a myth. In the case of the sink or toilet, the size of the water container is simply too small to cause any rotational preference. Sink and toilet draining directions are determined by the water’s motion in the container or, in the case of a toilet, by the direction of the flush jets. There are 40-50 of these giant storms manifesting around the world every year. It’s a good thing we live far away from where they hit, although because weather is such an amazing thing, we in the Midwest will no doubt be affected by residual rain and wind from the hurricane. If you found this interesting, check out some of these related articles. Tell us what you're thinking by leaving a comment below...
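For reference, a small lookup that maps sustained wind speed to a Saffir-Simpson category, using the category 1 and category 5 figures quoted above and the commonly published boundaries for the categories in between; treat the exact thresholds as approximate.

```python
# Rough Saffir-Simpson lookup matching the wind speeds quoted above (sustained
# winds in mph). Category 1 and 5 thresholds follow the figures in the text;
# the intermediate boundaries are the commonly published ones.

def saffir_simpson_category(wind_mph):
    if wind_mph < 74:
        return 0          # below hurricane strength
    for category, lower in [(5, 155), (4, 130), (3, 111), (2, 96), (1, 74)]:
        if wind_mph >= lower:
            return category

for wind in [75, 100, 120, 155, 175]:
    print(f"{wind} mph -> category {saffir_simpson_category(wind)}")
```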
<urn:uuid:646b4878-ebca-47ba-8139-d642561d449f>
3.515625
402
Personal Blog
Science & Tech.
55.310482
Northcoast Regional Land Trust: Wood Creek Tidal Marsh Enhancement Project The Northcoast Regional Land Trust (NCRLT) has worked to restore historical tidal flow and native vegetation to the Wood Creek Tidal Marsh. Over the past 150 years, this site has been altered by diking and removal of vegetation and large woody debris. Despite these alterations, surveys have shown that endangered and threatened fish species utilize this area for rearing. The Wood Creek Tidal Marsh Enhancement Project’s primary climate change benefit comes in the form of flood mitigation for the lower Wood Creek/Freshwater Creek area. Anticipated increases in winter precipitation will likely bring increased flooding to the local watersheds. The reconnection of Wood Creek to Freshwater Creek through opening (and eventual removal) of the tidegate and creation of a more complex wetland channel system will expand the flow capacity of the project area, thereby reducing the velocity and shear potential of flood flows. The Wood Creek habitat restoration project was funded by US Fish and Wildlife Service (Coastal Partners, Coastal Program, and NAWCA grants), NOAA, the Natural Resource Conservation Service, the California Coastal Conservancy, and the California Department of Fish and Wildlife. The project has been implemented and is in the monitoring stage. Read the full CAKE case study here. Full CCN case study coming soon.
<urn:uuid:34d68e8b-a24d-437c-95f2-5da2ae0189f2>
2.859375
276
Knowledge Article
Science & Tech.
30.034385
Powering Remote Instrumentation in the Antarctic The British Antarctic Survey often need to power remote instruments a long way from the nearest mains electricity connection! These particular instruments are used to measure ozone. The ozone monitor draws around 5W when it is on, and the data is logged by a small, very low power computer that also acts as a control box telling the ozone monitor when to sample. In the winter it will only switch on for 2 hours every three days - but sampling will be nearly continuous in the early spring. A heater is needed at times to prevent the inlet tube freezing - it draws another 5W. A Kyocera 40W solar panel is used to provide power in the summer months, but in the winter the sun is below the horizon for three months, so a Forgen wind turbine is used to provide additional power. Forgens are exceptionally robust, and are one of the few wind turbines able to withstand the Antarctic winter.
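A rough winter energy budget for this setup: only the two 5 W figures and the two-hours-every-three-days sampling schedule come from the description above; the logger draw and the assumption that the heater runs only while sampling are placeholders.

```python
# Rough winter energy budget for the setup described above. The logger draw and
# heater duty cycle are assumptions; only the 5 W figures and the "2 hours every
# three days" sampling schedule come from the text.

HOURS_PER_DAY = 24

monitor_wh_per_day = 5.0 * 2 / 3            # 5 W for 2 h once every 3 days
logger_wh_per_day = 0.1 * HOURS_PER_DAY     # assumed ~0.1 W continuous draw
heater_wh_per_day = 5.0 * 2 / 3             # assume heater runs only while sampling

total = monitor_wh_per_day + logger_wh_per_day + heater_wh_per_day
print(f"Estimated winter demand: {total:.1f} Wh/day "
      f"(~{total / HOURS_PER_DAY:.2f} W average)")
# With the sun below the horizon for three months, the 40 W panel contributes
# nothing in winter, so the wind turbine must cover this average load plus
# battery and charging losses.
```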
<urn:uuid:ae246d4c-cbff-4512-8234-aaea695eb444>
3.140625
195
Knowledge Article
Science & Tech.
41.397273
Sea Level Rise News
Sea Level Rise
Sea level rise refers to the relatively recent net increase in Earth's ocean volume and depth, a trend that is widely attributed to rising global temperatures. While sea levels fluctuated dramatically in prehuman history, scientists link the past century's rapid rise at least partly to manmade global warming. Sea levels rose about 1.7 mm per year during the 20th century, according to the U.N. Intergovernmental Panel on Climate Change, and are now rising about 3.1 mm per year. This is partly because seawater naturally expands as it absorbs heat, increasing the overall oceanic volume, but also because excess warmth melts land ice (mountain glaciers, ice caps and ice sheets), adding more water to the oceans. Sea level rise can have disastrous effects on the environment. As seawater washes inland, it erodes land, floods wetlands and damages habitats. As levels continue to rise, low-lying islands and coastlines will be vulnerable to flooding and eventually, submersion. (Photo: Wikimedia Commons)
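A trivial extrapolation of the rates quoted above; holding a rate constant is an assumption for illustration only, since observed rates change over time.

```python
# Simple extrapolation of the rates quoted above (1.7 mm/yr over the 20th
# century, about 3.1 mm/yr now). Holding a rate constant is an assumption;
# observed rates are not constant.

rate_20th_century = 1.7   # mm per year
rate_recent = 3.1         # mm per year

print(f"20th-century total: {rate_20th_century * 100 / 10:.0f} cm over 100 years")
for years in (50, 100):
    rise_cm = rate_recent * years / 10
    print(f"At {rate_recent} mm/yr for {years} years: {rise_cm:.0f} cm")
```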
<urn:uuid:64ea727c-8b19-42ff-84e4-032e89ebf218>
4
214
Knowledge Article
Science & Tech.
41.063394
00:00 11 March 2009
The burgeoning field of astrobiology has a less well-known offshoot right here on Earth: the search for a "shadow biosphere" - a second, independent form of life unrelated to the sort we know. One promising avenue for discovering a new form of life is to look for "mirror life". Normal organisms use right-handed sugars and left-handed amino acids, but there is no theoretical reason why an alternative form of life could not use their mirror-image equivalents. Such mirror life would almost certainly be microbial, as otherwise we would probably have discovered it already. These images show the most common form (left) and the L isomer (right) of isoleucine, an amino acid. (Images: Wikimedia Commons)
<urn:uuid:baf6e203-8252-4110-8a33-cd7e5b17205f>
3.03125
168
Truncated
Science & Tech.
30.505357
Summary: Math 1550 Fall 2005 Section 31 P. Achar Due: October 25, 2005
Suppose f(x) and g(x) are two functions whose derivatives you know. There are various ways to combine them to get a new function. In class, we have developed various rules that tell us how to find the derivative of the combined function in terms of f'(x) and g'(x). For example:
· If you add them, the Sum Rule says (f(x) + g(x))' = f'(x) + g'(x).
· If you multiply them, the Product Rule tells you that (f(x)g(x))' = f(x)g'(x) + g(x)f'(x).
· If you divide them, the Quotient Rule tells you that (f(x)/g(x))' = (g(x)f'(x) - f(x)g'(x)) / g(x)^2.
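These rules are easy to spot-check with symbolic differentiation; the particular f and g below are arbitrary examples.

```python
# Quick check of the three rules above using symbolic differentiation. The
# specific f and g are arbitrary examples chosen for the check.
import sympy as sp

x = sp.symbols('x')
f = sp.sin(x)          # any differentiable f
g = x**2 + 1           # any differentiable g (nonzero, for the quotient rule)

assert sp.simplify(sp.diff(f + g, x) - (sp.diff(f, x) + sp.diff(g, x))) == 0
assert sp.simplify(sp.diff(f * g, x) - (f*sp.diff(g, x) + g*sp.diff(f, x))) == 0
assert sp.simplify(sp.diff(f / g, x)
                   - (g*sp.diff(f, x) - f*sp.diff(g, x)) / g**2) == 0
print("sum, product, and quotient rules verified for this example")
```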
<urn:uuid:9067ff95-b199-411b-9ee9-39202f30c934>
3.4375
179
Academic Writing
Science & Tech.
94.688659
Here are four different ways chemists use to show a molecule of hydrogen peroxide. In the colored molecule models, hydrogen is white and oxygen is red. Click on image for full size. Windows to the Universe original artwork by Randy Russell.

Hydrogen Peroxide - H2O2

Hydrogen peroxide (H2O2) is a kind of chemical. A molecule of hydrogen peroxide has oxygen and hydrogen atoms in it. Hydrogen peroxide is a clear liquid. Hydrogen peroxide is in some liquids that are used to clean wounds. There is a small amount of hydrogen peroxide in Earth's air. Hydrogen peroxide helps change sulfur trioxide (SO3) in the atmosphere into sulfuric acid (H2SO4). Sulfuric acid is part of acid rain.
<urn:uuid:cdfab0ff-9e5c-4e7b-995a-0e7229c4ade3>
3.84375
548
Content Listing
Science & Tech.
52.366326
STUDENT INSTRUCTION AND ANSWER
Activity 3: Concept Application - Using the Drake Equation
N = R x fp x ne x fl x fi x fc x L

In Activity 2, we estimated the number of students that had particular characteristics. In this activity, we will use the same estimation techniques to discover the number of existing extraterrestrial civilizations that possess the technology to communicate beyond their home planet. Your task is to complete the table below and use those values to solve the Drake Equation in order to estimate the number of intelligent civilizations in the Milky Way. You might wish to review the Drake Equation Background Information Sheet before making your estimation. After you make the calculation, answer the reflection questions.

- R: Number of target stars in the galaxy that are second generation stars with heavy elements, are hot enough to have a large habitable zone, and have long enough lifetimes for life to develop
- fp: Fraction (percentage) of those stars with planets or planet systems
- ne: Number of "Earth-like planets" in a planetary system that are at the right temperature for liquid water to exist (in the habitable zone)
- fl: Fraction (percentage) of Earth-like planets where life actually develops
- fi: Fraction (percentage) of Earth-like planets with at least one species of intelligent life
- fc: Fraction (percentage) of Earth-like planets where the technology to communicate beyond their planet develops
- L: "Lifetime" of communicating civilizations (years). Note: this number must be divided by the age of the galaxy, 10 billion years, when you make your calculation.
- N: Number of communicative civilizations

Questions about the Drake Equation
N = R x fp x ne x fl x fi x fc x L
A. What value did you get for the number of civilizations?
B. How does the value change if you double the lifetime of communicating civilizations?
C. How does the estimate change if we discover that only 1/3 of Sun-like target stars have planets?
D. How would you change your estimate if we discovered that early life developed on both Venus and Mars?
E. Determine the most reasonable maximum and minimum values that your group believes the terms fp, ne, fl, fi, and fc could have. Record your values for each term below.
F. Calculate the range of values for N that result from using the maximum and minimum values that your group recorded in the previous question.
G. Do the maximum and minimum values that you calculated make sense to your group? Explain why you think they might be too large, too small, or just right.
H. How many intelligent, communicating species in the galaxy do we actually know about? What then is the actual minimum value for N? (Hint: it is not zero.) Explain your reasoning.

In this paragraph we will offer some values for several of the terms in the Drake Equation that are often used by scientists when making these estimates. If we think that all stars that are like our Sun have planets, then we could estimate fp = 1 to represent 100%. If we use our solar system as a model then there is only one planet in the habitable zone that we know has liquid water on its surface (Earth), so we could imagine setting ne = 1. Since Earth is the only planet in our solar system that we know to have developed life, it seems reasonable to set fl = 0.1 to represent that about one out of every 10 planets has life. It is essentially impossible to know the fraction of species that develop on a planet that turn out to be intelligent and able to communicate, so a conservative estimate for fi and fc that we might use is 0.1 for each term.
As a rough guess we might imagine that across the galaxy intelligent communicating civilizations last for about 20,000 years out of the 10 billion year existence of the galaxy, which sets L = 2 x 10^-6.
I. What value do you get if you use the estimates provided in the preceding paragraph? How does this value compare to your original estimate and your estimate for a maximum value or your estimate for a minimum value?
CHALLENGE PROBLEM: Scientists recently discovered a massive gas giant planet orbiting the star 51 Peg. This planet orbits in the star's habitable zone (where liquid water can exist). Describe how this finding might change your estimate.
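A small calculation using the example values suggested in the closing paragraph above. The number of suitable target stars R is not specified in the text, so the value below is only a placeholder assumption.

```python
# Drake-equation calculation using the example values suggested in the closing
# paragraph above. R (the number of suitable target stars) is not specified in
# the text, so the value used here is only a placeholder assumption.

def drake(R, fp, ne, fl, fi, fc, L):
    return R * fp * ne * fl * fi * fc * L

N = drake(
    R=4e11,      # placeholder: suitable target stars in the galaxy (assumed)
    fp=1.0,      # all Sun-like stars have planets
    ne=1.0,      # one habitable-zone planet per system
    fl=0.1,      # 1 in 10 such planets develops life
    fi=0.1,      # 1 in 10 of those develops intelligence
    fc=0.1,      # 1 in 10 of those develops communication technology
    L=2e-6,      # 20,000 yr lifetime / 10-billion-yr galaxy age
)
print(f"Estimated communicating civilizations: N = {N:,.0f}")
```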
<urn:uuid:9212fdbf-14de-477d-9017-325d1b649bfa>
3.65625
968
Tutorial
Science & Tech.
49.822209
In chemical thermodynamics, activity (symbol a) is a measure of the “effective concentration” of a species in a mixture, meaning that the species' chemical potential depends on the activity of a real solution in the same way that it would depend on concentration for an ideal solution. By convention, activity is treated as a dimensionless quantity, although its actual value depends on customary choices of standard state for the species. The activity of pure substances in condensed phases (solids or liquids) is normally taken as unity (the number 1). Activity depends on temperature, pressure and composition of the mixture, among other things. For gases, the effective partial pressure is usually referred to as fugacity.

The difference between activity and other measures of composition arises because molecules in non-ideal gases or solutions interact with each other, either to attract or to repel each other. The activity of an ion is particularly influenced by its surroundings.

Activities should be used to define equilibrium constants but, in practice, concentrations are often used instead. The same is often true of equations for reaction rates. However, there are circumstances where the activity and the concentration are significantly different and, as such, it is not valid to approximate with concentrations where activities are required. Two examples serve to illustrate this point:
- In a solution of potassium hydrogen iodate at 0.02 M the activity is 40% lower than the calculated hydrogen ion concentration, resulting in a much higher pH than expected.
- When a 0.1 M hydrochloric acid solution containing methyl green indicator is added to a 5 M solution of magnesium chloride, the color of the indicator changes from green to yellow—indicating increasing acidity—when in fact the acid has been diluted. Although at low ionic strength (<0.1 M) the activity coefficient approaches unity, this coefficient can actually increase with ionic strength in a high ionic strength regime. For hydrochloric acid solutions, the minimum is around 0.4 M.

The activity of a species i is defined as

    a_i = exp((μ_i − μ°_i) / (RT))

where μ_i is the chemical potential of the species under the conditions of interest, μ°_i is the chemical potential of that species in the chosen standard state, R is the gas constant and T is the thermodynamic temperature. This definition can also be written in terms of the chemical potential:

    μ_i = μ°_i + RT ln a_i

Hence the activity will depend on any factor that alters the chemical potential. These include temperature, pressure, chemical environment etc. In specialised cases, other factors may have to be considered, such as the presence of an electric or magnetic field or the position in a gravitational field. However, the most common use of activity is to describe the variation in chemical potential with the composition of a mixture. The activity also depends on the choice of standard state, as it describes the difference between an actual chemical potential and a standard chemical potential. In principle, the choice of standard state is arbitrary, although there are certain conventional standard states which are usually used in different situations.

Activity coefficient

The activity coefficient γ relates the activity to a measured amount fraction x_i, molality b_i or amount concentration c_i:

    a_i = γ_x,i x_i        a_i = γ_b,i b_i / b°        a_i = γ_c,i c_i / c°

The division by the standard molality b° or the standard amount concentration c° is necessary to ensure that both the activity and the activity coefficient are dimensionless, as is conventional. When the activity coefficient is close to one, the substance shows almost ideal behaviour according to Henry's law.
In these cases, the activity can be substituted with the appropriate dimensionless measure of composition x_i, m_i/m° or c_i/c°. It is also possible to define an activity coefficient in terms of Raoult's law: the International Union of Pure and Applied Chemistry (IUPAC) recommends the symbol ƒ for this activity coefficient, although this should not be confused with fugacity.

Standard states

In most laboratory situations, the difference in behaviour between a real gas and an ideal gas is dependent only on the pressure and the temperature, not on the presence of any other gases. At a given temperature, the "effective" pressure of a gas i is given by its fugacity ƒ_i: this may be higher or lower than its mechanical pressure. By historical convention, fugacities have the dimension of pressure, so the dimensionless activity is given by:

    a_i = ƒ_i / p° = Φ_i y_i p / p°

where Φ_i is the dimensionless fugacity coefficient of the species, y_i is its fraction in the gaseous mixture (y = 1 for a pure gas) and p is the total pressure. The value p° is the standard pressure: it may be equal to 1 atm (101.325 kPa) or 1 bar (100 kPa) depending on the source of data, and should always be quoted.

Mixtures in general

The most convenient way of expressing the composition of a generic mixture is by using the amount fractions x (or y in the gas phase) of the different components, where x_i = n_i / n is the fraction of the total amount of substance contributed by component i. The standard state of each component in the mixture is taken to be the pure substance, i.e. the pure substance has an activity of one. When activity coefficients are used, they are usually defined in terms of Raoult's law,

    a_i = ƒ_i x_i

where ƒ_i is the Raoult's law activity coefficient: an activity coefficient of one indicates ideal behaviour according to Raoult's law.

Dilute solutions (non-ionic)

A solute in dilute solution usually follows Henry's law rather than Raoult's law, and it is more usual to express the composition of the solution in terms of the amount concentration c (in mol/L) or the molality b (in mol/kg) of the solute rather than in amount fractions. The standard state of a dilute solution is a hypothetical solution of concentration c° = 1 mol/L (or molality b° = 1 mol/kg) which shows ideal behaviour (also referred to as "infinite-dilution" behaviour). The standard state, and hence the activity, depends on which measure of composition is used. Molalities are often preferred as the volumes of non-ideal mixtures are not strictly additive and are also temperature-dependent: molalities do not depend on volume, whereas amount concentrations do. The activity of the solute is given by:

    a_c,i = γ_c,i c_i / c°    (or, in terms of molality, a_b,i = γ_b,i b_i / b°)
Therefore one introduces the notions of
- mean ionic activity: a_±^ν = a_+^(ν+) · a_-^(ν-)
- mean ionic molality: b_±^ν = b_+^(ν+) · b_-^(ν-)
- mean ionic activity coefficient: γ_±^ν = γ_+^(ν+) · γ_-^(ν-)
where ν = ν+ + ν-, and ν+ and ν- are the stoichiometric coefficients involved in the ionic dissociation process. Even though γ+ and γ- cannot be determined separately, γ± is a measurable quantity that can also be predicted for sufficiently dilute systems using Debye–Hückel theory. For electrolyte solutions at higher concentrations, Debye–Hückel theory needs to be extended and replaced, e.g., by a Pitzer electrolyte solution model (see external links below for examples). For the activity of a strong ionic solute (complete dissociation) we can write:

    a_2 = a_±^ν = γ_±^ν m_±^ν

The most direct way of measuring an activity of a species is to measure its partial vapor pressure in equilibrium with a number of solutions of different strength. For some solutes this is not practical: say, sucrose or salt (NaCl) do not have a measurable vapor pressure at ordinary temperatures. However, in such cases it is possible to measure the vapor pressure of the solvent instead. Using the Gibbs–Duhem relation it is possible to translate the change in solvent vapor pressures with concentration into activities for the solute. Another way to determine the activity of a species is through the manipulation of colligative properties, specifically freezing point depression. Using freezing point depression techniques, it is possible to calculate the activity of a weak acid from a relation involving the measured depression, where m' is the total molal equilibrium concentration of solute determined by the colligative property measurement (in this case ΔT_fus), b is the nominal molality obtained from titration and a is the activity of the species. There are also electrochemical methods that allow the determination of activity and its coefficient. In each case the activity enters through the relation

    μ_i = μ°_i + RT ln a_i

where R is the gas constant and μ°_i is the value of μ_i under standard conditions. Note that the choice of concentration scale affects both the activity and the standard state chemical potential, which is especially important when the reference state is the infinite dilution of a solute in a solvent.

Formulae involving activities can be simplified by considering that:
- For a chemical solution, the solvent has an activity of unity (only a valid approximation for rather dilute solutions).
- At a low concentration, the activity of a solute can be approximated to the ratio of its concentration over the standard concentration, a_i ≈ c_i / c°. Therefore, it is approximately equal to its concentration.
- For a mix of gas at low pressure, the activity is equal to the ratio of the partial pressure of the gas over the standard pressure, a_i = p_i / p°. Therefore, it is equal to the partial pressure in bars (compared to a standard pressure of 1 bar).
- For a solid body, a uniform, single-species solid at one bar has an activity of unity. The same thing holds for a pure liquid.
The latter follows from any definition based on Raoult's law, because if we let the solute concentration x_1 go to zero, the vapor pressure of the solvent p will go to p*. Thus its activity a = p/p* will go to unity. This means that if during a reaction in dilute solution more solvent is generated (the reaction produces water, e.g.) we can typically set its activity to unity. Solid and liquid activities do not depend very strongly on pressure because their molar volumes are typically small. Graphite at 100 bars has an activity of only 1.01 if we choose p° = 1 bar as standard state.
Only at very high pressures do we need to worry about such changes.

Example values

Example values of activity coefficients of sodium chloride in aqueous solution are given in the table. In an ideal solution, these values would all be unity. The deviations tend to become larger with increasing molality and temperature, but with some exceptions.

[Table: mean activity coefficients of NaCl(aq) as a function of molality (mol/kg) at 25 °C, 50 °C, 100 °C, 200 °C, 300 °C and 350 °C; the numerical values did not survive extraction.]

See also
- Fugacity, the equivalent of activity for partial pressure
- Chemical equilibrium
- Electrochemical potential
- Excess chemical potential
- Partial molar property
- Thermodynamic equilibrium

References
- McCarty, Christopher G.; Vitz, Ed (2006), "pH Paradoxes: Demonstrating that it is not true that pH ≡ -log[H+]", J. Chem. Ed. 83 (5): 752, Bibcode:2006JChEd..83..752M, doi:10.1021/ed083p752
- IUPAC, Compendium of Chemical Terminology, 2nd ed. (the "Gold Book") (1997). Online corrected version: (2006–) "activity (relative activity), a".
- International Union of Pure and Applied Chemistry (1993). Quantities, Units and Symbols in Physical Chemistry, 2nd edition, Oxford: Blackwell Science. ISBN 0-632-03583-8. pp. 49–50. Electronic version.
- Kaufman, Myron (2002), Principles of Thermodynamics, CRC Press, p. 213, ISBN 0-8247-0692-7
- Cohen, Paul (1988), The ASME Handbook on Water Technology for Thermal Systems, American Society of Mechanical Engineers, p. 567, ISBN 0-7918-0300-7

External links
- Equivalences among different forms of activity coefficients and chemical potentials
- Calculate activity coefficients of common inorganic electrolytes and their mixtures
- AIOMFAC online-model: calculator for activity coefficients of inorganic ions, water, and organic compounds in aqueous solutions and multicomponent mixtures with organic compounds.
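As noted in the ionic-solutions discussion above, γ± for a dilute electrolyte can be predicted from Debye–Hückel theory. The sketch below implements only the limiting law for a fully dissociated 1:1 salt in water at 25 °C (A ≈ 0.509 (kg/mol)^1/2); it becomes unreliable above roughly 0.01 mol/kg, where extended treatments such as Pitzer models are needed.

```python
# Minimal sketch of the Debye-Hueckel limiting law mentioned above, which
# predicts the mean ionic activity coefficient of a dilute electrolyte.
# A = 0.509 (kg/mol)^0.5 is the usual coefficient for water at 25 C; the
# molalities are example values and the law is only reliable below ~0.01 mol/kg.
import math

A = 0.509          # (kg/mol)^0.5, water at 25 C

def gamma_pm(molality, z_plus=1, z_minus=-1):
    """Mean ionic activity coefficient for a fully dissociated 1:1 salt."""
    ionic_strength = 0.5 * (molality * z_plus**2 + molality * abs(z_minus)**2)
    log10_gamma = -A * abs(z_plus * z_minus) * math.sqrt(ionic_strength)
    return 10 ** log10_gamma

for m in (0.001, 0.01, 0.1):
    print(f"b = {m:5.3f} mol/kg  ->  gamma_+- ~ {gamma_pm(m):.3f}")
```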
<urn:uuid:68adcc9f-3825-4e0d-b782-61287d5fe196>
3.90625
2,628
Knowledge Article
Science & Tech.
34.558661
One more little side trip before we proceed with the differential geometry: Lie algebras. These are like “regular” associative algebras in that we take a module (often a vector space) and define a bilinear operation on it. This much is covered at the top of the post on algebras. The difference is that instead of insisting that the operation be associative, we impose different conditions. Also, instead of writing our operation like a multiplication (and using the word “multiplication”), we will write it as [x, y] and call it the “bracket” of x and y.

Now, our first condition is that the bracket be antisymmetric:

    [x, y] = -[y, x]

Secondly, and more importantly, we demand that the bracket should satisfy the “Jacobi identity”:

    [x, [y, z]] = [[x, y], z] + [y, [x, z]]

What this means is that the operation of “bracketing with x” acts like a derivation on the Lie algebra; we can apply it to the bracket [y, z] by first applying it to y and bracketing the result with z, then bracketing y with the result of applying the operation to z, and adding the two together. This condition is often stated in the equivalent form

    [x, [y, z]] + [y, [z, x]] + [z, [x, y]] = 0

It’s a nice exercise to show that (assuming antisymmetry) these two equations are indeed equivalent. This form of the Jacobi identity is neat in the way it shows a rotational symmetry among the three algebra elements, but I feel that it misses the deep algebraic point about why the Jacobi identity is so important: it makes for an algebra that acts on itself by derivations of its own structure.

It turns out that we already know of an example of a Lie algebra: the cross product of vectors in R^3. Indeed, take three vectors u, v, and w and try multiplying them out in all three orders:

    u × (v × w),    v × (w × u),    w × (u × v)

and add the results together to see that you always get zero, thus satisfying the Jacobi identity.
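The closing claim is easy to verify numerically: for the cross product on R^3, the cyclic sum in the Jacobi identity vanishes (up to rounding) for any three vectors.

```python
# Numerical check of the claim above: the cross product on R^3 satisfies the
# Jacobi identity. The three vectors are arbitrary; any choice should give
# (numerically) zero.
import numpy as np

def jacobi_residual(u, v, w):
    return (np.cross(u, np.cross(v, w))
            + np.cross(v, np.cross(w, u))
            + np.cross(w, np.cross(u, v)))

rng = np.random.default_rng(1)
u, v, w = rng.normal(size=(3, 3))
print(jacobi_residual(u, v, w))   # -> [0. 0. 0.] up to rounding error
```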
<urn:uuid:8b2044f9-7aa5-4b63-82bf-ddcb39ac8bec>
2.828125
391
Personal Blog
Science & Tech.
31.102163