cp ~bfarr/Surf_start.mws ~

You can copy the worksheet now, but you should read through the lab before you load it into Maple. Once you have read to the exercises, start up Maple, load the worksheet Surf_start.mws, and go through it carefully. Then you can start working on the exercises.

One of the most valuable services provided by computer software such as Maple is that it allows us to produce intricate graphs with a minimum of effort on our part. This becomes especially apparent with functions of two variables, because many more computations are required to produce one graph, yet Maple performs all of these computations with only a little guidance from the user.

The simplest way of describing a surface in Cartesian coordinates is as the graph of a function z = f(x, y) over a domain, i.e., a set of points in the plane. The domain can have any shape, but a rectangular one is the easiest to deal with. Another common, but more difficult, way of describing a surface is as the graph of an equation F(x, y, z) = c, where c is a constant. In this case, we say the surface is defined implicitly. A third way of representing a surface is through the use of level curves. The idea is that a horizontal plane z = c intersects the surface in a curve. The projection of this curve onto the xy-plane is called a level curve. A collection of such curves for different values of c is a representation of the surface called a contour plot.

Some surfaces are difficult to describe in Cartesian coordinates, but easy to describe using either cylindrical or spherical coordinates. The obvious examples are cylinders and spheres, but there are many other situations where these coordinate systems are useful.

What does the contour plot look like in the regions where the surface plot has a steep incline? What does it look like where the surface plot is almost flat? What can you say about the surface plot in a region where the contour plot looks like a series of nested circles?
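A concrete example may help in thinking about these questions (a standard illustration, not taken from the worksheet). Consider the paraboloid

$$z = f(x, y) = x^2 + y^2.$$

Setting $z = c$ gives the level curves

$$x^2 + y^2 = c, \qquad c \ge 0,$$

which are nested circles of radius $\sqrt{c}$ centered at the origin. For equally spaced values of $c$, the circles crowd together as you move outward, where the paraboloid is steep; near the origin, where the surface is almost flat, the contours are widely spaced. Nested circles in a contour plot therefore signal a local peak or pit of the surface.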
Lepisorus clathratus is a polypod fern and belongs to the family Polypodiaceae. It is usually found growing on rocks, mainly in the Sino-Himalayan region at altitudes from approximately 2,000 to 5,000 metres above sea level. This species has an unusual behaviour for a fern: it sheds its leaves during winter, leaving the rhizome alive (this is known as a sleeping rhizome). The distribution of this species is centred in the Sino-Himalayan region, with extension towards north and central China and Taiwan. The species is epiphytic and grows on rocks. It shows a preference for growing in alpine habitats, usually on the tops of mountains at between 2,000 m and 5,000 m above sea level. This species is reported as endangered in Taiwan and Japan because there are only a few populations there. However, the species is widespread and locally common in the western Himalaya and Hengduan Mountains.

Image captions:
Lepisorus clathratus is a deciduous fern species. © Li Wang
Lepisorus clathratus in China. © Li Wang
Paraphyses of Lepisorus clathratus. © Li Wang
Rhizome scale of Lepisorus clathratus. © Li Wang
Sori on the leaves of Lepisorus clathratus. © Li Wang

Glossary:
- Plant structures that open at maturity.
- Grows on another plant or structure, but does not derive nutrition from it.
- Plant structures, such as fruits, that do not open at maturity.
- Having a shape between oval and lance-like.
- Erect sterile filaments occurring amongst reproductive organs in plants.
- A segregate genus is created when a genus is split off from another genus.
- Clusters of sporangia.
- Spore-producing structures.
- Occurs when the demand for water exceeds the available amount during a certain period.
by Staff Writers Paris (ESA) Jul 27, 2011 Observing Saturn, Herschel has detected evidence of water molecules in a huge torus surrounding the planet and centred on the orbit of its small moon, Enceladus. The water plumes on Enceladus, which were detected by the Cassini-Huygens mission, inject the water into the torus and part of it eventually precipitates into Saturn's atmosphere. The new study has identified Enceladus as the primary water supply to Saturn's upper atmosphere; this is the first example in the Solar System of a moon directly influencing the atmosphere of its host planet. Astronomers have detected water, which is a fundamental molecule on Earth, in many different environments throughout the Universe. Given its key role during the formation and evolution of the Solar System, determining the abundance and investigating the origin of this molecule on and around planets can provide crucial insight into the history of our cosmic neighbourhood. The origin of the water in the upper atmosphere of the giant planets - Jupiter, Saturn, Uranus and Neptune - as well as in that of Titan, the largest moon of Saturn, is particularly enigmatic. The first evidence of this water was found by ESA's Infrared Space Observatory (ISO) in 1997 and confirmed, a couple of years later, by NASA's Submillimeter Wave Astronomy Satellite (SWAS). These bodies have chemically reducing atmospheres; oxidation reactions do not occur - due to the lack of oxygen - and astronomers do not expect large amounts of water there. Reducing atmospheres are similar in composition to the nebula from which the Solar System originated; the early Earth also had such an atmosphere, which was later supplied with oxygen by the first living organisms that appeared on the planet. Although chemically reducing, the outer planets' atmospheres do contain traces of water in their warm, deep layers; however, the cold temperatures present at cloud level, which cause water to condense, prevent its transport to higher layers. Thus, the existence of water in the upper atmospheres of these objects calls for an external supply of such molecules, which may vary from planet to planet. A new study, based on data taken with ESA's Herschel Space Observatory, offers a first answer to the puzzle in the case of Saturn. The main water provider to this planet's upper atmosphere appears to be its small moon Enceladus, whose plumes of water vapour and ice were detected by the NASA/ESA/ASI Cassini-Huygens mission in 2005. "This is the first time we see a moon directly acting on its host planet's atmosphere and modifying its chemical composition," comments Paul Hartogh from the Max-Planck-Institut fur Sonnensystemforschung (MPS) in Katlenburg-Lindau, Germany, who led the study. Hartogh is the Principal Investigator of the Herschel Key Programme Water and related chemistry in the Solar System, within which the observations have been performed. "This unique situation has not previously been observed in the Solar System," he adds. The new detection, obtained with the HIFI spectrometer on board Herschel, is fundamentally different from those achieved over a decade ago. Whereas the spectra acquired by ISO and SWAS only revealed emission by water molecules in Saturn's atmosphere, Herschel has unexpectedly recorded absorption lines, as well. Thus, the newly detected water must be located somewhere in the foreground of the planet, along its line of sight to the observatory, and in a colder environment than that responsible for the emission lines. 
"The best explanation suggests that the water detected by Herschel is distributed in the Enceladus torus, a tenuous ring of material fed by this moon's plumes and centred on its orbit," explains co-author Emmanuel Lellouch from the Laboratoire d'Etudes Spatiales et d'Instrumentation en Astrophysique (LESIA) of the Observatoire de Paris, at Meudon, in France, who performed the modelling of the observations. The Enceladus torus is located at a distance from the centre of the planet of nearly four times Saturn's radius. The first sign of absorption was found in spectra taken during Herschel's calibration phase, in the summer of 2009. Further observations performed in 2010 confirmed the earlier detection at various wavelengths. The fact that no absorption was detected by SWAS over ten years ago is due to the varying geometry of Saturn's system of rings and satellites as viewed from the Earth and its vicinity. The rings and satellites are currently seen almost edge-on, whereas the configuration was much more oblique at the time of the SWAS observations. "This is a further clue implying that the absorbing water revealed by Herschel is distributed in a torus along the planet's equatorial plane," adds Lellouch. Ultimately, part of the water ejected by Enceladus precipitates into Saturn's and Titan's upper atmospheres, while the rest reaches the other satellites and the rings. The high velocity resolution and sensitivity of the HIFI spectra allowed the astronomers to characterise the torus' dynamics and to investigate the fate of the water molecules. "Combining Herschel data with models of water evolution in Saturn's environment, we could estimate the source rate of Enceladus and the rate at which water precipitates into Saturn's atmosphere," notes Hartogh. The source rate is in agreement with in-situ measurements performed by the Cassini spacecraft. The study concludes that the water abundance observed in Saturn's upper atmosphere can be fully explained in terms of an Enceladus origin. On the other hand, Enceladus does not seem to provide enough water to match the values observed for Titan, and the issue of this object's water supply remains open. "After Cassini's detection of Enceladus' plumes, Herschel has finally shown where the water emanating from this moon ends up - a nice piece of team work exploiting the complementarity of two very different missions in ESA's Science programme," comments Goran Pilbratt, ESA Herschel Project Scientist. "We now look forward to hopefully shedding new light on the origin of water in the atmosphere of Titan and the other giant planets as well," he concludes. Related publications: P. Hartogh, E. Lellouch, et al., 'Direct detection of the Enceladus water torus with Herschel', Astronomy and Astrophysics, 2011, 532, L2, DOI: 10.1051/0004-6361/201117377 Comment on this article via your Facebook, Yahoo, AOL, Hotmail login. Tempest-from-hell seen on Saturn Paris (AFP) July 6, 2011 Imagine being caught in a thunderstorm as wide as the Earth with discharges of lightning 10,000 times more powerful than normal, flashing 10 times per second at its peak. Now imagine that this storm is still unfolding, eight months later. One of the most violent weather events in the Solar System began to erupt on Saturn last December and is still enthralling astronomers, the British jou ... read more |The content herein, unless otherwise known to be public domain, are Copyright 1995-2011 - Space Media Network. 
AFP and UPI Wire Stories are copyright Agence France-Presse and United Press International. ESA Portal Reports are copyright European Space Agency. All NASA sourced material is public domain. Additional copyrights may apply in whole or part to other bona fide parties. Advertising does not imply endorsement,agreement or approval of any opinions, statements or information provided by Space Media Network on any Web page published or hosted by Space Media Network. Privacy Statement|
Zamia furfuracea. Photo: Michael Lahanas

Zamia furfuracea L.f. in Aiton, Hortus Kew. 3: 477 (1789).

Zamia furfuracea is a cycad native to southeastern Veracruz state in eastern Mexico. Although not a palm tree (Arecaceae), its growth habit is superficially similar to a palm's; therefore it is commonly known as "Cardboard Palm", but the alternate name Cardboard Cycad is preferable. Other names include Cardboard Plant, Cardboard Sago, Jamaican Sago and Mexican Cycad (from Mexican Spanish Cícada Mexicana). The plant's binomial name comes from the Latin zamia, for "pine nut", and furfuracea, meaning "mealy" or "scurfy".

Description and ecology

The plant has a short, sometimes subterranean trunk up to 20 cm broad and high, usually marked with scars from old leaf bases. It grows very slowly when young, but its growth accelerates after the trunk matures. Including the leaves, the whole plant typically grows to 1.3 m tall with a width of about 2 m. The leaves radiate from the center of the trunk; each leaf is 50-150 cm long with a petiole 15-30 cm long, and 6-12 pairs of extremely stiff, pubescent (fuzzy) green leaflets. These leaflets grow 8-20 cm long and 3-5 cm wide. Occasionally, the leaflets are toothed toward the tips. The circular crowns of leaves resemble fern or palm fronds. They are erect in full sun, horizontal in shade. This plant produces a rusty-brown cone in the center of the female plant. The egg-shaped female (seed-producing) cones and smaller male (pollen-producing) cone clusters are produced on separate plants. Pollination is by certain insects, namely the belid weevil Rhopalotria mollis. The Cardboard Cycad can be reproduced only from the fleshy, brightly crimson-colored seeds produced by the female plants. The germination process is very slow and difficult to achieve in cultivation; as a result, many plants sold for horticultural use are illegally collected in the wild, leading to the species being classified as Vulnerable. This plant is easy to care for and grows best in moist, well-drained soil. It does well in full sun or shade, but not in constant deep shade. It is fairly salt- and drought-tolerant, but should be protected from extreme cold, and should occasionally be fed with palm food. After Cycas revoluta, this is probably the most popular cycad species in cultivation. In temperate regions it is commonly grown as a houseplant and, in subtropical areas, as a container or bedding plant outdoors. All parts of the plant are poisonous to animals and humans. The toxicity causes liver and kidney failure, as well as eventual paralysis; dehydration sets in very quickly. No treatment for the poisoning is currently known.

Donaldson (2003). Zamia furfuracea. 2006 IUCN Red List of Threatened Species. IUCN 2006. www.iucnredlist.org. Retrieved on 11 May 2006. Listed as Vulnerable (VU A2acd v3.1).

Source: Wikispecies, Wikipedia: All text is available under the terms of the GNU Free Documentation License
COURTESY OF CITIZEN SKY Since the early 19th century, astronomers have observed this extremely long-period eclipsing binary located in the constellation Auriga, the charioteer. In 1928, astronomer Harlow Shapley correctly concluded that the two stars were about equal in mass. Based on this information they should be about equal in brightness as well. But the spectrum of the system showed no light from the companion at all. The visibly bright first star (called the primary) was being eclipsed by a massive, invisible second star (called the secondary). Epsilon Aurigae is bright enough to be seen with the unaided eye even in the most light-polluted cities, and it is visible every fall, winter and spring. The change in brightness that this star undergoes is called an eclipse (a process of fading and coming back to its usual brightness). - PRINCIPAL SCIENTIST: Arne Henden, Project Principal Investigator - SCIENTIST AFFILIATION: American Association of Variable Star Observers - DATES: Ongoing - PROJECT TYPE: Observation - COST: Free - GRADE LEVEL: All Ages - TIME COMMITMENT: Variable - HOW TO JOIN: Contact the Citizen Sky project.
The Orion Nebula - A stellar nursery

HII Regions - Stellar Nurseries

Hidden in the sword of Orion the hunter, a weapon and symbol of death, lies the birthplace of new stars - the stunning Orion Nebula, one of the many objects catalogued by Charles Messier in the 18th century during his hunt for comets. Messier noted the position of the nebula because he didn't want to mistake it for a comet. Too bad he didn't notice the spectacular event taking place in the nebula, which he named M42. Nestled in the center of M42 is a group of stars, known as the Trapezium, which have formed from the gas in the nebula. From their spectra, we can determine that these stars are blue and hot. We can determine the distance to these stars and thus figure out that they are quite bright. The theory of stellar evolution tells us that hot, bright, blue stars are very young. So by studying the stars in the centers of objects like the Orion Nebula, we realize that HII regions are the birthplaces of stars. The Orion nebula is more or less circular in shape, and it glows with a characteristic red color. All these types of stellar nurseries, called HII regions, have this color. There are many HII regions with young stars in the disk of our Galaxy.
Gas chromatography (GC) is a common type of chromatography used to analyze or separate the volatile components of a mixture. The technique can be used to test the purity of a particular substance or to separate the different components of a mixture. The analysis begins by injecting a syringe needle containing a small amount of sample into the hot injector port of the gas chromatograph. The injector is set to a temperature higher than the boiling points of the components, so that the components evaporate into the gas phase inside the injector. The carrier gas (normally helium) then pushes the gaseous components into the GC column. The separation of components occurs there, as they partition between the mobile phase (the carrier gas) and the stationary phase (a high-boiling liquid). A metal identification tag on the column records what is inside it, along with the maximum operating temperature and the column's length and diameter. The column temperature is raised by a heating element. The detector inside the gas chromatograph registers the differences in partitioning between the mobile and stationary phases: the molecules reach the detector at different intervals depending on their partitioning, and the number of molecules generating the signal is proportional to the area of the corresponding peak.

Although gas chromatography has many uses, GC does have certain limitations. It is useful only for the analysis of small amounts of compounds that have vapor pressures high enough to allow them to pass through a GC column, and, like TLC, gas-liquid chromatography doesn't identify compounds unless known standards are available. Coupling GC with a mass spectrometer combines the superb separation capabilities of GC with the superior identification methods of mass spectrometry. GC can also be combined with IR spectroscopy. IR can help confirm that a reaction has gone to completion: if the functional groups of the product appear in the IR spectrum, we can be confident that the reaction has gone to completion. The same conclusion can be drawn from the GC analysis; peaks that do not correlate with the standards may be due to an incomplete reaction or impurities in the sample.

The basic parts of a GC machine are as follows:
- Source of high-pressure pure carrier gas
- Flow controller
- Heated injection port
- Column and column oven
- Detector
- Recording device

A small hypodermic syringe is used to inject the sample through a sealed rubber septum or gasket into the stream of carrier gas in the heated injection port. The sample vaporizes immediately and the carrier gas sweeps it into the column. The column is enclosed in an oven whose temperature can be regulated. After the sample's components are separated by the column, they pass into a detector, where they produce electronic signals that can be amplified and recorded.

1. Wash the syringe with acetone by filling it completely and pushing it out into a waste paper towel. (Possible errors in gas chromatography can arise from improper rinsing of the syringe. The syringe should be rinsed twice with acetone and once or twice with the sample. Improper rinsing can produce unknown peaks that alter the analysis of the sample; this error is easily avoided.) About 1 microliter of sample is needed.
2. Pull some sample into the syringe. Remove air bubbles by quickly moving the plunger up and down while the needle is in the sample.
3. Turn on the chart recorder, adjust the chart speed in cm/min, set the baseline using the zero control so that it sits 1 cm from the bottom of the chart paper, and start the chart.
4. Inject the sample into either column A or column B, pushing the needle completely into the injector until it can no longer be seen, then pull the syringe out of the port.
5. Mark the initial injection time on the chart. (The sample should be injected at exactly the same time as the 'start' button is pressed; otherwise, take note of how long after injection recording started. If the sample is not injected at the exact time the button is pressed, retention times will be off in the calculations.)
6. Clean the syringe immediately. The syringe should be rinsed with acetone before injecting a different sample; rinse before any other sample is injected and after every sample.
7. Record the current (in milliamperes) and the temperature (in degrees Celsius).

Notes on injection:
1. The injection site, the silver disk, is very hot.
2. The needle must pass through a rubber septum, so there will be some resistance. Some machines have a metal plate near the septum; if the resistance feels like metal, pull the needle out and try again. When done correctly, the needle is completely inserted into the injection point.
3. Quick injection is needed for good results.
4. Take the needle out immediately after injection.
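Because quantitation in GC rests on the proportionality between peak area and the number of molecules reaching the detector, a small numerical sketch may help. The following Java program is illustrative only: the trace values, sampling interval, and peak boundaries are invented for the example, and real instruments perform this integration in vendor software. It integrates two peaks of a sampled, baseline-corrected detector trace by the trapezoidal rule and reports the relative composition:

```java
public class GcPeakArea {

    // Trapezoidal integration of the detector signal between two sample indices.
    static double peakArea(double[] signal, double dt, int start, int end) {
        double area = 0.0;
        for (int i = start; i < end; i++) {
            area += 0.5 * (signal[i] + signal[i + 1]) * dt;
        }
        return area;
    }

    public static void main(String[] args) {
        double dt = 0.5; // seconds between samples (assumed)

        // Hypothetical baseline-corrected trace with two peaks.
        double[] signal = {
            0, 0, 1, 5, 12, 5, 1, 0, 0, 0,   // first peak, retention time near 2 s
            0, 2, 9, 22, 9, 2, 0, 0, 0, 0    // second peak, retention time near 6.5 s
        };

        double a1 = peakArea(signal, dt, 1, 8);   // assumed bounds of peak 1
        double a2 = peakArea(signal, dt, 10, 17); // assumed bounds of peak 2
        double total = a1 + a2;

        System.out.printf("Peak 1: area %.2f (%.1f%%)%n", a1, 100 * a1 / total);
        System.out.printf("Peak 2: area %.2f (%.1f%%)%n", a2, 100 * a2 / total);
    }
}
```

Here the areas come out to 12.0 and 22.0, i.e., roughly 35% and 65% of the total. Relative areas like these are what one reads off a GC trace to estimate composition, assuming the detector responds similarly to the two components.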
Family: Talpidae, Moles

Description: Tail is longer and less hairy than in other moles in its range. Foretoes have webbing between them, hence the name "aquaticus." Although they can swim, they are not aquatic. Three upper and lower premolars on each side. No external ears, and the eyes are completely covered with skin. Northern animals are larger and dark colored; southern animals are golden or silvery colored. Males tend to be larger than females in all areas.

Dimensions: 103-208 mm, 16-38 mm, 40-140 g / 129-168 mm, 20-28 mm, 32-90 g

Habitat: Cities, suburbs & towns; meadows & fields; scrub, shrub & brushlands; grasslands & prairies; forests & woodlands

Range: Plains, Great Lakes, New England, Mid-Atlantic, Rocky Mountains, Southeast, Florida, Texas

Discussion: Active year-round, and feeds on a variety of invertebrates including earthworms and ant larvae. About 99% of their time is spent underground, in tunnels and associated chambers. Construction of roads and golf courses has provided quality soils and increased moisture, allowing spread of Eastern Moles in some areas. Both surface and deeper tunnels are constructed in moist, loamy soils throughout eastern North America.
When a USGS colleague wandered into geology professor LeeAnn Munk's UAA office one day and asked, "What do you know about lithium?" she had to honestly answer, "Not much." But that was just the question to launch a curious researcher like Munk, already interested in all things related to the geochemistry of ore deposits. She started reading and researching. Most people know that lithium is ubiquitous in all those little gadgets and widgets we've grown to love and depend on. Lithium-ion batteries are the most efficient way to store, power and recharge the energy needed to run our laptops (maybe you're using your Li-ion battery to view this article right now?), cameras, smartphones and even hybrid, battery-powered automobiles. (Imagine: It takes 100 laptop-sized batteries to power a hybrid car.) Lithium is considered a critical element, both for technological advancement and also for national security. It didn't take Munk long to locate a funding opportunity that could help extend her knowledge. It came in the form of a $70,000 grant from the USGS Mineral Resource External Research Program (MRERP) to explore lithium brine deposits.

First steps: where and how

In the United States, the only lithium brine in production is located in Clayton Valley, Nev. Munk, with help from undergraduate researcher Hillary Jochens and University of Utah colleague Scott Hynek, worked there over the course of three years. Her goal was to identify which rocks lithium turns up in and to understand what geochemical processes might be at work to help concentrate this valuable metallic element. To date, they've identified potential sources of lithium in the basin and are building a knowledge base to understand the relevant processes that concentrate and replenish the brine with lithium. But that's barely half the story, as Munk explains: "The overarching question is how sustainable are lithium brines as a source for the metal? How old are they? How long do they take to regenerate?" She suspects the brines could be hundreds of thousands of years in the making. So during that timeframe, did the brine go through some evaporative process that further concentrated it? And what role do geothermal sources play? Another important question related to sustainability is what happens as the lithium brine gets pumped out of the ground and into drying beds to isolate the salts for lithium processing. Does fresh water flow into the salty brine and change its character? Can pumping lithium brines deplete or change them forever?

Shifting to Chile

Building on her collaborative research with Hynek at Clayton Valley in Nevada, Munk and her colleagues pursued a research effort on the lithium brines of South America, in particular Chile. The highest-concentration lithium brine is produced from a large salt flat called Salar de Atacama, located in the vast Atacama Desert, which runs 600 miles along the Pacific coast from Peru to northern Chile. A feature in National Geographic calls the Atacama "the driest place on earth", where some locations haven't received any rain since the beginning of recorded time. Wikipedia reports that this salar contains 27 percent of the world's lithium reserves. Lithium is the lightest metal found in the Earth and the third element on the Periodic Table. The Chile location is being worked by Rockwood Lithium and SQM, chemical companies extracting lithium from the salty, subterranean brines.
A video on the Rockwood Lithium website reports that over the last few years it has been pushing its facility to capacity to meet ever-increasing world demand for lithium. Here, wells pump the brines to the surface where they fill large, shallow evaporation ponds. According to NASA, the dry and windy desert climate enhances evaporation of the water, leaving concentrated salts behind perfect for extracting lithium. This is a more modern, more environmentally friendly way to extract the metal than traditional mining methods. Second grant focuses the sustainability question Rockwood Lithium is providing two years of financial support to cover research work both at Clayton Valley and Salar de Atacama. Munk and her collaborators visited Atacama last January and again in March. Jochens, who graduates this fall and then begins an interdisciplinary master’s degree under Munk, went along. They will return this September for additional fieldwork. Munk hopes her scientific work will help industry develop a global model for locating lithium deposits accurately and effectively, and answer the question of how sustainable the lithium brines are. Already her work has turned up details about lithium brines that her sponsoring company didn’t realize. While under contract, her findings are proprietary. But after she completes the contract, she’ll be able to publish her findings in scientific journals on behalf of UAA. Undergraduate research opportunities Munk is excited to be working on a problem so essential to our way of life. “We divvy up research work into ‘basic’ and ‘applied,’” she says. “I really like the applied side of things. It’s very satisfying for me to see an immediate result. It always ends up positive, even if we discover things that don’t necessarily help the business. They end up with an expanded knowledge base that’s helpful going forward.” But another favorite aspect is the opportunity to work with undergraduates, like Jochens, bringing them into the world of science research. “I could do science anywhere, really,” Munk says. “But you only get to work with students at a university. Here, I am able to provide research opportunities for undergraduates.” Munk smiles when she thinks back to her own undergraduate years. She had always been friendly to science, but which one to choose? As a college freshman at 18 and just two weeks into “Introduction to Geology,” she knew she’d found it. “I like to solve problems about the Earth through the tools of chemistry and geology,” she said. “You get to do it outside,” an important point for a farm girl who grew up out of doors, “and you get to use modern technology to address pressing problems.” She, too, had been the happy recipient of an undergraduate research opportunity at St. Norbert College in Wisconsin. “It’s what got me into graduate school,” Munk says. “It just enhances an education so much.”
Delphi Win32 - Building Database Application - P6
In this video, we will start our discussion of SQL; specifically, we will talk about querying data with SQL. We will learn to query data using very simple SQL commands built on the basic query statement.

Delphi Win32 - Building Database Application - P5
There are many searching methods we can use with ClientDataset; examples are Locate, Lookup, FindKey and FindNearest. Locate finds the record and moves the cursor if the specified record is found, but Lookup does not. FindKey and FindNearest are searching methods that use an index. FindKey is used to find data and move the cursor if the record is found. FindNearest moves the cursor to the record that most closely matches a specified set of key values.

Delphi Win32 - Building Database Application - P4
Insert, Edit, and Delete are the basic operations for modifying a dataset. Although they can be performed very easily with user interface components like DBGrid or DBNavigator, as a Delphi programmer you have to know how to do them at run time. This video covers how to modify data inside a TClientDataset at runtime.

Delphi Win32 - Building Database Application - P3
Fields are the elements of a dataset. In real-world applications, you will work intensively with fields. The basic field type is the TField object, and it has different descendants for different data types, for example TFloatField, TIntegerField, TStringField, etc.

Delphi Win32 - Building Database Application - P2
There are many aspects to developing database applications with Delphi. One of them is how to manage and process data in local memory. Delphi gives us an alternative way to do that with a descendant of the Dataset component called ClientDataset.

Delphi Win32 - Building Database Application - P1
This video is the first part of a video set about Building Database Applications with Delphi, Firebird and dbExpress. In this video we will talk about database applications with Delphi in general.

Lazarus Beginner - P2
This video covers the basic Free Pascal program skeleton, including basic data types, constants and variables, one- and multi-dimensional arrays, and looping with for, while and repeat...until statements.

Delphi Win32 - Delphi for Beginner - P10 - Final
Delphi has many ways to access databases: BDE, ADO, IBExpress, and dbExpress. This video covers the basic usage of dbExpress components to access an InterBase database.

Delphi Win32 - Delphi for Beginner - P9
Most modern programming languages support object-oriented programming (OOP). OOP languages are based on three fundamental concepts: encapsulation, inheritance, and polymorphism.

Lazarus Beginner - P1
Lazarus is an open source, multi-platform, Delphi-like IDE and RAD tool built with the Free Pascal compiler. This series of Lazarus tutorial videos for beginners covers the fundamental elements of application development with Lazarus and Free Pascal.

Delphi Win32 - Delphi for Beginner - P8
This video tutorial is perfect for beginner Delphi programmers and covers the fundamental elements of application development with Delphi. An array is a collection of data of the same type; internally it resides in contiguous memory.

Delphi Win32 - Delphi for Beginner - P7
Delphi is one of the most used development tools for desktop applications. This video tutorial is perfect for beginner Delphi programmers and covers the fundamental elements of application development with Delphi. Units are individual source code modules in Delphi.
Delphi Win32 - Delphi for Beginner - P6
This chapter talks about branching and looping, the most important structures in programming, including if-then-else, case, for, repeat...until, and while.
I'm revising for my exam and there's this one problem where I can't get the correct answer, so I'd like to know how to solve it. The problem is: Two boys, each with a mass of 30kg, are sitting on the ends of a horizontal girder which is 2.5m long and has a mass of 15 kg. The girder rotates at 7 turns/minute around a vertical axis through its center. a) What is the angular speed if each boy moves forward 29 cm? b) What is the change in kinetic energy? (I added the homework tag, but it's not like I have to solve this, I just don't know how to.)
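A sketch of the standard approach (assuming "moves forward 29 cm" means each boy moves 29 cm toward the axis, and modeling the boys as point masses on a uniform rod): no external torque acts about the vertical axis, so angular momentum is conserved,

$$I_1\omega_1 = I_2\omega_2, \qquad I = 2mr^2 + \tfrac{1}{12}ML^2,$$

with $m = 30\,\mathrm{kg}$, $M = 15\,\mathrm{kg}$, $L = 2.5\,\mathrm{m}$, $r_1 = 1.25\,\mathrm{m}$ and $r_2 = 0.96\,\mathrm{m}$. Numerically, $I_1 \approx 101.6\,\mathrm{kg\,m^2}$, $I_2 \approx 63.1\,\mathrm{kg\,m^2}$, and $\omega_1 = 7 \cdot 2\pi/60 \approx 0.733\,\mathrm{rad/s}$, so

$$\omega_2 = \frac{I_1}{I_2}\,\omega_1 \approx 1.18\,\mathrm{rad/s} \approx 11.3\ \text{turns/minute}.$$

For part (b), $\Delta K = \tfrac{1}{2}I_2\omega_2^2 - \tfrac{1}{2}I_1\omega_1^2 \approx 44\,\mathrm{J} - 27\,\mathrm{J} \approx +17\,\mathrm{J}$; the kinetic energy increases because the boys do work as they move toward the axis.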
A model of the geodynamics between the reversals of Earth's magnetic field. Image courtesy of G.A. Glatzmaier/Los Alamos National Lab (via Wikimedia Commons)

A supercomputer simulation shows magnetic field lines continuously generated and distorted by the Earth's liquid metal core. Partially mimicking this process, a rotating tank of liquid sodium can amplify the field, the first step in a two-step process for generating a self-sustaining magnetic field.
Microscopic plants, using volcanic ash for dating
24 April 2012

This week in the Planet Earth Podcast: we take a closer look at tiny marine plants, which underpin the entire marine food chain and play a vital role in the Earth's climate. Also, how scientists are using volcanic ash, called tephra, to tell how people may have responded to rapid environmental changes in the recent past.

TV nature programmes have done a brilliant job of showing us the majesty of the world's rainforests. Not only do they harbour a vast diversity of plants and animals, but they also absorb huge amounts of carbon dioxide and so play a role in keeping our climate cool. But how many people have heard of phytoplankton? Phyto what, I hear you say? Unfortunately, they're not quite as well-known as rainforests, except maybe as Sheldon Plankton in the television cartoon SpongeBob SquarePants. But it turns out that while phytoplankton represents just two per cent of all plant matter on Earth, it accounts for half of all carbon dioxide absorption from the atmosphere. Richard Hollingham meets plankton expert Katy Owen at the UK's most easterly point, Lowestoft, to find out more. A full text transcript is available.

Later, Sue Nelson gets up close and personal with volcanic ash from an Italian island called Lipari. Victoria Cullen and Christine Lane from the University of Oxford explain how they're using traces of ash to investigate how our ancestors coped with rapid changes in climate during the last 80,000 years. If there's a subject you'd like to hear about in the Planet Earth Podcast, don't forget to let us know. Email your ideas to firstname.lastname@example.org or if you're on Facebook or Twitter, contact us there.
In this second excerpt from my book Out of Nature: Why Drugs From Plants Matter to the Future of Humanity (University of Arizona Press, 2012), I have selected a passage from chapter 8, which discusses the importance of economic and public investment in biodiversity conservation. Part 1 of this series can be viewed here. -- Kara Rogers International and national legislation can go a long way toward changing the world. The ultimate success of enacting eco-friendly regulations, however, depends very much on the voices of individuals in communities. Excellent models for illustrating the far-reaching impacts of the collective action of citizens are smoking bans. In the latter part of the twentieth century, as the negative health effects of secondhand smoke were exposed, public health concerns over smoking in workplaces and public places increased dramatically. Drunk driving is illegal because it places the welfare of others at stake, and so it is with smoking in public places. Cardiovascular diseases and respiratory conditions such as lung cancer caused by secondhand smoke are preventable, and in places that have implemented smoking bans, prevention is paying off. Towns and cities where bans exist have experienced significant decreases in the incidence of acute heart attacks. On average, the incidence of heart attacks in the places investigated decreased by 15 percent in the first year of the ban relative to years before the laws were enforced. Three years into the bans, a 36 percent decrease in incidence of heart attacks was detected. In the long run, smoking bans will save the world trillions of dollars in preventable health-care expenses. If people were as passionate about the welfare of the environment and other species as they have been about the right to breathing smoke-free air in public spaces, the conservation of biodiversity hotspots would be in much better shape than it is currently. The magnitude of what we could accomplish by initiating conservation efforts on local levels, which feed into conservation trends on national and international levels, is enormous. The difference between environmental conservation and smoking bans, however, lies with the fact that all of us, in one way or another, contribute to the activities that are undermining the survival of species and ecosystems. As a result, the collective voice of individuals in communities is far quieter when it comes to supporting conservation than it is when it comes to lobbying on behalf of smoking bans. People generally are not willing to make lifestyle sacrifices to save little-known species like Penland beardtongue. These days, skepticism about the climate and environment has become a worrisome threat to biodiversity conservation. But the evidence exists -- increasing numbers of studies have concluded that the world is warming, its glaciers are melting, and its biodiversity is decreasing. Still, many people would rather point fingers or deny the situation rather than take responsibility for their actions. Within the public sphere, understanding of the scope of the issues faced by conservation is tangential. People may read about biodiversity in popular science magazines or other media, and they recycle and take reusable shopping bags to the grocery store. But unlike smoking, where there was broad recognition of its adverse effects on human health, the impact of biodiversity deterioration on our well-being is recognized by comparatively few. 
The lack of knowledge and public concern about what is happening to life on Earth as a result of our activities causes conservation efforts to limp along. There are also major hindrances to prioritizing biodiverse areas for conservation. Examples include determining the size of land area that must be set aside, which generally must be very large to ensure that ecosystems can maintain their functions, and determining the value of these places in economic terms. Ecosystem services historically have been left out of economy and policy discussions because measuring their monetary value with any remote degree of accuracy was too difficult. In the past, many ecosystem services could exist outside the frame of economics, since they were so prevalent and their exhaustion through human activities was perceived as unlikely. But things are different now. There are so many people in the world, and the population is growing so quickly, that many ecosystem services, in order to last and sustain humanity, must be given economic meaning. From Out of Nature: Why Drugs from Plants Matter to the Future of Humanity by Kara Rogers © 2012 The Arizona Board of Regents. Reprinted by permission of the University of Arizona Press. Learn more about Out of Nature at the book's web site. Read a review here (PDF).
NO and NO2 are also closely related, interconverting in the daytime on a time scale of a few minutes. The concentration of NO + NO2 (denoted NOx) controls the oxidative reaction chemistry, and in particular the sign and amount of ozone production during hydrocarbon oxidation in atmospheres of high, low, and very low NOx (Crutzen, 1979). Representative sequences for CO oxidation (the analogous sequences for methane and for very low NOx conditions are given by Crutzen) are:

In high-NOx air (net ozone production):
  CO + OH -> CO2 + H
  H + O2 + M -> HO2 + M
  HO2 + NO -> OH + NO2
  NO2 + hv -> NO + O
  O + O2 + M -> O3 + M
  Net: CO + 2 O2 -> CO2 + O3

In low-NOx air (net ozone destruction):
  CO + OH -> CO2 + H
  H + O2 + M -> HO2 + M
  HO2 + O3 -> OH + 2 O2
  Net: CO + O3 -> CO2 + O2

The single most important criterion in the control of ozone production is the concentration of NOx. Ozone is of well-known significance as an atmospheric pollutant and also because of its role in the generation of other key species such as OH and HO2 (Crutzen 1979, Logan 1983). At the present time, the boundary layer over many parts of the continents, and most parts of the ocean, is in the low-NOx regime. Ozone, produced in continental regions where sufficient NOx and hydrocarbons are emitted into the air, is destroyed in rural regions and over the ocean. To understand the impact of anthropogenic input into the atmosphere, it is vital that the concentrations of these gases be well measured, and that their role in continental and marine photochemistry, along with the impact of ozone generation and destruction over the vast regions of the marine atmosphere, be understood. As reported by J. Logan (1981), "The potential yield of ozone from oxidation of CO and methane could be as large as 8x10 molecules/cm/s if an adequate supply of NO were available. This source would suffice to double the concentration of tropospheric ozone in about 2 weeks."

The instrumentation built by AOML/OCD for the measurement of the most significant compounds is listed below.

| GAS | METHOD OF ANALYSIS | DETECTION LIMIT |
| nitric oxide (NO) | chemiluminescence | 5 ppt |
| nitrogen dioxide (NO2) | chemiluminescence | 7 ppt |
| sum of active N-oxides (NOy) | chemiluminescence | 5 ppt |
| peroxyacetyl nitrate (PAN) | gas chromatography | 1 ppt |

The instrumentation is capable of measuring the above compounds at the levels expected for the near-surface remote marine boundary layer. In addition, a number of related chemical and meteorological measurements must be made in order to separate meteorological from chemical influences on the measured air concentrations. Most significant are the measurements of ozone (O3) and carbon monoxide (CO); this instrumentation is also regularly used in conjunction with the N-oxides instrumentation. A meteorological data acquisition system (ADAS) has been built, which records temperature (wet and dry bulb), air speed and direction, and UV light intensity. A system for launching and receiving data from rawinsondes (weather balloons) is regularly employed. The task is a component of NOAA's Radiatively-Important Trace Species (RITS) program. Additional experimental venues are being planned.

1993 North Atlantic Cruise

In 1993, AOML/OCD organized a cruise in the North Atlantic (Iceland to Miami). In the accompanying graphic, the concentration of nitrogen dioxide as measured aboard the R/V Malcolm Baldrige during the cruise is shown (Carsey, et al., 1994b). NO2 concentrations averaged 7.7 pptv (range 1 to 33 pptv). Rapid elevations of N-oxides (and other species) were observed at the onset of continental plumes on 6-Sept, 10-Sept, and 12-Sept. A complete description of the cruise data, including cruise information, sampling and analytical descriptions, and graphical and tabular data lists, is given in a data report (Carsey et al., 1995b).
Chemical and aerosol results are currently being prepared for additional publications. In a related study, the concentration of PAN (peroxyacetyl nitrate), an important reservoir of nitrogen oxides, was found in the eastern North Atlantic to be a good indicator of continental (European) air masses, with a distinctive diel cycle indicating active photochemistry underway in the air mass [Gallagher, et al., 1992].

1994 NO/NO2/NOy Instrumentation Intercomparison

AOML/OCD co-organized and participated in an intercomparison of instrumentation for the measurement of nitrogen oxides in air (NO, NO2, and NOy) at Harvard Forest, Mass. The results, distributed to all participants, indicated good agreement on most measurement parameters.

1995 Atlantic / Indian Ocean Cruise

AOML/OCD co-organized and participated in a research cruise in the South Atlantic and western Indian Ocean during 1995 on the NOAA ship MALCOLM BALDRIGE. The cruise track traversed five major wind and chemical regimes: North Atlantic northeasterly trade winds, South Atlantic southeasterly trade winds, polar westerlies, South Indian southeast trades, and Indian northeast monsoons. The cruise track was designed to obtain a broad view of the photochemical environment in the south and central Indian Ocean prior to monsoon development, as well as in the equatorial and South Atlantic Ocean regions while biomass burning is at a minimum. Along segments of this cruise track, measurements of nitrogen oxides, ozone, carbon dioxide, aerosols and other pertinent species were obtained, as well as associated meteorological and oceanographic data. The results have been recently published (Dickerson 1996, Rhoads 1997).

1995 ACE-1 Cruise

AOML/OCD participated in the Aerosol Characterization Experiment (ACE-1) during the fall of 1995, on the NOAA ship DISCOVERER. The experiment included a transit cruise, from Seattle, WA, to Hobart, Tasmania, and the ACE-1 cruise proper, in the southwestern Pacific south and west of Tasmania. The experiment also involved other ships, aircraft, and a ground station (Cape Grim, Tasmania). During the cruise, AOML/OCD measured nitric oxide, nitrogen dioxide, and NOy. Some measurement results are shown here. The data have been transmitted to the Codiac data distribution center. Results are currently being written up for publication.

1999 Indian Ocean Experiment

The equatorial Indian Ocean during the northeast winter monsoon season is a unique natural laboratory to study how air pollution affects climate processes over the ocean. It may be the only place in the world where an intense source of continental aerosols, anthropogenic trace species and their reaction products (e.g., sulfates and ozone) from the northern hemisphere is directly connected to the pristine air of the southern hemisphere by a cross-equatorial monsoonal flow into the intertropical convergence zone (ITCZ). Asia and the Indian subcontinent, which together have a population of over 2 billion people, emit large quantities of pollutants that can be carried to the Indian Ocean during the northern hemisphere winter by monsoon winds from the northeast. The Indian Ocean Experiment (INDOEX) was designed to investigate how these pollutants are transported through the atmosphere and how they affect the atmospheric composition and solar radiation processes over the ocean. Approximately 150 scientists conducted the field experiment from 21 February through 2 April 1999.
Measurements were made from four aircraft (the U.S. C-130, the Geophysica (http://ape.iroe.fi.cnr.it), the French Mystere, and the Dutch Citation); an Indian research vessel (Sagar Kanya); a U.S. research vessel (NOAA R/V Ron Brown); and several ground stations: Kaashidhoo Climate Observatory, Maldives (http://www-indoex.ucsd.edu/KCO.html), Mauritius University, Pune, India, Trivandrum, India, Mt. Abu, India, and Tromelin Island, Reunion. Information was also obtained from operational and research satellites, 4-D high-resolution analyzed fields, and global climate models. The Ron Brown's cruise track included time both south and north of the ITCZ, as well as intercomparison experiments with ground stations and aircraft where possible. Early findings of a dense pollution haze layer derived from distant continental sources suggest that the pollution events observed in INDOEX may be symptomatic of large-scale pollution transport that may be occurring in other regions of the earth. The project has received considerable press coverage here (N.Y. Times Nat'l., 10-June-1999, p. A23) and abroad (Times of India, 24-June-1999).

Preliminary results show that air pollutants dramatically impact this region. A dense brownish pollution haze extending from the ocean surface to 1 to 3 km altitude was found (see photo). The haze layer covered much of the research area almost constantly during the 6-week intensive experiment. The affected area includes most of the northern Indian Ocean, including the Arabian Sea, much of the Bay of Bengal, and the equatorial Indian Ocean to about 5 degrees south of the equator. The haze is caused by high concentrations of sub-micron aerosols composed of soot, sulfates, nitrates, organic particles, fly ash and mineral dust. The haze layer also contains relatively high concentrations of gases including carbon monoxide, various organic compounds, and sulfur dioxide. Visibility over the open ocean was often under 10 km, a range that is typically found near polluted source regions of the United States and Europe. The aerosols reduce the solar radiation absorbed by the ocean surface by as much as 10%. Cloud formation is also affected because water vapor condenses on the pollution particles. Information on the INDOEX project can be found at http://www-indoex.ucsd.edu.

AOML-derived references to date:

T. P. Carsey, "Shipboard measurements of active nitrogen gases during INDOEX." Presented at the 1999 Fall Meeting of the American Geophysical Union, San Francisco, California, December, 1999.

B. G. Doddridge, W. T. Luke, C. A. Piety, R. R. Dickerson, A. M. Thompson, J. C. Witte, J. E. Johnson, T. S. Bates, P. K. Quinn, T. P. Carsey, "Trace gas and aerosols over the Atlantic during the ACE-Aerosols Cruise." Presented at the 1999 Fall Meeting of the American Geophysical Union, San Francisco, California, December, 1999.

T. P. Carsey, D. D. Churchill, M. L. Farmer, C. J. Fischer, A. A. Pszenny, V. B. Ross, E. S. Saltzman, M. Springer-Young, and B. Bonsang. Nitrogen Oxides and Ozone Production in the North Atlantic Marine Boundary Layer. J. Geophys. Res. 102, 10653-10665, 1997.

Tropical North Pacific (Marine Inorganic Halogens) Experiment, 1999

A significant observation from the pre-INDOEX cruise in 1995 was a large diel variation (~32%) in ozone concentration. Simulation of these results was attempted with MOCCA, a photochemical box model with detailed aerosol chemistry (Sander and Crutzen, 1996).
The model was constrained with photolysis rates, humidity, aerosol concentrations, NO, CO, and O3 specified by shipboard observations and ozonesonde data. Conventional homogeneous chemistry, wherein ozone photolysis to O(1D) and subsequent HOx chemistry dominates ozone destruction (as described above), was able to account for only about one third of the observed diel variation. Inclusion of bromine (Br) chemistry (Sander and Crutzen, 1996; Vogt et al., 1996) provided an additional photochemical ozone sink and improved the simulation considerably. These results suggested that bromine plays an important role in photochemistry in parts of the marine boundary layer and imply that the marine atmosphere may represent a stronger natural ozone sink than previously assumed. In addition, chlorine may also be released from moist seasalt aerosols in forms that can be easily photolyzed to yield Cl atoms (Keene et al., 1990). The Cl atom is an extremely powerful oxidant. It reacts with methane about 10 times faster than does OH, with dimethylsulfide about 20 times faster, and with some nonmethane hydrocarbons as much as 200 times faster. Thus, Cl chemistry may have a significant impact on the lifetimes of several trace gases that play key roles in atmospheric chemistry and climate (Keene et al., 1993). The field program was conducted at a shorefront sampling station on the windward side of Oahu, Hawaii, now maintained by Prof. Barry Huebert and his group at the University of Hawaii. AOML-OCD measured ozone and nitrogen oxides (NO, NO2, NOy) during the experiment. Data from the experiment are still being evaluated; however, a preliminary presentation of the diel variation in ozone concentration is shown in the figure. These data were consistent with other gas-phase measurements obtained during the experiment.

References:

Carsey, T. P., and M. L. Farmer, 1992. Active nitrogen gases in the North Atlantic boundary layer during ASTEX. Presented at the Fall 1992 meeting of the American Geophysical Union, December 7, 1992 (Eos 73, 84, 1992).

Carsey, T. P., M. L. Farmer, C. J. Fischer, A. Mendez, A. A. Pszenny, V. Ross III, P.-Y. Whung, M. Springer-Young, and M. P. Zetwo, 1994a. Atmospheric Chemistry Measurements from the 1992 ASTEX/MAGE Cruise, 30-May-1992 through 21-July-1992, Cruise Number 91-126. NOAA Data Report ERL AOML-26.

Carsey, T. P., M. L. Farmer, V. Ross III, M. Springer-Young, and M. Zetwo, 1994b. Significant Trace Species in the Boundary Layer of the North Atlantic During September, 1993. Presented at the Fall 1994 AGU meeting in San Francisco, CA, 5 Dec 1994 (Eos 75, 95, 1994).

Carsey, T. P., D. D. Churchill, M. L. Farmer, C. J. Fischer, A. A. Pszenny, V. B. Ross, E. S. Saltzman, M. Springer-Young, and B. Bonsang, 1995a. Nitrogen oxides and ozone production in the North Atlantic Marine Boundary Layer. J. Geophys. Res. (in review).

Carsey, T. P., M. L. Farmer, C. J. Fischer, A. Mendez, V. B. Ross, and M. Springer-Young, 1995b. Atmospheric Chemistry Measurements during Leg 4 (RITS) of the 1993 North Atlantic Cruise. NOAA Data Report (in press).

Crutzen, P., 1979. The Role of NO and NO2 in the Chemistry of the Troposphere and Stratosphere. Ann. Rev. Earth Planet. Sci. 7, 443-472.

Gallagher, M. S., T. P. Carsey, and M. L. Farmer, 1990. Peroxyacetyl nitrate in the North Atlantic. Global Biogeochem. Cyc. 4, 297-308.

Huebert, B. J., A. Pszenny, and B. Blomquist, 1994. The ASTEX/MAGE experiment. J. Geophys. Res. (submitted).
Tropospheric Chemistry: A Global Perspective. J. Geophys. Res. 86, 7210-7254.
Logan, J., 1983. Nitrogen Oxides in the Troposphere: Global and Regional Budgets. J. Geophys. Res. 88, 10785-10807.
McFarland, M., D. Kley, J. W. Drummond, A. L. Schmeltekopf, and R. H. Winkler, 1979. Nitric Oxide Measurements in the Equatorial Pacific Region. Geophys. Res. Lett. 6, 605-607.
Pszenny, A. A., T. P. Carsey, P. Y. Whung, M. P. Zetwo, M. L. Farmer, and C. J. Fischer, 1994. Measurements of various chemical concentrations in the marine boundary layer during the 1992 ASTEX/MAGE experiment. Presented at the 1994 AGU Spring Meeting, May 25, 1994 (Eos 75, 89, 1994).
Torres, A. L., and A. M. Thompson, 1993. Nitric Oxide in the Equatorial Pacific Boundary Layer: SAGA 3 Measurements. J. Geophys. Res. 98, 16949-16954.
R. Dickerson, P. Kelley, K. Rhodes, T. Carsey, M. Farmer, and P. Crutzen. "Measurement of reactive nitrogen compounds over the Indian Ocean." Presented at the meeting of the American Chemical Society, New Orleans, LA, March, 1996 (Chem. Eng. News 74, 84, 1996).
K. Rhoads, P. Kelley, R. Dickerson, T. Carsey, M. Farmer, D. Savoie, and J. Prospero, "Composition of the troposphere over the Indian Ocean during the monsoonal transition," J. Geophys. Res. 102, 18981-18995, 1997.
<urn:uuid:b180aa0d-7451-4301-ab1f-9929b0af31bd>
3.359375
3,766
Knowledge Article
Science & Tech.
55.18332
The atomic nucleus is built from two major kinds of particles: protons and neutrons. A proton carries one unit of positive charge, which balances the negative charge on an electron. The neutron is uncharged. The standard unit for measuring masses of atoms is the atomic mass unit (amu), defined such that the most common kind of carbon atom weighs exactly 12 amu. On this scale, a proton has a mass of 1.00728 amu and is slightly lighter than a neutron, which has a mass of 1.00867 amu. Protons and neutrons usually are thought of as having unit masses (1 amu) unless exact calculations are called for. On this scale, an electron weighs only 0.00055 amu. The charge and mass relationships between these three fundamental particles are summarized in the table to the left.
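As a quick check on these numbers, here is a small Python sketch (using the amu values quoted above) that totals the constituent masses of a carbon-12 atom. The sum comes out slightly above the defined 12 amu because a naive sum ignores the nuclear binding energy:

    # Masses in atomic mass units (amu), as quoted above
    M_PROTON, M_NEUTRON, M_ELECTRON = 1.00728, 1.00867, 0.00055

    def naive_atomic_mass(protons, neutrons, electrons):
        """Sum of constituent masses, ignoring nuclear binding energy."""
        return protons * M_PROTON + neutrons * M_NEUTRON + electrons * M_ELECTRON

    # Carbon-12: 6 protons, 6 neutrons, 6 electrons
    print("%.4f amu (defined value: 12 amu exactly)" % naive_atomic_mass(6, 6, 6))
    # Prints about 12.099 amu; the ~0.1 amu difference corresponds to binding energy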
<urn:uuid:15732217-6f6b-466e-b42d-03d015bdc5b0>
4.4375
185
Knowledge Article
Science & Tech.
58.731649
Hi, I have a very basic question. We cannot create an instance of an abstract class. E.g., the Calendar class of the util pkg is an abstract class, so if we try to create an instance of it using the new operator, it gives an error. However, we can call the getInstance() method to get an instance of the Calendar class. Can anyone explain to me what exactly happens when we call the getInstance() method?? TIA Grishma
JavaBeginnersFaq
"Yesterday is history, tomorrow is a mystery, and today is a gift; that's why they call it the present." Eleanor Roosevelt
Joined: Dec 10, 2001
Note that GregorianCalendar is a subclass of Calendar, and that getInstance() actually constructs and returns an instance of such a subclass rather than of Calendar itself. This is an example of Polymorphism in the Java 2 Standard Edition API. Another example can be found with the public void paint(Graphics g) method that many windowing components have.
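What getInstance() does under the hood is act as a static factory method: code inside the abstract class news up a concrete subclass (such as GregorianCalendar, chosen based on locale) and returns it typed as the abstract parent. Here is the same pattern sketched in Python with made-up class names, not the real java.util source:

    from abc import ABC, abstractmethod

    class AbstractCalendar(ABC):          # stand-in for java.util.Calendar
        @abstractmethod
        def get_time(self): ...

        @staticmethod
        def get_instance():
            # The abstract class itself is never instantiated; a concrete
            # subclass is chosen and returned, typed as the parent.
            return GregorianishCalendar()

    class GregorianishCalendar(AbstractCalendar):   # stand-in for GregorianCalendar
        def get_time(self):
            return "now"

    cal = AbstractCalendar.get_instance()   # works, though AbstractCalendar() would fail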
<urn:uuid:f17a3492-0c3d-4ea6-91ff-fb82a53cd833>
2.859375
181
Q&A Forum
Software Dev.
56.547949
The Gesellschaft für Schwerionenforschung mbH (GSI), or Association for Heavy Ion Research, is a research facility where scientific researchers work with heavy ions in a wide range of experiments to explore the structure of matter. GSI is a particle accelerator facility where ions are accelerated up to 90% of the speed of light. Among the accomplishments at GSI, the elements of atomic number 107 through 112 were discovered there: Bohrium, Hassium, Meitnerium, Darmstadtium, Roentgenium and Ununbium. Another major accomplishment is the use of heavy ions to treat cancer. In the United States, ionic cancer treatment is primarily done by bombarding a patient's tumor with protons. The heavy ion accelerator, as the name suggests, accelerates nuclei of heavier elements; for cancer treatment, carbon is used. These carbon nuclei are particularly adept at destroying tissue, yet can be made to deliver that destruction at a precise point. The diagram below shows the higher energy released from carbon ions. When carbon atoms penetrate the patient's skull, they pass through the brain tissue, but when they reach a specified depth, they irradiate the tissue. This means that the bone, tissue, and everything between the environment and the patient's tumor is virtually untouched, but the tumor is destroyed. This specialized radiation beam is created in the huge GSI complex. Although the carbon nuclei treatment has obvious advantages, including damaging less healthy tissue and destroying tumors more effectively than a proton beam, the technique is not used in the US. A problem is that creating a heavy ion beam is more difficult than creating a proton beam, and the only current place for treatment is at GSI, near Darmstadt, Germany. Patients often bike to their daily painless radiation treatment, a course that normally lasts a bit less than a month. Currently a smaller heavy ion beam, still capable of penetrating to any depth within a human body, is being built in Frankfurt, Germany as a dedicated medical facility. Other centers are planned throughout Europe to precision-treat cancerous tumors. European funding for public facilities and research projects has been surging in recent years. This is particularly evident in higher education, where Germany has funded a total of 1.9 billion Euros known as the "excellence initiative", under which young scientists and PhD students receive one million Euros each at certain universities.
The above photos (click to enlarge) from left to right: (1) The control room at GSI oversees the UNILAC linear accelerator and synchrotron. (2) The yellow section is the acceleration phase of the synchrotron, where millions of volts accelerate ions. (3) The red section is the steering phase, where the ions are precisely turned using very strong magnetic fields produced by the huge wire coils. (4) A research sensor array for detecting scattered ions and atoms.
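The depth-dose behaviour described above, where tissue along the way is largely spared while the dose spikes at a chosen depth, is known as the Bragg peak. The following deliberately simplified numerical sketch (not real beam physics: the 1/E stopping-power dependence and all constants are toy assumptions) shows why a slowing ion deposits most of its energy at the end of its range:

    # Toy Bragg-peak illustration: assume stopping power rises as the ion
    # slows down (dE/dx ~ k/E). All units and constants here are arbitrary.
    energy = 100.0   # initial ion energy (toy units)
    k = 50.0         # toy stopping-power constant
    dx = 0.01        # slab thickness
    x = 0.0
    profile = []     # (depth, dose deposited per unit depth)
    while energy > 1.0:                    # stop before the toy 1/E law diverges
        dE = min(energy, (k / energy) * dx)
        energy -= dE
        x += dx
        profile.append((x, dE / dx))
    print("dose near entry: %.2f" % profile[0][1])                   # small
    print("dose at end of range (depth %.2f): %.2f" % profile[-1])   # large: the Bragg peak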
<urn:uuid:97b45c15-ff39-4d47-b4c1-10e9af4b6ee8>
3.390625
596
Knowledge Article
Science & Tech.
38.661522
In statistics, Bayesian linear regression is an approach to linear regression in which the statistical analysis is undertaken within the context of Bayesian inference. When the regression model has errors that have a normal distribution, and if a particular form of prior distribution is assumed, explicit results are available for the posterior probability distributions of the model's parameters.
Model setup
Consider the standard linear regression model y = Xβ + ε, where the errors are independent and normally distributed, ε ~ N(0, σ²I). This corresponds to the following likelihood function:
ρ(y | X, β, σ²) ∝ (σ²)^(−n/2) exp(−(1/(2σ²)) (y − Xβ)ᵀ(y − Xβ)),
where X is the n×k design matrix, each row of which is a predictor vector xᵢᵀ; and y is the column n-vector [y₁, ..., yₙ]ᵀ. This is a frequentist approach, and it assumes that there are enough measurements to say something meaningful about β. In the Bayesian approach, the data are supplemented with additional information in the form of a prior probability distribution. The prior belief about the parameters is combined with the data's likelihood function according to Bayes theorem to yield the posterior belief about the parameters β and σ². The prior can take different functional forms depending on the domain and the information that is available a priori.
With conjugate priors
Conjugate prior distribution
For an arbitrary prior distribution, there may be no analytical solution for the posterior distribution. In this section, we will consider a so-called conjugate prior for which the posterior distribution can be derived analytically. A prior ρ(β, σ²) is conjugate to this likelihood function if it has the same functional form with respect to β and σ². Since the log-likelihood is quadratic in β, the log-likelihood is re-written such that the likelihood becomes normal in (β − β̂). Write
(y − Xβ)ᵀ(y − Xβ) = (y − Xβ̂)ᵀ(y − Xβ̂) + (β − β̂)ᵀ(XᵀX)(β − β̂),
where β̂ = (XᵀX)⁻¹Xᵀy is the least squares estimate. The likelihood is now re-written as
ρ(y | X, β, σ²) ∝ (σ²)^(−v/2) exp(−vs²/(2σ²)) (σ²)^(−(n−v)/2) exp(−(1/(2σ²)) (β − β̂)ᵀ(XᵀX)(β − β̂)),
where vs² = (y − Xβ̂)ᵀ(y − Xβ̂) and v = n − k, with k the number of regression coefficients. This suggests a form for the prior:
ρ(β, σ²) = ρ(σ²) ρ(β | σ²),
where ρ(σ²) is an inverse-gamma distribution
ρ(σ²) ∝ (σ²)^(−(v₀/2 + 1)) exp(−v₀s₀²/(2σ²)).
In the notation introduced in the inverse-gamma distribution article, this is the density of an Inv-Gamma(a₀, b₀) distribution with a₀ = v₀/2 and b₀ = ½v₀s₀², with v₀ and s₀² as the prior values of v and s², respectively. Equivalently, it can also be described as a scaled inverse chi-squared distribution, Scale-inv-χ²(v₀, s₀²). Further the conditional prior density ρ(β | σ²) is a normal distribution,
ρ(β | σ²) ∝ (σ²)^(−k/2) exp(−(1/(2σ²)) (β − μ₀)ᵀΛ₀(β − μ₀)).
In the notation of the normal distribution, the conditional prior distribution is N(μ₀, σ²Λ₀⁻¹).
Posterior distribution
With the prior now specified, the posterior distribution can be expressed as
ρ(β, σ² | y, X) ∝ ρ(y | X, β, σ²) ρ(β | σ²) ρ(σ²).
With some re-arrangement, the posterior can be re-written so that the posterior mean μₙ of the parameter vector β can be expressed in terms of the least squares estimator β̂ and the prior mean μ₀, with the strength of the prior indicated by the prior precision matrix Λ₀:
μₙ = (XᵀX + Λ₀)⁻¹ (XᵀX β̂ + Λ₀ μ₀).
Therefore the posterior distribution can be parametrized as follows.
ρ(β, σ² | y, X) ∝ ρ(β | σ², y, X) ρ(σ² | y, X),
where the two factors correspond to the densities of N(μₙ, σ²Λₙ⁻¹) and Inv-Gamma(aₙ, bₙ) distributions, with the parameters of these given by
Λₙ = XᵀX + Λ₀,  μₙ = Λₙ⁻¹(Λ₀μ₀ + Xᵀy),  aₙ = a₀ + n/2,  bₙ = b₀ + ½(yᵀy + μ₀ᵀΛ₀μ₀ − μₙᵀΛₙμₙ).
This can be interpreted as Bayesian learning where the parameters are updated according to these equations.
Model evidence
The model evidence p(y | m) is the probability of the data given the model m. It is also known as the marginal likelihood, and as the prior predictive density. Here, the model is defined by the likelihood function ρ(y | X, β, σ²) and the prior distribution on the parameters, i.e. ρ(β, σ²). The model evidence captures in a single number how well such a model explains the observations. The model evidence of the Bayesian linear regression model presented in this section can be used to compare competing linear models by Bayesian model comparison.
These models may differ in the number and values of the predictor variables as well as in their priors on the model parameters. Model complexity is already taken into account by the model evidence, because it marginalizes out the parameters by integrating over all possible values of β and σ². This integral can be computed analytically and the solution is given in the following equation:
p(y | m) = (1 / (2π)^(n/2)) · √(det Λ₀ / det Λₙ) · (b₀^a₀ / bₙ^aₙ) · (Γ(aₙ) / Γ(a₀)).
Here Γ denotes the gamma function. Because we have chosen a conjugate prior, the marginal likelihood can also be easily computed by evaluating the following equality for arbitrary values of β and σ²:
p(y | m) = ρ(y | X, β, σ²) ρ(β, σ² | m) / ρ(β, σ² | y, X, m).
Note that this equation is nothing but a re-arrangement of Bayes theorem. Inserting the formulas for the prior, the likelihood, and the posterior and simplifying the resulting expression leads to the analytic expression given above.
Other cases
In general, it may be impossible or impractical to derive the posterior distribution analytically. However, it is possible to approximate the posterior by an approximate Bayesian inference method such as Monte Carlo sampling or variational Bayes. The special case μ₀ = 0, Λ₀ = cI is called ridge regression. A similar analysis can be performed for the general case of the multivariate regression, and part of this provides for Bayesian estimation of covariance matrices: see Bayesian multivariate linear regression.
Notes
- The intermediate steps are in Fahrmeir et al. (2009) on page 188.
- The intermediate steps of this computation can be found in O'Hagan (1994) on page 257.
- Carlin and Louis (2008) and Gelman et al. (2003) explain how to use sampling methods for Bayesian linear regression.
References
- Box, G.E.P. and Tiao, G.C. (1973). Bayesian Inference in Statistical Analysis. Wiley. ISBN 0-471-57428-7.
- Carlin, Bradley P. and Louis, Thomas A. (2008). Bayesian Methods for Data Analysis, Third Edition. Boca Raton, FL: Chapman and Hall/CRC. ISBN 1-58488-697-8.
- O'Hagan, Anthony (1994). Bayesian Inference. Kendall's Advanced Theory of Statistics 2B (First ed.). Halsted. ISBN 0-340-52922-9.
- Gelman, Andrew, Carlin, John B., Stern, Hal S. and Rubin, Donald B. (2003). Bayesian Data Analysis, Second Edition. Boca Raton, FL: Chapman and Hall/CRC. ISBN 1-58488-388-X.
- Walter, Gero and Augustin, Thomas (2009). Bayesian Linear Regression—Different Conjugate Models and Their (In)Sensitivity to Prior-Data Conflict. Technical Report Number 069, Department of Statistics, University of Munich.
- Goldstein, Michael and Wooff, David (2007). Bayes Linear Statistics, Theory & Methods. Wiley. ISBN 978-0-470-01562-9.
- Fahrmeir, L., Kneib, T., and Lang, S. (2009). Regression. Modelle, Methoden und Anwendungen, Second Edition. Springer, Heidelberg. doi:10.1007/978-3-642-01837-4. ISBN 978-3-642-01836-7.
- Rossi, Peter E., Allenby, Greg M., and McCulloch, Robert (2006). Bayesian Statistics and Marketing. John Wiley & Sons, Ltd.
- Minka, Thomas P. (2001). Bayesian Linear Regression. Microsoft Research web page.
- Bayesian estimation of linear models (R programming wikibook). Bayesian linear regression as implemented in R.
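To tie the update equations above together, here is a minimal numerical sketch in Python/NumPy of the conjugate normal-inverse-gamma update; the prior hyperparameters and toy data are arbitrary choices for illustration:

    import numpy as np

    def posterior_update(X, y, mu0, Lam0, a0, b0):
        """Normal-inverse-gamma conjugate update for Bayesian linear regression.

        Prior: beta | sigma^2 ~ N(mu0, sigma^2 * inv(Lam0)), sigma^2 ~ Inv-Gamma(a0, b0).
        Returns the posterior parameters (mu_n, Lam_n, a_n, b_n).
        """
        n = len(y)
        Lam_n = X.T @ X + Lam0
        mu_n = np.linalg.solve(Lam_n, X.T @ y + Lam0 @ mu0)
        a_n = a0 + n / 2.0
        b_n = b0 + 0.5 * (y @ y + mu0 @ Lam0 @ mu0 - mu_n @ Lam_n @ mu_n)
        return mu_n, Lam_n, a_n, b_n

    # Toy data: y = 1 + 2x + noise
    rng = np.random.default_rng(0)
    X = np.column_stack([np.ones(50), rng.normal(size=50)])
    y = X @ np.array([1.0, 2.0]) + 0.3 * rng.normal(size=50)

    # Weak prior: zero mean, small precision, vague inverse-gamma
    mu_n, Lam_n, a_n, b_n = posterior_update(X, y, np.zeros(2), 1e-3 * np.eye(2), 1.0, 1.0)
    print(mu_n)             # posterior mean of beta; close to the OLS estimate here
    print(b_n / (a_n - 1))  # posterior mean of sigma^2

With this weak prior the posterior mean essentially reproduces least squares; tightening Λ₀ pulls the estimate toward μ₀, which is exactly the shrinkage behaviour that links this model to ridge regression.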
<urn:uuid:6552aa5e-02ef-4e7e-a871-db947331951e>
2.84375
1,479
Knowledge Article
Science & Tech.
43.086571
Barny Wiggin, former Meteorologist-In-Charge at the NWS Office in Buffalo was once quoted as saying that the weather often "clears up stormy" to the lee of the Great Lakes during the winter. In particular, long after the passage of cold fronts across the region, the relatively warm waters of the Great Lakes often create convective instability in an otherwise stable, arctic or polar continental airmass. So while other parts of the northeastern U.S. are clearing up after a recent cold frontal passage, Great Lakes communities wait for the lake effect snow machine to fire up! Basically there are a couple of main ingredients that you need to produce lake effect snow. The first is a relatively warm body of water (aka Great Lake). The second ingredient is a source of cold air. In the Great Lakes Region, that source comes from the high latitudes of North America where arctic airmasses often "spill southward" over those warm bodies of water. Heat and moisture from the warm lakes rises into the "modified" arctic air where it then cools and condenses into snow clouds. The prevailing wind direction through the depth of the snow clouds (third ingredient!!) determines where the snow will occur. Lake Effect Snows describe mesoscale convective snow events that occur in the Great Lakes Region. However, common sense would suggest that these types of snowstorms should occur wherever you get cold air "channeling" across a warm body of water. We have indicated some of the other locations on Earth where these snows occur, including such diverse places as the Great Salt Lake in the U.S., parts of Japan, Korea and Scandinavia to name a few, just click the above maps and see! "Lake Effect" weather does not only occur during the Fall and Winter. The Great Lakes influence the local climate throughout the entire year. There are many positive impacts that the lakes have on the area climate. Winter snows are a boon to the local skiing industry, which boasts some of the best slopes in the east. At other times of the year the moderating effects of the marine climate allow for the cultivation of excellent fruit and vegetable crops, and the cooling effects of the lakes during the summer months provide a natural air conditioner to the region. Lake Effect Snow contributes significantly to the total seasonal snowfall in Western and Central New York. In fact, the higher elevations east of Lake Ontario get over 200 inches of snow annually, making that area the snowiest populated region to the east of the Rocky Mountains!
<urn:uuid:78a33f15-3a75-4dd9-845f-4abf39b369a5>
3.671875
554
Knowledge Article
Science & Tech.
47.130953
Tweaking the PATH environment variable setting for Python (Windows)
Usually, you will use the Python Shell from within IDLE or an IDE, but sometimes you may need to be able to run Python within a terminal shell, e.g. to run setup.py for installing certain Python modules. You may think that installing your Python system takes care of this as well, but it may not. So even if your install went OK and you can run IDLE, etc., you may still not be able to run the Python interpreter executable via the command line, i.e. within an old-fashioned terminal shell such as cmd.exe or the Windows PowerShell. To test this, run cmd.exe, punch in python, and hit Enter - if you get something like the image above shows, you're fine (you'll just get Python 2.6, not 2.4); if it can't find python.exe, read on.
The python.exe in question should be within your Python26 (for Python 2.6) folder, usually C:\Python26. Executing (running) that python.exe file from within any folder requires that Windows "knows" globally in which folder python.exe lives (i.e. you need to point it to that C:\Python26 folder). For this you have to append C:\Python26 to an environment variable called Path (the executable or DOS path), which sounds scary but is actually pretty simple once you know where to look. The Python Windows FAQ (http://docs.python.org/faq/windows.html) describes this adding (appending) of your Python folder a bit down (search for DOS Path) and points to a video describing this tweak (http://showmedo.com/videos/video?name=960000&fromSeriesID=96). Here's more on how to use cmd.exe (http://www.voidspace.org.uk/python/articles/command_line.shtml); search for PATH to get to the part where the Path environment variable is tweaked.
BTW, I've been using the Windows PowerShell instead of cmd.exe, which came standard with Windows 7. It seems to behave more like the Unix shell (bash, sh) that I'm used to (yes, it understands ls!).
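Before editing the Path variable, it can help to confirm where your interpreter actually lives and whether its folder is already on the path. A small sketch that should work in Python 2.6 and later:

    import os
    import sys

    exe_dir = os.path.dirname(sys.executable)   # e.g. C:\Python26
    entries = os.environ.get("PATH", "").split(os.pathsep)
    on_path = any(os.path.normcase(p.strip('"')) == os.path.normcase(exe_dir)
                  for p in entries)
    sys.stdout.write("%s is%s on PATH\n" % (exe_dir, "" if on_path else " NOT"))

If the folder is missing, append it (e.g. C:\Python26) to Path as the FAQ describes; note that changes made through the System Properties dialog only take effect in newly opened shells.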
<urn:uuid:25968ff3-dd38-4e11-a974-5e5c30d9376a>
2.859375
474
Tutorial
Software Dev.
77.011109
|Darkfield photograph of a gastrotrich|
The gastrotrichs (from Greek γαστήρ, gaster ["stomach"], and θρίξ, thrix ["hair"]), often called hairy backs, are a phylum of microscopic (0.06-3.0 mm), pseudocoelomate animals abundant in fresh water and marine environments. Most fresh water species are part of the periphyton and benthos. Marine species are found mostly interstitially in between sediment particles, while terrestrial species live in the water films around grains of soil. Gastrotrichs are bilaterally symmetric, with a transparent body and a flat underside. Many species have a pair of short projections at the posterior end. The body is covered with cilia, especially about the mouth and on the ventral surface, and has two terminal projections with cement glands that serve in adhesion. This is a double-gland system, where one gland secretes the glue and another secretes a de-adhesive to sever the connection. Like many microscopic animals, their locomotion is primarily powered by hydrostatics. Gastrotrichs demonstrate eutely, with development proceeding to a particular number of cells, and further growth coming only from an increase in cell size.
The mouth is at the anterior end, and opens into an elongated pharynx lined by myoepithelial cells. In some species, the mouth includes an eversible capsule, often bearing teeth formed from the outer cuticle of the body wall. The pharynx opens into the intestine, which is lined with glandular and digestive cells. The anus is located close to the hindmost part of the body. In some species, there are pores in the pharynx opening to the ventral surface; these may allow egestion of any excess water swallowed while feeding.
The excretory system consists of a single pair of protonephridia, which open through separate pores on the lateral underside of the animal, usually in the midsection of the body. Unusually, the protonephridia do not take the form of flame cells; instead, the excretory cells consist of a skirt surrounding a series of cytoplasmic rods that in turn enclose a central flagellum. These cells, termed cyrtocytes, connect to a single outlet cell which passes the excreted material into the protonephridial duct. As is typical for such small animals, there are no respiratory or circulatory organs. Nitrogenous waste is probably excreted through the body wall, as part of respiration, and the protonephridia are believed to function mainly in osmoregulation.
The nervous system is relatively simple. The brain consists of two ganglia, one on either side of the pharynx, connected by a commissure. Each ganglion gives rise to a single nerve cord, which runs the length of the body and includes further, smaller ganglia.
Gastrotrichs are simultaneous hermaphrodites, possessing both male and female sex organs. There is generally a single pair of gonads, including sperm-producing cells anteriorly, and producing ova from the posterior part. Sperm are released through ducts that open on the underside of the animal roughly two-thirds of the way along the body. Once the sperm are produced, they are picked up by an organ on the tail that functions as a penis to transfer the sperm to the partner. Fertilisation is internal, and the eggs are released by rupture of the body wall. Many species of chaetonotid gastrotrichs reproduce entirely by parthenogenesis. In these species the male portions of the reproductive system are degenerate and non-functional, or, in many cases, entirely absent.
Some species are capable of laying eggs that can remain dormant during times of desiccation or cold temperatures; these species, however, also produce regular eggs during good environmental conditions, which hatch in one to four days. The eggs hatch into miniature versions of the adult. The young typically reach sexual maturity in about three days, and gastrotrichs can live up to ten days under laboratory conditions.
The relationship of gastrotrichs to other phyla is unclear. Morphology suggests that they are close to the Gnathostomulida, the Rotifera, or the Nematoda. On the other hand, genetic studies place them as close relatives of the Platyhelminthes, the Ecdysozoa or the Lophotrochozoa. About 790 species have been described.
<urn:uuid:eea17036-e53e-40cc-9c3a-31a113f6f331>
3.734375
1,004
Knowledge Article
Science & Tech.
30.99491
The answer isn't in space. It isn't on the ground, or in between. The answer to determining air quality is a combination of all of those things, Jim Crawford told a January Colloquium audience Tuesday. For more than 30 years, researchers at NASA's Langley Research Center have been measuring the stratospheric ozone layer, which extends from about 10 to 30 miles above the Earth’s surface and protects life on Earth from the sun's harmful ultraviolet (UV) rays. CERES engineering checkouts -- including an initial test scan -- are leading up to 'first light,' scheduled for December 11. NASA's Stratospheric Aerosol and Gas Experiment (SAGE) III has reached its third generation in a lineage of instruments that studies the Earth's atmosphere and protective ozone layer. On Nov. 9, the Clouds and the Earth's Radiant Energy System Flight Model Five instrument team received great news -- their in-orbit instrument was activated. NASA's Students' Cloud Observations On-Line (S'COOL) program has received 100,000 cloud observations from students around the world. On a bluff overlooking the Atlantic, Grady Koch spent a month watching ocean winds. Want to learn more about fires and how they affect the environment, climate change, and the air we breathe? On Wednesday, Oct. 26, fire expert Dr. Amber Soja will answer your questions. For more than a decade, instruments on NASA satellites have revolutionized what scientists know about fire's role in land cover change, ecosystem processes, and the global carbon cycle. Three NASA interns were selected to speak at the International Astronautical Congress Plenary in Cape Town, South Africa, on Oct. 5.
<urn:uuid:4ef89407-7028-4859-bad3-51f8a9f3d44d>
3.15625
351
Content Listing
Science & Tech.
51.564853
Because of the strong ozone absorption, any photons with wavelengths shorter than 280 nm at ground level are most likely due to human activity (or lightning). This is just as well, since these short wavelength photons have enough energy to break many chemical bonds. We have found many uses for this bond-breaking capability in material processing. Irradiation with 254 nm radiation is useful for cleaning organics from optical surfaces and from semiconductor wafers. "Germicidal radiation" (UVC) is used for sterilization, and germicidal lamps can still be found in some European meat shops. UV curing is extensively employed in industry and dentistry. Current DOE (Department of Energy) sponsored work at the National Renewable Energy Laboratory in Golden, CO, aims at using the aggressive nature of ultraviolet (solar) radiation to detoxify hazardous wastes.
Fig. 1 The terrestrial spectrum and the photopic curve, photosynthesis.
There is a concern about the observed increase in UVB because of the effect of UVB on many important biological molecules. We cannot yet assess the severity of the potential problem because of the shortage of reliable measurements of the ultraviolet loading, and because of continued uncertainty about the impact of ultraviolet.
The Uncertain UV; Definition of UVA, UVB and UVC
UV radiation offers many technical challenges. Transmittance and refractive index of many optical materials change rapidly through the ultraviolet. Detectors and optical coatings, and even some UV filter materials (see Fig. 2), are not stable, as the high energy photons cause changes. Even the definition of the UVA, UVB, and UVC is in dispute. The Commission Internationale de l'Eclairage (CIE), the world authority on definitions relating to optical radiation, changes from tradition in defining the regions.
Table 1 Definitions of UV Regions
Source | UVC | UVB | UVA
CIE [1] | 100 - 280 nm | 280 - 315 nm | 315 - 400 nm
Parrish et al. [2] | 200 - 290 nm | 290 - 320 nm | 320 - 400 nm
Riordan et al. [3] | n/a | 280 - 320 nm | 320 - 400 nm
1) International Lighting Vocabulary, CIE Publ. No. 17.4
2) UVA: Biological Effects of Ultraviolet Radiation with Emphasis on Human Responses to Longwave Ultraviolet, Parrish et al., Plenum Press, 1978
3) Influences of Atmospheric Conditions and Air Mass on the Ratio of Ultraviolet to Solar Radiation, Riordan, C. et al., SERI/TP 215 3895, August 1990
Most instrumentation to measure UVB in use in the U.S., and most publications, use one of the traditional definitions. Meyrick and Jennifer Peak of Argonne National Laboratory have argued persuasively that there are good reasons to retain the historic 320 nm boundary between UVB and UVA. The differences may seem small but are very significant: 5 nm is 14% of the total UVB range, and the rapid fall-off of terrestrial solar UV in the 290 - 320 nm wavelength range gives disproportionate significance to the location of these boundaries. It is important to know what definition any publication or meter is using. Here we use:
Fig. 2 Change in transmittance of a filter after UV irradiation.
Measurement of Solar and Simulator UV
For precision quantitative work, detailed spectroradiometric measurements are preferred over data from UVB or UVC meters. When you use a broadband meter, with a filter to exclude all but the UVB irradiance, the spectral distribution of the calibration source must be a good match to that of the unknown. Repeated independent studies by Diffey and Sayre have shown that using calibrated broadband meters can lead to huge errors because of mismatch of calibration and measured spectra.
Fig. 3 Typical solar noon UV spectra in summer and winter.
Spectroradiometry is more complicated than using a simple meter. The very steep fall-off of terrestrial UV (see Fig. 3) puts stringent demands on UV spectroradiometers used to measure the radiation below 300 nm; for accuracy the instrument requires a well characterized instrumental spectral function, exact spectral calibration and wide dynamic range without the usual problem of scattered longer wavelength radiation. A 1 nm error in calibration makes little difference in the visible, but at 295 nm even 0.1 nm corresponds to a 10% difference in recorded solar irradiance. We use the accurate 253.7, 289.4, 302.2 and 337 nm lines from our spectral calibration lamps to ensure wavelength accuracy. We use filter techniques and solar blind detectors to ensure that the holographic gratings in our 77274 Double Monochromator have adequate rejection. You cannot achieve high accuracy even with the best instrumentation and care, because UV calibration standards are limited to a few percentages of absolute accuracy.
Simulation of the Solar UV
Biological testing requires accurate simulation of the solar UV, especially the UVB region. One problem is that there is no accepted standard data set for solar UVB. The data in ASTM 891 and 892 is calculated from the E 490 standard using sophisticated models for atmospheric radiation transfer. The table below shows the values for the lowest wavelengths covered by the standards; the ASTM standards are obviously not adequate for the 280 - 320 nm region.
Table 2 Irradiance Values for Lowest Wavelengths Covered by ASTM Standards
Lowest Wavelengths (nm) | ASTM 891 AM 1.5 D Irradiance (W m-2 nm-1) | ASTM 892 AM 1.5 G Irradiance (W m-2 nm-1) | CIE AM 1 D Irradiance (W m-2 nm-1)
To meet the needs of researchers in the cosmetics industry for a usable standard, the CIE accepted a proposal from The Sunscreen High SPF Working Group of the Cosmetic, Toiletry and Fragrance Association. This proposal defines an acceptance band criterion for simulators used for testing sunscreen efficacy. The spectral output of the simulator below 320 nm must fall within two curves separated by 6 nm. Fig. 4 shows the acceptance band, which is based on the solar spectrum at Albuquerque, NM. Fig. 5 shows how we meet this requirement with our UV Simulators and Atmospheric Attenuation filter. Simulators matching this standard allow meaningful laboratory testing of sunscreen protection factors for the UVB. Extension of the standard to the UVA is required because of the growing recognition of the dangers of longer wave UV.
Fig. 4 Proposed acceptance band for simulators for SPF testing.
Fig. 5 Oriel UV Simulator with Atmospheric Attenuation (AA) filter falls in the acceptance band.
Reduced Visible/Infrared
UV constitutes about 3.5% of an AM 1D simulator output. Higher UV radiation levels speed studies of UV effects. However, the higher UV levels from a conventional simulator are accompanied by proportionally more visible and infrared. Biological samples that are normally just warmed by normal solar radiation levels can be heated above viability by an intense simulator. Less drastic thermal effects may mask the true UV dependence of an effect under investigation. Our Solar UV Simulators remove most of the visible and infrared (85%), allowing exposure with UV levels many times above solar levels.
Fig. 6 Irradiance from a UV simulator with Atmospheric Attenuation (AA) Filter compared with actual UV solar spectra.
Long pass filters remove shortwave UV.
Transmittance falls rapidly to negligible values below the cut-on. Fig. 7 shows the transmittance of the filters we use to mimic the atmospheric transmittance, to cut out UVC, and to cut out UVC plus UVB. Our comprehensive range of broadband and narrowband filters allows versatility in selection of output spectrum. The collimated beam from our simulators simplifies filter design. Note: any filter for use in a simulator with UV output should be stabilized by UV exposure. The order of filter positioning should be considered; highly absorbing filters should be farthest from the source. If UV is not required, then a suitable long pass filter will protect subsequent optics and simplify safety requirements.
Fig. 7 Transmittance of UV filters.
Some Biological Effects of Light
Radiation has benign and harmful effects on biological systems. Photosynthesis is obviously of vital importance; other benign effects include the production of vitamin D3, the setting of mood and the circadian rhythms, and the benefits from the mild germicidal bath provided by the sun. There are many harmful effects of solar radiation on humans, particularly of UV radiation. Skin cancer, cataracts, loss of skin elasticity, erythema, suppression of the immune system, photokeratitis and conjunctivitis can all result from UV exposure, though in many cases the precise relationship between exposure and effect remains unclear. Simultaneous irradiance with different wavelengths enhances some processes. Understanding how solar radiation affects plant and plankton growth is also important in assessing the results of environmental change.
Key Action Spectra
The action spectrum characterizes the wavelength dependence of a specified biological change. Researchers continue to measure action spectra for important biological processes, leading to better understanding of the effects of irradiation and potential changes due to ozone layer depletion. Knowledge of action spectra helps in the development of protective agents. Action spectra for various detrimental ultraviolet effects were used to compile the maximum recommended exposure graph.
Fig. 8 Two versions of the erythemal action spectrum. The Diffey version has a simple mathematical model that simplifies calculation of effectiveness spectra.
Fig. 9 Erythemal effectiveness of noon Summer sun, Winter sun and Oriel Solar UV Simulator with Atmospheric Attenuation (AA) Filter.
Fig. 10 Action spectra of DPC (DNA to protein crosslinks) in humans [2] and tumorigenesis in mice [3].
Current research efforts include studies of the relationship between monochromatic and broadband action spectra, and better understanding of photoaddition, photorecovery and the questionable photoaugmentation. Here we show action spectra for erythema, for carcinogenesis and for DNA changes and photosynthesis inhibition in plant life. All of these spectra peak below 300 nm, but have measurable values through the UVA. There are several established action spectra for erythema. This is understandable, since there is no "standard skin" and measurements indicate that spectra differ depending on the delay from exposure to assessment. Diffey's spectrum has an uncomplicated mathematical formula that simplifies determination of the effective erythemal dose, given the solar or simulator spectrum. We use Diffey's formula to calculate effectiveness spectra for solar UV and our UV simulator. The effectiveness spectra in Fig. 9 use Diffey's formula and the sun and simulator spectra from Fig. 6. Fig. 9 shows which wavelengths actually produce erythema, taking into account the action spectrum and the availability of radiation at each wavelength. You can see that the peak effectiveness is at 305 nm for the summer sun. This is the wavelength at which the rapidly rising solar spectrum compensates the falling action spectrum for maximum effect. In the winter, the effectiveness is much reduced and the peak shifted to longer wavelengths.
The UTR5 spectrum for tumorigenesis in hairless mice and the action spectrum for DNA damage in human cells indicate a strong deep UV dependence. Like all the action spectra we show, these have been normalized and do not indicate absolute sensitivity to radiation level. Quantification of actual sensitivity in the case of tumor induction is complicated not only by the statistical nature of cancer development, but also by shielding factors. In vitro irradiated cells do not have the in vivo shielding of epidermal layers.
Fig. 11 shows two action spectra for photosynthesis inhibition in Antarctic phytoplankton (drawn from Helbling [4] and Mitchell [5]). Unlike tropical phytoplankton, Antarctic phytoplankton, particularly dark-adapted (subpycnocline) phytoplankton from greater depths, are strongly affected by UV radiation. The sensitivity and photoadaptability of this basic material will influence the impact of the Antarctic ozone hole on the local food web. Long pass filtering of solar radiation was used to determine both of these spectra; differences may be due to sampling technique (e.g. sample depth) or the low spectral resolution of this technique. This figure also shows a spectrum (discrete points) for DNA damage to alfalfa seedlings. We have scaled the points for a maximum value of 1. The original careful quantitative work by Quaite and the Sutherlands at Brookhaven National Laboratory shows that outer layers shield the cells and reduce the sensitivity to UVB from that expected from data gathered from work on the susceptibility of unshielded plant cells.
CAUTION: Newport's Oriel® Solar Simulators are not designed for research on humans. Exposure to intense UV radiation can cause delayed severe burns to the eyes and skin.
Fig. 11 Action spectra for photosynthesis inhibition in Antarctic phytoplankton, and spectrum for DNA damage to alfalfa seedlings.
All the action spectra increase dramatically with decreasing wavelength. At first glance, any increase in UVB will lead to dramatic increase in DNA damage, erythema and inhibition of photosynthesis. Madronich [7] used satellite measurements of ozone concentration from 1979 to 1989 to estimate the changes in UV reaching the earth's surface over this period. He used various action spectra to calculate the DNA and plant damage at various latitudes over this ten year period, estimating a 7.4% increase at 50 N and a 34% increase at 75 S. The difficulty in drawing conclusions, even from detailed measurements of ozone, lies in uncertainty of the key action spectrum. Madronich used Setlow's [8] generalized DNA damage spectrum and Caldwell's [9] plant damage spectrum. Recent detailed studies by Sutherland's group [10] on human skin and plant seedlings cast some doubt on whether the potential increase in DNA damage due to higher UVB levels will be as high as predicted. They point out that much previous work underestimated the effects of UVA. Although the sensitivity of the effects to UVA is very low, UVA penetrates layers that shield the DNA much better than does UVB, and there is a lot more terrestrial UVA.
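To make "convolving an action spectrum with a measured spectrum" concrete, here is a minimal Python sketch using the widely quoted McKinlay-Diffey (CIE) erythema weighting; the irradiance values below are placeholders, not real measurements:

    import math

    def erythema_weight(lam):
        """McKinlay-Diffey (CIE) erythema reference action spectrum; lam in nm."""
        if lam <= 298.0:
            return 1.0
        elif lam <= 328.0:
            return 10.0 ** (0.094 * (298.0 - lam))
        else:  # 328 nm < lam <= 400 nm
            return 10.0 ** (0.015 * (140.0 - lam))

    # Placeholder spectral irradiance samples (W m-2 nm-1) on a 5 nm grid
    wavelengths = [290 + 5 * i for i in range(23)]  # 290..400 nm
    irradiance = [1e-4 * math.exp((lam - 290) / 25.0) for lam in wavelengths]  # fake shape

    # Trapezoidal integration of the weighted spectrum = effective dose rate
    weighted = [erythema_weight(l) * e for l, e in zip(wavelengths, irradiance)]
    dose_rate = sum(0.5 * (weighted[i] + weighted[i + 1]) * 5.0
                    for i in range(len(weighted) - 1))
    print("erythemally weighted irradiance: %.3g W m-2" % dose_rate)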
Convolving Quaite and Sutherland's action spectrum for DNA damage in alfalfa seedlings with the relatively high level of UVA solar irradiation shows that the UVA contribution is significant. Since the levels of UVA will not change with ozone depletion, conclusions based on UVB increase overestimate the impact of the loss of ozone. Additional data [11] on UV damage to phytoplankton supports Sutherland's position.
1) Diffey, B. Private Communication (1992-1993)
2) Peak, J.G. and Peak, M.J. Mutation Research, 246 (1991) 187-191
3) Van der Leun, Private Communication (1992-1993)
4) Helbling et al. Marine Ecology Progress Series, Vol 80: 89-100, 1992
5) Mitchell, B.G. and Karentz, D. Antarct. J. U.S. 26, 119-120, 1991
6) Quaite, F.E., Sutherland, B.M. and Sutherland, J.C. Nature, Vol 358, p 576, August 1992
7) Madronich, S. Geophysical Research Letters, Vol 19, No. 1, pp 37-40, 1992
8) Setlow, R.B. Proc. Natl. Acad. Sci. 71, 3363-3366, 1974
9) Caldwell, M.M. et al. pp 87-111 in Stratospheric Ozone Reduction, Solar Ultraviolet Radiation and Plant Life, Worrest and Caldwell, Editors, Springer Verlag, 1986
10) Freeman et al. Proc. Natl. Acad. Sci. U.S.A., Vol 86, pp 5605-5609, July 1989
11) Ryan, K.G. J. Photochem. Photobiol. B: Biol., 13 (1992) 235-240
<urn:uuid:be696e7d-bdfa-4f43-b56d-09afdaf23a4c>
3.046875
3,302
Academic Writing
Science & Tech.
43.00272
SOME genetically modified fish are like ferocious tigers in a bare tank but appear to be pussy cats under more natural conditions. This finding suggests it will not be easy for biologists to predict the ecological consequences of escaped transgenic animals. Salmon genetically engineered to overproduce growth hormone can put on up to 25 times the weight of wild salmon and could provide aquaculturists with a faster way to raise fish to market size. However, lab tests showed that these transgenic fish are more aggressive predators than wild salmon, raising concerns that they could harm wild ecosystems if they escape. Fredrik Sundström and his colleagues at Fisheries and Oceans Canada's Center for Aquaculture and Environmental Research in Vancouver tested whether the GM fish would have the same superiority in more natural conditions. When they raised the fish alongside unmodified salmon in stream tanks complete with gravel, large rocks, logs and ...
<urn:uuid:2c5f40e9-e2b0-4344-9af9-b2a048ffe3ba>
3.28125
208
Truncated
Science & Tech.
33.141658
Feb12-12, 06:09 PM #1
what is electricity???
For a long time I thought that electricity was just electrons moving in a coil from negative to positive terminals, and that's what my teachers have taught me too... But I recently stumbled upon something called electricity through induction, which has led me to doubt the above explanation. Someone please clear my doubts, and please explain how electrons are transferred through air (electric induction) where positive and negative terminals cannot really be specified.
Feb12-12, 06:32 PM #2
In electrostatics, when there is no magnetic field (and therefore no induction), the electric field is conservative, and thus can be described by a potential. The current flows from high potential (positive terminal) to low potential (negative terminal), the electrons themselves flowing in the other direction (negative charge). In induction, due to the magnetic field, the electric field is no longer conservative; there is no potential for it. Therefore, for example, the current can flow repeatedly in a closed wire due to induction.
Feb12-12, 08:15 PM #3
Let's say I push a metal square loop into a B field. The electrons will feel a Lorentz force F = q(v x B) and start to move. Now when each electron moves it will push the one in front of it and we will have a current.
Feb12-12, 09:31 PM #4
"please explain how electrons are transferred through air (electric induction)"
They aren't transferred through air. But while flowing along a wire they will interact with a magnet; just hook a D cell battery to a loudspeaker and watch the cone move. When electrons move through a magnetic field they get pushed sideways. When sideways happens to be along a wire, that's induction. And polarity is predictable. When sideways is perpendicular to the wire it tries to push the electrons out through the insulation, so the wire feels that sideways force - that's an electric motor (of which a loudspeaker is one variety and your automobile starter is another). Read up on "Faraday" and "Hall Effect" for starters. Use Google or Altavista search engines; there's tons of stuff on the 'net at any level you want. Have fun! old jim
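To put a number on the motional-EMF idea described in the thread (each charge in the moving conductor feels the sideways Lorentz push F = qvB), here is a back-of-the-envelope sketch with made-up values:

    # Motional EMF for a straight conductor of length L moving at speed v
    # perpendicular to a uniform field B: EMF = B * L * v. Values are arbitrary.
    B = 0.5        # tesla
    L = 0.2        # metres of wire in the field
    v = 3.0        # metres per second
    q = 1.602e-19  # electron charge magnitude, coulombs

    emf = B * L * v                  # volts driven around the circuit
    force_per_electron = q * v * B   # the sideways Lorentz push on each carrier
    print("EMF = %.2f V, force per electron = %.2e N" % (emf, force_per_electron))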
<urn:uuid:c5972a68-e991-4d24-9400-2dc186b7bae5>
3.421875
557
Comment Section
Science & Tech.
50.750667
Okay, I will undoubtedly be overstepping my bounds as far as my knowledge of this goes, so if I say anything utterly false please don't be afraid to correct me. So atoms are the smallest measurable unit of matter. Earth and the atmosphere, having its theoretical limits separating the atmosphere from the rest of space, must theoretically contain a finite number of atoms. It might certainly be an incomprehensibly large number, but it would definitely be a finite number. Then, we essentially create a series of statistical data for each individual atom in the set. This would include but is not limited to the elemental composition, the speed and direction it is heading, its relative location on earth, etc. So a small series of very basic information about each atom. An important note is that we'd collect the data based on a moment in time, so imagine we paused time and gathered the data that each atom held at that exact moment. With that data, we create a program that we can input this all into. The program would manipulate the data in a few ways: mainly, it'd create a graphical and visual representation of the data (i.e. a map of the earth with detail of atomic accuracy). After that, we'd gather more data into subsets, based on relative changes in time. We'd compare the subsets against each other and note the trends in physical or chemical reactions and how they occur over time. From that information we can then apply trending models to every atom that dictate its future conditions and state. Through creating enough trending models and gathering/analysing enough of the information, we could eventually have a working model of the earth, with which one could examine the state of the earth in any time period, past, present, or future. We could eventually move the information coverage to the entire universe, and the information would stretch from the very beginning and end of its existence. There are a few estimations to this:
1. How many atoms comprise our earth and atmosphere?
2. Given the most complete model of the program would include the entire universe, encompassing every moment of time that universe has or will ever exist in, how much disk space would be required to hold that information?
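For question 1 there is a standard back-of-the-envelope estimate: divide the Earth's mass by an assumed average atomic mass. The average molar mass used below (about 40 g/mol, roughly rock-like) is an assumption, so treat the result as an order of magnitude only; question 2 then follows from a guessed per-atom record size:

    AVOGADRO = 6.022e23      # atoms per mole
    EARTH_MASS_G = 5.97e27   # Earth's mass in grams (~5.97e24 kg)
    AVG_MOLAR_MASS = 40.0    # g/mol: rough guess for bulk rock (assumption)

    atoms = EARTH_MASS_G / AVG_MOLAR_MASS * AVOGADRO
    print("atoms in Earth: ~%.0e" % atoms)          # on the order of 1e50

    BYTES_PER_ATOM = 100     # element, position, velocity, etc. (assumption)
    total_bytes = atoms * BYTES_PER_ATOM            # for ONE snapshot in time
    print("storage per snapshot: ~%.0e TB" % (total_bytes / 1e12))

Even at a guessed 100 bytes per atom, a single frozen snapshot needs on the order of 10^52 bytes, roughly 10^40 terabytes, before multiplying by time steps or extending beyond the Earth.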
<urn:uuid:0fd5baa5-3a06-4810-a220-8d233f9cede4>
3.453125
456
Q&A Forum
Science & Tech.
42.457735
Revista chilena de historia natural
Print version ISSN 0716-078X
SABAT, PABLO; NESPOLO, ROBERTO F. and BOZINOVIC, FRANCISCO. Water economy of three Cinclodes (Furnariidae) species inhabiting marine and freshwater ecosystems. Rev. chil. hist. nat. [online]. 2004, vol.77, n.2, pp. 219-225. ISSN 0716-078X. doi: 10.4067/S0716-078X2004000200001.
Birds living in desert environments have been the preferred models for the study of physiological adaptations to water scarcity. Passerine birds living in marine coastal habitats face similar problems, yet physiological adaptations to water conservation in such species have been poorly documented. We measured total evaporative water loss (TEWL) and rates of oxygen consumption (VO2) in three species of passerine birds dwelling in marine and fresh water habitats. Mass-specific total evaporative water loss was significantly lower in the marine species, Cinclodes nigrofumosus, than in species inhabiting areas near freshwater sources. We found a positive relationship between TEWL and VO2. The ratio of TEWL to VO2 (relative evaporative water loss, RTEWL) showed significant variation among Cinclodes species, and was highest for the fresh-water living species, C. oustaleti and C. fuscus. The variation in TEWL found in Cinclodes is likely a consequence of differential exploitation of marine prey with high osmotic loads, which, in turn, may impose the need for water conservation.
Keywords: evaporative water loss; Cinclodes; osmoregulation; passerines; salt.
<urn:uuid:044d2631-e73b-43f9-bcca-3b87e3cac3fe>
2.953125
385
Academic Writing
Science & Tech.
32.411016
Ground shaking caused by the sudden release of accumulated strain by an abrupt shift of rock along a fracture in the earth or by volcanic or magmatic activity, or other sudden stress changes in the earth. Information on earthquake activity, earthquake science, and earthquake hazard reduction with links to news reports, products and services, educational resources for teachers, glossary, and current U.S. earthquake activity map. Answers to a wide variety of questions (FAQs) about earthquakes, such as dictionary of terms, earthquake activity and probabilities, common myths, faults, plate tectonics, and earthquake measurement techniques. Place to provide information about ground shaking associated with significant earthquakes. A questionnaire to let us know what you felt following an earthquake in the United States or in other countries.
<urn:uuid:5f142c0d-64b5-4c6c-914a-e346f42a1192>
3.140625
156
Knowledge Article
Science & Tech.
25.6012
C Programming: What is the difference between an array and a pointer?
Why is a raven like a writing-desk? (Lewis Carroll)
This is a copy of an article I wrote a long time ago. I'm putting it here to give it a more permanent home. Sorry for being off topic again!
I'm glad you asked. The answer is surprisingly simple: almost everything. In other words, they have almost nothing in common. To understand why, we'll take a look at what they are and what operations they support.
An array is a fixed-length collection of objects, which are stored sequentially in memory. There are only three things you can do with an array:
sizeof - get its size
You can apply sizeof to it. An array x of N elements of type T (T x[N]) has the size N * sizeof (T), which is what you should expect. For example, if sizeof (int) == 2 and int arr[5];, then sizeof arr == 10 == 5 * 2 == 5 * sizeof (int).
& - get its address
You can take its address with &, which results in a pointer to the entire array.
any other use - implicit pointer conversion
Any other use of an array results in a pointer to the first array element (the array "decays" to a pointer).
That's all. Yes, this means arrays don't provide direct access to their contents. More specifically, there is no array indexing operator.
A pointer is a value that refers to another object (or function). You might say it contains the object's address. Here are the operations that pointers support:
sizeof - get its size
Like arrays, pointers have a size that can be obtained with sizeof. Note that different pointer types can have different sizes.
& - get its address
Assuming your pointer is an lvalue, you can take its address with &. The result is a pointer to a pointer.
* - dereference it
Assuming the base type of your pointer isn't an incomplete type, you can dereference it; i.e., you can follow the pointer and get the object it refers to. Incomplete types include struct types that haven't been defined yet.
+ and - - pointer arithmetic
If you have a pointer to an array element, you can add an integer amount to it. This amount can be negative, and ptr - n is equivalent to ptr + -n (and -n + ptr, since + is commutative, even with pointers). If ptr is a pointer to the i'th element of an array, then ptr + n is a pointer to the (i + n)'th array element, unless i + n is negative or greater than the number of array elements, in which case the results are undefined. If i + n is equal to the number of elements, the result is a pointer that must not be dereferenced.
That's it, really. However, there are a few other pointer operations defined in terms of the above fundamental operations:
-> - struct dereference
p->m is equivalent to (*p).m, where . is the struct/union member access operator. This means p must be a pointer to a struct or union.
[] - indexed dereference
a[b] is equivalent to *(a + b). This means one of a and b must be a pointer to an array element and the other an integer; not necessarily respectively, because a[b] == *(a + b) == *(b + a) == b[a]. Another important equivalence is p[0] == 0[p] == *p.
A quirk of parameter declarations
However, there's one thing that confuses this issue. Whenever you declare a function parameter to have an array type, it gets silently converted to a pointer and any size information is ignored. Thus the following four declarations are equivalent (any size given in the brackets is ignored):
void foo(int [5]);
void foo(int []);
typedef int t_array[5]; void foo(t_array);
void foo(int *);
A more common example is int main(int argc, char *argv[]), which is the same as int main(int argc, char **argv).
However, int main(int argc, char argv[][]) would be an error because the above rule isn't recursive; the result after conversion would be int main(int argc, char (*argv)[]), i.e. argv would be a pointer to an array of unknown size, not a pointer to a pointer.
Arrays by themselves are nearly useless in C. Even the fundamental [] operator, which is used for getting at the array's contents, is an illusion: it's defined on pointers and only happens to work with arrays because of the rule that any use of an array outside of sizeof and & yields a pointer.
<urn:uuid:452f33eb-cbab-4254-b286-bd704a9b8177>
3.65625
1,030
Documentation
Software Dev.
59.93331
Benthic foraminifera: their importance to future reef island resilience
Dawson, John L., Hua, Quan, and Smithers, Scott G. (2012) Benthic foraminifera: their importance to future reef island resilience. Proceedings of the 12th International Coral Reef Symposium. 12th International Coral Reef Symposium, 9-13 July 2012, Cairns, QLD, Australia, pp. 1-7.
View at Publisher Website: http://www.icrs2012.com/proceedings/manu...
The provenance, age and redistribution of sediments across Raine Reef (11°35'28"S 144°02'17"E), northern Great Barrier Reef (GBR) are described. Sediments of both the reef flat and sand cay beaches are composed predominantly of benthic foraminifera (35.2% and 41.5% respectively), which is a common occurrence throughout the Pacific region. The major contemporary sediment supply to the island was identified as Baculogypsina sphaerulata, a relatively large (1-2 mm exclusive of spines) benthic foraminifera living on the turf algae close to the reef periphery, and responsible for beach sand nourishment. Radiometric ages of foraminiferal tests of ranging taphonomic preservation (pristine to severely abraded) included in surficial sediments collected across the reef flat were remarkably young (typically <60 years). Results indicate rapid transport and/or breakdown of sand with a minimal storage time on the reef (likely <10² years), inferring a tight temporal link between the reef island and sediment production on the surrounding reef. This study demonstrates the critical need for further research on the precise residence times of the major reef sediment components and transport pathways, which are fundamental to predicting future island resilience.
<urn:uuid:151442d2-0b2e-4879-a592-40a47f3a9ab7>
2.84375
419
Academic Writing
Science & Tech.
35.756547
Abraham de Moivre (a good friend of Isaac Newton) was born on May 26th 1667 in Vitry (close to Paris), France, and died November 27th 1754 in London, England. Although de Moivre attended college and studied privately, it doesn't appear that he received a degree. He apparently served about a year in prison for being Protestant; after serving his term, and with the expulsion of the Huguenots, he emigrated to England. During his late teens, he worked as a private math tutor. In 1697 he was elected a fellow of the Royal Society. By 1710 he was appointed to the Commission set up by the Royal Society to review the rival claims of Newton and Leibniz to the discovery of calculus. De Moivre was a foreigner, which made it difficult to gain an appointment to the Commission; however, due to his friendship with Newton, he was appointed. De Moivre's main income came from tutoring, and it's believed that he lived in poverty for his entire life.
- De Moivre pioneered analytic trigonometry/geometry and the theory of probability. He is famous for de Moivre's formula, a fundamental result about complex numbers.
- During his studies in probability, he also developed the foundations of the theory of annuities.
- He published The Doctrine of Chances in 1718.
Remarkable Mathematicians
The author, Ioan James, profiles 60 famous mathematicians who were born between 1700 and 1910 and provides insight into their remarkable lives and their contributions to the field of math. This text is organized chronologically and provides interesting information about the details of the mathematicians' lives.
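For reference, de Moivre's formula states that for any real x and integer n:

    $(\cos x + i \sin x)^n = \cos(nx) + i \sin(nx)$

Expanding the left-hand side with the binomial theorem and matching real and imaginary parts yields the classical multiple-angle identities for sine and cosine.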
<urn:uuid:e1778e98-e51e-47bc-afb4-a2300df001cf>
3.25
348
Knowledge Article
Science & Tech.
44.125545
For a general description of the launch see this ESA web page. Because the instruments on PROBA2 are expected to monitor solar conditions around the clock, PROBA2 follows a sun-synchronous orbit. This means that PROBA2's orbit will track the terminator, following the dividing line between day and night on earth over the poles, as in the figure on the right. Sometimes this type of orbit is referred to as a dawn/dusk orbit. As the earth rotates below, PROBA2 remains fixed either at dawn or dusk depending on which side of the planet it is over at the time. To account for the effects of the earth's motion around the sun, PROBA2's orbit must precess by approximately one degree every day, tracking the slow changes of the position of the sun in the sky throughout the year.
- Cartoon of PROBA2's Orbit
This means that, for most of the year, PROBA2 will have a full-time view of the sun, and will not experience eclipses of the sun behind the earth. However, because the orbit does not follow the terminator exactly, PROBA2 will experience brief periods of several weeks when eclipses of the sun by the earth do occur, specifically around December each year. During approximately 80 days (from November until January), visible eclipses occur every orbit, with a duration ranging from a few minutes in November up to a maximum of 18 minutes and back down to zero. These eclipses, which last for several minutes of every orbit, are also scientifically useful. In this case, they provide an opportunity for onboard instruments to obtain special calibration images that cannot be captured when the spacecraft is in full sun. These images will help us better characterize the response of the SWAP and LYRA instruments to sunlight.
Since PROBA2 has only 2 star trackers, which must always have a clear view of the open sky in order to maintain spacecraft attitude control, during an orbit the spacecraft will perform four large-angle rotations of 90 degrees (every 24 minutes). The timing of the maneuvers is such that the star trackers are constantly oriented to point as far away from the Earth as possible. Each maneuver will take a few minutes and will be centered around these optimal switching times. Scientific users of PROBA2 data should take into account that approximately 20 minutes per orbit are lost for spacecraft stabilization procedures.
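The "approximately one degree every day" figure is simply the length of the year: 360 degrees / 365.25 days is about 0.986 degrees per day. As an illustration, here is a sketch of the standard J2 nodal-precession calculation that recovers the inclination making an orbit sun-synchronous; the 725 km altitude is an assumed round number for PROBA2, used only for the example:

    import math

    MU = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
    R_E = 6378137.0       # Earth's equatorial radius, m
    J2 = 1.08263e-3       # Earth's oblateness coefficient

    alt = 725e3           # assumed altitude, m (illustrative)
    a = R_E + alt         # semi-major axis of a circular orbit
    n = math.sqrt(MU / a ** 3)   # mean motion, rad/s

    # Sun-synchronous requirement: the orbit plane must precess 360 deg per year
    raan_rate_req = 2.0 * math.pi / (365.2422 * 86400.0)   # rad/s

    # J2 secular node drift: dRAAN/dt = -1.5 * n * J2 * (R_E / a)**2 * cos(i)
    cos_i = -raan_rate_req / (1.5 * n * J2 * (R_E / a) ** 2)
    print("sun-synchronous inclination: %.1f deg" % math.degrees(math.acos(cos_i)))
    # ~98 deg: slightly retrograde, so the orbit plane keeps pace with the sun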
<urn:uuid:9c5643b7-bd8a-433e-8b9f-ca37ec19d314>
3.484375
490
Knowledge Article
Science & Tech.
38.859991
INTRO: Spotted seals and harbor seals look so much alike that when it comes to telling them apart, even biologists end up scratching their heads. In places like Bristol Bay, where the two species coexist, scientists are turning to DNA for answers. As Doug Schneider reports in this week's Arctic Science Journeys Radio, being able to tell the difference may help researchers learn more about seals and about how these marine mammals might respond to climate change. STORY: Alaska's Bristol Bay is a place on the edge of two worlds. To the north are the Bering Strait and the icy High Arctic. To the south are the somewhat warmer waters of the North Pacific Ocean. Being on the edge makes Bristol Bay a gathering point for two normally separated species. One is the harbor seal, the plump, ubiquitous marine mammal that prefers ice-free areas; the other is the spotted seal, a species at home along the ice edge. Now you might think two seal species that prefer very different habitats would also look quite different. Yet it turns out that distinguishing between these two seals in the wild is almost impossible. Robert Small is a marine mammal biologist with the Alaska Department of Fish and Game. He's been studying the bay's seals for four years. Even so, he admits that he can't tell the two species apart. SMALL: "No, I couldn't. If you have a live spotted seal and a live harbor seal side by side, you could make a guess." Normally, spotted seals prefer the leading edge of sea ice as it advances and retreats with the changing seasons. But in Bristol Bay, spotted seals stick around even after the ice retreats, preferring to hang out with harbor seals and feed on returning salmon. Small and other scientists want to figure out just how many harbor seals there are in the bay. But until recently, they could only guess, because they didn't know whether they were actually counting harbor seals or spotted seals. SMALL: "Spotted seals have not received a lot of attention. In terms of their geographic range, we know that they occupy Bristol Bay and probably down the Aleutian chain. But to really know how many are there at certain times of the year, we really haven't done the work. It's complicated because you can't distinguish between the two seals. It's hard enough when you have them in hand, but a lot of the work we do is through aerial surveys. When you're up in the air, there's no way you can tell the difference." About the only way you can know for sure is to examine the seal's DNA. For the last two summers, Small and several colleagues captured as many seals as they could, and from each took a tiny skin sample that was sent to the National Marine Fisheries Service for positive identification. SMALL: "It's one tool to look at the different lineages between the spotted seal and a harbor seal. Last year we caught a total of 39 animals and five of them were spotted seals. This year we caught a few more total number of seals but it looks like only two or three are spotted seals." Sorting out Bristol Bay's seals is important for learning just how each species is faring in the bay. With four years of aerial population counts and about 150 skin samples, Small thinks the harbor seal population is stable at around 18,000 animals. He says spotted seals probably number around 1,600 animals, but since they come and go frequently from the bay, that number could be higher. Small also says Bristol Bay is a good place to study the impacts of climate change because the two seal species may respond very differently to a warmer climate. 
Harbor seals, for example, may extend their range as receding sea ice pushes spotted seals farther north. SMALL: "I think these two species would be beneficial because they're at an area where their ranges overlap. If changes start to occur within the Bristol Bay area that folks are starting to see in the Arctic, then the opportunity is there to see how these species will change with the changing ecosystem." OUTRO: This is Arctic Science Journeys Radio, a production of the Alaska Sea Grant Program and the University of Alaska Fairbanks. I'm Doug Schneider. Thanks to the following individual for help preparing this script: Arctic Science Journeys is a radio service highlighting science, culture, and the environment of the circumpolar north. Produced by the Alaska Sea Grant College Program and the University of Alaska Fairbanks. The shortcut to our ASJ news home page is www.asjnews.org. Alaska Sea Grant In the News The URL for this page is http://seagrant.uaf.edu/news/ Seal travel map Spotted seals are famous for their wandering ways. See just how far one spotted seal traveled. (Courtesy ADF&G.) Related Web sites Spotted Seal (ADF&G) Harbor Seal (ADF&G)
<urn:uuid:a5055c4c-d083-4f1d-afac-10ac103f1067>
3.84375
1,057
Audio Transcript
Science & Tech.
62.690926
Scientific cosmology is the study of the entire universe, its origin, evolution, composition, and structure. Cosmology is today in the midst of a golden age, during which the basic story of the evolution of the universe is coming into clear focus. But this recent progress rests on foundations laid in the late nineteenth and early twentieth centuries, during which the first telescopes were built that allowed measurement of the distances to objects outside our home galaxy, the Milky Way. It was during this same period that the two great physical theories required for understanding the larger universe were constructed, relativity and quantum mechanics. The ancient Egyptians and Mesopotamians pictured the earth as flat, with water above and below. This same picture is reflected in the opening words of Genesis, in which on the second day God separates the waters with a firmament and on the third day creates dry land. The first great cosmological revolution occurred when this flat earth was replaced by the Greek picture of a spherical earth surrounded by heavenly spheres carrying the moon, sun, planets, and fixed stars, with the whole system rotating about the earth every day. In the second century AD, Claudius Ptolemy developed a detailed mathematical treatment of the motions of the planets. This spherical geocentric picture prevailed in Europe and the Middle East for more than a millennium. In the second great cosmological revolution, the geocentric universe was overthrown in the seventeenth century by Galileo's telescope and Newton's mechanics. But Newton's laws were applied just to the solar system; the true size and nature of the universe were a deep mystery. A third cosmological revolution today is constructing the first picture of the structure and history of the larger universe that may actually be true, since it is cross-checked by a wealth of new data.1 In the early years of the seventeenth century, Dutch opticians learned to make crude spyglasses. Galileo Galilei, then a young professor of mathematics in Italy, improved these early telescopes and turned his new instruments to the sky. He reported in 1610 what he had seen: that the moon has mountains, that Jupiter has four moons of its own, and that the Milky Way is made of countless stars. Shortly afterward, he discovered that Venus went through phases like the moon, except that when Venus is crescent it is large (because it is nearer to earth than to the sun) but when it is full it is much smaller (because it is then farther than the sun). The phases of Venus disproved the Ptolemaic system, in which Venus is always between the earth and the sun. Jupiter's moons strengthened the case for Copernicus's system in which the planets - including the earth with its moon - all go around the sun. Galileo's contemporary Johannes Kepler made key discoveries about planetary motion, including that the planetary orbits are ellipses. All this was subsequently explained by Newton's mechanics and his theory of universal gravitation. Newton also invented the reflecting telescope. But improvements in telescopes were slow, and it was not until 1838 that astronomers could measure the distance to even nearby stars. To go farther required really large telescopes, a goal embraced by American astronomers and philanthropists. John Brashear was a self-taught telescope builder who raised funds from captains of industry such as Andrew Carnegie. 
James Lick, who struck it rich in San Francisco real estate after the Gold Rush, was persuaded to build a 36-inch refracting telescope in the first mountaintop observatory, near San Jose, California. Astronomer George Ellery Hale then persuaded Charles Tyson Yerkes, the Chicago street car magnate, to finance the University of Chicago's new observatory in Wisconsin with a 40-inch refracting telescope, still the world's largest refractor. Hale constantly strove to build larger telescopes to see better and farther. He persuaded Andrew Carnegie to finance a 60-inch reflecting telescope at his new Mt. Wilson Observatory near Pasadena, California. Then he convinced Los Angeles hardware and oil millionaire and amateur astronomer John Hooker to provide the funds to build a 100-inch telescope there. Hale's last telescope was the 200 inch (5 meter) reflector on Mt. Palomar in Southern California, financed by the Rockefeller Foundation.2 It was the largest telescope in the world from 1948 until 1993-96, when the twin 10 meter telescopes were finished on top of Mauna Kea, on the Big Island of Hawaii, with funds from oil billionaire W. M. Keck. Meanwhile, charge coupled device (CCD) detectors had improved the efficiency with which astronomers could capture light by a factor of ten compared to the best photographic plates. Astronomers had long wondered about the nature of the "spiral nebulae" - faint clouds of light that were clearly not individual stars. The largest such object was discovered in the constellation Andromeda by the great Persian astronomer Al Sufi in the 10th century. The philosopher Immanuel Kant had speculated that the spiral nebulae were distant island universes like our own Milky Way. By 1802 William Herschel had found over 2500 nebulae in the northern sky, and then his son John Herschel added additional southern nebulae. Larger telescopes allowed astronomers to discover ever more spiral nebulae, but did not establish their nature. In the early years of the 20th century many astronomers, including Harlow Shapley, thought that they were probably clouds of gas in the Milky Way, but some astronomers made observations suggesting that Kant's island universe hypothesis was right. It was astrophotography and the new generation of large telescopes that would determine the answer. Henrietta Leavitt, working at the Harvard College Observatory, studied Cepheid variable stars that were all about the same distance from earth. She showed in 1912 that the brighter ones had longer periods - that is, they took longer to go through the cycle from bright to dark and then bright again. This meant that wherever such stars were seen, their luminosities could be determined by measuring their periods. Their observed brightness could then be used to measure their distances. Harlow Shapley used Cepheid variables and the Mt. Wilson 60 inch telescope in 1917 to measure the size of the Milky Way and show that the sun is located far from its center. Shapley then accepted the directorship of the Harvard College Observatory, but in doing so he lost access to the huge Mt. Wilson telescopes. Using Hale's new 100-inch telescope there, Edwin Hubble was able to measure the periods of many Cepheid variable stars in spiral nebulae. Hubble showed in 1925 that the largest spiral nebula, the great Andromeda galaxy, lies far outside the Milky Way. 
Cecilia Payne-Gaposchkin, who had discovered in her doctoral research that the stars are mostly made of hydrogen and helium, was in Shapley's office when he received a letter from Hubble reporting preliminary results. He held it out to her and said, "Here is the letter that has destroyed my universe." 3 All the information we have about the distant universe comes to us in the form of light. In 1676, the Danish astronomer Ole Roemer estimated the speed of light by measuring how much later he saw an eclipse of one of Jupiter's moons when Jupiter was on the opposite side of its orbit from the earth compared to when it is nearer. Newton showed that a prism spreads a beam of white light into colors from blue to red. In 1800 the astronomer William Herschel showed that invisible light beyond the red end of the spectrum of light from a prism is radiant heat - infrared radiation. Shortly afterward, ultraviolet light was also discovered. Newton had thought that light was made of particles. But in 1803 the English polymath Thomas Young, who subsequently also helped decode Egyptian hieroglyphics, proved by an ingenious experiment that light is a wave phenomenon. The modern era of astrophysics began in 1814 when the German optician Joseph von Fraunhofer discovered that the spectrum of sunlight has many dark and bright lines. The German chemist Robert Bunsen and physicist Gustav Kirchhoff were able to identify the characteristic spectra of a number of chemical elements, and astronomers showed that the spectra of many of these same elements are found in starlight. In 1864, the theoretical physicist James Clerk Maxwell showed that electricity and magnetism are intimately connected. He deduced that light is an electromagnetic phenomenon, and his calculation of the speed of light agreed with the best measurements then available. The American physicist Albert A. Michelson improved on methods developed by French physicists, and by 1879 he had made a highly accurate measurement of the speed of light. As Michelson and others improved their measurements, the results have continued to agree with Maxwell's theory. Maxwell had followed tradition by assuming that light is an undulation in an underlying medium called the luminiferous aether. Michelson set out to detect the effect of the earth's motion through the aether. His ingenious experiment with Edward Morley made essential use of the wave nature of light. As Michelson later explained it to his children, "Two beams of light race against each other, like two swimmers, one struggling upstream and back, while the other, covering the same distance, just crosses the river and returns. The second swimmer will always win, if there is any current in the river." 4 But the sensitive experiment revealed no evidence of any such current. In 1905 Einstein published four amazing papers that set the agenda for physics for much of the rest of the 20th century. Two of these papers concerned special relativity. In the first of these, Einstein dispensed with any need for a luminiferous aether. Although this paper contained no references at all, Einstein later acknowledged the importance of Michelson's experimental work in leading to relativity.5 Einstein's second 1905 relativity paper derived his famous formula connecting energy and mass, E = mc². Another of Einstein's 1905 papers was the first convincing proof of the existence of atoms. His fourth paper was on the implications of the quantum nature of light for the photoelectric effect. The American experimental physicist Robert A. 
Millikan made major contributions on both topics through his oil-drop measurement of the quantum of electric charge and his subsequent experiments confirming Einstein's predictions regarding the photoelectric effect. Einstein's greatest achievement was to create in 1915 our modern theory of space, time, and gravity, the general theory of relativity. This conceptual breakthrough provided the essential framework for cosmology. However, when he applied his theory to the entire universe, Einstein discovered that the universe could not be static - it must be contracting or expanding. In the absence of any evidence for this, Einstein reluctantly introduced what he called the cosmological constant, effectively a repulsion of space by space, to offset the attraction of matter. It was in 1917, the very year that Einstein introduced the cosmological constant, that the American astronomer Vesto Slipher published the first observational evidence that the universe is actually expanding.6 Slipher had determined the speeds of 25 spiral nebulae by measuring the wavelength shifts of the characteristic lines in their spectra, and 21 of them were flying away at unexpectedly large speeds. The key to interpreting these redshifts was to measure the distances to these spiral nebulae. Hubble was able to do this with the large Mt. Wilson telescopes, and in 1929 he discovered that the velocities of distant galaxies are proportional to their distances - which implies that the universe is expanding. This has since been confirmed by many thousands of observations. Einstein said that he never would have introduced the cosmological constant if he had known of Hubble's expansion. In the past few decades, new kinds of telescopes on the ground and in space have led to the discovery of violent phenomena in the universe - including the discovery in 1965 of the cosmic background radiation from the Big Bang itself. Just as Cepheid variable stars allowed the measurement of the distances to nearby galaxies nearly a century ago, Type Ia supernovae are so bright and sufficiently similar to each other that they have allowed astronomers to measure the distances to extremely distant galaxies. In 1997, two teams independently reached the conclusion that about five billion years ago the universe started expanding more and more rapidly, after previously slowing its expansion for about eight billion years. That means that Einstein's cosmological constant - or a generalization of it called "dark energy" - is actually the main constituent of the universe! Everything we can see - all the stars, gas, dust, planets - only makes up about half a percent of the universe. Numerous observations have shown that the vast majority of the matter in the universe is invisible "dark matter," a mysterious substance that is not even made of atoms or their component particles. The Double Dark theory based on dark matter and dark energy has successfully predicted the distribution of the hot and cold regions in the heat radiation from the Big Bang and the distribution of galaxies both nearby and at great distances. The pioneering research of Einstein, Hubble, and their contemporaries had made it possible to ask the basic cosmological questions - but they would surely have been surprised at the answers! Building on their work, a new generation has succeeded in creating the first picture of the history of the universe that might actually be true. The new big questions probe dark matter and dark energy theories.
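The proportionality Hubble found between recession velocity and distance is now written compactly as Hubble's law; in modern notation,
\[ v = H_0 \, d, \]
where \(v\) is a galaxy's recession velocity, \(d\) its distance, and \(H_0\) the Hubble constant. Because a larger \(H_0\) means galaxies recede faster at a given distance, measuring \(H_0\) has been one of observational cosmology's central projects ever since.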
<urn:uuid:db8a5d58-2040-49e1-9931-b99d1d3d76e9>
3.984375
2,711
Knowledge Article
Science & Tech.
36.301664
In this classic paper on landscape-scale disturbance, Sprugel questions the simplistic notion of climax generally accepted in the 1950s and 1960s. In his introduction Sprugel says: "ecologists spent much time and effort searching for, describing, and classifying 'climax' ecosystems even though there was often little or no evidence that stable systems of this sort would ever come into existence under natural conditions. In fact, many studies have indicated that natural disturbance plays a far more vital role in ecosystem dynamics than that attributed to it by the classical climax theory." To address this issue Sprugel studied balsam fir communities, the uppermost tree zone in the northeastern U.S. It was well known that in these high-altitude fir forests "waves" of crescent-shaped bands of dead trees are found in systematic patterns. The waves are areas of standing dead trees with mature and healthy forest surrounding them. From left to right, the cross section in Fig. 1 shows a mature forest, an adjacent area of dead and dying trees, an area where dead trees are being replaced by fir saplings of successive ages, and a second area of dead trees. The paper also includes several photographs of the fir waves. Sprugel's main site was Whiteface Mountain in New York; Whiteface is the most northerly peak in the Adirondacks, and in his study locale 99% of the trees are balsam fir. He also worked in New Hampshire and Maine. Sprugel measured the direction of tree die-off by taking transects through the waves; here he also determined tree ages by coring them. For another part of the study he marked trees for several years and classified them into improved or deteriorated categories by examining browning of tips and overall browning. Sprugel found that waves move in the direction of the prevailing wind. He next considered the cause of tree death and, using data on wind speeds in a conifer forest, reasoned that wind velocity at the edge of a tree canopy was over 50% higher than that within the forest. Rime ice, ice formed when water droplets hit solid surfaces and immediately freeze, was a well-known phenomenon on Whiteface Mountain. (The paper includes Weather Bureau statistics showing that riming occurs there on about 1/3 of days from October through April.) Rime accumulates more on trees exposed to wind. Sprugel's conclusion is that trees at the leeward edge of the canopy opening in the wave are exposed to winds and die from loss of needles and branches due to heavy ice accumulation. He also describes winter desiccation and lowered rates of production in summer as a result of needle cooling. As these trees die, adjacent firs experience the same conditions and begin to die. The overall direction of the wave motion is therefore directly related to wind direction. Regeneration of waves occurs at about 60-year intervals, and thus all stages of regeneration and deterioration can be found in the forest. In this sense the system is in a steady state.
<urn:uuid:db21d084-51d2-4735-bdc7-71e157133b76>
3.15625
613
Academic Writing
Science & Tech.
42.286017
In order to process an XSL stylesheet, a stylesheet processor accepts data in XML and an XSL stylesheet to define the presentation of that XML. But there are two parts to the presentation: - Construction of a final document, called a results tree - defining how the document will be used, for instance in print, on the Web, on a handheld device, and so on - Interpretation of that results tree to produce formatted results - defining the look and feel of the document The results tree is generated by XSLT, and is outside the scope of this article. This article is going to talk about XSL:FO - the formatting objects that allow you to interpret your XML and produce formatted results. Look at an XSL:FO Document (Note: line numbers included for reference, they are not a part of the document.)
1 <?xml version="1.0" encoding="iso-8859-1"?>
2 <fo:root xmlns:fo="http://www.w3.org/1999/XSL/Format">
3 <fo:layout-master-set>
4 <fo:simple-page-master master-name="basic_page">
5 <fo:region-body margin="1in"/>
6 </fo:simple-page-master>
7 </fo:layout-master-set>
8 <fo:page-sequence master-reference="basic_page">
9 <fo:flow flow-name="xsl-region-body">
10 <fo:block font-family="Arial" font-size="14pt">
11 Hello, world!
12 </fo:block>
13 </fo:flow>
14 </fo:page-sequence>
15 </fo:root>
Lines 1 and 2 declare that this is an XML document. Line 1 is the XML declaration and line 2 is the root element with the namespace listed. Line 3 opens the layout-master-set, the wrapper around all masters used in the document (closed on line 7). I'm using a simple-page-master (line 4) to define the geometry of the page and give it a name. My basic_page will have a margin of 1 inch on all sides. Once you've defined the master pages, you can start defining the sequences of your pages and how they will look. Since I only have one page in my sample document, I only need to define the sequence of one page (line 8). First you enclose all text in a flow element. Since my master defines only one region, that's the one the flow should reference - "xsl-region-body" (line 9). Then the fun begins. I place all my text inside block elements (lines 10-12). Text cannot be placed into a flow without an enclosing element. I'm using the font-family and font-size attributes of the block element to format my text. The remaining lines (13-15) simply close the flow, the page sequence, and the root element. What to Do with an XSL:FO Document Once you have written an XSL:FO document, what can you do with it? Well, you could open it in a Web browser, but it would just display as XML. Instead, you need to get an XSL formatter and convert your XSL document into a formatted masterpiece.
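One widely used formatter is the open-source Apache FOP (named here as an example; any XSL:FO formatter will do). Assuming you saved the listing above as hello.fo - a placeholder file name - a typical command line to render it to PDF looks roughly like:

fop -fo hello.fo -pdf hello.pdf

FOP reads the formatting objects and writes hello.pdf, with the 14pt Arial greeting placed on a page with one-inch margins.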
<urn:uuid:d6487933-c22c-416b-b914-8228b19688f7>
2.96875
614
Documentation
Software Dev.
66.412734
In probability theory, the law of total variance or variance decomposition formula states that if X and Y are random variables on the same probability space, and the variance of Y is finite, then
\[ \operatorname{Var}(Y) = \operatorname{E}[\operatorname{Var}(Y \mid X)] + \operatorname{Var}(\operatorname{E}[Y \mid X]). \]
In language perhaps better known to statisticians than to probabilists, the two terms are the "unexplained" and the "explained" components of the variance (cf. fraction of variance unexplained, explained variation). The nomenclature in this article's title parallels the phrase law of total probability. Some writers on probability call this the "conditional variance formula" or use other names.
Note that the conditional expected value \(\operatorname{E}(Y \mid X)\) is a random variable in its own right, whose value depends on the value of X. Notice that the conditional expected value of Y given the event X = x is a function of x (this is where adherence to the conventional rigidly case-sensitive notation of probability theory becomes important!). If we write \(\operatorname{E}(Y \mid X = x) = g(x)\), then the random variable \(\operatorname{E}(Y \mid X)\) is just g(X). Similar comments apply to the conditional variance.
The law of total variance can be proved using the law of total expectation. First, from the definition of variance,
\[ \operatorname{Var}(Y) = \operatorname{E}[Y^2] - (\operatorname{E}[Y])^2. \]
Then we apply the law of total expectation by conditioning on the random variable X:
\[ \operatorname{Var}(Y) = \operatorname{E}\big[\operatorname{E}[Y^2 \mid X]\big] - \big(\operatorname{E}[\operatorname{E}[Y \mid X]]\big)^2. \]
Now we rewrite the conditional second moment of Y in terms of its variance and first moment:
\[ \operatorname{Var}(Y) = \operatorname{E}\big[\operatorname{Var}(Y \mid X) + (\operatorname{E}[Y \mid X])^2\big] - \big(\operatorname{E}[\operatorname{E}[Y \mid X]]\big)^2. \]
Since expectation of a sum is the sum of expectations, we can now regroup the terms:
\[ \operatorname{Var}(Y) = \operatorname{E}[\operatorname{Var}(Y \mid X)] + \Big( \operatorname{E}\big[(\operatorname{E}[Y \mid X])^2\big] - \big(\operatorname{E}[\operatorname{E}[Y \mid X]]\big)^2 \Big). \]
Finally, we recognize the terms in parentheses as the variance of the conditional expectation \(\operatorname{E}[Y \mid X]\):
\[ \operatorname{Var}(Y) = \operatorname{E}[\operatorname{Var}(Y \mid X)] + \operatorname{Var}(\operatorname{E}[Y \mid X]). \]
The square of the correlation
In cases where (Y, X) are such that the conditional expected value is linear, i.e.
\[ \operatorname{E}(Y \mid X) = a + bX, \]
it follows from the bilinearity of Cov(-,-) that
\[ b = \frac{\operatorname{Cov}(Y, X)}{\operatorname{Var}(X)}, \]
and the explained component of the variance divided by the total variance is just the square of the correlation; i.e., in such cases,
\[ \frac{\operatorname{Var}(\operatorname{E}[Y \mid X])}{\operatorname{Var}(Y)} = \operatorname{Corr}(X, Y)^2. \]
One example of this situation is when (Y, X) have a bivariate normal (Gaussian) distribution.
A similar law holds for the third central moment \(\mu_3\):
\[ \mu_3(Y) = \operatorname{E}[\mu_3(Y \mid X)] + \mu_3(\operatorname{E}[Y \mid X]) + 3\operatorname{Cov}\big(\operatorname{E}[Y \mid X], \operatorname{Var}(Y \mid X)\big). \]
For higher cumulants, a simple and elegant generalization exists. See law of total cumulance.
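A quick way to convince yourself of the identity is numerically. The sketch below is an illustration only; the mixture model and all numbers in it are invented for the example. It simulates a Y whose distribution depends on a discrete X and compares the two sides of the formula:

# Numerical sanity check of Var(Y) = E[Var(Y|X)] + Var(E[Y|X]).
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# X picks one of three groups; Y | X is normal with a
# group-specific mean and standard deviation (made-up values).
means = np.array([0.0, 2.0, 5.0])
sds = np.array([1.0, 0.5, 2.0])
x = rng.integers(0, 3, size=n)
y = rng.normal(means[x], sds[x])

lhs = y.var()                 # Var(Y)
e_var = (sds ** 2)[x].mean()  # estimate of E[Var(Y | X)]
var_e = means[x].var()        # estimate of Var(E[Y | X])
print(lhs, e_var + var_e)     # the two numbers should nearly agree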
<urn:uuid:b70382e8-0156-437b-8f81-d18e757f839f>
3.5
1,057
Content Listing
Science & Tech.
30.339833
Onto Land and Back: Dr. Maureen O'Leary Studies Whale Evolution Why study whales? Consider these enormous, intelligent animals. They're mammals, but they abandoned dry land over 50 million years ago to recolonize the sea. And they look nothing like the land ancestors they left behind. "They've lost their hair, they've lost their hind limbs completely, and their forelimbs have been transformed into flipper-like structures that look more on the surface like a fish's fin than a forelimb," points out Dr. Maureen O'Leary, a professor of anatomical sciences at Stony Brook University on New York's Long Island. Since paleontologists such as O'Leary have discovered through their research that whales started out as dog- or pig-like animals, they can investigate how this extraordinary transition occurred step-by-step over time. "It's really exciting to study a group of animals that encapsulates so much change, because it's possible to see how evolution has modified organisms in very peculiar ways, and how much they've changed," she points out. Fossils and DNA: two kinds of data that determine relatedness Studying whales also happens to be something of a scientific hot potato. It's at the front lines of a debate about the tools that modern evolutionary biologists use to study the history of life. Biologists used to rely entirely on morphology—the physical features of organisms like the shapes of bones or muscles, or the presence of fins or fur—to figure out the relationships among organisms (their phylogeny, a family tree of species). Comparing similarities and differences among both living and extinct organisms enabled these morphologists to classify them into species, and to construct evolutionary trees. Then, in the 1980s, the widespread use of new tools developed by molecular biologists made it possible to study and sequence the genetic makeup of different organisms. Molecular biologists could now use DNA and other molecules to compare the genomes, or complete sets of genes, of different organisms to unravel their evolutionary histories. The more the genomes overlap, the more closely related the organisms. The availability of these two sets of information—morphological data from extinct and living organisms, and molecular data from living ones—has upset a few apple carts, because the two types of data do not always provide the same result. How do scientists construct the Tree of Life? "The best way for scientists to establish relatedness is to use modern phylogenetic methods," says O'Leary. Phylogenetics is the study of evolutionary relatedness among species. This research involves choosing at least three species and identifying heritable features, or characters, to compare across them. For morphologists, these features consist of specific physical characteristics; for molecular biologists, they consist of nucleotide sequences in the DNA of an organism. Both approaches rely on the same computer algorithms to analyze the distribution of those features. "The scientist codifies those features in what we call a matrix, which looks almost like an Excel file," O'Leary elaborates. "For example, your character might be wing color and your two character states might be blue or red. You look at your organisms, and for every blue wing, you put a "1" and for every red wing a "0." You then simply apply this across as many features as you choose. When you assemble one of these matrices, it's now numerical, so you can submit it to an algorithm that will apply what are called optimality criteria. 
These are algorithms that generate a tree based on those data. That's how people determine that two species are more closely related to each other than either is to a third species." Molecular and morphological data have told two different tales of the origin of the whale. What do we know about whale ancestry? Whales, dolphins, and porpoises have long been recognized as being more closely related to each other than to other mammals, and so they are united in the group Cetacea. Cetaceans are related to Artiodactyla, a group of mammals that consists of camels, deer, pigs, hippopotamuses, and their living and fossil relatives. These animals typically have an even number of digits on their hands and feet: two or four, unlike the five that humans have. Early fossil whales also had even-numbered digits on their feet, which is one of many features that suggest a relationship to Artiodactyla on the Tree of Life. As molecular biologists started investigating whale ancestry, they began to find DNA evidence that cetaceans were contained within Artiodactyla, rather than being a sister group to it. This means that the closest relative of whales is a specific artiodactylan - a hippopotamus - rather than Artiodactyla as a whole. In other words, hippos are more closely related to whales than either is to other artiodactylans such as pigs. By the early 1990s, molecular biologists were finding more confirmation of this hypothesis while, to their great consternation, paleontologists (or morphologists) were not. "Not only were scientific ideas changing, but scientific methods as well," O'Leary comments. Skeptical about the value of applying molecular technology to evolutionary questions, some paleontologists were reluctant to believe the molecular evidence. Part of the reason was that the anklebone in artiodactyls is distinctively shaped like a pulley on both ends, and paleontologists have long considered this to be the basis for classifying a mammal as an artiodactylan. "The belief that an organism had to have this ankle to be an artiodactylan was quite ingrained, and many paleontologists were unwilling to consider relationships supported by the molecular evidence until such a fossil was found," O'Leary explains. Drawing on all the evidence O'Leary wasn't happy about that. "The concept of having a 'Rosetta stone' character runs contrary to modern phylogenetic methods," she points out. "Paleontologists shouldn't give more weight to particular characters, nor should they assume that certain characters, like a distinctive ankle, cannot reverse. Instead, they should let the data reveal which characters ultimately inform us about phylogeny." She also thought it important to confront the fact that the fossil record contradicted the molecular data. "We can't solve scientific problems by getting rid of evidence," she maintains. "Phylogenetics forces us to back away from assumptions and look at things more baldly, to compare all the data." There may be good and interesting reasons why fossils and molecules sometimes appear to diverge. Over 99 percent of the organisms that ever lived are extinct, a fact that should give us pause, as O'Leary points out, because living things are just a snapshot of life on Earth. "We don't have fossils of all of that 99 percent either, but we do have a lot of fossils, and they do tell us a lot of the actual history. Reconstructing the history of life using less than 1 percent of the available data from living things alone may lead us astray." 
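To make the character-matrix idea described above concrete, here is a toy sketch in Python. The taxa, characters, and 1/0 codings are invented for illustration and are not O'Leary's data, and real phylogenetics programs apply optimality criteria far more sophisticated than this simple pairwise count:

# Rows are species; columns are coded characters, e.g.
# [double-pulley ankle, semi-aquatic lifestyle, dense ear bone, flippers].
matrix = {
    "whale": [1, 1, 1, 1],
    "hippo": [1, 1, 1, 0],
    "pig":   [1, 0, 0, 0],
}

taxa = list(matrix)
for i in range(len(taxa)):
    for j in range(i + 1, len(taxa)):
        a, b = taxa[i], taxa[j]
        # Count characters where the two taxa differ.
        diff = sum(x != y for x, y in zip(matrix[a], matrix[b]))
        print(a, "vs", b, "->", diff, "character differences")

# Under this invented coding, whale and hippo differ in only one
# character, hinting (as the molecular data do) that they group together.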
A fossil find resolves a dispute - but leaves other questions unanswered Both the fossil and the molecular record have their advantages and disadvantages, but each records the same story. Since the late 1970s, University of Michigan professor Philip D. Gingerich has been searching for evidence that would resolve the whale-evolution debate. Backbones were abundant, but hands and feet were missing. Finally, in Pakistan in 2000, Gingerich discovered the 47-million-year-old fossil whales Artiocetus and Rodhocetus, the latter with a fully developed hind limb and both with ankles very much like the artiodactyls', down to the double pulley. "That's gone a long way to convincing many paleontologists that whales and artiodactyls are close relatives. It just took us a while to find the fossil," says O'Leary. Combined analyses of molecular and anatomical information produce phylogenetic trees that indicate that whales, fossil and living, are artiodactyls, and these combined trees are consistent with trees based on molecular data alone. So scientists have now replaced the term Artiodactyla with the term Cetartiodactyla to describe the group comprising whales and artiodactylans. Still a mystery is the fact that the newly discovered ankle does nothing to reinforce whales' link to hippos. "It fits the artiodactyl group, but there's nothing that makes it hippo-like as opposed to pig-like or camel-like," O'Leary elaborates. Looking for more "walking whales" Further finds may fill in the puzzle. Recent excavations of intermediate fossils (between terrestrial and marine life forms) in India, Pakistan, and Egypt have sparked increasing interest in whale paleontology. "For example, they've found an animal not much bigger than a large dog called Ambulocetus - that's Latin for 'walking' plus 'whale' - with large legs that look like they could support its weight on land. Someone without knowledge of its evolutionary history might say it looks like a crocodile or a dog, but we can tie it to whales phylogenetically because of certain features of the teeth and ear region," O'Leary recounts. O'Leary works mostly in the Republic of Mali in West Africa. The northern part of Mali is part of the Sahara Desert, but Mali was once inundated by a shallow sea that ran north to south and cut North Africa into two halves. "This means that there are now exposed fossils and rocks of marine life from about 55 million years ago, early in placental mammal evolution, in the early part of what we call the Tertiary period. It's my hope that we will ultimately find whale fossils in this area, the way paleontologists have elsewhere in Africa," she says. Whales have more to teach us Why did a group of terrestrial mammals abandon life on land for life in the sea? The answer, as scientists piece it together, has much to tell us about the pattern and process behind a major evolutionary transition. "If you can reliably, in an evidence-based fashion, place whales within the context of mammals like sheep and hippos, it's hard not to step back and say, 'Wow, it's amazing that evolution is capable of transforming an organism that much over 50 million years or less,'" O'Leary points out. "You're looking in the broadest sense at change through time, and in those terms, whales are where it's at." A summary of the fossil evidence that helps fill in the gaps in understanding whale evolution. Berkeley: Introduction to Cetaceans A brief article about whales and dolphins and their developmental history. 
Discovery: Walking with Prehistoric Beasts Explore some of the ancestors of several modern species, including Basilosaurus and Ambulocetus, both closely related to whales. ©2007 American Museum of Natural History. All rights reserved. More About This Resource... Seminars on Science is the Museum's online professional development program for educators. Since 2000, the program has engaged educators in cutting-edge research and provided them with powerful classroom resources. This essay, part of the "Evolution" online course, is designed to help students understand the extraordinary transition of whales that has occurred step-by-step over time. It includes a set of related links and also briefly answers the following questions: - Why study whales? - How do scientists construct the Tree of Life? - What do we know about whale ancestry? - What more do whales have to teach us? Supplement a study of biology or evolution with a classroom activity drawn from this Seminars on Science essay. - Send students to this online article, or print copies of it for them to read. - Working individually or in small groups, have them learn more about the Tree of Life and report their findings to the class.
<urn:uuid:eb6db8d7-7ad1-4748-b8a9-984bea9ebf8d>
4.15625
2,458
Knowledge Article
Science & Tech.
34.78375
and carbon dioxide levels In order to carry on photosynthesis, green plants need a supply of carbon dioxide and a means of disposing of oxygen. In order to carry on cellular respiration, plant cells need oxygen and a means of disposing of carbon dioxide (just as animal cells do). Unlike animals, plants have no specialized organs for gas exchange (with the few inevitable exceptions!). There are several reasons they can get along without them: - Each part of the plant takes care of its own gas exchange needs. Although plants have an elaborate liquid transport system, it does not participate in gas transport. - Roots, stems, and leaves respire at rates much lower than are characteristic of animals. Only during photosynthesis are large volumes of gases exchanged, and each leaf is well adapted to take care of its own needs. - The distance that gases must diffuse in even a large plant is not great. Each living cell in the plant is located close to the surface. While obvious for leaves, it is also true for stems: the only living cells in the stem are organized in thin layers just beneath the bark, and the cells in the interior are dead and serve only to provide mechanical support. - Most of the living cells in a plant have at least part of their surface exposed to air. The loose packing of parenchyma cells in leaves, stems, and roots provides an interconnecting system of air spaces. Gases diffuse through air several thousand times faster than through water. Once oxygen and carbon dioxide reach the network of intercellular air spaces (arrows), they diffuse rapidly through them. The exchange of oxygen and carbon dioxide in the leaf (as well as the loss of water vapor in transpiration) occurs through pores called stomata (singular = stoma). Normally stomata open when light strikes the leaf in the morning and close during the night. The immediate cause is a change in the turgor of the guard cells. The inner wall of each guard cell is thick and elastic. When turgor develops within the two guard cells flanking each stoma, the thin outer walls bulge out and force the inner walls into a crescent shape. This opens the stoma. When the guard cells lose turgor, the elastic inner walls regain their original shape and the stoma closes. [Table: osmotic pressure (lb/in2) of typical guard cells at different times of day.] The table shows the osmotic pressure measured at different times of day in typical guard cells. The osmotic pressure within the other cells of the lower epidermis remained constant at 150 lb/in2. When the osmotic pressure of the guard cells became greater than that of the surrounding cells, the stomata opened. In the evening, when the osmotic pressure of the guard cells dropped to nearly that of the surrounding cells, the stomata closed. The increase in osmotic pressure in the guard cells is caused by an uptake of potassium ions (K+). The concentration of K+ in open guard cells far exceeds that in the surrounding cells. This is how it accumulates: - Blue light is absorbed by phototropin, which activates a proton pump (an H+-ATPase) in the plasma membrane of the guard cell. - ATP, generated by the light reactions of photosynthesis, drives the pump. - As protons (H+) are pumped out of the cell, its interior becomes increasingly negative. - This attracts additional potassium ions into the cell, raising its osmotic pressure. Although open stomata are essential for photosynthesis, they also expose the plant to the risk of losing water through transpiration. 
Some 90% of the water taken up by a plant is lost in transpiration. Abscisic acid (ABA) is the hormone that triggers closing of the stomata when soil water is insufficient to keep up with transpiration (which often occurs around mid-day). The sequence: - ABA binds to receptors at the surface of the plasma membrane of the guard cells. - The receptors activate several interconnecting pathways which converge to produce a rise in pH in the cytosol and a transfer of Ca2+ from the vacuole to the cytosol. - The increased Ca2+ in the cytosol blocks the uptake of K+ into the guard cell, while the increased pH stimulates the loss of Cl- and organic ions (e.g., malate2-) from the cell. - The loss of these solutes from the cytosol reduces the osmotic pressure of the cell and thus its turgor. - The stomata close. The density of stomata on a leaf varies with such factors as: - the temperature, humidity, and light intensity around the plant; - and also, as it turns out, the concentration of carbon dioxide in the air around the leaves. The relationship is inverse; that is, as CO2 goes up, the number of stomata goes down, and vice versa. Some evidence: - Plants grown in an artificial atmosphere with a high level of CO2 have fewer stomata than normal. - Herbarium specimens reveal that the number of stomata in a given species has been declining over the last 200 years - the time of the industrial revolution and rising levels of CO2 in the atmosphere. These data can be quantified by determining the stomatal index: the ratio of the number of stomata in a given area divided by the total number of stomata and other epidermal cells in that same area (a worked example follows at the end of this article). How does the plant determine how many stomata to produce? It turns out that the mature leaves on the plant detect the conditions around them and send a signal (its nature still unknown) that adjusts the number of stomata that will form on the developing leaves. Two experiments (reported by Lake et al., in Nature, 411:154, 10 May 2001): - When the mature leaves of the plant (Arabidopsis) are encased in glass tubes filled with high levels (720 ppm) of CO2, the developing leaves have fewer stomata than normal even though they are growing in normal air (360 ppm). - Conversely, when the mature leaves are given normal air (360 ppm CO2) while the shoot is exposed to high CO2 (720 ppm), the new leaves develop with the normal stomatal index. Because CO2 levels and stomatal index are inversely related, could fossil leaves tell us about past levels of CO2 in the atmosphere? Yes. As reported by Gregory Retallack (in Nature, 411:287, 17 May 2001), his study of the fossil leaves of the ginkgo and its relatives shows: - Their stomatal indices were high late in the Permian period (275 - 290 million years ago) and again in the Pleistocene epoch (1 - 8 million years ago). Both these periods are known from geological evidence to have been times of low levels of atmospheric carbon dioxide and ice ages (with glaciers). - Conversely, stomatal indices were low during the Cretaceous period, a time of high CO2 levels and a warm climate. These studies also lend support to the importance of carbon dioxide as a greenhouse gas playing an important role in global warming. Woody stems and mature roots are sheathed in layers of dead cork cells impregnated with suberin - a waxy, waterproof (and airproof) substance. So cork is as impervious to oxygen and carbon dioxide as it is to water. However, the cork of both mature roots and woody stems is perforated by nonsuberized pores called lenticels. 
These enable oxygen to reach the intercellular spaces of the interior tissues and carbon dioxide to be released to the atmosphere. The photo shows the lenticels in the bark of a young stem. In many annual plants, the stems are green and almost as important for photosynthesis as the leaves. These stems use stomata rather than lenticels for gas exchange.
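Returning to the stomatal index defined above, the computation itself is simple; here is a minimal sketch, where the counts are hypothetical numbers of the kind a microscopist might record for one patch of epidermis:

def stomatal_index(n_stomata, n_other_epidermal_cells):
    """Stomata divided by (stomata + other epidermal cells) in the same area."""
    return n_stomata / float(n_stomata + n_other_epidermal_cells)

# Example with invented counts: 30 stomata among 270 other epidermal cells.
print(stomatal_index(30, 270))  # 0.10, i.e. a stomatal index of 10%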
<urn:uuid:aad6b8e9-0974-426b-9775-b50be425ae66>
3.671875
1,781
Knowledge Article
Science & Tech.
45.909251
Fertilized Eggs - Antarctic Sea Urchin
Tracy is looking to collect urchins in a shallow area. The distance between the sea ice and the bottom is only five feet here.
Seals like to hang out around the dive holes to use them for breathing. The primary reason for drilling several safety holes while diving is so that if a seal decides to take up residence in the dive hole while you're underwater, you still have a way to get out.
In a deep ravine a large solitary sponge glows in the lights. McMurdo Sound has many beautiful sponges.
This time, Tracy will spawn the urchins inside a cage in order to prevent the starfish from getting at them.
24 hours after the spawning urchins have been put inside the cage, the starfish have been attracted to the site and cover the downstream side of the cage. Although the cage has worked in terms of keeping the starfish out, there are just too many starfish on it for the divers to observe what is going on. We need a bigger cage next time.
At the end of a dive Tracy exits out through the dive hole.
Common bottom inhabitants near Tent Island: the sea urchin Sterechinus neumayeri, the starfish Odontaster validus, and a small fish at top center, Trematomus bernacchii.
This is the start of a spawning experiment. Tracy has collected urchins and injected them with KCl, which stimulates them to release their gametes. We want to know if the eggs will float off the bottom or sink in between the rocks.
24 hours after the spawning above, Tracy returns to find that the spawning area has now been overrun by starfish. In fact, there are too many starfish for her to see where the eggs have gone. Maybe the starfish are eating the eggs? We've got to do more experiments to figure out what's going on.
The starfish Odontaster is very common along the coast of Ross Island and can aggregate in dense assemblages. They have very sensitive chemoreceptors that allow them to detect potential food items over large distances. Perhaps some chemical released by the spawning urchins attracts the starfish, as they move in for an easy meal of urchin eggs.
The ice sheet of McMurdo Sound grinds into Tent Island as it slowly creeps seaward. The resulting pressure produces many cracks and ice crevices that sculpt the overhead ice layer into a 'cavernous' appearance.
One of the interesting things about the marine invertebrate community in McMurdo Sound is the high prevalence of large sponge species.
As the divers ascend the safety line, they approach the 'tube' that has been bored through the 8' ice; the dive tenders in the hut are watching for them to surface.
Tracy Hamilton is the diver in our group, and here she is out collecting urchins at Little Razor Back Island. The Delbridge Islands are essentially the remains of an old caldera rim from an ancient volcano. They are due west of Mt. Erebus, which is an active volcano here that has probably developed from the same geothermal activity that produced the Delbridge caldera rim. Underwater, the upper layer of the ice sheet forms a cloud-like layer which, when combined with the 800+ ft visibility, makes the underwater realm look like another atmosphere. The bottom contour has a steep slope as the sides drop quickly off into the deep caldera. As the divers start to head back, you can see two bright circles in the ice sheet above: those are the outer safety ice holes. 
The primary ice hole through which the divers entered the water is in between these two, but darkened because there is a dive hut over it that obstructs the direct sunlight.
<urn:uuid:97fad40a-f643-4a13-9712-3f18129bfd5f>
3.515625
831
Knowledge Article
Science & Tech.
49.815035
Use your extensive knowledge and understanding of data to make a prediction of where to find the fish. Collect data to make a prediction! Link to the Fisherman's Log:
- Print out as many copies of the Fisherman's Log as you will need for the activity. Each copy allows you to record data for 4 days.
- For each day that you collect data, color one of the maps to reflect that day's conditions. Enter longitude, latitude, and scale on each map, as well as the date.
- After collecting data for the allotted time, write your fishing article. It should have the following elements:
- An explanation of the factors affecting "where to find the fish" - include some insight as to what kind of fish you may find off the coast of NJ.
- A "how to" tutorial on how to use COOL room data to predict where to find the fish.
- A forecast (prediction) of where to find fish using real-time data from the COOL room.
<urn:uuid:23646257-9db0-4d9d-88f6-8717887da983>
2.984375
219
Tutorial
Science & Tech.
58.859028
Archived:Basic PySymbian app - series2 The first article in this series was very important because it focused mainly on how to guide your application by using a proper architecture. That article covered the basic things to consider when building a PySymbian application. Now it is time to get more technical and learn about other important fundamentals of application development. In PySymbian, a developer should always keep in mind that the application they are designing should always exit properly when the user wants it to. This article will help in understanding this point.
A Simple Application
Consider a minimal code snippet along the following lines:

import appuifw
appuifw.note(u"HI", "info")

When someone runs this application on a device, it simply shows a note saying HI. One important thing to notice is that the application displays the note for a few seconds, and then it vanishes automatically without user intervention.
Next Important Step
Now consider another important code snippet below:

import appuifw, e32

def quit():
    print u"exit key pressed"
    app_lock.signal()

appuifw.app.title = u"Main"
appuifw.app.exit_key_handler = quit
appuifw.note(u"HI", "info")
app_lock = e32.Ao_lock()
app_lock.wait()

In this example, our application has Main as its title and it displays a note. One important thing to notice is that this application does not exit automatically. This application exits only when the user presses the right soft key (i.e. the exit key). Some important things about the previous code snippet:
- We have used two modules: one is the e32 module and the other is the appuifw module.
- We have used a function named exit_key_handler, which is a member of the app object of the appuifw module.
- Whatever callback function we assign to exit_key_handler is called when the right soft key is pressed.
- We have used an object from the e32 module, Ao_lock. This object is mainly responsible for creating a lock for the application.
- The Ao_lock object has two functions named wait and signal.
- When we call the function wait, a lock is created on the application. The lock is released when the corresponding signal function is called.
- So we have called the signal function in the callback function quit, which runs when the exit key is pressed.
- This makes our application user-controlled, which means that whenever the user presses the exit button, the application exits.
This is a very important concept one should always keep in mind when designing an application.
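One natural refinement - an assumption on my part, not part of the original snippets - is to ask the user to confirm before releasing the lock, using appuifw.query:

import appuifw, e32

def quit():
    # Only release the lock (and so exit) if the user confirms.
    if appuifw.query(u"Exit application?", "query"):
        app_lock.signal()

appuifw.app.title = u"Main"
appuifw.app.exit_key_handler = quit
app_lock = e32.Ao_lock()
app_lock.wait()

Here the exit key no longer quits unconditionally; pressing it pops up a yes/no dialog, and the application keeps running until the user answers yes.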
<urn:uuid:915ee4ba-7448-4b3a-99aa-1afecf2f7301>
2.828125
519
Tutorial
Software Dev.
45.269699
Climate Change and Giant Sequoias The world’s largest living species, native to California's Sierra Nevada, faces a two-pronged risk from declining snowpack and rising temperatures. The threat to sequoias mirrors a growing danger to trees worldwide, with some scientists saying rapid warming this century could wipe out many of the planet's old trees. Few living things seem as permanent as the giant sequoia trees of California's Sierra Nevada. The largest species of flora or fauna on Earth, these towering redwood trees have held sway for millions of years in a narrow band of their native mountain habitat. With heights reaching 300 feet and girths as large as 150 feet, some sequoias can live in excess of 3,000 years before being naturally toppled by a combination of weather and gravity. Although giant sequoias (Sequoiadendron giganteum) have survived previous eras of climate variability, human-caused climate change has so far not been their nemesis. But U.S. government and university researchers say the long-term existence of these trees could be threatened by the vagaries of a changing Sierra Nevada mountain snowpack and global warming. This combination could make it difficult for giant sequoias, particularly seedlings and young trees, to survive because they would be left with insufficient water to endure longer and warmer summers. Nate Stephenson, a research ecologist with the U.S. Geological Survey (USGS), based near Sequoia & Kings Canyon National Parks, says that if climate warming continues as projected, tens of thousands of these ancient trees will be at risk in the coming century from destruction by either drought or climate-induced pathogens. "In 25 years, we would see trouble for sequoia seedlings, then in 50 years trouble for the whole population," Stephenson said in an interview. "And in 100 years time, we could lose most of the big sequoias." Family hugging Giant Sequoia via Shutterstock. Read more at Yale Environment360.
<urn:uuid:b89274d2-f22c-495a-85ed-69c30e2d40cf>
3.984375
414
Truncated
Science & Tech.
47.646643
Jet Propulsion Laboratory

NASA's leading space science lab, started by a co-founder with deep ties to the occult.

Located in Pasadena, California, Jet Propulsion Laboratory (JPL) is best known for its groundbreaking technology and research in the fields of astronomy and physics. As a branch of NASA, it has been responsible for projects such as Explorer 1, the United States' first satellite and its entry into the "space race" with the Soviet Union, and the creation of the Wide Field and Planetary Camera, the main image-capturing component of NASA's Hubble Space Telescope. Today, JPL is responsible for many space science projects, such as the Cassini-Huygens mission to Saturn, the Mars Exploration Rovers, and the Mars Reconnaissance Orbiter. Though the facility has been a pioneer in space science since the late 1950s, its roots were in rocket technology. In 1936, California Institute of Technology (Caltech) student Frank Malina, mechanic Ed Forman, and chemist Jack Parsons executed the first successful rocket experiment in JPL's history. These men, known as the "Rocket Boys," along with the help of Caltech professor and aerodynamicist Theodore von Karman, founded JPL. Though the story of the Rocket Boys has become popular, what is less often mentioned is that one of its founding members, Parsons, was a passionate occultist. Jack Parsons was born Marvel Whiteside Parsons in 1914. Unlike the other Rocket Boys, Parsons' skills were self-taught. Despite his lack of formal education, Parsons demonstrated excellence in chemistry, and his work on solid fuel paved the way for space travel as we know it today. Nevertheless, Parsons was a firm believer in both science and magick, and it is said that he invoked the Greek god Pan before every rocket test launch. Parsons, a devoted Thelemite, was one of the earliest American devotees of Aleister Crowley, the notorious British occultist denounced by the popular press as "The Wickedest Man in the World." In 1942, Crowley chose Parsons to lead the Agape Lodge of the Thelemic secret society Ordo Templi Orientis (O.T.O.) in California. During this time Parsons performed a ritual known as the Babylon Working, while his friend, Scientology founder L. Ron Hubbard, took notes. These rituals were a series of sexual magick ceremonies that, in effect, would produce a living Goddess who would help Parsons, playing the Anti-Christ, to change the course of history. It is said that after completing the first phase of the Babylon Working, Parsons immediately met a woman, Marjorie Cameron, in his own home. Parsons and Hubbard believed that Cameron was the living incarnation of the divine feminine Babylon, or the Scarlet Woman, about whom Crowley had often written in his texts. Despite dating Sara Northrup at the time, Parsons began a series of sexual magick workings with Cameron in an attempt to conceive a Moonchild. Though a child was never conceived, Cameron and Parsons eventually married, and Hubbard ran away with Northrup. Parsons died in 1952 in a freak explosion of fulminate of mercury in his home laboratory. He was a devoted pioneer in the fields of both science and magick and saw no contradiction between the two. A crater on the far side of the moon is named Parsons in his honor, and his short text, Book of Babylon or Liber 49, remains an influential addition to the magickal philosophy of Thelema. Though many of Parsons' scientific contemporaries refused to work with him due to his dark beliefs, his contribution to space science is irrefutable.
Today, Jet Propulsion Laboratory is managed by Caltech and operates NASA's Deep Space Network, a network of communications devices and facilities that supports interplanetary spacecraft missions. JPL's Space Flight Operations Facility and Twenty-Five-Foot Space Simulator are designated National Historic Landmarks. Additionally, the Laboratory advises many Hollywood studios on the scientific accuracy of their sci-fi productions.
<urn:uuid:3c5f3bee-5923-4e7e-9d4d-b1e665b79872>
3.1875
818
Comment Section
Science & Tech.
35.877007
Report an inappropriate comment Wings Don't Generate Lift Through Pressure Differences Wed Dec 08 16:47:37 GMT 2010 by Maggie McKee Thank you for your comment. It's true that the shape of the wing is not the only factor that keeps airplanes aloft, so I have tweaked the text to read: "Wings, whether bird or Boeing, soar *in part* because air moves faster over their top sides, reducing the pressure above." (As I understand it, the angle of the wing is also important in pushing air downwards, as is the speed of the plane. Flying upside-down is achieved by travelling so fast you don't need the difference in pressure achieved due to the wing's shape.) Space news editor
<urn:uuid:ac4eb4a6-c908-46b2-b321-b9f534f418a0>
3.125
152
Comment Section
Science & Tech.
54.610323
Writing VBA code for Microsoft Outlook

Visual Basic for Applications (VBA) is one of two programming languages available for writing code in Outlook. (The other is VBScript, which is used by Microsoft Outlook custom forms.) You can also automate Outlook using VBA from other Office applications. Even developers who program in other languages, such as VB.NET and C# from Visual Studio, will find Outlook VBA useful for exploring Outlook's capabilities and prototyping. Among the useful procedures that you can write with Outlook VBA are:
- Macros to run code on demand by pressing Alt+F8 or by adding the macro as a custom toolbar button
- "Run a script" rule procedures that can be used to process incoming messages and meeting requests
- Event handlers to respond to the user's interaction with Outlook or with such events as Reminders.ReminderFire

Getting started with Outlook VBA

To launch VBA in Outlook, press Alt+F11. Macro security is set to High by default, which means that unsigned projects will not run. You can change macro security with the Tools | Macro | Security command. Also, you can use the Selfcert.exe program that comes with Office to generate a local security certificate for signing the project. For more pointers on getting started with Outlook VBA, see:

The key to using VBA with Outlook is understanding the Outlook Object Model, which defines what objects, properties, methods, and events are available to your code. The object model is the same for every language. Therefore, the information here on Outlook techniques can be applied to any code environment that uses Outlook objects.

Any code that you create in the Outlook VBA environment is stored in a file named VBAProject.otm. The location of this file depends on your exact Windows setup; see Outlook & Exchange/Windows Messaging Backup and Dual-Boot. When you first open Outlook VBA, it will already have a built-in class module named ThisOutlookSession, which supports an intrinsic Application object and its events. In addition to backing up the VBAProject.otm file, you might also want to periodically export the individual modules from the project and back them up, too. You can copy the VBAProject.otm file to another user's machine, but the macros won't run until that user has actually used one of the Tools | Macro commands. See Distributing Microsoft Outlook VBA Code for other thoughts and techniques on distributing VBA macros. Even though you can copy the VBAProject.otm file, it's still not a supported method for distributing Outlook macros company-wide. A better method is to create an Outlook add-in.

Key Outlook VBA Techniques

Outlook developers commonly work with items in collections, such as the Folders and Items collections. These articles provide useful pointers and samples:

"Object model guard" security prompts

Always derive all Outlook objects in VBA from the intrinsic Application object. In most cases, that will ensure that your code doesn't trigger security prompts. If you are building your own VBA code by adapting a sample that creates an Outlook object with CreateObject("Outlook.Application"), you don't need that statement. Use the intrinsic Application object instead.
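To make that advice concrete, here is a minimal sketch (not from the original article) of a macro that derives a new item from the intrinsic Application object; the subject text is illustrative:

```vba
' Create a mail item from the intrinsic Application object rather than
' from CreateObject("Outlook.Application"), so that in most cases the
' "object model guard" is not triggered.
Sub CreateMailFromIntrinsicApp()
    Dim objMsg As Outlook.MailItem
    Set objMsg = Application.CreateItem(olMailItem)  ' intrinsic Application
    objMsg.Subject = "Hello from Outlook VBA"        ' illustrative subject
    objMsg.Display                                   ' show the item; don't send
End Sub
```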
For more information on the Outlook "object model guard" prompts, see:

Working with toolbars and menus

General VBA techniques

If you can't get VBA to run at all, see:

For version-specific issues, including fixes in various service packs, see:

If you add a macro to a toolbar or menu, but it doesn't execute when you click the toolbar button, make sure that the macro subroutine has a name that is different from the code module's name. You might want to write all your toolbar macros in a module named ToolbarMacros.

From other resources:

Updated free tool for digging deeply into the Outlook & Exchange folder and item structure. See Announcing MAPI Editor (Formerly MFCMAPI) for what's new.

Free COM component designed to read MAPI properties of CDO and Outlook Object Model objects for Microsoft Outlook 2000, 2002/XP, and 2003 without triggering security prompts. Microsoft Visual C++ 6.0 source code included.

Inexpensive developer library that enhances the events available for the Items and Folders collections to provide more reliable Add and Change events, and Remove events that include the EntryID of the removed item or folder.

Provides an interface to Outlook objects that avoids the "object model guard" of the Outlook E-mail Security Update and exposes properties and methods not available through the Outlook object model, such as the sender address, the RTF body of an item, and Internet message headers. Several security features protect it from being used by malicious programs to send Outlook mail. For the redistributable version, it adds a Profman.dll component with the ability to enumerate, add, delete, and modify Outlook mail profiles using VB or VBScript.

Developer utility for finding out what's going on inside Outlook, via the Outlook object model, CDO, and MAPI. You can edit and delete most properties, drag properties from one item to another, copy values to the clipboard, run scripts, monitor events, and browse toolbars to get CommandBars IDs.

This tool comes with Office so you can add a digital signature to your VBA project. Once you run it, restart Outlook and, in the VBA window, choose Tools | Digital Signature, then click Choose to sign your project with the new certificate.

Also see:
Code Signing Office XP Visual Basic for Applications Macro Projects
OFF2000 - Using SelfCert to Create a Digital Certificate for VBA Projects
Office XP Macro Security White Paper
HOW TO - Add a Digital Signature to a Custom Macro Project in Office 2003 and Office XP
Office Automation and Digital Certificates Demonstration
<urn:uuid:d287780d-2356-4a08-8fd8-97ffbf090781>
2.921875
1,203
Tutorial
Software Dev.
40.39312
On December 14, 1911, Norwegian explorer Roald Amundsen's five-man expedition arrived at the South Pole on skis and dogsleds, beating Robert F. Scott's ill-fated team by a month. Amundsen, who left medical school at age 21 for a life at sea, was also the first person to cross the North Pole by airship and the first to traverse Canada's Northwest Passage. After he disappeared trying to rescue a fellow explorer in 1928, Popular Science published a tribute to the lost adventurer, calling him "the last of the vikings" who, of all humans, "alone had stood at both frozen tips of our spinning world." Read the full story of Amundsen's boyhood, and his famous expeditions, in our December 1928 issue. The incredible innovations, like drone swarms and perpetual flight, bringing aviation into the world of tomorrow. Plus: today's greatest sci-fi writers predict the future, the science behind the summer's biggest blockbusters, a Doctor Who-themed DIY 'bot, the organs you can do without, and much more.
<urn:uuid:96f70255-5b53-4799-92c1-1e92d2c4e48d>
2.984375
228
Truncated
Science & Tech.
51.787759
The data from the seven participants were unambiguous. Paying attention to the target consistently and strongly increased the fMRI activity, regardless of whether the subject saw the target or not. This result was expected because many previous studies had shown that attending to a signal reinforces its representation in the cortex. Much more intriguing, though, was that whether or not the stimulus was consciously perceived made no difference to signal strength. Visibility didn’t matter to V1; what did was whether or not selective visual attention focused on the grating. Indeed, the experimentalists could not decode from the signal whether or not the subject saw the stimulus. I am very pleased by their finding because it is fully in line with the hypothesis that Nobel laureate Francis Crick and I advanced in 1995. Writing in Nature, we had argued that neurons in V1 do not directly contribute to visual consciousness. Our speculation was based on the absence of a direct connection between cells in V1 and their partners in the frontal lobe in macaques. The fMRI experiment described here provided evidence for our conjecture. Whether or not our connectional argument is valid remains open, of course. It appears that the habitat of consciousness is not the cortical region at the bottom of the extended hierarchy of cortical areas dedicated to vision. Consciousness is restricted to higher regions, possibly those that are engaged in a reciprocal, two-way communication with the prefrontal cortex, the seat of planning. The history of any scientific concept—energy, atom, gene, cancer, memory—is one of increased differentiation and sophistication until it can be explained in a quantitative and mechanistic manner at a lower, more elemental level. These and related experiments put paid to the notion that consciousness and attention are the same. They are not, and the brain responds differently to them. This distinction clears the decks for a concerted, neurobiological attack on the core problem of identifying the necessary causes of consciousness in the brain. This article was originally published with the title Consciousness Does Not Reside Here.
<urn:uuid:38806541-7c29-4790-82a1-7e6138e179e1>
2.984375
407
Truncated
Science & Tech.
32.255245
There are three steps to solving a math problem.
- Figure out what the problem is asking.
- Solve the problem. (If you get stuck, figure out why you're stuck.)
- Check the answer.

What does the Mean Value Theorem say about the function...
<urn:uuid:9603016f-89f6-4e99-b80a-34febd162ee5>
3.125
72
Tutorial
Science & Tech.
79.350417
When I tell people that I spend my days testing the possibility of life on Mars, they usually reply in one of two ways. 'No seriously, what do you do?' is only slightly more common than the wittier 'So you're not holding out for much fieldwork, then?' Astrobiology is a bright young discipline, aiming to answer some of the most fascinating questions within science and dinner-table conversation alike. Does life exist 'out there' among the pinpricks of light in the heavens, or are we alone in the cosmos? No current scientific field fires people's fascination more than the quest for extraterrestrial life, and many students have cited their interest in astrobiology as their reason for continuing in science.

For now, many astrobiologists' money is on Mars, our planetary neighbour, as it was once a lot like Earth. The big question, though, is where on Mars the best places are to look for signs of life. We're not talking about green bug-eyed monsters here, but ultra-hardy bacteria-like cells. For the foreseeable future our probes won't drill more than a few metres into the hard-frozen ground. Any bacteria within our robotic reach near the harsh surface would be forced to remain dormant for long periods by the freezing conditions. And I'm looking into just this possibility.

One of the most critical hazards for surface life on Mars is the constant rain of radiation from space. This is composed of sub-atomic particles accelerated to near-lightspeed by solar flares or exploding stars throughout the galaxy. Unlike Earth, Mars isn't shielded by a strong magnetic field or a thick atmosphere, and these energetic particles slam straight into the surface. Dormant bacteria are unable to repair the cellular damage inflicted by this radiation, so it steadily accumulates. The real Martian death rays aren't wielded by tripods: they are the cosmic radiation steadily destroying any life in the soil of the red planet.

I've built a computer simulation of the situation, modelling the penetration of cosmic rays through the Martian atmosphere and rock. From this I can calculate the radiation level at any location on Mars, and for different depths beneath the surface. I then use this information, together with experimental results, to determine how long different kinds of bacteria would survive before becoming irrecoverably damaged by the radiation.

There are lots of exciting places on Mars we want to check for life: ancient dry river beds or lakes, or ice in crater bottoms and the polar caps. But which location provides dormant cells the best protection from radiation, and how deep will we need to drill? Addressing these questions is vital for any chance of finding survivors: viable cells that could be awoken with a little warmth and nutrients. One very promising location that my research has identified is the 'frozen sea' near the Martian equator (NewScientist.com, 30 January 2007).

Finding irrefutable evidence for extraterrestrial life in the Martian soil would rank among the greatest discoveries in history, with deep ramifications for religion, philosophy and the popular perception of our place in the cosmos. The consequences of such a discovery are played out again and again in the public mind through countless books, films, magazine articles, and dinner-table discussions with friends and family. But it's also crucial that society starts seriously considering these possibilities now and becomes involved in making the most important decisions.
Should humans land on Mars and potentially infect the red planet with earthly germs? Should samples of Martian soil be returned to Earth, with the risk of contaminating our biosphere? Why should millions of pounds be spent on a space probe rather than on funding pharmaceutical research, for example? If the potential rewards of astrobiological research are so great, so too are the potential risks, and society needs to be involved from the outset to avoid the pitfalls of public distrust, such as those encountered by genetic engineering. My research on the Martian death rays is one piece of the puzzle in the search for life beyond Earth, but your role is just as important. What do you think?
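(For the technically curious: the dose-and-survival calculation described above can be caricatured in a few lines of code. This sketch is my own illustration, not the author's model, and every constant in it is a placeholder rather than a measured Martian value.)

```python
# Toy model: the cosmic-ray dose rate falls off with depth in the regolith,
# and a dormant cell dies once its accumulated dose reaches a lethal threshold.
SURFACE_DOSE_GY_PER_YEAR = 0.05  # assumed unshielded surface dose rate
HALVING_DEPTH_M = 1.0            # assumed depth of rock that halves the dose
LETHAL_DOSE_GY = 5000.0          # assumed lethal accumulated dose for a microbe

def dose_rate(depth_m):
    """Dose rate at a given depth, assuming exponential attenuation."""
    return SURFACE_DOSE_GY_PER_YEAR * 0.5 ** (depth_m / HALVING_DEPTH_M)

def survival_time_years(depth_m):
    """Years for a dormant cell to accumulate the lethal dose."""
    return LETHAL_DOSE_GY / dose_rate(depth_m)

for depth in (0.0, 2.0, 5.0):
    print("%.0f m: ~%.2e years" % (depth, survival_time_years(depth)))
```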
<urn:uuid:a8ad1de9-cc5d-41ae-b5bd-693457e5ec1e>
3.515625
815
Personal Blog
Science & Tech.
43.004091
Launched in 1999, Stardust will collect material from a comet and return it to Earth. Scheduled to intercept comet Wild 2 in 2004, the spacecraft will obtain samples using an extremely light, silicon-based solid called aerogel. The samples will be returned to Earth in 2006. (Image courtesy of NASA/Jet Propulsion Laboratory.)

Ancient material retrieved from the comet will be compared to younger particles of interstellar dust collected by Stardust in 2000 and 2002. These samples may help us to better understand the evolutionary history of our galaxy. Stardust will collect interstellar dust from August 5 to December 9, 2002.
<urn:uuid:4402b429-f398-47cd-a1c0-1aeb65b52b3a>
3.578125
145
Knowledge Article
Science & Tech.
46.636548
Why do marine animals have fins?

It is more efficient to use fins than feet, hooves, or other similar body parts. It is the same reason that you can swim faster while wearing flippers: having a larger surface area allows animals to push against more water, so they generate more force when swimming. Here is a picture of the bones in a dolphin fin. They are extremely similar to the bones in a hand, because dolphins evolved from land animals that had individual fingers. Amphibians like frogs and salamanders have webbed feet, which are a "compromise" between feet and fins. These feet allow them to push against water with more power, since they have webbing instead of separate fingers. Webbed feet also allow them to grip the ground on land, since their fingers can move independently of each other.
<urn:uuid:2667b4c1-c94e-49c7-99d0-0cfdafa196ef>
3.59375
170
Q&A Forum
Science & Tech.
49.347378
Found 0 - 10 results of 14 programs matching keyword " wind" Southeast of San Francisco, on the way out to California's Central Valley, thousands of wind turbines dot the landscape of Altamont Pass. Mounted both in rows and individually, machines with large propellers catch the wind, turning round and round at different speeds. Learn how wind energy is generated and stored for use in this most peculiar area, and its impact on living things both near and far. Admit it: Hasn't the Godzilla inside you always wanted to grab the Golden Gate Bridge and shake it silly? Finally, you can. In honor of the iconic span's 75th birthday, Exploratorium exhibit developer Dave Fleming presents a dynamic model of the Golden Gate Bridge. What happens to the bridge during an earthquake? How about strong winds and heavy traffic? The model dances and wiggles realistically, displaying the same vibrational modes and motions that occur in the actual bridge. The Southern California Coastal Ocean Observing System (http://sccoos.org/) gathers live data about winds, waves, surface currents, temperature, and water quality, and makes it available to everyone. In this piece, Oceanographer Art Miller tells us about this system, and about how America's Cup sailors can use this kind of data and modeling to improve their race performances. To access wind modeling data, visit: How can a wind-powered sailboat move faster than the wind? Why do the America's Cup sails look like airplane wings? With the beginner in mind, Exploratorium senior scientist Paul Doherty introduces the basic physics of sailing and sail design. Have you ever wondered exactly what clouds are made of, or what the difference is between a cumulus and lenticular cloud? Clouds are an ever-present, ever-changing part of our natural landscape. They come in a huge variety of shapes and sizes, and capture our imagination with their endless permutations. Join Exploratorium Senior Scientist Paul Doherty for a live Webcast about cloud physics. Paul will discuss the basic makeup of clouds, and explore some of the aspects that make them such a rich part of our daily lives. Dr. Laura Peticolas is a physicist at UC Berkeley's Space Physics Research group. She studies the Aurora to learn more about the Earth and the workings of our Solar System. She's currently working with NASA's Mars data to understand why the Martian aurora looks the way it does. In this podcast she discusses her research, her inspiration and how and why scientists sonify data. View a selection of video clips from three exhibits that are part of the new Outdoor Exploratorium collection at Fort Mason. Ice Stories correspondent Kelly Carroll reports from a storm at Tango 1 Camp, a remote camp deep in the Transantarctic Mountains. Thanksgiving Day weather at McMurdo Station, Antarctica, turned out to be pretty interesting, as weather always can change quickly here. Our holiday weekend greeted us with 50 mph winds, but it didn’t affect the great feast we had in the dining hall. For two days Summit Camp, Greenland experienced strong winds and blowing snow, making work, and even walking around camp, difficult.
<urn:uuid:2172781d-911d-4ea7-b6d1-bf9f82a50d32>
3.046875
643
Content Listing
Science & Tech.
49.2005
Many climate scientists felt the conclusions on the effects of global warming in the 2007 IPCC review were too conservative. One reason was the estimation of likely melting of ice sheets and its effects. The problem was that there was insufficient knowledge to draw definite conclusions, and the measurements of changes in ice sheets just weren't accurate enough. That's now changed, and a large number of experts agree global warming has caused loss of ice from these ice sheets. And this has contributed to measured increases in sea level. Richard A. Kerr reports in Science (see Experts Agree Global Warming Is Melting the World Rapidly):

"Forty-seven glaciologists have arrived at a community consensus over all the data on what the past century's warming has done to the great ice sheets: a current annual loss of 344 billion tons of glacial ice, accounting for 20% of current sea level rise. Greenland's share—about 263 billion tons—is roughly what most researchers expected, but Antarctica's represents the first agreement on a rate that had ranged from a far larger loss to an actual gain. The new analysis, published on page 1183 of this week's issue of Science, also makes it clear that losses from Greenland and West Antarctica have been accelerating, showing that some ice sheets are disconcertingly sensitive to warming."

He's referring to the major paper by Andrew Shepherd and others, A Reconciled Estimate of Ice-Sheet Mass Balance. Over recent years climate change deniers/contrarians/sceptics have cherry-picked data to counter any suggestion that the earth's large ice sheets are melting. They have pointed to increased amounts of ice in East Antarctica to balance reports of massive losses of ice in the Arctic. (Have a look at this animation to see how such data can be cherry-picked.) Similarly they have tried to hide concern over the loss of land ice by stressing reports of local increases in sea ice.

But the paper by Shepherd et al. combined data from satellite altimetry, interferometry, and gravimetry measurements. This provides more reliable estimates of changes in the ice sheets, and gives some detail of these changes. This figure from the paper gives an idea of the detail of their findings. It shows that all the major regions of the polar ice sheets except one (East Antarctica) have lost mass since 1992. The authors also estimate that mass loss from the polar ice sheets has contributed roughly 20 percent of the total global sea level rise during that period (at a rate of 0.59 ± 0.20 millimetres per year). And to underline the fact that denier claims of increasing amounts of ice in Antarctica are false, NASA recently displayed this figure showing data from Antarctica from their satellite measurements.

Could those climate change deniers/contrarians/sceptics please stop hiding behind claims that gains by Antarctic ice sheets balance losses from ice sheets in Greenland and the Arctic.

Science: Major Regions of Polar Ice Have Been Shrinking Since 1992
Polar Ice Sheets Losing Mass, Several Methods Show
New Study Shows Global Warming Is Rapidly Melting Ice at Both Poles
Human-Caused Climate Change Signal Emerges from the Noise
Study: Polar ice sheets in Antarctica, Greenland melting 3 times faster than in '90s
Ice Sheet Loss at Both Poles Increasing, Study Finds
Projections of sea level rise are vast underestimates
"Hard" "Authoritative" Evidence Of Climate Change Begins To Overwhelm Even Fox
<urn:uuid:a1358e60-9e42-4cbb-8e67-d7be0ce661c3>
3.5625
735
Personal Blog
Science & Tech.
38.035205
In February, asteroid 2012 DA14 will come so close to Earth that it will be nearer to our planet than many satellites are. This asteroid, which really should get a new name, is about half the size of a football field. Its orbit is similar to that of the Earth itself in size and shape, but at an angle to the Earth's plane, so it's like the asteroid and the Earth are driving in circles on two oval tracks that intersect at two points, but with no red light. Asteroid 2012 DA14 was discovered with gear provided to an observatory with a grant from the Planetary Society. Which makes me want to join the Planetary Society. This asteroid is not going to hit the Earth now or during any of the next few decades, but eventually it may well do so. We need to keep an eye on it. The closest approach will be on Feb 15th, when it will be a mere 27,330 kilometers from the surface of the Earth. You would be able to see it with binoculars or a telescope. You'll be able to spot it, conditions and optics permitting, in Europe, Asia and Africa. (For reference, the International Space Station skims at about 350 kilometers; a geostationary orbit is 35,786 kilometers.) The following video from the Planetary Society has all the details:
<urn:uuid:51990608-c4fe-4924-952d-8fc9a379d3b4>
3.140625
286
Personal Blog
Science & Tech.
59.427797
When you go outside at night, on a clear night away from all lights, you see the sky the same way the ancients did: full of stars. Now, if you looked up periodically, you would find that the sky appears to rotate! Some constellations rise while others set, and one point — either due north or due south depending on your hemisphere — appears to not move at all. With the advent of time-lapse photography (and go here for a fantastic video), we can see that the sky does something like this: So there's some pretty good evidence, right away, that either the Earth is rotating or the entire sky is rotating. But there are a few bright objects in the sky that don't make this same motion every night. A few of them move to a different point in the sky each night. The Ancient Greeks called them planetas, or wanderers. If you look at a planet, like Mars, relative to the other stars, it appears to zip through the sky in one predominant direction. But every once in a while, it does something bizarre. It stops, goes backwards, stops again, and then resumes its original direction. This — today — is Wikipedia's featured picture of the day: So, how do you explain that one, folks? Well, Ptolemy came up with a simple, elegant, and completely wrong explanation. He said that instead of Mars moving in a circle around Earth, it moved on a "circle within a circle", allowing it to sometimes go backwards: It wasn't until nearly 1500 years later that Copernicus realized that if an inner planet moved faster than an outer planet, it would appear that Mars moved backwards from the point of view of Earth. So that's the cause of the apparent "retrograde motion" of Mars. What the history books don't tell you? By the time Copernicus came along, Mars' orbit had been so carefully studied that geocentric modelers of the Solar System had placed seventy-eight epicycles on it! And, much like you, I wonder what blind alley we're inadvertently treading down, adding epicycles to, all because we don't have the proper perspective? (My guess is dark energy, but who knows?)
<urn:uuid:5ca37d5e-7606-4085-a994-61cae6aa14df>
3.9375
492
Personal Blog
Science & Tech.
55.319624
Milky Way's Luminous Halo

The Milky Way's spiral disk is surrounded by a luminous halo of older Population II stars and stellar remnants, which recently has been found to be composed of two nested components rotating in opposing directions (more).

The Luminous Halo

Spiral galaxies like the Milky Way and its largest neighbor, Andromeda, have large central bulges of mostly older stars, as well as a relatively young thin spiral disk (surrounded by older, thick-disk stars that may have come from mergers with satellite galaxies) and a luminous halo that includes numerous globular clusters (more). The Milky Way's spiral disk of stars, gas, and dust is mixed into and surrounded by a luminous halo of mostly ancient "Population II" stars and stellar remnants (e.g., white dwarfs), moving independently and as parts of globular clusters and satellite galaxies that were captured gravitationally and are being shredded and absorbed into the galactic halo. In turn, this luminous halo of visible "normal" or ordinary matter is mixed into and surrounded by an overlapping, larger halo of some nonluminous ordinary matter, mostly gas and dust, and much more dark matter. Most halo stars are thought to have been born in an earlier age than disk stars, when hydrogen and helium gas was less "polluted" by heavier elements ("metals") from the stellar winds and supernovae of the most massive stars. Hence, halo stars are typically composed of only 0.1 percent metals, relatively "metal-poor" compared to the "Population I" stars of the spiral disk. However, analysis of the galaxy's distribution of RR Lyrae populations and concentrations suggests that the oldest stars may have been polluted with heavy elements earliest by the first stars (Population III), which were more likely to have been located or incorporated early into the galaxy's central bulge, as more massive galaxies like the Milky Way and Andromeda probably formed earlier than the smaller satellite galaxies now being incorporated into them (Tim Folger, Discover, May 1, 1993; Ken Croswell, New Scientist, September 19, 1992; and Young-Wook Lee, 1992). (More discussion of stellar populations.)

The cosmos appears to be comprised of very little ordinary matter made of atoms (around four percent), which forms the stars, planets, and clouds of gas and dust found in the halo as well as in the spiral disk and central bulge (latest WMAP results). On December 12, 2007, astronomers using the Sloan Digital Sky Survey (SDSS) announced that observation and analysis of the stellar motions and spectra of 20,000 stars in the luminous halo indicated that the luminous halo is composed of a mix of two distinct components rotating in opposite directions (SDSS news release; Texas Tech University press release; and Carollo et al., 2007). While stars in the Milky Way's spiral disk orbit the galactic center at some 500,000 miles (805,000 kilometers) per hour, the inner halo, located well outside the disk, rotates in the same direction but at a much more sedate 50,000 miles (80,500 km) per hour -- about one-tenth of the disk's speed -- while the outer halo spins twice as fast as the inner halo, in the opposite direction, at about 100,000 miles (161,000 km) per hour. In addition to differences in stellar motions, the two halo components also display relative differences in elemental composition, which indicate that they were probably formed in different ways at different times.

(Image: Edward L. Wright, COBE/DIRBE, NASA.)
(In their motions around the galactic center, both inner- and outer-halo stars may pass through the spiral disk, sometimes in the vicinity of the Sun -- more.)

Inner Stellar Halo

The inner halo is somewhat more flattened in shape. It dominates the population of stars up to 50,000 light-years from the galactic center. These stars are relatively more metal-rich than outer-halo stars, with around three times more heavy atoms such as iron and calcium. These heavier elements were forged in early-forming stars and ejected into surrounding space by supernovae or in stellar winds. Its discoverers believe that the inner halo formed before the outer halo, from the collision of smaller but massive galaxies that rotated with the Galaxy. Past research suggests that there have been no major accretion events in the inner halo over the last few billion years (Brown et al., 2003).

"Fossil" remnants of satellite galaxies that have collided with the Milky Way may be observed as star streams (more discussion and simulations). (Image: K. V. Johnston, Chris Mihos, Van Vleck Observatory/Wesleyan University; used with permission.)

Outer Stellar Halo

The outer halo is more spherical in shape than the inner halo. It dominates the stellar population beyond 65,000 light-years from the galactic center and may extend out to more than 300,000 light-years. This outer halo is believed to have formed later than the inner one, from small galaxies orbiting the Milky Way in the reverse direction, which did not share the chemical history of the Milky Way Galaxy. These galaxies and their globular clusters were eventually torn apart by the Milky Way's gravitational forces, which dispersed their stars into the halo (I.I. Ivans, 2006). Past research has suggested that the Milky Way is embedded in an extended, highly inclined, triaxial halo defined by the spatial distribution of its companion galaxies.
<urn:uuid:cca3a84f-cab9-4e08-8174-897b3a38ee78>
3.9375
1,191
Knowledge Article
Science & Tech.
35.959353
I've searched online, and many sources have gotten the same answer as my science experiment, but they have not answered WHY. So, my question is: WHY does extra CO2 help plants reproduce better? I found that it does, but WHY?

Plants need CO2 as the source of carbon for photosynthesis, the process by which they make sugars and other big molecules. If there's no CO2, the process stops altogether. It's not surprising that a little extra CO2 can speed it up a bit, just as supplying a little extra of any nutrient tends to speed up growth.

(published on 11/16/2007)
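(A reference point added here, not part of the original answer: the overall photosynthesis reaction, driven by light energy, shows why CO2 is the carbon source for the sugars a plant builds.)

$$6\,\mathrm{CO_2} + 6\,\mathrm{H_2O} \;\longrightarrow\; \mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2}$$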
<urn:uuid:7b5a0e2d-02bc-4d05-adc5-0384bc2e78fa>
3.1875
148
Q&A Forum
Science & Tech.
70.084234
Steel Ball Dropped in a Viscous Fluid

Five steel balls of different sizes are dropped into corn syrup. The balls reach a constant velocity shortly after entering the fluid. This constant velocity arises because the drag force grows until it balances the ball's weight in the fluid. The demonstration shows the relationship between the size of a ball and the maximum (terminal) velocity it can attain.
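(The expected relationship, added for reference and assuming laminar Stokes drag, which is a reasonable approximation in a fluid as viscous as corn syrup: a sphere of radius $r$ and density $\rho_s$ falling through a fluid of density $\rho_f$ and viscosity $\mu$ reaches a terminal velocity)

$$v_t = \frac{2\,r^2 g\,(\rho_s - \rho_f)}{9\mu}$$

(so doubling the radius quadruples the terminal speed; the larger balls fall faster, exactly as the demonstration shows.)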
<urn:uuid:ab1640ca-3947-4f88-bda8-d58c4be21031>
3.109375
71
Truncated
Science & Tech.
49.192823
Sean Beatty explains what a deadlock is and why testing probably won't catch it.

Most software development projects rely upon some combination of inspection, structural testing, and functional testing to identify software defects. While testing is invaluable and does uncover the vast majority of software problems, sometimes testing fails to uncover certain errors—errors such as deadlocks.

Before we can discuss deadlocks, we need to understand why they occur. A typical program contains separate threads of execution, or separate processes. For simplicity, we'll call them tasks. In a multitasking system, these tasks operate concurrently and sometimes need to access the same resource at the same time. A resource that more than one task may need to access is a shared resource. A shared resource may be a certain data item in memory, or it could be a particular hardware resource. It could even be a specific file on the disk, or a single record in a database. When two tasks attempt to access one of these shared resources at the same time, serious problems usually result. One task may overwrite the data that the other task just wrote. Worse yet, if two tasks access the same resource at the same time, the result may be a data record that contains some data written by one task and some data written by the other, making the data record, as a whole, inconsistent. To prevent these problems, programmers lock the shared resource to prevent any other task from interfering with it before it is safe to do so. Locked resources, while vitally important in avoiding access conflicts, can lead to deadlock. Testing for deadlock is generally ineffective, since only a particular order of resource locking may produce deadlock, and the most common tests may not produce that specific order. Deadlock is best avoided by design.

Figure 1 shows how resources are used by each of three tasks. Task 1 first allocates (locks) the resource A, and while holding A, it acquires B. Then later in the program, Task 1 also locks D, all the while still holding A and B. In similar manner, Task 2 allocates B, then C, and finally D. Task 3 allocates only two resources, first C, and then A. What happens as the system runs and the tasks all lock and unlock their resources? Let's overlay the three tasks and their shared resources into one resource allocation graph, as shown in Figure 2. This makes it simple to see the effect of the interaction among the tasks. Suppose, at some instant, Task 1 holds A, Task 2 holds B, and Task 3 holds C. If a task is unable to acquire a needed resource, it will stop running (block) until the resource becomes available. In this scenario, Task 1 cannot acquire B, since Task 2 has already locked it. Task 2 cannot acquire C, since Task 3 has C. Task 3 cannot acquire A, because it has been locked by Task 1. The system would be deadlocked—none of the tasks could run. Analysis of Figure 2 would reveal the closed loop among the resources (indicated by the red arrow).

Deadlock Avoidance by Design

The most effective way to deal with deadlocks is to avoid them by design. Adopting any of the following design constraints eliminates the possibility of deadlock (a code sketch follows the list):
- Tasks have no more than one resource locked at any time. (There are no arrows at all in the graph, indicating that there is no possibility of a circular wait.)
- A task completely allocates all its needed resources before it begins execution. This prevents any other task from locking a resource that the running task will need later in its execution.
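(A minimal sketch, not from the article, of a related and widely used discipline: every task acquires its locks in one fixed global order, which makes the circular wait of Figure 2 impossible. The resource names are illustrative.)

```python
import threading

# Two shared resources, A and B, each protected by a lock.
lock_a = threading.Lock()
lock_b = threading.Lock()

# One global acquisition order that every task must follow.
LOCK_ORDER = (lock_a, lock_b)

def task(name):
    # Acquire in the agreed order; no task ever holds B while waiting for A,
    # so no cycle can form in the resource allocation graph.
    for lock in LOCK_ORDER:
        lock.acquire()
    try:
        print("%s holds A and B" % name)
    finally:
        # Release in reverse order.
        for lock in reversed(LOCK_ORDER):
            lock.release()

threads = [threading.Thread(target=task, args=("Task %d" % i,)) for i in (1, 2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```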
<urn:uuid:6b9a86cb-7432-46e8-a5e1-3a4b23d6d00d>
3.453125
725
Tutorial
Software Dev.
49.315455
Return to Vignettes of Ancient Mathematics

Introduction to Hippocrates

Introduction to Lunules

Comparison of Alexander's and Eudemus' Methods

(diagram 1) The basic idea is to construct an outer arc A and an inner arc B, concave in the same direction. (diagram 2) Suppose (condition 1 of 2) that we have constructed the arcs A and B so that each can be divided, respectively, into m arcs Ai and n arcs Bj, all of which are similar, i.e.,

A1 = … = Am and A1 + … + Am = A
B1 = … = Bn and B1 + … + Bn = B
A1, …, Am, B1, …, Bn all similar.

Although this condition is not formally necessary, it greatly simplifies matters. Observe that m and n must be whole numbers, which means that the two whole arcs must be commensurable. (diagram 3) If Ai and Bj are similar segments bounded by the similar arcs Ai and Bj and their respective bases ai and bj, we can stipulate segments A, B, with respective bases a and b:

A1 = … = Am = A and a1 = … = am = a
B1 = … = Bn = B and b1 = … = bn = b

Also, by the basic theorem of similar segments, given that A is similar to B, A : B = a² : b². Now suppose (condition 2 of 2) that we have constructed the arcs and segments so that the bases have the ratio a² : b² = n : m. Hence A : B = a² : b² = n : m. Now:

A1 + … + Am = m·A and B1 + … + Bn = n·B

Hence, m·A : n·B = m·n : n·m. That is, A1 + … + Am = B1 + … + Bn.

(diagram 4) If we can add some area C such that C + m·A = C + n·B, where C + m·A will be a lunule and C + n·B will be rectilinear (diagram 6). This is how Eudemus conceives of the problem. The different lunules mentioned by Eudemus may have been discovered simply by attempting all simple ratios. A brief note on Eudemus' method of finding a circle and lunule that can be squared is also in order.

(diagram 7) Suppose that one were to look for a lunule to be squared that is the next simplest after the lunule on the semicircle with ratio 2 : 1. This might be another lunule, whose outer arc A has as its base an inscribed equilateral triangle, also with ratio 2 : 1. The outer arc is thus immediately divided into two sides of a regular hexagon, so that the inner arc also needs to be on an inscribed hexagon. (diagram 8) Since the ratio of the square on the side of the equilateral triangle to the square on the side of the hexagon (equal to the radius of the circle) is 3 : 1, one quickly finds that each outer segment is 1/3 the inner segment B. Hence, 3A = B, (diagram 9) so that the lunule will be 2/3 B + C, i.e. it is less than the rectilinear area (here a triangle) by 1/3 B = A. Hence, the triangle is greater than the lunule by A. (diagram 10) From here it is merely a matter of finding a mathematically interesting figure equal to A. We can add a triangle to the third segment to get a sector of the circle, where the curvilinear figures (diag. 11) = the rectilinear figures (diag. 12):

lunule + segment-on-hexagon-side = triangle-in-segment-on-equilateral-triangle
lunule + segment-on-hexagon-side + triangle-in-1/6-sector = triangle-in-segment-on-equilateral-triangle + triangle-in-1/6-sector
lunule + 1/6-sector = triangle-in-segment-on-equilateral-triangle + triangle-in-1/6-sector

(diagram 13) We could note that these two triangles in the equality are equal, as the right triangles in the diagram have equal respective angles while the radii and sides of the hexagon are all equal. Eudemus does something different. It is a matter of making this equality more elegant.
(diagram 14) He chooses a circle that is 1/6 the initial circle (equal to the 1/6 sector of the circle) and a hexagon that is 1/6 the hexagon (equal to either of the triangles in the equality). In the account of Eudemus, this will be the segments on an hexagon inscribed in a circle, with each segment equal to 1/6 A. Hence, (diagram 15) the triangle and the hexagon will equal (diagram 16) the lunule and these segments together with the hexagon, i.e., the lunule and the circle. The aesthetics behind this decision lie, one might well infer, in his having a whole circle in the equality and one complete rectilinear figure associated with each of the two curvilinear figures.
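(A compact restatement of the argument, added for reference, with A an outer segment on a hexagon side, B the inner segment, C the region between the two arcs, and hexagon′ and circle′ the small hexagon and circle one-sixth the size of the originals:)

$$\text{lunule} = C + 2A, \qquad B = 3A \;\Longrightarrow\; \text{triangle} = C + B = \text{lunule} + A$$

$$\text{triangle} + \text{hexagon}' = \text{lunule} + (\text{hexagon}' + A) = \text{lunule} + \text{circle}'$$

(since the six segments on the hexagon inscribed in the small circle together equal A.)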
<urn:uuid:bcbbca22-0929-4e27-8716-274efa7fa059>
3.796875
1,181
Academic Writing
Science & Tech.
63.600636
As a starting point, we chose to focus on development of ribosomal RNA (rRNA) targeted probes as tools for identifying and estimating the abundance of a variety of marine organisms, and to devise methods for applying those probes in a fashion consistent with their use outside of a conventional laboratory. This effort is rooted in the field of harmful algal bloom (HAB) research and the need to quantify harmful and toxic organisms collected from natural samples. We have explored use of rRNA probes in both whole cell (fluorescent in situ hybridization) and cell-free formats (sandwich hybridization; Scholin et al. 1996, 1997), targeting harmful organisms that span several classes of algae: diatoms, dinoflagellates and raphidophytes. The species studied are found in many coastal regions of the world where they pose substantial public health concerns and economic impacts (Hallegraeff et al. 2003). Blooms of these organisms can also have deleterious impacts on wildlife (e.g., Scholin et al. 2000). Utilizing relatively simple off-the-shelf and semi-custom sample processing apparatus, we prototype and evaluate the performance of various assays, particularly with respect to their suitability for automation (e.g., Anderson et al. in review, Tyrrell et al. 2001, Scholin et al. 1999, Miller and Scholin 1998). The same techniques developed for use with HAB species have been applied to aid the detection of marine microbes and invertebrate larvae in collaboration with the DeLong and Vrijenhoek labs. We have also worked to establish protocols for sample archival that are also suitable for automation. Our challenge here is to preserve samples in a way that obviates the need for refrigeration or freezing, preserves gross cell morphology when microscopy is necessary, and does not interfere with standard molecular biological techniques like DNA extraction and sequencing (e.g., Miller and Scholin, 2000; Preston and Scholin, unpublished data). Techniques that have proved most useful for near real-time detection of target species and sample archival have served to define the functional requirements of a new class of instrumentation – the ESP. We have conducted extensive field surveys and time series studies of HAB species using traditional microscopy, whole cell probing and cell-free assays, and are now applying the same techniques for studies of picoplankton and invertebrates. For most applications, cell-free analytical methods clearly offer greater analytical throughput and the potential for detecting many more molecular signatures in a single sample simultaneously than techniques based on intact cells. Nevertheless, the whole cell analyses have proved invaluable for evaluating the performance of cell-free assays, and thus remain a centerpiece of our research (e.g., Anderson et al. in review, Lundholm et al. in revision, Miller and Scholin 2000). We have demonstrated how detection of species-specific (HAB) rRNA sequences in the context of recognizable, intact cells does not always equate with detection of the same signature sequences found in sample homogenates (e.g., Anderson et al. and O'Halleron et al. submitted, Tyrrell et al. 2002, Scholin et al. 1999). These techniques reveal how organisms' genetic signatures, as well as substances they produce (like toxins), may be transferred through the food web in the absence of recognizable cells indicative of the target species. At present the majority of our work is focused on developing DNA probe arrays to detect multiple targets in a single sample simultaneously.
Results leading up to the present demonstrate that species ranging from marine bacteria to phytoplankton to invertebrate larvae can be detected and in some cases enumerated in near real-time using a common sample collection, preparation and processing protocol that can run on relatively little electrical power. The reagents employed in these assays appear to be stable for extended periods (none used in the ESP require refrigeration), and the chemical reactions themselves are amenable to microfluidic scaling. Different arrays are tailored to detection of specific groups of organisms such as ‘planktonic microbes’, ‘harmful algae,’ or ‘invertebrate larvae,’ etc. (Figure 1). Working with a company called Orca Research (Seattle, WA), complete, prepackaged tests for harmful algae as well as bulk reagents for the sandwich hybridization assay made to our specification can be purchased for research purposes, making it possible for other groups to use and evaluate assays we have published and patented. Our protocols are widely distributed and utilized by workers outside of MBARI, especially those at University of California at Santa Cruz, Woods Hole Oceanographic Institution, NOAA NOS/NMFS (Seattle, WA and Charleston, SC), Florida Marine Research Institute (St. Petersburg, FL), University of Miami, the Cawthron Institute in New Zealand, and several other labs in Europe. Beyond basic lab research, we have also successfully employed the tests shipboard to achieve near real-time mapping of HAB species in California waters, as well as in the Gulf of Maine and Gulf of Mexico (~10 different species in total). More recently we used this approach for detecting groups of marine bacteria in the Juan de Fuca study area (Preston et al., unpublished data).
<urn:uuid:dca12e67-12aa-4379-93ab-073bb3066971>
3.09375
1,086
Academic Writing
Science & Tech.
29.281995
The Natural History Museum is home to several specimens of Parides agavus collected in Brazil, Paraguay and Argentina. This beautiful butterfly prefers to live in areas of dense vegetation with little light, where the caterpillars feed on plants from the family Aristolochiaceae. Many plants from this family contain aristolochic acids, which are toxic to many mammals but can be tolerated and stored by Parides agavus. This may act as a protective measure against potential predators. This species is relatively common and is not thought to be threatened according to the International Union for the Conservation of Nature (IUCN) red list criteria (Collins and Morris, 1985). But it is possible that it may come under threat in the future due to extensive logging in South America, especially in Brazil, where forest is being converted to agricultural land (Food and Agriculture Organisation, 2009). Parides agavus was first described by Drury in 1792 from specimens collected in Brazil. Find out more. Parides agavus occurs in South America. Discover where the butterflies like to live, and where the Museum's specimens were collected. The caterpillars of this butterfly species feed on plants that contain acids that are toxic to mammals, but not to Parides agavus. Find out how these chemicals may protect the caterpillars from predators. Parides agavus is a strong flier, but tends to stay close to home. Find out more.

Parides agavus aberration aurimaculatus male.
Parides agavus female underside.
Parides agavus female upperside.
Parides agavus male underside.
Parides agavus male upperside.

Volunteer - Lepidoptera, Department of Entomology
Curator of the Museum's Lepidoptera collections
"Parides agavus is a stunning and beautiful but strong butterfly. Its caterpillars feed on plants that are poisonous for other organisms such as mammals, plus adults are distasteful for predators."
<urn:uuid:c4c083da-8f88-498c-bae4-03e87f7507c0>
4
432
Knowledge Article
Science & Tech.
25.458671
Saturday, July 28, 2012 Check it out here. This simple site lets you input a size or distance, which it then uses to scale to other well-known bodies throughout the universe, including the universe itself. Here are some quick examples of how one can use it: First take Venus. It has a diameter of 12,104 km. Now let's scale that down and make Venus one cm in diameter. That makes the sun 90 metres away, the moon is now 32 cm away from us (the site says from the sun at the moment, but that's a typo that will soon be changed), Proxima Centauri is 33 km away, and the observable universe is still a full 76 light years across. If we make the sun a single millimetre, the observable universe is now just...630 billion km in length. Or let's say I'm explaining to someone how large and far away the moon is and I have a basketball to demonstrate the Earth. The basketball is 24 cm in diameter, so to demonstrate the moon I'll need a sphere 6.5 cm in diameter (roughly a baseball), and it will be placed 7 metres away. I think I'll email the author and see if he'll be interested in an asteroid density-vs.-size program to calculate gravity as well.
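(The arithmetic behind these conversions is a single ratio. Here is a quick sketch of it, my own illustration rather than the site's code, with rounded real-world values:)

```python
# Scale a real length by the ratio (model size / real size) of a reference body.
def scaled_m(real_km, reference_km, model_m):
    """Length in metres of real_km at the scale where reference_km -> model_m."""
    return real_km * model_m / reference_km

EARTH_DIAMETER_KM = 12_742
MOON_DIAMETER_KM = 3_474
EARTH_MOON_DISTANCE_KM = 384_400

# Basketball Earth (24 cm): the moon becomes a ~6.5 cm ball about 7 m away.
print(scaled_m(MOON_DIAMETER_KM, EARTH_DIAMETER_KM, 0.24))        # ~0.065 m
print(scaled_m(EARTH_MOON_DISTANCE_KM, EARTH_DIAMETER_KM, 0.24))  # ~7.2 m
```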
<urn:uuid:9d873e3f-55fb-41ee-bcf0-0e337ae749d2>
3.46875
273
Personal Blog
Science & Tech.
73.501
I think that the best solution is for authors to spend more time, and use more words, outlining the details of their mathematical equations. It is important to explain how equations are derived and how the scientific implications of the mathematics are employed. You cannot simply cram a paper with equations. - Dr Tim Fawcett

Researchers from the University of Bristol have found that scientists pay less attention to theories that are laden with mathematical formulae. Although one might assume that the popularity of new research varies according to its academic merit, the study, published in the journal Proceedings of the National Academy of Sciences, shows that maths-heavy articles are referenced 50 per cent less frequently than those containing little to no maths. I spoke to Dr Tim Fawcett, Research Fellow at Bristol's School of Biological Sciences, to find out why maths appears to be turning scientists off, and to ask what he believes can be done to rectify this situation…

Were you surprised to find that the likelihood of a paper being referenced by another scientist is influenced by the amount of mathematics that it contains?

I wasn't massively surprised. In fact, that is why we decided to investigate this issue in the first place. We suspected that papers containing lots of mathematics would tend to be avoided by many scientists, or at least not cited in their work. We therefore decided to examine articles to see whether this was really the case, and it turned out that it was.

Do you have any idea as to why it is that scientists are paying less attention to maths-heavy articles?

I think that these papers can sometimes be heavily technical. If the maths is not explained in the clearest possible manner, it can require a lot of effort on behalf of the reader to really grasp what the article is talking about. Often, because scientists are under considerable pressure to write concisely, you end up with articles that are densely packed with mathematics. Such articles can be difficult for people, including scientists, to understand.

Did you question scientists directly or did you analyse other data that were available?

We analysed other data that were available. We looked at how often papers were cited by other scientists. This type of information is readily available, and provides a measure of how influential a paper has been within its particular field. Authors hope that the number of citations that their work receives reflects the scientific quality of that work, but in fact, our analysis reveals that the number of equations contained within a paper also has a strong impact.

Is there a danger that this tendency could lead to scientifically significant work being overlooked?

Absolutely. I think that it is certainly the case that some very important theoretical papers are not getting the recognition that they deserve because their mathematical content is not being explained in the most user-friendly way.

Could you explain more about how maths-heavy theories might be better presented in order to gain the attention of their audience?

I think that the best solution is for authors to spend more time, and to use more words, outlining the details of their mathematical equations. It is important to explain how equations are derived and how the scientific implications of the mathematics are employed. You cannot simply cram a paper with equations.
There need to be verbal explanations alongside these equations in order to take the reader through all of the assumptions and implications involved in the theory.
You have said that the limited page space offered by peer-reviewed journals poses a potential problem for scientists trying to fully explain their mathematics. Do you think that the onset of online journals might provide a solution to this problem?
As I understand it, all of the online journals produce printed copies as well, so page space still poses a challenge. Publishers allow journals a certain page allocation every year, and so journals are under pressure to reduce the length of the articles that they publish. Although many of these journals are now predominantly read online, there is still a challenge posed by limited page space. However, appendices are often published exclusively online, so page space is not constrained within these sections. In fact, our analysis shows that if authors put a lot of their mathematics in an appendix, other scientists do not seem to be deterred from citing their articles.
So a compromise can be reached whereby the online appendix supplements the printed article?
Yes, that's right. This is a pragmatic solution, given that there is such a strong constraint on page space for the main article. If you can include the majority of your mathematics in an appendix, as long as you have carefully explained the assumptions of your work, you can keep the detail intact whilst maintaining the attention of your peers.
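The kind of analysis Fawcett describes can be mimicked in a few lines. The sketch below fabricates toy data purely for illustration (it is not the study's method or data) and compares average citations for equation-dense and equation-sparse papers:

```python
# Toy version of a citations-vs-equation-density analysis. All data fabricated.
import random

random.seed(1)
papers = [{"eq_per_page": random.uniform(0, 10)} for _ in range(200)]
for p in papers:
    # Assume each extra equation per page lowers expected citations slightly.
    expected = 30 * (0.92 ** p["eq_per_page"])
    p["citations"] = max(0, int(random.gauss(expected, 5)))

dense = [p["citations"] for p in papers if p["eq_per_page"] > 5]
sparse = [p["citations"] for p in papers if p["eq_per_page"] <= 5]
print("mean citations, equation-dense papers :", sum(dense) / len(dense))
print("mean citations, equation-sparse papers:", sum(sparse) / len(sparse))
```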
<urn:uuid:1af531e8-ab60-4fa7-9366-333f56d39c36>
2.921875
925
Audio Transcript
Science & Tech.
37.858006
July was hottest month in 100-plus years of U.S. records
WASHINGTON — This probably comes as no surprise: U.S. scientists say July was the hottest month ever recorded in the lower 48 states, breaking a record set during the Dust Bowl of the 1930s. They say climate change is a factor. The average temperature last month was 25°C. That breaks the old record from July 1936, according to the National Oceanic and Atmospheric Administration. Records go back to 1895. "It's a pretty significant increase over the last record," said climate scientist Jake Crouch of NOAA's National Climatic Data Center. In the past, skeptics of global warming have pointed to the Dust Bowl to argue that recent heat isn't unprecedented. But Crouch said this shows that the current year "is out and beyond those Dust Bowl years. We're rivalling and beating them consistently from month to month." Three of the nation's five hottest months on record have been recent Julys: this year, 2011 and 2006. Julys in 1936 and 1934 round out the top five. The first seven months of 2012 were the warmest on record for the nation. And August 2011 through July this year was the warmest 12-month period on record, just beating out the July 2011-June 2012 time period. But it's not just the heat that's noteworthy. NOAA has a measurement called the U.S. Climate Extremes Index, which dates to 1900 and follows several indicators of unusually high and low temperatures, severe drought, downpours, and tropical storms and hurricanes. NOAA calculates the index as a percentage, which mostly reflects how much of the nation experienced extremes. In July, the index was 37 per cent, a record that beat the old mark for July set last year. The average is 20 per cent. For the first seven months of the year, the extreme index was 46 per cent, beating the old record from 1934. This year's extreme index was heavily driven by high temperatures both day and night, which is unusual, Crouch said. "This would not have happened in the absence of human-caused climate change," said Pennsylvania State University climate scientist Michael Mann. Crouch and Kevin Trenberth, climate analysis chief of the National Center for Atmospheric Research, said what's happening is a combination of weather and climate change. They point to long-term higher night temperatures from global warming and the short-term effect of localized heat and drought that spike daytime temperatures. Drought is a major player because in the summer "if it is wet, it tends to be cool, while if it is dry, it tends to be hot," Trenberth said.
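NOAA's actual index combines several indicators, but its core idea, the fraction of the country experiencing values in the extreme tails, can be sketched simply. Everything below (station values and percentile thresholds) is hypothetical and only illustrates the percentage calculation:

```python
# Rough sketch of an "extremes index": the percentage of stations reporting
# values beyond their own 10th/90th percentile thresholds. Illustrative only.
def extremes_index(observations, low_thresholds, high_thresholds):
    """observations and thresholds are dicts keyed by station id."""
    extreme = sum(
        1
        for sid, value in observations.items()
        if value < low_thresholds[sid] or value > high_thresholds[sid]
    )
    return 100.0 * extreme / len(observations)

obs  = {"A": 38.2, "B": 31.0, "C": 40.1, "D": 25.3}   # July mean temps, deg C
low  = {"A": 24.0, "B": 23.5, "C": 25.0, "D": 24.5}   # 10th percentile per station
high = {"A": 35.0, "B": 34.0, "C": 36.0, "D": 35.5}   # 90th percentile per station
print(f"{extremes_index(obs, low, high):.0f}% of stations at extremes")  # 50%
```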
<urn:uuid:34525966-a99a-43bc-adda-4e7daac9045c>
3.5
699
Truncated
Science & Tech.
61.822471
Author(s): G. Li & M. F. Modest
Traditional modeling of radiative transfer in reacting flows has ignored the interactions between turbulence and radiation (TRI). Evaluation of radiative fluxes, flux divergences and radiative properties has been based on mean temperature and concentration fields. However, both experimental and theoretical work have suggested that mean radiative quantities may differ significantly from predictions based on the mean parameters, because of their strongly nonlinear dependence on the temperature and concentration fields. Probability density function (PDF) methods have been found to be effective tools for the study of TRI. They are able to treat turbulence-radiation interactions in a rigorous way: many unclosed terms due to turbulence-radiation interactions in the traditional Reynolds or Favre averaging process can be calculated exactly, and all others can be accurately modeled by using the so-called optically thin-eddy approximation. This chapter shows the application of such methods in the study of TRI. By employing such methods, many basic questions about TRI can be answered: (1) whether turbulence-radiation interactions are important in turbulent flames or not; (2) if they are important, what correlations need to be considered in a simulation to capture them. The chapter focuses on the introduction of such methods and their applications to diffusion flames. Most fires and commercial combustion processes, such as internal combustion engines and gas-turbine combustors, involve high temperatures and, therefore, radiation tends to make up an important fraction of heat transfer rates and, in some…
Paper DOI: 10.2495/978-1-85312-956-8/03
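The crux of TRI is that emission scales nonlinearly with temperature (roughly as T⁴ for a gray gas), so the mean emission is not the emission at the mean temperature. The short sketch below makes that concrete with illustrative numbers (a Gaussian temperature PDF is assumed purely for demonstration):

```python
# Emission ~ T^4 is convex, so <T^4> >= <T>^4: turbulent temperature
# fluctuations raise the mean emission above what the mean temperature
# alone would predict. Values are illustrative flame-like numbers.
import random

random.seed(0)
T_mean, T_rms = 1500.0, 300.0  # kelvin
samples = [random.gauss(T_mean, T_rms) for _ in range(100_000)]

mean_of_T4 = sum(T**4 for T in samples) / len(samples)
T4_of_mean = T_mean**4
print(f"<T^4> / <T>^4 = {mean_of_T4 / T4_of_mean:.2f}")  # ~1.24, noticeably above 1
```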
<urn:uuid:cfb01f5b-dc4a-4db0-a522-c3dc0ed1b199>
2.75
407
Truncated
Science & Tech.
27.005032
See also the Dr. Math FAQ: Browse High School Sequences, Series Stars indicate particularly interesting answers or good places to begin browsing. Selected answers to common questions: Strategies for finding sequences. - Second-Order Linear Recurrences [06/08/2001] Three problems involving recurrence equations. - Second Order Recurrence with Non-Constant Coefficients [05/27/2005] I'm trying to find a closed form solution of a second order recurrence relation with no constant coefficients, specifically: u(n+2) = 2*(2*n+3)^2 * u(n+1) - 4*(n+1)^2*(2*n+1)*(2*n+3)*u(n). Can you help? - Sequence and Series Terminology and Concepts [11/27/2005] I'm studying sequences and series and am confused about how they are defined in terms of functions as there seem to be some inconsistencies. Can you help clarify things for me? - Sequence of Squares [07/25/1998] Do you have any information on the sequence of squares? - Sequence of Triangular Numbers [7/13/1995] What is the sequence called 1, 3, 6, 10, 15 and how is it generated? - Sequence Pattern and Closed Form [7/16/1996] Given the pattern for a sequence, I can't figure out a general rule for the nth term. - Sequences [7/29/1996] The sum of three numbers is 147 and when multiplied together they yield 21952... Find a formula for 60, 30, 20, 15... - The Sequence Sin(n) [02/20/2002] I am trying to prove that the sequence sin(n), for n, a natural number, does not converge. - Series Convergence [01/27/2001] Test these series for convergence; if the series is alternating, tell whether the convergence is conditional or absolute... - Series Convergence [02/28/2001] Why does 1 + 1/2^z + 1/3^z + ... converge for Re(z) greater than 1? - Series Divergence [03/03/1999] Show that the series sum(k=0 -> infinity): (k/e)^k/k! is divergent. - Series Expansion of 1/(1-x) [08/01/1998] Can you explain the series expansion identity 1/(1-x) = 1 + x + x^2 + x^3 + ... ? In what region does it converge? - Series for which Convergence is Unknown [11/09/2000] Are there series for which it is unknown whether they converge or - Series Problem: Find the Sum [6/24/1996] Find sum[sin(nx)/(3^n),(n,0,oo)] if sin x=1/3 and x is in the first - A Series that Converges and Diverges? [04/30/2002] Let N = 1 + 2 + 4 + 8 + 16 + 32 + 64 + 128 + ... Does this series both converge and diverge? - Series Types [05/11/1997] What are the definitions of convergent, divergent, and oscillating - Showing That the Sum of the Infinite Series cos(n)/n Converges [04/01/2008] I'm trying to determine if the sum of the series (cos n)/n for n = 1 to infinity converges. I've tried some tests but they have been - Sigma Notation [4/14/1996] I am trying to find questions regarding sigma notation. - Sigma Notation [09/07/2001] To prove that sigma (i^2) from i = 1 to n i equal to (n(n+1)(2n+1))/6 start with (i+1)^3 - i^3... - A Simple Expression? [1/26/1996] Is it possible to have a simple expression for a certain series starting - Simple Number Pair Series Yields Surprising Ratio ... Why? [12/31/2009] An enthusiast wonders about the curious ratio that emerges from a simple pattern for generating number pairs. Doctor Rick builds an algebraic argument for why its phi-like recursive relationship approaches the square root of 2. - Simplify a Geometric Series [05/06/2003] x^n + x^(n - 1) + x^(n - 2) + ... + x^(n - n) - Solutions to X^Y = Y^X [12/21/2000] How can I find the solutions to the equation x^y = y^x? I have been told that it involves the Lambert W Relation. 
- Solving an Equation with Infinite Exponents [03/15/2007] If x^x^x^x^x^x^x^x^x^x^x...... = 2, solve for x. How can I solve that - Solving a Sequence [12/7/1995] Write an expression to find the nth term of the following sequence: 3, 9, 18, 30, 45 . . . - Solving Continued Fractions [08/08/1998] How do you get sqrt(2) from 1/(2 + 1/(2 + 1/(2 + ...)))? How do you solve continuous fractions in general? - Some Algebra Problems [6/1/1996] If z=(3-2i)^1/2 then find z^-... - Square Root of 3 minus 1 [09/24/1997] Express sqrt3-1 as a continued fraction. - Square Root Theory [11/16/2001] When I enter any positive number in the calculator or a fraction like 0.1, then take the square root of that number, then take the square root of that number, and keep pressing the square root button over and over, I eventually get to number 1. Why? - Stair Patterns [02/27/2001] The 1st step is made with 4 matches, the 2nd with 10 matches, the 3rd with 18, the fourth with 28. How many matches would be needed to build 6, 10, and 50 steps? - Subtracting Finite Sums of Integers [08/03/1998] If n = 1 + 3 + 5 + 7 + ... + 999 and m = 2 + 4 + 6 + 8 + ... + 1000, what does m-n equal? - Summation by Parts [01/07/2004] Using 'E' to represent sigma, is there an approximate solution to E(Ai*Bi) = ? where i = 0,1,...,n if Ai is known explicitly and E(Bi) - Summation Notation and Arithmetic Series [07/27/2001] Do I need to use the arithmetic series formulas when doing sigma - Summation of Floor Function Series [01/12/2009] Is there a formula for the sum [p/q] + [2p/q] + [3p/q] + ... + [np/q] where p, q, and n are natural numbers? - Summation of Series: Faulhaber's Formula [07/30/2003] I am asked to solve a series... - Summations of n^(-2k) [09/10/2000] How can I find the summations of the following series for n = 1 to infinity: (n^-2), (n^-4), (n^-[2k]) and (n^-[2k+1])? - Summing a Binary Function Sequence [07/16/1998] How do you compute the sum of B(n)/(n(n+1)) from 1 to infinity, where B(n) denotes the sum of the binary digits of n? - Summing an Oscillating Series [08/10/1998] Does 1 - 1 + 1 - 1 + 1 - ... equal 1 or 0 - Summing a Series Like n*(n!) [10/28/2001] How can I add up a series like 1*1! + 2*2! + 3*3! ... n*n! ? - Summing Consecutive Integers [08/30/1998] Express 1994 as a sum of consecutive positive integers, and show that this is the only way to do it.
<urn:uuid:f90288e5-3b7d-4bb2-ae8a-2b884a1e7fda>
2.984375
1,888
Q&A Forum
Science & Tech.
98.9811
Ice melt found across 97 percent of Greenland, satellites show
Three satellites found that 97 percent of Greenland -- the land mass second only to Antarctica for its volume of ice -- underwent a thaw never before seen in 33 years of satellite tracking, NASA reported Tuesday. Satellite experts at first didn't trust their readings, especially since they showed an incredible acceleration. Over four days, Greenland's ice sheet -- which covers 683,000 square miles -- went from 40 percent in thaw to nearly entirely in thaw. "This was so extraordinary that at first I questioned the result: Was this real or was it due to a data error?" Son Nghiem of NASA's Jet Propulsion Lab in Pasadena, Calif., said in NASA's statement about the findings. Scientists on the ground in Greenland had been reporting an unusually warm summer thaw, including damage at a snow airfield and strong runoff threatening a bridge, Tom Wagner, who manages NASA's ice research programs, told NBC News. Ice cores from Greenland's highest region do reveal that such island-wide thaws have happened every 150 years or so, at least over the last few thousand years, but the fear now is that it might occur much more frequently due to warming sea and air temperatures.
http://worldnews.nbcnews.com/_news/2012/07/24/12927340-ice-melt-found-across-97-percent-of-greenland-satellites-show?lite
<urn:uuid:87f05965-180a-4ec9-b72d-d96ef4f780db>
3.265625
304
Comment Section
Science & Tech.
57.433762
National-scale analyses of habitat associations of Marsh Tit Poecile palustris and Blue Tits Cyanistes caeruleus: two species with opposing population trends in Britain
Carpenter, Jane; Smart, Jennifer; Amar, Arjun; Gosler, Andrew; Hinsley, Shelley; Charman, Elizabeth. 2010. National-scale analyses of habitat associations of Marsh Tit Poecile palustris and Blue Tits Cyanistes caeruleus: two species with opposing population trends in Britain. Bird Study, 57 (1): 31-43. doi:10.1080/00063650903026108
Capsule: Marsh Tits were strongly associated with both the amount and species diversity of woodland understorey; Blue Tits were associated with large trees and deadwood.
Aims: To gather quantitative information on the habitat requirements of Marsh Tits, in comparison with those of Blue Tits, across a large number of sites in England and Wales, and secondly to evaluate the range of habitat conditions likely to encourage the presence, and increase the abundance of, each species.
Methods: Counts of birds were made at each of 181 woods across England and Wales, and habitat data were collected from the same locations in each woodland. Marsh Tit and Blue Tit presence and abundance were related to habitat characteristics, interspecific competition and deer impact.
Results: Shrub cover and species diversity were important for the presence and abundance of Marsh Tits, across their geographical range in Britain. Blue Tits were associated with large trees and deadwood.
Conclusion: Our results support the hypothesis that changes in woodland management, leading to canopy closure and a decline in the understorey available, could have had an impact on Marsh Tits, and may have led to the observed population decline. These same changes were also consistent with population increase in Blue Tits.
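Habitat-association studies of this kind typically relate presence/absence to habitat covariates with models such as logistic regression. The sketch below is a generic illustration with fabricated data; the paper's own statistical models are not reproduced here:

```python
# Logistic regression of species presence on one habitat covariate, fitted by
# gradient ascent on the log-likelihood. All data fabricated for illustration.
import math

# (shrub_cover_fraction, present) pairs for hypothetical woods
data = [(0.1, 0), (0.2, 0), (0.3, 0), (0.4, 1), (0.5, 0),
        (0.6, 1), (0.7, 1), (0.8, 1), (0.9, 1), (0.25, 0)]

b0, b1, lr = 0.0, 0.0, 0.1
for _ in range(20_000):
    g0 = g1 = 0.0
    for x, y in data:
        p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))  # predicted presence prob.
        g0 += (y - p)
        g1 += (y - p) * x
    b0 += lr * g0
    b1 += lr * g1

# A positive slope means presence probability rises with shrub cover.
print(f"intercept = {b0:.2f}, slope = {b1:.2f}")
```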
<urn:uuid:f2714b33-37a7-4617-8528-381a050da89c>
2.734375
448
Academic Writing
Science & Tech.
23.993665
Reallocation of compensation releases to restore river flows and improve instream habitat availability in the Upper Derwent Catchment, Derbyshire, UK
Maddock, I.P., Bickerton, M.A., Spence, R. and Pickering, T. (2001), Reallocation of compensation releases to restore river flows and improve instream habitat availability in the Upper Derwent Catchment, Derbyshire, UK. Regulated Rivers: Research & Management (Special Issue: Eighth International Symposium on Regulated Streams), 17 (4-5): 417-441. doi:10.1002/rrr.663
Keywords: compensation flow; flow reallocation; physical habitat; River Derwent
The Upper Derwent catchment is situated in the Peak District National Park in North Derbyshire, England and includes the Derwent Valley Reservoir System. The natural inflows to the reservoir system are boosted by flow diversion schemes from the River Ashop and River Noe, leaving almost dry stretches in these rivers for long periods of time. Compensation releases are made into Jaggers Clough and the River Derwent. This study examined the possibility of altering the operation of the diversion scheme and compensation flow releases, both temporally and spatially, to restore flows within these dry reaches. The overall intention was to minimize the ecological impacts of regulation in the four rivers whilst protecting the yield of this critical public water supply. The study utilized the Physical Habitat Simulation System (PHABSIM) to identify and compare feasible operational changes. This technique enables quantitative comparisons of the suitable habitat available under different flow regime scenarios. Brown Trout is the most abundant fish species in the Upper Derwent streams, with Grayling, Brook Lamprey and Bullhead also present. The invertebrate fauna is typical of upland streams with neutral to acid waters. The ecological data were assessed to identify suitable target species/life stages for use with PHABSIM. Brown Trout, Grayling and four invertebrate families (Rhyacophilidae, Leuctridae, Chloroperlidae and Heptageniidae) were selected. Habitat mapping along four stretches of river totalling 10 km was carried out in the summer of 1998, followed by PHABSIM fieldwork on 24 transects in the autumn. This information was utilized to examine the tradeoffs in habitat availability between reinstating flows in the dry stretches of river, and reducing compensation flows elsewhere to minimize the supply impact. Various operating scenarios were examined and two sets of compensation control rules proposed for normal and drought years. Each set included seasonal variability in the rules. The PHABSIM work described here is the first stage in the process of developing a more ecologically acceptable flow regime in the Upper Derwent catchment. The decision on the final implementation will be subject to further resource modelling and negotiation between the Environment Agency, the water company and local interested stakeholders. Copyright © 2001 John Wiley & Sons, Ltd.
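PHABSIM's central output is weighted usable area (WUA): at each simulated flow, every cell's area is weighted by suitability indices for depth and velocity for the target species and life stage. The sketch below illustrates that calculation with invented suitability curves and cell values, not the study's data:

```python
# Weighted Usable Area: sum of cell areas weighted by habitat suitability.
# Suitability curves and cell values below are invented for illustration.
def suitability(value, curve):
    """Piecewise-linear interpolation of a suitability index curve."""
    pts = sorted(curve.items())
    if value <= pts[0][0]:
        return pts[0][1]
    for (x0, s0), (x1, s1) in zip(pts, pts[1:]):
        if value <= x1:
            return s0 + (s1 - s0) * (value - x0) / (x1 - x0)
    return pts[-1][1]

depth_si    = {0.0: 0.0, 0.2: 0.4, 0.5: 1.0, 1.5: 0.6}   # metres -> suitability
velocity_si = {0.0: 0.2, 0.3: 1.0, 0.8: 0.5, 1.5: 0.0}   # m/s -> suitability

# (cell area m^2, depth m, velocity m/s) at one simulated discharge
cells = [(12.0, 0.35, 0.25), (9.5, 0.60, 0.55), (15.0, 0.15, 0.10)]

wua = sum(a * suitability(d, depth_si) * suitability(v, velocity_si)
          for a, d, v in cells)
print(f"WUA = {wua:.1f} m^2 for this reach at this flow")
```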
<urn:uuid:a240e684-c968-4141-bae3-0d182c53689e>
2.71875
724
Academic Writing
Science & Tech.
29.190298
The New York Times has taken notice of the history and philosophy of chemistry in a small piece about a new book, The Periodic Table: Its Story and Significance by Eric R. Scerri. In particular, the Times piece notes the issue of whether Dmitri Ivanovich Mendeleev was "borrowing" from the work of others (without acknowledging that he had done so) when he put forward his version of the periodic table of the elements: The first [of six scientists who formulated periodic tables before Mendeleev] was a French geologist named Alexandre Emile Béguyer de Chancourtois, but his publisher was unable to publish the complex diagram of the periodic table that he submitted with the article, according to Scerri, a chemist at the University of California at Los Angeles. Although Mendeleev said the idea for the table came to him in a dream one night during a time when he toiled over a textbook, the Russian probably had a peek at Chancourtois' work. "I frankly don't believe it," Scerri, in a prepared statement, said of Mendeleev's historical claims. "Mendeleev wasn't isolated in Siberia, which is the way he is sometimes portrayed. He spoke all the major European languages, was familiar with the literature and had traveled in Europe. He mentioned the precursors of the periodic table, but not the ones who actually devised systems. He surely must have known about them." Does this mean Mendeleev should be knocked out of the scientific pantheon? No. His version of the table became the standard and the fundamental organizing principle of modern chemistry. He also championed the idea until it became widely accepted, and he was a celebrated scientific figure who helped refine industrial chemistry. Mendeleev should have acknowledged any earlier versions of the periodic table of which he was aware — even if he didn't feel that they had influenced his own thinking in formulating his famous version of the periodic table. That's how scientists are supposed to behave. Why wouldn't he acknowledge the efforts of other scientists here? It beats me. I don't know whether the project of systematizing was viewed differently by Mendeleev and his colleagues than, say, the discovery of a new substance or a new reaction. I also don't know whether animosity toward the French (which seems to come up a lot in the history of science) could explain his lapse in acknowledging Chancourtois' work. Or maybe it was standard-issue human frailty. Indeed, my sense is that Mendeleev probably had very little to lose by acknowledging earlier versions of the periodic table, simply because his was a more useful way to systematize the elements. Chemists already knew that there were important chemical trends to attend to. Mendeleev's table captured the organizing principles that made the most sense of these trends.
<urn:uuid:c2e46275-c6c8-4959-b2e4-12191ab18cb9>
2.796875
620
Personal Blog
Science & Tech.
38.097028
How does tidal power work?
When the tide rises and falls, the moving water carries energy that can be harnessed to generate electricity. To capture it, a barrage is built across the mouth of a river or estuary, and water turbines are installed inside the barrage. When water rushes through these turbines, it generates electricity. To produce a significant amount of energy, the tidal range must be large and a substantial volume of water must flow through the turbines; a range of approximately 4 to 5 metres is required to generate a worthwhile amount of electricity.
It is important to identify locations that offer suitable, sustainable conditions for tidal generation, and there are plenty of places around the globe that do. For instance, the Bay of Fundy in Canada has the highest tidal ranges in the world, averaging 10.8 metres.
Ongoing tidal energy projects
There are many ongoing tidal power projects worldwide. The largest tidal energy station, and the only one in Europe, is on the Rance estuary in northern France; it was commissioned in 1966.
Proposed tidal energy projects worldwide
The Severn Barrage in Wales has been proposed in the past but never got started. Its estimated cost is about £15 billion. Some estimates say it would produce a massive 8,000 MW, more than the output of 12 nuclear power stations, while other proposals put the figure at 2,500 MW. With such a large gap between the generation estimates, it is hard to justify risking £15 billion, and this is the main reason the project has yet to start.
Other projects which are yet to be completed:
|Tidal Power Project|Capacity|Expected completion|
|CORE Project, Canada|15 MW|2011|
|Wando Hoenggan Waterway, South Korea|300 MW|2015|
|Kaipara Harbour, New Zealand|200 MW|2018|
|Pentland Firth Tidal Energy Project, Scotland|10 MW||
|Islay Project, Scotland|2 MW||
Cost estimation for a tidal energy station
The cost of developing a tidal power station varies from project to project: a plant whose capacity is measured in megawatts obviously costs far more than one measured in kilowatts. Tidal power plants are rarely built at kilowatt scale; the technology is more suitable and economical for large-scale electricity generation. For instance, the Severn Estuary project in the UK was costed at US $15 billion for 8,000 MW, and a 2,200 MW tidal power station has been proposed for San Bernardino.
Advantages of a tidal energy station
There are many advantages to renewable sources such as tidal energy when compared with fossil fuels. The points below make the case for tidal energy and against continued reliance on fossil fuels.
- Once a tidal power plant is built, its electricity is free.
- It emits no greenhouse gases or other pollutants into the environment.
- It has no dependency on any fossil fuel, including furnace oil and gas; it needs no oil whatsoever to produce electricity.
- Tidal power is renewable: the tides and waves of the same water generate electricity over and over again.
- Like all renewable energy, tidal power is clean and leaves relatively little impact on the environment.
- Tidal power plants do not require much routine upkeep, so running costs are low.
- Tidal energy stations are about 80% efficient, whereas fossil-fuel plants run at roughly 30% efficiency.
- Ocean tides are very predictable; it is easy to judge when strong tides will arrive, given weather and other conditions.
- The stronger the tides and waves, the more efficient the station.
- Electricity output does not fluctuate as sharply as it does with solar power.
Disadvantages of a tidal energy station
- Tidal power plants are not cost-effective to build: millions of dollars are required to develop plants that deliver megawatts of capacity.
- They cause some environmental change, disturbing the area beneath and upstream of the entire plant.
- There are not many places in the world where a tidal power plant can be built; a site needs an ocean with a sufficient flow of waves and tides.
- Tidal power plants can only be built in the water; they cannot be built inland like solar or wind farms.
- Electricity is produced only while the tides are running; when the ocean is calm, no power is generated, so a plant may produce electricity for only about 10 hours a day.
- Installing the barrage is a difficult task, as every part of the turbine must be placed in exactly the right position.
- Barrage components remain permanently under water and must be salt-resistant, so they do need maintenance from time to time.
- Turbine blades can affect life under water and, in some cases, obstruct ships passing through the area around the station.
- Electricity produced from tidal power is expensive compared with conventional generation from oil, coal or natural gas.
- During the development phase the local ecosystem is disturbed, affecting marine life and the fishing businesses in the area.
- Turbines can sometimes fail, and repairing them is a very tough job. Tidal technology is in use, but it is not yet fully mature.
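For a rough sense of the scale involved, the potential energy released by a barrage each tide can be approximated by the standard estimate E = ½ρgAR², where ρ is seawater density, A the basin area and R the tidal range. The Python sketch below uses illustrative numbers (a hypothetical 20 km² basin, not any of the projects listed above):

```python
# Rough energy estimate for a tidal barrage basin: E = 0.5 * rho * g * A * R^2
rho = 1025.0      # seawater density, kg/m^3
g = 9.81          # gravitational acceleration, m/s^2
area = 20e6       # basin area, m^2 (20 km^2, illustrative)
tide_range = 5.0  # tidal range, m

energy_j = 0.5 * rho * g * area * tide_range**2
cycles_per_day = 1.9  # roughly two tides per day
avg_power_w = energy_j * cycles_per_day / 86400.0

print(f"Energy per tide: {energy_j / 3.6e9:,.0f} MWh")
print(f"Average power (before conversion losses): {avg_power_w / 1e6:,.1f} MW")
```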
<urn:uuid:8f89c698-8733-435e-9bb3-de223b3f10e2>
4.09375
1,312
Knowledge Article
Science & Tech.
32.592415
If you've ever read the book The Physics of Star Trek by Lawrence M Krauss (and if you haven't, you should!), you already know that in 1994 physicist Miguel Alcubierre proved that a Star Trek-like warp drive was theoretically possible, but would require an insane amount of energy. Interesting, but not altogether useful. It seems that a solution to this has been found which makes the power requirements for a functional warp drive reasonable and plausible. The article below, from Space.com, is poorly organized and disjointed, and appears to have been cut and pasted from a longer press release. Nonetheless, it gives the general sense of the discovery. If in fact we have the ability to build a drivetrain which propels vessels at speeds which are functionally greater than the speed of light but physically lower than 5% of the speed of light, then interstellar travel is genuinely within our grasp. This research is still in its infancy. But it is the first real indication I've ever seen that we could ever attain supralight transport. If we can, presumably others can as well. This potentially has enormous ramifications for the Fermi Paradox. Even without supralight drivetrains, the apparent absence of technologically capable extraterrestrial species here on Earth is statistically suspect. If warp drive or other technologies can provide feasible faster-than-light travel, the plausibility that we have not been contacted by other worlds becomes vanishingly slim. Anyway, here's the article from Space.com. ========================================== SPACE.COM — A warp drive to achieve faster-than-light travel — a concept popularized in television's Star Trek — may not be as unrealistic as once thought, scientists say. A warp drive would manipulate space-time itself to move a starship, taking advantage of a loophole in the laws of physics that prevent anything from moving faster than light. A concept for a real-life warp drive was suggested in 1994 by Mexican physicist Miguel Alcubierre; however, subsequent calculations found that such a device would require prohibitive amounts of energy. Now physicists say that adjustments can be made to the proposed warp drive that would enable it to run on significantly less energy, potentially bringing the idea back from the realm of science fiction into science. An Alcubierre warp drive would involve a football-shaped spacecraft attached to a large ring encircling it. This ring, potentially made of exotic matter, would cause space-time to warp around the starship, creating a region of contracted space in front of it and expanded space behind. Meanwhile, the starship itself would stay inside a bubble of flat space-time that wasn't being warped at all. "Everything within space is restricted by the speed of light," explained Richard Obousy, president of Icarus Interstellar, a non-profit group of scientists and engineers devoted to pursuing interstellar spaceflight. "But the really cool thing is space-time, the fabric of space, is not limited by the speed of light." With this concept, the spacecraft would be able to achieve an effective speed of about 10 times the speed of light, all without breaking the cosmic speed limit. The only problem is, previous studies estimated the warp drive would require a minimum amount of energy about equal to the mass-energy of the planet Jupiter. But recently Harold White, a physicist at NASA's Johnson Space Center, calculated what would happen if the shape of the ring encircling the spacecraft was adjusted into more of a rounded donut, as opposed to a flat ring.
He found in that case, the warp drive could be powered by a mass about the size of a spacecraft like the Voyager 1 probe NASA launched in 1977. Furthermore, if the intensity of the space warps can be oscillated over time, the energy required is reduced even more, White found. "The findings I presented today change it from impractical to plausible and worth further investigation," White told SPACE.com. "The additional energy reduction realized by oscillating the bubble intensity is an interesting conjecture that we will enjoy looking at in the lab." White and his colleagues have begun experimenting with a mini version of the warp drive in their laboratory. They set up what they call the White-Juday Warp Field Interferometer at the Johnson Space Center, essentially creating a laser interferometer that instigates micro versions of space-time warps. "We're trying to see if we can generate a very tiny instance of this in a tabletop experiment, to try to perturb space-time by one part in 10 million," White said. He called the project a "humble experiment" compared to what would be needed for a real warp drive, but said it represents a promising first step. And other scientists stressed that even outlandish-sounding ideas, such as the warp drive, need to be considered if humanity is serious about traveling to other stars. "If we're ever going to become a true spacefaring civilization, we're going to have to think outside the box a little bit, we're going to have to be a little bit audacious," Obousy said.
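The size of the claimed improvement is easy to put in perspective with E = mc². The sketch below compares the mass-energy of Jupiter with that of a Voyager-class probe; the masses are rounded public figures, and the calculation is only a back-of-the-envelope check of the article's comparison:

```python
# Compare the mass-energy of Jupiter with that of a Voyager-class probe.
C = 2.998e8            # speed of light, m/s
M_JUPITER = 1.898e27   # kg
M_VOYAGER = 825.0      # kg, approximate launch mass of Voyager 1

e_jupiter = M_JUPITER * C**2
e_voyager = M_VOYAGER * C**2
print(f"Jupiter mass-energy : {e_jupiter:.3e} J")
print(f"Voyager mass-energy : {e_voyager:.3e} J")
print(f"Reduction factor    : {e_jupiter / e_voyager:.1e}")  # ~2e24
```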
<urn:uuid:da074a14-e5cc-4b77-a838-c759fbf1928d>
3.4375
1,034
Personal Blog
Science & Tech.
40.081839
What is Biodiversity? Definition: Biodiversity or Biological diversity is simply the variety of life - all life in all forms. Biodiversity includes all plants, animals and microorganisms, their genetic material, the variety of species, and the ecosystems of which they are part. It is this variation that allows plants and animals and human beings to adapt to the different climates, soils, waters and other physical aspects of our world. Biodiversity enables ecosystems, landscapes and human settlements to function successfully, providing essential services like air to breathe, clean water, increased soil fertility and structure as well as food and shelter.
<urn:uuid:98a059f1-c302-4063-95de-b1b268c174db>
3.4375
129
Knowledge Article
Science & Tech.
20.330796
The trouble with aposematism, though, is that it requires giving up another, more common defensive color scheme: camouflage. If you're a poisonous critter, and you evolve bright coloration for the first time, predators don't yet know that you're poisonous - but you're really brightly colored and easy to see. How, then, does aposematism evolve from non-aposematic ancestors? A new study on early release from Biology Letters suggests that it isn't easy. The authors, Noonan and Comeault, set out to determine whether brightly-colored poison dart frogs are more likely to be attacked when they evolve new color patterns [$-a]. It's possible that the frogs' predators avoid all brightly-colored prey regardless of pattern, in which case new frog patterns would be just as good for predator deterrence as the old ones. But it's also possible that predators only avoid patterns they've run across (and spat out) before - so that new, rare patterns would have all the disadvantages of giving up camouflage with none of the benefits of aposematism. Photo by dbarronoss. Noonan and Comeault performed an elegant behavioral experiment, setting out clay model frogs in an area where frogs of one color pattern predominate. One set of models matched the local color pattern, another was brightly colored but different from the local pattern, and a third was drab and camouflaged. Birds were much more likely to attack the "new" color pattern than either the "local" version or the drab one. This result is hard to understand on a first pass - if new color patterns are vulnerable to attack, how can aposematism evolve in the first place? The answer is, not by natural selection, but by genetic drift. Genetic drift is a natural, mathematical consequence of finite populations: imagine a bag full of marbles, half of them black and half white. If you pull a sample of marbles from the bag, you expect them to be half black and half white on average (i.e., over many samples) - but any individual sample might have a very different frequency of white and black marbles, especially if it's small. If the probability of picking a white marble from the bag is 0.5 (because half the marbles are white), then the probability of picking a sample of four white marbles is 0.5 × 0.5 × 0.5 × 0.5 = 0.0625. That's a small probability, but not zero. Drift is a very real effect in the natural world, especially during the establishment of new local populations, when the population size is initially quite small. The key to understanding Noonan and Comeault's result is that aposematism is frequency dependent - it favors not the old pattern as such, but whatever bright color pattern is most common in the frog population. Birds attacked the "local" color pattern at a low rate, which suggests that they're always re-learning which pattern to avoid. A new color pattern might be hard to establish within a population of frogs that look very different from it, but if a new pattern pops up in the course of establishing a new population, then - thanks to genetic drift - it may be common enough for predators to learn to avoid it. B.P. Noonan, A.A. Comeault (2008). The role of predator selection on polymorphic aposematic poison frogs. Biology Letters DOI: 10.1098/rsbl.2008.0586
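The founder-effect argument extends the marble arithmetic above directly. The following sketch (with arbitrary illustrative parameters, not values from the study) simulates how often a rare morph ends up as the majority of a small founding group through sampling alone:

```python
# How often does a rare morph (frequency 5%) become the majority in a small
# founding population? Pure sampling drift, no selection involved.
import random

random.seed(42)
SOURCE_FREQ = 0.05   # frequency of the new color pattern in the source population
FOUNDERS = 4         # size of each founding group
TRIALS = 100_000

majority_new = sum(
    1
    for _ in range(TRIALS)
    if sum(random.random() < SOURCE_FREQ for _ in range(FOUNDERS)) > FOUNDERS / 2
)
# A small but nonzero fraction of founding groups start out mostly new-morph.
print(f"New morph is the founding majority in {100 * majority_new / TRIALS:.2f}% of trials")
```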
<urn:uuid:639cf911-3b9f-4dfd-b0c0-9f485bd4a940>
3.953125
731
Personal Blog
Science & Tech.
54.279549
After the conclusion of space shuttle Atlantis's final mission, NASA will roll the shuttle fleet into retirement, also grounding US-backed human space travel for the foreseeable future. Since the US does not have a ready replacement to send crews into space, US astronauts en route to the International Space Station will have to find transportation via Russian Soyuz space flights, which can cost up to $56 million per seat. NASA has been directed by President Obama and Congress to develop a capsule, the Multi-Purpose Crew Vehicle, for possible future flights, though there are no concrete dates in place for its completion. During Wednesday's Twitter town hall meeting, President Obama said he would like to see NASA focus on groundbreaking research, including discovering new ways to live in space and sending astronauts to destinations like Mars, an asteroid, or the further reaches of deep space. Unfortunately, the general public will not see the launch of new space capsules or missions for several years, possibly a decade or more. The shuttle program may be coming to an end, but research and broader NASA missions have yielded technological findings that affect life on the Earth's surface. Click through for three recent uses of NASA technology at home.
- Based on lighting technology developed for plant growth research on the space shuttle, surgeons have used Photodynamic Therapy to treat brain cancer with successful results.
- Algorithms developed for the Hubble Telescope were used to improve the image processing techniques in mammography.
- Miniature sensors that explore air on other planets for traces of life eventually led to the development of hand-held devices to detect explosives and chemical agents in combat situations.
<urn:uuid:9f689778-74f0-4e84-9528-5e18aea8cdc5>
3.59375
322
Content Listing
Science & Tech.
25.832857
The angles in triangle ABC satisfy 6 sin∠A = 3√3 sin∠B = 2√2 sin∠C. If sin 2∠A = a/b, where a and b are coprime positive integers, what is the value of a + b?
ABCD is a parallelogram. Let C′ be a point on AC extended such that the length of AC′ = 1.2 AC. Let D′ be on the segment BD such that the length of BD′ = 0.9 BD. Find the ratio of the area of the quadrilateral ABC′D′ to the area of the parallelogram ABCD.
At what height above the Earth's surface does the acceleration due to gravity fall to 1% of its value at the Earth's surface?
The sum of the first 4 terms of a G.P. is 15/32 and the sum to infinity of the same G.P. equals 1/3.
What is the integration of dx/cos^4(x)?
Find the average velocity of a projectile between the instants it crosses half the maximum height. It is projected with a speed u at an angle θ (theta) with the horizontal.
How many grams of oxygen are present in 36 grams of water?
In this concept, a = force/mass. If the mass is doubled, a will be halved, according to this relation.
Find the maximum value of √3 cosθ − sinθ.
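For the gravity question above, there is a tidy closed form: since g(h) = g₀R²/(R+h)², g falls to 1% of its surface value when (R+h)² = 100R², i.e. h = 9R. A quick numerical check (assuming the Earth's mean radius of 6371 km):

```python
# Height where g drops to 1% of its surface value: g0 * R^2 / (R+h)^2 = 0.01 * g0
R_EARTH_KM = 6371.0

h = (100 ** 0.5 - 1) * R_EARTH_KM  # (R + h) = 10 R  =>  h = 9 R
print(f"h = {h:,.0f} km (nine Earth radii)")
```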
<urn:uuid:8a89f3c2-d63f-485e-9209-be3d64baf8f8>
2.84375
330
Content Listing
Science & Tech.
90.436576
Why doesn't our summer weather usually begin until after July 4th...instead of near the solstice? The answer is related to our northerly latitude and our coastal location...on the west coast! The Pacific High, which blocks incoming disturbances and typically brings a drier northerly flow, takes time to build northward. The farther north your location...the later the Pacific High provides protection. Although the jet stream weakens and retreats northward by late spring and summer, our far northerly latitude exposes us to more disturbances carried by the jet. We're as far north as Bemidji, Minnesota and St. John's, Newfoundland! And water temperatures are slower to rise than air temperatures. Given the frequent motion of air over the Pacific on its way inland, that slow warming is also a major factor.
<urn:uuid:aec3885a-f5e3-4e7c-a2ec-4a004a125881>
3.203125
168
Q&A Forum
Science & Tech.
58.93308
The Earth And Asteroids
Name: david a hart
Date: 1993 - 1999
Will Earth ever be hit by a giant asteroid or several?
How often the Earth is hit depends on the size of the asteroid or meteor. One as large as the one that took out the dinosaurs 65 million years ago probably only hits the Earth every few hundred million years. Astronomers are trying to identify all the Earth-orbit-crossing asteroids, so that in the future we might have enough warning before one hits that we can do something about it. Currently, only about 1% have been identified.
<urn:uuid:e943cf75-5256-4e9d-9090-7f273284f9ac>
3.328125
140
Knowledge Article
Science & Tech.
46.638864
Probing Ecosystem Resilience to Climate Change in Arctic-Alpine Plants
de Witte, L.C., Armbruster, G.F.J., Gielly, L., Taberlet, P. and Stocklin, J. 2012. AFLP markers reveal high clonal diversity and extreme longevity in four key arctic-alpine species. Molecular Ecology 21: 1081-1097.
In discussing their findings, de Witte et al. report that "the oldest genets of D. octopetala, S. herbacea and V. uliginosum were found to be at least 500, 450 and 1400 years old, respectively," but they say that "the largest C. curvula genet had an estimated minimum age of c. 4100 years and a maximum age of c. 5000 years, although 84.8% of the genets in this species were <200 years old."
The French and Swiss scientists say their results indicate that "individuals in the studied populations have survived pronounced climatic oscillations, including the Little Ice Age and the postindustrial warming," and they note that "the presence of genets in all size classes and the dominance of presumably young individuals suggest repeated recruitment over time," which they say is "a precondition for adaptation to changing environmental conditions." Therefore, they conclude that, acting together, "persistence and continuous genet turnover may ensure maximum ecosystem resilience," noting that their results indicate that "long-lived clonal plants in arctic-alpine ecosystems can persist, despite considerable climatic change," and that they "may indeed show a previously underestimated resilience to changing climatic conditions." In fact, they say their findings suggest that "moderate climate change with an average temperature increase of 1.8°C over the next hundred years and a moderate frequency of extreme climatic events will not lead to local extinctions of long-lived clonal plant populations [italics added]."
<urn:uuid:c03ac71d-ee13-4cd7-b1c9-70aa5c4b9331>
3.03125
607
Academic Writing
Science & Tech.
52.910317
An ambitious project to create an accurate computer model of the brain has reached an impressive milestone. Scientists in Switzerland working with IBM researchers, part of the Blue Brain Project, have shown that their computer simulation of the neocortical column, arguably the most complex part of a mammal's brain, appears to behave like its biological counterpart. By demonstrating that their simulation is realistic, the researchers say, these results suggest that an entire mammal brain could be completely modeled within three years, and a human brain within the next decade. Also, by mimicking the behavior of the brain down to the individual neuron, the researchers aim to create a modeling tool that can be used by neuroscientists to run experiments, test hypotheses, and analyze the effects of drugs more efficiently than they could using real brain tissue.
(Figure: This representation shows the connectivity of the 10,000 neurons and 30 million connections that make up a single mammalian neocortical column, the basic building block of the cortex. The different colors correspond to different levels of electrical activity.)
The project began with the initial goal of modeling the complexity of this part of a rat's brain using a supercomputer. The neocortical column was chosen as a starting point because it is widely recognized as being particularly complex, with a heterogeneous structure consisting of many different types of synapse and ion channels. As the project lead, Henry Markram, points out: "There's no point in dreaming about modeling the brain if you can't model a small part of it." The model itself is based on 15 years' worth of experimental data on neuronal morphology, gene expression, ion channels, synaptic connectivity, and electrophysiological recordings of the neocortical columns of rats. Software tools were then developed to process this information and automatically reconstruct physiologically accurate 3-D models of neurons and their interconnections. Having created a biologically accurate computer model of a neocortical column, scientists are now planning to model the entire human brain within just 10 years.
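To give a flavor of what "simulating a neuron" means at the very simplest level, far below the detailed, data-driven models described here, the sketch below implements a textbook leaky integrate-and-fire neuron. It is a generic illustration, not Blue Brain code, and all parameter values are standard teaching defaults:

```python
# Leaky integrate-and-fire neuron: the membrane voltage decays toward rest,
# integrates injected current, and emits a spike when it crosses threshold.
tau_m, v_rest, v_thresh, v_reset = 20.0, -65.0, -50.0, -65.0  # ms, mV, mV, mV
dt, t_max, r_m, i_inj = 0.1, 100.0, 10.0, 1.8                 # ms, ms, MOhm, nA

v, spikes, t = v_rest, [], 0.0
while t < t_max:
    dv = (-(v - v_rest) + r_m * i_inj) / tau_m * dt  # leaky integration step
    v += dv
    if v >= v_thresh:          # threshold crossing: record a spike, then reset
        spikes.append(round(t, 1))
        v = v_reset
    t += dt

print(f"{len(spikes)} spikes in {t_max:.0f} ms; first at t = {spikes[0]} ms")
```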
<urn:uuid:9d0a47f8-6573-451f-b0e1-b6939cb8ef8a>
3.90625
396
Knowledge Article
Science & Tech.
23.189136
The CWEBx system is a system for Structured Software Documentation (also known as Literate Programming) in the programming language C. It is a derivative of the CWEB system by Silvio Levy and Donald E. Knuth, who originally conceived the idea of Literate Programming; CWEBx is a compatible extension of CWEB. CWEBx is distributed as a gzipped tar archive. There is a summary of changes with respect to the previous release, called CWEB 3.x, and the announcement as posted on the news group comp.programming.literate. More documentation is available in the distribution itself. Before you can use CWEBx, you must have an (ANSI) C compiler installed, as well as the TeX system for document preparation (a very basic installation suffices; no LaTeX or special fonts are required). Very briefly, literate programming is a methodology for writing computer programs in such a way that they not only instruct the computer how to perform some task, but also explain to you, and other people, how and why the computer does what it is expected to do (or, if you are less fortunate, help you find out why it doesn't). Literate programming allows you to include explanations not only at the level of single statements, as traditional comments do, and at the global level, as documents accompanying the software can, but also at every intermediate level. This is achieved by breaking up a program into small pieces according to its logical (as opposed to textual) structure, and presenting these pieces, each accompanied by an explanatory text, in the order best suited to understanding the design of the program; the result has some resemblance to a hypertext document (the use of the term WEB has the same connotation as in World Wide Web, but its use in connection with literate programming dates back much further). Take a look at the sample program from the CWEBx manual to see what the resulting program (or actually, the document describing it) looks like; then look at the source for the sample program for comparison. Marc van Leeuwen.
<urn:uuid:07389b07-7822-4794-8dd4-7161b8a1cc7c>
2.921875
443
Knowledge Article
Software Dev.
36.359565
Young Star Cluster Found Aglow With Mysterious X-Ray Cloud At a distance of 6,000 light years from Earth, the star cluster RCW 38 is a relatively close star-forming region. This image covers an area about 5 light years across, and contains thousands of hot, very young stars formed less than a million years ago. X-rays from the hot upper atmospheres of 190 of these stars were detected by Chandra. In addition to the point-like emission from stars, the Chandra image revealed a diffuse cloud of X-rays enveloping the star cluster. The X-ray spectrum of the cloud shows an excess of high-energy X-rays, which indicates that the X-rays come from trillion-volt electrons moving in a magnetic field. Such particles are typically produced by exploding stars, or in the strong magnetic fields around neutron stars or black holes, none of which is evident in RCW 38. One possible origin for the high-energy electrons is an undetected supernova that occurred in the cluster. Although direct evidence for such a supernova could have faded away thousands of years ago, a shock wave or a rapidly rotating neutron star produced by the outburst could be acting in concert with particles evaporating off the young stars to produce the high energy electrons. Regardless of the origin of the energetic electrons, their presence could change the chemistry of the disks that will eventually form planets around stars in the cluster. For example, in our own solar system, we find evidence of certain short-lived radioactive nuclei (Aluminum 26 being the most well known). This implies the existence of a high-energy process late in the evolution of our solar system. If our solar system was immersed for a time in a sea of energetic particles, this could explain the rare nuclides present in meteorites found on Earth today.
<urn:uuid:05689212-0a55-45bf-98f9-1fb8cfd885ed>
3.5
370
Knowledge Article
Science & Tech.
44.405784
The Mt. Redoubt volcano (located about 100 miles southwest of Anchorage, Alaska) produced a series of explosive eruptions beginning around 06:38 UTC on 23 March 2009. GOES-11 10.7 µm IR images (above) showed a few of the volcanic eruption clouds, which exhibited IR brightness temperature values of -50 to -58º C (yellow to red colors). Note that there was a 2 hour gap in the imagery, with no GOES-11 images available from 08:00 to 10:15 UTC — this was due to the fact that the GOES-11 satellite was in a “Spring eclipse” period, where the satellite was in the Earth’s shadow (and the solar panels could not generate the power necessary to operate the instruments). AWIPS images of the “GOES IR Satellite” data (below) demonstrated that the substitution of GOES-12 (GOES-East) imagery during the GOES-11 (GOES-West) eclipse period did not allow the continual tracking of the volcanic plume features (the images with missing data and the jagged edges are from GOES-12). The GOES-11 IR imagery indicated that most of the initial volcanic plumes headed toward the northeast, remaining to the north of Anchorage — but the plume from the later (and stronger) eruption that began after 12:30 UTC was seen to begin elongating and spreading out in more of a north-south direction, with the southern edge of that plume taking a path that appeared to be approaching Anchorage. Using AWIPS cursor sampling and referencing the 12:00 UTC Anchorage AK rawinsonde data, the coldest GOES-11 IR brightness temperature of -58º C corresponded to an altitude just over 30,000 feet — but the maximum height of the eruption cloud was reported to be as high as 50,000-60,000 feet above ground level. A number of Volcanic Ash Advisories were issued, and Alaska Airlines canceled 35 flights in and out of Anchorage International Airport as a precaution (since airborne volcanic ash is known to be a significant hazard to aviation). The beginning phase of the later, stronger eruption that began around 12:30 UTC can be seen on MODIS imagery (below). Note the cluster of very hot pixels on the 3.7 µm Channel 20 shortwave IR image (red color enhancement, temperatures as high as +57º C), which was a signature of the heat of the eruption at the summit of the volcano — in contrast, very cold IR brightness temperatures seen on the 11.0 µm Channel 31 IR image (as cold as -57º C, orange to red color enhancement) highlighted the portion of the volcanic eruption cloud that had reached very high altitudes in a very short time. A sequence of 1-km resolution NOAA-15, NOAA-17 and NOAA-18 10.8 µm IR images (below) shows a few of the initial volcanic plume features (circled in cyan) at 4 different times — 06:52 UTC, just after the initial eruption; 11:46 UTC, with an elongated plume which had drifted off to the northeast of Redoubt; 13:27 UTC, with a more dense plume feature that appeared to be spreading out in a NW-SE direction; and 14:30 UTC, showing another dense plume that had spread out even further in the N-S direction. The volcanic plume could also be seen on imagery from the WSR-88D radar located near Kenai, Alaska (below). Surface ash falls of 1/8 to 1/4 inch were reported at Skwentna (northwest of Anchorage), and ash was reported on all airport surfaces at Talkeetna (north of Anchorage) — in fact, ash was reported on the ground as far north as Healy. 
A 500-meter resolution MODIS “true color” Red/Green/Blue (RGB) composite image (below) shows a signature of ash fall on top of the pristine white snow cover of the Alaska Range (as denoted by the lighter brown tint). A higher resolution version is available from the Alaska Volcano Observatory. ===== 24 MARCH UPDATE ===== (courtesy of Mike Pavolonis, NOAA/NESDIS/ASPB) Shown below are some AVHRR ash retrievals (ash concentration, ash height, and ash effective particle radius) from the 23-24 March eruptions. All of these AVHRR products will be produced operationally by NOAA/NESDIS starting sometime next Spring (2010). Ash shows up as red in the accompanying false color images (upper left panels). Notice that the retrieved ash particle sizes are fairly large (mean effective radius of 7-10 micron). This may be one of the reasons that the 11 – 12 micron brightness temperature difference (BTD) signal was “weak” (very few negative BTD’s were present in the imagery). The presence of larger ash particles may also speak to the relatively quick dissipation of the visible ash cloud as seen in the imagery. Of course multi-layered meteorological clouds complicate matters further. Our retrieval takes into account the lower level meteorological clouds (limited by certain assumptions), but the complicated nature of the scene still results in additional uncertainty. Nevertheless, the results indicate that the largest amount of ash (only including ash that is not obscured by higher level hydrometeors), 155 kilo-tons, was seen after the 7:41 PM AKDT eruption on March 23 (03:41 UTC on March 24). We estimated the maximum height of the ash-dominated portion of the various volcanic clouds to be around 8-km, which is about 1 km shy of the Anchorage tropopause. The ice/SO2 dominated portion of the cloud likely went much higher. ===== 26 MARCH UPDATE ===== Another explosive eruption of the Mt. Redoubt volcano occurred around 17:24 UTC on 26 March 2009, sending ash to an estimated 65,000 feet. GOES-11 visible images (below) show the volcanic eruption cloud. The large viewing angle from the MTSAT-1R satellite offered a nice depiction of the initial volcanic eruption plume (below). Using a GOES-11 Sounder IR difference product — Band 10 (7.4 µm) minus Band 5 (13.3 µm) – that is sensitive to SO2, one can follow the signature of an SO2 plume (darker black filaments) as it moved southward from British Columbia in Canada over the Intermountain West region of the US (below). ===== 27 MARCH UPDATE ===== MODIS Band 26 near-IR (1.3µm) data can be used to detect particles that are good scatterers of light (such as cirrus ice crystals, airborne dust/haze/ash, etc); such scattering particles will exhibit a “brighter” signal on greyscale MODIS “cirrus detection” images. On 2 consecutive overpasses of the MODIS instrument (on Terra at 18:21 UTC, and on Aqua at 19:59 UTC) there was a subtle signal of an elevated volcanic plume that was oriented SW-NE across Iowa, southern Wisconsin, and far northern Illinois during the day on 27 March 2009 — this plume originated from one of the eruptions of the Redoubt volcano in Alaska a few days earlier. Ground-based lidar at the Space Science and Engineering Center (University of Wisconsin – Madison) depicted enhanced aerosol backscatter aloft, with multiple layers seen between 11-13 km around the time of the 2 MODIS images (below). 
Taking advantage of the large “forward scattering angle” of GOES-12 imagery late in the day, the volcanic plume could also be seen as a hazy feature on the visible channel imagery (below).
<urn:uuid:502ba1ee-11bd-4a4f-b0ee-02756936e4a6>
2.796875
1,632
Academic Writing
Science & Tech.
43.654271
Name: Liz S.

In the "General Science Topics" section archives, there was a question concerning recharging magnets. The answer given did not really help me at all. Was the answer saying that we could take our magnets somewhere and have someone recharge them? Is there any way a 5th grade teacher like myself can recharge permanent magnets?

You cannot achieve much of a "recharge" with ordinary equipment available to you. However, there is a way that will give a weak restorative effect to a bar (not horseshoe) magnet. Float the magnet atop a board in a plastic pan of water and allow it to come to rest "pointing north-south." Carefully remove it and hold it in the exact same N-S orientation as when it was floating on the board, then repeatedly strike one end of the magnet sharply with a hammer. Wear safety glasses lest a chip of metal break off. The blows from the hammer will jiggle the magnetic domains in the magnet to realign with the earth's magnetic field. The process will work on any bar made of iron. If you successfully perform the experiment, you can explain the "why" of it to your students.
<urn:uuid:2652cd91-1d3c-4530-adc1-f34258cb13f7>
3.203125
275
Q&A Forum
Science & Tech.
58.194956
What is an Array?

An array in PHP is actually an ordered map. What is a map? A map is a type that associates values with keys; you can treat it as a list, stack, hash table, dictionary, or array. An array can be created with the help of the array() language construct. It takes any number of parameters in the form of key => value pairs, separated by commas. The general format of an array is:

    array(key => value, key => value, ...)

An example of an array is given below:

    <?php
    $first = array('Name' => 'Rose', 18 => "true");
    echo $first['Name'] . '<br/>';
    echo $first[18];
    ?>

Output of the above example is:

    Rose
    true

In the above example, <?php ?> is the delimiter pair, which is mandatory for any PHP program; $first is the name of the variable; array() is the language construct; and 'Name' and 18 are the keys. Character (string) keys should be enclosed within single or double quotes, and they are case sensitive: if you declare a key as 'Name' and then write 'name' to access it, an error message will be generated. echo is a language construct, and $first['Name'] is how you actually access the array. The '.' operator is used to concatenate strings, and <br/> is used to break a line. Each line should end with a semicolon, but on the last line it is optional. A key may be either an integer or a string; floats used as keys are truncated to integers.
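The last point above, float keys being truncated, can be seen in a short script of its own (an illustrative addition, not from the original tutorial):

    <?php
    $second = array(1.7 => 'a');    // 1.7 is truncated to the integer key 1
    echo $second[1];                // prints: a
    var_dump(array_keys($second)); // shows: int(1)
    ?>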
<urn:uuid:549372a7-7ab1-4e37-8e05-533e8efea1f1>
3.640625
342
Documentation
Software Dev.
56.159359
Right after my previous article that discussed the historical problems in Antarctica, I found another article about Antarctica in National Geographic that uses modelling to answer the “mystery” of the Antarctic sea ice increase over the past 30 years. The reason it is a mystery is that the increase in sea ice coverage is contrary to the theory of global warming.

This paper got plenty of attention when it was released last August, and many, many problems have been pointed out by others, but most of the discussion has focused on the inappropriate definition of warming that has taken place in the Southern Ocean. The main problem that was brought up last summer is that there is very little accurate data prior to 1978 (the pre-satellite data problem once again, as usual). The paper is specific in its discussion of warming from 1950-1999, and the main prior discussion was about the lack of valid data for the pre-1978 period. It is easy to make a warming trend when half the period has no useful data.

What didn’t get discussed was the actual behavior of temperature and Antarctic sea ice. From 1979 to the present there is plenty of satellite data that provides good coverage of the area. Since that also coincides with the period that has accurate coverage of the sea ice, it’s worth taking a look at the behavior of the period in question. This is important because once again the results of the paper depend on GCM results. This paper is a great example of ignoring the obvious answers in favor of model results.

The biggest problem I have is trying to figure out how to organize the argument clearly enough to properly tear it apart… So I will start by building on the parts that are correct. The article even manages to start off on the right foot with this accurate summary of what has been happening to the sea ice in both polar regions:

“Satellite data show that, over the past 30 years, Arctic sea ice has declined while Antarctic sea ice has mysteriously expanded”

So far so good, that part is correct. It also makes another profoundly correct statement: that “the two polar ecosystems are so different” from each other. I fully agree that they are different, as I pointed out in my last article about Antarctica. Here are the charts for the two regions in question.

Since the warmists argue that the decreasing Arctic trend is caused by global warming, the theory should also cause the Antarctic sea ice to behave in the same manner. That is not happening. This is why the whole thing is a mystery: Antarctica is not behaving like their theory predicts, so GCMs once again to the rescue.

The Antarctic Circumpolar Current (ACC) dominates the climate of the ocean and of Antarctica. This can easily be seen in animations of the Antarctic sea ice. While I agree with the point that the ACC (which isn’t mentioned in the paper) is dominant, the article makes the following suspiciously unsupported and odd statement:

“Antarctic ice forms and melts each year and has always been governed more by wind and ocean circulation than air temperatures”

Based on that statement it would seem that wind and ocean cause the ice to form, not the actual temperature of the air. So does that mean warm air would cause the ocean to freeze? They make the argument that warming has been happening, but that rests on the lack of data prior to 1978, and they then ignore the modern satellite data of the region, which shows a slight cooling trend. The fact is that the RSS satellite data covers 60S to 70S latitude.
This is a very interesting region because it covers a majority of where the sea ice forms. The RSS data for the past 30 years shows a small cooling trend of -0.02 °C/decade for the region where the sea ice forms. Perhaps that is why they argue that air temperature doesn’t matter for the sea ice, because then the cause of the increasing ice would be decreasing temperature, but decreasing temperatures are also counter to the theory of global warming.

The UAH data covers from 60S to 85S. This covers the same region as the RSS and also extends much further into the Antarctic continent. The coverage that includes much more of the continent itself shows an even stronger cooling trend of -0.07 °C/decade. It would make sense that the ocean would change temperature at a slower rate than the land would. So the ocean is weakly cooling and the Antarctic continent is strongly cooling.

So the real purpose of the paper is to build a model that shows warming temperatures resulting in increased sea ice. They do this by saying that precipitation is causing the salinity of the Southern Ocean to drop, which is why the sea ice is increasing. So less salty water is the reason that the sea ice is expanding. The paper builds a model that shows how warming can cause the sea ice to increase. That is some really impressive modelling. I am sure that it is… something.

But wait, even the warmists are shredding this paper. Trenberth himself has waded in, saying that the model is missing the ozone hole. So Trenberth himself says the model is missing key components, but the paper still survived peer review and got published… and then gets touted as proof that global warming is legit. Not that the ozone hole matters, but that is another article.

So here is a paper that pretty much everyone agrees is full of holes (almost no pun intended), but it still got real news coverage. This is a paper that just has a big fat target on it. This is so bad that it almost makes me think that they are trying to poke fun at global warming. With that in mind, I saved the best for last. This is supposed to be the main scientific takeaway from the paper:

“Climate scientists have cracked the mystery of why Antarctic sea ice has managed to grow despite global warming—but the results suggest the trend may rapidly reverse, a new study says.”

“cannot give you a precise year—but definitely in this century”

So the trend in growing sea ice might definitely reverse in this century, based on a model that shows warming air causes more sea ice. Take that, skeptics.
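For what it’s worth, a number like “-0.02 °C/decade” is nothing more than an ordinary least-squares slope fitted to the monthly anomalies. A minimal sketch (the anomaly series here is random placeholder noise, not the actual RSS or UAH data):

    import numpy as np

    months = np.arange(360)                 # 30 years of monthly data
    anoms = 0.05 * np.random.randn(360)     # placeholder anomalies (deg C)

    slope_per_month, intercept = np.polyfit(months, anoms, 1)
    print("%+.3f deg C/decade" % (slope_per_month * 120))  # 120 months per decade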
<urn:uuid:f7e026c4-e4e6-4a11-8d5d-ac8aaa25e651>
2.9375
1,277
Personal Blog
Science & Tech.
54.566945
While most pundits are focusing on the campaign, New York Times columnist Kristof has taken the time to write a thoughtful op-ed (subs. req’d) on a story that will be with us long, long after congressional pages or macaca or botched jokes about Iraq. He deserves special credit for devoting considerable ink to an area that has not gotten sufficient attention in the media (although it has been well-studied in the scientific community): ocean acidification: If you think of the earth’s surface as a great beaker, then it’s filled mostly with ocean water. It is slightly alkaline, and that’s what creates a hospitable home for fish, coral reefs and plankton — and indirectly, higher up the food chain, for us. But scientists have discovered that the carbon dioxide we’re spewing into the air doesn’t just heat up the atmosphere and lead to rising seas. Much of that carbon is absorbed by the oceans, and there it produces carbonic acid — the same stuff found in soda pop. That makes oceans a bit more acidic, impairing the ability of certain shellfish to produce shells, which, like coral reefs, are made of calcium carbonate. A recent article in Scientific American explained the indignity of being a dissolving mollusk in an acidic ocean: “Drop a piece of chalk (calcium carbonate) into a glass of vinegar (a mild acid) if you need a demonstration of the general worry: the chalk will begin dissolving immediately.” The more acidic waters may spell the end, at least in higher latitudes, of some of the tiniest variations of shellfish — certain plankton and tiny snails called pteropods. This would disrupt the food chain, possibly killing off many whales and fish, and rippling up all the way to humans. We stand, so to speak, on the shoulders of plankton.
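The chemistry Kristof is compressing here is standard carbonate chemistry (textbook background, not from the column itself). Dissolved carbon dioxide forms carbonic acid, which releases hydrogen ions; those ions lower the pH and also consume the carbonate that shell-builders need:

    CO2 + H2O ⇌ H2CO3 ⇌ H+ + HCO3-
    CO3^2- + H+ → HCO3-

With less carbonate available, the equilibrium CaCO3 ⇌ Ca^2+ + CO3^2- shifts toward dissolution, which is why the chalk-in-vinegar demonstration is apt.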
<urn:uuid:26854f57-fa79-4899-b813-e3293205e0a0>
2.703125
406
Nonfiction Writing
Science & Tech.
41.112802
Report links global warming, storms
Keay Davidson, Chronicle Science Writer
Tuesday, September 12, 2006

Scientists say they have found what could be the key to ending a yearlong debate about what is making hurricanes more violent and common -- evidence that human-caused global warming is heating the ocean and providing more fuel for the world's deadliest storms.

For the past 13 months, researchers have debated whether humanity is to blame for a surge in hurricanes since the mid-1990s or whether the increased activity is merely a natural cycle that occurs every several decades. Employing 80 computer simulations, scientists from Lawrence Livermore National Laboratory and other institutions concluded that there is only one answer: that the burning of fossil fuels, which warms the climate, is also heating the oceans. Humans, Ben Santer, the report's lead author, told The Chronicle, are making hurricanes globally more violent "and violent hurricanes more common" -- at least, in the latter case, in the northern Atlantic Ocean. The findings were published Monday in the latest issue of the Proceedings of the National Academy of Sciences.

Hurricanes are born from tropical storms fueled by rising warm, moist air in the tropics. The Earth's rotation puts a spin on the storms, causing them to suck in more and more warm, moist air -- thus making them bigger and more ferocious. In that regard, the report says, since 1906, sea-surface temperatures have warmed by between one-third and two-thirds of a degree Celsius -- or between 0.6 and 1.2 degrees Fahrenheit -- in the tropical parts of the Atlantic and Pacific oceans, which are hurricane breeding grounds.

Critics of the theory that greenhouse gases are making hurricanes worse remained unconvinced by the latest research. Chris Landsea, a top hurricane expert, praised the Proceedings paper as a worthwhile contribution to science, but said the authors failed to persuasively counter earlier objections -- that warmer seas would have negligible impact on hurricane activity. Landsea, science and operations officer at the U.S. National Hurricane Center in Miami, noted that modern satellite observations have made hurricanes easier to detect and analyze, and that could foster the impression of long-term trends in hurricane frequency or violence that are, in fact, illusory. The surge in hurricane activity since the mid-1990s is just the latest wave in repeating cycles of hurricane activity, he said.

Philip Klotzbach, a hurricane forecaster at Colorado State University, said that "sea-surface temperatures have certainly warmed over the past century, and ... there is probably a human-induced (global warming) component." But his own research indicates "there has been very little change in global hurricane activity over the past 20 years, where the data is most reliable."

Researchers report in the Proceedings paper an 84 percent chance that at least two-thirds of the rise in ocean temperatures in these so-called hurricane breeding grounds is caused by human activities -- and primarily by the production of greenhouse gases. Tom Wigley, one of the world's top climate modelers and a co-author of the paper, said in a teleconference last week that the scientists tried to figure out what caused the oceans to warm by running many different computer models based on possible single causes. Those causes ranged from human production of greenhouse gases to natural variations in solar intensity.
Wigley said that when the researchers reviewed the results, they found that only one model was best able to explain changing ocean temperatures, and it pointed to greenhouse gases in the atmosphere. The most infamous greenhouse gas is carbon dioxide, a product of human burning of fossil fuels in cars and factories. Wigley estimated the odds as smaller than 1 percent that ocean warming could be blamed on random fluctuations in hurricane activity, as some scientists suggest.

The debate among scientists was triggered in August 2005, a few weeks before Hurricane Katrina struck New Orleans, when hurricane expert Kerry Emanuel of MIT wrote an article for the journal Nature proposing that since the 1970s, ocean warming had made hurricanes about 50 percent more intense in the Atlantic and Pacific oceans. Later, two scientific teams, both at Georgia Tech, estimated that warmer sea-surface temperatures were boosting both hurricane intensity and the number of the two worst types of hurricanes, known as Category 4 and Category 5 storms.

Nineteen scientists from 10 institutions were involved in the Proceedings paper. In addition to Lawrence Livermore, other U.S. institutions included Lawrence Berkeley National Laboratory, the National Center for Atmospheric Research, NASA, UC Merced, Scripps Institution of Oceanography in La Jolla (San Diego County), and the National Oceanic and Atmospheric Administration. Santer's co-authors included six Livermore colleagues -- Peter J. Gleckler, Krishna AchutaRao, Jim Boyle, Mike Fiorino, Steve Klein and Karl Taylor -- and 12 other researchers from elsewhere in the United States and from Germany and England.

Assuming that warmer water equals more bad hurricanes, scary times could be ahead for inhabitants of hurricane-prone regions. That's because "the models that we've used to understand the causes of (ocean warming) in these hurricane formation regions predict that the oceans are going to get a lot warmer over the 21st century," Santer said in a statement. "That causes some concern."

You all can argue whether mankind is responsible for global warming... and make an interesting discussion/debate of it. My money is on the credentialed climatologists. Ignoring this problem will not make it go away. "Leave it better than you found it"
<urn:uuid:64cc2d2e-7f51-4469-9c5f-2c509fce9e69>
3.0625
1,138
Comment Section
Science & Tech.
32.992077
An Inordinate Fondness for Eukaryotic Diversity

Why do some groups of organisms, like beetles, have so many species, and others, like the tuataras, so few? This classic question in evolutionary biology has a deep history and has been studied using both fossils and phylogenetic trees. Phylogeny-based studies have focused on tree balance, which compares the number of species across clades of the same age in the tree. These studies have suggested that rates of speciation and extinction vary tremendously across the tree of life.

In this issue, Rabosky et al. report the most ambitious study to date on the differences in species diversity across clades in the tree of life. The authors bring together a tremendously large dataset of multicellular eukaryotes, including all living species of plants, animals, and fungi; they divide these organisms into 1,397 clades, accounting for more than 1.2 million species in total.

Rabosky et al. find tremendous variation in diversity across the tree of life. There are old clades with few species, young clades with many species, and everything in between. They also note a peculiar aspect of their data: it is difficult or impossible to predict how many species will be found in a particular clade knowing how long a clade has been diversifying from a common ancestor. This pattern suggests complex dynamics of speciation and extinction in the history of eukaryotes. Rabosky et al.'s paper represents the latest development in our efforts to understand the Earth's biodiversity at the broadest scales.
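One way to see why that decoupling is surprising (a standard null expectation, not a calculation from the paper itself): under a constant-rate birth-death model with speciation rate λ and extinction rate μ, the expected number of species descended from a single ancestor after time t is

    E[N(t)] = e^((λ - μ) t)

so the logarithm of a clade's richness should increase roughly linearly with its age. The absence of any such relationship across 1,397 clades implies that net diversification rates vary enormously among lineages, or that diversity is regulated by something like ecological limits.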
<urn:uuid:19ea8bf1-ff18-40d9-b0f6-6bbf67a15098>
3.53125
322
Nonfiction Writing
Science & Tech.
37.033
It almost looks like toothpaste for an elephant, doesn't it?

Here's what you'll need:
- 1/2 cup 6% hydrogen peroxide (sold in beauty supply stores or online as 20 Volume Clear Developer)
- 2 tsp. yeast (1 packet)
- 3 tbsp. warm water
- dish detergent
- food coloring (optional)
- empty 16oz plastic bottle
- safety goggles
- tray or container to catch the foaming fun

Here's what to do:
- Pour 1/2 cup of the peroxide into the empty water bottle. (Hydrogen peroxide can irritate skin and eyes, so make sure you protect your eyes and skin and let a grown-up do the pouring.)
- Add about 8 drops of food coloring to the bottle (optional).
- Add about 1 tbsp. of liquid dish soap into the bottle and swish it just a bit to mix it.
- In a separate cup, combine the yeast and warm water. Mix for about 30 seconds until most lumps are gone.
- Now the fun begins!! Pour the yeast mixture into the bottle (using a funnel) and watch the foaminess begin.
- The foam is just water, soap and oxygen, so it's safe to touch, but it will be warm because of the reaction!

And here it is on video so you can see the fun foaming fountain in action!!

Here's the science behind this experiment (from ScienceBob.com):

Foam is awesome! The foam you made is special because each tiny foam bubble is filled with oxygen. The yeast acted as a catalyst (a helper) to remove the oxygen from the hydrogen peroxide. Since it did this very fast, it created lots and lots of bubbles. Did you notice that the bottle got warm? Your experiment created a reaction called an exothermic reaction - that means it not only created foam, it created heat! The foam produced is just water, soap, and oxygen so you can clean it up with a sponge and pour any extra liquid left in the bottle down the drain.

This experiment is sometimes called "Elephant's Toothpaste" because it looks like toothpaste coming out of a tube, but don't get the foam in your mouth!
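For the curious, the balanced reaction behind the foam (standard chemistry, not from ScienceBob's write-up) is the decomposition of hydrogen peroxide, sped up by catalase, an enzyme in the yeast:

    2 H2O2 → 2 H2O + O2 (+ heat)

The dish soap simply traps the escaping oxygen gas as bubbles, and the food coloring goes along for the ride.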
<urn:uuid:83a490f1-76a3-419b-ac92-f4ed1ddaf9c8>
2.75
472
Tutorial
Science & Tech.
66.905645
An improved method for nanocrystal placement for the floating gate of a flash memory cell, using a protein-mediated self-assembly approach, was described at IEDM by Shan Tang, U. of Texas, Austin. A template formed by a chaperonin protein lattice can be used to place nanocrystals of different types in a regular array at high density from a colloidal suspension, according to Tang and her coworkers. Chaperonins are large multimeric structures with two stacked rings having a central cavity into which proteins bind. The interior of the cavity is hydrophobic, so that nanocrystals combined with hydrophobic molecules can be trapped inside cavities of some 4.6nm diameter with 4.5nm walls for the protein used in the experiments. The chaperonins can be self-assembled into a crystal lattice on a silicon surface through noncovalent interactions. Experiments showed that cavity size can be varied to provide a potential nanocrystal-size filter using magnesium or potassium ions or adenosine triphosphate (ATP), Tang explained. After the nanocrystals are uniformly distributed at high density, the protein is removed by annealing. Nanocrystals or quantum dots between the control and tunnel oxide in flash memory cells are being explored because they promise to greatly extend retention time, while avoiding leakage from any weak spot across the tunnel oxide (electrons are stored at specific sites, rather than across a film). They also might operate at lower power and at higher speed with longer lifetimes. The experiments with lead selenide (PbSe) and cobalt (Co) nanocrystals were done with chaperonin 60 (GroEL), the most studied chaperonin protein, with Co showing the best storage retention. This was expected, since a metal provides a higher density of states so that more electrons can be stored at each site. Small bottles of the protein are readily available commercially, according to Tang. The authors conclude that flash memories could be fabricated with the protein-mediated self-assembly process for floating gates using any existing nanocrystals. - B.H.
<urn:uuid:c0b39ea5-8381-41e1-8268-bac1ed264ef0>
2.734375
440
Truncated
Science & Tech.
28.159073
Ocean's Ability to Fix Nitrogen Underestimated

In order to predict how Earth's climate develops, scientists have to know which gases and trace elements are naturally bound and released by the ocean, and in which quantities. For nitrogen, an essential element for the production of biomass, there are many unanswered questions. Scientists from Kiel, Bremen and Halifax have now published a research study in the international journal Nature showing that widely applied methods are part of the problem.

Of course scientists like it when the results of measurements fit with each other. However, when they carry out measurements in nature and compare their values, the results are rarely "smooth." A contemporary example is the ocean's nitrogen budget. Here, the question is: how much nitrogen is being fixed in the ocean and how much is released? "The answer to this question is important to predicting future climate development. All organisms need fixed nitrogen in order to build genetic material and biomass," explains Professor Julie LaRoche from the GEOMAR | Helmholtz Centre for Ocean Research Kiel.

Despite scientific efforts, the nitrogen budget suffers from an apparent dilemma. The analysis of ocean sediment as a long-term climate archive has shown that the amounts of fixed nitrogen equaled those of released nitrogen for the past 3000 years. However, modern measurements in the ocean demonstrate that the amounts of released nitrogen exceed the amounts of nitrogen being fixed. These results leave a "gap" in the nitrogen budget and show inconsistencies between past, long-term reconstruction and short-term measurements.

In 2010, the GEOMAR microbiologist Wiebke Mohr pointed out that these inconsistencies could be partially due to the methods widely used to measure modern biological nitrogen fixation. Following this finding, scientists from GEOMAR, Christian-Albrechts-University of Kiel (CAU), Max Planck Institute for Marine Microbiology (MPI) Bremen and Dalhousie University in Halifax (Canada) tested a new approach in the Atlantic Ocean which had been suggested by Mohr. The results of the study are now presented in the international journal "Nature."
<urn:uuid:ff6bce7b-9a3b-4f11-98d8-e274802fc74b>
3.953125
440
Truncated
Science & Tech.
28.224164
Many geologists of the time were still skeptical of the new theory that Earth's climate could change at all. It wasn't easy to think that Mother Nature had once put the whole globe into a deep freeze, but my hero got on board with the program early. He argued (correctly) that much of the upper part of our country had once been buried under thick, glacial ice, and he did so by pointing to specific pieces of evidence he could describe and draw. That alone would make Whittlesey commendable in my book. He looked at good evidence, published as widely as he could at the time, and argued for his views.

But this is what really impresses me. Despite the fact that Whittlesey was living and working in the Midwest – pretty far from the ocean – he had the insight to see that the massive glaciers of the past must have changed global sea level drastically.

Here's the picture: During times of bitter cold in the past 2 million years, extensive ice sheets and major glaciers have formed in North America and Scandinavia. While those glaciers have been draped on the land, they have "locked up" a great deal of Earth's waters. More and bigger glaciers meant lower and lower sea level in the Ice Age, a point Whittlesey deduced early. One of his estimates put Ice Age sea level at around 300 feet lower than today, a value that stands up well to current scientific data.

With sea level hundreds of feet lower than it is now, brown bears (and people) could walk from Siberia to Alaska – and they evidently did so, spreading down into North America.

But climate naturally evolves on Earth, and the Ice Age came to its end in due time. When global temperatures shot upward into the warmth we enjoy in this epoch, the massive glaciers and ice sheets started to melt. Water that had been on land (as ice) flowed down into the seas. Ocean levels rose, and rose, and rose some more, an increase totaling hundreds of feet.

But oddly enough, that's not the end of the story. In some places – like Sweden and Hudson Bay – the land is rising out of the sea. In other words, in those places, local sea level has been falling even while global sea level has been rising.

In Scandinavia and Hudson Bay, the evidence that the sea is falling relative to the land is the many old beaches that are high and dry on the land well above current sea level. These "raised beaches" show us the land is moving upward even faster than global sea level has been rising. But there are not raised beaches like these everywhere on Earth, only in places where (interestingly enough) major glaciers used to lie.

Our hero was one geologist who had some insight on this issue, too. The ice sheets of the Ice Age were literally a couple miles thick and covered whole regions. When the Ice Age glaciers melted, their staggering weight was removed. Gradually, the land under the ice has moved upward – and it's still doing so. The land in Hudson Bay and Scandinavia is headed higher and higher at a faster rate than global sea level is rising. So local sea level where my herring-eating ancestors live in southern Sweden is dropping relative to the land.

Climate change on Earth guarantees that sea level can go up a whole lot in some places and down in others. Those are the realities to which people have adapted for a long time, and will doubtless have to adapt again. Like climate itself, the only constant we geologists can see when it comes to sea level is change.

Dr. E. Kirsten Peters, a native of the rural Northwest, was trained as a geologist at Princeton and Harvard.
Follow her on the web at rockdoc.wsu.edu
<urn:uuid:abf46d3c-1152-4f98-820f-e48bdcc687ed>
3.5625
794
Personal Blog
Science & Tech.
56.751302
Genes hold clues to face shape

Dutch researchers say they've found genes that might hold clues to what shape a face may have, providing a useful DNA tool for forensics. Researchers at the Erasmus University Medical Center in Rotterdam, in a study of almost 10,000 individuals, have discovered five genes responsible for facial shape in humans, the BBC reported Thursday. They used magnetic resonance imaging of people's heads to map facial configurations, then conducted a genetic study to search for small genetic variations found in people with particular facial shape types.

"These are exciting first results that mark the beginning of the genetic understanding of human facial morphology," Erasmus lead researcher Manfred Kayser said. "Perhaps some time it will be possible to draw a phantom portrait of a person solely from his or her DNA left behind, which provides interesting applications such as in forensics."

The research follows other recent studies, one suggesting DNA can also predict hair and eye color and a second that said age could be inferred from blood samples.
<urn:uuid:322ad040-f621-4572-8d25-caedbb0ff2a0>
3.296875
231
Truncated
Science & Tech.
23.259167
The text widget stores and displays lines of text. The text body can consist of characters, marks, and embedded windows or images. Different regions can be displayed in different styles, and you can also attach event bindings to regions. By default, you can edit the text widget's contents using the standard keyboard and mouse bindings. To disable editing, set the state option to DISABLED (but if you do that, you'll also disable the insert and delete methods).

Indexes are used to point to positions within the text handled by the text widget. Like Python sequence indexes, text widget indexes correspond to positions between the actual characters. Tkinter provides a number of different index types:

Line/column indexes are the basic index type. They are given as strings consisting of a line number and column number, separated by a period. Line numbers start at 1, while column numbers start at 0, like Python sequence indexes. You can construct indexes using the following syntax:

    "%d.%d" % (line, column)

It is not an error to specify line numbers beyond the last line, or column numbers beyond the last column on a line. Such numbers correspond to the line beyond the last, or the newline character ending a line. Note that line/column indexes may look like floating point values, but it's seldom possible to treat them as such (consider position 1.25 vs. 1.3, for example). I sometimes use 1.0 instead of "1.0" to save a few keystrokes when referring to the first character in the buffer, but that's about it. You can use the index method to convert all other kinds of indexes to the corresponding line/column index string.

A line end index is given as a string consisting of a line number directly followed by the text ".end". A line end index corresponds to the newline character ending a line.

INSERT (or "insert") corresponds to the insertion cursor.

CURRENT (or "current") corresponds to the character closest to the mouse pointer. However, it is only updated if you move the mouse without holding down any buttons (if you do, it will not be updated until you release the button).

END (or "end") corresponds to the position just after the last character in the buffer.

User-defined marks are named positions in the text. INSERT and CURRENT are predefined marks, but you can also create your own marks. See below for more information.

User-defined tags represent special event bindings and styles that can be assigned to ranges of text. For more information on tags, see below. You can refer to the beginning of a tag range using the syntax "tag.first" (just before the first character in the text using that tag), and "tag.last" (just after the last character using that tag):

    "%s.first" % tagname
    "%s.last" % tagname

If the tag isn't in use, Tkinter raises a TclError exception.

The selection is a special tag named SEL (or "sel") that corresponds to the current selection. You can use the constants SEL_FIRST and SEL_LAST to refer to the selection. If there's no selection, Tkinter raises a TclError exception.

You can also use window coordinates as indexes. For example, in an event binding, you can find the character closest to the mouse pointer using the following syntax:

    "@%d,%d" % (event.x, event.y)

Embedded object names can be used to refer to windows and images embedded in the text widget. To refer to a window, simply use the corresponding Tkinter widget instance as an index.
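A small sketch pulling the index forms above together (the inserted text is arbitrary, and a display is required to create the widget); the index method normalizes any supported index into a line/column string:

    from tkinter import Tk, Text, END, INSERT  # the module is named Tkinter in Python 2

    root = Tk()
    text = Text(root)
    text.pack()
    text.insert(END, "hello world\nsecond line\n")

    print(text.index("1.0"))              # -> 1.0
    print(text.index("%d.%d" % (2, 3)))   # -> 2.3
    print(text.index(END))                # line/column just past the last character
    print(text.index(INSERT))             # current position of the insertion cursor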
To refer to an embedded image, use the corresponding Tkinter PhotoImage or BitmapImage object.

Expressions can be used to modify any kind of index. Expressions are formed by taking the string representation of an index (use str if the index isn't already a string), and appending one or more modifiers from the following list:

    + count chars (move forward count characters)
    - count chars (move backward count characters)
    + count lines (move forward count lines)
    - count lines (move backward count lines)
    linestart (move to the first position on the line)
    lineend (move to the position of the newline ending the line)
    wordstart (move to the beginning of the word containing the index)
    wordend (move to the position just after that word)

The keywords can be abbreviated and spaces can be omitted as long as the result is not ambiguous. For example, "+ 5 chars" can be shortened to "+5c". For compatibility with implementations where the constants are not ordinary strings, you may wish to use str or formatting operations to create the expression string. For example, here's how to remove the character just before the insertion cursor:

    def backspace(event):
        event.widget.delete("%s-1c" % INSERT, INSERT)

Marks are (usually) invisible objects embedded in the text managed by the widget. Marks are positioned between character cells, and move along with the text. You can use any number of user-defined marks in a text widget. Mark names are ordinary strings, and they can contain anything except whitespace (for convenience, you should avoid names that can be confused with indexes, especially names containing periods). To create or move a mark, use the mark_set method.

Two marks are predefined by Tkinter, and have special meaning:

INSERT (or "insert") is a special mark that is used to represent the insertion cursor. Tkinter draws the cursor at this mark's position, so it isn't entirely invisible.

CURRENT (or "current") is a special mark that represents the character closest to the mouse pointer. However, it is only updated if you move the mouse without holding down any buttons (if you do, it will not be updated until you release the button).

Special marks can be manipulated like other user-defined marks, but they cannot be deleted.

If you insert or delete text before a mark, the mark is moved along with the other text. To remove a mark, you must use the mark_unset method. Deleting text around a mark doesn't remove the mark itself.

If you insert text at a mark, it may be moved to the end of that text or left where it was, depending on the mark's gravity setting (LEFT or RIGHT; default is RIGHT). You can use the mark_gravity method to change the gravity setting for a given mark.

In the following example, the "sentinel" mark is used to keep track of the original position for the insertion cursor:

    text.mark_set("sentinel", INSERT)
    text.mark_gravity("sentinel", LEFT)

You can now let the user enter text at the insertion cursor, and use text.get("sentinel", INSERT) to pick up the result.

Tags are used to associate a display style and/or event callbacks with ranges of text. You can define any number of user-defined tags. Any text range can have multiple tags, and the same tag can be used for many different ranges. Unlike the Canvas widget, tags defined for the text widget are not tightly bound to text ranges; the information associated with a tag is kept even if there is no text in the widget using it.

Tag names are ordinary strings, and they can contain anything except whitespace.

SEL (or "sel") is a special tag which corresponds to the current selection, if any. There should be at most one range using the selection tag.

The following options are used with tag_config to specify the visual style for text using a certain tag.

Table 42-1. Text Tag Options

If you attach multiple tags to a range of text, style options from the most recently created tag override options from earlier tags.
In the following example, the resulting text is blue on a yellow background:

    text.tag_config("n", background="yellow", foreground="red")
    text.tag_config("a", foreground="blue")
    text.insert(INSERT, contents, ("n", "a"))

Note that it doesn't matter in which order you attach tags to a range; it's the tag creation order that counts. You can change the tag priority using the tag_raise and tag_lower methods. If you add a text.tag_lower("a") to the above example, the text becomes red.

The tag_bind method allows you to add event bindings to text having a particular tag. Tags can generate mouse and keyboard events, plus <Enter> and <Leave> events. For example, the following code snippet creates a tag to use for any hypertext links in the text:

    text.tag_config("a", foreground="blue", underline=1)
    text.tag_bind("a", "<Enter>", show_hand_cursor)
    text.tag_bind("a", "<Leave>", show_arrow_cursor)
    text.tag_bind("a", "<Button-1>", click)
    text.config(cursor="arrow")
    text.insert(INSERT, "click here!", "a")
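The snippet above leaves the three callbacks undefined; here is a self-contained version with minimal stand-ins (the callback bodies and the link text are illustrative, and the import line is for Python 3, where the module is named tkinter rather than Tkinter):

    from tkinter import Tk, Text, INSERT

    root = Tk()
    text = Text(root)
    text.pack()

    def show_hand_cursor(event):
        text.config(cursor="hand2")

    def show_arrow_cursor(event):
        text.config(cursor="arrow")

    def click(event):
        print("link clicked")

    text.tag_config("a", foreground="blue", underline=1)
    text.tag_bind("a", "<Enter>", show_hand_cursor)
    text.tag_bind("a", "<Leave>", show_arrow_cursor)
    text.tag_bind("a", "<Button-1>", click)
    text.config(cursor="arrow")
    text.insert(INSERT, "click here!", "a")

    root.mainloop()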
<urn:uuid:bedbe06e-9ac5-44d0-9d73-20bc65c9ba51>
3.546875
1,868
Documentation
Software Dev.
56.03606
Rudyard Kipling’s Just So Stories tell tales not so much of evolution, but of the magic and wonder of the animal world. He describes the wizard who gave the camel a hump for its laziness, and the alligator who snapped and stretched the nose of a naïve young elephant to its current lengthy proportion. Those delightful fables, published some 70 years after Jean-Baptiste Lamarck’s death, provide entertaining explanations for such evolved traits, and were clearly inspired by Lamarck’s description of adaptive change, not Charles Darwin’s. In his 1809 publication Philosophie Zoologique, Lamarck wrote of the giraffe, from whose habit of reaching for the green leaves of tall trees “it has resulted . . . that the animal’s forelegs have become longer than its hind legs, and that its neck is lengthened to such a degree that the giraffe, without rearing up on its hind legs . . . attains a height of six meters.”

Although biologists have generally considered Lamarck’s ideas to contain as much truth as Kipling’s fables, the burgeoning field of epigenetics has made some of us reconsider our ridicule. While no biologist believes that organisms can willfully change their physiology in response to their environment and pass those changes on to their offspring, some evidence suggests that the environment can make lasting changes to the genome via epigenetic mechanisms—changes that may be passed on to future generations.

Epigenetics: genome gatekeeper

Epigenetic changes can range from chemical modifications of histone proteins—such as acetylation and methylation—to modifications made to the DNA itself. Such changes usually cause chromatin compaction, which limits the ability of the RNA polymerase II transcription complex to access DNA, ultimately resulting in reduced messenger RNA (mRNA) and protein output. Many view epigenetics as an annotation or editing of the genome that defines which genes will be silenced in order to streamline protein production or squelch unnecessary redundancy. That annotation, they say, does not and cannot permanently change the original manuscript (i.e., DNA), but merely access to the manuscript.

Just as epigenetics was gaining acceptance within the general scientific community, scientists began reporting observations of a newly identified phenomenon called transgenerational epigenetic inheritance, or the passage of epigenetic changes from a parent to its offspring. Recent experimental work in mice, worms, and pigs has found evidence that some degree of transgenerational epigenetic inheritance may take place.[1. B.T. Heijmans et al., “Persistent epigenetic differences associated with prenatal exposure to famine in humans,” PNAS, 105:17046–49, 2008.],[2. T.B. Franklin et al., “Epigenetic transmission of the impact of early stress across generations,” Biol Psychiatry, 68:408–15, 2010.],[3. O. Rechavi et al., “Transgenerational inheritance of an acquired small RNA-based antiviral response in C. elegans,” Cell, 147:1248–56, 2011.],[4. M. Braunschweig et al., “Investigations on transgenerational epigenetic response down the male line in F2 pigs,” PLoS ONE, 7: e30583, 2012.]

A fascinating 2008 study that looked at people born during the Dutch Hunger Winter in 1944–1945 hints at the possibility that transgenerational epigenetic inheritance also occurs in humans.1 Adults who were conceived during the famine had distinct epigenetic marks that their siblings born before or after the famine did not.
These marks reduced the production of insulin-like growth factor 2 (IGF2) and affected the growth of the famine-gestated children. Notably, these marks were retained for several decades in the afflicted individuals. While these observations suggest the possibility of transgenerational epigenetic inheritance, the modifications could also have occurred in utero as a result of famine conditions rather than being inherited in the germline. Therefore, whether such a distinct phenomenon occurs in humans remains to be definitively determined. However, in model experimental systems, there is strong evidence for transgenerational epigenetic inheritance.2,3,4 In one study carried out in mice, an environmental stress that resulted in aggressive behavior in males caused the same behavior in their offspring.[5. T.B. Franklin, I.M. Mansuy, “Epigenetic inheritance in mammals: evidence for the impact of adverse environmental effects,” Neurobiol Dis, 39:61–65, 2010.] Notably, the offspring had changes in the DNA methylation patterns of particular genes. Collectively, these and other transgenerational studies all point to the notion that selective pressure can be applied from the environment and passed on to daughter cells and offspring. While epigenetic modifications to the genome are well studied, far less is known about how particular epigenetic marks are directed to their target loci. Clearly, something is guiding the modifications, which appear to be differentially distributed based on particular stresses induced on the cell or organism. Recent studies suggest that epigenetic changes, and possibly transgenerational epigenetic inheritance, could be explained by a somewhat unexpected molecular player: long noncoding RNA. Long noncoding RNAs (lncRNAs) are transcripts generally expressed from regions of “junk” DNA that are not thought to code for proteins. Estimates of lncRNA abundance range from 70 to 98 percent of transcripts present in the cell, and some are several thousand bases long.[6. T.R. Mercer et al., “Long non-coding RNAs: insights into functions,” Nat Rev Genet, 10: 155–59, 2009.] Unlike short noncoding RNAs, such as short interfering RNA, which silence genes by cutting mRNAs in the cytoplasm, lncRNAs appear to bind to transcripts in the nucleus as they emerge from the replication fork of the DNA, and recruit enzyme complexes to induce epigenetic changes at these loci.[7. K.V. Morris, “Long antisense non-coding RNAs function to direct epigenetic complexes that regulate transcription in human cells,” Epigenetics, 4:296–301, 2009.] Some of these lncRNAs bind transcripts from the protein-coding gene during the normal transcription process.[8. K.V. Morris et al., “Bidirectional transcription directs both transcriptional gene activation and suppression in human cells,” PLoS Genet, 4: e1000258, 2008.],[9. W. Yu et al., “Epigenetic silencing of tumour suppressor gene p15 by its antisense RNA,” Nature, 451:202–06, 2008.],[10. P.G. Hawkins, K.V. Morris, “Transcriptional regulation of Oct4 by a long non-coding RNA antisense to Oct4-pseudogene 5,” Transcr, 1:165–75, 2010.] The associated chromatin remodeling proteins then modify the local chromatin and DNA, suppressing gene expression. One such modification is methylation of the DNA, which presumably occurs when the lncRNAs direct enzymes such as the DNA methyltransferase DNMT3a to targeted spots on the genome. Alternatively, lncRNAs can direct modifications of nearby histones, usually in the form of methylation of the histone tail. 
DNA methylation itself can be passed down from a cell to its daughter cells.[11. M.S. Weinberg et al., “The antisense strand of small interfering RNAs directs histone methylation and transcriptional gene silencing in human cells,” RNA, 12:256–62, 2006.] In addition, it has been known for some time that such modifications can also lead to permanent changes in the genetic code. Methylation of a cytosine (C), for example, can cause that nucleic acid to change to a thymine (T) through deamination, or the removal of an amine group. Nearly 80 percent of methylation sites in the human genome occur on a cytosine that is followed by a guanine, in a CpG sequence. Deamination occurs when the methylated C undergoes a hydrolysis reaction resulting in the production of ammonia, followed by the conversion of the cytosine to a thymine at that spot in the DNA sequence. While this C-to-T conversion is considered random, the spontaneous deamination of methylated CpGs has been found to be about 2-fold faster than C-to-T conversions in nonmethylated CpG sequences,[12. J.C. Shen et al., “The rate of hydrolytic deamination of 5-methylcytosine in double-stranded DNA,” Nucleic Acids Res, 22:972–76, 1994.] suggesting a bias toward CpG regions in the deamination process.

Although these ideas have yet to be substantiated by complete experimental evidence, one can envision this as a model for how the system might work—a mechanism by which epigenetic changes, guided by lncRNAs, could make permanent and heritable changes to the genome. Indeed, such a lncRNA-based DNA editing system could be driving some aspects of genetic variation and could explain the common appearance of single nucleotide polymorphisms within a species. If this is true, one has to wonder what role lncRNA-directed DNA methylation has been playing in the evolution of the genome.

Intriguingly, a greater frequency of targeted C-to-T changes could also result in an overall loss of complementarity between the sequence and the lncRNA that targets it. As a result, rather than initiating suppression of the target gene, the change could result in renewed transcription in subsequent generations. At the same time, this process could permit the target transcript to fold into a different conformation, thereby allowing other subsets of lncRNA interactions to occur at slightly different loci. Alternatively, changes to the lncRNAs themselves might lead to a loss of lncRNA-protein associations, resulting in different cellular machinery being localized to the particular target loci. Thus, the over-activity of one lncRNA could doom that lncRNA to a loss of function, but simultaneously result in the evolution of a new regulatory lncRNA network with potentially different downstream effects.

Furthermore, a site frequently targeted by lncRNAs would likely contain a larger proportion of T:A bonding between the DNA strands, due to deamination events. Such permanent and heritable changes in the genetic code could change the shape of the encoded protein, its function, or its ability to be transcribed altogether. One can begin to envision how environmental variation, by instigating epigenetic changes, could increase organismal complexity, thus giving populations a greater chance at surviving new and perhaps permanent environmental threats. In other words, epigenetics, rather than random genetic point mutations, could provide the missing link between environmental pressure and the resulting genetic variability that generates robustness of a species.
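A chemistry footnote on why methylated CpGs behave as mutational hotspots (textbook background, not a claim from this essay): hydrolytic deamination of 5-methylcytosine yields thymine directly,

    5-methylcytosine + H2O → thymine + NH3

whereas deamination of unmethylated cytosine yields uracil, a base foreign to DNA that uracil-DNA glycosylase efficiently excises. Because the C-to-T product of 5-methylcytosine deamination is a normal DNA base, it escapes that repair route and can become fixed as a permanent mutation at the next round of replication.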
Most certainly, if such a pathway were to exist in human cells, one would expect it to be elusive purely due to the sheer complexity of the process—involving lncRNAs, epigenetic changes, DNA methylation, and deamination. Thus, it is not out of the realm of possibility that such a mechanism exists, but has yet to be elucidated by science. The inner molecular workings of the cell are vastly complex, and the emerging realization that lncRNAs are active modulators of gene transcription and epigenetic states only complicates the picture. Clearly, as more data emerges in this exciting area of research, additional layers of regulation will need to be added to the central dogma of molecular biology. Although an organism cannot pass down specific information about its own experiences—the giraffe will not be able to help its offspring reach taller trees just by stretching its own neck—it may give succeeding generations a fighting chance in a difficult environment by offering them a slightly altered arsenal of genetic tools. Kevin V. Morris is an associate professor at The Scripps Research Institute in La Jolla, California, and the University of New South Wales in Sydney, Australia.
<urn:uuid:50621577-6c76-4768-93d4-9599e0a0e736>
3.5
2,563
Nonfiction Writing
Science & Tech.
37.243348
An international research team led by Makoto Kishimoto from the Max Planck Institute for Radio Astronomy in Bonn presents some of the first long-baseline interferometric measurements in the infrared towards nearby Active Galactic Nuclei with the Keck interferometric telescope in Hawaii. The team finds the measurements to indicate a ring-like emission from sublimating dust grains, and its radius to yield insights into the morphology of the accreting material around the black hole in these nuclei. The results are published in “Astronomy and Astrophysics” in the first week of December 2009.

The nuclei of many galaxies show very intense radiation from X-ray to optical, infrared, and radio, where the nucleus sometimes exhibits a strong jet. These Active Galactic Nuclei (AGN) are thought to be powered by accreting supermassive black holes. The accreting gas and dust are especially bright in optical and infrared (IR) radiation.

In May 2009, Makoto Kishimoto and his team successfully observed 4 such AGN with the Keck Interferometer at Hawaii. Their target sources included not only NGC 4151, a relatively nearby galaxy only 50 million light-years away, but also a distant quasar at redshift 0.108 (corresponding to a distance of more than a billion light-years). “This was only possible due to the huge effort of the Keck staff members to improve the sensitivity of the instrument”, says Makoto Kishimoto, the paper’s leading author. The United Kingdom Infrared Telescope (UKIRT) was used to follow up the Keck observations in order to obtain up-to-date near-IR images of the galaxies.

Astronomers have been trying to directly see how the supermassive black hole is eating up the surrounding gas and how the strong jet is being launched around the black hole. However, to spatially resolve such a distant object at IR wavelengths, a telescope having a diameter of the order of 100 m would be required. Instead of building such a huge telescope, a more practical way is to combine the beams from two or more telescopes that are far apart in order to detect an interference pattern of the two beams and infer what the black hole vicinity looks like. “The technique we are using is very new and very demanding in terms of observing conditions and data analysis”, says Robert Antonucci from the University of California at Santa Barbara, co-author of the paper. In the future, there will be many telescopes, or a telescope array extended over several kilometers. Such arrays have already been used at radio, but not yet at IR or optical wavelengths.

Optical/IR interferometry is still in an early stage – currently using two or three telescopes. A prototype array is formed by the two Keck telescopes of 10 m diameter each, the so-called Keck Interferometer (KI). While the Keck Interferometer has been used to observe many stars in our Galaxy, it has been quite challenging to observe objects outside of our Galaxy, especially supermassive black holes in the nuclei of other galaxies. This is simply because they are much fainter. Interferometric observations of such objects, especially at the shorter side of IR wavelengths, or near-IR, have been particularly difficult. The difficulty is directly related to the size of the wavelength – e.g., at radio wavelengths, which are much longer than IR wavelengths, the interferometric technique is already used routinely. Until recently, only one AGN had been successfully observed with the KI. This galaxy, NGC 4151, is one of the brightest of these sources at optical/IR wavelengths.
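To put rough numbers on the resolution argument above (standard values, not figures from the press release): an interferometer's angular resolution scales as the observing wavelength λ divided by the baseline B. In the near-IR K band, and taking the roughly 85-meter separation of the two Keck telescopes,

    θ ≈ λ / B = (2.2 × 10^-6 m) / (85 m) ≈ 2.6 × 10^-8 rad ≈ 5 milliarcseconds

which is about eight times finer than the λ/D ≈ 45 milliarcseconds that a single 10-meter Keck aperture can deliver at the same wavelength.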
The new, more sensitive observations of four galaxies have led to quite a clear picture of what is being resolved – a ring-like emission of dust grains, co-existing in the accreting gas, which are hot enough to be sublimating. Utilizing different, independent measurements of the radius of this dust sublimation region (which come from the analysis of the variability of the optical and IR light), the team thinks that they have also possibly started to probe how the accreting material is distributed radially from the black hole – i.e., how compact or how extended the material distribution is.

“While we have got the highest spatial resolution in the IR, this is still a relatively outer region of the central black hole system”, says Makoto Kishimoto. “We hope to achieve an even higher resolution using telescopes that are much further apart in order to get even closer to the center, and we also hope to observe many other supermassive black hole systems.”
<urn:uuid:4519e3ff-22cb-43f3-a2e4-898dfe4bb7fb>
3.578125
967
Knowledge Article
Science & Tech.
31.420537